ID | Date | Author | Type | Category | Subject |
2594 | Fri Feb 12 11:44:11 2010 | josephb | Update | Computers | Status of the IP change over |
Quote: |
After Joe left:
- Turned on op440m and returned him his keyboard and mouse.
- Damped MC2.
- Opened PSL shutter - locked PMC, FSS,
- Started StripTool displays on op540m.
- op340m doesn't respond to ping from anyone.
- started FSS SLOW and RCPID scripts on op540 - need to kill and restart on op430m.
- ASS wouldn't come up - it doesn't know who linux1 is.
- MC autolocker wouldn't run on op540m because of a perl module issue, started it on op440m - it needs to be killed and restarted on op430m.
- probably mafalda, linux2, and op430m need some attention - they are all in the same rack.
As of 7:18 PM, the MC is locked and the PSL seems normal + all suspensions are damped and the ELOG is back up as well as the SVN.
|
5) op340m has had its hosts table and other network files updated. I also removed its outdated hosts.deny file which was causing some issues with ssh.
6) On op340m I started FSSSlowServo, with "nohup ./FSSSlowServo", after killing it on op540m.
I also killed RCthermalPID.pl and restarted it with "nohup ./RCthermalPID.pl" on op540m.
7) c1ass is fixed now. There was a typo in the resolv.conf file (namerserver -> nameserver) which has been fixed. It is now using the DNS server running on linux1 for all its host name needs.
8) I killed the autolockMCmain40m process running on op440m, modified the script to run on op340m, logged into op340m, went to /cvs/cds/caltech/scripts/MC, and ran nohup ./autolockMCmain40m
9) Linux2 does not look like it has been connected for a while, and it wasn't connected when we started the IP change over yesterday. Is it supposed to still be in use? If so, I can hook it up fairly easily. op340m, as noted earlier, has been switched over. Mafalda has been switched over.
10) c0rga has now been switched over.
11) aldabella, the vacuum laptop, has had its starting environment variables tweaked (in the /home/controls/.cshrc file) so that it looks on the 192.168.113 network instead of the 131.215.113 network. This should mean Steve will not have any more trouble starting up his vacuum control screen.
12) Ottavia has been switched over.
13) At this time, only the GPIB devices and a few laptops remain to get switched over. |
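For reference, the kind of resolv.conf stanza item 7 describes might look like the sketch below. The nameserver address is an illustrative assumption; linux1's actual IP on the new subnet isn't given in this entry.

```
# /etc/resolv.conf on c1ass (sketch; the "namerserver" typo corrected)
# linux1, the DNS server on the new 192.168.113 subnet (address assumed)
nameserver 192.168.113.xxx
```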
2595 | Fri Feb 12 11:56:02 2010 | josephb | Update | Computers | Nodus slow ssh fixed | Koji pointed out that logging into Nodus was being abnormally slow. I tracked it down to the fact that we had forgotten to update the address of the DNS server running on linux1 in the resolv.conf file on nodus. Basically it was looking for a DNS server which didn't exist, and thus was timing out before going on to the next one. SSHing into nodus should be more responsive. |
2597 | Fri Feb 12 13:56:16 2010 | josephb | Update | Computers | Finishing touches on IP switch over | The GPIB interfaces have been updated to the new 192.168.113.xxx addresses, with Alberto's help.
Spare ethernet cables have been moved into a cabinet halfway down the x-arm.
The illuminators have a white V error on the alarm handler, but I'm not sure why. I can turn them on and off using the video screen controls (except for the x arm, which has no computer control; you just walk out and turn it on).
There's a laptop or two I haven't tracked down yet, but that should be it on IPs.
At some point, a find and replace on 131.215.xxx.yyy addresses to 192.168.xxx.yyy should be done on the wiki. I also need to generate an up to date ethernet IP spreadsheet and post it to the wiki.
|
2599 | Fri Feb 12 15:59:16 2010 | josephb | Update | Computers | Testpoints not working | Non-testpoint channels seem to be working in data viewer, however testpoints are not. The tpman process is not running on fb40m. My rudimentary attempts to start it have failed.
# /usr/controls/tpman &
13929
# VMIC RFM 5565 (0) found, mapped at 0x2868c90
VMIC RFM 5579 (1) found, mapped at 0x2868c90
Could not open 5565 reflective memory in /dev/daqd-rfm1
16 kHz system
Spawn testpoint manager
no test point service registered
Test point manager startup failed; -1
It looks like it may be an issue with the reflective memory (although the cables are plugged in and I see the correct lights lit on the RFM card in the back of fb40m).
That this is an RFM error is confirmed by running /usr/install/rfm2g_solaris/vmipci/sw-rfm2g-abc-005/util/diag/rfm2g_util and entering 3 (which should be the device number).
Interestingly, device number 4 works, and appears to be the correct RFM network (i.e. changing the ETMY lscPos offset changes the corresponding value in memory).
So, my theory is that when Alex put the cards back in, the device number (PCI slot location?) was changed, and now the tpman code doesn't know where to look for it.
Edit: It doesn't look like PCI slot location is it, given there are 4 slots and it's in #3 currently (or #2, I suppose, depending on which way you count). Neither seems likely to map to the number 4. So I don't know how that device number gets set.
|
2610 | Wed Feb 17 12:45:19 2010 | josephb | Update | Computers | Updated Megatron and its firewall | I updated the IP address on the Cisco Linksys wireless N router, which we're using to keep megatron separated from the rest of the network. I then went in and updated megatron's resolv.conf and hosts files. It is now possible to ssh into megatron again from the control machines. |
2622 | Mon Feb 22 09:45:34 2010 | josephb | Update | General | Prep for Power Supply Stop |
Quote: |
Autoburts have not been working since the network changeover last Thursday.
Last snapshot was around noon on Feb 11... 
It turns out this happened when the IP address got switched from 131.... to 192.... Here's the horrible little piece of perl code which was failing:
$command = "/usr/sbin/ifconfig -a > $temp";
system($command);
open(TEMP,$temp) || die "Cannot open file $temp\n";
$site = "undefined";
#
# this is a horrible way to determine site location
while ($line = <TEMP>) {
    if ($line =~ /10\.1\./) {
        $site = "lho";
    } elsif ($line =~ /10\.100\./) {
        $site = "llo";
    } elsif ($line =~ /192\.168\./) {
        $site = "40m";
    }
}
if ($site eq "undefined") {
    die "Cannot Determine Which LIGO Observatory this is\n";
}
I've now put in the correct numbers for the 40m... and it's now working as before. I also re-remembered how the autoburt works:
1) op340m has a line in its crontab to run /cvs/cds/caltech/burt/autoburt/burt.cron (I've changed this to now run at 7 minutes after the hour instead of at the start of the hour).
2) burt.cron runs /cvs/cds/scripts/autoburt.pl (it was using a perl from 1999 to run this - I've now changed it to use the perl 5.8 from 2002 which was already in the path).
3) autoburt.pl looks through every directory in 'target' and tries to do a burt of its .req file.
Oh, and it looks like Joe has fixed the bug where only op440m could ssh into op340m by editing the host.allow or host.deny file (+1 point for Joe).
But he forgot to elog it (-1 point for Joe).
|
I knew there was going to be a script somewhere with a hard-coded IP address. My fault for missing it. However, in regards to the removal of op340m's hosts.deny file, I did elog it here: item number 5. |
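The cron line described in the quote above (running burt.cron at 7 minutes past the hour) would look something like this in op340m's crontab; the field layout is assumed, since only the minute offset is stated in the entry.

```
# crontab entry on op340m (sketch): run the autoburt cron job
# at 7 minutes past each hour
7 * * * * /cvs/cds/caltech/burt/autoburt/burt.cron
```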
2626 | Mon Feb 22 11:46:55 2010 |
josephb | Update | Computers | fb40m | I fixed the JetStor 416S raid array IP address by plugging in my laptop to its ethernet port, setting my IP to be on the same subnet, and using the web interface. (After finally tracking down the password, it has been placed in the usual place).
After this change, I powered up the fb40m2 machine and rebooted the fb40m machine. This seems to have made all the associated lights green.
Data viewer is working such that it is recording from the point I fixed the JetStor raid array and did the fb40m reboot. It can also go back in time before the IP switch over. |
2628 | Mon Feb 22 13:08:27 2010 |
josephb | Update | Computers | Minor tweaks to c1omc | While working on c1omc, I created a .cshrc file in the controls home directory, and had it source the cshrc.40m file so that useful shortcuts like "target" and "c" work, among other things. I also fixed the resolv.conf file so that it correctly uses linux1 as its name server (speeding up ssh login times). |
2727 | Mon Mar 29 10:40:59 2010 | josephb | Update | Cameras | GigE camera no work from screen |
Quote: |
Not that this is an urgent concern, just a data point which shows that it doesn't just not work at the sites.
|
I had to restart the dhcpd server on Ottavia that allows us to talk to the camera. I then also changed the configuration script on the camera so that it no longer thinks ottavia is 131.215.113.97, but correctly 192.168.113.97. Overall took 5 minutes.
I also looked up services for Centos 5, and set it using the program serviceconf to start the DHCP server when Ottavia is rebooted now. That should head off future problems of that nature. For reference, to start the dhcp server manually, become root and type "service dhcpd start".
|
2734 | Tue Mar 30 11:16:05 2010 |
josephb | HowTo | Computers | ezca update information (CDS SVN) | I'd like to try installing an updated multi-threaded ezca extension later this week, allowing for 64-bit builds of GDS ezca tools, provided by Keith Thorne. The code can be found in the LDAS CVS under gds, as well as in CDS subversion repository, located at
https://redoubt.ligo-wa.caltech.edu/websvn/
It's under gds/epics/ in that repository. The directions are fairly simple:
1) To install ezca with multi-threading in an existing EPICS installation:
- copy ezca_2010mt.tar.gz to (EPICS_DIR)/extensions/src
- cd (EPICS_DIR)/extensions/src
- tar -xzf ezca_2010mt.tar.gz
- modify (EPICS_DIR)/extensions/Makefile to point 'ezca' at 'ezca_2010mt'
- cd ezca_2010mt
- set EPICS_HOST_ARCH appropriately
- make
|
2739 | Wed Mar 31 10:34:02 2010 | josephb | Update | elog | Elog not responding this morning | When I went to use the elog this morning, it wasn't responding. I killed the process on nodus, and then restarted, per the 40m wiki instructions. |
2744 | Wed Mar 31 16:55:05 2010 |
josephb | Update | Computers | 2 computers from Alex and Rolf brought to 40m | I went over to Downs today and was able to secure two 8 core machines, along with mounting rails. These are very thin looking 1U chassis computers. I was told by Rolf the big black box computers might be done tomorrow afternoon. Alex also kept one of the 8 core machines, since he needed to replace a hard drive on it and also wanted to keep it for some further testing, although he didn't specify for how long.
I also put in a request with Alex and Rolf for the RCG system to produce code which includes memory location hooks for plant models automatically, along with a switch to flip from the real to simulated inputs/outputs.
|
2791 | Mon Apr 12 17:37:52 2010 |
josephb | Update | Computers | Y end simulated plant progress | Currently, the y end plant is yep.mdl. In order to compile it properly (for the moment at least) requires running the normal makefile, then commenting out the line in the makefile which does the parsing of the mdl, and rerunning after modifying the /cds/advLigo/src/fe/yep/yep.c file.
The modifications to the yep.c file are to change the six lines that look like:
"plant_mux[0] = plant_gndx" into lines that look like "plant_mux[0] = plant_delayx". You also have to add initialization of the plant_delayx type variables to zero in the if(feInt) section, near where plant_gndx is set to zero.
This is necessary to get the position feedback within the plant model to work properly.
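The edit described above could also be done with sed rather than by hand. A sketch, where the plant_gndx/plant_delayx names come from this entry and the here-document line stands in for one of the six generated lines in yep.c (run the same substitution against the real file after the first make):

```shell
# demonstrate the substitution on a stand-in line; for the real edit,
# apply the same sed expression to src/fe/yep/yep.c
sed 's/= plant_gnd/= plant_delay/g' <<'EOF'
plant_mux[0] = plant_gndx;
EOF
# prints: plant_mux[0] = plant_delayx;
```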
#NOTE by Koji
CAUTION:
This entry means that Makefile was modified not to parse the mdl file.
This affects making any of the models on megatron. |
Attachment 1: YEP.png
Attachment 2: YEP_PLANT.png
2798 | Tue Apr 13 12:49:35 2010 |
josephb | Update | Computers | Y end simulated plant progress |
Quote: |
Currently, the y end plant is yep.mdl. Compiling it properly (for the moment at least) requires running the normal makefile, then commenting out the line in the makefile which does the parsing of the mdl, and rerunning after modifying the /cds/advLigo/src/fe/yep/yep.c file.
The modifications to the yep.c file are to change the six lines that look like:
"plant_mux[0] = plant_gndx" into lines that look like "plant_mux[0] = plant_delayx". You also have to add initialization of the plant_delayx type variables to zero in the if(feInt) section, near where plant_gndx is set to zero.
This is necessary to get the position feedback within the plant model to work properly.
#NOTE by Koji
CAUTION:
This entry means that Makefile was modified not to parse the mdl file.
This affects making any of the models on megatron.
|
To prevent this confusion in the future, at Koji's suggestion I've created a Makefile.no_parse_mdl in /home/controls/cds/advLIGO on megatron. The normal makefile is the original one (with correct parsing now). So the correct procedure is:
1) "make yep"
2) Modify yep.c code
3) "make -f Makefile.no_parse_mdl yep" |
2808 | Mon Apr 19 13:23:03 2010 |
josephb | Configuration | Computers | yum update fixed on control room machines | I went to Ottavia and tried running yum update. It was having dependency issues with mjpegtools, which was an rpmforge-provided package. In order to get it to update, I moved the rpmforge priority above (i.e., a lower number than) that of epel (epel: from 10 to 20, rpmforge: from 20 to 10). This resolved the problem and the updates proceeded (all 434 of them). yum update on Ottavia now reports nothing needs to be done.
I went to Rosalba and found rpmfusion repositories enabled. The only one of the 3 repositories in each file enabled was the first one.
I then added priority listings to all the repositories on Rosalba. I set CentOS-Base and related to priority=1. I set the CentOS-Media.repo priority to 1 (although it is disabled - just to head off future problems). I set all epel related priorities to 20. I set all rpmforge related priorities to 10. I set all rpmfusion related priorities to 30, and left the first repo in each of rpmfusion-free-updates and rpmfusion-nonfree-updates enabled. All other rpmfusion testing repositories were disabled by me.
I then had to by hand downgrade expat to expat-1.95.8-8.3.el5_4.2.x86_64 (the rpmforge version). I also removed and reinstalled x264.x86_64. Lastly I removed and reinstalled lyx. yum update was then run and completed successfully.
I installed yum-priorities on Allegra and made all CentOS-Base repositories priority 1. I similarly made the still-disabled CentOS-Media priority 1. I made all epel related repos priority 20. I made all lscsoft repos priority=50 (not sure why lscsoft is on Allegra and none of the other machines). I made all rpmforge priorities 10. I then ran "yum update", which updated 416 packages.
So basically all the Centos control room machines are now using the following order for repositories:
CentOS-Base > rpmforge > epel > (rpmfusion - rosalba only) > lscsoft (allegra only)
I'm not sure if rpmfusion and lscsoft are necessary, but I've left them for now. This should mean "yum update" will have far fewer problems in the future.
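With the yum-priorities plugin, a priority is just an extra key in each repo stanza. A sketch, with illustrative field values (only the priority line and the lower-number-wins convention are taken from this entry):

```
# /etc/yum.repos.d/rpmforge.repo (sketch; lower number = higher priority)
[rpmforge]
name = RHEL/CentOS packages from RPMforge
enabled = 1
priority = 10
```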
|
2824 | Wed Apr 21 11:32:31 2010 |
josephb | Update | CDS | 40m CDS hardware update and software requests | This is mostly a reminder to myself about what I discussed with Jay and Alex this morning.
The big black IO chassis are "almost" done, except for the missing parts. We have 2 Dolphin, 1 large and 1 small I/O chassis due to us. One Dolphin chassis is effectively done and is sitting in the test stand. However, 2 are missing timing boards, and 3 are missing the boards necessary for the connection to the computer. The parts were ordered a long time ago, but it's possible they were "sucked to one of the sites" by Rolf (remember, this is according to Jay). They need to either track them down in Downs (possibly they're floating around and were just misplaced in the recent move), get them sent back from the sites, or order new ones (I was told by one person that the place they order them from notoriously takes a long time, sometimes up to 6 weeks; I don't know if this is exaggeration or not). Other than the missing parts, they still need to wire up the fans and install new momentary power switches (apparently the Dolphin boards want momentary on/off buttons). Otherwise, they're done.
We are due another CPU, just need to figure out which one it was in the test stand.
6 more BIO boards are done. When I went over the plans with Jay, we realized we needed 7 more, not 6, so they're putting another one together. Some ADC/DAC interface boards are done. I promised to do another count here to determine how many we have and how many we need, and then report that back to Jay before I steal the ones which are complete. Unfortunately, he did not have a new drawing for the ASC/vertex wiring, so we don't have a solid count of the stuff needed for them. I'll be taking a look at the old drawings and also looking at what we physically have.
I did get Jay to place the new LSC wiring diagram into the DCC (apparently the old one was never put in, or we simply couldn't find it). It's located at: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?docid=10985
I talked briefly with Alex, reminded him of feature requests and added a new one:
1) Single part representing a matrix of filter banks
2) Automatic generation of Simulated shared memory locations and an overall on/off switch for ADC/DACs
3) Individual excitation and test point pieces (as opposed to having to use a full filter bank). He says these already exist, so when I do the CVS checkout, I'll see if they work.
I also asked where the adl default files lived, and he pointed me at ~/cds/advLigo/src/epics/util/
In that directory are FILTER.adl, GDS_TP.adl, and MONITOR.adl. Those are the templates. We also discovered the timing signal at some point was changed from something like SYS-DCU_ID to FEC-DCU_ID, so I just need to modify the .adl files to fix the time stamp channel as well. I basically need to do a CVS checkout, put the fixes in, then commit back to the CVS. Hopefully I can do that sometime today.
I also brought over 9 Contec DO-32L-PE boards, which are PCIe isolated digital output boards which go into the IO chassis. These have been placed above the 2 new computers, behind the 1Y6 rack.
|
2826 | Wed Apr 21 16:48:38 2010 |
josephb | Update | CDS | Hardware update | Alberto and myself went to Downs and acquired the 3rd 4x processor (dual core, so 8x cores total) computer. We also retrieved 6 BIO interface boards (blue front thin boxes), 4 DAC interface boards, and 1 ADC interface board. The tops have not been put on yet, but we have the tops and a set of screws for them. For the moment, these things have been placed behind the 1Y6 rack and under the table behind the 1Y5 rack.
The 6 BIO boards have LIGO travelers associated with them: SN LIGO-S1000217 through SN LIGO-S1000222. |
2832 | Thu Apr 22 15:44:34 2010 |
josephb | Update | Computers | | I updated the default FILTER.adl file located in /home/controls/cds/advLigo/src/epics/util/ on megatron. I moved the yellow ! button up slightly, and fixed the time string in the upper right. |
Attachment 1: New_example_CDS_filter.png
2841 | Mon Apr 26 10:21:45 2010 |
josephb | Update | LSC | Started dev of LSC FE |
Quote: |
Joe and I started working on the new LSC FE control today. We made a diagram of the system in Simulink, but were unable to compile it.
Joe checked out the latest CDS software out of their new SVN and put it somewhere (perhaps his home directory).
|
The SVN checkout was done on megatron. It is located under /home/controls/cds/advLigoRTS
So, to compile (or at least try to) you need to copy the .mdl file from /cvs/cds/caltech/cds/advLigo/src/epics/simLink to /home/controls/cds/advLigoRTS/src/epics/simLink on megatron, then run make SYS in the advLigoRTS directory on megatron.
The old checkout from CVS exists on megatron under /home/controls/cds/advLigo. |
2844 | Mon Apr 26 11:29:37 2010 |
josephb | Update | Computers | Updated bitwise.pm in RCG SVN plus other fixes | To fix a problem one of the models was having, I checked the CVS version of the Bitwise.pm file into the SVN (located in /home/controls/cds/advLigoRTS/src/epics/util/lib), which adds left and right bit shifting functionality. The yec model now builds with the SVN checkout.
Also while trying to get things to work, I discovered the cdsRfmIO piece (used to read and write to the RFM card) now only accepts 8 bit offsets. This means we're going to have to change virtually all of the RFM memory locations for the various channels, rather than using the values from its previous incarnation, since most were 4 bit numbers. It also means it going to eat up roughly twice as much space, as far as I can tell.
Turns out the problem we were having getting it to compile was nicely answered by Koji's elog post. The shmem_daq value was not set to 1. This caused it to look for myrinet header files which did not exist, and caused compile-time errors. The model now compiles on megatron.
[Edit by KA: 4 bit and 8 bit would mean "bytes". I don't recall which e-log of mine Joe is referring.]
|
2845 | Mon Apr 26 12:24:58 2010 |
josephb | Update | General | Daily Downs update | Talked with Jay briefly this morning.
We are due another 1-U 4 core (8 CPU) machine, which is one of the ones currently in the test stand. I'm hoping sometime this week I can convince Alex to help me remove it from said test stand.
The megatron machine we have is definitely going to be used in the 40m upgrade (to answer a question of Rana's from last Wednesday's meeting). That's apparently the only machine of that class we get, so moving it to the vertex for use as the LSC or SUS vertex machine may make sense. Overall we'll have the ASS, OMC, and Megatron (SUS?) machines, along with the 4 new 1-U machines, for LSC, IO, End Y and End X. We are getting 4 more IO chassis, for a total of 5. The ASS and OMC machines will be going without full new chassis.
Speaking of IO chassis, they are still being worked on. Still need a few cards put in and some wiring work done. I also didn't see any other adapter boards finished either. |
2849 | Tue Apr 27 11:16:13 2010 |
josephb | Configuration | CDS | Wiki page with CDS .mdl names, shared memory allocation | I've added a new page in the wiki which describes the current naming scheme for the .mdl model files used for the real time code generator. Note that these model names do not necessarily have to be the names of the channels contained within. It's still possible to make all suspension-related channels start with C1:SUS-, for example. I'm also allocating 1024 8-byte channels of shared memory address space for each controller and each simulated plant.
The wiki page is here
Name suggestions, other front end models that are needed long term (HEPI is listed for example, even though we don't have it here, since in the long run we'd like to port the simulated plant work to the sites) are all welcome. |
2860 | Thu Apr 29 14:37:16 2010 |
josephb | Update | CDS | New Channel Name to Memory Location file | A while back we had requested a feature for the RCG code where a single file would define a memory location's name as well as its explicit hex address. Alex told me it had been implemented in the latest code in SVN. After being unable to find said file, I went back and talked to him and Rolf. Rolf said it existed, but had not been checked into the SVN yet.
I now have a copy of that file, called G1.ipc. It is supposed to live in /cvs/cds/caltech/chans/ipc/ , so I created the ipc directory there. The G1.ipc file is actually for a geo install, so we'll eventually make a C1.ipc file.
The first couple lines look like:
# /cvs/cds/geo/chans/ipc/G1.ipc
[default]
ipcType=SHMEM
ipcRate=2048
ipcNum=0
desc=default entry
[G1:OMC-QPD1P]
ipcType=SHMEM
ipcRate=32768
ipcNum=0
desc=Replaces 0x2000
#[G1:OMC-NOTUSED]
#ipcType=SHMEM
#ipcRate=32768
#ipcNum=1
[G1:OMC-QPD2P]
ipcType=SHMEM
ipcRate=32768
ipcNum=1
desc=Replaces 0x2008
There are also sections using ipcType PCI:
[G1:SUS-ADC_CH_24]
ipcType=PCI
ipcRate=16384
ipcNum=1
desc=Replaces 0x20F0
[G1:SUS-ADC_CH_25]
ipcType=PCI
ipcRate=16384
ipcNum=2
desc=Replaces 0x20F0
Effectively the ipcNum tells it which memory location to use, starting with 0x2000 (at least that's how I'm interpreting it). Every entry of a given ipcType has a different ipcNum, which seems to be correlated with its description (at least early on - later in the file many desc= lines repeat, which I think means people were copy/pasting and got tired of editing the file). Once I get a C1.ipc file going, it should make our .mdl files much more understandable, at least for communicating between models. It also looks like it somehow interacts with the ADCs/DACs with ipcType PCI, although I'm hoping to get a full intro on how to use the file tomorrow from Rolf and Alex. |
2861 | Thu Apr 29 15:48:47 2010 |
josephb | Update | CDS | New CDS overview diagram in wiki | I've added a diagram in the wiki under IFO Upgrade 2009-2010->New CDS->Diagram section Joe_CDS_Plan.pdf (the .svg file I used to create it is also there). This was mostly an exercise in me learning inkscape, as well as putting out a diagram which lists control and model names and where they're running.
A direct link is: CDS_Plan.pdf |
2871 | Mon May 3 15:39:39 2010 |
josephb | Update | CDS | Daily Downs update | Talked with Jay briefly today. Apparently there are 3 IO chassis currently on the test stand at Downs and undergoing testing (or at least they were when Alex and Rolf were around). They are being tested to determine which slots refer to which ADC, among other things. Apparently the numbering scheme isn't as simple as 0 on the left, and going 1,2,3,4, etc. As Rolf and Alex are away this week, it is unlikely we'll get them before their return date.
Two other chassis (which apparently is one more than the last time I talked with Jay) are still missing cards for communicating between the computer and the IO chassis, although Gary thinks I may have taken them with me in a box. I've looked through all the CDS stuff I know of here at the 40m and have not seen the cards. I'll be checking in with him tomorrow to figure out when (and if) I'll have the cards needed. |
2872 | Mon May 3 16:53:27 2010 |
josephb | Update | CDS | Updated lsc.mdl and the ifo plant model with memory locations | I've updated the LSC and IFO models that Rana created with new shared memory locations. I've used the C1:IFO- for the ifo.mdl file outputs, which in turn are read by the lsc.mdl file. The LSC outputs being lsc control signals are using C1:LSC-. Optics positions would presumably be coming from the associated suspension model, and am currently using SUP, SPX, and SPY for the suspension plant models (suspension vertex, suspension x end, suspension y end).
I've updated the web view of these models on nodus. They can be viewed at: https://nodus.ligo.caltech.edu:30889/FE/
I've also created a C1.ipc file in /cvs/cds/caltech/chans/ipc which assigns ipcNum to each of these new channels in shared memory. |
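Following the G1.ipc pattern quoted in an earlier entry, a C1.ipc stanza might look like the sketch below; the channel name, rate, and ipcNum here are illustrative assumptions, not the actual assignments.

```
# /cvs/cds/caltech/chans/ipc/C1.ipc (sketch; entries are illustrative)
[default]
ipcType=SHMEM
ipcRate=2048
ipcNum=0
desc=default entry
[C1:LSC-ETMX_POS]
ipcType=SHMEM
ipcRate=16384
ipcNum=1
desc=illustrative entry
```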
2877 | Tue May 4 13:14:43 2010 |
josephb | Update | CDS | lsc.mdl and ifo.mdl to build (with caveats) | I got around to actually trying to build the LSC and IFO models on megatron. Turns out "ifo" can't be used as a model name and breaks the build. I have a feeling this has something to do with the find-and-replace routines (ifo is used for the C1, H1, etc. type replacements throughout the code). If you change the model name to something like ifa, it builds fine. This does mean we need a new name for the ifo model.
Also, I learned that the model likes to have the cdsIPCx memory locations terminated on the inputs when a location is being used in an input role (i.e., it's bringing the channel into the model). However, when the same part is being used in an output role (i.e., it's transmitting from the model to some other model), terminating the output side gives errors when you try to make.
It's using the C1.ipc file (in /cvs/cds/caltech/chans/ipc/) just fine. If you have missing memory locations in the C1.ipc file (i.e., you forgot to define something), it gives a readable error message at compile time, which is good. The file seems to be parsed properly, so the era of writing "0x20fc" for block names is officially over. |
2878 | Tue May 4 14:57:53 2010 |
josephb | Update | Computers | Ottavia has moved | Ottavia was moved this afternoon from the control room into the lab, adjacent to Mafalda in 1Y3 on the top shelf. It has been connected to the camera hub, as well as the normal network. Its cables are clearly labeled. Note the camera hub cable should be plugged into the lower ethernet port. Brief tests indicate everything is connected and it can talk to the control room machines.
The space where Ottavia used to be is now temporarily available as a good place to setup a laptop, as there is keyboard, mouse, and an extra monitor available. Hopefully this space may be filled in with a new workstation in the near future. |
2895 | Fri May 7 14:51:04 2010 |
josephb | Update | CDS | Working on meta .mdl file scripts | I'm currently working on a set of scripts which will be able to parse a "template" mdl file, replacing certain key words, with other key words, and save it to a new .mdl file.
For example you pass it the "template" file of scx.mdl file (suspension controller ETMX), and the keyword ETMX, followed by an output list of scy.mdl ETMY, bs.mdl BS, itmx.mdl ITMX, itmy.mdl ITMY, prm.mdl PRM, srm.mdl SRM. It produces these new files, with the keyword replaced, and a few other minor tweaks to get the new file to work (gds_node, specific_cpu, etc). You can then do a couple of copy paste actions to produce a combined sus.mdl file with all the BS, ITM, PRM, SRM controls (there might be a way to handle this better so it automatically merges into a single file, but I'd have to do something fancy with the positioning of the modules - something to look into).
I also have plans for a script which gets passed a mdl file, and updates the C1.ipc file, by adding any new channels and incrementing the ipcNum appropriately. So when you make a change you want to propagate to all the suspensions, you run the two scripts, and have an already up to date copy of memory locations - no additional typing required.
Similar scripts could be written for the DAQ screens as well, so as to have all the suspension screens look the same after changing one set. |
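The core of the template substitution described above is a keyword replace. As a sketch (sed-based, with an inline stand-in line; a real run would read the whole scx.mdl template and also apply the gds_node/specific_cpu tweaks):

```shell
# demonstrate the ETMX -> ETMY keyword swap on a stand-in .mdl line;
# the real script would process the full template file
echo 'Name "C1SUS-ETMX_POS"' | sed 's/ETMX/ETMY/g'
# prints: Name "C1SUS-ETMY_POS"
```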
2903 | Mon May 10 17:47:16 2010 |
josephb | Summary | CDS | Finished | So I finished writing a script which takes an .ipc file (the one which defines channel names and numbers for use with the RCG code generator), parses it, checks for duplicate channel names and ipcNums, then parses an .mdl file looking for channel names, and outputs a new .ipc file with all the new channels added (without modifying existing channels).
The script is written in python, and for the moment can be found in /home/controls/advLigoRTS/src/epics/simLink/parse_mdl.py
I still need to add all the nice command line interface stuff, but the basic core works. And it already found an error in my previous .ipc file, where I had apparently used channel number 21 twice.
Right now it's hard coded to read in C1.ipc and spy.mdl, and outputs to H1.ipc, but I should have that fixed tonight. |
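The duplicate-ipcNum check the script performs can be sketched as a shell one-liner; the inline list here stands in for a real C1.ipc file (the duplicate value 21 echoes the error the script actually caught):

```shell
# flag ipcNum values that appear more than once in .ipc-style input
printf '%s\n' 'ipcNum=0' 'ipcNum=21' 'ipcNum=21' | grep '^ipcNum=' | sort | uniq -d
# prints: ipcNum=21
```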
2922 | Wed May 12 12:32:04 2010 |
josephb | Configuration | CDS | Modified /etc/rc.d/rc.local on megatron | I modified the /etc/rc.d/rc.local file on megatron removing a bunch of the old test module names and added the new lsc and lsp modules, as well as a couple planned suspension models and plants, to shared memory so that they'll work. Basically I'm trying to move forward into the era of working on the actual model we're going to use in the long term as opposed to continually tweaking "test" models.
The last line in the file is now: /usr/bin/setup_shmem.rtl lsc lsp spy scy spx scx sus sup&
I removed mdp mdc mon mem grc grp aaa tst tmt. |
2923 | Wed May 12 12:58:26 2010 |
josephb | Configuration | CDS | Setup fb to handle lsc, lsp models on megatron | I modified /cvs/cds/caltech/target/fb and changed the line "set controller_dcu=10" to "set controller_dcu=13" (where 13 is the lsc dcu_id number).
I also changed the set gds_server line from having 10 and 11 to 13 and 14 (lsc and lsp).
The file /cvs/cds/caltech/fb/master was modified to use C1LSC.ini and C1LSP.ini, as well as tpchn_C2.par (LSC) and tpchn_C3.par (LSP)
testpoint.par in /cvs/cds/caltech/target/gds/param was modified to use C-node1 and C-node2 (1 less than the gds_node_id for lsc and lsp respectively).
Note all the values of gds_node_id, dcu_id, and so forth are recorded at http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/Existing_RCG_DCUID_and_gds_ids |
2927
|
Thu May 13 15:19:44 2010 |
josephb | Update | CDS | Trying to get lsc.mdl and lsp.mdl working | I had a chat with Alex this morning and discovered that the dcu_ids 13,14,15,16 are reserved currently, and should not be used. I was told 9-12 and 17-26 were fine to use. I pointed out that we will eventually have more modules than that. His response was he is currently working on the framebuilder code and "modernizing" it, and that those restrictions will hopefully be lifted in the future although he isn't certain at this time what the real maximum gds_id number is (he was only willing to vouch for up to 26 - although the OMC seems to be currently working and set to 30).
Alex also suggested running an IOP module to provide timing (since we are using the adcSlave=1 option in the models). Apparently these are the x00.mdl, x01.mdl, x11.mdl files in the /home/control/cds/advLigoRTS/src/epics/simLink/ directory. I saved x00.mdl as io1.mdl (I didn't want to use io0, as it's a pain to differentiate between a zero and an 'O'). This new IOP is using gds_node=1, dcu_id=9. I modified the appropriate files to include it.
I modified /etc/rc.d/rc.local and added io1 to the shmem line. I modified /cvs/cds/caltech/target/fb/daqdrc to use dcu_id 9 as the controller (this is the new IOP model's dcu_id number). In that same directory I modified the file master by adding /cvs/cds/caltech/chans/daq/C1IO1.ini as well as uncommenting the tpchn_C1 line. I modified testpoint.par in /cvs/cds/caltech/target/gds/param to include C-node0, and modified the prognum for lsc and lsp to 0x31001003 and 0x31001005.
So I started the 3 processes with startio1, startlsc, startlsp, then went to the fb directory and started the framebuilder. However, the model lsc.mdl is still having issues, although lsp and io1 seem to be working. At this point I just need to track down what is fundamentally different between lsc and lsp and correct it in the lsc model. I'm hoping it's not related to the fact that we actually had a previous lsc front end and there's some legacy stuff getting in the way. One thing I can test is changing the name and seeing if that runs.
|
2932
|
Fri May 14 12:14:26 2010 |
josephb | Update | CDS | Need to track down old code for lsc system and remove them | I'm currently in the process of tracking down what legacy code is interfering with the new lsc model.
It turns out if you change the name of the lsc file to something else (say scx, as a quick test for example), it runs fine. In fact, the lsc and scx GDS_TP screens work in that case (since they're looking at the same channels). As one would expect, running them both at the same time causes problems. Note to self: make sure the other one is killed first. It does mean the lsc code gets loaded part way, but doesn't seem to communicate over EPICS or to the other models. However, I don't know what existing code is interfering. Currently going through the target directories and so forth. |
2946
|
Tue May 18 14:30:31 2010 |
josephb | Update | CDS | LSC.mdl problem found and fixed | After having checked other possibilities and decided I wasn't imagining it (lsc.mdl failing under its own name but working under another), I tracked Alex down and asked for help.
After scratching our heads, we finally tracked it down to the RCG code itself, as opposed to any existing code.
Apparently, the skeleton.st file (located in /home/controls/cds/advLigoRTS/src/epics/util/) has special additional behavior for models with the following names: lsc, asc, hepi, hepia, asc40m, ascmc, tchsh1, tchsh2.
Alex was unsure what this additional code was for. To disable it, we went into the skeleton.st file and changed the name "SEQUENCER_NAME_lsc" to "SEQUENCER_NAME_lsc_removed" wherever it occurred. These names were in #ifdef statements, so this code will now only be used if the model is named lsc_removed. This apparently fixed the problem. Running startlsc now runs the code as it should, and I can proceed to testing the communication to the lsp model.
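In script form, the fix amounts to a one-token rename (this is a sketch of the edit we made by hand, not something we actually ran):

```python
SKELETON = "/home/controls/cds/advLigoRTS/src/epics/util/skeleton.st"

def disable_lsc_sequencer(text):
    """Rename the token so the #ifdef SEQUENCER_NAME_lsc branches only
    compile for a model literally named lsc_removed."""
    return text.replace("SEQUENCER_NAME_lsc", "SEQUENCER_NAME_lsc_removed")

# Usage (back the file up first; running this twice would double the suffix):
#   src = open(SKELETON).read()
#   open(SKELETON, "w").write(disable_lsc_sequencer(src))
```

The other special names (asc, hepi, etc.) use their own tokens, so they are untouched by this replace.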
Alex said he'd try to figure out what these special #ifdef code pieces are intended for and hopefully completely remove them once we've determined we don't need it. |
2948
|
Tue May 18 16:19:19 2010 |
josephb | Update | CDS | We have two new IO chassis | We have 2 new IO chassis with mounting rails and the necessary boards for communicating with the computers. We still need boards to talk to the ADCs, DACs, etc., but it's a start. These two IO chassis are currently in the lab, but not in their racks.
They will be installed into 1X4 and 1Y5 tomorrow. In addition to the boards, we need some cables, and the computers need the appropriate real-time operating systems set up. I'm hoping to get Alex over sometime this week to help work on that. |
2953
|
Wed May 19 16:09:11 2010 |
josephb | Update | CDS | Racks too small for IO chassis rails | So I discovered the hard way that the racks are not standard width, when I was unable to place a new IO chassis into the racks with rails attached. The IO chassis is narrow enough to fit through without the rails, however.
I've talked to Steve and we decided on having some shelves made. I've asked Steve to get us six: one for each end (2), one for SUS, one for LSC, one for IO, and one extra. |
2958
|
Thu May 20 13:12:28 2010 |
josephb | Update | CDS | Preparations for testing lsc,lsp, scy,spy together | In /cvs/cds/caltech/target/fb modified:
master: cleaned up so only io1 (IO processor), LSC, LSP, SCY, SPY were listed, along with their associated tpchan files.
daqdrc: fixed "dcu_rate 9 = 32768" to "dcu_rate 9 = 65536" (since the IO processor is running at 64k)
Added "dcu_rate 21 = 16384" and "dcu_rate 22 = 16384"
Changed "set gds_server = "megatron" "megatron" "megatron" 9 "megatron" 10 "megatron" 11;" to
set gds_server = "megatron" "megatron" 9 9;
The above change was made after reading Rolf's Admin guide: http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/CDS?action=AttachFile&do=get&target=RCG_admin_guide.pdf
The set gds_server is simply telling which computer the gds daemons are running on, and we don't need to do it 5 times.
In /cvs/cds/caltech/gds/params modified:
testpoint.par: added C-node7 and C-node8 for SCY and SPY respectively. |
2989
|
Wed May 26 10:58:29 2010 |
josephb | Update | CDS | New RCG checkout for use with all machines plus some issues | Now that we have multiple machines we'd like to run the new front end code on, I'm finding it annoying to have to constantly copy files back and forth to have the latest models on different machines. So I've come to the conclusion that Rana was right all along, and I should be working somewhere in /cvs/cds/caltech, which gets mounted by everyone.
However, this leads to the svn problem: i.e., I need recent code checked out from the RCG repository, but our current /cvs/cds/caltech/cds/advLigo directory is covered by the 40m SVN. So for the moment, I've checked out advLigoRTS from https://redoubt.ligo-wa.caltech.edu/svn/advLigoRTS/trunk into /cvs/cds/caltech/cds/advLigoRTS. This directory will be kept as up to date as I can keep it, both by running svn update to get Alex/Rolf's changes and on my end by adding the new and updated models. It will remain linked to the RCG repository and not the 40m repository. At some point a better solution is needed, but it's the best I can come up with for now.
Also, because we are starting to compile on different machines sometimes, you may run into a problem where a code won't run on a different machine. This can be fixed by commenting out some lines in the startup script. Go to the /cvs/cds/caltech/scripts directory. Then edit the associated startSYS file by commenting out the lines that look like:
if [ `hostname` != megatron ]; then
echo Cannot run `basename $0` on `hostname` computer
exit 1
fi
Unfortunately, this gets reverted each time "make SYS" and "make install-SYS" gets run.
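A related bookkeeping headache is the specific_cpu assignment described next; a small hypothetical helper (core numbering per our 8-core machines, with cores 0 and 1 reserved) could catch clashes before a rebuild:

```python
def check_cpu_assignment(models, n_cores=8):
    """models: model name -> specific_cpu.  Core 0 is reserved for the
    real-time kernel and core 1 for the IO processor, so models may only
    use cores 2 through n_cores-1, one model per core per machine."""
    used = {}
    for model, cpu in models.items():
        if cpu in (0, 1):
            raise ValueError(f"{model}: core {cpu} is reserved")
        if not 2 <= cpu < n_cores:
            raise ValueError(f"{model}: core {cpu} out of range")
        if cpu in used:
            raise ValueError(f"core {cpu} claimed by {used[cpu]} and {model}")
        used[cpu] = model

# One possible (illustrative) layout while everything runs on megatron:
check_cpu_assignment({"lsc": 2, "lsp": 3, "sus": 4, "sup": 5, "spy": 6, "scy": 7})
```

Run per machine, since the same core number can legitimately be reused on different hosts.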
The other issue this leads to is that some machines don't have as many CPUs available as others. For example, our new thin 1U machines have only 4 dual-core processors (8 cores total). This means the specific_cpu setting of any of the codes cannot be higher than 7 (cores being numbered 0 through 7). Core 0 is reserved for the real-time kernel, and core 1 will be used on all machines for the IO processor. This leaves only cores 2 through 7 available for models to use, which include LSC, LSP, SUS, SUP, SPY, SCY, SPX, SCX, OMC, OMP, OAF, OAP?, IOC, IOP. Since there are more than 6 models, duplication of specific_cpu values across machines will be necessary in the final production code. Codes which are all running on Megatron at one point will have to be rebuilt with new specific_cpu values when run on the actual final machine. |
2990
|
Wed May 26 12:59:26 2010 |
josephb | Update | CDS | Created sus, sup, scx, spx models | I created the sus model, which is the suspension controller for ITMX, ITMY, BS, PRM, SRM. I also created sup, which is the suspension plant model for those same optics.
Updated /cvs/cds/caltech/target/fb master and daqdrc files to add SUS, SUP models. Megatron's /etc/rc.d/rc.local file has been updated to include all the necessary models as well.
The suspension controller's Binary IO outputs need to be checked, and corrected if wrong by changing the constant connected to the exclusive-or gates. Right now it's using the end-suspension binary output values, which may not be correct. |
3005
|
Fri May 28 10:44:47 2010 |
josephb | Update | PEM | DAQ down |
Quote: |
Although trends are available, I am unable to get any full data from in the past (using DTT or DV). I started the FB's daqd process a few times, but no luck. 
I blame Joe's SimPlant monkeying from earlier today for lack of a better candidate. I checked and the frames are actually on the FB disk, so its something else.
|
I tried running dataviewer and dtt this morning. Dataviewer seemed to be working. I was able to get trends, full data on a 2k channel (seismic channels), and full data on a 16k channel (C1:PEM-AUDIO_MIC1). This was tried for a 10 minute stretch from 24 hours ago.
I also tried dtt and was able to get 2k and 16k channel data, for example C1:PEM-AUDIO_MIC1. Was this problem fixed by someone last night or did time somehow fix it? |
3007
|
Fri May 28 11:35:33 2010 |
josephb | Update | CDS | Taking a step backwards to get stuff running | I've modified the lsc.mdl and lsp.mdl files back to an older configuration, where we do not use an IO processor. This seems to let things work for the time being on megatron while I try to figure out what is wrong with the "correct" setup which includes the IO processor.
Basically I removed the adcSlave = 1 line in the cdsParameters block.
I've attached a screen shot of the desktop showing one filter bank in the LSP model passing its output correctly to a filter block in the LSC. I also put in a quick test filter (an integrator) and you can see it got to 80 before I turned off the offset.
So far this is only running on megatron, not the new machine in the new Y end.
The models being used for this are located in /cvs/cds/caltech/cds/advLigoRTS/src/epics/simLink |
Attachment 1: working_lsc_lsp.png
|
|
3008
|
Fri May 28 13:17:05 2010 |
josephb | Update | CDS | Fixed problem with channel access on c1iscex | Talked with Alex and finally tracked down why the codes were not working on the new c1iscex. The .bashrc and .cshrc files in /home/controls/ on c1iscex had the following lines:
setenv EPICS_CA_ADDR_LIST 131.215.113.255
setenv EPICS_CA_AUTO_ADDR_LIST NO
This was interfering with channel access and preventing reads and writes from working properly. We simply commented them out. After logging out and back in, commands like ezcaread and ezcawrite started working, and we were able to get the models passing data back and forth.
Next up, testing RFM communications between megatron and c1iscex. To do this, I'd like to move megatron down to 1Y3, and set up a firewall for it and c1iscex so I can test the frame builder and test points at the same time on both machines. |
3057
|
Tue Jun 8 20:52:25 2010 |
josephb | Update | PEM | DAQ up (for the moment) | As a test, I did a remote reboot of both Megatron and c1iscex, to make sure there was no code running that might interfere with the dataviewer. Megatron is behind a firewall, so I don't see how it could be interfering with the frame builder. c1iscex was only running a test module from earlier today when I was testing the multi-filter matrix part. No daqd or similar processes were running on this machine either, but it is not behind a firewall at the moment.
Neither of these reboots restored access to past data. I note the error message from dataviewer was "read(); errno=9".
Going to the frame builder machine, I ran dmesg. I get some disturbing messages from May 26th and June 7th. There are 6-7 of these pairs of lines for each of these days, spread over the course of about 30 minutes.
Jun 7 14:05:09 fb ufs: [ID 213553 kern.notice] NOTICE: realloccg /: file system full
Jun 7 14:11:14 fb last message repeated 19 times
There's also one:
Jun 7 13:35:14 fb syslogd: /usr/controls/main_daqd.log: No space left on device
I went to /usr/controls/ and looked at the file. I couldn't read it with less; it errored with "Value too large for defined data type". It turns out the file was 2.3 GB and had not been updated since June 7th. There were also a bunch of core dump files from May 25th, and a few more recent ones. The ones from May 25th were somewhat large, about half a gigabyte each. I decided to delete the main_daqd.log file as well as the core files.
This seems to have fixed the data history for the moment (at least with one 16k channel I tested quickly). However, I'm now investigating why that log file filled up, to see if we can prevent this in the future.
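Spotting the runaway log earlier would have been easy with a periodic scan; a sketch (the path and the 500 MB threshold are just examples, not anything we actually run):

```python
import os

def big_files(top, min_bytes=500 * 1024**2):
    """Walk `top` and return (size, path) pairs for files over min_bytes,
    largest first - handy for spotting runaway logs and core dumps."""
    hits = []
    for dirpath, _, names in os.walk(top):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue           # file vanished or unreadable; skip it
            if size >= min_bytes:
                hits.append((size, path))
    return sorted(hits, reverse=True)

# e.g. big_files("/usr/controls") would have flagged the 2.3 GB log
for size, path in big_files("/usr/controls"):
    print(f"{size / 1024**3:.1f} GB  {path}")
```

Dropped into a cron job, this would catch the disk filling up long before daqd starts failing silently.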
Quote: |
As before, I am unable to get data from the past. With DTT on Allegra I got data from now, but its unavailable from 1 hour ago. Same problem using mDV on mafalda. I blame Joe again - or the military industrial complex.
|
Quote:
|
Quote: |
Although trends are available, I am unable to get any full data from in the past (using DTT or DV). I started the FB's daqd process a few times, but no luck. 
I blame Joe's SimPlant monkeying from earlier today for lack of a better candidate. I checked and the frames are actually on the FB disk, so its something else.
|
I tried running dataviewer and dtt this morning. Dataviewer seemed to be working. I was able to get trends, full data on a 2k channel (seismic channels) and full data on a 16k channel (C1:PEM-AUDIO_MIC1) This was tried for a period 24 hours a go for a 10 minute stretch.
I also tried dtt and was able to get 2k and 16k channel data, for example C1:PEM-AUDIO_MIC1. Was this problem fixed by someone last night or did time somehow fix it?
|
|
|
3069
|
Fri Jun 11 15:04:25 2010 |
josephb | Update | CDS | Multi-filter matrix medm screens finished and script for copying filters from SOS file | I've finished the MEDM portion of the RCG FiltMuxMatrix part. Now it generates an appropriate medm screen for the matrix, with links to all the filter banks. The filter bank .adl files are also generated, and placed in a sub directory with the name of the filter matrix as the name of the sub directory.
The input is the first number and the output is the second number. This particular matrix has 5 inputs (0 through 4) and 15 outputs (0 through 14). Unfortunately, the filter names can't be longer than 24 characters, which forced me to use numbers instead of actual part names for the input and output.
The key to the numbers is:
Inputs:
DARM 0
MICH 1
PRC 2
SRC 3
CARM 4
Outputs:
AS_DC 0
AS11_I 1
AS11_Q 2
AS55_I 3
AS55_Q 4
POP_DC 5
POP11_I 6
POP11_Q 7
POP55_I 8
POP55_Q 9
REFL_DC 10
REFL11_I 11
REFL11_Q 12
REFL55_I 13
REFL55_Q 14
To get this working required modifications to the feCodeGen.pl and the creation of mkfiltmatrix.pl (which was based off of mkmatrix.pl). These are located in /cvs/cds/caltech/cds/advLigoRTS/src/epics/util/
In related news, I asked Valera if he could load the simulated plant filters he had generated, and after several tries, his answer was no. He says they have the same format as those filters they pass to the feed-forward banks down in Livingston, so he's not sure why they won't work.
I tested my script, FillFotonFilterMatrix.py, on some simple second order section filters (like a gain of 1, with b1 = 1.01, b2 = 0.02, a1 = 1.03, a2 = 1.04), and it populated the foton filter file correctly, and the result was parsed fine by Foton itself. So I'm going to claim the script is done and it's the fault of the filters we're trying to load. This script now lives in /cvs/cds/caltech/chans/, along with a name file called lsp_dof2pd_mtrx.txt which tells the script that DARM is 0, CARM is 1, etc. To run it, you also need an SOS.txt file with the filters to load, similar to the one Valera posted here, but preferably loadable.
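The number key above can also be kept in code form; the two dicts below mirror this entry's table, while the filter_name helper and its naming scheme are purely my own illustration (not the actual RCG/Foton convention):

```python
DOFS = {"DARM": 0, "MICH": 1, "PRC": 2, "SRC": 3, "CARM": 4}
PDS = {"AS_DC": 0, "AS11_I": 1, "AS11_Q": 2, "AS55_I": 3, "AS55_Q": 4,
       "POP_DC": 5, "POP11_I": 6, "POP11_Q": 7, "POP55_I": 8, "POP55_Q": 9,
       "REFL_DC": 10, "REFL11_I": 11, "REFL11_Q": 12, "REFL55_I": 13,
       "REFL55_Q": 14}

def filter_name(matrix, dof, pd):
    """Name for the filter module at matrix element (dof -> pd); numeric
    indices keep it under the 24-character limit mentioned above."""
    name = f"{matrix}_{DOFS[dof]}_{PDS[pd]}"
    if len(name) > 24:
        raise ValueError(f"{name!r} exceeds the 24-character limit")
    return name

filter_name("LSP_DOF2PD_MTRX", "DARM", "AS_DC")   # -> 'LSP_DOF2PD_MTRX_0_0'
```

Using the indices rather than part names is exactly the trade-off described above: 'LSP_DOF2PD_MTRX_4_14' fits in 24 characters where 'LSP_DOF2PD_MTRX_CARM_REFL55_Q' would not.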
I also updated my progress on the wiki, here. |
Attachment 1: FiltMatrixMedm.png
|
|
3080
|
Wed Jun 16 11:31:19 2010 |
josephb | Summary | Computers | Removed scaling fonts from medm on Allegra | Because it was driving me crazy while working on the new medm screens for the simulated plant, I went and removed the aliased font entries in /usr/share/X11/fonts/misc/fonts.alias that are associated with medm. Specifically I removed the lines starting with widgetDM_. I made a backup in the same directory called fonts.alias.bak with the old lines.
Medm now behaves the same on op440m, rosalba, and allegra - i.e. it can't find the widgetDM_ scalable fonts and defaults to a legible fixed font. |
3106
|
Wed Jun 23 15:15:53 2010 |
josephb | Summary | Computers | 40m computer security issue from last night and this morning | The following is not 100% accurate, but represents my understanding of the events currently. I'm trying to get a full description from Christian and will hopefully be able to update this information later today.
Last night around 7:30 pm, Caltech detected evidence of a computer virus located behind a Linksys router with a mac address matching our NAT router, at the IP 131.215.114.177. We did not initially recognize the mac address as the router's because the labeled mac address was off by a digit, so we were looking for another old router for a while. In addition, pings to 131.215.114.177 were not working from inside or outside of the martian network, but the router was clearly working.
However, about 5 minutes after Christian and Mike left, I found I could ping the address. When I placed the address into a web browser, the address brought us to the control interface for our NAT router (but only from the martian side, from the outside world it wasn't possible to reach it).
They turned on logging on the router (which had been off by default) and started monitoring the traffic for a short time. Some unusual IP addresses showed up, and Mike said something about an IP-spoofing warning coming up. Something about a file sharing port showing up was briefly mentioned as well.
The outside IP address was changed to 131.215.115.189 and dhcp which apparently was on, was turned off. The password was changed and is in the usual place we keep router passwords.
Update: Christian said Mike has written up a security report, and that he'll talk to him tomorrow and forward the relevant information to me. He notes there is possibly an infected laptop/workstation still at large. This could also be a personal laptop that was accidentally connected to the martian network. Since it was found to be set to dhcp, it's possible a laptop was connected to the wrong side and the user might not have realized this.
|
3107
|
Wed Jun 23 15:33:42 2010 |
josephb | Update | CDS | Daily Downs Update | I visited Downs and announced that I would keep showing up until all the 40m hardware is delivered.
I brought over 4 ADC boards and 5 DAC boards which slot into the IO chassis.
The DACs are General Standards Corporation, PMC66-16AO16-16-F0-OF, PCIe4-PMC-0 adapters.
The ADCs are General Standards Corporation, PMC66-16AI6455A-64-50M, PCIe4-PMC-0 adapters.
These new ones have been placed with the blue and gold adapter boards, under the table behind the 1Y4-1Y5 racks.
With the 1 ADC and 1 DAC we already have, we now have enough to populate the two ends and the SUS IO chassis. We have sufficient Binary Output boards for the entire 40m setup. I'm going back with a full itemized list of our current equipment, and will bring back the remainder of the ADC/DAC boards we're due. Apparently the ones which were bought for us are currently sitting in a test stand, so the ones I took today were from a different project, but they'll move the test stand ones to that project eventually.
I'm attempting to push them to finish testing the IO chassis and the remainder of those delivered as well.
I'd like to try setting up the SUS IO chassis and the related computer this week since we now have sufficient parts for it. I'd also like to move megatron to 1Y3, to free up space to place the correct computer and IO chassis where it's currently residing. |
3116
|
Thu Jun 24 16:59:24 2010 |
josephb | Update | VAC | Finished restoring the access connector and door | [Jenne, Kiwamu, Steve, Sharmila, Katherine, Joe]
We finished bolting the door on the new ITMX (old ITMY) and putting the access connector section back into place. We finished by torquing all the bolts to 40 foot-pounds. |
3119
|
Fri Jun 25 08:10:23 2010 |
josephb | Update | CDS | Daily Downs Update | Yesterday afternoon I went to downs and acquired the following materials:
2 100 ft long blue fibers, for use with the timing system. These need to be run from the timing switch in 1Y5/1Y6 area to the ends.
3 ADCs (PMC66-16AI6455A-64-50M) and 2 DACs (PMC66-16AO16-16-F0-OF), bringing our total of each to 8.
7 ADC adapter boards which go in the backs of the IO chassis, bringing our total for those (1 for each ADC) to 8.
There were no DAC adapter boards of the new style available. Jay asked Todd to build those in the next day or two (this was on Thursday), so hopefully by Monday we will have those.
Jay pointed out there are different styles of the blue and gold adapter boxes (for ADCs to DB44/37), for example. I'm re-examining the drawings of the system (although some drawings were never revised to the new system, so I'm trying to interpolate from the current system in some cases) to determine what adapter styles and numbers we need. In any case, those do not appear to have been finished yet (they are basically stuffed boards in a bag in Jay's office which need to be put into the actual boxes with face plates).
When I asked Rolf if I could take my remaining IO chassis, there was some back and forth between him and Jay about numbers they have and need for their test stands, and having some more built. He needs some, Jay needs some, and the 40m still needs 3. Some more are being built. Apparently when those are finished, I'll either get those, or the ones that were built for the 40m and are currently in test stands.
Edit:
Apparently Friday afternoon (when we were all at Journal Club), Todd dropped off the 7 DAC adapter boards, so we have a full set of those.
Things still needed:
1) 3 IO chassis (2 Dolphin style for the LSC and IO, and 1 more small style for the South end station (new X)). We already have the East end station (new Y) and SUS chassis.
2) 2 50+ meter Ethernet cables and a router for the DAQ system. The Ethernet cables are to go from the end stations to 1Y5-ish, where the DAQ router will be located.
3) I still need to finish understanding the old drawings to figure out what blue and gold adapter boxes are needed. At most 6 ADC and 3 DAC boxes are necessary, but it may be fewer, and the styles need to be determined.
4) 1 more computer for the South end station. If we're using Megatron as the new IO computer, then we're set on computers. If we're not using Megatron in the new CDS system, then we'll need an IO computer as well. The answer to this tends to depend on whether you ask Jay or Rolf.
|
|