ID | Date | Author | Type | Category | Subject
8108 | Tue Feb 19 12:02:00 2013 | Jamie | Update | IOO | IMC table levelling.
In order to address the issue of low MC1 OSEM voltages, Yuta and I looked at the IMC table levelling. Looking with the bubble level, Yuta confirmed that the table was indeed out of level in the direction that would cause MC1 to move closer to its cage, and therefore lower its OSEM voltages. Looking at the trends, it appears the table was not well levelled after the TT1 installation. We should have been more careful, and we should have checked the MC1/3 voltages after levelling.
Yuta moved weights around on the table to recover level with the bubble level. Unfortunately this did not bring us back to good MC1 voltages. We speculate that the table was maybe not perfectly level to begin with. We decided to try to recover the MC1 OSEM voltages, rather than go solely with the bubble level, since we believe that the MC suspensions should be a good reference. Yuta then moved weights around until we got the MC1/3 voltages back into an acceptable range. The voltages are still not perfect, but I believe that they're acceptable.
The result is that, according to the bubble level, the IMC table is low towards MC2. We are measuring spot positions now. If the spot positions look ok, then I think we can live with this amount of skew. Otherwise, we'll have to physically adjust the MC1 OSEMs.

|
8109 | Tue Feb 19 15:10:02 2013 | Jamie | Update | CDS | c1iscex alive again
c1iscex is back up. It is communicating with its IO chassis, and all of its models (c1x01, c1scx, c1spx) are running again.
The problem was that the IO chassis had no connection to the computer. The One Stop card in the IO chassis, which is the PCIe bridge between the front-end machine and the IO chassis, was showing four red lights instead of the dozen or so green lights that it usually shows. Upon closer inspection, the card appeared to be complaining that it had no connection to the host card in the front-end machine. Un-illuminated lights on the host card seemed to be pointing to the same thing.
There are two connector slots on the expansion card, presumably for a daisy chain situation. Looking at other IO chassis in the lab I determined that the cable from the front-end machine was plugged into the wrong slot in the One Stop card. wtf.
Did someone unplug the cable connecting c1iscex to its IO chassis, and then plug it back in to the wrong slot? A human must have done this. |
8128 | Thu Feb 21 14:32:02 2013 | Jamie | Update | CDS | c1iscex models restarted
Quote: |
c1iscex is dead again. Red lights, no "breathing" on the FE status screen.
|
The c1iscex machine itself wasn't dead; the models were just not running. Here are the last messages in dmesg:
[130432.926002] c1spx: ADC TIMEOUT 0 7060 20 7124
[130432.926002] c1scx: ADC TIMEOUT 0 7060 20 7124
[130433.941008] c1x01: timeout 0 1000000
[130433.941008] c1x01: exiting from fe_code()
I'm guessing the timing signal was lost, so the ADC stopped clocking. Since the ADC clock is the clock for everything, all the "fe" code (i.e. the models) aborted. Not sure what would have caused it.
I restarted all the models ("rtcds restart all") and everything came up fine. Obviously we should keep our eyes on things, and note whether anything strange is happening if this occurs again. |
8140 | Fri Feb 22 20:28:17 2013 | Jamie | Update | Computers | linux1 dead, then undead
At around 2:30pm today something brought down most of the martian network. All control room workstations, nodus, etc. were unresponsive. After poking around for a bit I finally figured it had to be linux1, which serves the NFS filesystem for all the important CDS stuff. linux1 was indeed completely unresponsive.
Looking closer I noticed that the Fibrenetix FX-606-U4 SCSI hardware RAID device connected to linux1 (see #1901), which holds the cds network filesystem, was showing "IDE Channel #4 Error Reading" on its little LCD display. I assumed this was the cause of the linux1 crash.
I hard shut down linux1, and powered off the Fibrenetix device. I pulled the disk from slot 4 and replaced it with one of the spares we had in the control room cabinets. I powered the device back up and it beeped for a while. Unfortunately the device requires a password to access it from the front panel, and I could find no manual for the device in the lab, nor does the manufacturer offer the manual on its web site.
Eventually I was able to get linux1 fully rebooted (after some fscks) and it seemed to mount the hardware RAID (as /dev/sdc1) fine. That brought the NFS back. I had to reboot nodus to recover it, but all the control room and front-end linux machines seemed to recover on their own (although the front-ends did need an mxstream restart).
The remaining problem is that the linux1 hardware RAID device is still currently inaccessible, and it's not clear to me that it has actually synced the new disk that I put in it. In other words I have very little confidence that we actually have an operational RAID for /opt/rtcds. I've contacted the LDAS guys (i.e. Dan Kozak) who are managing the 40m backup to confirm that the backup is legit. In the mean time I'm going to spec out some replacement disks onto which to copy /opt/rtcds, and also so that we can get rid of this old SCSI RAID thing. |
8159 | Mon Feb 25 20:04:22 2013 | Jamie | Update | SUS | suspension controller model modifications in prep for global damping initiative
[Jamie, Brett, Jenne]
We made some small modifications to the sus_single_control suspension controller library part to bring in/out the signals that Brett needs for his "global damping" work. We brought out the POS signal before the SUSPOS DOF filter, and we added a new GLOBPOS input to accommodate the global damping control signals. We added a new EPICS input to control a switch between local and global damping. It's all best seen in this detail from the model:

The POSOUT goto goes to an additional output. As you can see I did a bunch of cleanup to the spaghetti in this part of the model as well.
Since the part now has a new input and output, we had to modify the c1sus, c1scx, c1scy, and c1mcs models as well. I did a bunch of cleanup in those models too. The models have all been compiled and installed, but a restart is still needed. I'll do this first thing tomorrow morning.
All changes were committed to the userapps SVN, like they should always be. 
We still need to update the SUS MEDM screens to display these new signals, and add switches for the local/global switch. I'll do this tomorrow.
During the cleanup I found multiple broken links to the sus_single_control library part. This is not good. I assume that most of them were accidental, but we need to be careful when modifying things. If we break those links we could think we're updating controller models when in fact we're not.
The one exception I found was that the MC2 controller link was clearly broken on purpose, as the MC2 controller has additional stuff added to it ("STATE_ESTIMATE"):

I can find no elog that mentions the words "STATE" and "ESTIMATE". This is obviously very problematic. I'm assuming Den made these modifications, and I found this report: 7497, which mentions something about "state estimation" and MC2. I can't find any other record of these changes, or that the MC2 controller was broken from the library. This is complete mickey mouse bullshit. Shame shame shame. Don't ever make changes like this and not log it.
I'm going to let this sit for a day, but tomorrow I'm going to replace the MC2 controller with a proper link to the sus_single_control library part. This work was never logged, so it didn't happen as far as I'm concerned.
|
8168 | Tue Feb 26 10:17:44 2013 | Jamie | Update | SUS | removed global/local switch from sus_single_control
[jamie, brett]
Yesterday we added some new control logic to the sus_single_control part to allow for global damping. Today we decided that a binary switch between local/global damping was probably a bit extreme since we might want to smoothly ramp between them, instead of just hard switching. So we removed this switch and are now just summing the control inputs from global and local damping right before the output matrix.
Changes were committed to the SVN, and all suspension models were recompiled/installed/restarted.
|
8169 | Tue Feb 26 10:20:31 2013 | Jamie | Update | SUS | MC2 suspension controller reverted to library part
I made good on my threat from yesterday to convert the MC2 suspension controller to the library part. Whatever changes were in MC2 were thrown out, although they are archived in the SVN. Again, this kind of undocumented breaking is forbidden.
Change was committed to SVN, and c1mcs was recompiled/installed/restarted. |
8209 | Fri Mar 1 18:23:28 2013 | Jamie | Update | Computer Scripts / Programs | updated version of "getdata"
I updated the getdata script so that it can now handle downloading long stretches of data.
/opt/rtcds/caltech/c1/scripts/general/getdata
It now writes the data to disk incrementally while it's downloading from the server, so it doesn't fill up memory.
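The core of the change is just fetching the data in pieces and appending each piece to the output file as it arrives, rather than accumulating everything in memory first. Here's a rough Python sketch of that pattern (the fetch_chunk() function is a hypothetical stand-in for the NDS fetch, not the actual getdata code):

import numpy as np

def fetch_chunk(channel, gps_start, dur):
    # hypothetical stand-in for fetching 'dur' seconds of data from the NDS server
    return np.zeros(int(dur * 16384))

def download(channel, gps_start, total_dur, outfile, chunk=60):
    # grab total_dur seconds in chunk-second pieces, appending each to disk as it arrives
    with open(outfile, 'a') as f:          # append mode, as with --append
        for t in range(0, int(total_dur), chunk):
            data = fetch_chunk(channel, gps_start + t, min(chunk, total_dur - t))
            np.savetxt(f, data)            # write this piece, then let it go out of memory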
I also added a couple new options:
* --append allows for appending to existing data files
* --noplot suppresses plotting during download |
8223 | Mon Mar 4 18:11:10 2013 | Jamie | Update | SUS | Cleaning up suspension POS inputs
I did a little bit of cleanup of the suspension POS inputs today, both in model and MEDM land.
model
sus_single_control.mdl was simplified such that now there are just two position inputs:
- POS: with LSC filter
- ALTPOS: unfiltered
The regular LSC inputs go into POS, and any optic-specific extra pos inputs go into ALTPOS after being properly filtered and summed.
So for instance, MC2_MCL and MC2_ALS are filtered and summed then go into MC2 ALTPOS. The ETM ALS inputs go into ALTPOS.
I modified the GLOBAL DAMPING outputs so that they are filtered in the GLOBAL block and then summed before going into ALTPOS for {I,E}TM{X,Y}.
All suspension models were rebuilt/installed/restarted.
MEDM
The SUS_SINGLE.adl template screen was modified such that the POS button now points to optic-specific POS filter screens at:
/opt/rtcds/caltech/c1/medm/master/sus/SUS_$(OPTIC)_POS.adl
For MC1, MC3, PRM, BS, SRM these are links to SUS_SINGLE_POS.adl. The rest of the suspensions (MC2, {I,E}TM{X,Y}) now have custom screens that are variations of SUS_SINGLE_POS but with their extra filter screens added in. For instance, here is the new SUS_ETMX_POS.adl:

This gets rid of all the white screen crap that was in here before.
All of this has been committed to the SVN. NOTE: symlinks were heavily used when sorting this stuff out, so check for symlinks when modifying in the future. |
8243 | Wed Mar 6 18:27:24 2013 | Jamie | Update | General | now recording input TT channels to frames, but why no autoburt?
I spent some time trying to figure out how to get a record of the pointing of the input pointing tip-tilt (TT) channels.
Frames
Currently the TT pointing is done via the offset in the PIT/YAW filter banks, i.e. C1:IOO-TT1_PIT_OFFSET, which is an EPICS record. I added these channels to the C0EDCU.ini, which (I'm pretty sure) specifies which EPICS channels are recorded to frames.
controls@pianosa:~ 0$ grep C1:IOO-TT /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
[C1:IOO-TT1_PIT_OFFSET]
[C1:IOO-TT1_YAW_OFFSET]
[C1:IOO-TT2_PIT_OFFSET]
[C1:IOO-TT2_YAW_OFFSET]
controls@pianosa:~ 0$
I then confirmed that the data is being recorded:
controls@pianosa:~ 0$ FrChannels /frames/full/10466/C-R-1046657424-16.gwf | grep TT
C1:IOO-TT1_PIT_OFFSET 16
C1:IOO-TT1_YAW_OFFSET 16
C1:IOO-TT2_PIT_OFFSET 16
C1:IOO-TT2_YAW_OFFSET 16
controls@pianosa:~ 0$
BURT
The EPICS records for these channels *should* be recorded by autoburt, but Yuta noticed they were not:
controls@pianosa:~ 0$ grep -R C1:IOO-TT1_PIT_OFFSET /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2013/Mar/6/
controls@pianosa:~ 1$
The autoburt log seems to indicate some sort of connection problem:
controls@pianosa:! 130$ grep C1:IOO-TT1_PIT_OFFSET /opt/rtcds/caltech/c1/burt/autoburt/logs/c1assepics.log
pv >C1:IOO-TT1_PIT_OFFSET< nreq=-1
pv >C1:IOO-TT1_PIT_OFFSET.HSV< nreq=-1
pv >C1:IOO-TT1_PIT_OFFSET.LSV< nreq=-1
pv >C1:IOO-TT1_PIT_OFFSET.HIGH< nreq=-1
pv >C1:IOO-TT1_PIT_OFFSET.LOW< nreq=-1
C1:IOO-TT1_PIT_OFFSET ... ca_search_and_connect() ... OK
C1:IOO-TT1_PIT_OFFSET.HSV ... ca_search_and_connect() ... OK
C1:IOO-TT1_PIT_OFFSET.LSV ... ca_search_and_connect() ... OK
C1:IOO-TT1_PIT_OFFSET.HIGH ... ca_search_and_connect() ... OK
C1:IOO-TT1_PIT_OFFSET.LOW ... ca_search_and_connect() ... OK
C1:IOO-TT1_PIT_OFFSET ... not connected so no ca_array_get_callback()
C1:IOO-TT1_PIT_OFFSET.HSV ... not connected so no ca_array_get_callback()
C1:IOO-TT1_PIT_OFFSET.LSV ... not connected so no ca_array_get_callback()
C1:IOO-TT1_PIT_OFFSET.HIGH ... not connected so no ca_array_get_callback()
C1:IOO-TT1_PIT_OFFSET.LOW ... not connected so no ca_array_get_callback()
controls@pianosa:~ 0$
This is in contrast to successfully recorded channels:
controls@pianosa:~ 0$ grep C1:LSC-DARM_OFFSET /opt/rtcds/caltech/c1/burt/autoburt/logs/c1lscepics.log
pv >C1:LSC-DARM_OFFSET< nreq=-1
pv >C1:LSC-DARM_OFFSET.HSV< nreq=-1
pv >C1:LSC-DARM_OFFSET.LSV< nreq=-1
pv >C1:LSC-DARM_OFFSET.HIGH< nreq=-1
pv >C1:LSC-DARM_OFFSET.LOW< nreq=-1
C1:LSC-DARM_OFFSET ... ca_search_and_connect() ... OK
C1:LSC-DARM_OFFSET.HSV ... ca_search_and_connect() ... OK
C1:LSC-DARM_OFFSET.LSV ... ca_search_and_connect() ... OK
C1:LSC-DARM_OFFSET.HIGH ... ca_search_and_connect() ... OK
C1:LSC-DARM_OFFSET.LOW ... ca_search_and_connect() ... OK
C1:LSC-DARM_OFFSET ... ca_array_get_callback() nreq 1 ... OK
C1:LSC-DARM_OFFSET.HSV ... ca_array_get_callback() nreq 1 ... OK
C1:LSC-DARM_OFFSET.LSV ... ca_array_get_callback() nreq 1 ... OK
C1:LSC-DARM_OFFSET.HIGH ... ca_array_get_callback() nreq 1 ... OK
C1:LSC-DARM_OFFSET.LOW ... ca_array_get_callback() nreq 1 ... OK
controls@pianosa:~ 0$
In fact all the records in the c1assepics log are showing the same "not connected so no ca_array_get_callback()" error. I don't know what the issue is. I have no problem reading the values from the command line, with e.g. ezcaread. So I'm perplexed.
If anyone has any idea why the c1ass EPICS records would fail to autoburt, let me know. |
8245 | Wed Mar 6 20:21:34 2013 | Jamie | Update | General | Beatbox pulled from rack
I pulled the beatbox from the 1X2 rack so that I could try to hack in some output whitening filters. These are shamefully absent because of my mis-manufacturing of the power on the board.
Right now we're just using the MON output. The MON output buffer (U10) is the only chip in the output section that's stuffed:

The power problem is that all the AD829s were drawn with their power lines reversed. We fixed this by flipping the +15 and -15 power planes and not stuffing the differential output drivers (AD8672).
It's possible to hack in some resistors/capacitors around U10 to get us some filtering there. It's also possible to just stuff U9, which is where the whitening is supposed to be, then jump its output over to the MON output jack. That might be the cleanest solution, with the least amount of hacking on the board.
In any event, we really need to make a v2 of these boards ASAP. Before we do that, though, we need to figure out what we're going to do with the "disco comparator" stage back near the RF input. (There are also a bunch of other improvements that will be incorporated into v2). |
8278 | Tue Mar 12 12:06:22 2013 | Jamie | Update | Computers | FB recovered, RAID power supply #1 dead
The framebuilder RAID is back online. The disk had been mounted read-only (see below) so daqd couldn't write frames, which was in turn causing it to segfault immediately, so it was constantly restarting.
The jetstor RAID unit itself has a dead power supply. This is not fatal: the unit has three supplies precisely so that it can continue to function if one fails. I removed the bad supply and gave it to Steve so he can get a suitable replacement.
Some recovery had to be done on fb to get everything back up and running again. I ran into issues trying to do it on the fly, so I eventually just rebooted. It seemed to come back ok, except for something going on with daqd. It was reporting the following error upon restart:
[Tue Mar 12 11:43:54 2013] main profiler warning: 0 empty blocks in the buffer
It was spitting out this message about once a second, until eventually the daqd died. When it restarted it seemed to come back up fine. I'm not exactly clear what those messages were about, but I think it has something to do with not being able to dump its data buffers to disk. I'm guessing that this was a residual problem from the unmounted /frames, which somehow cleared on its own. Everything seems to be ok now.
Quote: |
Manasa just went inside to recenter the AS beam on the camera after our Yarm spot centering exercises of the evening, and heard a loud beeping. We determined that it is the RAID attached to the framebuilder, which holds all of our frame data that is beeping incessantly. The top center power switch on the back (there are FOUR power switches, and 3 power cables, btw. That's a lot) had a red light next to it, so I power cycled the box. After the box came back up, it started beeping again, with the same front panel message:
H/W monitor power #1 failed.
|
DO NOT DO THIS. This is what caused all the problems. The unit has three redundant power supplies, for just this reason. It was probably continuing to function fine. The beeping was just to tell you that there was something that needed attention. Rebooting the device does nothing to solve the problem. Rebooting in an attempt to silence beeping is not a solution. Shutting off the RAID unit is basically the equivalent of ripping out a mounted external USB drive. You can damage the filesystem that way. The disk was still functioning properly. As far as I understand it the only problem was the beeping, and there were no other issues. After you hard rebooted the device, fb lost its mounted disk and then went into emergency mode, which was to remount the disk read-only. It didn't understand what was going on, only that the disk seemed to disappear and then reappear. This was then what caused the problems. It was not the beeping, it was the restart of the RAID while it was mounted on fb.
Computers are not like regular pieces of hardware. You can't just yank the power on them. Worse yet is yanking the power on a device that is connected to a computer. DON'T DO THIS UNLESS YOU KNOW WHAT YOU'RE DOING. If the device is a disk drive, then doing this is a sure-fire way to damage data on disk.
|
8292 | Thu Mar 14 11:51:14 2013 | Jamie | Update | General | Beatbox upgraded with output whitening, reinstalled
Quote: |
I pulled the beatbox from the 1X2 rack so that I could try to hack in some output whitening filters. These are shamefully absent because of my mis-manufacturing of the power on the board.
Right now we're just using the MON output. The MON output buffer (U10) is the only chip in the output section that's stuffed:

The power problem is that all the AD829s were drawn with their power lines reversed. We fixed this by flipping the +15 and -15 power planes and not stuffing the differential output drivers (AD8672).
It's possible to hack in some resistors/capacitors around U10 to get us some filtering there. It's also possible to just stuff U9, which is where the whitening is supposed to be, then jump its output over to the MON output jack. That might be the cleanest solution, with the least amount of hacking on the board.
|
I modified the beatbox according to this plan. I stuffed the whitening filter stage (U9) as indicated in the schematic (I left out the C26 compensation cap which, according to the AD829 datasheet, is not actually needed for our application). I also didn't have any 301 ohm resistors so I stuffed R18 with 332 ohm, which I think should be fine.
Instead of messing with the working monitor output that we have in place, I stuffed the J5 SMA connector and wired the U9 output to it in a single-ended fashion (i.e. I grounded the shield pins of J5 to the board since we're not driving it differentially). I then connected J5 to the I/Q MON outputs on the front panel. If there's a problem we can just rewire those back to the J4 MON outputs and recover exactly where we were last week.
It all checks out: 0 dB of gain at DC, 1 Hz zero, 10 Hz pole, with 20 dB of gain at high frequencies.
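The intended shape (0 dB at DC, a zero at 1 Hz, a pole at 10 Hz, +20 dB at high frequency) is easy to sanity-check in Python; this is just a model of the design response, not a measurement of the board:

import numpy as np
from scipy import signal

# whitening stage: zero at 1 Hz, pole at 10 Hz, unity gain at DC
z, p, k = [-2 * np.pi * 1.0], [-2 * np.pi * 10.0], 10.0   # k = 10 gives 0 dB at DC, +20 dB at high f
b, a = signal.zpk2tf(z, p, k)
f = np.logspace(-2, 3, 500)                               # 10 mHz to 1 kHz
_, h = signal.freqs(b, a, worN=2 * np.pi * f)
print("gain at 10 mHz: %5.1f dB" % (20 * np.log10(abs(h[0]))))
print("gain at 1 kHz:  %5.1f dB" % (20 * np.log10(abs(h[-1]))))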
I installed it back in the rack, and reconnected X/Y ARM ALS beatnote inputs and the delay lines. The I/Q outputs are now connected directly to the DAQ without going through any SR560s (so we recover four SR560s). |
8316 | Wed Mar 20 14:44:47 2013 | Jamie | Update | Optics | updated calculations of PRC/SRC g-factors and ARM mode matching
Below are new alamode calculations of the PRC and SRC g-factors and arm mode matchings. These include fixes to the ABCD matrices for the flipped folding mirrors that properly (hopefully) take into account the focusing effect of passing through the optic substrates.
I've used nominal curvatures of -600 m for the G&H PR2/SR2 optics, and -700 m for the Laseroptik PR3/SR3 dichroics.
An interesting and slightly disappointing note is that it looks like it actually would have been better to flip PR3 instead of PR2, although the difference isn't too big. We should consider flipping SR3 instead of SR2 when dealing with the SRC. I'll take responsibility for messing up the calculation for the flipped TTs.
PRC
PR2 Reff      | PR3 Reff      | PRC g-factor (t/s) | ARM mode matching
Inf           | Inf           | .94/.94            | .999
-600          | -700          | .99/.98            | .84
413 (flipped) | -700          | .94/.93            | .999
-600          | 409 (flipped) | .92/.94            | .999
413 (flipped) | 409 (flipped) | .87/.89            | .97
SRC
SR2 Reff      | SR3 Reff      | SRC g-factor (t/s) | ARM mode matching
Inf           | Inf           | .96/.96            | .999
-600          | -700          | NA                 | NA
413 (flipped) | -700          | .96/.95            | .998
-600          | 391 (flipped) | .94/.96            | .996
413 (flipped) | 391 (flipped) | .90/.92            | .96
RXA: maybe nominal, but we don't actually have measurements of the installed optics' curvatures, so there could be ~10-15% errors in the RoC. Which translates into a 1-2% error in the g-factor. |
8328 | Thu Mar 21 13:50:08 2013 | Jamie | Update | Locking | incident angles
Is there a reason to use non-45 degree incident angles on the steering mirrors between the laser and the PD? I would always use 45 degree incident angles unless there is a really good reason not to. |
8335 | Mon Mar 25 11:42:45 2013 | Jamie | Update | Computers | c1lsc mx_stream ok
I'm not exactly sure what the problem was here, but I think it had to do with a stuck mx_stream process that wasn't being killed properly. I manually killed the process and it seemed to come up fine after that. The regular restart mechanisms should work now.
No idea what caused the process to hang in the first place, although I know the newer RCG (2.6) is supposed to address some of these mx_stream issues. |
8352 | Tue Mar 26 11:33:55 2013 | Jamie | Update | Optics | HOWTO calculate effective RoC of flipped TT
In case anyone is curious how I got the numbers for the effective radius of curvature of the flipped TT mirrors, I include the code below. Now you can calculate at home!
Here's the calculation for the effective RoC of a flipped SR2 with nominal un-flipped HR RoC of -600:
>> [Mt, Ms] = TTflipped(600, 5);
>> M2Reff('t', Mt, 5)
ans =
412.9652
>>
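If you don't have the alamode functions handy, here is a rough Python version of the same idea, using reduced-angle ABCD matrices and assuming a fused-silica substrate with index ~1.45 and ~1 cm thickness (both assumptions); it just shows why the answer lands near R/n:

import numpy as np

n, d, R = 1.45, 0.01, 600.0      # assumed index, assumed substrate thickness [m], nominal HR RoC [m]

# reduced-angle convention (ray = [y, n*theta]): the flat AR face contributes nothing
prop = np.array([[1.0, d / n], [0.0, 1.0]])         # propagate through the substrate
hr   = np.array([[1.0, 0.0], [-2.0 * n / R, 1.0]])  # reflect off the curved HR surface from inside
M    = prop @ hr @ prop                             # AR in -> HR -> AR out

Reff = -2.0 / M[1, 0]    # match the C element of an equivalent simple mirror [[1,0],[-2/Reff,1]]
print("effective RoC of the flipped optic: %.1f m" % Reff)   # ~413.8 m, close to the ~413 m above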
|
8355 | Tue Mar 26 16:10:31 2013 | Jamie | Update | 40m Upgrading | ETMY table leveling
Steve's suggestion for how to level the end table using "swivel leveling mounts":

|
8374 | Fri Mar 29 17:24:43 2013 | Jamie | Update | Computers | FB RAID power supply replaced
Steve ordered a replacement for the FB JetStor power supply that failed a couple of weeks ago. I just installed it and it looks fine. |
8383 | Mon Apr 1 16:24:09 2013 | Jamie | Frogs | LSC | PD whitening switching fixed (loose connection at break-out box)
Quote: |
We discovered that the analog whitening filter of the REFL55_I board is not switching when we operate the button on the user interface. We checked with the Stanford analyzer that the transfer function always corresponds to the whitening being on.
|
This turned out to just be a loose connection of the ribbon cable from the Contec board in the LSC IO chassis at the BIO break-out box. The DSUB connector at the break-out box was not strain relieved! I reseated the connector and strain relieved it, and now everything is switching fine.


I wonder if we'll ever learn to strain relieve... |
8400 | Wed Apr 3 14:45:34 2013 | Jamie | Update | Computers | updated EPICS database (channels selected for saving)
Quote: |
I modified /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini to include the C1:LSC-DegreeOfFreedom_TRIG_MON channels. These are the same channel that cause the LSC screen trigger indicators to light up.
I vaguely followed Koji's directions in elog 5991, although I didn't add new grecords, since these channels are already included in the .db file as a result of EpicsOut blocks in the simulink model. So really, I only did Step 2. I still need to restart the framebuilder, but locking (attempt at locking) is happening.
The idea here is that we should be able to search through this channel, and when we get a trigger, we can go back and plot useful signals (PDs, error signals, control signals, ...), and try to figure out why we're losing lock.
Rana tells me that this is similar to an old LockAcq script that would run DTT and get data.
EDIT: I restarted the daqd on the fb, and I now see the channel in dataviewer, but I can only get live data, no past data, even though it says that it is (16,float). Here's what Dataviewer is telling me:
Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
read(); errno=0
LONG: DataRead = -1
No data found
read(); errno=9
read(); errno=9
T0=13-03-29-08-59-43; Length=432010 (s)
No data output.
|
I seem to be able to retrieve these channels ok from the past:
controls@pianosa:/opt/rtcds/caltech/c1/scripts 0$ tconvert 1049050000
Apr 03 2013 18:46:24 UTC
controls@pianosa:/opt/rtcds/caltech/c1/scripts 0$ ./general/getdata -s 1049050000 -d 10 --noplot C1:LSC-PRCL_TRIG_MON
Connecting to server fb:8088 ...
nds_logging_init: Entrynds_logging_init: Exit
fetching... 1049050000.0
Hit any key to exit:
controls@pianosa:/opt/rtcds/caltech/c1/scripts 0$
Maybe DTT just needed to be reloaded/restarted? |
8402 | Wed Apr 3 15:00:24 2013 | Jamie | Summary | Electronics | Sorensen supplies in LSC rack (1Y2)
I investigated the situation of the two Sorensen supplies in the LSC rack (1Y2). They are there solely to supply power to the LSC LO RF distribution box. One is +18 V and the other is +28 V. All we need to do is make a new longer cable with the appropriate plug on one end (see below), long enough to go from the bottom of the 1Y3 rack to the top of 1Y2, and we could move them over quickly. Some sort of non-standard circular socket connector is used on the distribution box:

It could probably use thicker wire as well.
If someone else makes the cable I'll move everything over. |
8404 | Wed Apr 3 17:40:18 2013 | Jamie | Configuration | Electronics | putting together a 110 MHz LSC demod board
I started to look into putting together a 110 MHz demod board to be used as POP110 (see #8399).
We have five spare old-skool EuroCard demod boards (LIGO-D990511). From what I gather (see #4538, #4708) there are two modifications we do to these boards to make them ready for prime time:
- appropriate LP filter at PD RF input (U5 -> MC SCLF-*)
- swap out T1 transformer network with a commercial phase shifting power splitter (MC PQW/PSCQ)
#4538 also describes some other modifications but I'm not sure if those were actually implemented or not:
- removal of the attenuator/DC block/ERA-5 amp sections at the I/Q outputs
- swap ERA-5 amp with "Cougar"(?) amp at LO input.
What we'll need for a 110 demod:
I'll scrounge or order. |
8407 | Wed Apr 3 18:41:22 2013 | Jamie | Configuration | Electronics | putting together a 110 MHz LSC demod board
This SCPQ-150+, which is surface mount, might also work in place of the PSCQ-2-120, which is through-mount. Would need to be reconciled with the board layout. |
8415 | Thu Apr 4 14:37:15 2013 | Jamie | Configuration | Electronics | putting together a 110 MHz LSC demod board
I'm having Steve order the following:
2x SXBP-100+
2x SCLF-135+
2x PSCQ-2-120+
If you want him to add anything to the order let him know ASAP. |
8431 | Tue Apr 9 14:55:13 2013 | Jamie | Update | CDS | overbooked test points cause of DAQ problems
Folks were complaining that they were getting zeros whenever they tried to open fast channels in DTT or Dataviewer. It turned out that the problem was that all available test points were in use in the c1lsc model:

There is a limit to how many test points can be open to a single model (in point of fact I think the limit is on the data rate from the model to the frame builder, not the actual number of open test points). In any event, they were all used up. The grid at the bottom right of the C1LSC GDS screen was all full of non-zeros, and the FE TRATE number was red, indicating that the data rate from this model had surpassed the threshold.
The result of this overbooking is that any new test points just return zeros. This is a pretty dumb failure mode (ideally one would not be able to request the TP at all, and would get an appropriate error message), but it is what it is. This usually means that there are too many dtt/dataviewers left with open connections.
We tried killing all the open processes that we could find that might be holding open test points, but that didn't seem to clear them up. Stuck open test points is another known problem. Referencing the solution in #6968 I opened the diag shell and killed all test points everywhere:
controls@pianosa:~ 0$ diag -l -z
Set new test FFT
NDS version = 12
supported capabilities: testing testpoints awg
diag> tp clear * *
test point cleared
diag> quit
EXIT KERNEL
controls@pianosa:~ 0$
|
8466 | Fri Apr 19 15:19:25 2013 | Jamie | Update | PEM | Trilliums moved from bench to concrete
I moved the two Trillium seismometers that Den left on the electronics bench out onto the new concrete blocks in the lab that will be their final resting places. I moved one onto the slab at the vertex and the other to the slab at the Y end. I left them both locked and just sitting on the concrete.
I moved the pile of readout electronics that was sitting next to them onto the yellow foam box halfway down the MC tube, between the MC tube and the X arm tube. This is obviously not a good place to store them, but I couldn't think of a better place to put them for the moment. |
8524 | Thu May 2 19:59:34 2013 | Jamie | Update | Computer Scripts / Programs | lookback: new program to look at recent past testpoint data
To aid in lock-loss studies, I made a new program called 'lookback', similar to 'getdata', to look at past data.
When called with channel name arguments, it runs continuously, storing all channel data in a ring buffer. When the user hits Ctrl-C, all the data in the ring buffer is displayed. There is an option to store the data in the ring buffer to disk as well.
controls@rosalba:/opt/rtcds/caltech/c1/scripts/general 0$ ./lookback -h
usage: lookback [-h] [-l LENGTH] [-o OUTDIR] channel [channel ...]
Lookback on testpoint data. The specified amount of data is stored in a ring
buffer. When Ctrl-C is hit, all data in the ring buffer is plotted. Both 'DQ'
and 'online' test point data is available. Use NDSSERVER environment variable
to specify host:port.
positional arguments:
channel Acquisition channel. Multiple channels may be
specified and acquired at once.
optional arguments:
-h, --help show this help message and exit
-l LENGTH, --lookback LENGTH
Lookback time in seconds. This amount of data will be
stored in a ring buffer, and plotted on Ctrl-C.
Default is 10 seconds
-o OUTDIR, --outdir OUTDIR
Output directory to write data (will be created if it
doesn't exist). Data from each channel stored as
'<channel>.txt'. Any existing data files will be
overwritten.
controls@rosalba:/opt/rtcds/caltech/c1/scripts/general 0$
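The ring-buffer core of a tool like this is simple. Here is a toy sketch (not the actual lookback code), with a hypothetical get_block() standing in for the online NDS fetch:

from collections import deque
import numpy as np

def get_block(channel):
    # hypothetical stand-in: return the next 1-second block of online data for 'channel'
    return np.random.randn(16384)

def lookback(channel, length=10):
    buf = deque(maxlen=length)          # ring buffer: the oldest block falls off the far end
    try:
        while True:
            buf.append(get_block(channel))
    except KeyboardInterrupt:           # Ctrl-C: dump whatever is currently in the buffer
        data = np.concatenate(buf)
        np.savetxt(channel.replace(':', '_') + '.txt', data)
        print("saved last %d s of %s" % (len(buf), channel))

lookback('C1:TST-EXAMPLE_CHANNEL')      # made-up channel name, for illustration only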
|
8530 | Mon May 6 19:04:30 2013 | Jamie | Update | IOO | mode cleaner not locking
About 30 minutes ago the mode cleaner fell out of lock and has since not been able to hold lock for more than a couple seconds.
I'm not sure what happened. I was in the middle of taking measurements of the MC error point spectrum, which included adjusting the FAST gain. I've put all the gains back to their nominal levels but no luck. I'm not sure what else could have gone wrong. Seismic noise looks relatively quiet. |
8540 | Tue May 7 17:43:51 2013 | Jamie | Update | Computers | 40MARS wireless network problems
I'm not sure what's going on today but we're seeing ~80% packet loss on the 40MARS wireless network. This is obviously causing big problems for all of our wirelessly connected machines. The wired network seems to be fine.
I've tried power cycling the wireless router but it didn't seem to help. Not sure what's going on, or how it got this way. Investigating... |
8541 | Tue May 7 18:16:37 2013 | Jamie | Update | Computers | 40MARS wireless network problems
Here's an example of the total horribleness of what's happening right now:
controls@rossa:~ 0$ ping 192.168.113.222
PING 192.168.113.222 (192.168.113.222) 56(84) bytes of data.
From 192.168.113.215 icmp_seq=2 Destination Host Unreachable
From 192.168.113.215 icmp_seq=3 Destination Host Unreachable
From 192.168.113.215 icmp_seq=4 Destination Host Unreachable
From 192.168.113.215 icmp_seq=5 Destination Host Unreachable
From 192.168.113.215 icmp_seq=6 Destination Host Unreachable
From 192.168.113.215 icmp_seq=7 Destination Host Unreachable
From 192.168.113.215 icmp_seq=9 Destination Host Unreachable
From 192.168.113.215 icmp_seq=10 Destination Host Unreachable
From 192.168.113.215 icmp_seq=11 Destination Host Unreachable
64 bytes from 192.168.113.222: icmp_seq=12 ttl=64 time=10341 ms
64 bytes from 192.168.113.222: icmp_seq=13 ttl=64 time=10335 ms
^C
--- 192.168.113.222 ping statistics ---
35 packets transmitted, 2 received, +9 errors, 94% packet loss, time 34021ms
rtt min/avg/max/mdev = 10335.309/10338.322/10341.336/4.406 ms, pipe 11
controls@rossa:~ 0$
Note the 10 SECOND round trip time and 94% packet loss. That's just beyond stupid. I have no idea what's going on. |
8542 | Tue May 7 18:42:20 2013 | Jamie | Update | PSL | PMC not locking
I'm just now realizing that the PMC has also not been locked since noon today, and doesn't seem to be responding to anything right now.
wtf is going on here? |
8548 | Wed May 8 16:10:09 2013 | Jamie | Update | CDS | Unknown DAQ channels in c1sus c1x02 IOP?
Someone for some reason added full-rate DAQ specification to some ADC3 channels in the c1sus IOP model (c1x02):
#DAQ Channels
TP_CH15 65536
TP_CH16 65536
TP_CH17 65536
TP_CH18 65536
TP_CH19 65536
TP_CH20 65536
TP_CH21 65536
These appear to be associated with c1pem, so I'm guessing it was Den (particularly since he's the worst about making modifications to models and not telling anyone or logging or svn committing).
I'm removing them. |
8549 | Wed May 8 17:03:35 2013 | Jamie | Configuration | CDS | make direct IPC connections between c1lsc and c1sus/c1mcs
Previously, for some reason, many IPC connections were routed through the c1rfm model, even if a direct IPC connection was possible. It's unclear why this was done. I spoke to Joe B. about it and he couldn't remember either. Best guess is that it was just for bookkeeping purposes. Or maybe some old timing issue that has been fixed by DMA fixes in the RTS. So the point is that it's no longer needed, and we can reduce delays by making direct connections.
I made direct IPC connections from c1lsc to both c1sus and c1mcs, bypassing the c1rfm, through which they had previously been routed. All models were rebuilt/installed/restarted and everything seems to be working fine. |
8550 | Wed May 8 17:23:04 2013 | Jamie | Configuration | CDS | fixed direct IPC connection between c1als and c1mcs
As with the previous post, I eliminated an unnecessary hop through c1rfm for the c1als --> c1mcs connection for the ALS output to MC2 POS.
As a side note, we might consider piping the ALS signals into the LSC input matrix, elevating them to actual LSC error signals, which in some sense they are. It's just that right now we're routing them directly to the actuators without going through the full LSC control. |
8551 | Wed May 8 17:45:49 2013 | Jamie | Configuration | CDS | More bypassing c1rfm for c1mcs --> c1ioo IPCs
As with the last two posts, I eliminated more unnecessary passing through c1rfm for IPC connections between c1mcs and c1ioo.
All models were rebuilt/installed/restarted and svn committed. Everything is working and we have eliminated almost all IPC errors and significantly simplified things. |
8553 | Wed May 8 19:31:17 2013 | Jamie | Configuration | LSC | LSC: added new SQRT_SWITCH to power normalization DOF outputs
This removes the old sqrt'ing from the inputs to the POW_NORM matrix (which was only on the POP110 I/Q) and moves it to the DOF outputs. Koji wanted this so that he could use the DC signals for normalization both sqrt'd and not sqrt'd.
The POW_NORM medm screen was updated accordingly. |
8575 | Tue May 14 20:30:29 2013 | Jamie | Summary | IOO | MC error spectrum at various FSS gain settings.
I used the Agilent 4395A and the GPIB network bridge to measure the MC error spectrum at the MC servo board.
I looked at various settings of the FSS Common and FAST gains.
Here is the spectrum of various Common gain settings, with a fixed FAST setting of 23.5:

The peak at 34k is smallest at the largest Common gain setting of 13.0 (probably expected). The other higher-frequency peaks, however, are larger there, such as the ones at 24.7k, 29.6k, 34.5k, etc.:

Here's a blow up of the peak at 1.06M, which peaks at about 9dB of common gain:

Here's the spectrum with a fixed Common gain of 10.5, and various FAST gains:

and here's a zoom around that 1.06 MHz peak, which is smallest at a FAST gain of 23.5 dB:

I'm not sure yet what this points to as the best gain settings. We can of course explore more of the space. I'm going to leave it at 13/23.5, which leaves the PC RMS at ~1.5 and the FAST Monitor at ~6.0.
If this does turn out to be a good setting we'll need to adjust some of the alarm levels.
Various settings:
MCS
in1 gain: 15
offset: 1.174
boost enabled
super boost: 2
VCO gain: 25
FSS:
input offset: -0.8537
slow actuator: 0.6304
I include the python scripts I used to remotely control the AG4395 to take the measurements, and make the plots.
PS: I made some changes/improvements to the netgpib stuff that I'll cleanup and commit tomorrow.
|
8580 | Wed May 15 17:17:05 2013 | Jamie | Summary | CDS | Accounting of ADC/DAC channel availability
We need ADC and DAC channels for a couple of things:
- POP QPD: 3x ADC
- ALS PZTs: 3 x 2 x 2 = 12 DAC channels (three pairs of PZTs, at the ends and the vertex, each PZT with two channels for pitch and yaw)
- Fibox: 1x DAC
What's being used:
- c1iscex/c1iscey:
- DAC_0: 7/16 = 9 free
- ADC_0: 17/32 = 15 free
- c1sus:
- c1ioo
- DAC_0: 0/16 = 16 free ?? This one is weird. DAC in IO chassis, half its channels connected to the cross connect (going ???), but no model links to it
- ADC_0: 23/32 = 9 free
- ADC_1: 8/32 = 24 free
- c1lsc
- DAC_0: 16/16 = 0 free
- ADC_0: 32/32 = 0 free
What this means:
- We definitely have enough DACs for the ALS PZTs. The free channels are also in the right places: at the end stations and in the c1ioo FE, which is close to the PSL and hosts the c1als controller.
- We appear to have enough ADCs for the QPD in c1ioo.
- We don't have any available DAC outputs in c1lsc for the Fibox. If we can move the Fibox to the IOO racks (1X1, 1X2) then we could send LSC channels to c1ioo and use c1ioo's extra DAC channels.
Of course we'll have to investigate the AA/AI situation as well. I'll try to assess that in a follow-up post.
PS: this helps to identify used ADC channels in models:
grep adc_ sus/c1/models/c1scx.mdl | grep Name | awk '{print $2}' | sort | uniq
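A slightly more structured version of the same tally in Python (the glob path is hypothetical; it just counts the unique adc_<card>_<chan> names per model the way the grep does):

import re, glob
from collections import defaultdict

# hypothetical model location -- point this at the real userapps model directories
for mdl in sorted(glob.glob('/opt/rtcds/userapps/release/*/c1/models/c1*.mdl')):
    cards = defaultdict(set)
    with open(mdl) as f:
        for card, chan in re.findall(r'adc_(\d+)_(\d+)', f.read()):
            cards[card].add(int(chan))
    for card in sorted(cards):
        used = len(cards[card])
        print('%-10s ADC_%s: %2d/32 used, %2d free' % (mdl.split('/')[-1], card, used, 32 - used))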
|
8581 | Wed May 15 17:38:49 2013 | Jamie | Summary | CDS | AA/AI requirements
Quote: |
What this means:
- We definitely have enough DACs for the ALS PZTs. The free channels are also in the right places: at the end stations and in the c1ioo FE, which is close to the PSL and hosts the c1als controller.
- We appear to have enough ADCs for the QPD in c1ioo.
- We don't have any available DAC outputs in c1lsc for the Fibox. If we can move the Fibox to the IOO racks (1X1, 1X2) then we could send LSC channels to c1ioo and use c1ioo's extra DAC channels.
Of course we'll have to investigate the AA/AI situation as well. I'll try to assess that in a follow-up post.
|
It looks like we have spare channels in the AA chassis for the existing c1ioo ADC inputs to accommodate the POP QPD. 
We need AI interfaces for the ALS PZTs. What we ideally need is 3x D000186, which are the eurocard AI boards that have the flat IDC input connectors that can come straight from the DAC break-out interfaces. I'm not finding any among the spares on the spare electronics shelves, though. If we can't find any we'll have to make our own AI interfaces. |
8582 | Wed May 15 17:48:25 2013 | Jamie | Update | CDS | misc problems noticed in models
I noticed a couple potential issues in some of the models while I was investigating the ADC/DAC situation:
c1ioo links to ADC1, but there are broken links to the bus selector that is supposed to be pulling out channels to go into the PSL block. They're pulling channels from ADC0, which it's not connected to, which means these connections are broken. I don't know if this means the current situation is broken, or if the model was changed but not recompiled, or what. But it needs to be fixed.
c1scy connects ADC_0_11, labelled "ALS_PZT", to an EpicsOutput called "ALS_LASER_TEMP", which means the exposed channel is called "C1:SCY-ALS_LASER_TEMP". This is almost certainly not what we want. I don't know why it was done this way, but it probably needs to be fixed. If we need an EPICS record for this channel it should come from the ALS library part, so it gets the correct name and is available from both ends. |
8585 | Wed May 15 22:47:11 2013 | Jamie | Summary | CDS | Accounting of ADC/DAC channel availability
Quote: |
- What are we using 16 DAC channels for in the LSC?
|
For the new input and output tip-tilts. Two input, two output, each requires four channels.
Quote: |
- What are the functions of those IOO DAC channels which go to cross-connects? If they're not properly sending, then we may have malfunctioning MC or MCWFS.
|
I have no idea. I don't know what the hardware is, or is supposed to be, connected to. DAC for WFS?? Were there at some point supposed to be fast output channels in the PSL?
Quote: |
- Can we just use the SLOW DAC (4116) for the ALS PZTs? We used this for a long time for the input steering and it was OK (but not perfect).
|
Probably. I'm not as familiar with that system. I don't know what the availability of hardware channels is there. I'll investigate tomorrow. |
8608 | Tue May 21 18:18:28 2013 | Jamie | Update | Computer Scripts / Programs | netGPIB stuff update/modernized/cleanedup/improved
I did a bunch of cleanup work on the netGPIB stuff:
- Removed extensions from all executable scripts (executables should not have language extensions)
- fixed execution permissions on executables and modules
- committed HP8590.py and HP3563A.py instrument modules, which were there but not included in the svn
- committed NWAG4395A (was AG4395A_Run.py) to svn, and removed old "custom" copies (bad people!)
- cleaned up, modernized, and fixed the netgpibdata program
- removed plotting from netgpibdata, since it was only available for one instrument, there's already a separate program to handle it, and it's just plotting the saved data anyway
- added a netgpibcmd program for sending basic commands to instruments
- added a README
Probably the most noticeable change is removing the extensions from the executables. There seems to be this bad habit around here of adding extensions to executables. It doesn't matter to the person running the program what language it was written in, so don't add extensions. It only matters for libraries. |
8613 | Wed May 22 11:09:33 2013 | Jamie | Summary | CDS | Weird DAC bit flipping at half integer output values
After querying CDS folks about this issue, I got some responses indicating that the problem was likely limit-cycle oscillations due to zero-padding of the data when upsampling. Tobin ran some Matlab tests to confirm this issue.
Starting in RCG 2.5 there is a new "no_zero_pad=1" cdsParameters option that turns zero padding OFF. I tried enabling this option in c1scy to see how the behavior changed. Sure enough, the 32 kHz oscillations mostly went away. There are no oscillations for outputs held at the half-count value, and the oscillations around the half-count transitions went away as well.
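Here's a toy numpy illustration of the mechanism (not the RCG code: the x4 upsampling, the Butterworth stand-in for the AI filter, and the count value are all assumptions). Zero-padding a constant half-count output leaves a residual image after the AI filter, so the rounded DAC output dithers between adjacent counts forever; the sample-held version settles to a flat value:

import numpy as np
from scipy import signal

up, n = 4, 4096                      # assumed x4 upsampling to the DAC rate
c = 100.5                            # constant model output sitting at a half count
x = np.full(n, c)

x_zp = np.zeros(n * up); x_zp[::up] = up * x    # zero-pad upsampling ('up' factor keeps the DC level)
x_sh = np.repeat(x, up)                         # sample-and-hold upsampling

b, a = signal.butter(4, 0.1)                    # stand-in anti-image lowpass (NOT the real AI filter)
y_zp = signal.lfilter(b, a, x_zp)
y_sh = signal.lfilter(b, a, x_sh)

# steady state: the zero-padded version keeps a fraction-of-a-count ripple around 100.5, which is
# enough to flip the rounded DAC value back and forth; the held version is flat to numerical precision
print("zero-pad residual ripple (counts, p-p): %.3g" % np.ptp(y_zp[-256:]))
print("hold     residual ripple (counts, p-p): %.3g" % np.ptp(y_sh[-256:]))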
The only thing I could see is a bit of oscillation when converging on a constant half-count value that went away after a couple of milliseconds:

So we might consider adding the no_zero_pad=1 option to all of our coil driver outputs, which might eliminate the need to add notches at the Nyquist frequency in the analog anti-image filters. |
8615 | Wed May 22 11:35:06 2013 | Jamie | Summary | CDS | Weird DAC bit flipping at half integer output values
Quote: |
Is this limit cycle caused by the residual of the digital AI filtering at the half sampling freq, and that hits the threshold? Or is this some nonlinear effect? If this is a linear effect associated with the zero-padding, the absolute value of the DC may affect the amplitude of the oscillation. (Or equivalently the range of the DC where we get this oscillation.)
|
This is a good question. We may be able to test if it's a linear effect if we have enough DAC range to get the oscillation to be more than a single sample.
Quote: |
You pointed out the ringdown of the digital AI filter in the sample-hold case (i.e. no-zero-padding case).
How does it look in the conventional zero-padding case?
|
In the zero-pad case the oscillation just continues indefinitely at the half-count value, so it never dies out (at least as far as I can tell). |
8617 | Wed May 22 15:48:56 2013 | Jamie | Update | SUS | Turn off zero padding in DAC outputs
After the results of the analysis in the #8598 thread, I have added the "no_zero_pad=1" flag to the cdsParameters block of all SUS models:
The upsampling to the 64 kHz DAC output will now be done with sample-holds, instead of zero-pads. This should reduce the 32 kHz lines we were noticing in the analog DAC output.
I note, though, that Brian Lantz points out that this might actually introduce a delay of about a half sample. We will continue to investigate.
In any event, I have rebuilt and installed all models listed above. I will restart as soon as opportunity allows. |
8625 | Thu May 23 10:20:33 2013 | Jamie | Update | SUS | Turn off zero padding in DAC outputs
Quote: |
After the results of the analysis in the #8598 thread, I have added the "no_zero_pad=1" flag to the cdsParameters block of all SUS models:
The upsampling to the 64 kHz DAC output will now be done with sample-holds, instead of zero-pads. This should reduce the 32 kHz lines we were noticing in the analog DAC output.
I note, though, that Brian Lantz points out that this might actually introduce a delay of about a half sample. We will continue to investigate.
In any event, I have rebuilt and installed all models listed above. I will restart as soon as opportunity allows.
|
I have restarted all the suspension models with the new no_zero_pad flag for the DAC upsampling. Everything came up fine and all optics are damped as expected (except for concerns about c1scy which I'll note in a followup). |
8626 | Thu May 23 10:24:23 2013 | Jamie | Summary | CDS | c1scy model continues to run at the hairy edge
c1scy, the controller model at the Y END, is still running very long, typically at 55/60 microseconds, or ~92% of its cycle. It's currently showing a recorded max cycle time (since the last restart or reset) of 60, which means that it has actually hit its limit sometime in the very recent past. This is obviously not good, since it's going to inject big glitches into ETMY.
c1scy is actually running a lot less code than c1scx, yet c1scx caps out its load at about 46 us. This indicates to me that it must be some hardware configuration setting in the c1iscey computer.
I'll try to look into this more as soon as I can. |
8654 | Thu May 30 10:40:59 2013 | Jamie | Configuration | CDS | Attempt to cleanup c1ioo ADC connections
I have attempted to reconcile all of the ADC connections to c1ioo. Upon close inspection, it appears that there was a lot of legacy stuff hanging around. Either that or things have not been properly connected.
The c1ioo front end machine has two ADC cards, ADC0 and ADC1, which are used by two models, c1ioo and c1als. The CURRENT ADC connections are listed in the table below; the connections that were moved and the connections that were removed/unplugged are described after it:
ADC0
channel block | connection                           | channel | usage   | model
8-15          | MC WFS1 interface                    |         | MC WFS1 | c1ioo
16-23         | MC WFS2 interface                    |         | MC WFS2 | c1ioo
0-7           | generic interface card (2 pin lemo)  | 0       |         |
              |                                      | 1       |         |
              |                                      | 2       |         |
              |                                      | 3       | ALS TRX | c1als
              |                                      | 4       | ALS TRY | c1als
              |                                      | 5       |         |
              |                                      | 6       | MCL     | c1ioo
              |                                      | 7       | MCF     | c1ioo

ADC1
channel block | connection         | channel     | usage                          | model
0-31          | 1U interface board | 0/1 (J1A)   | PSL FSS MIXER/NPRO             | c1ioo
              |                    | 2/3 (J2)    | ALS BEAT X/Y DC                | c1als
              |                    | 4/5 (J3)    | PSL eurocrate DAQ interface J4 |
              |                    | 6/7         | PSL eurocrate DAQ interface J5 |
              |                    | 8/9         | PSL eurocrate DAQ interface J6 |
              |                    | 10/11       | MC eurocrate DAQ interface J1  |
              |                    | 12/13       | MC servo board DAQ             |
              |                    | 14/15 (J8)  |                                |
              |                    | 16/17 (J9A) | UNLABELLED ("DAQ ISS1"???)     |
              |                    | 18/19 (J10) | "DAQ ISS2"                     |
              |                    | 20/21       | "DAQ ISS3"                     |
              |                    | 22/23       | ALS BEAT X I/Q                 | c1als
              |                    | 24/25       | ALS BEAT Y I/Q                 | c1als
              |                    | 26/27       |                                |
              |                    | 28/29       |                                |
              |                    | 30/31 (J16) |                                |
The following changes were made:
- "MC L" had been connected to ADC_0_0, moved to ADC_0_6
- "MC F" had been connected to ADC_0_6, moved to ADC_0_7
The c1ioo model was rebuilt/restarted to reflect this change.
The PSL-FSS_MIXER and PSL-FSS_NPRO connections were broken in the c1ioo model, so I fixed them when I moved the MC channels.
All the removed connections from ADC1 were not used by any of the front end models, which is why I unplugged them. Except for the MC DAQ interface J1 and MC servo DAQ connections, I left all other cables plugged in to wherever they were coming from. The MC cables I did fully remove.
I don't know what these connections were meant for. Presumably they expose some useful DAQ channels that we're now getting elsewhere, but I'm not sure. We don't currently have an ISS, which is presumably why the cables labelled "ISS" are not going anywhere.
TODO
I would like to see some more 4-pin lemo --> double BNC cables made. That would allow us to more easily use the ADC1 generic interface board:
- Move ALS TRX/Y to ADC1, so that we can keep all the ALS connections together in ADC1.
- POP QPD X/Y/SUM
We should also figure out whether we're making sub-optimal use of the various DAQ cable connections to the eurocrate DAQ interface cards and servo boards. |
8656 | Thu May 30 11:28:34 2013 | Jamie | Configuration | CDS | c1als model cleanup
The c1als model was pulling out some ADC0 connections that were no longer used for anything:
- ADC_0_1 --> sfm "FD" --> IPC "C1:ALS-SCX_FD"
- ADC_0_5 --> sfm "OCX" --> term
- ADC_0_6 --> sfm "ADC" --> term
The channels would have shown up as C1:ALS-FD, C1:ALS-OCX, C1:ALS-ADC. The IPC connection that presumably was meant to go to c1scx is not connected on the other end.
I removed all this stuff from the model and rebuilt/restarted. |