  5896   Tue Nov 15 15:56:23 2011   jamie   Update   CDS   dataviewer doesn't run

Quote:

Dataviewer is not able to access to fb somehow.

I restarted daqd on fb but it didn't help.

Also the status screen is showing a blank white form for all the realtime models. Something bad is happening.

 So something very strange was happening to the framebuilder (fb).  I logged on to fb and found this being spewed to the logs once a second:

[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory

Apparently some daqd subprocess or thread was trying to call /bin/gcore, and failing since that file doesn't exist.  This apparently started at around 5:52 AM last night:

[Tue Nov 15 05:46:52 2011] main profiler warning: 1 empty blocks in the buffer
[Tue Nov 15 05:46:53 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:54 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:55 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:56 2011] main profiler warning: 0 empty blocks in the buffer
...
[Tue Nov 15 05:52:43 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:44 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:45 2011] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1005400026 to 1005400379
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory

The gcore it's looking for is, I believe, a debugging tool that can retrieve core images of running processes.  I'm guessing that something caused daqd to crash (signal 11 is a segfault), and it was then stuck trying to debug itself.  I can't tell what exactly happened, though.  I'll ping the CDS guys about it.  The daqd process was continuing to run, but it was not responding to anything, which is why it could not be restarted via the normal means, and maybe why the various FB0_*_STATUS channels were seemingly dead.
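
For reference, gcore is the core-dump helper that ships with gdb; if it were installed on fb, the crash handler could have captured a core image of the live daqd for post-mortem inspection.  A rough sketch of doing the same thing by hand (the output path is just an example):

# grab a core image of the running daqd without killing it (gcore ships with gdb)
sudo gcore -o /tmp/daqd-core $(pidof daqd)
# this would write /tmp/daqd-core.<pid>, which can then be opened in gdb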

I manually killed the daqd process, and monit seemed to bring up a new process with no problem.  I'll keep an eye on it.

  5908   Wed Nov 16 10:13:13 2011   jamie   Update   elog   restarted

Quote:

Basically, elog hangs up when googlebot visits.

Googlebot repeatedly tries to obtain all (!) of the entries by specifying "page0" and "mode=full".
Elogd does not seem to have a way to block access from a specified IP.
We might be able to use an http proxy via apache. (c.f. http://midas.psi.ch/elog/adminguide.html#secure)

 There are much simpler ways to prevent googlebot from indexing the pages: http://www.google.com/support/webmasters/bin/answer.py?answer=156449

However, I really think that's a less-than-ideal solution to the actual problem, which is that the elog software is a total piece of crap.  Also, if google does not index the log then it won't appear in google search results.

But if there is one URL that crashes the elog whenever it's requested, then maybe the better solution is to cut off just that URL.
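
For what it's worth, something like the robots.txt below would keep crawlers away from just the expensive full-dump requests without de-indexing the rest of the log.  This is only a sketch: the wildcard pattern is a guess based on the "page0"/"mode=full" requests quoted above, and the destination path is hypothetical (it assumes elogd, or a web server in front of it, can serve a robots.txt from the site root).

# write a robots.txt that disallows only the full-dump requests
cat > /path/to/elog/robots.txt <<'EOF'
User-agent: *
Disallow: /*mode=full
EOF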

  5979   Tue Nov 22 18:15:39 2011   jamie   Update   CDS   c1iscex ADC found dead. Replaced, c1iscex working again

c1iscex has not been running for a couple of days (since the power shutdown at least). I was assuming that the problem was a recurrence of the c1iscex IO chassis issue from a couple weeks ago (5854).  However, upon investigation I found that the timing signals were all fine.  Instead, the IOP was reporting that it was finding no ADC, even though there is one in the chassis.

Since I had a spare ADC that was going to be used for the CyMAC, I decided to try swapping it out to see if that helped.  Sure enough, the system came up fine with the new ADC.  The IOP (c1x01) and c1scx are now both running fine.

I assume the issue before might have been caused by a flaky, failing ADC, which has now failed completely.  We'll need to get a new ADC for me to give back to the CyMAC.

  6037   Tue Nov 29 15:30:01 2011   jamie   Update   CDS   location of currently used filter function

So I tracked down where the currently-used filter function code is defined (the following is all relative to /opt/rtcds/caltech/c1/core/release):

Looking at one of the generated front-end C source files (src/fe/c1lsc/c1lsc.c), it looks like the relevant filter function is:

filterModuleD()

which is defined in:

src/include/drv/fm10Gen.c

and an associated header file is:

src/include/fm10Gen.h
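
A quick way to double-check this mapping (paths relative to /opt/rtcds/caltech/c1/core/release, as above):

# confirm where filterModuleD is defined and where the generated front-end code calls it
grep -n "filterModuleD" src/include/drv/fm10Gen.c src/include/fm10Gen.h src/fe/c1lsc/c1lsc.c
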
  6124   Thu Dec 15 11:47:43 2011   jamie   Update   CDS   RTS UPGRADE IN PROGRESS

I'm now in the middle of upgrading the RTS to version 2.4.

All RTS systems will be down until further notice...

  6125   Thu Dec 15 22:22:18 2011   jamie   Update   CDS   RTS upgrade aborted; restored to previous settings

Unfortunately, after working on it all day, I had to abort the upgrade and revert the system back to yesterday's state.

I think I got most of the upgrade working, but for some reason I could never get the new models to talk to the framebuilder.  Unfortunately, since the upgrade procedure isn't documented anywhere, it was really a fly-by-the-seat-of-my-pants thing.  I got some help from Joe, which got me through one roadblock, but I ultimately got stumped.

I'll try to post a longer log later about what exactly I went through.

In any event, the system is back to the state it was in yesterday, and everything seems to be working.

  6302   Tue Feb 21 22:06:18 2012   jamie   Update   LSC   beatbox DFD installed in 1X2 rack

I have installed a proto version of the ALS beatbox delay-line frequency discriminator (DFD, formerly known as the MFD) in the 1X2 rack, in the empty space above the RF generation box.

That space above the RF generation box had been intentionally left empty to provide needed ventilation airflow for the RF box, since it tends to get pretty hot.  I left 1U of space between the RF box and the beatbox, and so far the situation seems ok, i.e. the RF box is not cooking the beatbox.  This is only a temporary arrangement, though, and we should be able to clean up the rack considerably once the beatbox is fully working.

For power I connected the beatbox to the two unused +/- 18 V Sorensen supplies in the OMC power rack next to the SP table.  I disconnected the OMC cable that was connected to those supplies originally.  Again, this is probably just temporary.

Right now the beatbox isn't fully functioning, but it should be enough to use for lock acquisition studies.  The beatbox is intended to have two multi-channel DFDs, one for each arm, each with coarse and fine outputs.  What's installed only has one DFD, but with both coarse and fine outputs.  It is also intended to have differential DAQ outputs for the mixer IF outputs, which are not installed in this version.

The intended design was also supposed to use a comparator in the initial amplification stages before the delay outputs.  The comparator was removed, though, since it was too slow and was limiting the bandwidth in the coarse channel.  I'll post an updated schematic tomorrow.

I made some initial noise measurements:  with a 21 MHz input, which corresponds to a zero crossing for the minimal delay, the I output is at ~200 nVrms/\sqrt{Hz} at 5 Hz, falling to ~30 nVrms/\sqrt{Hz} by about 100 Hz, after which it's mostly flat.  I'll make calibrated plots for all channels tomorrow.

The actual needed delay lines are not installed/hooked up yet, either.  Either Kiwamu will hook something up tonight, or I'll do it tomorrow.

  6318   Fri Feb 24 19:25:43 2012   jamie   Update   LSC   ALS X-arm beatbox added, DAQ channels wiring normalized

I have hooked the ALS beatbox into the c1ioo DAQ.  In the process, I did some rewiring so that the channel mapping corresponds to what is in the c1gcv model.

The Y-arm beat PD is going through the old proto-DFD setup.  The non-existent X-arm beat PD will use the beatbox alpha.

Y coarse I (proto-DFD) --> c1ioo ADC1 14 --> C1:ALS_BEATY_COARSE_I
Y fine   I (proto-DFD) --> c1ioo ADC1 15 --> C1:ALS_BEATY_FINE_I
X coarse I (bbox alpha)--> c1ioo ADC1 02 --> C1:ALS_BEATX_COARSE_I
X fine   I (bbox alpha)--> c1ioo ADC1 03 --> C1:ALS_BEATX_FINE_I

This remapping required copying some filters into the BEATY_{COARSE,FINE} filter banks.  I think I got it all copied over correctly, but I might have messed something up.  BE AWARE.

We still need to run a proper cable from the X-arm beat PD to the beatbox.

I still need to do a full noise/response characterization of the beatbox (hopefully this weekend).

  6325   Mon Feb 27 18:33:11 2012   jamie   Update   PSL   what to do with old PSL fast channels

It appears that the old PSL fast channels never made it into the new DAQ system.  We need to figure out what to do with them.

A D990155 DAQ interface card at the far right of the 1X1 PSL EuroCard ("VME") crate is supposed to output various PMC/FSS/ISS fast channels, which would then connect to the 1U "lemo breakout" ADC interface chassis.  Some connections are made from the DAQ interface card to the lemo breakout, but they are not used in any RTS model, so they're not being recorded anywhere.

An old elog entry from Rana listing the various PSL DAQ channels should be used as reference, to figure out which channels are coming out, and which we should be recording.

The new ALS channels will need some of these DAQ channels, so we need to figure out which ones we're going to use, and clear out the rest.

 

  6327   Mon Feb 27 19:04:13 2012   jamie   Update   CDS   spontaneous timing glitch in c1lsc IO chassis?

For some reason there appears to have been a spontaneous timing glitch in the c1lsc IO chassis that caused all models running on c1lsc to lose timing sync with the framebuilder.  All the models were reporting "0x4000" ("Timing mismatch between DAQ and FE application") in the DAQ status indicator.  Looking in the front end logs and dmesg on the c1lsc front end machine I could see no obvious indication why this would have happened.  The timing seemed to be hooked up fine, and the indicator lights on the various timing cards were nominal.

I restarted all the models on c1lsc, including and most importantly the c1x04 IOP, and things came back fine.  Below is the restart procedure I used.  Note I killed all the control models first, since the IOP can't be restarted if they're still running.  I then restarted the IOP, followed by all the other control models.

controls@c1lsc ~ 0$ for m in lsc ass oaf; do /opt/rtcds/caltech/c1/scripts/killc1${m}; done
controls@c1lsc ~ 0$ /opt/rtcds/caltech/c1/scripts/startc1x04
c1x04epics C1 IOC Server started
 * Stopping IOP awgtpman ...                                                                      [ ok ]
controls@c1lsc ~ 0$ for m in lsc ass oaf; do /opt/rtcds/caltech/c1/scripts/startc1${m}; done
c1lscepics: no process found
ERROR: Module c1lscfe does not exist in /proc/modules
c1lscepics C1 IOC Server started
 * WARNING:  awgtpman_c1lsc has not yet been started.
c1assepics: no process found
ERROR: Module c1assfe does not exist in /proc/modules
c1assepics C1 IOC Server started
 * WARNING:  awgtpman_c1ass has not yet been started.
c1oafepics: no process found
ERROR: Module c1oaffe does not exist in /proc/modules
c1oafepics C1 IOC Server started
 * WARNING:  awgtpman_c1oaf has not yet been started.
controls@c1lsc ~ 0$ 
  6348   Fri Mar 2 18:11:50 2012   jamie   Summary   SUS   evaluation of eLIGO tip-tilts from LLO

[Suresh, Jamie]

Suresh and I opened up and checked out the eLIGO tip-tilts assemblies we received from LLO. There are two, TT1 and TT2, which were used for aligning AS beam into the OMC on HAM6. The mirror assemblies hang from steel wires suspended from little, damped, vertical blade springs. The magnets are press fit into the edge of the mirror assemblies. The pointy flags magnetically attach to the magnets. BOSEMS are attached to the frame. The DCC numbers on the parts seem to all be entirely lies, but this document seems to be close to what we have, sans the vertical blade springs: T0900566

We noticed a couple of issues related to the magnets and flags. One of the magnets on each mirror assembly is chipped (see attached photos). Some of the magnets are also a bit loose in their press fits in the mirror assemblies. Some of the flags don't seat very well on the magnets. Some of the flag bases are made of some sort of crappy steel that has rusted (also see pictures). Overall some flags/magnets are too wobbly and mechanically unsound. I wouldn't want to use them without overhauling the magnets and flags on the mirror assemblies.

There are what appear to be DCC/SN numbers etched on some of the parts.  They seem to correspond to what's in the document above, but they appear to be lies since I can't find any DCC documents that correspond to these numbers:

TT1: D070176-00 SN001
  mirror assembly: D070183-00 SN003
TT2: D070176-00 SN002
  mirror assembly: D070183-00 SN006
  6468   Thu Mar 29 20:13:21 2012   jamie   Configuration   PEM   PEM_SLOW (i.e. seismic RMS) channels added to fb master

I've added the PEM_SLOW.ini file to the fb master file, which should give us the slow seismic RMS channels when the framebuilder is restarted. Example channels:

[C1:PEM-RMS_ACC6_1_3]
[C1:PEM-RMS_GUR2Y_0p3_1]
[C1:PEM-RMS_STS1X_3_10]
etc.

I also updated the path to the other _SLOW.ini files.

I DID NOT RESTART FB.

I will do it first thing in the am tomorrow, when Kiwamu is not busy getting real work done.

Here is the diff for /opt/rtcds/caltech/c1/target/fb/master:

controls@pianosa:/opt/rtcds/caltech/c1/target/fb 1$ diff -u master~ master
--- master~	2011-09-15 17:32:24.000000000 -0700
+++ master	2012-03-29 19:51:52.000000000 -0700
@@ -7,11 +7,12 @@
 /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
 /opt/rtcds/caltech/c1/chans/daq/C1MCS.ini
 /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1mcs.par
-/cvs/cds/rtcds/caltech/c1/chans/daq/SUS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/MCS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/RMS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/IOP_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/IOO_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/SUS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/MCS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/RMS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/IOP_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/IOO_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/PEM_SLOW.ini
 /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1rfm.par
 /opt/rtcds/caltech/c1/chans/daq/C1RFM.ini
 /opt/rtcds/caltech/c1/chans/daq/C1IOO.ini
controls@pianosa:/opt/rtcds/caltech/c1/target/fb 1$

 

  6484   Wed Apr 4 13:25:29 2012   jamie   Configuration   PEM   PEM_SLOW (i.e. seismic RMS) channels acquiring

Quote:

I've added the PEM_SLOW.ini file to the fb master file, which should give us the slow seismic RMS channels when the framebuilder is restarted. Example channels:

[C1:PEM-RMS_ACC6_1_3]
[C1:PEM-RMS_GUR2Y_0p3_1]
[C1:PEM-RMS_STS1X_3_10]
etc.

 The framebuilder seems to have been restarted, or restarted on its own, so these channels are now being acquired.

Below is a minute trend of a smattering of the available RMS channels over the last five days.

2012-04-04-132346_1182x914_scrot.png

  7114   Wed Aug 8 10:15:13 2012   jamie   Update   Environment   Another earthquake, optics damped

There were another couple of earthquakes at about 9:30am and 9:50am local.

earthquake.png

All suspensions but MC2 had tripped their watchdogs.  I damped and realigned everything and everything looks ok now.

Screenshot-Untitled_Window.png

  7142   Fri Aug 10 11:05:33 2012   jamie   Configuration   IOO   MC trans optics configured

Quote:

Quote:

Quote:

  The PDA255 is a good ringdown detector - Steve can find one in the 40m if you ask him nicely.

 We found a PDA255 but it doesn't seem to work. I am not sure if that is one you are mentioning...but I'll ask Steve tomorrow!

 I double checked the PDA255 found at the 40m and it is broken/bad. Also there was no success hunting PDs at Bridge. So the MC trans is still in the same configuration. Nothing has changed. I'll try doing ringdown measurements with PDA400 today.

Can you explain more what "broken/bad" means?  Is there no signal?  Is it noisy?  Glitch?  etc.

  7143   Fri Aug 10 11:08:26 2012   jamie   Update   Computers   FE Status

Quote:

The c1lsc and c1sus screens are red in the front-end status. I restarted the frame builder, and hit the "global diag reset" button, but to no avail. Yesterday, the only thing Den and I did to c1sus was install a new c1pem model. I got rid of the changes and switched to the old one (I ran rtcds build, install, restart), but the status is still the same.

 The issue you're seeing here is stalled mx_stream processes on the front ends.  On the troublesome front ends you can log in and restart the mx_streams with the "mxstreamrestart" command.

  7161   Mon Aug 13 16:58:07 2012   jamie   Update   CDS   mysterious stuck test points on c1spx model

We were not able to open up any test points in the revived c1spx model (dcuid 61).

Looking at the GDS_TP screen we found that every test point was being held open (C1:FEC-61_GDS_MON_?).  We tried closing all test points, awg and otherwise, with the diag command line (diag -l), but it would crash when we attempted to look at the test points for node 61.

Rebuild, install, and restart of the model had no effect.  As soon as awgtpman came back up, all the test points were full again.

I called Alex and he said he had seen this issue before as a problem with the mbuf kernel module.  Somehow the mbuf module was holding those memory locations open and not freeing them.

He suggested we reboot the machine or restart mbuf.  I used the following procedure to restart mbuf (a consolidated script sketch follows the list):

  • log into c1iscex as controls
  • sudo /etc/init.d/monit stop (needed so that monit doesn't auto-restart the awgtpman processes)
  • rtcds stop all
  • sudo /etc/init.d/mx_stream stop
  • sudo rmmod mbuf
  • sudo modprobe mbuf
  • sudo /etc/init.d/mx_stream start
  • sudo /etc/init.d/monit start
  • rtcds start all

Once this was done, all the test points were cleared.

Alex seems to think this issue is fixed in a newer version of mbuf.  I should probably rebuild and install the updated mbuf kernel module at some point soon to prevent this happening again.

Unfortunately this isn't the end of the story, though.  While the test points were cleared, the channels were still not available from c1spx.

I looked in the framebuilder logs to see if I could see anything suspicious.  Grep'ing for the DCUID (61), I found something that looked a little problematic:

...
GDS server NODE=25 HOST=c1iscex DCUID=61
GDS server NODE=28 HOST=c1ioo DCUID=28
GDS server NODE=33 HOST=c1ioo DCUID=33
GDS server NODE=34 HOST=c1ioo DCUID=34
GDS server NODE=36 HOST=c1sus DCUID=36
GDS server NODE=38 HOST=c1sus DCUID=38
GDS server NODE=39 HOST=c1sus DCUID=39
GDS server NODE=40 HOST=c1lsc DCUID=40
GDS server NODE=42 HOST=c1lsc DCUID=42
GDS server NODE=45 HOST=c1iscex DCUID=45
GDS server NODE=46 HOST=c1iscey DCUID=46
GDS server NODE=47 HOST=c1iscey DCUID=47
GDS server NODE=48 HOST=c1lsc DCUID=48
GDS server NODE=50 HOST=c1lsc DCUID=50
GDS server NODE=60 HOST=c1lsc DCUID=60
GDS server NODE=61 HOST=c1iscex DCUID=61
...

Note that two nodes, 25 and 61, are associated with the same dcuid.  25 was the old dcuid of c1spx, before I renumbered it.  I tracked this down to the target/gds/param/testpoint.par file which had the following:

[C-node25]
hostname=c1iscex
system=c1spx
...
[C-node61]
hostname=c1iscex
system=c1spx

It appears that this file is just amended with new dcuids, so dcuid changes can show up in duplicate.  I removed the offending old stanza and tried restarting fb again...

Unfortunately this didn't fix the issue either.  We're still not seeing any channels for c1spx.

  7162   Mon Aug 13 17:31:19 2012   jamie   Update   CDS   mysterious stuck test points on c1spx model

Quote:

Unfortunately this didn't fix the issue either.  We're still not seeing any channels for c1spx.

So I was wrong, the channels are showing up.  I had forgotten that they are showing up under C1SUP, not C1SPX.

  7163   Mon Aug 13 18:00:30 2012   jamie   Update   General   larger optical tables at the ends

Quote:

I'm proposing larger optical tables at the ends to avoid the existing overcrowding. This would allow the initial pointing and optical lever beams to be set up correctly.

The existing 4' x 2' table would be replaced by a 4' x 3' one.  We would lose only ~3" of space toward the exit door.

I'm working on the new ACRYLIC TABLE COVER for each end, which will cost around $4k ea.  The new cover should fit the larger table.

Let me know what you think.

I'm not sure I see the motivation.  The tables are a little tight, but not that much.  If the issue is the incidence angle of the IP and OPLEV beams, then can't we solve that just by moving the table closer to the viewport?

The overcrowding alone doesn't seem bad enough to justify replacing the tables.

  7165   Mon Aug 13 20:12:29 2012   jamie   Update   CDS   c1sup model moved to c1lsc machine

I moved the c1sup simplant model to the c1lsc machine, where there was one remaining available processor.  This required changing a bunch of IPC routing in the c1sus and c1lsp models.  I have rebuilt and installed the models, and have restarted c1sup, but have not restarted c1sus and c1lsp since they're currently in use.  I'll restart them first thing tomorrow.

  7188   Wed Aug 15 09:09:45 2012   jamie   Update   LSC   LSC whitening triggers

Quote:

I'm ~30% of the way through implementing LSC whitening filter triggers.  I think that everything I have done should be compile-able, but please don't compile c1lsc tonight.  I haven't tested it, and some channel names have changed, so I need to fix the LSC screen when I'm not falling asleep.

Also, Rana pointed out that we may not want the whitening to trigger on immediately upon acquiring lock - if there are other modes ringing down in the cavity, or some weird transients, we don't want to amplify those signals.  We want to wait a second or so for them to die down, then turn on analog whitening.  Jamie - do you know how long the "unit delay" delays things in the RCG?  Do those do what I naively think they do?  I'll ask you in the morning.

The unit delay delays for a single cycle, so I think that's not what you want.  I'm not sure that there's an existing part to add delays like that.

We also need to be a little clever about it, though, since we'll want it to flip off if we lose lock during the delay.

  7191   Wed Aug 15 11:44:35 2012   jamie   Summary   LSC   ntp installed on all workstations

Quote:

5) DTT wasn't working on rossa. Used the Date/Time GUI to reset the system time to match fb and then it stopped giving 'Test Timed Out'. Jamie check rossa ntpd.

ntp is now installed on all the workstations.  I also added it to the /users/controls/workstation-setup.sh script

  7197   Wed Aug 15 17:23:22 2012   jamie   Update   CDS   front end IOP models changed to reflect actual physical hardware

As Rolf pointed out when he was here yesterday, all of our IOPs are filled with parts for ADCs and DACs that don't actually exist in the system.  This was causing needless module error messages and IOP GDS screens that were full of red indicators.  All the IOP models were identically stuffed with 9 ADC parts, 8 DAC parts, and 4 BO parts, even though none of the actual front end IO chassis had physical configurations even remotely like that.  This was probably not causing any particular malfunctions, but it's not right nonetheless.

I went through each IOP, c1x0{1-5}, and changed them to reflect the actual physical hardware in those systems.  I have committed these changes to the svn, but I haven't rebuilt the models yet.  I'll need to be able to restart all models to test the changes, so I'm going to wait until we have a quiet time, probably next week.
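
For reference, the rebuild will presumably go through the usual per-model rtcds sequence (a sketch, with c1x01 standing in for each IOP; the subcommands follow how rtcds is used elsewhere in this log):

# rebuild, install, and restart one IOP at a time, on its own front end
rtcds build c1x01
rtcds install c1x01
rtcds restart c1x01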

  7249   Wed Aug 22 15:47:34 2012   jamie   Summary   General   vent preparation for fast-track vent

We are discussing venting first thing next week, with the goal of
diagnosing what's going on in the PRC.

Reminder of the overall vent plan:

https://wiki-40m.ligo.caltech.edu/vent

Since we won't be prepared for tip-tilt installation (item 2), we should
focus most of the effort on diagnosing what's going on in the PRC.  Of
the other planned activities:

(1) dichroic mirror replacement for PR3 and SR3

  Given that we'll be working on the PRC, we might consider going ahead
  with this replacement, especially if the folding mirror becomes
  suspect for whatever reason.  In any case we should have the new
  mirrors ready to install, which means we should get the phase map
  measurements asap.

(3) black glass beam dumps:

  Install as time and manpower permits.  We need to make sure all needed
  components are baked and ready to install.

(4) OSEM mount screws:

  Delay until next vent.

(5) new periscope plate:

  Delay until next vent.

(6) cavity scattering measurement setup

  Delay until next vent.

  7287   Mon Aug 27 17:14:00 2012   jamie   Update   CDS   c1oaf problem

Quote:

I came in to the lab in the evening and found that c1lsc had "red" for the FB connection.
I restarted the c1lsc models and it kept hanging the machine every time.

I decided to kill all of the models during the startup sequence right after the reboot.
Then I ran only c1x04 and c1lsc. It seems that c1oaf was the cause, but it wasn't clear.

The "red for FB connection" issue was probably a dead mx_stream on c1lsc.  That can usually be fixed by just restarting mx_stream.

There is definitely a problem with c1oaf, though.  It crashes immediately after attempting to start.  kernel log for a crash included below.

We will leave c1oaf off until we have time to debug.

[83752.505720] c1oaf: Send Computer Number  = 0
[83752.505720] c1oaf: entering the loop
[83752.505720] c1oaf: waiting to sync 19520
[83753.207372] c1oaf: Synched 701492
[83753.207372] general protection fault: 0000 [#2] SMP 
[83753.207372] last sysfs file: /sys/devices/pci0000:00/0000:00:1e.0/0000:2e:01.0/class
[83753.207372] CPU 4 
[83753.207372] Modules linked in: c1oaf c1ass c1sup c1lsp c1cal c1lsc c1x04 open_mx dis_irm dis_dx dis_kosif mbuf [last unloaded: c1oaf]
[83753.207372] 
[83753.207372] Pid: 0, comm: swapper Tainted: G      D    2.6.34.1 #5 X7DWU/X7DWU
[83753.207372] RIP: 0010:[<ffffffffa1bf7567>]  [<ffffffffa1bf7567>] T.2870+0x27/0xbf0 [c1oaf]
[83753.207372] RSP: 0000:ffff88023ecc1aa8  EFLAGS: 00010092
[83753.207372] RAX: ffff88023ecc1af8 RBX: ffff88023ecc1ae8 RCX: ffffffffa1c35e48
[83753.207372] RDX: 0000000000000000 RSI: 0000000000000020 RDI: ffffffffa1c21360
[83753.207372] RBP: ffff88023ecc1bb8 R08: 0000000000000000 R09: 0000000000175f60
[83753.207372] R10: 0000000000000000 R11: ffffffffa1c2a640 R12: ffff88023ecc1b38
[83753.207372] R13: ffffffffa1c2a640 R14: 0000000000007fff R15: 0000000000000000
[83753.207372] FS:  0000000000000000(0000) GS:ffff880001f00000(0000) knlGS:0000000000000000
[83753.207372] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[83753.207372] CR2: 000000000378a040 CR3: 0000000001a09000 CR4: 00000000000406e0
[83753.207372] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[83753.207372] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[83753.207372] Process swapper (pid: 0, threadinfo ffff88023ecc0000, task ffff88023ec7eae0)
[83753.207372] Stack:
[83753.207372]  ffff88023ecc1ab8 0000000000000096 0000000000000019 ffff88023ecc1b18
[83753.207372] <0> 0000000000014729 0000000000032a0c ffff880001e12d90 000000000000000a
[83753.207372] <0> ffff88023ecc1bb8 ffffffffa1c06cad ffff88023ecc1be8 000000000000000f
[83753.207372] Call Trace:
[83753.207372]  [<ffffffffa1c06cad>] ? filterModuleD+0xd6d/0xe40 [c1oaf]
[83753.207372]  [<ffffffffa1c07ae3>] feCode+0xd63/0x129b0 [c1oaf]
[83753.207372]  [<ffffffffa1c00dc6>] ? T.2888+0x1966/0x1f10 [c1oaf]
[83753.207372]  [<ffffffffa1c1b3bf>] fe_start+0x1c8f/0x3060 [c1oaf]
[83753.207372]  [<ffffffff8102ce57>] ? select_task_rq_fair+0x2c8/0x821
[83753.207372]  [<ffffffff8104cd8b>] ? enqueue_hrtimer+0x65/0x72
[83753.207372]  [<ffffffff8104d8f6>] ? __hrtimer_start_range_ns+0x2d6/0x2e8
[83753.207372]  [<ffffffff8104d91b>] ? hrtimer_start+0x13/0x15
[83753.207372]  [<ffffffff810173df>] play_dead_common+0x6e/0x70
[83753.207372]  [<ffffffff810173ea>] native_play_dead+0x9/0x20
[83753.207372]  [<ffffffff81001c38>] cpu_idle+0x46/0x8d
[83753.207372]  [<ffffffff814ec523>] start_secondary+0x192/0x196
[83753.207372] Code: 1f 44 00 00 55 66 0f 57 c0 48 89 e5 41 57 41 56 41 55 41 54 53 48 8d 9d 30 ff ff ff 48 8d 43 10 4c 8d 63 50 48 81 ec e8 00 00 00 <66> 0f 29 85 30 ff ff ff 48 89 85 18 ff ff ff 31 c0 48 8d 53 78 
[83753.207372] RIP  [<ffffffffa1bf7567>] T.2870+0x27/0xbf0 [c1oaf]
[83753.207372]  RSP <ffff88023ecc1aa8>
[83753.207372] ---[ end trace df3ef089d7e64971 ]---
[83753.207372] Kernel panic - not syncing: Attempted to kill the idle task!
[83753.207372] Pid: 0, comm: swapper Tainted: G      D    2.6.34.1 #5
[83753.207372] Call Trace:
[83753.207372]  [<ffffffff814ef6f4>] panic+0x73/0xe8
[83753.207372]  [<ffffffff81063c19>] ? crash_kexec+0xef/0xf9
[83753.207372]  [<ffffffff8103a386>] do_exit+0x6d/0x712
[83753.207372]  [<ffffffff81037311>] ? spin_unlock_irqrestore+0x9/0xb
[83753.207372]  [<ffffffff81037f1b>] ? kmsg_dump+0x115/0x12f
[83753.207372]  [<ffffffff81006583>] oops_end+0xb1/0xb9
[83753.207372]  [<ffffffff8100674e>] die+0x55/0x5e
[83753.207372]  [<ffffffff81004496>] do_general_protection+0x12a/0x132
[83753.207372]  [<ffffffff814f17af>] general_protection+0x1f/0x30
[83753.207372]  [<ffffffffa1bf7567>] ? T.2870+0x27/0xbf0 [c1oaf]
[83753.207372]  [<ffffffffa1c06cad>] ? filterModuleD+0xd6d/0xe40 [c1oaf]
[83753.207372]  [<ffffffffa1c07ae3>] feCode+0xd63/0x129b0 [c1oaf]
[83753.207372]  [<ffffffffa1c00dc6>] ? T.2888+0x1966/0x1f10 [c1oaf]
[83753.207372]  [<ffffffffa1c1b3bf>] fe_start+0x1c8f/0x3060 [c1oaf]
[83753.207372]  [<ffffffff8102ce57>] ? select_task_rq_fair+0x2c8/0x821
[83753.207372]  [<ffffffff8104cd8b>] ? enqueue_hrtimer+0x65/0x72
[83753.207372]  [<ffffffff8104d8f6>] ? __hrtimer_start_range_ns+0x2d6/0x2e8
[83753.207372]  [<ffffffff8104d91b>] ? hrtimer_start+0x13/0x15
[83753.207372]  [<ffffffff810173df>] play_dead_common+0x6e/0x70
[83753.207372]  [<ffffffff810173ea>] native_play_dead+0x9/0x20
[83753.207372]  [<ffffffff81001c38>] cpu_idle+0x46/0x8d
[83753.207372]  [<ffffffff814ec523>] start_secondary+0x192/0x196

  7289   Mon Aug 27 18:59:24 2012   jamie   Update   IOO   MC ASC screen was confusing - Jenne is not stupid

Quote:

We have figured out that some of these measurements, those with the WFS off, were also not allowing the dither lines through, so no dither, so no actual measurement.

Jamie is fixing up the model so we can force the WFS to stay off, but allow the dither lines to go through.  He'll elog things later.

In the c1ioo model there were filter modules at the output of the WFS output matrix, right before going to the MC SUS ASCs but right after the dither line inputs, that were not exposed in the C1IOO_WFS_OVERVIEW screen (bad!).  I switched the order of these modules and the dither sums, so these output filters are now before the dither inputs.  This will allow us to turn off all the WFS feedback while still allowing the dither lines.

I updated the medm screens as well (see attached images).

Attachment 1: Screenshot-1.png
Screenshot-1.png
Attachment 2: Screenshot-2.png
Screenshot-2.png
  7291   Tue Aug 28 00:16:19 2012   jamie   Update   General   Alignment and vent prep

I think we (Jenne, Jamie) are going to leave things for the night to give ourselves more time to prep for the vent tomorrow.

We still need to put in the PSL output beam attenuator, and then redo the MC alignment.

The AS spot is also indicating that we're clipping somewhere (see below).  We need to align things in the vertex and then check the centerings on the AP table.

So I think we're back on track and should be ready to vent by the end of the day tomorrow.

Attachment 1: as1.png
as1.png
  7296   Tue Aug 28 17:02:16 2012   jamie   Update   General   svn commit changes

I just spent the last hour checking in a bunch of uncommitted changes to stuff in the SVN.  We need to be MUCH BETTER about this.  We must commit changes after we make them.  When multiple changes get mixed together there's no way to recover from one bad one.

  7309   Wed Aug 29 17:09:57 2012   jamie   Update   SUS   ETMX OK, free swinging

ETMX appears to be fine.  It was stuck to its OSEMs in the usual way.  I touched it and it dislodged and is now swinging freely.  Damping loops have been re-engaged.

Screenshot.png

  7314   Thu Aug 30 00:08:34 2012   jamie   Update   General   In vac plans for tomorrow, 8/30

Quote:

We need to check spot centering on PRM with camera tomorrow.

Suresh checked that we're not clipped by IP ANG/POS pickoff mirrors, but we haven't done any alignment of IP ANG/POS.

 I think we should NOT do any adjustment of IP ANG/POS now.  We should in fact try to recover them when doing the PRM spot centering

Quote:

Tomorrow:  Open ITMX door.  Check with Watek that we're hitting center of PRM.  Then look to see if we're hitting center of PR2.  Then, continue through the chain of optics.

The motivation for removing the ITMX door was so that the scatter measurement team could check alignment of the new viewing mirror next to ETMX.  After discussion today we decided that everything can be done at the X end.  They can inject a probe beam into the ETMX chamber, bounce it off of ITMX and align the viewing mirror with the reflection.  So we'll leave ITMX door on for now.

We should, however, inspect the situation at ITMY and make sure we have good clearance in the Y-arm part of the Michelson.  Koji previously expressed suspicion that we might have clipping on the southern edge of the POY steering mirror, so we need to check that out.

Koji and I discussed the situation for getting camera face views of BS and PRM.  Koji said the original idea was to see if we could install something at the south-east view port of ITMX chamber.  Steve also suggested utilizing the "ceiling" camera mounted on the top of the IOO chamber.

Vertex tasks:

  • check spot centering in PRM
  • check that REFL is getting cleanly to the AP table
  • check IPPOS and IPANG - we should not be adjusting IPPOS or IPANG at this point
  • check spot centering on BS
  • remove ITMY north door
  • check clearance of POY steering mirror
  • ...

in parallel:

  • Steve will inspect the situation for getting a camera view of BS and PRM face, either through IOO or ITMX.

End X tasks:

  • install baffle
  • install "permanent" ITMX viewing mirror, on west side of ETMX - this might require moving ETMX SUS cable bracket south
  • install temporary steering mirror for probe laser on south-east side of ETMX
  • at some point the scatter guys can also do transmission measurements of the ETMX view ports
  • ...
  7318   Thu Aug 30 13:10:41 2012   jamie   Update   Cameras   ETMX

Quote:

We have done some work at ETMX today. We installed the baffle and placed two mirrors on the table.

The baffle position/orientation still needs to be checked more thoroughly to make sure that the beam will pass through the center of the baffle hole.

I must say that I am not at all happy with the baffle situation.  It is currently completely blocking our camera view of the ETMX face.  Here's a video capture of the ETMX face camera:

etmx-face-baffle.png

The circle is the baffle hole, through which we can see just the bottom edge of the test mass.  I don't think whatever benefit the baffle provides outweighs the benefit of being able to see the spot on the mirror.

This afternoon we will try to adjust the baffle, and maybe the camera view mirror, to see if we can get a better shot of the center of the TM.  If we can see the beam spot through the hole we can probably live with it.  If not, I think we should remove the baffle.

  7324   Thu Aug 30 20:35:09 2012   jamie   Update   SUS   target installed on PRM, temporary earthquake stops in place

We installed beam targets on PRM and BS suspension cages.

On both suspensions, one of the screw holes for the target actually houses the set screw for the side OSEM.  This means that the screw on one side of the target only goes in part way.

The target installed on BS is wrong!  It has a center hole instead of two 45 deg holes.  I forgot to remove it, but it will be obvious that it's wrong to the next person who tries to use it.  I believe we're supposed to have a correct target for BS, Steve?

The earthquake stop screws on PRM were too short and were preventing installation of the PRM target.  Therefore, in order to install the target on PRM I had to replace the earthquake stops with ones Jenne and I found in the bake lab clean room that were longer, but have little springs instead of viton inserts at the ends.  This is ok for now, but

WE NEED TO REMEMBER TO REPLACE EARTHQUAKE STOPS ON PRM WHEN WE CLOSE UP.

We checked the beam through PRM and it's a little high to the right (as viewed from behind).  Tomorrow we're going to open ITMX chamber so that we can get a closer look at the spot on PR2.

  7341   Tue Sep 4 20:20:47 2012   jamie   Update   General   problematic tip-tilts

Quote:

We clearly need a better plan for adjusting the tip tilts in pitch, because utilizing their hysteresis is ridiculous.  Koji and Steve are thinking up a set of options, but so far it seems as though all of those options should wait for our next "big" vent.  So for now, we have just done alignment by poking the tip tilt.

Tomorrow, we want to open up the MC doors, open up ETMY, and look to see where the beam is on the optic.  I am concerned that the hysteresis will relax over a long ( >1 hour ) time scale, and we'll lose our pointing.  After that, we should touch the table enough to trip the BS and PRM optics, since Koji is concerned that perhaps the tip tilt will move in an earthquake.  Jamie mentioned that he had to poke the tip tilt a pretty reasonable amount to get it to change a noticeable amount at ETMY, so we suspect that an earthquake won't be a problem, but we will check anyway.

 I'm very unhappy with the tip-tilts right now.  The amount of hysteresis is ridiculous.  I have no confidence that they will stay pointing wherever we point them.  It's true I poked the top more than it would normally move, but I don't actually believe it wouldn't move in an earthquake.  Given how much hysteresis we're seeing, I expect it will just drift on its own and we'll lose good pointing again.

And as a reminder, IPPOS/ANG don't help us here, because the tip-tilts are in the PRC, after the IP pointing sensors.

I think we need to look seriously at possible solutions to eliminate or at least reduce the hysteresis, by either adding weight, or thinner wire, or something.

  7342   Tue Sep 4 20:25:22 2012   jamie   Omnistructure   VAC   better in-air "lite" access connector needed

We really need something better to replace the access connector when we're at air.  This tin foil tunnel crap is dumb.  We can't do any locking in the evening after we've put on the light doors.  We need something that we can put in place of the access connector that allows us access to the OMC and IOO tables, while still allowing IMC locking, and can be left in place at night.

  7344   Wed Sep 5 10:50:15 2012   jamie   Omnistructure   VAC   better in-air "lite" access connector needed

Quote:

Quote:

We really need something better to replace the access connector when we're at air.  This tin foil tunnel crap is dumb.  We can't do any locking in the evening after we've put on the light doors.  We need something that we can put in place of the access connector that allows us access to the OMC and IOO tables, while still allowing IMC locking, and can be left in place at night.

 It is in the shop. It will be ready for the next vent. Koji's dream comes true.

 Can we see the full design?  If we can't lock the mode cleaner with this thing on then it's really of no use.  We want it to be equivalent to the light doors, but allow us to keep the mode cleaner locked.  That's the most important aspect.

  7345   Wed Sep 5 13:11:43 2012   jamie   Omnistructure   VAC   better in-air "lite" access connector needed

Quote:

Quote:

Quote:

We really need something better to replace the access connector when we're at air.  This tin foil tunnel crap is dumb.  We can't do any locking in the evening after we've put on the light doors.  We need something that we can put in place of the access connector that allows us access to the OMC and IOO tables, while still allowing IMC locking, and can be left in place at night.

 It is in the shop. It will be ready for the next vent. Koji's dream comes true.

 Can we see the full design?  If we can't lock the mode cleaner with this thing on then it's really of no use.  We want it to be equivalent to the light doors, but allow us to keep the mode cleaner locked.  That's the most important aspect.

 It also needs to be wide enough that the MMT beam can go through, so that we can not only lock the MC, but also work on the rest of the IFO.

  7442   Wed Sep 26 16:59:30 2012   jamie   Update   IOO   IPANG ND filter installed

Quote:

[Whoever took away this ND filter without elogging it was BAD!!!]  (Jamie, when we first found IPANG coming out of the vacuum during this vent, we moved some of the mirrors on the out-of-vac table in the IPANG path.  Was the ND filter removed at that time?  Or has it been out for much longer, and we never noticed because IPANG wasn't coming nicely out of the vacuum / was clipping on the oplev lens?)

I do not remember removing anything from that setup.  We just moved some mirrors and lenses around

  7457   Mon Oct 1 16:05:01 2012   jamie   Update   CDS   mx stream restart required on all front ends

For some reason the frame builder and mx stream processes on ALL front ends were down.  I restarted the frame builder and all the mx_stream processes and everything seems to be back to normal.  Unclear what caused this.  The CDS guys are aware of the issue with the mx_stream stability and are working on it.
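
The brute-force version of that restart looks something like this (a sketch; the host list is assumed from the front ends named elsewhere in this log, and the init script calls could equally be replaced by the mxstreamrestart wrapper mentioned in an earlier entry):

# restart mx_stream on every front end
for host in c1lsc c1sus c1ioo c1iscex c1iscey; do
    ssh controls@$host "sudo /etc/init.d/mx_stream stop; sudo /etc/init.d/mx_stream start"
done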

  7472   Wed Oct 3 18:45:51 2012   jamie   Update   IOO   wiki page for active IO tip-tilts

I made a wiki page for the active IO tip-tilts.  I should have made this a long time ago.

  7477   Thu Oct 4 14:04:21 2012   jamie   Update   CDS   front ends back up

All the front end machines are back up after the outage.  It looks like none of the front end machines came back up once power was restored, and they all needed to be powered manually.  One of the things I want to do in the next CDS upgrade is put all the front end computers in one rack, so we can control their power remotely.

c1sus was the only one that had a little trouble.  Its timing was for some reason not syncing with the frame builder.  It's unclear why, but after restarting the models a couple of times things came back.

There's still a little red, but it mostly has to do with the fact that c1oaf is busted and not running (it actually crashes the machine when I try to start it, so this needs to be fixed!).

  7478   Thu Oct 4 14:08:49 2012   jamie   Update   SUS   suspensions damped

All suspension damping has been restored.

  7498   Mon Oct 8 09:45:28 2012   jamie   Update   Computers   Rebooted cymac0

Quote:

I rebooted cymac0 a couple of times. When I first got here it was just frozen. I rebooted it and then ran a model (x1ios). The machine froze the second time I ran ./killx1ios. I've rebooted it again.

For context, there's a stand-alone cymac test system running at the 40m.  It's not hooked up to anything, except for just being on the martian network (it's not currently mounting any 40m CDS filesystems, for instance).  The machine is temporarily between the 1Y4 and 1Y5 racks.

  7512   Tue Oct 9 17:33:37 2012   jamie   Update   SUS   diagonalization

Quote:

I went inside to align the beam on the WFS and noticed that oscillations in yaw are ~10 times stronger than in pitch. I plotted the rms of pitch and yaw measured by the LID sensors and saw that the MC3 yaw rms motion is a few times larger than pitch.

What are "LID" sensors?  Do you mean the OSEM shadow sensors?  I'm pretty sure that's what you meant, but I'm curious what "LID" means.

 

  7521   Wed Oct 10 19:22:03 2012   jamie   Update   IOO   Added control for input tip-tilts to c1ass

I have added some control logic and appropriate output DAC channels for the input tip-tilts (TT1 and TT2) to the c1ass model.

The plan is for all the tip-tilt drive electronics to live in a Eurocrate in 1Y2.  They will then interface with a DAC in c1lsc.

c1ass runs on the c1lsc front-end machine, and therefore seemed like an appropriate place for the control logic to go.

I added an interface to DAC0, and a top_named IOO block, to c1ass:

2012-10-10-185707_566x330_scrot.png

The IOO block includes two TT_CONTROL library parts, one for each of TT1 and TT2:

2012-10-10-191013_470x277_scrot.png

This is just a start so that I can start testing the DAC output.

I have not recompiled c1ass yet.  I will do that tomorrow.

  7529   Thu Oct 11 11:57:40 2012   jamie   Update   CDS   all IOP models rebuilt, installed, and restarted to reflect fixed ADC/DAC layouts

Quote:

As Rolf pointed out when he was here yesterday, all of our IOPs are filled with parts for ADCs and DACs that don't actually exist in the system.  This was causing needless module error messages and IOP GDS screens that were full of red indicators.  All the IOP models were identically stuffed with 9 ADC parts, 8 DAC parts, and 4 BO parts, even though none of the actual front end IO chassis had physical configurations even remotely like that.  This was probably not causing any particular malfunctions, but it's not right nonetheless.

I went through each IOP, c1x0{1-5}, and changed them to reflect the actual physical hardware in those systems.  I have committed these changes to the svn, but I haven't rebuilt the models yet.  I'll need to be able to restart all models to test the changes, so I'm going to wait until we have a quiet time, probably next week.

I finally got around to rebuilding, installing, and restarting all the IOP models.  Everything went smoothly.  I had to restart all the models on all the front-end machines, but everything seemed to come back up fine.  We now have many fewer dmesg error messages, and the GDS_TP screens are cleaner and don't have a bunch of needless red.

A frame builder restart was also required, due to name changes in unused (but unfortunately still needed) channels in the IOP.

  7531   Thu Oct 11 12:11:23 2012   jamie   Update   IOO   c1ass with new DAC0 output has been recompiled/installed/restarted

I rebuilt/installed/restarted c1ass.  It came up with no problems.  It's now showing DAC0 with no errors.

After lunch I'll test the outputs.

  7547   Mon Oct 15 13:07:51 2012   jamie   Update   VAC   vacuum VME crate broken, replaced, minor vacuum mayhem ensues

Steve and I managed to access the fuse in the vacuum VME crate, but replacing it did not bring it back up.  We decided to replace the entire crate.

We manually checked that the most important valves, VC1, VM1 and V1, were all closed.  We disconnected their power so that they would automatically close, and we wouldn't have to worry about them accidentally opening when we rebooted the system.

We noted where all the cables were, disconnected everything, and removed the crate.  We noted that one of the valves switched when we disconnected one of the IPC cables from a VME card.  We'll note which one it was in a follow-up post.  We thought that was a little strange, since the VME crate was completely unpowered.

Anyway, we removed the crate, swapped in a spare, replaced all the cards and connections, double checked everything, then powered up the crate.  That's when minor chaos ensued.

When the system came back online after about 20 seconds, we heard a whole bunch of valves switching.  Luckily we were able to get the medm screens back up so that we could see what was going on.

Apparently all of the ION pump valves (VIPEE, VIPEV, VIPSV, VIPSE) opened, which vented the main volume up to 62 mTorr.  All of the annulus valves (VAVSE, VAVSV, VAVBS, VAVEV, VAVEE) also appeared to be open.  One of the roughing pumps was also turned on.  Other stuff we didn't notice?  Bad.

We ran around and manually unplugged all of the ION pump valves, since I couldn't immediately pull up the vacuum control screen.  Once that was done and we could see that the main volume was closed off we went back to figure out what was going on.

We got the medm vacuum control screen back (/cvs/cds/caltech/medm/c0/ve/VacControl_BAK.adl.  really??)  There was a lot of inconsistency between the readback states of the valves and the switch settings.  Toggling the switches seemed to bring things back in line.  At this point it seemed that we had control of the system again.  The epics readings were consistent with what we were seeing in the vacuum rack.

We went through and closed everything that should have been closed.  The line pressure between the big turbo pump TP1 and the rest of the pumps was up at atmosphere, 700 Torr.  We connected the roughing pumps and pumped down the lines so that we could turn the turbos back on.  Once TP2 and TP3 were up to speed, we turned on TP1 and opened V1 to start pumping the main volume back down.   The main volume is at 7e-4 Torr right now.

 

So there are a couple of problems with the vacuum system.

  • Why the hell did valves open when we rebooted the VME crate?  That's very bad.  That should never happen.  If the system comes up in an unsafe state by default, that needs to be fixed ASAP.  The ION pump valves should never have opened, nor should the annulus valves.
  • Why were the switches and the readbacks showing different states?
  • Apparently there is no control of the turbo pumps through MEDM.  This should be fixed.

I connected belledona, the laptop at the vacuum station, to the wired network so that its connection would be less flaky.

 

  7550   Mon Oct 15 20:45:58 2012   jamie   Update   IOO   c1lsc DAC0 now connected to tip-tilt SOS DW boards

The tip-tilt SOS dewhite/AI boards are now connected to the digital system.

20121015_190340.png

I put together a chassis for one of our spare DAC -> IDC interface boards (maybe our last?).  A new SCSI cable now runs from DAC0 in the c1lsc IO chassis in 1Y3 to the DAC interface chassis in 1Y2.

Two homemade ribbon cables go directly from the IDC outputs of the interface chassis to the 66-pin connectors on the backplane of the Eurocrate.  They do not go through the cross-connects, because cross-connects are stupid.  They go directly to the lower connectors for slots 1 and 3, which are the slots for the SOS DW/AI boards.  I had to custom-make these cables, of course, and it was only slightly tricky to get the correct pins to line up.  I should probably document the cable pin-outs.

  • cable 0:  IDC0 on interface chassis (DAC channels 0-7) ---> Eurocrate slot 0 (TT1/TT2)
  • cable 1:  IDC1 on interface chassis (DAC channels 8-15)---> Eurocrate slot 2 (TT3/TT4)

As reported in a previous log in this thread, I added control logic to the c1ass front-end model for the tip-tilts.  I extended it to include TT_CONTROL (model part) for TT3 and TT4 as well, so we're now using all channels of DAC0 in c1lsc for TT control.

I tested all channels by stepping through values in EPICS and reading the monitor and SMA outputs of the DW/AI boards.  The channels all line up correctly.  A full 32k count output of a DAC channel results in 10V output of the DW/AI boards.  All channels checked out, with a full +-10V swing on their output with a full +-32k count swing of the DAC outputs.
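
That matches the expected scaling of roughly 10 V per 32768 counts.  A trivial sanity check of the conversion (assuming a 16-bit DAC with 32768 counts = 10 V full scale):

# convert a DAC count value to the expected DW/AI output voltage
counts=16384
echo "scale=4; $counts * 10 / 32768" | bc    # -> 5.0000 V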

   We're using SN 1 and 2 of the SOS DW/AI boards (seriously!)

The output channels look ok, and not too noisy.

Tomorrow I'll get new SMA cables to connect the DW/AI outputs to the coil driver boards, and I'll start testing the coil driver outputs.

As a reminder:  https://wiki-40m.ligo.caltech.edu/Suspensions/Tip_Tilts_IO

 

  7574   Thu Oct 18 08:00:40 2012   jamie   Update   Computers   Re: Lots of new White :(

Quote:

Solved. The power cord of c1iscaux was loose.
Has anyone worked around the back side of 1Y3?


I looked into the problem. I went through the channel lists for each of the slow machines and found that the variables are supported by c1iscaux:

controls@pianosa:/cvs/cds/caltech/target/c1iscaux 0$ cd /cvs/cds/caltech/target/c1iscaux
controls@pianosa:/cvs/cds/caltech/target/c1iscaux 0$ grep C1:IF *
C1IFO_STATE.db:grecord(ai,"C1:IFO-STATE")

It seemed that the machine was not responding to ping. I went to 1Y3 and found the crate was off. Actually this is not quite correct:
the key was on but the power was off. I looked at the back and found the power cord was loose from its inlet.
Once the cord was pushed in and the crate was keyed on, the white boxes came back online.

Just in case I burtrestored these slow channels by the snapshot at 6:07am on Sunday.

I was working around 1Y2 and 1Y3 when I wired the DAC in the c1lsc IO chassis in 1Y3 to the tip-tilt electronics in 1Y2.  I had to mess around in the back of 1Y3 to get it connected.  I obviously did not intend to touch anything else, but it's certainly possible that I did.

  7612   Wed Oct 24 19:55:06 2012   jamie   Update   my assessment of the folding mirror (passive tip-tilt) situation

We removed all the folding mirrors ({P,S}R{2,3}) from the IFO and took them into the bake lab clean room.  The idea was that at the very least we would install the new dichroic mirrors, and then maybe replace the suspension wires with thinner ones.

I went in to spend some quality time with one of the tip-tilts.  I got the oplev setup working to characterize the pointing.

I grabbed tip-tilt SN003, which was at PR2.  When I set it up it  was already pointing down by a couple cm over about a meter, which is worse than what we were seeing when it was installed.  I assume it got jostled during transport to the clean room?

I removed the optic that was in there and tried installing one of the dichroics.  It was essentially not possible to remove the optic without bending the wires by quite a bit (~45 degrees).  I decided to remove the whole suspension system (top clamps and mirror assembly) so that I could lay it flat on the table to swap the optic.

I was able to put in the dichroic without much trouble and get the suspension assembly back on to the frame.  I adjusted the clamp at the mirror mount to get it hanging back vertical again.  I was able to get it more-or-less vertical without too much trouble.

I poked at the mirror mount a bit to see how I could affect the hysteresis.  The answer is quite a bit, and stochastically.  Sometimes I would man-handle it and it wouldn't move at all.  Sometimes I would poke it just a bit and it would move by something like a radian.

A couple of other things I noted:

  • The eddy current damping blocks are not at all suspended.  The wires are way too thick, so they're basically flexures.  They were all pretty cocked, so I repositioned them by just pushing on them so they were all aligned and centered on the mirror mount magnets.
  • The mirror mounts are very clearly purposely made to be light.  All mass that could be milled out has been.  This is very confusing to me, since this is basically the entire problem.  Why were they designed to be so light?  What problem was that supposed to solve?

I also investigated the weights that Steve baked.  These won't work at all.  The gap between the bottom of the mirror mount and the base is too small.  Even the smallest "weights" would hit the base.  So that whole solution is a no-go.

What else can we do?

At this point not much.  We're not going to be able to install more masses without re-engineering things, which is going to take too much time.  We could install thinner wires.  The wires that are being used now are all 0.0036", and we could install 0.0017" wires.  The problem is that we would have to mill down the clamps in order to reuse them, which would be time consuming.

The plan

So at this point I say we just install the dichroics, get them nicely suspended, and then VERY CAREFULLY reinstall them.  We have to be careful we don't jostle them too much when we transport them back to the IFO.  They look like they were jostled too much when they were transported to the clean room.

My big question right now is: is the plan to install new dichroics in PR2 and SR2 as well, or just in PR3 and SR3, where the green beams are extracted?  I think the answer is no, we only want to install new dichroics in {P,S}R3.

The future

If we're going to stick with these passive tip-tilts, I think we need to consider machining completely new mirror mounts, that are not designed to be so light.  I think that's basically the only way we're going to solve the hysteresis problem.

I also note that the new active tip-tilts that we're going to use for the IO steering mirrors are going to have all the same problems.  The frame is taller, so the suspensions are longer, but everything else, including the mirror mounts, is exactly the same.  I can't see why they won't suffer from the same issues.  Luckily we'll be able to point them, so I guess we won't notice.
