ID   Date   Author   Type   Category   Subject
  7612   Wed Oct 24 19:55:06 2012   jamie   Update   my assessment of the folding mirror (passive tip-tilt) situation

We removed all the folding mirrors ({P,S}R{2,3}) from the IFO and took them into the bake lab clean room.  The idea was that at the very least we would install the new dichroic mirrors, and then maybe replace the suspension wires with thinner ones.

I went in to spend some quality time with one of the tip-tilts.  I got the oplev setup working to characterize the pointing.

I grabbed tip-tilt SN003, which was at PR2.  When I set it up it  was already pointing down by a couple cm over about a meter, which is worse than what we were seeing when it was installed.  I assume it got jostled during transport to the clean room?

I removed the optic that was in there and tried installing one of the dichroics.  It was essentially not possible to remove the optic without bending the wires by quite a bit (~45 degrees).  I decided to remove the whole suspension system (top clamps and mirror assembly) so that I could lay it flat on the table to swap the optic.

I was able to put in the dichroic without much trouble and get the suspension assembly back onto the frame.  I adjusted the clamp at the mirror mount and was able to get it hanging more-or-less vertical again without too much trouble.

I poked at the mirror mount a bit to see how much I could affect the hysteresis.  The answer is quite a bit, and stochastically.  Sometimes I would manhandle it and it wouldn't move at all.  Sometimes I would poke it just a bit and it would move by something like a radian.

A couple of other things I noted:

  • The eddy current damping blocks are not at all suspended.  The wires are way too thick, so they're basically flexures.  They were all pretty cocked, so I repositioned them by just pushing on them until they were all aligned and centered on the mirror mount magnets.
  • The mirror mounts are very clearly purposely made to be light.  All mass that could be milled out has been.  This is very confusing to me, since this is basically the entire problem.  Why were they designed to be so light?  What problem was that supposed to solve?

I also investigated the weights that Steve baked.  These won't work at all.  The gap between the bottom of the mirror mount and the base is too small.  Even the smallest "weights" would hit the base.  So that whole solution is a no-go.

What else can we do?

At this point not much.  We're not going to be able to install more masses without re-engineering things, which is going to take too much time.  We could install thinner wires.  The wires that are being used now are all 0.0036", and we could install 0.0017" wires.  The problem is that we would have to mill down the clamps in order to reuse them, which would be time consuming.

The plan

So at this point I say we just install the dichroics, get them nicely suspended, and then VERY CAREFULLY reinstall them.  We have to be careful we don't jostle them too much when we transport them back to the IFO.  It looks like they were jostled too much when they were transported to the clean room.

My big question right now is: is the plan to install new dichroics in PR2 and SR2 as well, or just in PR3 and SR3, where the green beams are extracted?  I think the answer is no, we only want to install new dichroics in {P,S}R3.

The future

If we're going to stick with these passive tip-tilts, I think we need to consider machining completely new mirror mounts, that are not designed to be so light.  I think that's basically the only way we're going to solve the hysteresis problem.

I also note that the new active tip-tilts that we're going to use for the IO steering mirrors are going to have all the same problems.  The frame is taller, so the suspensions are longer, but everything else, including the mirror mounts, is exactly the same.  I can't see how they won't suffer the same issues.  Luckily we'll be able to point them, so I guess we won't notice.

  7613   Wed Oct 24 20:09:41 2012   jamie   Update   installing the new dichroic mirrors in PR3/SR3

When installing the dichroics we need to pay attention to the wedge angle.  I didn't, so the ghost beam is currently pointing up and to the right (when facing the optic).  We should think carefully about where we want the ghost beams to go.

I also was using TT SN003, which I believe was being used for PR2.  However, I don't think we want to install dichroics in the PR2, and we might want to put all the tip-tilts back in the same spots they were in before.  We therefore may want to put the old optic back in SN003, and put the dichroics in SN005 (PR3) and SN001 (SR3) (see 7601).

  7646   Wed Oct 31 17:11:40 2012   jamie   Update   Alignment   progress, then setback

jamie, nic, jenne, den, raji, manasa

We were doing pretty well with alignment, until I apparently fucked things up.

We were approaching the arm alignment on two fronts, looking for retro-reflection from both the ITMs and the ETMs.

Nic and Raji were looking for the reflected beam off of ETMY, at the ETMY chamber.  We put an AWG sine excitation into ETMY pitch and yaw.  Nic eventually found the reflected beam, and they adjusted ETMY for retro-reflection.

Meanwhile, Jenne and I adjusted ITMY to get the MICH Y arm beam retro-reflecting to BS.

Jenne and I then moved to the X arm.  We adjusted BS to center on ITMX, then we moved to ETMX to center the beam there.  We didn't bother looking for the ETMX reflected beam.  We then went back to BS and adjusted ITMX to get the MICH X arm beam retro-reflected to the BS.

At this point we were fairly confident that we had the PRC, MICH, and X and Y arm alignment ok.

We then moved on to the signal recycling cavity.  Since we had removed and reinstalled the SRC tip-tilts, and realigned everything else, they were no longer in the correct spot.  The beam was off-center in yaw on SR3, and the SR3 reflected beam was hitting low and to the right on SR2.  I went to loosen SR3 so that I could adjust its position and yaw, and that's when things went wrong.

Apparently I hit something on the BS table and completely lost the input pointing.  I was completely perplexed until I found that the PZT2 mount looked strange.  The upper adjustment screw appeared to have no range.  Looking closer I realized that we had somehow lost the gimbal ball between the screw and the mount.  Apparently I hit PZT2 hard enough to separate the mirror mount from the frame, which caused the gimbal ball to drop out.  The gimbal ball probably got lost in a table hole, so we found a similar mount from which we stole a replacement ball.

However, after putting PZT2 back together things didn't come back to the right place.  We were somehow high going through PRM, so we couldn't retro-reflect from ITMY without completely clipping on the PRM/BS apertures.  wtf.

Jenne looked at some trends and we saw a big jump in the BS/PRM osems.  Clearly I must have hit the table/PZT2 pretty hard, enough to actually kick the table.  I'm completely perplexed how I could have hit it so hard and not really realized it.

Anyway, we stopped at this point, to keep me from punching a hole in the wall.  We will re-assess the situation in the morning.  Hopefully the BS table will have relaxed back to its original position by then.

  7649   Wed Oct 31 17:36:39 2012   jamie   Update   Alignment   progress, then setback - trend of BS table shift

Quote:

Here is a two hour set of second trends of 2 sensors per mirror, for BS, PRM, ITMY and MC1.  You can see about an hour ago there was a big change in the BS and PRM suspensions, but not in the ITMY and MC1 suspensions.  This corresponds as best we can tell with the time that Jamie was figuring out and then fixing PZT2's mount.  You can see that the table takes some time to relax back to its original position.  Also, interestingly, after we put the doors on ~10 or 20 minutes ago, things change a little bit on all tables. This is a little disconcerting, although it's not a huge change.

What's going on with those jumps on MC1?  They're smaller, but noticeable, and look like they happened around the same time.  Did the MC table jump as well?

more looking tomorrow.

  7653   Thu Nov 1 10:13:53 2012   jamie   Update   Alignment   Transmittance Measurements on LaserOptik mirror

Quote:

...Looks like the coating is out of spec at any angle for 1064nm. E11200219-v2

The coating should have very low 1064nm p transmission at 45 degrees, which the plot seems to indicate that it does.  That's really the only part of the spec that this measurement is saying anything about.    What makes you say it's out of spec?

  7654   Thu Nov 1 10:19:11 2012   jamie   Update   Alignment   Transmittance Measurements on LaserOptik mirror

Quote:

Quote:

...Looks like the coating is out of spec at any angle for 1064nm. E11200219-v2

The coating should have very low 1064nm p transmission at 45 degrees, which the plot seems to indicate that it does.  That's really the only part of the spec that this measurement is saying anything about.    What makes you say it's out of spec?

Ok, yes, sorry, the data itself does indicate that the transmission is way too high at 45 degrees for 1064 p.

  7655   Thu Nov 1 10:58:49 2012   jamie   Update   Alignment   progress, then setback - trend of BS table shift

Here's a plot of the BS, PRM, and MC1 suspension shadow sensor trends over the last 24 hours.  I tried to put everything on the same Y scale:

foo.png

There definitely was some shift in the BS table, visible in the BS and PRM sensors, that seems to be settling back now.  MC1 is there for reference, to show that it didn't really move.

  7657   Thu Nov 1 19:26:09 2012   jamie   Update   Alignment   aligned, this time without the crying

Jamie, Jenne, Nic, Manasa, Raji, Ayaka, Den

We basically walked through the entire alignment again, starting from the Faraday.  We weren't that far off, so we didn't have to do anything too major.  Here's basically the procedure we used:

  • Using PZT 1 and 2 we directed the beam through the PRM aperture and through an aperture in front of PR2.  We also got good retro-reflection from PRM (with PRM free-hanging).  This completely determined our input pointing, and once it was done we DID NOT TOUCH the PZT mirrors any more.
  • The beam was fortunately still centered on PR2, so we didn't touch PR2.
  • Using PR3 we directed the beam through the BS aperture, through the ITMY aperture, and to the ETMY aperture.  This was accomplished by loosening PR3 and twisting it to adjust yaw, moving it forward/backwards to adjust the beam translation, and tapping the mirror mount to affect the hysteresis to adjust pitch.  Surprisingly this worked, and we were able to get the beam cleanly through the BS and Y arm apertures.  Reclamped PR3.
  • Adjusted ITMY biases (MEDM) to get Michelson Y arm retro-reflecting to BS.
  • Adjusting BS biases (MEDM) we directed the beam through the ITMX and ETMX apertures.
  • Adjusted ITMX biases (MEDM) to get Michelson X arm retro-reflecting to BS.

At this point things were looking good and we had Michelson fringes at AS.  Time to align SRC.  This is where things went awry yesterday.  Proceeded more carefully this time:

  • Loosened SR3 to adjust yaw pointing towards SRM.  We were pretty far off at SRM, but we could get mostly there with just a little bit of adjustment of SR3.  Got beam centered in yaw on SR2.
  • Loosened and adjusted SR2 to get beam centered in yaw on SRM.
  • Once we were centered on SR3, SR2, and SRM, we reclamped SR2/SR3.
  • Pitch adjustment was the same stupid stupid jabbing at SR2/3 to get the hysteresis to stick at an acceptable place.**
  • Looked at retro-reflection from SRM.  We were off in yaw.  We decided to adjust SRM pointing, rather than go through some painful SR2/3 iterative adjustment.  So we unclamped SRM and adjusted it slightly in yaw to get the retro-reflection at BS.

At this point we felt good that we had the full IFO aligned.  We were then able to fairly quickly get the AS beam back out on the AS table.

We took a stab at getting the REFL beam situation figured out.  We confirmed that what we thought was REFL is indeed NOT REFL, although we're still not quite sure what we're seeing.  Since it was getting late we decided to close up and take a stab at it tomorrow, possibly after removing the access connector.

The main tasks for tomorrow:

  • Find ALL pick-off beams (POX, POY, POP) and get them out of the vacuum.  We'll use Jenne's new (Suresh's old) green laser pointer method to deal with POP.
  • Find all OPLEV beams and make sure they're all still centered on their optics and are coming out cleanly.
  • Center IPPOS and IPANG
  • Find REFL and get it cleanly out.
  • Do a full check of everything else to make sure there is no clipping and that everything is where we expect it to be.

Then we'll be ready to close.  I don't see us putting on heavy doors tomorrow, but we should be able to get everything mostly done so that we're ready on Monday.

** Comment: I continue to have no confidence that we're going to maintain good pointing with these crappy tip-tilt folding mirrors.

 

  7670   Mon Nov 5 13:28:15 2012   jamie   Update   General   40m DCC document tree

Link to the new 40m DCC Document Tree: E1200979

  7674   Tue Nov 6 17:07:04 2012   jamie   Update   Alignment   AS and REFL

AS: tmp6oTENk.png

REFL: tmplamEtZ.png

  7684   Wed Nov 7 17:20:01 2012   jamie   Update   Alignment   Jamie's tip tilt proposal

Quote:

Steve's elog 7682 is in response to the conversation we had at group meeting re: Jamie's proposed idea of re-purposing the active tip tilts.

What if we use the active TTs for the PR and SR folding mirrors, and use something else (like the picomotors that Steve found from the old days) for our input steering?

I think we will still need two active steering mirrors for input pointing into the OMC, after the SRC, so I think we'll still need two of the active TTs there.

My thought was about using the two active TTs that we were going to use as the input PZT replacements to instead replace the PR2/3 suspensions.  Hysteresis in PR2/3 wouldn't be an issue if we could control them.

With static input pointing, i.e. leaving PZT1/2 as they are, I think we could use PRM and PR2/3 to compensate for most input pointing drift.  We might have to deal with the beam in PRC not being centered on PRM, though.

Koji's suggestion was that we could replace the PZTs with pico-motors.  This would give us all the DC input pointing control we need.

So I guess the suggestion on the table is to replace PZT1/2 with pico-motor mounts, and then replace PR2/3 with two of the active tip-tilts.  No hysteresis in the PRC, while maintaining full input pointing control.

  7693   Fri Nov 9 11:38:38 2012   jamie   Update   General   we're closing up

After a brief look this morning, I called it and declared that we were ok to close up.  The access connector is almost all buttoned up, and both ETM doors are on.

Basically nothing moved since last night, which is good.  Jenne and I were a little bit worried about how the input pointing might have been affected by our moving of the green periscope in the MC chamber.

First thing this morning I went into the BS chamber to check out the alignment situation there.  I put the targets on the PRM and BS cages.  We were basically clear through the PRM aperture, and in retro-reflection.

The BS was not quite so clear.  There is a little bit of clipping through the exit aperture on the X arm side.  However, it didn't seem to me like it was enough to warrant retouching all the input alignment again, as that would have set us back another couple of days at least.

Both arm green beams are cleanly coming out, and are nicely overlapping with the IR beams at the BS (we even have a clean ~04 mode from the Y arm).  The AS and REFL spots look good.  IPANG and IPPOS are centered and haven't moved much since last night.  We're ready to go.

The rest of the vertex doors will go on after lunch.

  7749   Tue Nov 27 00:26:00 2012   jamie   Omnistructure   Computers   Ubuntu update seems to have broken html input to elog on firefox

 After some system updates this evening, firefox can no longer handle the html input encoding for the elog.  I'm not sure what happened.  You can still use the "ELCode" or "plain" input encodings, but "HTML" won't work.  The problem seems to be firefox 17.  ottavia and rosalba were upgraded, while rossa and pianosa have not yet been.

I've installed chromium-browser (debranded chrome) on all the machines as a backup.  Hopefully the problem will clear itself up with the next update.  In the mean time I'll try to figure out what happened.

To use chromium: Applications -> Internet -> Chromium

  7750   Tue Nov 27 00:45:20 2012   jamie   Update   IOO   MC_L and laser frequency noise spectra

I grabbed a plot of the iLIGO PSL frequency noise spectrum from the Rana manifesto:

laser_noise.pdf

Rana's contention is that this spectrum (red trace) is roughly the same as for our NPRO.

From the jenne/mevans/pepper/rana paper Active noise cancellation in a suspended interferometer I pulled a plot of the calibrated MC_L noise spectrum:

MCL_noise.pdf

The green line on this plot is a rough estimate of where the above laser frequency noise would fall on this plot.  The conversion is:

    L / f  =  10 m / 2.8e14 Hz = 3.5e-14 m/Hz

which at 10 Hz is roughly 1.5e-11 m.  This puts the crossover somewhere between 1 and 10 Hz.
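
Just to make that arithmetic easy to check, here is a quick back-of-the-envelope version in Python.  The 10 Hz displacement value is the one quoted above; the implied frequency noise is simply derived from it, not read off the plot:

#!/usr/bin/python
# back-of-the-envelope check of the frequency-noise-to-MC_L conversion above
L = 10.0            # [m] length used above
f = 2.8e14          # [Hz] laser frequency (1064 nm)

conv = L / f        # [m/Hz], ~3.5e-14
print("conversion: %.2g m/Hz" % conv)

x_10Hz = 1.5e-11    # [m/rtHz] green-line value quoted above at 10 Hz
print("implied frequency noise at 10 Hz: %.0f Hz/rtHz" % (x_10Hz / conv))   # ~420 Hz/rtHz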

  7757   Wed Nov 28 17:40:28 2012   jamie   Omnistructure   Computers   elog working again on firefox 17

Koji and I figured out what the problem is.  Apparently firefox 17.0 (specifically its user-agent string) breaks fckeditor, which is the javascript toolkit the elog uses for the wysiwyg text editor.  See https://support.mozilla.org/en-US/questions/942438.

The suspect line was in elog/fckeditor/editor/js/fckeditorcode_gecko.js.  I hacked it up so that it stopped whatever crappy conditional user agent crap it was doing.  It seems to be working now.

Edit by Koji: In order to make this change work, I needed to clear the cache of firefox from Tool/Clear Recent History menu.

  7786   Tue Dec 4 20:38:51 2012   jamie   Omnistructure   Computers   new (beta) version of nds2 installed on control room machines

I've installed the new nds2 packages on the control room machines.

These new packages include some new and improved interfaces for python, matlab, and octave that were not previously available. See the documentation in:

  /usr/share/doc/nds2-client-doc/html/index.html

for details on how to use them.  They all work something like:

  conn = nds2.connection('fb', 8088)
  chans = conn.findChannels()
  buffers = conn.fetch(t1, t2, {c1,...})
  data = buffers(1).getData()

NOTE: the new interface for python is distinct from the one provided by pynds.  The old pynds interface should continue to work, though.
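
For a concrete example, here is roughly what the same calls look like from the new Python interface.  This is only a sketch: the channel names are just ones that appear elsewhere in this log, and the GPS times are made up.

import nds2

conn = nds2.connection('fb', 8088)
# fetch 16 seconds of two channels (channel names illustrative, GPS times made up)
bufs = conn.fetch(1040000000, 1040000016,
                  ['C1:SUS-MC3_URSEN_OUT16', 'C1:SUS-MC3_URSEN_INMON'])
print(bufs[0].data)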

To use the new matlab interface, you have to first issue the following command:

   javaaddpath('/usr/lib/java')

I'll try to figure out a way to have that included automatically.

The old Matlab mex functions (NDS*_GetData, NDS*_GetChannel, etc.) are now provided by a new and improved package.  Those should now work "out of the box".

  7793   Wed Dec 5 16:54:29 2012   jamie   Omnistructure   Computers   new (beta) version of NDS2 installed on control room machines

Quote:

NDS2 is not designed for non DQ channels - it gets data from the frames, not through NDS1.

For getting the non-DQ stuff, I would just continue using our NDS1 compatible NDS mex files (this is what is used in mDV).

The NDS2 protocol is not for non-DQ, but the NDS2 client is capable of talking both the NDS1 and NDS2 protocols.

fb:8088 is an NDS1 server, so the client is talking NDS1 to fb.  It should therefore be capable of getting online data.

It doesn't seem to be seeing the online channels, though, so I'll work with Leo to figure out what's going on there.

The old mex functions, which like I said are now available, aren't capable of getting online data.

  7805   Mon Dec 10 16:28:13 2012   jamie   Omnistructure   Computers   progressive retrieval of online data now possible with the new NDS2 client

Leo fixed an issue with the new nds2-client packages that was preventing it from retrieving online data.  It's working now from matlab, python, and octave.

Here's an example of a dataviewer-like script in python:

#!/usr/bin/python

import sys
import nds2
from pylab import *

# channels are command line arguments
channels = sys.argv[1:]

# connect to the NDS1 server on the framebuilder
conn = nds2.connection('fb', 8088)

fig = figure()
fig.show()

# iterate() yields successive blocks of online data, one buffer per channel
for bufs in conn.iterate(channels):
    fig.clf()
    for buf in bufs:
        plot(buf.data)
    draw()
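
Usage is something like (the script name and channel choices here are just placeholders):

    python online_plot.py C1:SUS-MC3_URSEN_OUT16 C1:SUS-MC3_URSEN_INMON

With no start/stop times given to iterate(), the connection streams live data, so the plot keeps updating until you kill the script.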

  7897   Mon Jan 14 12:08:39 2013   jamie   Update   Alignment   TT2 connections

Quote:

Quote:

Was the connection between the feedthrough (atmosphere side) and the connector on the optical table confirmed to be OK?

We had a similar situation for the TT1. We found that we were using the wrong feedthrough connector (see TT1 elog).

 The major problem that Manasa and I found was that we weren't getting voltage along the cable between the rack and the chamber (all out-of-vac stuff).  We used a function generator to put voltage across 2 pins, then a DMM to try to measure that voltage on the other end of the cable.  No go.  Jamie and I will look at it again today.

Everything was fine.  Apparently these guys just forgot that the cable from the rack to the chamber flips its pins.  There was also a small problem with the patch cable from the coil driver that had flipped pins.  This was fixed.  The coil driver signals are now getting to the TTs.

Investigating why the pitch/yaw seems to be flipped...

  7901   Tue Jan 15 19:26:35 2013   jamie   Update   Alignment   Adjustment of active TTs and input alignment

[Jamie, Manasa, Jenne]

We started by verifying that the tip-tilts were getting the correct signals at the correct coils, and were hanging properly without touching.

We started with TT2.  It was not hanging freely.  One of the coils was in much further than the others, and the mirror frame was basically sitting on the back side yaw dampers.  I backed out the coil to match the others, and backed off all of the dampers, both in back and the corner dampers on the front.

Once the mirror was freely suspended, we borrowed the BS oplev to verify that the mirror was hanging vertically.  I used the adjustment screw on the bottom of the frame to make it level.  Once that was done, we verified our EPICS control.  We finally figured out that some of the coils have their polarity flipped relative to the others, which is why we were seeing pitch as yaw and vice-versa.  At that point we were satisfied with how TT2 was hanging, and went back to TT1.

Given how hard it is to look at TT1, I just made sure all the dampers were backed out and touched the mirror frame to verify that it was freely swinging.  I leveled TT1 with the lower frame adjustment screw by looking at the spot position on MMT1.  Once it was level, we adjusted the EPICS biases in yaw to get it centered in yaw on MMT1.

I then adjusted the screws on MMT1 to get the beam centered at MMT2, and did the same at MMT2 to get the beam centered vertically at TT2.

I put the target at PRM and the double target at BS.  I loosened TT2 from its base so that I could push it around a bit.  Once I had it in a reasonable position, with a beam coming out at PR3, I adjusted MMT1 to get the beam centered through the PRM target.  I went back and checked that we were still centered at MMT1.  We then adjusted the pitch and yaw of TT2 to get the transmitted beam through the BS targets as cleanly as possible.

At this point we stopped and closed up.  Tomorrow first thing AM we'll get our beams at the ETMs, try to finalize the input alignment, and see if we can do some in-air locking.

The plan is still to close up at the end of the week.

  7919   Fri Jan 18 15:08:13 2013   jamie   Update   Alignment   alignment of temporary half PRC

[jenne, jamie]

Jenne and I got the half PRC flashing.  We could see flashes in the PRM and PR2 face cameras.

We took out the mirror in the REFL path on the AP table that diverts the beam to the REFL RF PDs, so that we could get more light on the REFL camera.  Added an ND filter to the REFL camera so as not to saturate.

  7949   Mon Jan 28 21:32:38 2013   jamie   Update   Alignment   tweaking of alignment into half PRC

[Koji, Jamie]

We tweaked up the alignment of the half PRC a bit.  Koji started by looking at the REFL and POP DC powers as a function of TT2 and PRM alignment. 
He found that the reflected beam for good PRC transmission was not well overlapped at REFL.  When the beam was well overlapped at REFL, there was clipping in the REFL path on the AS table.

We started by getting good overlap at REFL, and then went to the AS table to tweak up all the beams on the REFL pds and cameras.
This made the unlocked REFL DC about 40 counts.  This was about 10 mV (= 0.2 mA) at the REFL55 PD.
This amazed Koji, since the maximum REFL DC we had found earlier in the day was 160 counts, for a particular combination of PRM pitch and TT2 pitch.  So something could be wrong somewhere.

We then moved to the ITMX table, where we cleaned up the POP path.  We noticed that the lens in the POP path is a little slow, so the beam is too big on the POP PD and on the POP camera (and on the camera pick-off mirror as well).
We moved the currently unused POP55 and POP22/110 RFPDs out of the way so we could move the POP RF PD and camera back closer to the focus.  Things are better, but we still need to get a better focus, particularly on the POP PD.

We found two irises on the oplev path.  They are too big, and one of them is too close to the POP beam.  Since it does not make sense to have two irises in such close vicinity, we pulled that one off its post.

Other things we noticed:

  • The POP beam is definitely clipping in the vacuum, looks like on two sides.
  • We can probably get better layout on the POP table, so we're not hitting mirrors at oblique angles and can get beams on grid paths.

After the alignment work on the tables, we started locking the cavity.  We already saw the POPDC power improve from 1000 counts to 2500 counts without any realignment.
Once PRM was tweaked a little (0.01ish in pitch and yaw), a maximum POPDC of 6000 counts was achieved.  But the POP camera still shows a non-Gaussian beam shape, and the Faraday camera shows bright scattering of the beam.  It seems that the scattering at the Faraday is not from the main beam but from the halo leaking from the cavity (i.e. unlocking the cavity made the scattering disappear).


Tomorrow Jenne and I will go into BS to tweak the alignment of the TEMP PRC flat mirror, and into ITMX to see if we can clean up the POP path.

  8778   Thu Jun 27 23:18:46 2013   jamie   Update   Computer Scripts / Programs   WARNING: Matlab upgraded

Quote:

I moved the old matlab directory from /cvs/cds/caltech/apps/linux64/matlab_o to /cvs/cds/caltech/apps/linux64/matlab_oo

and moved the previously current matlab dir from /cvs/cds/caltech/apps/linux64/matlab to /cvs/cds/caltech/apps/linux64/matlab_o.

And have installed the new Matlab 2013a into /cvs/cds/caltech/apps/linux64/matlab.

Since I'm not sure how well the new Matlab/Simulink plays with the CDS RCG, I've left the old one and we can easily revert by renaming directories.

Be careful with this.  If Matlab starts re-saving models in a new file format that is unreadable by the RCG, then we won't be able to rebuild models until we do an svn revert.  Or the bigger danger, that the RCG *thinks* it reads the file and generates code that does something unexpected.

Of course this all may be an attempt to drive home the point that we need an RCG test suite.

  8951   Thu Aug 1 15:06:59 2013   jamie   Update   CDS   New model for endtable PZTs

Quote:

I have made a new model for the endtable PZT servo, and have put it in c1iscex. Model name is c1asx. Yesterday, Koji helped me start the model up. The model seems to be running fine now (there were some problems initially, I will post a more detailed elog about this in a bit) but some channels, which are computer generated, don't seem to exist (they show up as white blocks on the MEDM GDS_TP screen). I am attaching a screenshot of the said screen and the names of the channels. More detailed elog about what was done in making the model to follow.

 

C1ASX_GDS_TP.png

 

Channel Names:

C1:DAQ-DC0_C1ASX_STATUS (this is the channel name for the two leftmost white blocks)

C1:DAQ_DC0_C1ASX_CRC_CPS

C1:DAQ-DC0_C1ASX_CRC_SUM

I don't know what's going on here (why the channels are white), and I don't yet have a suggestion of where to look to fix it but...

Is there a reason that you're making a new model for this?  You could just use an existing model at c1iscex, like c1scx, and put your stuff in a top-names block.  Then you wouldn't have to worry about all of the issues with adding and integrating a new model.

  9086   Wed Aug 28 19:47:28 2013   jamie   Configuration   CDS   front end IPC configuration

Quote:

It's hard to believe that c1lsc -> c1sus only has 4 channels. We actuate ITMX/Y/BS/PRM/SRM for the length control.
In addition to these, we control the angles of ITMX/Y/BS/PRM (and SRM in future) via c1ass model on c1lsc.
So there should be at least 12 connections (and more as I ignored MCL).

Koji was correct that I missed some connections from c1lsc to c1sus.  I corrected the graph in the original post.

Also, I should have noted that that graph doesn't actually include everything that we now have.  I left out all the simplant stuff, which adds extra connections between c1lsc and c1sus, mostly because the sus simplant is being run on c1lsc only because there was no space on c1sus.  That should be corrected, either by moving c1rfm to c1lsc, or by adding a new core to c1sus.

I also spoke to Rolf today about the possibility of getting a OneStop fiber card and a Dolphin card for c1ioo.  The Dolphin card and cable we should be able to order, no problem.  As for the OneStop, we might have to borrow a new fiber-supporting card from India, then send our current card to OneStop for fiber-supporting modifications.  It sounds kind of tricky.  I'll post more as I figure things out.

Rolf also said that in newer versions of the RCG, the RFM direct memory access (DMA) has improved in performance considerably, which reduces considerably the model run-time delay involved in using the RFM.  In other words, the long awaited RCG upgrade might alleviate some of our IPC woes.

We need to upgrade the RCG to the latest release (2.7)

  9087   Wed Aug 28 23:09:55 2013   jamie   Configuration   CDS   code to generate host IPC graph
Attachment 1: hosts.png
hosts.png
Attachment 2: 40m-ipcs-graph.py
#!/usr/bin/env python

# ipc connections: (from, to, number)
ipcs = [
    ('c1scx', 'c1lsc', 1),
    ('c1scy', 'c1lsc', 1),
    ('c1oaf', 'c1lsc', 8),

    ('c1scx', 'c1ass', 1),
    ('c1scy', 'c1ass', 1),
... 96 more lines ...
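
The rest of the attachment is elided above.  For the record, what the remaining lines presumably boil down to is turning that ipcs list into a rendered graph; a minimal sketch of that step (using the python graphviz package, which may or may not be what the actual script uses) would be something like:

from graphviz import Digraph

g = Digraph('hosts', format='png')
for src, dst, n in ipcs:
    # one edge per sender/receiver pair, labelled with the number of IPC channels
    g.edge(src, dst, label=str(n))
g.render('hosts')   # writes hosts.png (plus the dot source)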
  9194   Thu Oct 3 08:57:00 2013   jamie   Update   Computer Scripts / Programs   pianosa can't find Jamie PPA

Quote:

Message on 'pianosa':

Failed to fetch http://ppa.launchpad.net/drgraefy/nds2-client/ubuntu/dists/lucid/main/binary-amd64/Packages.gz  404  Not Found

Sorry, that was an experiment to see if I could set up a general-use repository for the NDS packages.  I've removed it, and did an update/upgrade.

  9266   Wed Oct 23 17:30:17 2013   jamie   Update   SUS   ETMY sensors compared to ETMX

c1scy has been running slow (compared to c1scx, which does basically the exact same thing *) for many moons now.  We've looked at it but never been able to identify a reason why it should run slower.  I suspect there may be some BIOS setting that's problematic.

The RCG build process is totally convoluted, and really bad at reporting errors.  In fact, you need to be careful because the errors it does print are frequently totally misleading.  You have to look at the error logs for the full story.  The rtcds utility is ultimately just executing the "standard" build instructions.  The build directory is:

    /opt/rtcds/caltech/c1/rtbuild

The build/error logs are:

    <model>.log     <model>_error.log 
I'll add a command to rtcds to view the last logs.

(*) the phrase "basically the exact same thing" is LIGO code for "empirically not at all the same"
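
In the meantime, a trivial sketch of what such a "show me the last logs" helper might look like (just an illustration using the paths above, not the actual rtcds change):

#!/usr/bin/env python
# print the tail of the build and error logs for a given model
import sys, os

BUILD_DIR = '/opt/rtcds/caltech/c1/rtbuild'

model = sys.argv[1]
for suffix in ('.log', '_error.log'):
    path = os.path.join(BUILD_DIR, model + suffix)
    print('==> %s <==' % path)
    if os.path.exists(path):
        # last 30 lines of the log
        sys.stdout.writelines(open(path).readlines()[-30:])
    else:
        print('(no such file)')
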
  9278   Thu Oct 24 12:00:11 2013   jamie   Update   CDS   fb acquisition of slow channels

Quote:

 

 While that would be good - it doesn't address the EDCU problem at hand. After some verbal emailing, Jamie and I find that the master file in target/fb/ actually doesn't point to any of the EDCU files created by any of the FE machines. It is only using the C0EDCU.ini as well as the *_SLOW.ini files that were last edited in 2011 !!!

So....we have not been adding SLOW channels via the RCG build process for a couple years. Tomorrow morning, Jamie will edit the master file and fix this unless I get to it tonight. There a bunch of old .ini files in the daq/ dir that can be deleted too.

I took a look at the situation here so I think I have a better idea of what's going on (it's a mess, as usual):

The framebuilder looks at the "master" file

    /opt/rtcds/caltech/c1/target/fb/master

which lists a bunch of other files that contain lists of channels to acquire.  It looks like there might have been some notion to just use 

    /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini

as the master slow channels file.  Slow channels from all over the place have been added to this file, presumably by hand.  Maybe the idea was to just add slow channels manually as needed, instead of recording them all by default.  The full slow channels lists are in the

    /opt/rtcds/caltech/c1/chans/daq/C1EDCU_<model>.ini

files, none of which are listed in the fb master file.

There are also these old slow channel files, like

    /opt/rtcds/caltech/c1/chans/daq/SUS_SLOW.ini

There's a perplexing breakdown of channels spread out between these files and C0EDCU.ini:

controls@fb ~ 0$ grep MC3_URS /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
[C1:SUS-MC3_URSEN_OVERFLOW]
[C1:SUS-MC3_URSEN_OUTPUT]
controls@fb ~ 0$ grep MC3_URS /opt/rtcds/caltech/c1/chans/daq/MCS_SLOW.ini
[C1:SUS-MC3_URSEN_INMON]
[C1:SUS-MC3_URSEN_OUT16]
[C1:SUS-MC3_URSEN_EXCMON]
controls@fb ~ 0$

Why some of these channels are in one file and some in the other, I have no idea.  If the fb finds multiple copies of the same channel it will fail to start, so at least we've been diligent about keeping disparate lists in the different files.
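
As a sanity check on that, something like the following would flag any channel that shows up in more than one .ini file (a rough sketch; it matches the section headers the same way the greps above do):

#!/usr/bin/python
# rough sketch: find channel names that appear in more than one DAQ .ini file,
# since duplicate channels will keep daqd from starting
import glob, re
from collections import defaultdict

found = defaultdict(list)   # channel name -> list of .ini files containing it
for ini in glob.glob('/opt/rtcds/caltech/c1/chans/daq/*.ini'):
    for line in open(ini):
        m = re.match(r'\[(C1:.+)\]', line.strip())
        if m:
            found[m.group(1)].append(ini)

for chan, files in sorted(found.items()):
    if len(files) > 1:
        print("%s: %s" % (chan, ", ".join(files)))

(To be really correct you'd restrict this to the files actually listed in the fb master file, since the old unused .ini files in that directory would otherwise generate false alarms.)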

So I guess the question is if we want to automatically record all slow channels by default, in which case we add in the C1EDCU_<model>.ini files, or if we want to keep just adding them in by hand, in which case we keep the status quo.  In either case we should probably get rid of the *_SLOW.ini files (by maybe integrating their channels in C0EDCU.ini), since they're old and just confusing things.

In the mean time, I added C1:FEC-45_CPU_METER to C0EDCU.ini, so that we can keep track of the load there.

 

  9282   Thu Oct 24 17:26:35 2013   jamie   Update   CDS   new dataviewer installed; 'cdsutils avg' now working.

I installed a new version of dataviewer (2.3.2), and at the same time fixed the NDSSERVER issue we were having with cdsutils.  They should both be working now.

The problem turned out to be that I had set up our dataviewer to use the NDSSERVER environment variable, whereas by default it uses the LIGONDSIP variable.  Why we have two different environment variables that mean basically exactly the same thing, who knows.

  9285   Thu Oct 24 23:12:21 2013   jamie   Update   CDS   new dataviewer installed; no longer works on Ubuntu 10 workstations

Quote:

I installed a new version of dataviewer (2.3.2), and at the same time fixed the NDSSERVER issue we were having with cdsutils.  They should both be working now.

The problem turned out to be that I had set up our dataviewer to use the NDSSERVER environment variable, whereas by default it uses the LIGONDSIP variable.  Why we have two different environment variables that mean basically exactly the same thing, who knows.

 Dataviewer seems to run fine on Chiara (Ubuntu 12), but not on Rossa or Pianosa (Ubuntu 10), or Megatron, which I assume is also something medium-old.

We get the error:

controls@megatron:~ 0$ dataviewer
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Error in obtaining chan info.
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1

Sadface :(   We also get the popup saying "Couldn't connect to fb:8088"

  9287   Thu Oct 24 23:30:57 2013   jamie   Update   CDS   new dataviewer installed; no longer works on Ubuntu 10 workstations

Quote:

Quote:

I installed a new version of dataviewer (2.3.2), and at the same time fixed the NDSSERVER issue we were having with cdsutils.  They should both be working now.

The problem turned out to be that I had set up our dataviewer to use the NDSSERVER environment variable, whereas by default it uses the LIGONDSIP variable.  Why we have two different environment variables that mean basically exactly the same thing, who knows.

 Dataviewer seems to run fine on Chiara (Ubuntu 12), but not on Rossa or Pianosa (Ubuntu 10), or Megatron, which I assume is also something medium-old.

We get the error:

controls@megatron:~ 0$ dataviewer
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Error in obtaining chan info.
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1

Sadface :(   We also get the popup saying "Couldn't connect to fb:8088"

Sorry, that was a goof on my part.  It should be working now.

  9393   Fri Nov 15 10:49:55 2013   jamie   Update   CDS   Can't talk to AUXEY?

Please just try rebooting the vxworks machine.  I think there is a key on the card or crate that will reset the device.  These machines are "embedded" so they're designed to be hard reset, so don't worry, just restart the damn thing and see if that fixes the problem.

  9531   Tue Jan 7 23:08:01 2014   jamie   Update   CDS   /frames is full, causing daqd to die

Quote:

The daqd process is segfaulting and restarting itself every 30 seconds or so.  It's pretty frustrating. 

Just for kicks, I tried an mxstream restart, clearing the testpoints, and restarting the daqd process, but none of things changed anything.  

Manasa found an elog from a year ago (elog 7105 and preceding), but I'm not sure that it's a similar / related problem.  Jamie, please help us

The problem is not exactly the same as what's described in 7105, but the symptoms are so similar I assumed they must have a similar source.

And sure enough, /frames is completely full:

controls@fb /opt/rtcds/caltech/c1/target/fb 0$ df -h /frames/
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              13T   13T     0 100% /frames
controls@fb /opt/rtcds/caltech/c1/target/fb 0$

So the problem in both cases was that it couldn't write out the frames.  Unfortunately daqd is apparently too stupid to give us a reasonable error message about what's going on.

So why is /frames full?  Apparently the wiper script is either not running, or is failing to do its job.  My guess is that this is a side effect of the linux1 raid failure we had over xmas.

  9533   Tue Jan 7 23:13:47 2014   jamie   Update   CDS   /frames is full, causing daqd to die

Quote:

So why is /frames full?  Apparently the wiper script is either not running, or is failing to do its job.  My guess is that this is a side effect of the linux1 raid failure we had over xmas.

It actually looks like the wiper script has been running fine.  There is a log from Tuesday morning:

controls@fb ~ 0$ cat /opt/rtcds/caltech/c1/target/fb/wiper.log

Tue Jan  7 06:00:02 PST 2014

Directory disk usage:
/frames/trend/minute_raw 385289132k
/frames/trend/second 100891124k
/frames/full 12269554048k
/frames/trend/minute 1906772k
Combined 12757641076k or 12458633m or 12166Gb

/frames size 13460088620k at 94.78%
/frames is below keep value of 95.00%
Will not delete any files
df reported usage 97.72%
controls@fb ~ 0$

So now I'm wondering if something else has been filling up the frames today.  Has anything changed today that might cause more data than usual to be written to frames?

I'm manually running the wiper script now to clear up some /frames.  Hopefully that will solve the problem temporarily.
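
For reference, the decision the wiper log is reporting above is basically just a fill-fraction check against the 95% "keep value".  The actual script is wiper.pl (Perl); the following is only a Python illustration of that check, not a reimplementation:

import os

KEEP_PERCENT = 95.0   # the "keep value" the wiper log reports against

st = os.statvfs('/frames')
percent = 100.0 * (1.0 - float(st.f_bfree) / st.f_blocks)
print("/frames at %.2f%%" % percent)
if percent < KEEP_PERCENT:
    print("below keep value, would not delete any files")
else:
    print("above keep value, would start deleting the oldest full frames")

(Note that the wiper sums the directory sizes itself rather than asking the filesystem, which is why its 94.78% above disagrees with the 97.72% that df reports.)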

  9535   Tue Jan 7 23:50:27 2014   jamie   Update   CDS   /frames space cleared up, daqd stabilized

The wiper script is done and deleted a whole bunch of stuff to clean up some space:

controls@fb ~ 0$ /opt/rtcds/caltech/c1/target/fb/wiper.pl --delete

Tue Jan  7 23:09:21 PST 2014

Directory disk usage:
/frames/trend/minute_raw 385927520k
/frames/trend/second 125729084k
/frames/full 12552144324k
/frames/trend/minute 2311404k
Combined 13066112332k or 12759875m or 12460Gb

/frames size 13460088620k at 97.07%
/frames above keep value of 95.00%
Frame area size is 12401156668k
/frames/full size 12552144324k keep 11781098835k
/frames/trend/second size 125729084k keep 24802313k
/frames/trend/minute size 2311404k keep 620057k
Deleting some full frames to free 771045488k
- /frames/full/10685/C-R-1068567600-16.gwf
- /frames/full/10685/C-R-1068567616-16.gwf
...
controls@fb ~ 0$ df -h /frames
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              13T   12T  826G  94% /frames
controls@fb ~ 0$
So it cleaned up 826G of space.  It looks like the fb is stabilized for the moment.  On site folks should confirm...

 


  9618   Mon Feb 10 18:03:41 2014   jamie   Update   CDS   12 core c1sus replacement

I have configured one of the spare Supermicro X8DTU-F chassis as a dual-CPU, 12-core CDS front end machine.  This is meant to be a replacement for c1sus.  The extra cores are so we can split up c1rfm and reduce the over-cycle problems we've been seeing related to RFM IPC delays.

I pulled the machine fresh out of the box, and installed the second CPU and additional memory that Steve purchased.  The machine seems to be working fine.  After assigning it a temporary IP address, I can boot it from the front-end boot server on the martian network.  It comes up cleanly with both CPUs recognized, /proc/cpuinfo showing all 12 cores, and free showing 12 GB of memory.
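
(For the record, that core/memory check amounts to something like the following; just an illustration of what was eyeballed, not a script that was actually run:)

# count cores and total memory, equivalent to eyeballing /proc/cpuinfo and free
ncores = sum(1 for line in open('/proc/cpuinfo') if line.startswith('processor'))
mem_kb = [int(l.split()[1]) for l in open('/proc/meminfo') if l.startswith('MemTotal')][0]
print("%d cores, %.1f GB RAM" % (ncores, mem_kb / 1024.0 / 1024.0))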

The plan is:

  1. pull the old c1sus machine from the rack
  2. pull OneStop, Dolphin, RFM cards from c1sus chassis
  3. install OneStop, Dolphin, RFM cards into new c1sus
  4. install new c1sus back in rack
  5. power everything on and have it start back up with no problems

Obviously all of this needs to be done at a time when it won't interfere with locking work.  fwiw, I am around tomorrow (Tuesday, 2/11), but will likely be leaving for LHO on Wednesday.

  9727   Fri Mar 14 10:31:10 2014   jamie   Update   Green Locking   ALS Slow servo settings

Quote:

 

Q and I have started to...

 Ha!

  9798   Fri Apr 11 10:30:48 2014   jamie   Update   LSC   CARM and DARM both on IR signals!!!!!!!!!

Quote:

[EricQ, Jenne]

We're still working, but I'm really excited, so here's our news:  We are currently holding the IFO on all IR signals.  No green, no ALS is being used at all!!!!

 Phenomenal!!  Well done, guys!

  9822   Thu Apr 17 11:00:54 2014   jamie   Update   CDS   failed attempt to get Dolphin working on c1ioo

I've been trying to get c1ioo on the Dolphin network, but have not yet been successful.

Background: if we can put the c1ioo machine on the fast Dolphin IPC network, we can essentially eliminate latencies between the c1als model and the c1lsc model, which are currently connected via a Rube Goldberg-esque c1lsc->dolphin->c1sus->rfm->c1ioo configuration.

Rolf gave us a Dolphin host adapter card, and we purchased a Dolphin fiber cable to run from the 1X2 rack to the 1X4 rack where the Dolphin switch is.

Yesterday I installed the dolphin card into c1ioo.  Unfortunately, c1ioo, which is a Sun Fire X4600 and therefore different from the rest of the front end machines, doesn't seem to be recognizing the card.  The /etc/dolphin_present.sh script, which is supposed to detect the presence of the card by grep'ing for the string 'Stargen' in the lspci output, returns null.

I've tried moving the card to different PCIe slots, as well as swapping it out with another Dolphin host adapter that we have.  Neither worked.

I looked at the Dolphin host adapter installed in c1lsc and it's quite different, presumably a newer or older model.  Not sure if that has anything to do with anything.

I'm contacting Rolf to see if he has any other ideas.

  9824   Thu Apr 17 16:59:45 2014   jamie   Update   CDS   slightly more successful attempt to get Dolphin working on c1ioo

So it turns out that the card that Rolf had given me was not a Dolphin host adapter after all.  He did have an actual host adapter board on hand, though, and kindly let us take it.  And this one works!

I installed the new board in c1ioo, and it recognized it.  Upon boot, the dolphin configuration scripts managed to automatically recognize the card, load the necessary kernel modules, and configure it.  I'll describe below how I got everything working.

However, at some point mx_stream stopped working on c1ioo.  I have no idea why, and it shouldn't be related to any of this dolphin stuff at all.  But given that mx_stream stopped working at the same time the dolphin stuff started working, I didn't take any chances and completely backed out all the dolphin stuff on c1ioo, including removing the dolphin host adapter from the chassis all together.  Unfortunately that didn't fix any of the mx_stream issues, so mx_stream continues to not work on c1ioo.  I'll follow up in a separate post about that.  In the meantime, here's what I did to get dolphin working on c1ioo:

c1ioo Dolphin configuration

To get the new host recognized on the Dolphin network, I had to make a couple of changes to the dolphin manager setup on fb.  I referenced the following page:

https://cdswiki.ligo-la.caltech.edu/foswiki/bin/view/CDS/DolphinHowTo

Below are the two patches I made to the dolphin ("dis") config files on fb:

--- /etc/dis/dishosts.conf.bak    2014-04-17 09:31:08.000000000 -0700
+++ /etc/dis/dishosts.conf    2014-04-17 09:28:27.000000000 -0700
@@ -26,6 +26,8 @@
 ADAPTER:  c1sus_a0 8 0 4
 HOSTNAME: c1lsc
 ADAPTER:  c1lsc_a0 12 0 4
+HOSTNAME: c1ioo
+ADAPTER:  c1ioo_a0 16 0 4
 
 # Here we define a socket adapter in single mode.
 #SOCKETADAPTER: sockad_0 SINGLE 0

--- /etc/dis/networkmanager.conf.bak    2014-04-17 09:30:40.000000000 -0700
+++ /etc/dis/networkmanager.conf    2014-04-17 09:30:48.000000000 -0700
@@ -39,7 +39,7 @@
 # Number of nodes in X Dimension. If you are using a single ring, please
 # specify number of nodes in ring.
 
--dimensionX 2;
+-dimensionX 3;
 
 # Number of nodes in Y Dimension.

I then had to restart the DIS network manager for these changes to take effect:

$ sudo /etc/init.d/dis_networkmgr restart

I then rebooted c1ioo one more time, after which c1ioo showed up in the dxadmin GUI.

At this point I tried adding a dolphin IPC connection between c1als and c1lsc to see if it worked.  Unfortunately everything crashed every time I tried to run the models (including models on other machines!).  The problem was that I had forgotten to tell the c1ioo IOP (c1x03) to use PCIe RFM (i.e. Dolphin).  This is done by adding the following flag to the cdsParameters block in the IOP:

pciRfm=1

Once this was added, the IOP was rebuilt/installed/restarted and came back up fine.  The c1als model with the dolphin output also came up fine.

However, at this point I ran into the c1ioo mx_stream problem and started backing everything out.

 

  9825   Thu Apr 17 17:15:54 2014   jamie   Update   CDS   mx_stream not starting on c1ioo

While trying to get dolphin working on c1ioo, the c1ioo mx_stream processes mysteriously stopped working.  The mx_stream process itself just won't start now.  I have no idea why, or what could have happened to cause this change.  I was working on PCIe dolphin stuff, but have since backed out everything that I had done, and still the c1ioo mx_stream process will not start.

mx_stream relies on the open-mx kernel module, but that appears to be fine:

controls@c1ioo ~ 0$ /opt/open-mx/bin/omx_info  
Open-MX version 1.3.901
 build: root@fb:/root/open-mx-1.3.901 Wed Feb 23 11:13:17 PST 2011

Found 1 boards (32 max) supporting 32 endpoints each:
 c1ioo:0 (board #0 name eth1 addr 00:14:4f:40:64:25)
   managed by driver 'e1000'
   attached to numa node 0

Peer table is ready, mapper is 00:30:48:d6:11:17
================================================
  0) 00:14:4f:40:64:25 c1ioo:0
  1) 00:30:48:d6:11:17 c1iscey:0
  2) 00:25:90:0d:75:bb c1sus:0
  3) 00:30:48:be:11:5d c1iscex:0
  4) 00:30:48:bf:69:4f c1lsc:0
controls@c1ioo ~ 0$ 

However, trying to start mx_stream now fails:

controls@c1ioo ~ 0$ /opt/rtcds/caltech/c1/target/fb/mx_stream -s c1x03 c1ioo c1als -d fb:0
c1x03
mmapped address is 0x7f885f576000
mapped at 0x7f885f576000
send len = 263596
OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
mx_connect failed
controls@c1ioo ~ 1$ 

I'm not quite sure how to interpret this error message.  The "00:00:00:00:00:00" has the form of a 48-bit MAC address that would be used for a hardware identifier, a la the second column of the OMX "peer table" above, although of course all zeros is not an actual address.  So there's some disconnect between mx_stream and the actual omx configuration stuff that's running underneath.

Again, I have no idea what happened.  I spoke to Rolf and he's going to try to help sort this out tomorrow.

Attachment 1: c1ioo_no_mx_stream.png
c1ioo_no_mx_stream.png
  9831   Fri Apr 18 19:05:17 2014   jamie   Update   CDS   mx_stream not starting on c1ioo

Quote:

To fix open-mx connection to c1ioo, had to restart the mx mapper on fb machine. Command is /opt/mx/sbin/mx_start_mapper, to be run as root. Once this was done, omx_info on c1ioo computer showed fb:0 in the table and mx_stream started back up on its own. 

Thanks so much Rolf (and Keith)!

  9881   Wed Apr 30 17:07:19 2014   jamie   Update   CDS   c1ioo now on Dolphin network

The c1ioo host is now fully on the dolphin network!

After the mx stream issue from two weeks ago was resolved and determined to not be due to the introduction of dolphin on c1ioo, I went ahead and re-installed the dolphin host adapter card on c1ioo.  The Dolphin network configurations changes I made during the first attempt (see previous log in thread) were still in place.  Once I rebooted the c1ioo machine, everything came up fine:

dolphin.png

We then tested the interface by making a cdsIPCx-PCIE connection between the c1ioo/c1als model and the c1lsc/c1lsc model for the ALS-X beat note fine phase signal.  We then locked both ALS X and Y, and compared the signals against the existing ALS-Y beat note phase connection that passes through c1sus/c1rfm via an RFM IPC:

ALSXonDolphin.pdf

The signal is perfectly coherent and we've gained ~25 degrees of phase at 1 kHz.  EricQ calculates that the delay for this signal has changed from

122 us -> 61 us

I then went ahead and made the needed modifications for ALS-Y as well, and removed ALS->LSC stuff in the c1rfm model.

Next up: move the RFM card from the c1sus machine to the c1lsc machine, and eliminate c1sus/c1rfm model entirely.

  9882   Wed Apr 30 17:45:34 2014   jamie   Update   CDS   c1ioo now on Dolphin network

For reference, here are the new IPC entries that were made for the ALS X/Y phase between c1als and c1lsc:

controls@fb ~ 0$ egrep -A5 'C1:ALS-(X|Y)_PHASE' /opt/rtcds/caltech/c1/chans/ipc/C1.ipc
[C1:ALS-Y_PHASE]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=114
desc=Automatically generated by feCodeGen.pl on 2014_Apr_17_14:27:41
--
[C1:ALS-X_PHASE]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=115
desc=Automatically generated by feCodeGen.pl on 2014_Apr_17_14:28:53
controls@fb ~ 0$ 

After all this IPC cleanup is done we should go through and clean out all the defunct entries from the C1.ipc file.
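
Since that cleanup will mean going through C1.ipc by hand, here's a rough sketch of an inventory pass over the file (assuming the simple [name] / key=value format shown above; this is not an existing tool):

#!/usr/bin/python
# tally C1.ipc entries by host and IPC type, as a starting point for finding
# defunct entries
from collections import defaultdict

entries = defaultdict(dict)
name = None
for line in open('/opt/rtcds/caltech/c1/chans/ipc/C1.ipc'):
    line = line.strip()
    if line.startswith('[') and line.endswith(']'):
        name = line[1:-1]
    elif '=' in line and name is not None:
        key, val = line.split('=', 1)
        entries[name][key] = val

counts = defaultdict(int)
for fields in entries.values():
    counts[(fields.get('ipcHost', '?'), fields.get('ipcType', '?'))] += 1
for (host, typ), n in sorted(counts.items()):
    print("%-8s %-6s %3d entries" % (host, typ, n))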

  9883   Wed Apr 30 18:06:06 2014   jamie   Update   CDS   POP QPD signals now on dolphin

The POP QPD X/Y/SUM signals, which are acquired in c1ioo, are now being broadcast over dolphin.  c1ass was modified to pick them up there as well:

c1ioo-POPQPD.png  c1ass-POPQPD.png

Here are the new IPC entries:

controls@fb ~ 0$ egrep -A5 'C1:IOO-POP' /opt/rtcds/caltech/c1/chans/ipc/C1.ipc
[C1:IOO-POP_QPD_SUM]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=116
desc=Automatically generated by feCodeGen.pl on 2014_Apr_30_17:33:22
--
[C1:IOO-POP_QPD_X]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=117
desc=Automatically generated by feCodeGen.pl on 2014_Apr_30_17:33:22
--
[C1:IOO-POP_QPD_Y]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=118
desc=Automatically generated by feCodeGen.pl on 2014_Apr_30_17:33:22
controls@fb ~ 0$ 

Both c1ioo and c1ass were rebuilt/installed/restarted, and everything came up fine.

The corresponding cruft was removed from c1rfm, which was also rebuilt/installed/restarted.

  9890   Thu May 1 10:23:42 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Steve and I nicely routed the dolphin fiber from c1ioo in the 1X2 rack to the dolphin switch in the 1X4 rack.  I shut down c1ioo before removing the fiber, but all the dolphin-connected models still crashed.  After the fiber was run, I brought c1ioo back and restarted all the wedged models.  Everything is green again:

green.png

  9903   Fri May 2 11:14:47 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Quote:

This C1IOO business seems to be wiping out the MC2_TRANS QPD servo settings each day.   What kind of BURT is being done to recover our settings after each of these activities?

(also we had to do mxstream restart on c1sus twice so far tonight -- not unusual, just keeping track)

I don't see how the work I did would affect this stuff, but I'll look into it.  I didn't touch the MC2 trans QPD signals.  Also nothing I did has anything to do with BURT.  I didn't change any channels, I only swapped out the IPCs.

  9910   Mon May 5 19:34:54 2014   jamie   Update   CDS   c1ioo/c1ioo control output IPCs changed to PCIE Dolphin

Now that c1ioo is on the Dolphin network, I changed the c1ioo MC{1,2,3}_{PIT,YAW} and MC{L,F} outputs to go out over the Dolphin network rather than the old RFM network.

Two models, c1mcs and c1oaf, are ultimately the consumers of these outputs.  Now they are picking up the new PCIE IPC channels directly, rather than from any sort of RFM/PCIE proxy hops.  This should improve the phase for these channels a bit, as well as reduce complexity and clutter.  More stuff was removed from c1rfm as well, moving us to the goal of getting rid of that model entirely.

c1ioo, c1mcs, and c1rfm were all rebuilt/installed/restarted, and all came back fine.  The mode cleaner relocked once we reenabled the autolocker.

c1oaf, on the other hand, is not building.  It's not building even before the changes I attempted, though.  I tried reverting c1oaf back to what is in the SVN (which also corresponds to what is currently running) and it doesn't compile either:

controls@c1lsc ~ 2$ rtcds build c1oaf
buildd: /opt/rtcds/caltech/c1/rtbuild
### building c1oaf...
Cleaning c1oaf...
Done
Parsing the model c1oaf...
YARM_BLRMS_SEIS_CLASS TP
YARM_BLRMS_SEIS_CLASS_EQ TP
YARM_BLRMS_SEIS_CLASS_QUIET TP
YARM_BLRMS_SEIS_CLASS_TRUCK TP
YARM_BLRMS_S_CLASS EpicsOut
YARM_BLRMS_S_CLASS_EQ EpicsOut
YARM_BLRMS_S_CLASS_QUIET EpicsOut
YARM_BLRMS_S_CLASS_TRUCK EpicsOut
YARM_BLRMS_classify_seismic FunctionCall
Please check the model for missing links around these parts.
make[1]: *** [c1oaf] Error 1
make: *** [c1oaf] Error 1
controls@c1lsc ~ 2$ 

I've been trying to debug it but have had no success.  For the time being I'm shutting off the c1oaf model, since it's now looking for bogus signals on RFM, until we can figure out what's wrong with it. 

Attachment 1: ioo-ipc.png
ioo-ipc.png
  9911   Mon May 5 19:51:56 2014   jamie   Update   CDS   c1oaf model broken because of broken BLRMS block

I finally tracked down the problem with the c1oaf model to the BLRMS part:

/opt/rtcds/userapps/release/cds/common/models/BLRMS.mdl

blrms-hot-mess.png  sddefault.jpg

Note that this is pulling from a cds/common location, so presumably this is a part that's also being used at the sites.

Either there was an svn up that pulled in something new and broken, or the local version is broken, or who knows what.

We'll have to figure how what's going on here, but in the mean time, as I already mentioned, I'm leaving the c1oaf model off for now.

 RXA: also...we updated Ottavia to Ubuntu 12 LTS...but now it has no working network connection. Needs help.  (which of course has nothing whatsoever to do with this point )
