40m Log, Page 245 of 341
ID   Date   Author   Type   Category   Subject
11231   Tue Apr 21 15:03:27 2015   rana   Update   Optical Levers   1103P noise measurement

It doesn't work with the lens in there, but it seems pretty close. Please leave it as is and I'll play with it after 5 today.

11232   Tue Apr 21 21:46:34 2015   rana   Update   Optical Levers   1103P noise measurement

To test what the inherent angular noise of the HeNe 1103P laser is, we're testing it on a table pointing into the BS OL QPD with only a few steering mirrors.

From the setup that I found today, I've removed the lens nearest to the laser (which was used for the BS and PRM) as well as the ND filter (what was this for?) and the lens placed just before the BS QPD.

With the ND filter removed, the quadrant signals are now ~15000 if we misalign it and ~9000 each with the beam centered.

In order to calibrate the OLPIT_IN1 and OLYAW_IN1 signals into mm of beam motion, I misaligned the mirror just before the QPD. The knobs on there actuate the 100 TPI screws and the knurling on the knob itself has 10 ridges, so that's 36 deg per bump.

Pit Knob (deg)   OLPIT     Yaw Knob (deg)   OLYAW
      0            29            0           -36
     45            13           36           -16
     90           -16           72            19
    135           -39          108            36

PIT cal ~ 1.55 (knob deg / count) -->> 10 microns / count --->>> 10 urad / count

YAW cal ~ 1 (knob deg / count)  -->> 6.5 microns / count --->>> 6.5 urad / count

Distance from the 45 deg turning mirror to the QPD silicon surface is 23 cm. Distance between knob tip and fixed pivot point is ~4 cm. 1 knob turn = 0.01" = 0.254 mm = 0.254/40 radians of mirror angle.

So 360 deg of knob gives 2*0.254/40 = 0.012 radians of beam angle = 0.012 * 230 mm ~2.3 mm of beam spot motion. Or 6.4 microns of translation / deg of knob.

The distance from the face of the laser to the QPD is 96 cm.
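For reference, here is a minimal Python sketch of the count-to-angle conversion chain described above, using the ~6.4 microns of spot translation per knob degree, the rough knob-deg-per-count slopes from the table, and the 96 cm laser-to-QPD distance. The numbers are taken from this entry; the script is illustrative, not the actual analysis code.

# Sketch of the OL calibration chain; all numbers are the rough values quoted above.
UM_PER_KNOB_DEG = 6.4                 # spot translation at the QPD per degree of knob
LEVER_ARM_M = 0.96                    # laser face to QPD distance
slopes = {"PIT": 1.55, "YAW": 1.0}    # knob deg per count, eyeballed from the table

for dof, deg_per_count in slopes.items():
    um_per_count = deg_per_count * UM_PER_KNOB_DEG    # microns of spot motion per count
    urad_per_count = um_per_count / LEVER_ARM_M       # beam angle referred to the laser, urad per count
    print(f"{dof}: {um_per_count:.1f} um/count, {urad_per_count:.1f} urad/count")
# gives roughly 10 um/count -> ~10 urad/count (PIT) and ~6.5 um/count -> ~7 urad/count (YAW)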


The punchline is that the laser shows a level of noise which has a similar shape to what's seen at LLO, but 10x lower.

The noise at 0.05 - 0.2 Hz is ~2-3x worse than the PR3 at LLO. Not sure if this is inherent to the HeNe or the wind in our setup.

11233   Wed Apr 22 11:21:51 2015   Steve   Update   Optical Levers   BS & PRM oplevs are back to normal

The BS & PRM oplevs are restored. Note: the f = -150 lens right after the first turning mirror from the laser was removed. This helped Rana get a small spot on the QPD.

It also means that the oplev paths are somewhat different now.

 

 

 

11234   Wed Apr 22 11:43:28 2015   Steve   Update   SUS   ETMX damping restored

ETMX sus damping restored.

11237   Wed Apr 22 17:04:11 2015   rana   Update   Electronics   MC REFL PD back from the dead

Just randomly found this old entry from 3 years ago. We should never have installed a GAP 2000 - they are an inferior type of InGaAs diode. We should add to our list replacing these with a 2 mm EG&G diode.

How many 2 mm EG&G InGaAs diodes do we have Steve? Can you please find a good clean diode case so that we can store them in the optics cabinet on the south arm?

Quote:

 [Yuta, Manasa]

We replaced the dead photodiode on MC REFL PD with a new one (GAP 2000). We measured the frequency response of the PD and tuned the resonant frequency using inductor L5 (in the circuit diagram) to be 29.575MHz - over an average of 10 measurements.

 

11238   Thu Apr 23 08:43:40 2015   Steve   Update   Electronics   EG&G InGaAs diodes in stock

RFpds box is moved from RF cabinet E4 to clean cabinet S15

Inventory updated at https://wiki-40m.ligo.caltech.edu/RF_Pd_Inventory

Large Area InGaAs PIN Photodiode -- C30642GH      6 pieces in stock


Large area InGaAs PIN photodiode with a useful diameter of 2.0 mm in a TO-5 package with a flat glass window. The C30642GH provides high quantum efficiency from 800 nm to 1700 nm. It features high responsivity, high shunt resistance, low dark current, low capacitance for fast response time, and uniformity within 2% across the detector active area.

11239   Thu Apr 23 15:40:41 2015   Steve   Update   General   torque for 1/4-20

 

A few 1/4-20 socket head cap screws with washers were tested for optimum torque.

A Snap-On QJR 117E torque wrench was used. I found that 40 inch-lbs was enough.

Looked up recommended values on the web later:

Our Thorlabs SS 1/4-20 screw kits are 18-8 stainless steel: 70 inch-lbs max dry, 60 inch-lbs max lubricated.

These numbers will vary with washers, the material being threaded into, and so on!

Black-oxide alloy steel socket head cap screws can take a much higher value:

Thread Size                 1/4"-20
Length                      1/4"
Thread Length               Full
Additional Specifications   Black-Oxide Alloy Steel
RoHS                        Compliant

The standard among high-strength fasteners, these screws are stronger than Grade 8 steel screws. They have a minimum tensile strength of 170,000 psi and a minimum Rockwell hardness of C37. Length is measured from under the head.

Inch screws have a Class 3A thread fit. They meet ASTM A574.

Black Oxide—Screws have been heat-treated for hardness, which results in a dark surface color.

 
 
 

We still do not know what torque value gives the best performance: minimum jiggle, drift, etc.

After looking at these numbers I raise my recommendation to 50 inch-lbs for a standard application.

Rana is next to calibrate his feelings and declare the right number.

Then Koji... and so on.

Once we have a number, I will buy more torque wrenches to fit it.

11240   Thu Apr 23 21:05:23 2015   rana   Update   Computer Scripts / Programs   CDSutils upgrade undone

Q: please update this Wiki page with the go-back procedure:

https://wiki-40m.ligo.caltech.edu/CDSutils_Upgrade_Procedure

11242   Fri Apr 24 01:16:30 2015   Jenne   Update   ASC   Broken Xass?

I ran the "off" script for the Xarm ASS, followed by the "on" script, and now the Xarm ASS doesn't work.  Usually we just run the freeze/unfreeze, but I ran the off/on scripts one time. 

Koji, if you have some time tomorrow, can you please look at it?  I am sorry to ask, but it would be very helpful if I could keep working on other things while the ASS is taken care of.

Steve, can you please find a cable that goes from the LSC rack to the IOO rack (1Y2 to 1X2), or lay a new one?  It must be one single long cable, without barrel connectors joining pieces together.  This will help me actuate on the Marconi using the LSC rack's DAC.

Thank you!!

11243   Fri Apr 24 17:30:32 2015   Jenne   Update   VAC   Pressure watch script broken
Quote:

I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi.

The script checking the N2 pressure is not working.  I signed into the foteee account to look at some of the picasa photos, and there are thousands of emails (one every 10 minutes for the past month!) with error messages.  Q, can you please make it stop (having errors)?

The error looks like it's mad about a "caget" command.  I don't have time to investigate further though.

11244   Fri Apr 24 18:13:36 2015   rana   Update   General   torque for 1/4-20

For 1/4-20 bolts made of 18-8 Stainless Steel, the recommended torque varies from 65-100 inch-pounds, depending upon the application, the lubrication, how loose the bolt is, if there's a washer, etc.

For our case, where we are going into a tapped, ferromagnetic stainless table, it's less clear, but it will certainly be in the 60-80 inch-lb range. This is close to the 5-6 foot-lbs that I recommended on Wednesday.

I've ordered 3 torque wrenches with 1/4" drive so that we can have one at each end and one in the toolbox near MC2. We'll indicate the recommended torque on there so that we can tighten everything appropriately.

11246   Fri Apr 24 23:40:15 2015   rana   Update   SUS   PRM and BS oplev laser replaced

Recently, Steve replaced the HeNe which was sourcing the BS & PRM OL. After replacement, no one checked the beam sizes and we've been living with a mostly broken BS OL. The beam spot on the QPD was so tiny that we were seeing the 'beam is nearly the size of the segment gap' effect.

Today I removed 2 of the lenses which were in the beam path: one removed from the common PRM/BS path, and one removed from the PRM path. The beams on both the BS & PRM got bigger. The BS beam is bigger by a factor of 7. I've increased the loop gains by a factor of 6 and now the UGFs are ~6 Hz. The loop gains were much too high with the small beam spots that Steve had left there. I would prefer for the beams to be ~1.5-2x smaller than they are now, but it's not terrible.

Many of the mounts on the table are low quality and not constructed stably. One of the PRM turning mirror mounts twisted all the way around when I tried to align it. This table needs some help this summer.

In the future: never try locking after an OL laser change. Always redo the telescope and alignment and check the servo shape before the OL job is done.

Also, I reduced the height of the RG3.3 in the OL loops from 30 to 18 dB. The BS OL loops were conditionally stable before and that's a no-no. It makes them oscillate if they saturate.

11247   Sat Apr 25 00:20:16 2015   rana   Update   CDS   megatron python autoMC cron

Upgraded python on megatron. Added lines to the crontab to run autoMX.py. Edited crontab to have a PYTHONPATH so that it can run .py stuff.

But autoMX.py is still not working from inside of cron, just from command line.

11248   Sat Apr 25 03:32:45 2015   Koji   Update   ASC   Broken Xass?

I spent a day trying to fix the XARM ASS, but with no real result. If the input of the 6th DOF servo is turned off, the other error signals are happily squished to around their zeros. So this gives us some sort of alignment control. But obviously a particular combination of the misalignments is left uncontrolled.

This 6th DOF uses the BS to minimize the dither in ITMX yaw. I tried to use the other actuators but failed to find a linear coupling between the actuator and the sensor.


During the investigation, I compared the TRX/TRY power spectra. TRX had a bump at 30 Hz. Further investigation revealed that POX/POY had a big bump in the error signals. The POX/POY error signals between 10-100 Hz were coherent. This means that this is coming from the residual frequency noise of the laser stabilized to the MC. (Is this frequency noise level reasonable?)

The mysterious discovery was that the bump in the transmission exists only in TRX. How did the residual frequency noise cause the intensity noise of the transmission? One way is a PDH offset.

Anyway, Rana pointed out that IMC WFS QPDs had large spot offsets. Rana went to the AS table and fixed the WFS spot centering.
This actually removed the bump in TRX although we still don't know the mechanism of this coupling.

The bump at 30Hz was removed. However, the ASS issue still remains.

11249   Sat Apr 25 18:50:47 2015   ericq   Update   VAC   Pressure watch script broken

Ugh, this turns out to be because cron doesn't source the controls bashrc that defines where to find caget and all that jazz that many commands depend on. This is probably also why the AutoMX cron job isn't working either. 

Also, cron automatically emails everything from stderr to the email address that is configured for the user, which is why the n2 script blew up the foteee account and why the AutoMX script was blowing up my email yesterday. This can be avoided by doing something like this in the crontab:

0 8 * * * /bin/somecommand >> somefile.log 2>&1

(The >> part means that the standard output is appended to some log file, while the 2>&1 means send the standard error stream to the same place as stdout)

I made this change for the n2 script, so the foteee email account should be safe from this script. I haven't figured out the right way to set up cron to have all the right $PATH and other environment stuff, such as epics may need, so the script is still not working. 
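One common fix, sketched below, is to define the environment inside the crontab itself or to source the controls bashrc per job. The paths shown are illustrative, not the actual 40m install.

# crontab sketch -- not the actual 40m crontab
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin:/path/to/epics/bin    # so cron can find caget/caput

# or, source the controls environment explicitly for a single job:
0 8 * * * bash -c 'source /home/controls/.bashrc; /path/to/n2check.sh' >> /path/to/n2check.log 2>&1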

11250   Sat Apr 25 22:17:49 2015   rana   Update   CDS   MXstream restart script working (beta)

Since python from crontab seemed intractable, I replaced autoMX.py with a soft link that points at autoMX.sh.

This is a simple BASH script that looks at the LSC FB stat (C1:DAQ-DC0_C1LSC_STATUS), and runs the restart mxstream script if it's non-zero.

So far it's run 5 times successfully. I guess this is good enough for now. Later on, someone ought to make it loop over the other FEs, but this ought to catch 99% of the FB issues.
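For reference, a script of this kind can be as simple as the sketch below (the channel name is from this entry; the script and log paths are illustrative, and it assumes the status channel reads back as an integer):

#!/bin/bash
# Sketch: restart mxstream if the LSC front end's FB status word is non-zero.
STATUS=$(caget -t C1:DAQ-DC0_C1LSC_STATUS)    # -t prints only the value

if [ "$STATUS" != "0" ]; then
    echo "$(date): C1LSC FB status = $STATUS, restarting mxstream" >> /path/to/autoMX.log
    /path/to/scripts/cds/restart_mxstreams.sh   # illustrative path to the restart script
fi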

11253   Sun Apr 26 01:10:18 2015   rana   Update   ASC   unBroken Xass?

Today I tried some things, but basically, lowering the input gain by 10 made the thing stable. In the attached screenshot, you can see what happens with the gain at 1. After a few cycles of oscillation, I turned the gain back to 0.1.

There still is an uncontrolled DoF, but that's just the way it is, since we only have one mirror (the BS) to steer into the X arm once the Y arm pointing is fixed.

Along the way, I also changed the phase for POX, just in case that was an issue. I changed it from +86 to +101 deg. The attached spectra show how that lowered the POX_Q noise.

I also changed the frequencies for ETM_P/Y dither from ~14/18 Hz to 11.31/14.13 Hz. This seemed to make no difference, but since the TR and PO signals were quieter there I left it like that.

This is probably OK for now and we can tune up the matrix by measuring some sensing matrix stuff again later.

11254   Sun Apr 26 14:17:40 2015   Jenne   Update   LSC   POXDC, POYDC unplugged for now

I have unplugged POXDC and POYDC from their whitening inputs.  They have labels on them indicating which whitening channel they belong to (POY=5, POX=6) on the DCPD whitening board.

TT3_LR's DAC output is Tee-ed, going to the POYDC input and also to an SR560 near the Marconi.

TT4_LR's DAC output is Tee-ed, going to the POXDC input and also to the CM board's ExcB input.

11255   Sun Apr 26 15:05:35 2015   Jenne   Update   ASC   unBroken Xass?

Thank you both.

I have updated the .snap file, so that it'll use these parameters, as Rana left them.  Also, so that the "unfreeze" script works without changes (since it wants to make the overall gain 1), I have changed the Xarm input matrix elements from 1 to 0.1, for all of them.  This should be equivalent to the overall gain being 0.1.
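Since the servo chain is linear, scaling every input matrix element by 0.1 is indeed the same as scaling the overall gain by 0.1; a two-line check of that claim with made-up numbers:

import numpy as np
M = np.random.randn(6, 6)   # stand-in ASS input matrix
s = np.random.randn(6)      # stand-in error signals
assert np.allclose(1.0 * (0.1 * M) @ s, 0.1 * (M @ s))   # gain 1 with scaled matrix == gain 0.1 with original matrix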

11256   Sun Apr 26 15:34:34 2015   Jenne   Update   SUS   PRM oplev centered

After last week's work on the BS/PRM oplev table, I think the PRM oplev got centered while the PRM was misaligned.  With the PRM aligned, the oplev spot was not on the QPD.  It has been centered.

11258   Mon Apr 27 01:13:08 2015   Jenne   Update   LSC   PRCL angular FF not working, no locking :(

I'm sad.  And frustrated. 

The PRCL angular feed forward is not working, and without it I am having a very difficult time keeping the PRMI locked while the arms are at high power (either buzzing, or the one time I got stable high power partway through the transition).  Obviously if the PRMI unlocks once CARM and DARM are mostly relying on the REFL signals, I lose the whole IFO. 

Q and I had been noticing over the last few weeks that the angular feed forward wasn't seeming quite as awesome as it did when I first implemented it.  We speculated that this was likely because we had started DC coupling the ITM optical levers, which changes the way seismic motion is propagated to cavity axis motion (since the ITMs are reacting differently).

Anyhow, today it does not work at all.  It just pushes the PRM until the PRMI loses lock. I am worried that, even though Rana re-tuned the BS and PRM oplev servos to be very similar to how they used to be, there is enough of a difference (especially when compounded with the DC coupled ITMs) that the feed forward transfer functions just aren't valid anymore.

Since this prevents whole IFO locking, I spent some time trying to get it back under control, although it's still not working. 

I remeasured the actuator transfer function of how moving PRM affects the sideband spot at the QPD, in the PRMI-only situation.  I didn't make a comparison plot for the yaw degree of freedom, but you can see that the pitch transfer function is pretty different below ~20Hz, which is the whole region that we care about.  In the plot below, black is from January (PRMI-only, no DC-coupled ITMs) and blue is from today (PRMI-only, with DC-coupled ITMs, and somewhat different BS/PRM oplev setup):

Pitch_oldVsNew.pdf

I calculated new Wiener filters, and tried to put them in, but sometimes (and I don't understand what the pattern is yet) I get "error" in the Alternate box, rather than the zpk version of my sos filter.  It seems to go away if you use fewer and fewer poles for fitting the Wiener filters, but then the fit is so poor that you're not going to get any subtraction (according to the residual estimation plot that uses the fitted filters rather than the ideal Wiener filters). The pitch filters could only handle 6 poles, although the yaw filters were fine with 20.

The feed forward just keeps pushing the PRM away though.  I flipped the signs on the Wiener filters, I tried recalculating without the actuator pre-filtering, I don't know why it's failing.  But, I'm not able to lock the interferometer.  Which sucks, because I was hoping to finally get most of my noise coupling measurements done today.
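For reference, the core of a time-domain FIR Wiener calculation (witness -> target) looks like the sketch below. This is the generic textbook recipe with hypothetical variable names, not the actual 40m feed-forward scripts.

import numpy as np
from scipy.linalg import toeplitz, solve

def fir_wiener(witness, target, ntaps=256):
    # FIR Wiener filter minimizing |target - w (*) witness|^2 for a single witness channel
    N = len(witness)
    r = np.array([np.dot(witness[:N - k], witness[k:]) for k in range(ntaps)])   # witness autocorrelation
    p = np.array([np.dot(witness[:N - k], target[k:]) for k in range(ntaps)])    # witness-target cross-correlation
    return solve(toeplitz(r), p)    # Wiener-Hopf equations: R w = p

# usage sketch with hypothetical data arrays:
# w = fir_wiener(seis_witness, prm_oplev_pit, ntaps=512)
# prediction = np.convolve(seis_witness, w)[:len(prm_oplev_pit)]
# residual = prm_oplev_pit - prediction    # what the subtraction would leave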

 

11259   Mon Apr 27 09:09:15 2015   Steve   Update   PEM   air cond filters checked

 

Quote:

 

Quote:

Yesterday morning was dusty. I wonder why?

The PRM sus damping was restored this morning.

Yesterday afternoon at 4 the dust count peaked at 70,000 counts.

Manasa's allergy was bad at the X-end yesterday. What is going on?

There was no wind and the CES neighbors did not do anything.

The air conditioning filters were checked by Chris. The 400-day plot shows 3 bad peaks, at 1-20, 2-5 & 2-19.

11260   Mon Apr 27 14:37:55 2015   Steve   Update   VAC   N2 pneumatic pressure watch

 

Quote:

Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected. 

The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)

This watch script gives little time to replace the N2 cylinder: when the regulated supply drops below 60 psi, the cylinder pressure is only about 60 psi as well.

It is more of a statement that V1 is closing, so act accordingly. It's only practical if you are in the lab.

Rana correctly pointed out that we need this message 24 hrs before it happens. This requires monitoring the total supply, not the regulated one.

So we need pressure transducers on each nitrogen cylinder, before the regulator. The sum of the two N2 cylinders when they are full is 4000-4500 psi.

The first email should be sent out at 1000 psi, as the sum of the two cylinders. This means that you have 1 day to replace a nitrogen cylinder.

Most of the time the daily consumption is 750 +/- 50 psi.

However, sometimes this variation goes up to ~750 +/- 150 psi.
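A back-of-the-envelope sketch of the proposed warning logic, using the thresholds and consumption numbers quoted in this entry (the summed-pressure channel does not exist yet, so everything here is illustrative):

# Sketch only: the 1000 psi warning level and ~750 psi/day consumption are from this entry.
WARN_PSI = 1000.0        # send the first email when the two-cylinder sum drops to this
DAILY_USE_PSI = 750.0    # typical daily consumption (+/- 50-150 psi)

def days_remaining(sum_pressure_psi, daily_use=DAILY_USE_PSI):
    return sum_pressure_psi / daily_use

for p in (4500, 2000, WARN_PSI):
    print(f"{p:6.0f} psi total -> ~{days_remaining(p):.1f} days of N2 left")
# at the 1000 psi warning level this is ~1.3 days, i.e. about one day of margin to swap a cylinder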

11261   Mon Apr 27 21:42:07 2015   rana   Update   VAC   Summary pages

We want to have a VAC page in the summaries, so Steve - please put a list of important channel names for the vacuum system into the elog so that we can start monitoring for trouble.

Also, anyone that has any ideas can feel free to just add a comment to the summary pages DisQus comment section with the 40m shared account or make your own account.

11262   Tue Apr 28 09:49:26 2015   Steve   Update   VAC   Vac Summary Channels

 

 

Channel                 Function                              Interlock action
C1:Vac-P1_pressure      IFO vacuum envelope pressure          at 3 mTorr close V1 and the PSL shutter
C1:Vac-P2_pressure      Maglev foreline pressure              at 6 Torr close V1
C1:Vac-P3_pressure      annuluses                             -
C1:Vac-CC1_pressure     IFO pressure                          at 1e-5 Torr close VM1
C1:Vac-CC4_pressure     RGA pressure                          -
C1:Vac-N2pres           valve pneumatic drive, 60-80 PSI      at 55 PSI close V1, at 45 PSI close all
(does not exist yet)    sum pressure of the 2 N2 cylinders    -

11263   Wed Apr 29 18:12:42 2015   rana   Update   Computer Scripts / Programs   nodus update

Installed libmotif3 and libmotif4 on nodus so that we can run dataviewer on there.

Also, the lscsoft repositories weren't set up for apt-get, so I added them following the instructions on the DASWG website:

https://www.lsc-group.phys.uwm.edu/daswg/download/repositories.html#debian

Then I installed libmetaio1 and libfftw3-3. Now, rather than complain about missing libraries, diaggui just silently dies.

Then I noticed that the awggui error message tells us to use 'ssh -Y' instead of 'ssh -X'. Using that I could run DTT on nodus from my office.

11264   Thu Apr 30 16:30:25 2015   Steve   Update   VAC   N2 pneumatic pressure watch set up

We have 2 transducers (PX303-3KG5V, http://www.omega.com/pressure/pdf/PX303.pdf). They will be installed on the output of the N2 cylinders to read the supply pressure.

I will order one DC power supply (PSU-93): http://www.omega.com/pptst/PSU93_FPW15.html

One full cylinder pressure is ~2400 PSI max, so two of them will give us ~9 Vdc.

The email reminder should be sent at 1000 PSI = 1.8 V.

 

 

 

 

11265   Fri May 1 13:22:08 2015   ericq   Update   DAQ   PEM Slow channels added to saved frames

Rana asked me to add the slow outputs (OUT16) of the seismometer BLRMS channels to the frames.

All of the PEM slow channels are already set up in c1/chans/daq/C1EDCU_PEM.ini, but up to this point, daqd had no knowledge of this file, since it wasn't included in c1/target/fb/master, which defines all the places to look for files describing channels to be written to disk. This file already includes lines for C1EDCU_LSC.ini and such, which, from old elogs, looks like it was set up by hand for the subsystems we care about.

Hence, since we now care about slow trends for the PEM subsystem, I have added a line to the daqd master file to tell it to save the PEM slow channels. This looks to have increased the size of the individual 16 second frame files from 57MB to 59MB, which isn't so bad.
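For context, the fb master file is essentially a list of configuration-file paths that daqd reads at startup, so the change amounts to adding one line of roughly the form below (the paths follow the locations named above, but this excerpt is a sketch, not a verbatim copy of the real file):

# excerpt sketch of c1/target/fb/master -- each line points daqd at a channel-definition file
/opt/rtcds/caltech/c1/chans/daq/C1EDCU_LSC.ini
# the new line that makes daqd aware of the PEM slow channels:
/opt/rtcds/caltech/c1/chans/daq/C1EDCU_PEM.ini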

11266   Fri May 1 16:42:42 2015   rana   Update   DAQ   PEM Slow channels added to saved frames

Still processing, but I think it should work fine once we have a day of data. Until then, here's the summary pages so far, including Vac channels:

http://www.ligo.caltech.edu/~misi/summary/day/20150501/pem/

11269   Sun May 3 19:40:51 2015   rana   Update   ASC   Sunday maintenance: alignment, OL center, seismo, temp sensors

X arm was far out in yaw, so I reran the ASS for Y and then X. Ran OK; the offload from ASS outputs to SUS bias is still pretty violent - needs smoother ramping.

After this I recentered the ITMX OL - it was off by 50 microradians in pitch. Just like the BS/PRM OLs, this one has a few badly assembled & flimsy mounts. Steve, please prepare for replacing the ITMX OL mirror mounts with the proper base/post/Polaris combo. I think we need ~3 of them. Pit/yaw loop measurements attached.

Based on the PEM-SEIS summary page, it looked like GUR1 was oscillating (and thereby saturating and suppressing the Z channel). So I power cycled both Guralps by turning off the interface box for ~30 seconds and the powering back on. Still not fixed; looks like the oscillations at 110 and 520 Hz have moved but GUR2_X/Y are suppressed above 1 Hz, and GUR1_Z is suppressed below 1 Hz. We need Jenne or Zach to come and use the Gur Paddle on these things to make them OK.

From the SUS-WatchDog summary page, it looked like the PRM tripped during the little 3.8 EQ at 4AM, so I un-tripped it.

Caryn's temperature sensors look like they're still plugged in. Does anyone know where they're connected?

11271   Mon May 4 12:35:49 2015   rana   Update   LSC   drift in Y arm

http://www.ligo.caltech.edu/~misi/summary/day/20150504/

I left the arms locked last night. Looks like the drift in the Y arm power is related to the Y arm control signal being much bigger than X.

Why would it be that Y > X  ?

11274   Tue May 5 16:02:57 2015   Steve   Update   VAC   Vac Summary Channels with description

As requested by the Boss.

It would be nice to read the current state (e.g. "Vacuum Normal", i.e. the valve configuration) from the EPICS screen via C1:Vac-state_mon.

Quote:

C1:Vac-P1_pressure
    Function:    Main volume of the 40m interferometer
    Description: P = Pirani gauge; pressure range ATM (760 Torr) down to 1e-4 Torr
    Interlock:   at 3 mTorr close V1 and the PSL shutter

C1:Vac-P2_pressure
    Function:    Maglev foreline pressure
    Description: The Maglev is the main pump of our vacuum system below 500 mTorr; its long-term pressure has to be <500 mTorr
    Interlock:   at 6 Torr close V1

C1:Vac-P3_pressure
    Function:    annuluses
    Description: Each chamber has its own annulus. These small volumes are independent of the main volume; their pressures are <5 mTorr in the vac-normal valve configuration
    Interlock:   -

C1:Vac-CC1_pressure
    Function:    IFO main volume
    Description: CC1 = cold cathode gauge (low emission); pressure range 1e-4 to 1e-10 Torr; in the vac-normal configuration CC1 = 2e-6 Torr
    Interlock:   at 1e-5 Torr close VM1

C1:Vac-CC4_pressure
    Function:    RGA pressure
    Description: In the vac-normal configuration CC1 = CC4
    Interlock:   -

C1:Vac-N2pres
    Function:    valve pneumatic drive
    Description: The N2 supply is regulated to a 60-80 PSI output at the auto cylinder changer
    Interlock:   at 55 PSI close V1, at 45 PSI close all

(does not exist yet)
    Function:    2 N2 cylinder sum pressure
    Description: Each cylinder pressure will be measured before the regulator and summed; a warning message will be sent at 1000 PSI
    Interlock:   -

11275   Fri May 8 08:16:46 2015   Steve   Update   LSC   drift in Y arm

Why is the Y arm drifting so much?

The " PSL FSS Slow Actuator Adjust " was brought back to range from 1.5 to 0.3 yesterday as ususual. Nothing else was touched.

I'm not sure if the timing scale is working correctly on theses summery plots. What is the definition of today?

The y-arm became much better as I noticed it at 5pm

 

11276   Fri May 8 14:30:09 2015   Steve   Update   VAC   CC1 cold cathode gauges are baked now

The CC1s are not reading any longer. This is an attempt to clean them over the weekend at 85 C.

These brand new gauges (model 10421002; s/n 11823 vertical, s/n 11837 horizontal) replaced the 11-year-old 421 gauges ( http://nsei.missouri.edu/manuals/hps-mks/421%20Cold%20Cathode%20Ionization%20Guage.pdf ) on 09-06-2012: http://nodus.ligo.caltech.edu:8080/40m/7441

Quote:

 

We have two cold cathode gauges at the pump spool and one signal cable to the controller: CC1 in the horizontal position and CC1 in the vertical position.

CC1 h stopped reading, so I moved the cable over to CC1 v.

 

11280   Mon May 11 13:21:25 2015   manasa   Update   CDS   c1lsp and c1sup not running

I found the c1lsp and c1sup models not running anymore on c1lsc (white blocks for status lights on medm).

To fix this, I ssh'd into c1lsc. c1lsc status did not show c1lsp and c1sup models running on it.

I tried the usual rtcds restart <model name> for both and that returned error "Cannot start/stop model 'c1XXX' on host c1lsc".

I also tried rtcds restart all on c1lsc, but that has NOT brought back the models alive.

Does anyone know how I can fix this??

c1sup runs some of the suspension controls. So I am afraid that the drift and frequent unlocking of the arms we see might be related to this.

 

P.S. We might also want to add the FE status channels to the summary pages.

11281   Mon May 11 13:26:02 2015   manasa   Update   IMC   MC_F calibration

The last MC_F calibration was done by Ayaka : Elog 7823

Quote:

And does anyone know what the MC_F calibration is?

 

11282   Mon May 11 14:08:19 2015   manasa   Update   CDS   c1lsp and c1sup removed?

I just found out that the c1lsp and c1sup models no longer exist on the FE status medm screens. I am assuming some changes were done to the models as well.

Earlier today, I was looking at some of the old medm screens running on Donatella that did not reflect this modification. 

Did I miss any elogs about this or was this change not elogged??

Quote:

I found the c1lsp and c1sup models not running anymore on c1lsc (white blocks for status lights on medm).

To fix this, I ssh'd into c1lsc. c1lsc status did not show c1lsp and c1sup models running on it.

I tried the usual rtcds restart <model name> for both and that returned error "Cannot start/stop model 'c1XXX' on host c1lsc".

I also tried rtcds restart all on c1lsc, but that has NOT brought back the models alive.

Does anyone know how I can fix this??

c1sup runs some of the suspension controls. So I am afraid that the drift and frequent unlocking of the arms we see might be related to this.

 

P.S. We might also want to add the FE status channels to the summary pages.

 

11283   Mon May 11 15:15:12 2015   manasa   Update   General   Ran ASS for arms

Arm powers had drifted to ~ 0.5 in transmission.

X and Y arms were locked and ASS'd to bring the arm transmission powers to ~1.

11284   Mon May 11 18:14:52 2015   rana   Update   IMC   MC_F calibration

I saw that entry, but it doesn't state what the calibration is in units of Hz/counts. It just gives the final calibrated spectrum.

11285   Tue May 12 08:51:08 2015   ericq   Update   CDS   c1lsp and c1sup removed?
Quote:

was this change not elogged??

This is my sin.

Back in February (around the 25th) I modified c1sus.mdl, removing the simulated plant connections we weren't using from c1lsp and c1sup. This was included in the model's svn log, but not elogged.

The models don't start with the rtcds restart shortcut, because I removed them from the c1lsc line in FB:/diskless/root/etc/rtsystab (or c1lsc:/etc/rtsystab). There is a commented-out line in there that can be uncommented to restore them to the list of models c1lsc is allowed to run.

However, I wouldn't suspect that the models not running should affect the suspension drift, since the connections from them to c1sus have been removed. If we still have trends from early February, we could look and see if the drift was happening before I made this change. 
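For reference, rtsystab is just a per-host whitelist of models, one line per front end, so the edit amounts to something like the sketch below (the full 40m model list is not reproduced here, so treat the entries as illustrative):

# sketch of FB:/diskless/root/etc/rtsystab -- each line: host name, then the models it may run
c1lsc   c1lsc c1ass ...
#c1lsc  c1lsc c1ass ... c1lsp c1sup     <- the commented-out variant that re-enables c1lsp and c1sup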

11286   Tue May 12 12:04:41 2015   manasa   Update   General   Some maintenance

* Relocked the IMC. I guess it was stuck somewhere in the autolocker loop. I disabled the autolocker and locked it manually. The autolocker has been re-enabled and seems to be running just fine.

* The X arm has been having trouble staying locked. There seemed to be some amount of gain peaking. I reduced the gain from 0.007 to 0.006.

* I disabled the triggered BounceRG filter (FM8) in the Xarm filter module. We already have a triggered Bounce filter (FM6) that takes care of the noise at the bounce/roll frequencies. FM8 was just adding too much gain at 16.5 Hz. Once this filter was disabled, the X arm lock has been much more stable.
Also, the Y arm doesn't use FM8 for locking either.

 

11287   Tue May 12 14:57:52 2015   Steve   Update   VAC   CC1 cold cathode gauges are baked now

Baking both CC1s at 85 C for 60 hrs did not help.

The temperature has been increased to 125 C and the bake is being repeated.

Quote:

The CC1s are not reading any longer. This is an attempt to clean them over the weekend at 85 C.

These brand new gauges (model 10421002; s/n 11823 vertical, s/n 11837 horizontal) replaced the 11-year-old 421 gauges ( http://nsei.missouri.edu/manuals/hps-mks/421%20Cold%20Cathode%20Ionization%20Guage.pdf ) on 09-06-2012: http://nodus.ligo.caltech.edu:8080/40m/7441

Quote:

 

We have two cold cathode gauges at the pump spool and one signal cable to the controller: CC1 in the horizontal position and CC1 in the vertical position.

CC1 h stopped reading, so I moved the cable over to CC1 v.

 

 

11288   Wed May 13 09:17:28 2015   rana   Update   Computer Scripts / Programs   rsync frames to LDAS cluster

Still seems to be running without causing FB issues. One thought is that we could look through the FB status channel trends and see if there is an excess of FB problems at 10 min after the hour, to see if it's causing trouble.

I also looked into our minute trend situation. Looks like the files are compressed and have checksums enabled. The size changes sometimes, but it's roughly 35 MB per hour. So 840 MB per day.

According to the wiper.pl script, it's trying to keep the minute-trend directory below some fixed fraction of the total /frames disk. The comment in the script says 0.005%,

but I'm dubious, since that's only 13 TB * 5e-5 ≈ 650 MB, and that would only keep us for a day. Maybe the comment should read 0.5% instead...
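A quick check of those numbers (values taken from this entry; this is just arithmetic, not part of wiper.pl):

# Rough check of the minute-trend quota vs. the ~840 MB/day being written.
frames_disk_MB = 13e6                    # 13 TB expressed in MB
minute_trend_MB_per_day = 840.0          # ~35 MB/hour from above

for fraction in (0.005e-2, 0.5e-2):      # the 0.005% in the comment vs. the suspected 0.5%
    quota_MB = frames_disk_MB * fraction
    days = quota_MB / minute_trend_MB_per_day
    print(f"{fraction * 100:.3f}% of 13 TB = {quota_MB:,.0f} MB -> ~{days:.1f} days of minute trends")
# 0.005% -> ~650 MB (under a day of trends); 0.5% -> ~65 GB (a couple of months)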

Quote:

The rsync job to sync our frames over to the cluster has been on a 20 MB/s BW limit for awhile now.

Dan Kozak has now set up a cronjob to do this at 10 min after the hour, every hour. Let's see how this goes.

You can find the script and its logfile name by doing 'crontab -l' on nodus.

 

11291   Thu May 14 17:41:10 2015   rana   Update   PEM   weather station and Guralp maintenance

Today Steve and I tried to recenter the Guralps. The breakout box technique didn't work for us, so we just turned the leveling screws until we got the mass position outputs within +/-50 mV for all DoF as read out by the breakout box.

Some points:

  1. GUR1 is at the ETMY (E/W arm) and GUR2 is at the X-end (South arm)
  2. The SS containers are good and make a good seal.
  3. We had to replace the screws on the granite slab interface plate. The heads were too big to allow the connector to snap into place.
  4. The Guralps had been left way, way off level and the brass locking screws were all the way up. We locked them down after leveling today. Steve was blaming Cathy(?).
  5. The GUR1_Z channel now looks good - see the summary pages for the before and after behavior. My mistake; the low frequency is still as bad as before.
  6. GUR2 X/Y still look like there is either no whitening, or the masses are stuck, or the interface box is broken.
  7. When we first powered them up, a few of the channels of both seismometers showed 100-200 Hz oscillations. This has settled down after several minutes.

 

The attachment shows the 6 channels after our work. You can see that GUR2_X/Y still look deadish. I tried wiggling the cables at the interface box and powering on/off, but no luck. Next, we swap cables.

Tried to bring the weather station back to life, but no luck. The unit on the wall is alive and so is the EPICS IOC (c1pem1). But there is apparently no communication between them. Telnet into c1pem1 and the error message repeating at the prompt is:

Weather Monitor Output: NO COMM

Might be related to the flaky connector situation that Liz and I found there a couple summers ago, but I tried jiggling and reseating that one with no luck. Looks like it stopped working around 8 PM on March 24, 2014. That's the same time as a ~30s power outage, so perhaps we just need some more power cycling? Tried hitting the reset button on the VME card for c1pem1, but didn't change anything.

Let's try power cycling that crate (which has c1pem1, c0daqawg, and some GPS receiver)...nope - no luck.

Also tried power cycling the weather box which is near the BS chamber on the wall. This didn't change the error message at the c1pem1 telnet prompt.

11292   Fri May 15 16:18:28 2015   Steve   Update   VAC   Vac Operation Guide

The Vacuum Operation Guide is uploaded to the 40m wiki. This is an old master copy: not exact in terms of the real actions, but still a good guide to the logic.

Rana has promised to watch the N2 supply and change the cylinder when it is empty. I will be at Hanford next week.

11294   Sat May 16 21:05:24 2015   rana   Update   General   some status

1) Checked the N2 pressures: the unregulated cylinder pressures are both around 1500 PSI. How long until they get to 1000?

2) The IMC has been flaky for a day or so; don't know why. I moved the gains in the autolocker so now the input gain slider to the MC board is 10 dB higher and the output slider is 10 dB lower. This is updated in the mcdown and mcup scripts and both committed to SVN. The trend shows that the MC was wandering away after ~15 minutes of lock, so I suspected the WFS offsets. I ran the offsets script (after flipping the z servo signs and adding 'C1:' prefix). So far powers are good and stable.

3) pianosa was unresponsive and I couldn't ssh to it. I powered it off and then it came back.

4) Noticed that DAQD is restarting once per hour on the hour. Why?

5) Many (but not all) EPICS readbacks are whiting out every several minutes. I remote booted c1susaux since it was one of the victims, but it didn't change any behavior.

6) The ETMX and ITMX have very different bounce mode response: should add to our Vent Todo List. Double checked that the bounce/roll bandstop is on and at the right frequency for the bounce mode. Increased the stopband from 40 to 50 dB to see if that helps.

7) op340m is still running! The only reason to keep it alive is its crontab:

op340m:SUS>crontab -l

07 * * * * /opt/rtcds/caltech/c1/burt/autoburt/burt.cron >> /opt/rtcds/caltech/c1/burt/burtcron.log
#46 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo > /cvs/cds/caltech/logs/scripts/FSSslow.cronlog 2>&1
#14,44 * * * * /cvs/cds/caltech/conlog/bin/check_conlogger_and_restart_if_dead
15,45 * * * * /opt/rtcds/caltech/c1/scripts/SUS/rampdown.pl > /dev/null 2>&1
#10 * * * *  /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
#27 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/RCthermalPID.pl >/cvs/cds/caltech/logs/scripts/RCthermalPID.cronlog 2>&1

00 0 * * * /var/scripts/ntp.sh > /dev/null 2>&1
#00 4 * * * /opt/rtcds/caltech/c1/scripts/RGA/RGAlogger.cron >> /cvs/cds/caltech/users/rward/RGA/RGAcron.out 2>&1
#00 6 * * * /cvs/cds/scripts/backupScripts.pl
00 7 * * * /opt/rtcds/caltech/c1/scripts/AutoUpdate/update_conlog.cron
00 8 * * * /opt/rtcds/caltech/c1/scripts/crontab/backupCrontab

I added a new script (scripts/SUS/rampdown.py) which decrements the watchdog thresholds every 30 minutes if needed (a generic sketch of this kind of rampdown step is at the end of this entry). Added this to the megatron crontab and commented out the op340m crontab line. If this works for a while we can retire our last Solaris machine.

8) To see if we could get rid of the wandering PCDRIVE noise, I looked into the NPRO temperatures: they were T_crystal = 30.89 C, T_diode1 = 21 C, T_diode2 = 22 C. I moved the crystal temp up to 33.0 C, to see if it could make the noise more stable. Then I used the trimpots on the front of the controller to maximize the laser output at these temperatures; it was basically maximized already. Let's see if there's any qualitative difference after a week. I'm attaching the pinout for the DSUB25 diagnostics connector on the back of the box. Aidan is going to help us record this stuff with AcroMag tech so that we can see if there's any correlation with PCDRIVE. The shifts in FSS_SLOW coincident with the PCDRIVE noise correspond to ~100 MHz, so it seems like it could be NPRO related.
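Referring to item 7 above, a minimal sketch of what one watchdog-threshold rampdown step can look like when run from cron; this is not the actual rampdown.py, and the channel name, nominal value, and step size are placeholders:

#!/usr/bin/env python
# Sketch of one rampdown step: if a watchdog threshold is above its nominal
# value, step it back down. Channel and values below are placeholders.
import subprocess

CHANNEL = "C1:SUS-PRM_WD_MAX"    # hypothetical watchdog threshold channel
NOMINAL = 150.0                  # value to ramp back down to
STEP = 10.0                      # decrement per cron invocation (every 30 min)

def caget(ch):
    return float(subprocess.check_output(["caget", "-t", ch]).decode().strip())

def caput(ch, val):
    subprocess.check_call(["caput", ch, str(val)])

current = caget(CHANNEL)
if current > NOMINAL:
    caput(CHANNEL, max(NOMINAL, current - STEP))   # never step below the nominal threshold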

 

11295   Sat May 16 21:40:29 2015   rana   Update   PEM   Guralp maintenance

Tried swapping cables at the Guralp interface box side. It seems that all of our seismic signal problems have to do with the GUR2 cable being flaky (not surprising since it looks like it was patched with Orange Electrical tape!! rather than proper mechanical strain relief).

After swapping the cables today, the GUR2 DAQ channels all look fine: i.e. GUR1 (the one at the Y end) is fine, as is its cable and the GUR2 analog channels inside the interface box.

OTOH, the GUR1 DAQ channels (which have GUR2 (EX) connected into them) are too small by a factor of ~1000. Seems like that end of the cable will need to be remade. Luckily Jenne is still around this week and can point us to the pinout / instructions. Looks like there could be some shorting inside the backshell, so I've left it disconnected rather than risk damaging the seismometer. We should get a GUR1-style backshell to remake this cable. It might also be possible that the end at the seismometer is bad - Steve was supposed to swap the screws on the granite-aluminum plate on Thursday; I'll double check.

11296   Sun May 17 23:46:25 2015   rana   Update   ASC   IOO / Arm trends

Looking at the summary page trends from today, you can see that the MC transmission is pretty flat after I zeroed the MCWFS offsets. In addition, the transmission from both arms is also flat, indicating that our previous observation of long term drift in the Y arm transmission probably had more to do with bad Y-arm initial alignment than unbalanced ETMY coil-magnets.

Much like checking the N2 pressure, amount of coffee beans, frames backups, etc. we should put MC WFS offset adjustment into our periodic checklist. Would be good to have a reminder system that pings us to check these items and wait for confirmation that we have done so.

11297   Mon May 18 09:50:00 2015   ericq   Update   General   some status
Quote:

Added this to the megatron crontab and commented out the op340m crontab line. IF this works for awhile we can retire our last Solaris machine.

For some reason, my email address is the one that megatron complains to when cron commands fail; since 11:15PM last night, I've been getting emails that the rampdown.py line is failing, with the super-helpful message: expr: syntax error

11298   Mon May 18 11:59:07 2015   rana   Update   General   some status

Yes - my rampdown.py script correctly ramps down the watchdog thresholds. This replaces the old rampdown.pl Perl script that Rob and Dave Barker wrote.

Unfortunately, cron doesn't correctly inherit the bashrc environment variables, so it's having trouble running.

On a positive note, I've resurrected the MEDM Screenshot taking cron job, so now this webpage is alive (mostly) and you can check screens from remote:

https://nodus.ligo.caltech.edu:30889/medm/screenshot.html
