40m Log
ID | Date | Author | Type | Category | Subject
4819 | Wed Jun 15 00:49:34 2011 | Suresh | Update | IOO | WFS2 has been fixed.

 

The WFS2 sensor head had a damaged quadrant PIN photodiode (YAG-444-4A). It has been replaced with a YAG-444-4AH, which has a responsivity of 0.5 A/W.

[Photos: P6150121.JPG, P6150124.JPG]

The responsivity of each quadrant was measured at normal incidence.  A diagram of the setup with the relevant power levels is attached.  The precision of these measurements is about 5%, largely because the measured power levels are sensitive to the position of the laser beam on the power meter sensor head (an Ophir with its ND filter mask removed).  Putting the mask back on did not solve this problem.

The incident power was 0.491 mW, of which about 0.026 mW was reflected from the face of the QPD.  The beam was repositioned on the QPD to measure the response of each quadrant; in each case it was positioned to obtain the maximum DC output voltage from the relevant quadrant.  A small amount of spill-over was seen in the other quadrants.  The measurements are given below.

WFS2 DC output measurements (mV)

        Position 1   Position 2   Position 3   Position 4   Dark
  Q1       244          6.7          5.4          6.9         4
  Q2       5.9          238          8.4          5           5
  Q3       9            6.6          236          7.3         6
  Q4       7.5          7            7            252         7

[Setup diagram: WFS_QE_measurement.png]

To measure these DC outputs from the sensor head, a breakout board for the 25-pin D-type connector was used, as in the previous measurements.  The results are given below.

 

WFS2 Quantum Efficiency measurement

        DC out (mV)   Responsivity (A/W)   Quantum Efficiency (%)
  Q1        238             0.52                   60
  Q2        233             0.50                   59
  Q3        230             0.50                   58
  Q4        244             0.53                   61

 

The measured responsivity agrees with the manufacturer's specification.  Note that the previous QPD is reported to have a slightly smaller responsivity of 0.4 A/W at 1064 nm.  The data sheet is attached.

Since the new QPD may have a slightly different capacitance, the RF transfer function of WFS2 needs to be examined to verify the location of the resonances.
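As a cross-check on the tables above, here is a minimal sketch (Python) of the arithmetic, using the voltage-to-current conversion described in the quoted entries below (divide by 2 to undo the op-amp gain of 2, then the ~500 Ohm DC-path resistor).  It reproduces the tabulated numbers to within rounding of the last digit:

    from scipy.constants import h, c, e   # Planck constant, speed of light, electron charge

    lam = 1064e-9                    # laser wavelength [m]
    R_dc = 499.0                     # DC-path transimpedance resistor (R66) [Ohm]
    P_abs = 0.491e-3 - 0.026e-3      # incident minus reflected power [W]

    for quad, V_dc in [('Q1', 0.238), ('Q2', 0.233), ('Q3', 0.230), ('Q4', 0.244)]:
        I_pd = (V_dc / 2) / R_dc           # undo the x2 op-amp gain, then V -> A
        resp = I_pd / P_abs                # responsivity [A/W]
        qe = resp * h * c / (lam * e)      # QE = (h*c)/(lambda*e) * (I/P)
        print(f'{quad}: R = {resp:.2f} A/W, QE = {100 * qe:.0f}%')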

 

Quote:

[Larisa and Jenne]

A few weeks ago (on the 28th of January) I had tried to measure the quantum efficiency of one quadrant of the WFS as a function of angle.  However, Rana pointed out that I was a spaz, and had forgotten to put a lens in front of the laser.  Why I forgot when doing the measurement as a function of angle, but I had remembered while doing it at normal incidence for all of the quadrants, who knows?

Anyhow, Larisa measured the quantum efficiency today.  She used WFS2, quadrant 1 (totally oil-free), since that was easier than WFS1.  She also used the Jenne Laser (with a lens), since it's more stable and less crappy than the CrystaLasers.  We put a 50 Ohm terminator on the RF input of the Jenne Laser, since we weren't doing a swept sine measurement.  Again, the Ophir power meter was used to measure the power incident on the diode, and the reflected power, and the difference between them was used as the power absorbed by the diode for the quantum efficiency measurement.  A voltmeter was used to measure the output of the diode, and then converted to current as in the quote below. 

Still on the to-do list:  Replace the WFS2 diode.  See if we have one around, otherwise order one.  Align beams onto WFS so we can turn on the servo.

QE = (h*c)/(lambda*e) * (I/P)

where I = (voltage from Pin 1 to GND) / 2 / 500 Ohms,
P = (power from laser) - (power reflected from diode),
h, c, e are the usual physical constants, and lambda = 1064 nm.
Note that I/P is the responsivity.


Larisa is going to put her data and plots into the elog shortly....

Quote:

Quantum Efficiency Measurement:

I refer to Jamie's LHO elog for the equation governing quantum efficiency of photodiodes: LHO 2 Sept 2009

The information I gathered for each quadrant of each WFS was: [1] Power of light incident on PD (measured with the Ophir power meter), [2] Power of light reflected off the PD (since this light doesn't get absorbed, it's not part of the QE), and [3] the photo current output by the PD (To get this, I measured the voltage out of the DC path that is meant to go to EPICS, and backed out what the current is, based on the schematic, attached). 

I found a nifty 25 pin Dsub breakout board, that you can put in like a cable extension, and you can use clip doodles to look at any of the pins on the cable.  Since this was a PD activity, and I didn't want to die from the 100V bias, I covered all of the pins I wasn't going to use with electrical tape.  After turning down the 100V Kepco that supplies the WFS bias, I stuck the breakout board in the WFS.  Since I was able to measure the voltage at the output of the DC path, if you look at the schematic, I needed to divide this by 2 (to undo the 2nd op amp's gain of 2), and then convert to current using the 499 Ohm resistor, R66 in the 1st DC path.  

I did all 4 quadrants of WFS1 using a 532nm laser pointer, just to make sure that I had my measurement procedure under control, since silicon PDs are nice and sensitive to green.  I got an average QE of ~65% for green, which is not too far off the spec of 70% that Suresh found.

I then did all 8 WFS quadrants using the 1064nm CrystaLaser #2, and got an average QE of ~62% for 1064 (58% if I exclude 2 of the quadrants....see below).  Statistics, and whatever else is needed can wait for tomorrow.

Problem with 2 quadrants of WFS2?

While doing all of this, I noticed that quadrants 3 and 4 of WFS2 seem to be different than all the rest.  You can see this on the MEDM screens in that all 6 other quadrants, when there is no light, read about -0.2, whereas the 2 funny quadrants read positive values.  This might be okay, because they both respond to light, in some kind of proportion to the amount of light on them.  I ended up getting QE of ~72% for both of these quadrants, which doesn't make a whole lot of sense since the spec for green is 70%, and silicon is supposed to be less good for infrared than green.  Anyhow, we'll have to meditate on this.  We should also see if we have a trend, to check how long they have been funny.

 

 

Attachment 2: SensorsBrochure-p12.pdf
4224 | Fri Jan 28 18:19:21 2011 | Jenne | Update | IOO | WFS2 has some kind of oil on it

Mystery solved!

I removed WFS2 from the AP table (after placing markers so I can put it back in ~the same place) so that I could take some reflectivity as a function of angle measurements for aLIGO WFS design stuff.

I was dismayed to discover, upon glancing at the diode itself, that half of the diode is covered with some kind of oil!!!  The oil is mostly confined to quadrants 3 and 4, which explains the confusion with their quantum efficiency measurements, as well as why the readback values on the MEDM WFS Head screen for WFS2 don't really make sense.

The WFS QPD has a piece of glass protecting the diode itself, and the oil seems to be on top of the glass, so I'm going to use some lens tissue and clean it off.

Pre-cleaning photos are on Picasa.

Update:  I tried scrubbing the glass with a Q-tip soaked with Iso, and then one soaked in methanol.  Both of these failed to make any improvement.  I am suspicious that perhaps whatever it is, is underneath the glass, but I don't know.  Rana suggested replacing the diode, if we have spares / when we order some spares.

[Image: Oily_WFS2.jpg]

Quote:

Problem with 2 quadrants of WFS2?

While doing all of this, I noticed that quadrants 3 and 4 of WFS2 seem to be different than all the rest.  You can see this on the MEDM screens in that all 6 other quadrants, when there is no light, read about -0.2, whereas the 2 funny quadrants read positive values.  This might be okay, because they both respond to light, in some kind of proportion to the amount of light on them.  I ended up getting QE of ~72% for both of these quadrants, which doesn't make a whole lot of sense since the spec for green is 70%, and silicon is supposed to be less good for infrared than green.  Anyhow, we'll have to meditate on this.  We should also see if we have a trend, to check how long they have been funny.

 

4927 | Fri Jul 1 07:01:23 2011 | Suresh | Update | IOO | WFS2 resonances and installation

This was the WFS whose photodiode was replaced, as the old one was found to be damaged.

I retuned the resonances and the notches of all the quadrants and have attached a pdf file of my measurements.

 

Some notes:

a)  The variable inductor on the WFS2 Q2 quadrant may need to be changed.  The ferrite core has come off the solenoid and is just held in place by friction, so it may be easily disturbed.  Though I chose to leave it in place for now, it will need to be replaced if that quadrant misbehaves.

b) In general, the frequencies shifted a bit when I closed the lid of the WFS sensor head.

 

WFS1 and 2 have been installed on the AP table and are functional. I am shifting attention to the software.

 

Attachment 1: WFS2new.pdf
4928 | Fri Jul 1 11:47:25 2011 | rana | Update | IOO | WFS2 resonances and installation

What is implicit in Suresh's entry is that we decided to run the WFS with the 10 dB internal attenuation set to ON as the nominal. In the past, we have always had all the attenuation OFF for max gain. The layout of the WFS is such that we get that nasty 200 MHz oscillation due to crosstalk between the 2 MAX4106 opamps for each quadrant. The 10 dB attenuator is able to reduce the positive feedback enough to damp the oscillation.

In principle, this is still OK noise-wise. I think the thermal noise of the resonant circuit should be ~2-3 nV/rHz. Then the first opamp has a gain of 5, then the -10 dB attenuator, then another gain of 5. The noise going to the demod board is then ~10-15 nV/rHz.

The real noise issue will be the input noise of the demod board. As you may recall, the output of the AD831 mixer goes to an AD797. The AD797 is a poor choice for this application. It has low noise only at high frequencies. At 10 Hz, it has an input voltage noise of 10 nV/rHz and a current noise of 20 pA/rHz. If we wanted to use the AD797 here, at least the RC filter's resistor should be reduced to ~500 Ohms. Much better is to use an OP27 and then choose the R so as to optimize the noise.

We should also be careful to keep the filter frequency low enough so as not to slew-rate limit the OP27. From the schematic, you can see that this circuit is also missing the 50 Ohm termination on the output. There ought to be the usual high-order LC low pass at the mixer output. The simple RC is just not good enough for this application.

As a quick fix, I recommend that when we next want to up the WFS SNR, we just replace the RC with an RLC (R = 500 Ohms, L = 22 uH, C = 1 uF).
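For a quick numerical check of the suggested fix (a sketch; it assumes the usual series-R, series-L, shunt-C low-pass topology at the mixer output, which this entry doesn't spell out):

    import numpy as np

    k_B, T = 1.381e-23, 295.0       # Boltzmann constant [J/K], room temperature [K]
    R, L, C = 500.0, 22e-6, 1e-6    # proposed RLC values

    # Johnson noise of the 500 Ohm resistor
    print(f'sqrt(4kTR) = {np.sqrt(4 * k_B * T * R) * 1e9:.1f} nV/rtHz')

    # Poles of H(s) = 1 / (L*C*s^2 + R*C*s + 1)
    for p in np.roots([L * C, R * C, 1.0]):
        print(f'pole at {abs(p) / (2 * np.pi):.3g} Hz')

With these values the network is strongly overdamped: effectively an RC pole near 320 Hz plus an L/R pole near 3.6 MHz, and the resistor's own Johnson noise is ~2.9 nV/rtHz.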

 

Attachment 1: Screen_shot_2011-07-01_at_11.13.01_AM.png
5761 | Sat Oct 29 02:35:39 2011 | Suresh | Update | IOO | WFS_MASTER screen and lockin screens fixed

I have fixed the WFS_MASTER screen and several of the subscreens such as the MCASS and MC_WFS_LKIN.

Since MC_WFS_LKIN uses six demodulators and a single oscillator, I could not use the automatically built lockin screens.

I built one using the compact filter banks mentioned earlier.

The phases in the WFS lockins have yet to be set.

13305 | Mon Sep 11 09:47:53 2017 | Steve | Update | General | WIMA caps refilled

In-stock WIMA caps were refilled to a minimum of 50 pieces each.

Attachment 1: WIMA.png
3592 | Tue Sep 21 15:33:02 2010 | steve | Metaphysics | Treasure | Wagonga alert

John Miller has arrived from Australia with 3 bags of Wagonga Coffee. Trade bargaining has started on 250 g each of Sumatran Mandheling, Timor, and Papua New Guinea.

Attachment 1: P1060866.JPG
Attachment 2: P1060872.JPG
11150 | Fri Mar 20 12:42:01 2015 | Jenne | Update | IOO | Waking up the IFO

I've done a few things to start waking up the IFO after its week of conference-vacation.

PMC trans was at 0.679, aligned the input to the PMC, now it's up at 0.786.

MC transmission was very low, mostly from low PMC transmission.  Anyhow, MC locked, WFS relieved so that it will re-acquire faster.

Many of the optics had drifted away. The AS port had no fringing, and almost every optic was far away from its driftmon set value.  While putting the optics back to their driftmon spots, I noticed that some of the cds.servos had incorrect gains.  Previously, I had just been using the ETMX servo, which had the correct gain, but the ITMs needed smaller gains, and some of the optics needed the gain to be negative rather than positive.  So, now the script ..../scripts/SUS/DRIFT_MON/MoveOpticToMatchDriftMon.py has individually defined gains for the cds.servo.

Next up (after lunch) will be locking and aligning the arms.  I still don't have MICH fringing at the AS port, so I suspect that the ASS will move some of the optics somewhat significantly (perhaps the input tip-tilts, which I don't have DRIFT_MON for?).

11151 | Fri Mar 20 13:29:33 2015 | Koji | Update | IOO | Waking up the IFO

If the optics moved such amount, could you check the PD alignment once the optics are aligned?

11152 | Fri Mar 20 16:44:49 2015 | ericq | Update | IOO | Waking up the IFO

X arm ASS is having some issues. ITMX oplev was recentered with ITMX in a good hand-aligned state. 

The martian wifi network wasn't showing up, so I power cycled the wifi router. Seems to be fine now. 

11153 | Fri Mar 20 23:37:46 2015 | Jenne | Update | SUS | Waking up the IFO

In addition to (and probably related to) the XARM ASS not working today, the ITMX has been jumping around kind of like ETMX sometimes does.  It's very disconcerting. 

Earlier today, Q and I tried turning off both the LSC and the oplev damping (leaving the local OSEM damping on), and ITMX still jumped, far enough that it fell off the oplev PD. 

I'm not sure what is wrong with ITMX, but probably ASS won't work well until we figure out what's up.

I tried a few lock stretches (after realigning the Xgreen on the PSL table) after hand-aligning the Xarm, but the overall alignment just isn't good enough.  Usually POPDC gets to 400 or 450 while the arms are held off resonance, but today (after tweaking BS and PRM alignment), the best I can get POPDC is about 300 counts. 

Den and I are looking at the ASS and ITMX now.

10941 | Mon Jan 26 21:10:04 2015 | Jenne | Update | Modern Control | Waking up the OAF

I had a look at the OAF model today. 

Somehow, the screens that we had weren't matching up with the model.  It was as if the screens were a few versions old.  Anyhow, I found the correct screens in /userapps/oaf/common/medm, and copied them into the proper place for us, /userapps/isc/c1/medm/c1oaf.  Now the screens seem all good.

I also added 2 PCIE links between the OAF and the SUS models.  I want to be able to send signals to the PRM's pitch and yaw.  I compiled and restarted both the oaf model and the sus model.

The OAF model isn't running right now (it's got the NO SYNC error), but since it's not something that we need for tonight, I'll fix it in the morning.


My thought for trying out the OAF is to look at the coherence between seismic motion and the POP DC QPD when the PRMI is locked (no arms).  I assume that the PRM is already handled in terms of angular damping (local and oplev), so the motion will be primarily from the folding mirrors.  Then, if I can feedforward the seismometer signal to the PRM to compensate for the folding mirrors' motion, I can use the DC QPD as a monitor to make sure it's working when we're PRMI-only locked, or at low recycling gain with the arms.  But, since I'm not actually using the QPD signal, this will be independent of the arm power increase, so should just keep working.

Anyhow, that's what my game plan is tomorrow for FF.  Right now the T-240 is settling out from its move today, and the auto-zero after the move.

11014 | Thu Feb 12 12:23:21 2015 | manasa | Update | General | Waking up the PDFR measurement system

[EricG, Manasa]

We woke up the PDFR measurement setup that has been sleeping since summer. We ran a check on the laser module and the multiplexer module. We tried setting things up for measuring the frequency response of AS55.
We could not repeat Nichin's measurements because the GPIB scripts are outdated and need to be revised.

The PDFR diode laser was shut down after this job.

11132 | Wed Mar 11 15:35:38 2015 | manasa | Update | General | Waking up the PDFR measurement system

I was around the 1Y1 rack today. Trials were done to get the PDFR of AS55.

Quote:

[EricG, Manasa]

We woke up the PDFR measurement setup that has been sleeping since summer. We ran a check on the laser module and the multiplexer module. We tried setting things up for measuring the frequency response of AS55.
We could not repeat Nichin's measurements because the GPIB scripts are outdated and need to be revised.

The PDFR diode laser was shut down after this job.

 

11209 | Wed Apr 8 21:10:55 2015 | manasa | Update | General | Waking up the PDFR measurement system

I was poking around with the PDFR hardware today.

I moved the Agilent which had its screen projected on the monitor. I have put it back...but please verify the settings before using it for tonight.

11493 | Tue Aug 11 11:56:36 2015 | Ignacio, Jessica | Update | PEM | Wasps obliterated maybe...

The wasp terminator came in today. He obliterated the known wasp nest.

We discovered a second wasp nest, right next to the previous one...

Jessica wasn't too happy the wasps weren't gone!

11512 | Mon Aug 17 17:48:12 2015 | Koji | Update | PEM | Wasps obliterated maybe...

We found the same wasp in the 40m. Megan found it walking behind Steve's desk!

3473 | Thu Aug 26 13:08:03 2010 | josephb | Update | CDS | Watch dogs for Vertex optics turned off

We are in the process of doing a damping test with the real time code and have turned off the vertex optics watchdogs temporarily, including BS, ITMs, SRM, PRM, MCs.

3479 | Fri Aug 27 14:03:43 2010 | kiwamu | Update | CDS | Watch dogs for Vertex optics turned off

For a further damping test, I again turned off the vertex optics watchdogs temporarily, including BS, ITMs, SRM, PRM, MCs.

14564 | Tue Apr 23 19:31:45 2019 | Jon | Update | SUS | Watchdog channels separated from autoBurt.req

For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs not to be automatically enabled at this stage, but for an operator to have to enable them manually. With the watchdog channels in a separate request file, the entire SUS state can be loaded while leaving just the watchdogs disabled.
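For illustration, the watchdog request file is just the watchdog-related channel list pulled out of the main request file. A minimal sketch, with hypothetical channel names (the real ones live in the c1susaux database):

    C1:SUS-ITMX_WD_ENABLE
    C1:SUS-ITMY_WD_ENABLE
    C1:SUS-BS_WD_ENABLE
    ...

Everything else stays in autoBurt.req, so restoring its snapshot brings back the full SUS state with the watchdogs still disarmed.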

This same modification should be made to the ETMX and ETMY machines.

5559 | Tue Sep 27 20:02:19 2011 | Koji | Update | SUS | Watchdog rearmed

I came to the control room and found the PMC and IMC were unlocked. ==> Relocked
I found the watchdogs of the vertex suspensions were tripped.

I checked the data for the past 6 hours and found they are independent events.
The unlock of the MCs occurred 4 hours ago and the watchdogs tripped 2 hours ago.

The suspension damping was restored at around 7:50PM PDT.

5560 | Wed Sep 28 00:06:21 2011 | Jenne | Update | SUS | Watchdog rearmed

Quote:

I came to the control room and found the PMC and IMC were unlocked. ==> Relocked
I found the watchdogs of the vertex suspensions were tripped.

I checked the data for the past 6 hours and found they are independent events.
The unlock of the MCs occurred 4 hours ago and the watchdogs tripped 2 hours ago.

The suspension damping was restored at around 7:50PM PDT.

 Oops, I should have noticed all of those things.  Several hours of computer-battle exhausted me.  Thanks Koji.

15862 | Thu Mar 4 11:59:25 2021 | Paco, Anchal | Summary | LSC | Watchdog tripped, Optics damped back

Gautam came in and noted that the optics damping watchdogs had been tripped by a >5 magnitude earthquake somewhere off the coast of Australia. So, under guided assistance, we manually damped the optics as follows:

  • Using the scripts/SUS/reEnableWatchdogs.py script, we re-enabled all the watchdogs (a sketch of the idea follows this list).
  • Everything except SRM was restored to stable state.
  • Then we clicked on SRM in SUS-> Watchdogs, disabled the Oplevs, shutdown the watchdog.
  • We changed the threshold for watchdog temporarily to 1000 to allow damping.
  • We enabled all the coil outputs manually, then enabled the watchdog by clicking on Normal.
  • Once the SRM was damped, we shut down the watchdog, brought the threshold back to 215, and restarted it.
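The re-enable script is essentially a loop over the suspensions writing to their watchdog channels. A minimal sketch in the spirit of scripts/SUS/reEnableWatchdogs.py (Python/pyepics; the channel name is a hypothetical placeholder, not checked against the real database):

    import epics

    SUSPENSIONS = ['MC1', 'MC2', 'MC3', 'BS', 'ITMX', 'ITMY', 'PRM', 'SRM', 'ETMX', 'ETMY']

    for optic in SUSPENSIONS:
        # arm the watchdog -- equivalent to clicking "Normal" on the MEDM screen
        epics.caput(f'C1:SUS-{optic}_WD_ARM', 1)   # hypothetical channel name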

Gautam also noticed that the MC autolocker had been turned OFF by me (Anchal); we turned it back on and the MC engaged the lock again. All good, no harm done.

15863 | Thu Mar 4 15:48:26 2021 | Koji | Summary | PEM | Watchdog tripped, Optics damped back

EQs seen on Summary pages
https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20210304/pem/seismic_blrms/

2532 | Tue Jan 19 16:21:18 2010 | Alberto | Update | ABSL | Watchdogs not working and then fixed

This afternoon the watchdogs stopped working: they didn't trip when the suspension positions crossed the threshold values.

I rebooted c1susaux (aka c1dscl1epics0 in the 1Y5 rack), which is the computer that runs the watchdog processes.

The reboot fixed the problem.

7576 | Thu Oct 18 15:36:57 2012 | Steve | Update | Cameras | Watec cameras & Tamron lenses

I purchased 3x 1/2" CCD cameras and 3x 50 mm lenses for the lab.

The attached spectral sensitivity plot is for the older model 902H. This new model has better sensitivity.

Attachment 1: 10181201.PDF
7322 | Thu Aug 30 20:20:52 2012 | Jenne | Update | SUS | Watec camera placed on SE viewport of ITMX to look at PRM

[EricQ, Jenne]

We placed the Watec camera on the SE viewport of the ITMX chamber, and focused it on the face of PRM.  We are not able to see any scattered light transmitted through the PRM, so this camera was an ineffective way to try to check spot centering on the PRM.  Jamie placed one of the new targets on the PRM cage - see his elog for details.

To get more use out of the camera, we need to mount it on something at the 5.5 inch beam height, and then cover that something with clean foil so we can place the camera on the table, in the beamline, in various places.  We also need to carefully wrap the cables in foil so they don't dirty anything inside.

16942 | Thu Jun 23 15:05:01 2022 | Water Monitor | Update | Upgrade | Water Bottle Refill

22:05:02 UTC Jordan refilled his water bottle at the water dispenser in the control room.

3208 | Tue Jul 13 17:36:42 2010 | nancy | Update | IOO | Wavefront Sensing Matrix Control

For yesterday - July 12th.

Yesterday, I tried understanding the MEDM and the Dataviewer screens for the WFS.

I then also decided to play around with the sensing matrix put into the WFS control system and see what happens.

I changed the sensing matrix to completely random values, and for some of the very bad values it even lost lock :P (I wanted that to happen).

Then I put in some values near to what it already had, and saw things again.

I also put in the matrix values that I had obtained from my DC calculations, which after Rana's explanation, I understand was silly.

Later I put back the original values, but the MC lock did not come back to what it was earlier. Probably my changing the values took it out of the linear region. THE MATRIX NOW HAS ITS OLD VALUES.

I was observing the power spectrum of the WFS signals after changing the matrix values, but it turned out to be a flop, because I had not removed the mean while measuring them.  I will do that again today if we obtain lock again (we suddenly lost MC lock badly some 20 minutes ago).

3236 | Fri Jul 16 15:39:27 2010 | nancy | Update | IOO | Wavefront Sensors - switched off

I turned the gain of the WFS to 0 last night at about 3 am.

I turned it back on now.

16006 | Wed Apr 7 22:48:48 2021 | gautam | Update | IOO | Waveplate commissioning

Summary:

I spent an hour today evening checking out the remote waveplate operation. Basic remote operation was established 👍. To run a test on the main beam (or any beam for that matter), we need to lay out some long cabling and install the controller in a rack. I will work with Jordan in the coming days to do these things. Apart from the hardware, some EPICS channels will need to be added to the c1ioo.db file, and a python script will need to be set up as a service to allow remote operation.

Part numbers:

  • The controller is a NewFocus ESP300.
  • The waveplate stage is a PR50CC. The mounted waveplate itself has a 1" diameter (the clear aperture is more like 21 mm), which I think is ~twice the size of the waveplates we have in the lab; good thing Livingston shipped us the waveplate itself too. It is labelled QWPO-1064-10-2, so it should be a half-wave plate as we want, but I didn't explicitly check with a linearly polarized beam today. Before any serious high-power tests, we can first-contact clean the waveplate to avoid burning any dirt. The damage threshold is rated as 1 MW/cm^2, and I estimate that we will be well below this threshold for any power level (<30 W) we are planning to put through this waveplate. For a 100 um radius beam with 30 W, the peak intensity is ~0.2 MW/cm^2 (a quick check of this number follows this list). This is 20% of the rated damage threshold, so it may be better to enforce that the beam be >200 um going through this waveplate.
  • The dimensions of the mount look compatible with the space we have on the PSL table (though of course once the amplifier comes into the picture, we will have to change the layout). Maybe it's better to keep everything downstream of the PMC fixed - then we just re-position the seed beam (i.e. NPRO) and amplifier, and then mode-match the output of the amplifier to the PMC.
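A quick check of the peak-intensity number quoted above (the peak intensity of a Gaussian beam is 2P/(pi*w^2); using the entry's 30 W and 100 um beam radius):

    import numpy as np

    P = 30.0      # power through the waveplate [W]
    w = 100e-4    # beam radius [cm] (100 um)

    I_peak = 2 * P / (np.pi * w**2)                # Gaussian peak intensity [W/cm^2]
    print(f'I_peak = {I_peak / 1e6:.2f} MW/cm^2')  # ~0.19 MW/cm^2, ~20% of the 1 MW/cm^2 rating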

Electrical tests:

  1. First, I connected a power cord to the ESP300 and powered it on - the front display lit up and displayed a bunch of diagnostics, and said something to the effect of "No stage connected".
  2. Next, I connected the rotary mount to "Axis #1": Male DB25 on the stage to female DB25 on the rear of the ESP300. The stage was recognized.
  3. Used the buttons on the front panel to rotate the waveplate, and confirmed visually that rotation was happening 👍 . I didn't calibrate the actual degrees of rotation against the readback on the front panel, but 45 degrees on the panel looked like 45 degrees of rotation of the physical stage, so it seems fine.

RS232 tests:

  • This unit only has a 9-pin Dsub connector to interface remotely to it, via the RS232 protocol. The c1psl Supermicro host was designated as the computer with which I would attempt remote control.
  • To test, I decided to use a serial-USB adapter. Since this is only a single unit, no need to get an RS232-ethernet interface like the one used in the vacuum rack, but if there are strong opinions otherwise we can adopt some other wiring/control philosophy.
  • No drivers needed to be installed; the host recognized the adapter immediately. I then shifted the waveplate and controller assembly to inside the VEA - they are sitting on a cart behind 1X2. Once the controller was connected to the USB-serial adapter cable, it was registered at /dev/ttyUSB0 immediately. I had to chown this port to the controls user to access it using python serial.
  • Initially, I was pleasantly surprised when I found not one but TWO projects on PyPi that already claimed to do what I want! Sadly, neither NewportESP 1.1 nor PyMeasure 0.9.0 actually worked - the former is for python2 (and the string handling has changed for PySerial compatible with python3), while the latter seems to be optimized for LabVIEW interfacing and didn't play so nice with the serial-USB adapter. I didn't want to spend >10 mins on this and I know enough python serial to do the interfacing myself, so I pushed ahead. Good thing we have several pySerial experts in the group now, if any of you want to figure out how we can make either of these two utilities actually work for us - there is also this repo which claims to work for python 3, but I didn't try it because it isn't a managed package.
  • The command list is rather intimidating - it runs for some 100 (!) pages. Nevertheless, I used some basic commands to read back the serial number of the controller, and also succeeded in moving the stage around by issuing the "PR" command appropriately 👍 (a minimal serial sketch follows this list). BTW, I forgot that I didn't test the motor enable/disable, which is an essential channel I think.
  • I think we actually only need a very minimal set of commands, so we don't need to read all 100 pages of instructions:
    • motor enable/disable
    • absolute and relative rotations
    • readback of the current position
    • readback of the moving status
    • a stop command
    • an interlock
  • Note that as a part of this work, in addition to chowning /dev/ttyUSB0, I installed the two aforementioned python packages on c1psl. I saw no reason to manually restart the modbus and latch services running on it, and I don't believe this work would have impacted the correct functioning of either of those two services, but be aware that I was poking around on c1psl. I was also reminded that the system python on this machine is 2.7 - basically, only the latch service that takes care of the gains for the IMC servo board is dependent on python (and my proposed waveplate control script will be too), but we should really upgrade the default python to 3.7/3.8.
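For the record, the basic serial exchange is only a few lines. A minimal sketch (Python + pySerial; the port settings and command strings reflect my reading of the ESP300 manual, so verify them before reuse):

    import serial

    # ESP300 RS232 settings: 19200 baud, 8N1, hardware (RTS/CTS) handshaking
    esp = serial.Serial('/dev/ttyUSB0', baudrate=19200, rtscts=True, timeout=2)

    def cmd(s):
        # ESP300 commands are plain ASCII, carriage-return terminated
        esp.write((s + '\r').encode('ascii'))

    def query(s):
        cmd(s)
        return esp.readline().decode('ascii').strip()

    cmd('1MO')            # axis 1: motor on
    cmd('1PR45')          # axis 1: relative move of +45 degrees
    print(query('1TP'))   # axis 1: read back the current position
    cmd('1MF')            # axis 1: motor off when done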

Next steps:

Satisfied that the unit works basically as expected, I decided to stop for today. My thinking is that we can have the ESP300 installed in 1X1 or 1X2 (depending on where space is more readily available). I have uploaded a cartoon here so people can comment if they like/dislike my plan.

  • We need to use a long-ish cable to run from 1X1/1X2, where the controller will be housed, to the PSL enclosure. Livingston did ship one such long cable (still on Rana's table), but I didn't check if the length is sufficient / the functionality of this long cable. 
  • We need to set up some EPICS channels for the rotation stage angle, motor ENABLE/DISABLE, a "move stage" button, motion status, and maybe a channel to control the rotation speed? 
  • We need a python script that reads from / writes to these EPICS channels in a while loop (a minimal sketch follows this list). It should be straightforward to set this up to run like the latch.py service, which has worked decently reliably for ~a year now. afaik, there isn't a good way to run this synchronously, and the delay in sending/completing the execution of some of the serial commands might be ~1 second, but for the purpose of slowly ramping up the power, this shouldn't be a problem.
  • One question I do have is, what is the strategy to protect the IFO from the high power when the lock is lost? Surely we are not gonna rely on this waveplate for any fast actuation? With the current input power of 1W, the MCREFL photodiode sees ~100mW when the IMC loses lock. So if the final input power is 35W, do we wanna change the T=10% beamsplitter in the MCREFL path to keep this ratio?
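A minimal sketch of the proposed while-loop service (the channel names are hypothetical placeholders for the EPICS records still to be created; cmd() and query() are as in the serial sketch above):

    import time
    import epics

    ANGLE_REQ = 'C1:PSL-HWP_ANGLE_REQ'   # hypothetical soft channels
    ANGLE_MON = 'C1:PSL-HWP_ANGLE_MON'

    while True:
        target = epics.caget(ANGLE_REQ)
        cmd(f'1PA{target:.2f}')                      # absolute move to the requested angle
        epics.caput(ANGLE_MON, float(query('1TP')))  # publish the readback
        time.sleep(1)   # ~1 s serial latency is fine for a slow power ramp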

Once everything is installed, we can run some tests to see if the rotary motion disturbs the PSL in any meaningful way. Photos here.

Attachment 1: remotePowCtrl.pdf
16036 | Thu Apr 15 15:54:46 2021 | gautam | Update | IOO | Waveplate commissioning - hardware installed

[jordan, gautam]

We did the following this afternoon.

  1. Disconnected the cable from the unused (and possibly not working) RefCav heater power supply, and removed said PS from 1X1. There was insufficient space to install the ESP300 controller elsewhere. I have stored the power supply along the east arm under the beamtube, approximately directly opposite the RFPD cabinet.
  2. Installed the ESP 300 - conveniently, the HP DCPS was already sitting on some rails and so we didn't need to add any.
  3. Ran a long D25-D25 cable from the ESP300 to the NE corner area of the PSL enclosure. The ends of the cable are labelled as "ESP end" and "Waveplate end". The HEPA was turned on for the duration we had the enclosure open, and I have now turned it off.
  4. Connected the waveplate to this cable. Also re-connected the ESP300 to the c1psl supermicro host via the USB-RS232 adapter cable.

The IMC stayed locked throughout our work, and judging by the CDS overview screen, we don't seem to have done any lasting damage, but I will run more tests. Note that the waveplate isn't yet installed in the beam path - I may do this later today evening depending on lab activity, but for now, it is just sitting on the lower shelf inside the PSL enclosure. I will post some photos later.

Quote:
 

So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.


Update: The waveplate was installed. I gave it a couple of rounds of cleaning by first contact, and visually it looked good to me. More photos uploaded. I also made some minor improvements to the MEDM screen, and set up the communication script with the ESP300 to run as a systemd service on c1psl. Let's see how stable things are... I think the philosophy at the sites is to calibrate the waveplate rotation angle in terms of power units, but I'm not sure how the unit we have performs in terms of backlash error. We can do a trial by requesting ~100 "random" angles, monitoring the power in s- and p-polarizations, and then quantifying the error between requested and realized angles, but I haven't done this yet. I also haven't added these channels to the set recorded to frames / to the burt snapshot - do we want to record these channels long term?

16022 | Tue Apr 13 17:47:07 2021 | gautam | Update | IOO | Waveplate commissioning - software prepared

I spent some time today setting up a workable user interface to control the waveplate.

  1. Created some EPICS database records at /cvs/cds/caltech/target/ESP300.db. These are all soft channels (a hypothetical sketch of such records follows this list). This required a couple of restarts of the modbus service on c1psl - as far as I can tell, everything has come back up without problems.
  2. Hacked newportESP to make it work, mainly some string encoding BS in the python2-->python3 paradigm shift.
  3. Made a python script at /cvs/cds/caltech/target/ESP300.py that is based on similar services I've set up for the CM servo and IMC servo boards. I have not yet set this up to run as a service on c1psl, but that is pretty trivial.
  4. Made a minimal MEDM screen, see Attachment #1. It is saved at  /opt/rtcds/caltech/c1/medm/c1psl/C1PSL_POW_CTRL.adl and can be accessed from the "PSL" tab on sitemap. We can eventually "calibrate" the angular position to power units.
  5. Confirmed that I can move the waveplate using this MEDM screen.
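For reference, the soft records created in step 1 would look something like this (a hypothetical sketch in EPICS database syntax; the actual names in ESP300.db were not recorded in this entry):

    record(ao, "C1:PSL-HWP_ANGLE_REQ")
    {
        field(DESC, "Requested waveplate angle")
        field(EGU,  "deg")
        field(PREC, "2")
    }
    record(ai, "C1:PSL-HWP_ANGLE_MON")
    {
        field(DESC, "Waveplate angle readback")
        field(EGU,  "deg")
    }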

So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.

At the moment, there is no high power and there is minimal risk of damaging anything, but someone should double check my logic to make sure that we aren't gonna burn the precious IFO optics. We should also probably hook up a hardware interlock to this controller.

I went through some aLIGO documentation and believe that they are using a custom-made potentiometer-based angle sensor rather than the integrated Newport (or similar) sensor+motor. My reading of the situation was that there were several problems to do with hysteresis, the "find home" routine etc. I guess for our purposes, none of these are real problems, as long as we are careful not to randomly rotate the waveplate through a full 180 degrees and go through the full fringe in the process. Need to think of a clever way to guard against careless / accidental MEDM button presses / slider drags.


Unrelated to this work: I haven't been in the lab for ~a week so I took the opportunity today to go through the various configs (POX/POY/PRMI resonant carrier etc). I didn't make a noise budget for each config but at least they can be locked 👍 . I also re-aligned the badly misaligned PMC and offloaded the somewhat large DC WFS offsets (~100 cts, which I estimate to be ~150 nNm of torque, corresponding to ~50 urad of misalignment) to the IMC suspensions' slow bias voltages. 

Attachment 1: remoteHWP.png
2410 | Mon Dec 14 12:13:52 2009 | Jenne | Update | Treasure | We are *ROCKSTARS* ! IFO is back up

[Jenne, Kiwamu, Koji]

We got the IFO back up and running!  After all of our aligning, we even managed to get both arms locked simultaneously.  Basically, we are awesome. 

 This morning, we did the following:

*  Turned on the PZT High voltages for both the steering mirrors and the OMC.  (For the steering mirrors, turn on the power, then hit "close loop" on each.  For the OMC, hit Output ON/OFF).

*  Looked at the PZT strain gauges, to confirm that the PZTs came back to where they had been.  (Look at the snapshot of C1ASC_PZT_Al)

*  Locked all components of the PSL (This had already been done.)

*  Removed beam dump which was blocking the PSL, and opened the PSL mechanical shutter.  Light into the IFO!

*  Locked the Mode Cleaner.  The auto-locker handled this with no problem.

*  Confirm that light is going through the Faraday.  (Look at the TV sitting on top of MC13 tank...it shows the Faraday, and we're hitting the input of the Faraday pretty much dead-on).

*  Look at IP_ANG and IP_POS.  Adjust the steering mirrors slightly to zero the X&Y readings on IP_ANG.  This did not change the PZTs by very much, so that's good.

*  Align all of the Core Optics to their OpLev positions.

*  On the IFO_Align screen, save these positions.

*  Run the IFO_Configure scripts, in the usual order.  (Xarm, Yarm, PRM, DRM).  Save the appropriate optics' positions after running the alignment scripts.  We ended up running each alignment script twice, because there was some residual misalignment after the first iteration, which we could see in the signal as viewed on DataViewer (Either TRX, TRY, or SPOB, for those respective DoFs).

*  Restore Full IFO.

*  Watch the beauty of both arms and the central cavity snapping together all by themselves!  In the attached screenshot, notice that TRX and TRY are both ~0.5, and SPOB and AS166Q are high.  Yay!

Conclusions: 

*  The wiping may have helped.  While aligning X and Y separately, TRX got as high as ~1.08, and TRY got as high as 0.98.  This seems to be a little bit higher than it was previously.

*  Since everything locked up in pretty short order, and the free-swinging spectra (as measured by Kiwamu in elog 2405) look good, we didn't break anything while we were in the chambers last week.  Excellent.

*  We are now ready for a finesse measurement to tell us more quantitatively how we did with the wiping last week.

 

Attachment 1: Jenne14Dec09_IFOlocked.png
2412 | Mon Dec 14 13:17:33 2009 | rob | Update | Treasure | We are *ROCKSTARS* ! IFO is back up

 

 

Attachment 1: two-thumbs-up.jpeg
7859 | Wed Dec 19 20:18:51 2012 | rana | Update | Computers | We are Changing the Passwerdz next week----

Be Prepared

http://xkcd.com/936/

9602 | Wed Feb 5 15:39:41 2014 | manasa | Update | General | We are pumping down

[Steve, Manasa]

I checked the alignment one last time. The arms locked, PRM aligned, oplevs centered.

We went ahead and put the heavy doors ON. Steve is pumping down now!

Attachment 1: pre_pump_down.png
4873 | Thu Jun 23 23:54:29 2011 | Koji | Omnistructure | Environment | We are saved

Sonali, Ishwita, and another anonymous SURF ended the long-lasting water shortage of the 40m.

Attachment 1: IMG_0023.jpg
2948 | Tue May 18 16:19:19 2010 | josephb | Update | CDS | We have two new IO chassis

We have 2 new IO chassis with mounting rails and the necessary boards for communicating with the computers.  Still need boards to talk to the ADCs, DACs, etc., but it's a start.  These two IO chassis are currently in the lab, but not in their racks.

They will be installed into 1X4 and 1Y5 tomorrow.  In addition to the boards, we need some cables, and the computers need the appropriate real-time operating systems set up.  I'm hoping to get Alex over sometime this week to help work on that.

2483 | Thu Jan 7 14:08:46 2010 | Jenne | Update | Computers | We haven't had a bootfest yet this week.....so today's the day

All the DAQ screens are bright red.  Thumbs down to that.

2484 | Thu Jan 7 14:55:36 2010 | Jenne | Update | Computers | We haven't had a bootfest yet this week.....so today's the day

Quote:

All the DAQ screens are bright red.  Thumbs down to that.

 All better now. 

1849 | Thu Aug 6 20:03:10 2009 | Koji | Update | General | We left two carts near PSL table.

Stephanie and Koji

We left two carts near the PSL table.
We are using them for characterization of the triple-resonant EOM.

5548 | Mon Sep 26 17:49:21 2011 | Jenne | Update | Computers | We now have BURT restore for slow channels

Koji and Suresh found that there have not been any autoburt snapshots taken of slow channels since ~December 13th 2010.  Not good!

We have found an elog from Joe talking about autoburt changes from that day:  elog 4046

Joe pointed all of the autoburt stuff to the new directory system, so it now takes a snapshot of every system in the *new* target directory.  This means that, since all of the aux things were left in the *old* target directory, none of them were getting snapshots taken.  I have added the old target path back to the autoburt cron file so that every hour it will search through both old and new target directories and take snapshots of everything in both.

So, the systems which will now once again have autoburt snapshots taken are the following:

c1aux, c1auxex, c1auxey, c1dcuepics, c1iool0, c1iscaux, c1iscaux2, c1iscepics, c1losepics, c1omcepics, c1psl, c1susaux, c1vac1, c1vac2

 

I moved some old stuff (especially things which would conflict with the new stuff) to the old target directory's oldfe/ subdirectory: c1ass, c1assepics, c1susvme1, c1susvme2, c1sosvme, c1iovme.

The following systems don't have an autoburt.req file, so don't get snapshots:  c0daqawg, c1daqctrl, c1dcu1, c1iscex, c1iscey.  If any of these need autoburts, we should create them.

All the new systems in the new target directory still have their autoburts working.

The first test of this will be in a few minutes, at 18:07:00 Pacific during the regular cron job.  Hopefully nothing crashes....

5552 | Mon Sep 26 22:40:41 2011 | Jenne | Update | Computers | We now have BURT restore for slow channels
[Jenne, Koji]

After much Perl-learning and a few iterations, we have fixed the burt restore script, so that it actually does the slow channels. We have so far had one successful run, at 22:25, and the regular cron job should start doing the slow channels as of 23:07.
7872 | Wed Jan 2 15:33:23 2013 | Jenne | HowTo | Locking | We should retry in-air locking

Immediate things to do include finishing installation of new TTs and re-routing of oplev paths in the BS chamber, but after all that, we should retry in-air locking.

The last time we (I) tried in-air locking, MICH wouldn't lock since there was only ~ 6uW of light on AS55 (see elog 7355).  That was before we increased the power into the MC by a factor of 10 (see elog 7410), so we should have tens of microwatts on the PD now.  At that time, we could barely see some PDH signal hidden in the noise of the PD, so with a factor of 10 optical gain, we should be able to lock MICH.

REFL should also have plenty of power - about 1.5 times the power incident on the PRM, so we should be able to lock PRCL. 

Even if we put a flat G&H mirror after the PRM to make a mini-cavity, and we lose power due to poor mode matching, we'll still have plenty of power at the REFL port to lock the mini-cavity.

For reference, I calculate that at full power, POX and POY see ~13uW when the arms are locked.

 

POX/POY power = [ (P_inc on ITM) + (P_circ in arm) * T_ITM ] * (pickoff fraction of ITM, ~100 ppm)

REFL power = (P_inc on PRM) + (P_circ in PRCL) * T_PRM  ~=  1.5 * (P_inc on PRM)

921 | Thu Sep 4 10:13:48 2008 | Jenne | Update | IOO | We unlocked the MC temporarily
[Joe, Eric, Jenne]

While trying to diagnose some DAQ/PD problems (look for Joe and Eric's entry later), we unlocked the PMC, which caused (of course) the MC to unlock. So if you're looking back in the data, the unlock at ~10:08am is caused by us, not whatever problems may have been going on with the FSS. It is now locked again, and looking good.
3131 | Tue Jun 29 08:55:18 2010 | Jenne | Frogs | Environment | We're being attacked!

[Image: Infested_InvasionOfKillerBugs.jpg]

We're going to have to reinstate the policy of No food / organic trash *anywhere* in the 40m.  Everyone has been pretty good, keeping the food trash to the one can right next to the sink, but that is no longer sufficient, since we've been invaded by an army of ants:

[Image: AntInvasion_small.jpg]

We are going back to the old policy of taking your trash out to the dumpsters outside.  I'm sure there are some old wives' tales about how exercise after eating helps your digestion, or something like that, so no more laziness allowed!

7694 | Fri Nov 9 17:15:05 2012 | Manasa, Steve, Ayaka | Update | General | We're closed! Pumping down Monday morning

Quote:

After a brief look this morning, I called it and declared that we were ok to close up.  The access connector is almost all buttoned up, and both ETM doors are on.

Basically nothing moved since last night, which is good.  Jenne and I were a little bit worried about how the input pointing might have been affected by our moving of the green periscope in the MC chamber.

First thing this morning I went into the BS chamber to check out the alignment situation there.  I put the targets on the PRM and BS cages.  We were basically clear through the PRM aperture, and in retro-reflection.

The BS was not quite so clear.  There is a little bit of clipping through the exit aperture on the X arm side.  However, it didn't seem to me like it was enough to warrant retouching all the input alignment again, as that would have set us back another couple of days at least.

Both arm green beams are coming out cleanly, and are nicely overlapping with the IR beams at the BS (we even have a clean ~04 mode from the Y arm).  The AS and REFL spots look good.  IPANG and IPPOS are centered and haven't moved much since last night.  We're ready to go.

The rest of the vertex doors will go on after lunch.

Jamie and Steve got the ETM doors on this morning.

We got the other heavy doors including the ITMs, BS and the access connector in place.

If nobody raises any concerns in reply to this elog, Steve will take it as a green signal and will start pumping down first thing Monday morning, after a final check of the access connector bellows screws.

 

Steve! 

Ayaka and I got the ITMY and BS doors closed at 45 foot-pounds just now.

8284 | Wed Mar 13 10:26:58 2013 | Manasa | Update | Locking | We're still good!!

We're still good with the IFO alignment after 7 hours.

I found the green still locked in the same state as last night, but no IR (so the arms are stable and the TTs should definitely take the blame).

From last night's observation (elog about drift in TT1), I only moved TT1 in pitch and regained IR locking for both arms.

420 | Wed Apr 16 09:47:35 2008 | Andrey | Summary | PEM | Weather Station
The weather station is functional again.

The long ethernet Cat5 cable connecting the 'WeatherLink' and the processor 'c1pem1' was repaired yesterday: the RJ45 connector was replaced, and information about weather conditions is once again continuously transferred from the 'Weather Monitor' to the control UNIX computers. We can see this information in the 'c0Checklist.adl' screen and in Dataviewer.

Below are the two sets of trends for the temperature, wind speed and direction, pressure and the amount of precipitation.

The upper set of trends ("Attachment 1") is "Full Data" in Dataviewer for the 3 hours from 6.30AM till 9.30AM this morning,
and the lower set of trends ("Attachment 2") is "Minute Trend" in Dataviewer for 15 hours from 6.30PM yesterday till 9.30AM this morning.

I also updated the wiki-40 page describing the Weather Station and added a description of the process of attaching an RJ45 connector to the end of an ethernet Cat5 cable. To access the wiki-40 page about the weather station, go from the main page to the "PEM" section and click on "Weather Station".
Attachment 1: Weather-FullData_3hrs.png
Attachment 2: Weather_Trend_15hrs.png