40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
  40m Log, Page 40 of 335  Not logged in ELOG logo
ID   Date   Author   Type   Category   Subject
  9818   Wed Apr 16 02:29:30 2014   ericq   Update   LSC   CARM and DARM on IR signals, boosts engaged

As Jenne mentioned, we measured open-loop transfer functions (OLTFs) and determined that we had more than enough phase margin to switch on the LSC boosts in FM4. This improved the error signal noise spectra quite a lot, and noticeably reduced the TRX/TRY fluctuations and the actuation output.
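For reference, a minimal sketch of the phase-margin check described here, run on a toy OLTF standing in for the exported measurement (in practice the freq/magnitude/phase arrays would come from the DTT/SR785 export):

    import numpy as np

    # Toy OLTF: 1/f loop with a small delay, standing in for the measured data
    freq = np.logspace(1, 4, 601)
    G = 200.0 / (1j * freq) * np.exp(-2j * np.pi * freq * 100e-6)
    mag, phase = np.abs(G), np.angle(G, deg=True)

    # UGF = |G| = 1 crossing (interpolated); phase margin = 180 deg + phase at UGF
    i = np.where(np.diff(np.sign(np.log10(mag))))[0][0]
    ugf = np.interp(0.0, np.log10(mag[[i + 1, i]]), freq[[i + 1, i]])
    pm = 180.0 + np.interp(ugf, freq, phase)
    print(f"UGF ~ {ugf:.0f} Hz, phase margin ~ {pm:.0f} deg")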

Here's the CARM OLTF (FM4 boost on in red, boost off in black)

carmOLTF.pdf

 

Here's what happened to the CARM and DARM spectra when we turned on the boosts. (ALS only in black, initial IR signal transitions in mid-color, boosted IR signals in bright color)

boostPlot.png 

  9819   Thu Apr 17 00:49:06 2014   Jenne   Update   LSC   CARM and DARM on IR signals, boosts engaged

I looked at 2 of the locklosses from last night (1:19am and 1:27am), and saw that for both, the DARM loop started to oscillate just before we lost lock. In the trials tonight, we were more watchful of gain peaking.

Here is the 1:19am lockloss

Lockloss_DARMgainTooHigh_119am.png

And here is the 1:27am lockloss

Lockloss_DARMgainTooHigh_127am.png


So you can see what we were doing, and what the effect was, here are a few minutes of data just before the 1:27am lockloss. The times I note below are rough, but should give you an idea of what happened at which time.

0 sec:  Arms are held on resonance with ALS.

10 sec:  CARM offset of 3nm added.

20 sec:  PRM restored, one flash, then PRMI acquires lock.

30 sec:  CARM offset reduced to 2nm, transmitted powers are about 0.1

50 sec:  Transition CARM to 1/sqrt(trans) signals.  Note that we are using the high gain Thorlabs PD here, so the noise is better than last Thursday.

60-110 sec:  CARM offset reduction to about 1nm.

110 sec:  CARM's LSC low frequency boost engaged.

120 sec:  DARM transitioned to AS55Q.

170 sec:  DARM's LSC low frequency boost engaged.

SmoothCARMandDARMtransitions_LSCboosts.png

  11116   Sat Mar 7 22:01:12 2015   Jenne   Configuration   LSC   CARM and DARM on RF signals!!!!!!!!!!!!!!!!!!!!

[Jenne, with Matt and Fujimi as witnesses]

It might be about time to throw that champagne in the fridge.  Nice. Not quite close enough to talk about popping it open, but we'll want it chilled just in case...

I still haven't logged yesterday's work, and I'm still working now, so no details, but I just handed both CARM and DARM over to non-normalized RF signals, and had the arms stable at powers of about 105.  I was working on the ETM alignment, and the power was increasing, so I think that's where the extra power will come from. I was lowering the DARM gain as I improved the alignment, because the optical gain was increasing so much.  I probably just didn't do that fast enough for the last alignment tweak, which is why I lost lock.

Anyhow, here's a plot, because I'm excited:

Attachment 1: ARM_POWERS_100.png
ARM_POWERS_100.png
  11117   Sun Mar 8 00:05:37 2015   Koji   Configuration   LSC   CARM and DARM on RF signals!!!!!!!!!!!!!!!!!!!!

Exciting! How long was it?

  11118   Sun Mar 8 01:27:01 2015   Jenne   Configuration   LSC   CARM and DARM on RF signals!!!!!!!!!!!!!!!!!!!!

I have in my notebook that at 9:49pm CARM was no longer using ALS as an error signal, and at 9:50pm, DARM was no longer using ALS as an error signal.  It looks like I was locked for 3+ minutes after getting to RF-only signals.

The increase in power near the end of the lock stretch was me trying to improve the dark port contrast by touching the ETMX alignment.  DARM was definitely oscillating as I improved the dark port contrast, so I was trying to hand-lower the gain as I worked on the alignment.

Attachment 1: RFlock3min.png
RFlock3min.png
  15015   Wed Nov 6 17:05:45 2019   gautam   Update   LSC   CARM calibration

Summary:

A coarse calibration of the CARM error point (when on ALS control) is 7.040 +/- 0.030 kHz/ct. This corresponds to approximately 0.95nm/ct. I typically lose the PRMI lock when the CARM offset is ~0.2 cts, which means I am about 1kHz away from the resonance. This is >10 CARM linewidths.

Details:

The calibration was done by sweeping the CARM offset (no PRM) and identifying the arm cavity FSRs by looking for peaks in TRX / TRY. Attachment #1 shows the scan, while Attachment #2 shows a linear fit to the FSRs. In Attachment #2, the frequency axis is taken from the phase tracker servo, which was calibrated by injecting a "known" frequency with the Marconi, and there is good agreement with the FSR expected for 37.79 m long arm cavities. There is much more info in the scan (e.g. modulation depths, mode matching to the arm cavities etc) which I will extract later, but if anyone wants the data (pre-downsampled by me to have a manageable filesize), it's attached as a .zip file in Attachment #3.
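As a sanity check on the numbers above, assuming 1064 nm light and the quoted 37.79 m arm length:

    import scipy.constants as const

    L_arm = 37.79                  # arm length [m], from the fit above
    fsr = const.c / (2 * L_arm)    # expected arm FSR ~3.97 MHz
    nu = const.c / 1064e-9         # laser frequency [Hz]

    cal_Hz = 7.040e3               # measured CARM calibration [Hz/ct]
    cal_m = L_arm * cal_Hz / nu    # df/f = dL/L  ->  ~0.94e-9 m/ct
    print(fsr / 1e6, "MHz,", cal_m * 1e9, "nm/ct")

This reproduces the ~0.95 nm/ct figure quoted in the summary.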

Attachment 1: CARMscan.pdf
CARMscan.pdf
Attachment 2: CARMcalib.pdf
CARMcalib.pdf
Attachment 3: scan.hdf5.zip
  10933   Fri Jan 23 02:11:40 2015   Jenne   Update   LSC   CARM filters modified slightly

[Jenne, Diego]

One of tonight's goals was to tweak the CARM filters, so that we could engage the lowpass filter, to avoid the detuned double cavity pole resonance disturbing the CARM loop.

I increased the Q of the zeros in the FM3 boost so that it eats less than the original 18 degrees of phase at 100 Hz.  We kept losing lock though, so for each lock I backed off on the Q a little bit.  In the end, the filter eats 9 degrees of phase at 100 Hz.  I also moved the lowpass from 700 Hz to 1 kHz, although that doesn't change the phase at 100 Hz very much.
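To illustrate the knob being turned here, a small sketch of how much phase a boost of this general shape costs at 100 Hz as a function of the zero Q (the pole/zero frequencies below are placeholders, not the actual FM3 coefficients):

    import numpy as np
    import scipy.signal as sig

    def boost_phase_at(f_eval=100.0, f_zero=30.0, Q=1.0, f_pole=3.0):
        """Phase [deg] at f_eval of a boost with a complex zero pair (f_zero, Q)
        and a double real pole at f_pole. Placeholder shape, not the real FM3."""
        wz, wp = 2 * np.pi * f_zero, 2 * np.pi * f_pole
        zeros = np.roots([1.0, wz / Q, wz**2])
        poles = np.roots([1.0, 2.0 * wp, wp**2])
        _, h = sig.freqs_zpk(zeros, poles, k=1.0, worN=[2 * np.pi * f_eval])
        return np.angle(h[0], deg=True)

    for Q in (0.7, 1.0, 2.0, 5.0):
        print(f"Q = {Q}: phase at 100 Hz = {boost_phase_at(Q=Q):+.1f} deg")

Raising the Q of the zeros sharpens the transition and leaves less residual phase lag at 100 Hz, which is the trade-off described above.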

We modified the carm_up script re: PRMI locking a little bit.  The PRMI is not so enthusiastic about locking immediately at 25% MICH fringe, so I backed that off.  It now acquires lock at a few percent, and then ramps up the offset.  Also, the MICH FM6 bounce roll filter is now turned on after lock is acquired, effectively giving it an extra second or two of delay beyond the rest of the filters.

We were able several times to get to some MICH offset and turn on the lowpass filter, but starting to reduce the CARM offset made us lose lock.  I think the problem is that the UGF servo demod phase changes as we change offsets, filters and error signals.  We see that the I-phase is servoed successfully to 0 dB, but that the Q-phase starts to move around by 30 degrees or more.  We either need to monitor this much more closely and add the changing demod phases to the carm_up script, or we need to go back to the sum-of-squares situation that we had last week.  Note that we saw failures with that method for a completely separate reason:  we were getting too close to the limiters, which caused the UGF servos to glitch and the outputs to jump by a significant amount.  So, the issues we were seeing last week with the sum-of-squares were a separate thing, which we only noticed and understood later.

Anyhow, nothing too exciting and glorious tonight, but progress has been made.

Also, from some Mist simulations that Q did, Diego made a sweet plot that is now posted in the control room, so we can translate arm power to CARM offset, at various MICH offsets. 

We also took some CARM loop measurements with the new filters.  We have a little more phase than we used to, which is nice.  These traces don't have the lowpass engaged, since I was trying to see how far we could get without it.  We lost lock right after the second measurement, but I think that was to do with the UGF servos.

Attachment 2: CARM_22Jan2015.pdf
CARM_22Jan2015.pdf
  15351   Tue May 26 03:01:35 2020   gautam   Update   LSC   CARM loop

Summary:

I am able to realize ~8 kHz UGF with ~60 degrees of phase margin on the CARM loop OLTF (combination of analog and digital signal paths).

Details:

  • Attachment #1 shows the measured OLTF.
  • The measurement is made by using the "EXC A" bank on the CM board, with an SR785. With this technique, the measurement will be poor where the loop gain is high, as the excitation will be squished. Nevertheless, we can estimate the behavior in those regimes by using a model, and fitting it to the regions where the measurement is valid (in this case, above ~1 kHz).
  • This measurement was made with IN1 Gain = +4 dB, AO gain = 0 dB, and IMC IN2 gain = 0 dB.
  • The regular boost has been enabled, but no super-boosts yet, mainly because I think they consume too much phase close to the UGF. 
  • The modeling/fitting of this, including a more thorough characterization of the crossover, will follow...
Attachment 1: CARM_OLTF.pdf
CARM_OLTF.pdf
  15366   Wed Jun 3 01:46:14 2020   gautam   Update   LSC   CARM loop

Summary:

The CARM loop now has a UGF of ~12 kHz with a phase margin of ~60 degrees. These values of conventional stability indicators are good. The CARM optical gain that best fits the measurements is 9 MW/m.

I've been working on understanding the loop better, here are the notes.

Details:

Attachment #1 shows a block diagram of the loop topology.

  • The "crossover" measurement made at the digital CARM error point, and the OLG measurement at the CM board error point are shown.
  • I've tried to include all the pieces in the loop, and yet, I had to introduce a fudge gain in the digital path to get the model to line up with the measurement (see below).

Attachment #2 shows the OLGs of the two actuation paths.

  • Aforementioned fudge factor for the digital path is included.
  • For the AO path, I assumed a value of the PDH discriminant at the IMC error point to be 13 kHz/V, per my earlier measurement. 
  • I trawled the elog for the most up-to-date info about the IMC servo (elog9457, elog13696, elog15044) and CM board, to build up the model. 
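As a side note, a toy sketch of how the two paths combine in this kind of model (the transfer functions below are placeholders, not the fitted 40m loop): the total OLG is the sum of the two path OLGs, and one common convention for the crossover TF is their ratio.

    import numpy as np

    f = np.logspace(1, 5, 1001)          # Hz
    s = 2j * np.pi * f

    # Placeholder path models (NOT the real 40m loop):
    G_dig = 3e5 / s**2 * (1 + 2 * np.pi * 20 / s)              # digital / ETM path
    G_ao = 0.2 * (2 * np.pi * 3e4) / (s + 2 * np.pi * 3e4)     # AO / laser frequency path

    G_tot = G_dig + G_ao     # total OLG seen by an excitation at the CM board error point
    xover = G_ao / G_dig     # one convention for the "crossover" TF

    print("crossover near", f[np.argmin(np.abs(np.abs(xover) - 1.0))], "Hz")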

Attachment #3 and #4 show the model, overlaid with measurements of the loop OLG and crossover TF respectively.

  • No fitting is done yet - the next step would be to add the delay of the CDS system for the digital path, and the analog electronics for the AO path. Though these are likely only small corrections.
  • For the crossover TF - I've divided out the digital filters in the CARM_B filter bank, because the injection is made downstream of it (see Attachment #1).
  • There is reasonably good agreement between model and measurement.
  • I think the biggest source of error is the assumed model for the IMC OLTF.

Attachment #5 shows the evolution of the CARM OLG at a few points in the lock acquisition sequence.

  • "Before handoff" corresponds to the state where the primary control is still done by the ALS leg, but the REFL11 signal has begun to enter the picture via the CARM_B path.
  • "IN2 ramped" corresponds to the state where the AO path gain (=IN2 gain on the IMC servo board) has been ramped up to its final value (+0 dB), but the overall loop gain (=IN1 gain on the CM board) is still low. So this is preparation for high bandwidth control. Typically, the arm powers will have stabilized in this state, but ALS control is still on.
  • "Pre-boost" corresponds to an intermediate state - ALS control is off, but the low frequency boosts have not yet been enabled. I typically first engage some ASC to stabilize things somewhat, and then turn on the boosts.
  • "Final" - self explanatory.

Next steps:

Now that I have a model I believe, I need to think about whether there is any benefit to changing some of these loop shapes. I've already raised the possibility of changing the shape of the boosts on the CM board, with which we could get a bit more suppression in the 100 Hz - 1 kHz region (a noise budget of laser frequency noise --> DARM is required to see if this is necessary).

Attachment 1: CM_loop_topology.pdf
CM_loop_topology.pdf
Attachment 2: CARM_TFs.pdf
CARM_TFs.pdf
Attachment 3: CARM_OLTF.pdf
CARM_OLTF.pdf
Attachment 4: CARM_xover.pdf
CARM_xover.pdf
Attachment 5: CARM_OLG_evolution.pdf
CARM_OLG_evolution.pdf
  9792   Wed Apr 9 16:08:33 2014   Jenne   Update   LSC   CARM loop gains vs. CARM offset

I have taken EricQ's simulation results for the CARM plant change vs. CARM offset, and put that together with the CM and CARM digital control loops, to see what we have. 

The overall gains here aren't meaningful yet (I haven't set a UGF), but we can certainly look at the phases, and how the magnitude of the signals change with CARM offset.

First, the analog CM servo.  I use the servo shape from Den's elog from December, but only what he calls "BOOST", the regular servo shape, not any of the super boosts, "BOOST 1-3".   No normalization.

REFL11_analog.pngREFL55_analog.png

Next, the digital LSC CARM servo (same filters as XARM and YARM).  I have used FM4 and FM5, which are the 2 filters that we use to acquire regular LSC arm lock.  For the actuator, I just use a 1Hz pendulum as if I'm pushing only on the ETMs.
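For reference, a minimal sketch of the 1 Hz pendulum actuator model assumed here (the Q value is a placeholder, not a measured number):

    import numpy as np
    import scipy.signal as sig

    f0, Q = 1.0, 5.0                     # 1 Hz pendulum; Q is a placeholder
    w0 = 2 * np.pi * f0
    pend = sig.TransferFunction([w0**2], [1.0, w0 / Q, w0**2])   # unity DC gain

    f = np.logspace(-1, 3, 401)
    _, mag_db, _ = sig.bode(pend, w=2 * np.pi * f)
    print("response at 100 Hz:", mag_db[np.argmin(np.abs(f - 100.0))], "dB")   # ~ -80 dB (1/f^2)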

REFL11_digital.pngREFL55_digital.png

I also used the exact same setups as above, but normalized the transfer functions by a DC photodiode output.  The analog CM loops change the least (around a few kHz) if I use POPDC.  The digital CARM loops change the least (around 100Hz) if I use TRX (or, equivalently, TRX + TRY).

Here are the normalized plots:

REFL11_analog_normalizedPOPDC.pngREFL55_analog_normalizedPOPDC.png

REFL11_digital_normalizedTRX.pngREFL55_digital_normalizedTRX.png

Either way, with or without normalization, the digital CARM loop will go unstable somewhere between 0 and 10 pm of CARM offset, for both of the REFL RF photodiodes.  We need to figure out how to get a realistic transfer function out for the 1/sqrt(TRANS) signals, and see what happens with those.  If those also look unstable, then maybe we should consider a DC signal for the analog CM servo to start with, since that could have a wider linear range.

  1500   Mon Apr 20 18:17:44 2009   rob   Summary   Locking   CARM offset/Power rubric

Plotted assuming the average arm power goes up to ~80.  No DARM offset.

Attachment 1: ARMpowersCARM.png
ARMpowersCARM.png
  10953   Thu Jan 29 04:27:35 2015   ericq   Update   LSC   CARM on REFL11

[ericq, Diego]

Tonight, we transitioned CARM from ALS directly to REFL11 I at 25% MICH offset.

We attempted the transition twice. The first time worked, but we lost lock ~5 seconds after the full transition due to a sudden ~400 Hz ringup (see attached lockloss plot). The second barfed halfway, I think because I forgot to remove the CARM B offset from the first time.

The key to getting to zero CARM offset with CARM and DARM on ALS is eking out every bit of PRMI phase margin that you can. Neither MICH nor PRCL had their RG filters on, and I tweaked the MICH LP to attenuate less and give back more phase (the HF still isn't the dominant RMS source). PRCL had ~60 degrees of phase margin at a 100 Hz UGF, MICH had ~50 deg at a 47 Hz UGF. The error signals were comparatively very noisy, but we only cared that they held on. Also important was approaching zero slooooooooowly, and having the CARM and DARM UGF servo excitations off, because they made everything go nuts. Diego stewarded the MICH and PRCL excitation amplitudes admirably.

Oddly, and worryingly, the arm powers at zero CARM offset were only around 10-12. Our previous estimations already include the high X arm loss, so I'm not sure what's going on with this. Maybe we need to measure our recycling gain?

I hooked up the SR785 by the LSC rack to the CM board after the first success. For the second trial, I also took TFs with respect to CM slow, but they looked nowhere near as clean as the normal REFL11 I channel; I didn't really check all the connections. I will be revisiting the whole AO situation soon. 

In any case, I think we're getting close...

Attachment 1: Jan29_REFL11_lockloss.png
Jan29_REFL11_lockloss.png
  10960   Fri Jan 30 03:12:15 2015   diego   Update   LSC   CARM on REFL11I

[Jenne, Diego]

Tonight we continued following the plan of last night: perform the transition of CARM to REFL11_I while on MICH offset at -25%:

  • we managed to do the transition several times, keeping the UGF servos on for MICH and PRCL but turning off the DARM and CARM ones, because their contribution was rather unimportant and we feared that their excitations could affect negatively the other loops (as loops tend to see each other's excitation lines);
  • we had to tweak the MICH and PRCL UGF servos:
    • the excitation frequency for MICH was lowered to ~41 Hz, while PRCL's one was lowered to ~50 Hz;
    • PRCL's amplitude was lowered to 75 because it was probably too high and it affected the CARM loop, while MICH's one was increased to 300 because during the reduction of the CARM offset it was sinking into the noise; after a few tries we can say they don't need to be tweaked on the fly during the procedure but can be kept fixed from the beginning;
    • after the transition to REFL11_I for CARM, we engaged also its UGF servo, still at the highest frequency of the lot (~115 Hz) and with relatively low amplitude (2), to help keeping the loop stable;
    • as DARM was still on ALS, we didn't engage its UGF servo during or after the transition, but we just held its output from the initial part of the locking sequence (after we lowered its frequency to 100 Hz);
  • however, at CARM offset 0 our arm power was less than what we had yesterday: we managed to get higher than ~8, but after Koji tweaked the MC alignment we reached ~10; we still don't understand the reason for the big difference with respect to what the simulations show for a MICH offset of 25% (arm power ~50);
  • after the CARM transition to REFL11_I we felt things were pretty stable, so we tried to reduce the MICH offset to get us in the ~ -10% range, however we never managed to get past ~ -15% before losing lock, at arm power around 20;
  • we lost lock several times, but for several different reasons (IMC lost lock a couple of times, PRCL noise increased/showed some ringing, MICH railed) but our main concern is with the PRCL loop:
    • we took several measurements of the PRCL loop: the first one seemed pretty good, and it had a bigger phase bubble than usual; however, the subsequent measurements showed some weird shapes we struggled to find a reason for; these measurements were taken at different UGF frequencies, so maybe it is worth looking for some kind of correlation; moreover, in the two weird measurements the UGFs are not where they are supposed to be, even though the servo was correctly following the input (or so it seemed); the last measurement was interrupted just before we lost lock because of PRCL itself;
    • we noticed a few times during the night that the PRCL loop noise in the 300-500 Hz range increased suddenly and we saw some ringing; at least a couple of times it was PRCL that threw us out of lock; this frequency range is similar to the 'weird' range we found in our measurements, so we definitely need to keep an eye on PRCL at those frequencies;
  • in conclusion, the farthest we got tonight was CARM on REFL11_I at 0 offset, DARM at 0 offset still on ALS and MICH at ~ 15% offset, arm power ~20.

 

Attachment 1: PRCL_29Jan2015_Weird_Shape.pdf
PRCL_29Jan2015_Weird_Shape.pdf
Attachment 2: ArmPowers20_MICHoffsetBeingReduced_0CARMoffset_29Jan2015.pdf
ArmPowers20_MICHoffsetBeingReduced_0CARMoffset_29Jan2015.pdf
  573   Thu Jun 26 12:30:40 2008   John   Summary   Locking   CARM on REFL_DC
Idea:

Try REFL_DC as the error signal for CARM rather than PO_DC.

Reasoning:

The PO signal is dominated by sideband light when the arms are detuned so that any misalignment in the recycling cavity introduces spurious signals. Also, the transfer function from coupled cavity excitation to REFL signal is not so steep and hence REFL should give a little more phase. Finally, the slope of the REFL signal should make it easier to hand over to RF CARM.

Conclusion:


The REFL signal showed no clear improvement over PO signals. We've gone back to PO.


During the night we also discovered that the LO for the MC loop is low.
  9793   Thu Apr 10 01:56:05 2014   Jenne   Update   LSC   CARM transitioned to IR error signals!

[Jenne, EricQ]

This evening we took things a little bit farther than last night (elog 9791) and transitioned CARM to fully IR signals, no ALS at all for CARM error signals!  We were unsuccessful at doing the same for DARM. 

As we discussed at 40m Meeting this afternoon, the big key was to remove the PRCL ASC from the situation.  I don't know specifically yet if it's QPD saturation, or what, that was causing PRM to be pushed in pitch last night, but removing the ASC loops and reengaging the PRM optical lever worked like a dream. 

Since we can now, using ALS-only, get arbitrarily close to the PRMI+2 arm full resonance point, we decided to transition CARM over to the 1/sqrt(transmission) signals.  We have now done this transition 5 or 10 times.  It feels very procedural and robust now, which is awesome!

To make this transition easier, we made a proto-CESAR for the CARM signals in the LSC.  There's nothing automatic about it, it's just (for now) a different matrix. 


ALS lock conventions:

We have (finally listening to the suggestion that Koji has been making for years now....) set a convention for which side of the PSL the X and Y beatnotes should be, so that we don't have to guess-and-check the gain signs anymore.

For the X beatnote, when you increase the value on the slow slider, the beatnote should increase in frequency.  For the Y beatnote, when you increase the value on the slow slider, the beatnote should decrease in frequency. 

The input matrix (the aux input part) should then have +1 from ALSX->carm, and +1 from ALSY->carm.  It should also have -1 from ALSX->darm and +1 from ALSY->darm. 

The output matrix should be carm -> +1's for both ETMs.  darm should be -1 to ETMX and +1 to ETMY.

With these conventions, both carm and darm should have negative signs for their gains. 

Since we don't have (although should whip up) Watch scripts for the CARM and DARM servo filters, we were using the Xarm filterbank for carm, and the Yarm filterbank for darm again.


Transitioning CARM to 1/sqrt(trans) signals:

As with last night, we were able to easily acquire PRMI lock with a CARM offset of 3 counts.  We then moved down to 2 counts, and saw transmission values of 0.1-0.2.  We set the offsets in the TR_SQRTINV filter banks so that the difference between the outputs was zero, and the mean of the outputs was 2 (the same as the CARM offset we had). 

We looked at the relative gain and sign between the ALS and 1/sqrt() signals, and found that we needed a minus sign, and half the gain.  So, we stepped the 1/sqrt() matrix elements from 0 to -0.5 in steps of 0.1, and at the same time were stepping the ALS matrix elements to CARM from +1 to 0, in steps of 0.2.  This was, excitingly, very easy!
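For the record, the ramp schedule described above looks like this (a minimal sketch; the element names are illustrative, only the step sizes and the final -0.5 relative gain are taken from the text):

    import numpy as np

    steps = np.arange(6)
    als_element = 1.0 - 0.2 * steps   # ALS -> CARM element: 1.0 -> 0.0 in steps of 0.2
    inv_element = 0.0 - 0.1 * steps   # 1/sqrt(TRX,TRY) -> CARM element: 0.0 -> -0.5 in steps of 0.1

    for a, b in zip(als_element, inv_element):
        print(f"ALS element {a:+.1f}, 1/sqrt(trans) element {b:+.2f}")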

The first time we did this successfully, was a few seconds before 1081143556 gps.

Here is a set of spectra from the first time we locked on the 1/sqrt(trans) signals. 

 

sqrtInvLock.pdf

 


Failure to transition CARM to RF signals, or reduce CARM offset to zero:

While locked on the 1/sqrt(trans) signals, we looked at several RF signals as options for CARM.  The most promising seems to be REFL55, normalized by (TRX+TRY).  The next most promising looks like REFL11 normalized by POPDC.  Note that these are entirely empirical, and we aren't yet at the resonant point, so these may not be truly the best.  Anyhow, we need to reconfigure the LSC input of the normalized error signals, so that they can go into the CESAR matrices.  This was more than we were prepared to do during the nighttime.  However, it seems like we should be about ready to do the transition, once we have the software in place.  Right now, we either normalize both ALS and the RF signal, or we normalize neither.  We want to be able to apply normalization to only the RF signal. 

Just sitting on the tail of the CARM resonance, there were some random times when we seem to have swung through total resonance, and spoiled our 1/sqrt(trans) signals, which aren't valid at resonance, and so we lost lock.  This implies that auto-transitioning, as CESAR should do, will be helpful. 


Attempt at transitioning DARM to AS55:

Next up, we tried to transition DARM to AS55, after we had CARM on the 1/sqrt signals.  This was unsuccessful.  Part of the reason is that it's unclear what the relative gain should be between the ALS darm signals and AS55, since the transfer function is not flat.  Also, we didn't have much coherence between the ALS signals and AS55Q at low frequencies, below about 100 Hz, which is concerning.  Anyhow, more to investigate and think on here. 


Transitioning CARM to 1/sqrt signals, with a DARM offset:

As a last test, Q put in a DARM offset in the ALS control, rather than a CARM offset, and then was still able to transition CARM control to the 1/sqrt signals.  As we expect, when we're sitting on opposite sides of the arm resonances, the 1/sqrt signals have opposite signs, to make a CARM signal. 


Conclusions / path(s) forward:

We need to redo the LSC RF signal normalization, so that the normalized signals can be inputs to CESAR. 

We need to make sure we set the AS55 phase in a sane way.

We need to think about the non-flat transfer function (the shape was 1/f^n, where n was some number other than 0) between the ALS darm signal and AS55.  The shape was the same for AS55 I&Q, and didn't change when we changed the AS55 phase, so it's not just a phasing problem. 

What DC signals can we use for auto-transitioning between error signals for the big CARM CESAR?

  13653   Fri Feb 23 07:47:54 2018   Steve   Update   VAC   CC1 Hornet

We have the IFO pressure logged again! Thanks Johannes and Gautam

This InstruTech cold cathode ionization vacuum gauge " Hornet " was installed 2016 Sep 14

Here is the CC1 gauge history of 10 years from 2015 Dec 1

The next thing to do is put this channel C1:Vac-CC1_HORNET_PRESSURE  on the 40m Vacuum System Monitor   [ COVAC_MONITOR.adl ] 

gautam 1pm: Vac MEDM screen monitor has been edited to change the readback channel for the CC1 pressure field - see Attachment #2. Seems to work okay.

Attachment 1: InstruTech_Hornet_CC1.png
InstruTech_Hornet_CC1.png
Attachment 2: CC1_readback_updated.png
CC1_readback_updated.png
  11276   Fri May 8 14:30:09 2015   Steve   Update   VAC   CC1 cold cathode gauges are baked now

The CC1 gauges are not reading any longer. As an attempt to clean them, they will be baked at 85C over the weekend.

These brand new gauges (model 10421002, s/n 11823 vertical, s/n 11837 horizontal) replaced the 11-year-old model 421 (http://nsei.missouri.edu/manuals/hps-mks/421%20Cold%20Cathode%20Ionization%20Guage.pdf) on 09-06-2012 (http://nodus.ligo.caltech.edu:8080/40m/7441).

Quote:

 

We have two cold cathode gauges at the pump spool and one  signal cable to controller. CC1  in horizontal position and CC1 in vertical position.  

CC1 h started not reading so I moved cable over to CC1 v

 

Attachment 1: cc1bake95C.jpg
cc1bake95C.jpg
  11287   Tue May 12 14:57:52 2015   Steve   Update   VAC   CC1 cold cathode gauges are baked now

Baking both CC1 gauges at 85 C for 60 hrs did not help.

The temperature has been increased to 125 C and the bake is being repeated.

Quote:

CC1s  are not reading any longer. It is an attempt to clean them over the weekend at 85C

These brand new gauges "10421002" sn 11823-vertical, sn 11837 horizontal replaced 11 years old 421 http://nsei.missouri.edu/manuals/hps-mks/421%20Cold%20Cathode%20Ionization%20Guage.pdf  on 09-06-2012 http://nodus.ligo.caltech.edu:8080/40m/7441

Quote:

 

We have two cold cathode gauges at the pump spool and one  signal cable to controller. CC1  in horizontal position and CC1 in vertical position.  

CC1 h started not reading so I moved cable over to CC1 v

 

 

  14264   Wed Oct 31 17:54:25 2018   gautam   Update   VAC   CC1 hornet power connection restored

Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.

Attachment 1: cc1_Hornet.png
cc1_Hornet.png
  14639   Sun May 26 21:47:07 2019   Kruthi   Update   Cameras   CCD Calibration

 

On Friday, I tried calibrating the CCD with the following setup. Here, I present the expected values of the scattered power (Ps) at θs = 45°, where θs is the scattering angle (refer to the figure). The LED box has a hole with an aperture of 5 mm and the LED is placed approximately 7 mm from the hole, so the aperture angle is 2·tan⁻¹(2.5/7) ≈ 40°. Using this, the spot size of the LED light at a distance d was estimated. The width of the LED holder/stand (approx 4") puts a constraint on the lowest possible θs. At this lowest possible θs, the distance of the CCD/Ophir from the screen is given by sqrt(d² + (2")²). This was taken as the imaging distance for the other angles as well.

In the table below, Pi is taken to be 1.5 mW, and Ps and Ω were calculated using the following equations:

Ω = (CCD sensor area) / (imaging distance)²,        Ps = (1/π) · Pi · Ω · cos(45°)

d (cm)   Est. spot diameter (cm)   Lowest possible θs (deg)   CCD/Ophir distance from screen (cm)   Ω (sr)    Expected Ps at θs=45° (µW)
1.0      1.2                       78.86                      5.2                                   0.1036    34.98
2.0      2.0                       68.51                      5.5                                   0.0259    8.74
3.0      2.7                       59.44                      5.9                                   0.0115    3.88
4.0      3.4                       51.78                      6.5                                   0.0065    2.19
5.0      4.1                       45.45                      7.1                                   0.0041    1.38
6.0      4.9                       40.25                      7.9                                   0.0029    0.98
7.0      5.6                       35.97                      8.6                                   0.0021    0.71
8.0      6.3                       32.42                      9.5                                   0.0016    0.54
9.0      7.1                       29.44                      10.3                                  0.0013    0.44
10.0     7.8                       26.93                      11.2                                  0.0010    0.34

On measuring the scattered power (Ps) using the Ophir power meter, I got values of the same order as the expected values in the table above. As Gautam suggested, we could use a photodiode to detect the scattered power, as it will offer better precision, or we could calibrate the power meter using the method mentioned in Johannes's post: https://nodus.ligo.caltech.edu:8081/40m/13391.
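The expected-Ps column in the table can be reproduced from the formulas above; this sketch assumes the GigE sensor area of 10.36 mm² quoted in a later entry and uses the screen-to-sensor distance d, which matches the tabulated Ω and Ps to within rounding:

    import numpy as np

    P_i = 1.5e-3               # power incident on the screen [W]
    A = 10.36e-6               # GigE sensor area [m^2] (3.7 mm x 2.8 mm)
    cos45 = np.cos(np.deg2rad(45.0))

    for d_cm in (1, 2, 5, 10):
        d = d_cm * 1e-2
        omega = A / d**2                      # solid angle subtended by the sensor
        P_s = P_i * omega * cos45 / np.pi     # Lambertian screen: BRDF = 1/pi
        print(f"d = {d_cm:2d} cm: Omega = {omega:.4f} sr, Ps = {P_s*1e6:.2f} uW")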

 

Attachment 1: CCD_calibration_setup.png
CCD_calibration_setup.png
  14708   Sat Jun 29 03:11:18 2019   Kruthi   Update   Cameras   CCD Calibration

Finding the gain of the photodiode: The three-position rotary switch of the photodiode being used (PDA520) wasn't working, so I determined its gain by making a comparative measurement between the Ophir power meter and the photodiode. The photodiode has a responsivity of 0.34 A/W at 1064 nm (obtained from the responsivity curve given in the spec sheet using a curve-digitizing software). Using the following equation, I determined the gain setting, which turned out to be 20 dB.

Transimpedance gain (V/A) = Photodiode reading (V) / [Ophir reading (W) × Responsivity (A/W)]

Setup: Here a 1050 nm LED (the closest we have to 1064 nm) is used as the light source instead of a laser, to eliminate coherence effects that might affect our radiometric calibration. The LED is placed in a box with a hole of diameter 5 mm (aperture angle ≈ 40 degrees). Suitable lenses are used to focus the light onto a white paper, which is fixed at an arbitrary angle and serves as a Lambertian scatterer. To make a comparative measurement between the photodiode (PDA520) and the GigE, we need to account for their different sensor areas, 8.8 mm (aperture diameter) and 3.7 mm x 2.8 mm respectively. This can be done either by using an iris with a common aperture so that both the photodiode and the GigE receive the same amount of light, or by calculating the power incident on the GigE from the power incident on the photodiode and the ratio of sensor areas (here we use the fact that the power scattered by a Lambertian scatterer per unit solid angle is constant).

Calibration of the GigE 152 unit: I took around 50 images, starting with an exposure time of 2000 µs and increasing in steps of 2000 µs, using the exposure_variation.py code. The code doesn't allow us to take images with an exposure time greater than 100 ms, so I took a few more images at higher exposures manually. From each image I subtracted a dark image (not in the sense of the usual CCD calibration, but just an image with the same exposure time and no LED light). These dark images do the job of the usual dark frame + bias frame and also account for stray light. A plot of pixel sum vs exposure time is attached. From a linear fit to the unsaturated region, I obtained the slope and calculated the calibration factor.

Equations:      Power (P) = Photodiode reading (V) / [Transimpedance gain (V/A) × Responsivity (A/W)]                    Calibration factor (CF) = P / slope

Result: CF = 1.91 x 10^-16 W-sec/counts.  Update: I had used a wrong value for the area of the photodiode. Using 61.36 mm² as the area, I got 2.04 x 10^-15 W-sec/counts.

I'll put in the uncertainties soon. I'm also attaching the GigE spectral response curve for future reference.
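A minimal sketch of the slope-and-CF extraction described above, on synthetic data standing in for the dark-subtracted pixel sums (the saturation level and the incident power are placeholders):

    import numpy as np

    # Synthetic stand-in for the dark-subtracted pixel sums vs exposure time
    exposure_us = np.arange(2000.0, 100001.0, 2000.0)
    pixel_sum = np.minimum(200.0 * exposure_us, 1.6e7)     # toy data that saturates

    unsat = pixel_sum < 0.9 * pixel_sum.max()              # fit only the linear region
    slope, _ = np.polyfit(exposure_us[unsat], pixel_sum[unsat], 1)   # counts/us

    P_gige = 4e-7                       # power on the sensor [W] (placeholder, from the PD)
    CF = P_gige / (slope * 1e6)         # W-sec/count (slope converted to counts/s)
    print(slope, "counts/us, CF =", CF, "W-sec/count")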

Attachment 1: calibration_setup.jpg
calibration_setup.jpg
Attachment 2: CCD_calibration_2.jpeg
CCD_calibration_2.jpeg
Attachment 3: GigE_spectral_response_curve.png
GigE_spectral_response_curve.png
Attachment 4: 152_calibration_plot.png
152_calibration_plot.png
  14757   Sun Jul 14 00:24:29 2019   Kruthi   Update   Cameras   CCD Calibration

On Friday, I took images for different power outputs of LED. I calculated the calibration factor as explained in my previous elog (plots attached).

Vcc (V)   PD reading (V)   Power on photodiode (W)   Power on GigE (W)   Slope (counts/µs)   Uncertainty in slope (counts/µs)   CF (W-sec/counts)
16        0.784            2.31E-06                  3.89E-07            180.4029            1.02882                            2.16E-15
18        0.854            2.51E-06                  4.24E-07            207.7314            0.7656                             2.04E-15
20        0.92             2.71E-06                  4.57E-07            209.8902            1.358                              2.18E-15
22        0.969            2.85E-06                  4.81E-07            222.3862            1.456                              2.16E-15
25        1.026            3.02E-06                  5.09E-07            235.2349            1.53118                            2.17E-15
                                                                                             Average CF:                        2.14E-15

To estimate the uncertainty, I assumed an error of at most 20 mV (due to stray light or a difference in orientation between the GigE and the photodiode) in the photodiode reading. Using the uncertainty in the slope from the linear fit, I expect a maximum uncertainty of 4%. Note: I haven't accounted for the error in the responsivity value of the photodiode.

GigE sensor area: 10.36 sq. mm
PDA520 sensor area: 61.364 sq. mm
Responsivity: 0.34 A/W
Transimpedance gain (at 20 dB setting): 10^6 V/A +/- 0.1%
Pixel format used: Mono 8-bit

Johannes had reported a CF of 0.0858E-15 W-sec/counts for 12-bit images, measured with a laser source. That value and the one I got differ by a factor of ~25. The different pixel formats and the coherence of the laser light might be the reasons.
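The CF values in the table follow from the chain of conversions in the previous entry; reproducing the first row (Vcc = 16 V) with the parameters listed above:

    V_pd = 0.784                    # photodiode reading [V]
    gain = 1e6                      # transimpedance gain at 20 dB [V/A]
    resp = 0.34                     # responsivity [A/W]
    A_gige, A_pd = 10.36, 61.364    # sensor areas [mm^2]
    slope = 180.4029 * 1e6          # 180.4029 counts/us -> counts/s

    P_pd = V_pd / (gain * resp)     # power on the photodiode [W]   ~2.31e-6
    P_gige = P_pd * A_gige / A_pd   # power on the GigE sensor [W]  ~3.89e-7
    CF = P_gige / slope             # calibration factor [W-sec/count] ~2.16e-15
    print(P_pd, P_gige, CF)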

Attachment 1: CCD_calibration.png
CCD_calibration.png
  3655   Tue Oct 5 18:27:18 2010   Joonho Lee   Summary   Electronics   CCD cables' impedance

Today I checked the CCD cables which are connected to the VIDEOMUX.

17 cables are of type RG59, 8 cables are of type RG58, and I have not yet figured out the type of the other 23 cables.

The reason I am checking the cables is to replace the cables with 50 or 52 ohm impedance with 75 ohm ones.

Once I figure out which cables have the wrong impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.

To check the impedance of each CCD cable, I went to the VIDEOMUX and looked for the label on each cable's jacket.

RG59 denotes a cable with 75 ohm impedance. I wrote down each cable's input or output channel number together with the observation (whether it is of type RG59 or not).

The result of observation is as follows.

Type       Channels where it is connected
RG59       in#2, in#11, in#12, in#15, in#18, in#19, in#22, in#26, out#3, out#4, out#11, out#12, out#14, out#17, out#18, out#20, out#21
RG58       in#17, in#23, in#24, in#25, out#2, out#5, out#7, out#19
unknown    all others

 

For the 23 cables whose type I have not figured out, the cables are too entangled to trace the label along each cable.

I will try to figure out more tomorrow. Any suggestion would be really appreciated.

  3739   Mon Oct 18 22:11:32 2010   Joonho Lee   Summary   Electronics   CCD cables for input signal

Today I checked all the CCD cables which are connected to the input channels of the VIDEOMUX.

Among the 25 input cables, 12 are of type RG59, 4 are of type RG58, and 9 are of unknown type.

The reason I am checking the cables is to replace the cables with 50 or 52 ohm impedance with 75 ohm ones.

Once I figure out which cables have the wrong impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.

 

Today, I checked the cables in a similar way to last time.

I labeled all the cables connected to the input channels of the VIDEO MUX and disconnected all of them, since last time it was hard to check every cable because they were too entangled.

Then I checked the type of each cable and any existing label which might designate where it is connected.

After I finished the check, I reconnected every cable to the input channel it had been connected to before I disconnected it.

 

4 cables out of 25 are of type RG58, so they are expected to be replaced with RG59 cables.

9 cables out of 25 are of unknown type. These nine are all thick orange-colored cables which do not have any label about the cable characteristics on the jacket.

The result of observation is as follows.

Note that type 'TBD-1' is used for the orange colored cables because all of them look like the same type of cable.

 

Channel   Where its signal is coming from   Type
1 C1:IO-VIDEO 1 MC2 TBD-1
2 FI CAMERA 59
3 PSL OUTPUT CAMERA 59
4 BS  C:1O-VIDEO 4 TBD-1
5 MC1&3 C:1O-VIDEO 5 59
6 ITMX C:1O-VIDEO 6 TBD-1
7 C1:IO-VIDEO 7 ITMY TBD-1
8 C1:IO-VIDEO 8 ETMX TBD-1
9 C1:IO-VIDEO 9 ETMY TBD-1
10 No cable is connected (spare channel)
11 C1:IO-VIDEO 11 RCR 59
12 C1:IO-VIDEO RCT 59
13 MCR VIDEO 59
14 C1:IO-VIDEO 14 PMCT 59
15 VIDEO 15 PSL IOO(OR IOC) 59
16 C1:IO-VIDEO 16 IMCT TBD-1
17 PSL CAMERA 58
18 C1:IO-VIDEO 18 IMCR 59
19 C1:IO-VIDEO 19 SPS 59
20 C1:IO-VIDEO 20 BSPO TBD-1
21 C1:IO-VIDEO 21 ITMXPO TBD-1
22 C1:IO-VIDEO 22 APS1 59
23 ETMX-T 58
24 ETMY-T 58
25 POY CCD VIDEO CH25 58
26 OMC-V 59

Today I could not figure out what impedance the TBD-1 (unknown) type has.

Next time, I will check the orange-colored cables' impedance directly and find where the unknown output signals are sent. Any suggestion would be really appreciated.

  3694   Mon Oct 11 23:55:25 2010   Joonho Lee   Summary   Electronics   CCD cables for output signal

Today I checked all the CCD cables which are connected to the output channels of the VIDEOMUX.

Among the 22 output cables, 18 are of type RG59 and 4 are of type RG58.

The reason I am checking the cables is to replace the cables with 50 or 52 ohm impedance with 75 ohm ones.

Once I figure out which cables have the wrong impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.

 

Today, I labeled all the cables connected to the output channels of the VIDEO MUX and disconnected all of them, since last time it was hard to check every cable because they were too entangled.

With Yuta's kind help, I also checked which output channel is sending its signal to which monitor while I was disconnecting the cables.

Then I checked the type of each cable and any existing label which might designate where it is connected.

After I finished the check, I reconnected every cable to the output channel it had been connected to before I disconnected it.

 

4 cables out of 22 are of type RG58, so they are expected to be replaced with RG59 cables.

The result of the observation is as follows.

Ch#   Where its signal is sent   Type
1 unknown 59
2 Monitor#2  58
3 Monitor#3 59
4 Monitor#4 59
5 Monitor#5 58
6 Monitor#6 59
7 Monitor#7 58
8 unknown / labeled as "PSL output monitor" 59
9 Monitor#9 59
10 Monitor#10 59
11 Monitor#11 59
12 Monitor#12 59
13 Unknown 59
14 Monitor#14 59
15 Monitor#15 59
16 unknown / labeled as "10" 59
17 unknown 59
18 unknown / labeled as "3B" 59
19 unknown / labeled as "MON6 IR19" 58
20 unknown 59
21 unknown 59
22 unknown 59

I could not figure out where 10 of the cables are sending their signals. They are not connected to any monitor turned on in the control room, so I guess they go to monitors located inside the lab. I will check these unknown cables when I check the unknown input cables.

Next time, I will check the cables connected to the input channels of the VIDEO MUX. Any suggestion would be really appreciated.

  4139   Tue Jan 11 21:08:19 2011   Joonho   Summary   Cameras   CCD cables upgrade plan.

Today I have made the CCD Cable Upgrade Plan for improving the system.

There are ~60 VIDEO cables to work on for the upgrade, so I would like to ask for everyone's help in replacing the cables.

 

1. Background

Currently, the VIDEO system is not working as we would like.

About 20 cables have an impedance of 50 or 52 ohm, which is not matched to the rest of the VIDEO system.

Moreover, some cameras and monitors are not connected at all.

 

2. What I have worked so far.

I have checked the impedance of all the cables, so I know which cables can be kept and which should be replaced.

I measured the cable paths along the side tray so that we can share which cable is installed along which path.

I have made most of the cables necessary for the VIDEO system upgrade, but no labels are attached so far.

 

3. Upgrade plan (More details are shown in attached file)

 

0 : The cables for output ch#2 and input ch#16 are not available for now
1 : First, we need to work on the existing cables.
1A : Check the labels on both ends and replace them with new labels if necessary
1B : We need to move an existing cable's channel only for the one currently connected to In #26 (from #26 to #25)
2 : Second, we need to implement the new cables into the system
2A : Make two labels for each new cable and attach one to each end
2B : Disconnect the existing cables at the channels assigned for new cables, and remove those cables from the tray as well
2C : Move 4 quads into the cabinet containing the VIDEO MUX
2D : Install each new cable along the path described, and connect it to the assigned channel and camera or monitor

 

 

4. This is a first draft of the plan.

Any comments towards a better plan are always welcome.

Moreover, replacing all the cables indicated in the file is a great amount of work.

I would like to ask for everyone's help in replacing the cables (steps 1 through 2D above).

 

Attachment 1: CCD_Cable_Upgrade_Plan_Jan11_2011.pdf
CCD_Cable_Upgrade_Plan_Jan11_2011.pdf
  3950   Thu Nov 18 17:42:20 2010   Joonho Lee   Summary   Electronics   CCD cables.

I finished the direct measurement of the cable impedances.

I have also written the cable replacement plan.

The reason I am checking the cables is to replace the cables with 50 or 52 ohm impedance with 75 ohm ones.

Once I figure out which cables have the wrong impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.

Moreover, as Koji suggested, the VIDEO system will be upgraded for a better interface.

 

I measured each cable's impedance by checking the reflection at the end connected to a 50 ohm or 75 ohm terminator.

The orange-colored cables were measured to be 75 ohm, so we do not need to replace them.

Combining the list of cable types with the list of desired lengths,

I need to make 37 cables in total and remove 10 cables from the current connections.

The detailed plan is attached below.

I have ordered additional cables and BNC plugs.

 

From now on, I will keep making CCD cables for the VIDEO upgrade.

Then, with your help, we will replace the CCD cables.

I expect to finish the VIDEO upgrade by the end of this year.

Attachment 1: Upgrade_plan_(Nov18).pdf
Upgrade_plan_(Nov18).pdf
  13352   Mon Oct 2 23:16:05 2017   gautam   HowTo   Cameras   CCD calibration

Going through some astronomy CCD calibration resources ([1]-[3]), I gather that there are in general 3 distinct types of correction that are applied:

  1. Dark frames --- this would be what we get with a "zero duration" capture, some documents further subdivide this into various categories like thermal noise in the CCD / readout electronics, poissonian offsets on individual pixels etc.
  2. Bias frames --- this effect is attributed to the charge applied to the CCD array prior to the readout.
  3. Flat-field calibration --- this effect accounts for the non-uniform responsivity of individual pixels on the CCDs. 

The flat-field calibration seems to be the most complicated - the idea is to use a source of known radiance, and capture an image of this known radiance with the CCD. Then assuming we know the source radiance well enough, we can use some math to back out what the actual response function of individual pixels are. Then, for an actual image, we would divide by this response-map to get the actual image. There are a number of assumptions that go into this, such as: 

  • We know the source radiance perfectly (I guess we are assuming that the white paper is a Lambertian scatterer so we know its BRDF, and hence the radiance, perfectly, although the work that Jigyasa and Amani did this summer suggests that white paper isn't really a Lambertian scatterer). 
  • There is only one wavelength incident on the CCD.
  • We can neglect the effects of dust on the telescope/CCD array itself, which would obviously modify the responsivity of the CCD, and is presumably not stationary. Best we can do is try and keep the setup as clean as possible during installation.

I am not sure what error is incurred by ignoring 2 and 3 in the list at the beginning of this elog, perhaps this won't affect our ability to estimate the scattered power from the test-masses to within a factor of 2. But it may be worth it to do these additional calibration steps. 

I also wonder what the uncertainty in the 1.5 V/uW number for the photodiode is (i.e. how much do we trust the Ophir power meter at low power levels?). The datasheet for the PDA100A says the transimpedance gain at 60 dB gain is 1.5 MV/A (into a high impedance load), and the Si responsivity at 1064 nm is ~0.25 A/W, so naively I would expect 0.375 V/uW, which is ~a factor of 4 lower. Is there a reason to trust one method over the other?  
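For the record, the cross-check described here is just:

    gain = 1.5e6    # PDA100A transimpedance gain at 60 dB, high-Z load [V/A]
    resp = 0.25     # assumed Si responsivity at 1064 nm [A/W]
    print(gain * resp * 1e-6, "V/uW")   # 0.375 V/uW, ~4x below the ~1.5 V/uW quoted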

Also, are the calibration factor units correct? Jigyasa reported something like 0.5nW s / ct in her report.

Camera IP          Calibration Factor (CF)
192.168.113.152    8.58 W*s
192.168.113.153    7.83 W*s

The incident power can be calculated as Pin =CF*Total(Counts-DarkCounts)/ExposureTime.

References:

[1] http://www.astrophoto.net/calibration.php

[2] https://www.eso.org/~ohainaut/ccd/

[3] http://www.astro.ufl.edu/~lee/ast325/handouts/ccd.pdf

  13354   Tue Oct 3 01:58:32 2017   johannes   HowTo   Cameras   CCD calibration

Disclaimer: Wrong calibration factors! See https://nodus.ligo.caltech.edu:8081/40m/13391

The factors were indeed enormously off. The correct table reads:

Camera IP          Calibration Factor (CF)
192.168.113.152    85.8 pW*s
192.168.113.153    78.3 pW*s

I did subtract a 'dark' frame from the images, though not in the sense of your point 1, just an exposure of identical duration with the laser turned off. This was mostly to reduce the effect of residual light, but given similar initial conditions it would somewhat compensate for the offset that pre-existing charge and electronics noise put on the pixel values. The white field is of course a different story.

I wonder how close we can get to a white field by putting a thin piece of paper in front of the camera without lenses and illuminate it from the other side. A problem is of course the coherence if we use a laser source... Or we scrap any sort of screen/paper and illuminate directly with a strongly divergent beam? Then there wouldn't be a specular pattern.

I'm not sure I understand your point about the 1.5 V/uW. Just to make sure we're talking about the same thing I made a crude drawing:

The PD sees plenty of light at all times, and the 1.5V/uW came from a comparative measurement PD<-->Ophir (which took the place of the CCD) while adjusting the power deflected with the AOM, so it doesn't have immediate connection to the conversion gain of silicon in this case. I can't remember the gain setting of the PD, but I believe it was 0dB, 20dB at most.

Attachment 1: gige_calibration.pdf
gige_calibration.pdf
  13940   Mon Jun 11 17:18:39 2018   pooja   Update   Cameras   CCD calibration

Aim: To calibrate CCD of GigE using LED1050E.

The following table shows some of the specifications for LED1050E as given in Thorlabs datasheet.

Specification                        Typical   Maximum rating
DC forward current (mA)                        100
Forward voltage (V) @ 20 mA (VF)     1.25      1.55
Forward optical power (mW)           1.6
Total optical power (mW)             2.5
Power dissipation (mW)                         130

 The circuit diagram is given in Attachment 1.

Considering a power supply voltage Vcc = 15 V, a current I = 20 mA and an LED forward voltage VF = 1.25 V, the series resistance in the circuit is calculated as

R = (Vcc - VF)/I = 687.5 Ω

Attachment 2 gives a plot of resistance (R) vs input voltage (Vcc) when a current of 20mA flows through the circuit. I hope I can proceed with this setup soon.
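The series-resistor calculation behind that plot is just:

    VF, I = 1.25, 20e-3             # LED forward voltage [V], target current [A]
    for Vcc in (10, 15, 18, 20):
        print(Vcc, "V ->", (Vcc - VF) / I, "ohm")   # 15 V -> 687.5 ohm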

 

Attachment 1: led_circuit.pdf
led_circuit.pdf
Attachment 2: R_vs_V.pdf
R_vs_V.pdf
  13951   Tue Jun 12 19:27:25 2018   pooja   Update   Cameras   CCD calibration

Today I made the LED (1050 nm) circuit inside a box, as described in my previous elog. Steve drilled a 1 mm hole in the box as an aperture for the LED light.

Resistance (R) used = 665 Ω.

We connected a power supply and IR has been detected using the card.

Later we changed the input voltage and measured the optical power using a powermeter.

Input voltage (Vcc, V)    Optical power
0 (dark reading)          60 nW
15                        68 µW
18                        82.5 µW
20                        92 µW

Since the measured optical power is quite low, we may need to drill a larger hole.

The hole is approximately 7 mm from the LED, so the aperture angle is approximately 2·tan⁻¹(0.5/7) ≈ 8 deg. From the radiometric curve given in the LED1050E datasheet, most of the power is within 20 deg, so a hole of size 2·tan(10°)·7 ≈ 2.5 mm may be required.

I have also attached a photo of the led beam spot on the IR detection card.

Attachment 1: IMG_20180612_163831.jpg
IMG_20180612_163831.jpg
  14633   Thu May 23 10:18:39 2019   Kruthi   Update   Cameras   CCD calibration

On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got. Pictures of the LED circuit, schematic and setup are attached. The power meter readings fluctuated quite a bit for input voltages (Vcc) > 8 V, so I expect a maximum uncertainty of 50 µW to be on the safe side. Though the readings at lower input voltages didn't vary much over time (variation < 2 µW), I don't know how reliable the Ophir power meter is at such low power levels. The optical power output of the LED was linear for input voltages from 10 V to 20 V. I'll proceed with the CCD calibration soon.

Input voltage (Vcc, V)    Optical power
0 (dark reading) 1.6 nW
2 55.4 µW
4 215.9 µW
6 0.398 mW
8 0.585 mW
10 0.769 mW
12 0.929 mW
14 1.065 mW
16 1.216 mW
18 1.330 mW
20 1.437 mW
22 1.484 mW
24 1.565 mW
26 1.644 mW
28 1.678 mW

Attachment 1: setup.jpeg
setup.jpeg
Attachment 2: led_circuit.jpeg
led_circuit.jpeg
Attachment 3: led_schematic.pdf
led_schematic.pdf
  14621   Sat May 18 12:19:36 2019   Kruthi   Update      CCD calibration and telescope design

I went through all the elog entries related to CCD calibration. I was wondering if we can use Spectralon diffuse reflectance standards (https://www.labsphere.com/labsphere-products-solutions/materials-coatings-2/targets-standards/diffuse-reflectance-standards/diffuse-reflectance-standards/) instead of a white paper as they would be a better approximation to a Lambertian scatterer.

Telescope design:
On calculating the accessible u-v ranges and the % error in magnification (more precisely, % deviation), I got % deviation of order 10 and in some cases of order 100 (attachments 1 to 4), which matches Pooja's calculations. But I'm not able to reproduce Jigyasa's % error calculations, where the % error is of order 10^-1. I couldn't find the code she used for these calculations, and I have emailed her about it. We can still image with the 150-250 mm combination proposed by Jigyasa, but I don't think it ensures maximum usage of the pixel array. Also, for this combination the resulting conjugate ratio will be greater than 5, so using plano-convex lenses would reduce spherical aberrations. I also explored other focal length combinations such as 250-500 mm and 500-500 mm. In these cases both lenses will have f-numbers greater than 5, but the conjugate ratios will be less than 5, so biconvex lenses would be a better choice.

Constraints: available lens tube length (max value of d) = 3" ; object distances range (u) = 70 cm to 150 cm ; available cylindrical enclosures (max value of d+v) are 52cm and 20cm long (https://nodus.ligo.caltech.edu:8081/40m/13000).

I calculated the resultant image distance (v) and the required distance between the lenses (d) for fixed magnifications (i.e. m = -0.06089 and m = -0.1826 for imaging the test masses and the beam spot respectively) and different values of u. This way we can ensure that no pixels are wasted. The focal length combinations of 300-300 mm (for imaging the beam spot) and 100-125 mm (for imaging the test masses) were the only ones that gave all-positive values for d and v over the given range of u (attachments 5-6). But here d ranges from 0 to 30 cm in the first case, which exceeds the available lens tube length. Also, in the second case the f-numbers will be less than 5 for 2" lenses and may thus result in spherical aberration.
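A minimal sketch of the kind of two-thin-lens calculation described here (paraxial and aberration-free; the lens values, object distance and sign conventions below are illustrative, only the target magnification is taken from the text):

    import numpy as np

    def two_lens(u, d, f1, f2):
        """Paraxial two-thin-lens model: object distance u, lens separation d.
        Convention: 1/si = 1/f - 1/so, per-lens magnification -si/so."""
        si1 = 1.0 / (1.0 / f1 - 1.0 / u)       # image formed by the first lens
        so2 = d - si1                          # object for the second lens (negative = virtual)
        si2 = 1.0 / (1.0 / f2 - 1.0 / so2)     # final image distance v
        m = (-si1 / u) * (-si2 / so2)          # overall magnification
        return si2, m

    # Example: scan the separation d for m = -0.1826 with 300 mm + 300 mm lenses, u = 1000 mm
    u, f1, f2, m_target = 1000.0, 300.0, 300.0, -0.1826
    d = np.linspace(1.0, 300.0, 3000)
    v, m = two_lens(u, d, f1, f2)
    k = np.nanargmin(np.abs(m - m_target))
    print(f"d = {d[k]:.1f} mm, v = {v[k]:.1f} mm, m = {m[k]:.4f}")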

All this fuss about f-numbers, conjugate ratios, and plano-convex/biconvex lenses is to reduce spherical aberrations. But how much will spherical aberrations affect our readings? 

We have two 2" biconvex lenses of 150mm focal length and one 2" biconvex lens of focal length 250mm in stock. I'll start off with these and once I have a metric to quantify spherical aberrations we can further decide upon lenses to improve the telescopic lens system.

Attachment 1: 15-25.png
15-25.png
Attachment 2: 25-25.png
25-25.png
Attachment 3: 25-50.png
25-50.png
Attachment 4: 50-50.png
50-50.png
Attachment 5: 30-30_for_1%22.png
30-30_for_1%22.png
Attachment 6: 10-12.5_for_3%22.png
10-12.5_for_3%22.png
  13986   Tue Jun 19 14:08:37 2018   pooja   Update   Cameras   CCD calibration using LED1050E

Aim: To measure the optical power from the LED using a power meter.

Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the aperture angle is approximately 2·tan⁻¹(2.5/7) ≈ 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The optical power measured using a power meter and the corresponding input voltages are listed below.

Input voltage (Vcc, V)    Optical power
0 (dark reading) 0.8 nW
10 1.05 mW
12 1.15 mW
15 1.47 mW
16 1.56 mW
18 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

  13991   Wed Jun 20 20:39:36 2018   pooja   Update   Cameras   CCD calibration using LED1050E

 

Quote:

Aim: To measure the optical power from led using a powermeter.

Yesterday Gautam drilled a larger hole of diameter 5mm in the box as an aperture for led (aperture angle is approximately 2*tan-1(2.5/7) = 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurents of optical power measured using a powermeter and the corresponding input voltages are listed below.

Input voltage (Vcc in V) Optical power
0 (dark reading) 0.8 nW
10 1.05 mW
12 1.15 mW
15 1.47 mW
16 1.56 mW
18 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

Yes.

  8880   Fri Jul 19 12:23:34 2013   manasa   Update   CDS   CDS FE not happy

I found CDS rt processes in red. I did 'mxstreamrestart' from the medm. It did not help. Also ssh'd into c1iscex and tried 'mxstreamrestart' from the command line. It did not work either.

I thought restarting the frame builder would help, so I ssh'd to fb. But when I tried to restart fb I got the following error:

controls@fb ~ 0$ telnet fb 8088
Trying 192.168.113.202...
telnet: connect to address 192.168.113.202: Connection refused

 

Screenshot-Untitled_Window.png

  8881   Fri Jul 19 14:04:24 2013 KojiUpdateCDSCDS FE not happy

daqd was restarted.


- tried telnet fb 8088 on rossa => same error as manasa had

- tried telnet fb 8087 on rossa => same result

- sshed into fb (ssh fb)

- tried to find daqd by ps -def | grep daqd => not found

- looked at wiki https://wiki-40m.ligo.caltech.edu/New_Computer_Restart_Procedures?highlight=%28daqd%29

- the wiki page suggested the following command to run daqd: /opt/rtcds/caltech/c1/target/fb/daqd -c ./daqdrc &

- ran ps -def | grep nds => already exists. Left it untouched.

- Left fb.

- tried telnet fb 8087 on rossa => now it works

  4770   Tue May 31 11:26:29 2011 josephbUpdateCDSCDS Maintenance

1) Checked in the changes I had made to the c1mcp.mdl model just before leaving for Elba.

2) The c1x01 and c1scx kernel modules had stopped running due to an ADC timeout. 

According to dmesg on c1iscex, they died at 3426838 seconds after starting (which corresponds to ~39 days).  "uptime" indicates c1iscex was up for 46 days, 23 hours. So my guess is that about 8 days ago (last Monday or Tuesday) they both died when the ADCs failed to respond quickly enough, for an unknown reason.

I used the kill scripts (in /opt/rtcds/caltech/c1/scripts/) to kill c1spx, c1scx, and c1x01.  I then used the start scripts to start c1x01, then c1scx, and then finally c1spx.  They all came up fine.

The status screen is now all green.  I re-enabled damping on ETMX and it seems to be happy. A small kick of the optic shows the appropriately damped response.

  12148   Fri Jun 3 13:05:18 2016 ericqUpdateCDSCDS Notes

Some CDS related things:


Keith Thorne has told us about a potential fix for our framebuilder woes. Jamie is going to be at the 40m next week to implement this, which could interfere with normal interferometer operation - so plan accordingly. 


I spent a little time doing some plumbing in the realtime models for Varun's audio processing work. Specifically, I tried to spin up a new model (C1DAF), running on the c1lsc machine. This included:

  • Removing the unused TT3 and TT4 parts from the IOO block in c1ass.mdl, freeing up some DAC outputs on the LSC rack
  • Adding an output row to the LSC input matrix which pipes to a shared memory IPC block. (This seemed like the simplest way for the DAFI model to have access to lots of signals with minimal overhead).
  • Removing two unused ADC inputs from c1lsc.mdl (that went to things like PD_XXX), to give c1daf.mdl the required two ADC inputs - and to give us the option of feeding in some analog signals.
  • Editing the rtsystab file to include c1daf in the list of models that run on c1lsc
  • Editing the existing DAFI .mdl file (which just looked like an old recolored cut-n-paste of c1ioo.mdl) to accept the IPC and ADC connections, and one DAC output that would go to the fibox. 

The simple DAFI model compiled and installed without complaint, but doesn't successfully start. For some reason, the frontend never takes the CPU offline. Jamie will help with this next week. Since things aren't working, these changes have not been committed to the userapps svn.

  3963   Mon Nov 22 13:16:52 2010 josephbSummaryCDSCDS Plan for the week

CDS Objectives for the Week:

Monday/Tuesday:

1) Investigate ETMX SD sensor problems

2) Fully check out the ETMX suspension and get that to a "green" state.

3) Look into cleaning up target directories (merge old target directory into the current target directory) and update all the slow machines for the new code location.

4) Clean up GDS apps directory (create link to opt/apps on all front end machines).

5) Get Rana his SENSOR, PERROR, etc channels.

Tuesday/Wednesday:

6) Install LSC IO chassis and necessary cabling/fibers.

7) Get LSC computer talking to its remote IO chassis

Wednesday:

8) If time, connect and start debugging Dolphin connection between LSC and SUS machines

 

  16208   Thu Jun 17 11:19:37 2021 Ian MacMillanUpdateCDSCDS Upgrade

Jon and I tested the ADC and DAC cards in both of the systems on the test stand. We had to swap out an 18-bit DAC for a 16-bit one that worked, but now both machines have at least one working ADC and DAC.

[Still working on this post. I need to look at what is in the machines to say everything ]

  16217   Mon Jun 21 17:15:49 2021 Ian MacMillanUpdateCDSCDS Upgrade

Anchal and I wrote a script (Attachment 1) that will test the ADC and DAC connections with inputs on the INMON from -3000 to 3000. We could not run it because some of the channels seemed to be frozen. 
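For context, a stripped-down sketch of the kind of sweep the script performs (hypothetical channel names, assuming pyepics and a DAC output looped back into an ADC channel); see Attachment 1 for the actual code:

# Rough sketch only - the real test is Attachment 1. The channel names below
# are placeholders, not the ones used on the test stand.
import time
import numpy as np
from epics import caput, caget

DAC_OFFSET = "X1:TST-DAC_OUT_OFFSET"   # hypothetical EPICS record driving the DAC
ADC_INMON  = "X1:TST-ADC_IN_INMON"     # hypothetical INMON readback of the looped-back ADC

readings = []
for value in np.linspace(-3000, 3000, 13):
    caput(DAC_OFFSET, value)           # drive the DAC output
    time.sleep(0.5)                    # allow the value to propagate and settle
    readings.append((value, caget(ADC_INMON)))

for sent, seen in readings:
    print(f"sent {sent:8.1f}  ->  read back {seen}")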

Attachment 1: DAC2ADC_Test.py
import os
import time
import numpy as np
import subprocess
from traceback import print_exc
import argparse


def grabInputArgs():
    parser = argparse.ArgumentParser(
... 75 more lines ...
  3127   Mon Jun 28 12:48:04 2010 josephbSummaryCDSCDS adapter board notes

The following is according to the drawing by Ben Abbott found at http://www.ligo.caltech.edu/~babbott/40m_sus_wiring.pdf

This applies to SUS:

Two ICS 110Bs.  Each has 2 (4 total) 44-conductor shielded cables going to the DAQ Interface Chassis (D990147-A).  See pages 2 and 4.

Three Pentek 6102 Analog Outputs to LSC Anti-Image Board (D000186 Rev A).  Each connected via 40 conductor ribbon cable (so 3 total). See page 5.

Eight XY220 to various whitening and dewhitening filters.  50 conductor ribbon cable for each (8 total). See page 10.

Three Pentek 6102 Analog Input to Op Lev interface board. 40 conductor ribbon cable for each (3 total).  See page 13.

 

The following look to be part of the AUX crate, and thus don't need replacement:

Five VMIC113A to various Coil Drives, Optical Levers, and Whitening boards.  64 conductor ribbon cable for each (5 total). See page 11.

Three XY220 to various Coil boards. 50 conductor ribbon for each (3 total).  See page 11.

The following is according to the drawing by Jay found at http://www.ligo.caltech.edu/~jay/drawings/d020006-03.pdf

This applies to WFS and LSC:

Two XY220 to whitening 1 and 2 boards.  50 conductor ribbon for each (2 total).  See page 3.

Pentek 6102 to LSC Anti-image. 50 conductor ribbon. (1 total). See page 5.

 

It is unclear whether the following belong to the FE or the Aux crate.  Unable to check the physical setup at the moment.

One VMIC3113A to LSC I & Q, RFAM, QPD INT. 64 conductor ribbon cable. (Total 1).  See page 4.

One XY220 to QPD Int.  50 conductor ribbon cable. (Total 1). See page 4.

 

The following look to be part of WFS, and aren't needed:

Two Pentek 6102 Analog Input to WFS boards. 40 conductor ribbon cables (2 Total). See page 1.

The following are part of the Aux crate, and don't need to be replaced:

Two VMIC3113A to Demods, PD, MC servo amp, PZT driver, Anti-imaging board. 64 conductor ribbon cable (2 Total). See page 3.

Two XY220 to Demods, MC Servo Amp, QPD Int boards.  50 conductor ribbon cable (2 Total). See page 3.

Three VMIC4116 to Demod and whitening boards.  50 conductor ribbon cable (3 Total). See page 3.

  3129   Mon Jun 28 21:26:05 2010 ranaSummaryCDSCDS adapter board notes

Those drawings are an OK start, but it's obvious that things have changed at the 40m since 2002. We cannot rely on these drawings to determine all of the channel counts, etc.

I thought we had already been through all this... If not, we'll have to spend one afternoon going around and marking it all up.

  3156   Fri Jul 2 11:06:38 2010 josephb, kiwamuUpdateCDSCDS and Green locking thoughts

Kiwamu and I went through and looked at the spare channels available near the PSL table and at the ends.

First, I noticed I need another 4 DB37 ADC adapter box, since there are 3 Pentek ADCs there, which I don't think Jay realized.

PSL Green Locking

Anyways, in the IOO chassis that will be put in, we have 8 spare ADC channels, which come in the DB37 format.  So one option is to build an 8-BNC converter that plugs into that box.

The other option is to build 4-pin Lemo connectors and go in through the Sander box, which currently goes to the 110B ADC and has some spare channels.

For the DAC at the PSL, the IOO chassis will have 8 spare DAC channels since there's only 1 Pentek DAC.  These would be in the IDC40 cable format, since that's what the blue DAC adapter box takes.  An 8-channel DAC box to 40-pin IDC would need to be built.

 

End Green Locking

The ends have 8 spare DAC channels, again on 40-pin IDC cable.  A box similar to the 8-channel DAC box for the PSL would need to be built.

The ends also have spare 4-pin Lemo capacity.  It looked like there were 10 channels or so still unused, so Lemo connections would need to be made.  There don't appear to be any spare DB37 connectors available on the adapter box, so Lemo via the Sander box is the only way.

 

Notes

Joe needs to provide Kiwamu with cabling pin outs.

If Kiwamu makes a couple of spares of the 8-BNC-to-DB37 connector boards, there's a spare DB37 ADC input on the SUS machine we could use, providing 8 more channels for test use.

  13837   Sun May 13 15:15:18 2018 gautamUpdateGeneralCDS crash

I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I am not able to bring back any of the models on any of the machines; during the restart procedure, they all fail. The logfile reads (for the c1ioo front end, but they all behave the same):

[  309.783460] c1x03: Initializing space for daqLib buffers
[  309.887357] CPU 2 is now offline
[  309.887422] c1x03: Sync source = 4
[  309.887425] c1x03: Waiting for EPICS BURT Restore = 2
[  309.946320] c1x03: Waiting for EPICS BURT 0
[  309.946320] c1x03: BURT Restore Complete
[  309.946320] c1x03: Corrupted Epics data:  module=0 filter=1 filterType=0 filtSections=134610112
[  309.946320] c1x03: Filter module init failed, exiting
[  363.229086] c1x03: Setting stop_working_threads to 1
[  364.232148] DXH Adapter 0 : BROADCAST - dx_user_mcast_unbind - mcgroupid=0x3
[  364.233689] Will bring back CPU 2
[  365.236674] Booting Node 1 Processor 2 APIC 0x2
[  365.236771] smpboot cpu 2: start_ip = 9a000
[  309.946320] Calibrating delay loop (skipped) already calibrated this CPU
[  365.251060] NMI watchdog enabled, takes one hw-pmu counter.
[  365.252135] Brought the CPU back up
[  365.252138] c1x03: Just before returning from cleanup_module for c1x03

Not sure what is going on here, or what "Corrupted Epics data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model; it fails, giving the error message:

make[1]: Leaving directory '/opt/rtcds/caltech/c1/rtbuild/3.4'
make[1]: /cvs/cds/rtapps/epics-3.14.12.2_long/modules/seq/bin/linux-x86_64/snc: Command not found
make[1]: *** [build/c1x03epics/c1x03.c] Error 127
Makefile:28: recipe for target 'c1x03' failed
make: *** [c1x03] Error 1

I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?

I've shut down all watchdogs until this is resolved.

Attachment 1: vertexFEs_crashed.png
vertexFEs_crashed.png
  13838   Sun May 13 17:31:51 2018 gautamUpdateGeneralCDS crash

As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once the paths were restored by Johannes, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.

Anyways, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes; they are absent from all the other processes. How can this be debugged further?

Quote:
 

I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?

Attachment 1: CDS_overview_20180513.png
CDS_overview_20180513.png
Attachment 2: AS_1210293643.jpeg
AS_1210293643.jpeg
  13839   Sun May 13 20:48:38 2018 johannesUpdateGeneralCDS crash

I think the root of the problem is that the /opt/rtapps/ and /cvs/cds/rtapps/ mounting locations point to the same directory on the nfs server. Gautam and I were cleaning up the /cvs/cds/caltech/target/ directory, placing the previous contents of /cvs/cds/caltech/target/c1auxex/, including database files and startup instructions, in /cvs/cds/caltech/target/c1auxex_oldVME/, and then moved /cvs/cds/caltech/target/c1auxex2/, which has the channel database and initialization files for the Acromag DAQ, to /cvs/cds/caltech/target/c1auxex/.

This also required updating the systemd entries on c1auxex to point to the changed directory. While confirming that everything worked as before, we noticed that upon startup the EPICS IOC complains about not being able to find the caRepeater binary. This was not new and has not limited DAQ functionality in the past, but we wanted to fix it, as it seemed to be a simple PATH issue. While the paths are all correctly defined in the user login shell, systemd runs on a lower level and doesn't know about them. One thing we tried was to let systemd execute /cvs/cds/rtapps/epics-3.14.12.2_long/etc/epics-user-env.sh to initialize EPICS. It was strange that the content of that file was pointing to /opt/rtapps/epics-3.14.12.2_long/base, which is not mounted on the slow machines, so we changed the /opt/ to /cvs/cds/, not realizing that the frontends read from the same directory (as Gautam said, /cvs/cds does not exist as a mount point on the frontends). It ended up not working this way, and apparently I forgot to change it back during cleanup. But worse, I never elogged it!

In the end, we managed to give systemd the correct path definitions by explicitly calling them out in /cvs/cds/caltech/target/c1auxex/ETMXenv, to which a reference was added in the systemd service file. The caRepeater warning no longer appears.

  15791   Tue Feb 2 23:29:35 2021 KojiUpdateCDSCDS crash and CDS/IFO recovery

I worked around the racks and the feedthru flanges this afternoon and evening. This inevitably crashed the c1lsc real-time process.
Rebooting c1lsc caused multiple crashes (as usual), and I had to hard reboot c1lsc/c1sus/c1ioo.
This made the "DC" indicator of the IOPs for these hosts **RED**.

This looked like the usual timing issue. It seems that "ntpdate" is not available in the new system. (When was it updated?)

The hardware clocks (RTC) of these hosts were set to PST, while the functional end host showed UTC. So I copied the UTC time from the end machine to the vertex machines.
For the time adjustment, the standard "date" command was used:

> sudo date -s "2021-02-03 07:11:30"

This did the trick. Once the IOPs were restarted, the "DC" indicators returned to **Green**; restarting the other processes was straightforward, and now the CDS indicators are all green.

controls@c1iscex:~ 0$ timedatectl
      Local time: Wed 2021-02-03 07:35:12 UTC
  Universal time: Wed 2021-02-03 07:35:12 UTC
        RTC time: Wed 2021-02-03 07:35:26
       Time zone: Etc/UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

NTP synchronization is not active. Is this OK?


With the recovered CDS, the IMC was immediately locked and the autolocker started to function after a few pokes (like manually running the "mcup" script). However, I didn't see any light on the AS/REFL cameras or on the test mass faces. I'm sure the IMC alignment is OK, so this means the TTs are not well aligned.

So, I burtrestored c1assepics with the 12:19 snapshot. This immediately brought back the spots on REFL/AS.

Then the arms were aligned, locked, and ASSed. When I first tried to lock the FP arms, the transmissions were at the level of 0.1~0.3, so some manual alignment of ITMY and the BS was necessary. Even after getting TRs of ~0.8 I still could not lock the arms; the signs of the servo gains had to be flipped, to -0.143 for the X arm and -0.012 for the Y arm, and then the arms locked. ASS worked well and the ASS offsets were offloaded to the SUSs.

 

  15792   Wed Feb 3 15:24:52 2021 gautamUpdateCDSCDS crash and CDS/IFO recovery

Didn't get a chance to comment during the meeting - this was almost certainly a coincidence. I have never had to do this. I assert, based on the ~10 lab-wide reboots I have had to do in the last two years, that whether the timing errors persist on reboot or not is not deterministic. But this is beyond my level of CDS knowledge, so I'm happy for Rolf / Jamie to comment. I use the reboot script - if that doesn't work, I use it again until the systems come back without any errors.

Quote:

This looked like the usual timing issue. It looked like "ntpdate" is not available in the new system. (When was it updated?)

The hardware clock (RTC) of these hosts are set to be PST while the functional end host showed UTC. So I copied the time of the UTC time from the end to the vertex machines.
For the time adjustment, the standard "date" command was used

> sudo date -s "2021-02-03 07:11:30"

This made the trick. Once IOP was restarted, the "DC" indicators returned to **Green**, restarting the other processes were straight forward and now the CDS indicators are all green.

I don't think this is a problem; NTP synchronization is handled by timesyncd now.

Quote:

NTP synchronization is not active. Is this OK?

I'll defer restoring the LSC settings etc., since no interferometer activity is expected for a while.
