ID   Date   Author   Type   Category   Subject
  13492   Tue Dec 26 17:24:24 2017   Steve | Update | General | power outage

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not being pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. The 2W Innolight was turned on and a manual beam block was placed in its beam path.

The 3 AC units were turned on; room temperature was 84°F.

Attachment 1: powerOutage.png
  13493   Thu Dec 28 17:22:02 2017   gautam | Update | General | power outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end, outside and inside. I replaced the N2 on Sunday night, so the system should be OK as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elogging suggests that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if the mx process is supposed to start automatically on FB at startup. A sketch of automating this check is below.
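For future outage recoveries, a minimal Python sketch of automating this check (this assumes the systemd unit names quoted above, and that it runs on FB with sudo privileges - it is an illustration, not an installed script):

  import subprocess

  def active(unit):
      # True if the given systemd unit reports active
      return subprocess.run(["systemctl", "is-active", "--quiet", unit]).returncode == 0

  # start mx first if it isn't running, then restart the daqd processes
  if not active("mx"):
      subprocess.run(["sudo", "systemctl", "start", "mx"], check=True)
  # systemctl does its own glob matching on loaded unit names
  subprocess.run(["sudo", "systemctl", "restart", "daqd_*"], check=True)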

Attachment 1: 28.png
  13494   Sun Dec 31 12:43:50 2017   rana | Summary | Electronics | SR560: reworking

I have ordered some LSK389A (in both the SOIC-8 and TO-71 packages) to replace the SR560's default front end FET pair (NPD5565).

I'm going to rework s/n 00619 once these new FETs come in. Also ordered 100 of the SOIC-8 to DIP-8 adapter boards from Digikey.

This plot shows the current performance compared to the Rai Low Noise box. I expect the FETs should let us get to ~1.5 nV/rHz with the SR560.

Attachment 1: Preamps.pdf
  13495   Tue Jan 2 15:43:35 2018   Steve | Update | VAC | pumpdown after power outage

 

Quote:

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not being pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. The 2W Innolight was turned on and a manual beam block was placed in its beam path.

The 3 AC units were turned on; room temperature was 84°F.

IFO pumped down from 44 mTorr to 9.6e-6 Torr with the Maglev backed by only TP3.

The aux drypump was helping our standard drypump during this 1 hour period. TP3 reached 32°C and slowed down to 47 krpm.

The peak foreline pressure at P2  was ~3 Torr

Hornet cold cathode gauge settings: research mode, air; 2830 HV, 1e-4 A at 9.6e-6 Torr [3110 HV, 8e-5 A at 7.4e-6 Torr one day later].

The annuli are at 2 Torr, not pumped.

Valve configuration:  vacuum normal, RGA is still off

The PSL shutter opened automatically. The manual block was removed.

End IR lasers and doublers are turned on.

 

NOTE: the Maglev "rotation X" readback on the vacuum MEDM screen is not working! The "C1:Vac-TP1_rot" channel was removed. Use "NORMAL X" for rotation monitoring.

*We removed this (i.e. rotation) field from the MEDM screen to avoid confusion.

Attachment 1: pumpdown_from_44_mTorr.png
  13496   Tue Jan 2 16:24:29 2018   gautam | Update | safety | Projector periodically shuts itself off

I noticed this behaviour starting ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off <1 minute after being manually turned on by the power button. AFAIK, there were no changes made to the projector/Zita. Perhaps this is some kind of in-built mechanism signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back, which says the last bulb replacement was on 15 Aug 2017).

  13497   Tue Jan 2 16:37:26 2018   gautam | Update | Optimal Control | Oplev loop tuning

I've made various changes to the optimal loop design approach, but am still not having much success. A summary of changes made:

  1. Parametrization of filter - enforcing uniqueness
    • Previously, the input to the particle swarm was a vector of root frequencies and associated Q-factors.
    • This parametrization is not unique - permuting the order of the roots yields the same filter, but particles traversing the high-dimensional (65-dimensional) parameter space may have to cross very expensive regions in order to converge to the global minimum / best-performing particle.
    • One way around this is to parametrize the filter by the highest pole/zero frequency, and then specify the remaining roots by their cumulative separation from this highest root. This guarantees that a unique vector input to the particle swarm function specifies a unique filter (see the sketch after this list).
    • To avoid negative frequencies, I manually set a particular element of the vector to 0 if the cumulative sum yields a negative frequency. I believe this is how MATLAB's particle swarm implements the "constraints" in its constrained optimization routines.
  2. Cost function - I've reformulated this into something that makes more sense to me, but probably can be improved further.
    • Term #1 - the area (evaluated with MATLAB's trapz utility) between the in-loop (i.e. suppressed) error signal and the sensing noise spectrum (for the latter, I use the orange curve from this plot). This is a signed number, so suppression below the sensing noise is penalized. Target value is 1 urad rtHz. One problem I see with this approach: if we believe the sensing noise measurement, then even at 10 mHz the sensing noise is below the out-of-loop error signal level, so the optimizer doesn't want to make the loop AC coupled.
    • Term #2 - stability margin. I'm using this number, which is the distance-of-closest-approach to the point -1 in the Nyquist plot, instead of gain and phase margins, as this yields a more conservative robustness measure. Target value is 0.65.
    • Term #3 - A2L contribution of in-loop control signal. This contribution is calculated using measurements of A2L coupling for the DRMI. The actual term that goes into the cost function is the ratio of the area under the in-loop control signal to that under the seismic noise curve above 35Hz. Further, f>100Hz is given 10x the weight of 35Hz<f<100Hz (I've not really played around with this weighting function). The goal is to be as close to the seismic curve as possible, at which point this term becomes 1.
    • Terms #4 and #5 - the maximum open loop gain evaluated in a 1Hz wide bin centered around the bounce and roll resonances. The aim is to not exceed -40dB in these bins. Perhaps this needs to be reformulated, as the optimizer seems to be giving this term too much importance - the optimized loops have extremely deep bandstops around the BR resonances.
    • To normalize each term, I divide by the "target" value mentioned above, so as to make the various terms comparable.
    • Each term in the cost function has two regimes - one where it is rapidly varying close to the desired operating point, and one far away where the cost still increases monotonically, but slower (see Attachment #2).
    • A scalar cost function is evaluated by taking a weighted sum of the above terms. The weights are chosen so as to make each term ~10 for the controller currently implemented.
    • All of the above are only applicable if the resulting loop is stable - else, a large cost is assigned (exponential of sum of real parts of poles of OLTF).
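As a sanity check on two of the ingredients above - the unique root parametrization, and the distance-to-(-1) stability margin of Term #2 - here is a minimal Python sketch (the actual optimization code is MATLAB, so these names and details are purely illustrative):

  import numpy as np

  def vector_to_roots(x):
      # x[0] is the highest root frequency; x[1:] are the cumulative
      # separations walking down from it, removing permutation ambiguity
      freqs = np.concatenate(([x[0]], x[0] - np.cumsum(x[1:])))
      # clamp any root pushed negative by the cumulative sum to 0,
      # mimicking the constraint handling described above
      return np.where(freqs < 0, 0.0, freqs)

  def stability_margin(olg):
      # distance of closest approach of the open-loop gain to -1 in the
      # Nyquist plane, evaluated on a frequency grid; a single number that
      # is more conservative than separate gain/phase margins
      return np.min(np.abs(olg + 1))

  # example: a 4-root filter parametrized as [f_max, df1, df2, df3]
  print(vector_to_roots(np.array([100.0, 30.0, 50.0, 40.0])))  # [100. 70. 20. 0.]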

Attachment #1 shows the outcome of a typical optimization run - so while I am having some more success with this than before, where the PSO algorithm was stalling and terminating before any actual optimization was done, it seems like I need to re-think the cost function yet again...

Attachment #2 shows the current terms entering the cost function, and their "desired" values.

The current version of the code I am using is here: although I may not have included some of the data files required to run it, to be fixed...

Attachment 1: loopOpt_180102_1706.pdf
Attachment 2: globalCosts.pdf
  13498   Wed Jan 3 12:33:16 2018   rana | Update | Optimal Control | Oplev loop tuning

When putting code into git.ligo.org, one way to have automated testing is to use the Gitlab CI. This is an automated 'checker', much like the 'Travis' system used in GitHub. Essentially, you give it a makefile which it runs somewhere, and your GIT repo web page gets a little 'failed/passing' badge telling you if it's working. You can also browse the logs to see in detail what happened. This avoids the 'but it works on my computer!' thing that we usually hear.

Quote:

The current version of the code I am using is here: although I may not have included some of the data files required to run it, to be fixed...

 

  13499   Wed Jan 3 15:13:55 2018   Steve | Update | General | projector light bulb replaced

Bulb  is replaced.

Quote:

I noticed this behaviour starting ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off <1 minute after being manually turned on by the power button. AFAIK, there were no changes made to the projector/Zita. Perhaps this is some kind of in-built mechanism signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back, which says the last bulb replacement was on 15 Aug 2017).

 

  13500   Wed Jan 3 16:25:32 2018   awade | Update | Optimal Control | Oplev loop tuning

Another cool feature is client-side pre-commit hooks. They can be used to run checks on the local version at commit time, refusing the commit unless the checks exit 0.

These can be the same as the Gitlab CI checks or just basic code-quality checks. I use them to prevent jupyter notebooks being committed with uncleared cells. A hook needs to be set up on the user's computer manually and is not automatically cloned with the repository; a script can be included in the repo to do this, run manually on first clone. A minimal sketch of such a hook is below.
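For concreteness, a sketch of the notebook check as a pre-commit hook (saved as .git/hooks/pre-commit and made executable) - this illustrates the idea and is not the exact script I use:

  #!/usr/bin/env python3
  # refuse the commit if any staged .ipynb still has uncleared output cells
  import json, os, subprocess, sys

  staged = subprocess.check_output(
      ["git", "diff", "--cached", "--name-only"], text=True).splitlines()

  dirty = []
  for path in staged:
      if not path.endswith(".ipynb") or not os.path.exists(path):
          continue
      with open(path) as f:
          nb = json.load(f)
      if any(cell.get("outputs") for cell in nb.get("cells", [])
             if cell.get("cell_type") == "code"):
          dirty.append(path)

  if dirty:
      print("Commit refused - notebooks with uncleared outputs:")
      print("\n".join("  " + p for p in dirty))
      sys.exit(1)  # nonzero exit aborts the commit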

Quote:

When putting code into git.ligo.org, one way to have automated testing is to use the Gitlab CI. This is an automated 'checker', much like the 'Travis' system used in GitHub. Essentially, you give it a makefile which it runs somewhere, and your GIT repo web page gets a little 'failed/passing' badge telling you if it's working. You can also browse the logs to see in detail what happened. This avoids the 'but it works on my computer!' thing that we usually hear.

 

  13501   Wed Jan 3 18:00:46 2018   gautam | Update | PonderSqueeze | plan of action

Notes of stuff we discussed @ today's meeting, and afterwards, towards measuring ponderomotive squeezing at the 40m.

  1. Displacement noise requirements
    • Kevin is going to see if we can measure any kind of squeezing on a short timescale by tuning various parameters.
    • Specifically, without requiring crazy ultra low current noise level for the coil driver noise.
  2. Investigate how much actuation range we need for lock acquisition and maintaining lock.
    • Specifically, for DARM.
    • We will measure this by having the arms controlled with ALS in the CARM/DARM basis.
    • Build up a noise budget for this, see how significant the laser noise contribution is.
  3. RC folding mirrors
    • In the present configuration, these are introducing ~2.5% RT loss in the RCs.
    • This affects PRG, and on the output side, measurable squeezing.
    • We want to see if we can relax the requirements on the RC folding mirrors such that we don't have to spend > 20 k$.
    • Specifically, consider spec'ing the folding mirror coatings to only have HR @1064 nm, and take what we get at 532 nm.
    • But still demand tolerances on RoC driven by mode-matching between the RCs and the arm cavities.
  4. ALS with Beat Mouth
    • Use the fiber coupled light from the ends to make the ALS signals.
    • Gautam will update diagram to show the signal chain from end-to-end (i.e. starting at AUX laser, ending at ADC input).
    • Make a noise budget for the same - preliminary analysis suggests a sensing noise floor of ~10 mHz/rtHz.

RXA:

  • For the ALS-DARM budget the idea is that we can do lock acquisition better, so we don't need to care about the acquisition reqs. i.e. we just need to set the ETM coil driver current range based on the DARM in-lock values.
    • To get the coil driver noise to be low enough to detect squeezing we need to use a ~10-15 kOhm series resistor.
    • We assume that all DAC and coil driver input noises can be sufficiently filtered.
    • We are assuming that we don't change the magnet sizes or the number of coil windings in the OSEMs.
    • The noise in the ITMs doesn't matter because we don't use them for any locking activity, so we can easily set the coil driver series resistors to 15 kOhm.
    • We will do the bias for the ETMs and ITMs using some HV circuit (not the existing ones on the coil driver boards), doing the summation after the main coil driver series resistor. This HV bias module needs to handle the ~(2 V / 400 Ohm) = 5 mA which is used now; driving that through 15 kOhm requires 60+ V drivers (worked out after this list).
  • IF we can get away with doing the ALS beat note with just red (still using GREEN light from the end laser to lock to the arms from the ends), we will not have any requirements for the 532 nm transmission of any optics in the DRMI area.
    • Get some quotes for the new PR/SR mirrors having tight RoC tolerance, high R for 1064, and no spec for 532.
    • Check that the 1-way fiber noise for 1064 nm is < 100 mHz/rtHz in the 50-1000 Hz band. If it's more, explore putting better acoustic foam around the fiber run.
    • Improve the mode-matching of the IR beam into the fibers at the ends. We want >80% to reduce the noise due to scattering; we don't really care about the amount of light available in the PSL - this is just to reduce the IR-ALS noise.
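Working out the numbers in the HV bias bullet above:

  \[ I_{\rm bias} = \frac{2\,{\rm V}}{400\,\Omega} = 5\,{\rm mA}, \qquad V_{\rm drive} = I_{\rm bias} \times 15\,{\rm k\Omega} = 75\,{\rm V}, \]

i.e. the "60+ V" class of drivers quoted above.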
  13502   Thu Jan 4 12:46:27 2018   gautam | Update | ALS | Fiber ALS assay

Attachment #1 is the updated diagram of the Fiber ALS setup. I've indicated part numbers and power levels (optical and electrical). For the light power levels, numbers in green are for the AUX lasers, numbers in red are for the PSL.

I confirmed that the output of the power splitter is going to the "RF input" and the output of the delay line is going to the "LO input" of the demodulator box. Shouldn't this be the other way around? Unless the labels are misleading and the actual signal routing inside the 1U chassis is correctly done :/

  • Mode-matching into the fibers is rather abysmal everywhere.
  • In this diagram, only the power levels measured at the lasers and inputs of the fiber couplers are from today's measurements. I just reproduced numbers for inside the beat mouth from elog13254.
  • Inside the beat mouth, the PD output actually goes through a 20 dB coupler, which is omitted from this diagram for brevity. Both the direct and coupled outputs are available at the front panel of the beat mouth; the latter is meant for diagnostic purposes. The -8 dBm beat @30 MHz is quoted using the direct output, not the coupled output.

Still facing some CDS troubles, will start ALS recovery once I address them.

Attachment #2 is the svg file of Attachment #1, which we can update as we improve things. I'll put it on the DCC 40m tree eventually.

Attachment 1: FiberALS.pdf
Attachment 2: FiberALS.svg.zip
  13503   Thu Jan 4 14:39:50 2018   gautam | Update | General | power outage - timing error

As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.

Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...

I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/gpstime", line 9, in <module>
    load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
    ['__name__'])
  File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
    LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
    fetch_leapfile(leapfile)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
    r = requests.get(LEAPFILE_IETF)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))

 

 

  13504   Fri Jan 5 17:50:47 2018   rana | Configuration | Computers | motif on nodus

I had to do 'sudo yum install motif' on nodus so that we could get libXm.so.4 so that we could run MEDM. Works now.

  13505   Fri Jan 5 19:19:25 2018   rana | Configuration | SEI | Barry Controls 'air puck' instead of 'VOPO style' breadboard

We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.

Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.

But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.

Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?

Attachment 1: mm_slm.jpg
Attachment 2: Screen_Shot_2018-01-05_at_7.25.47_PM.png
  13506   Fri Jan 5 21:54:28 2018   rana | Update | General | power outage - timing error

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

Attachment 1: huh.png
  13507   Fri Jan 5 22:19:53 2018   gautam | Update | General | power outage - timing error

Just putting the relevant line from email from Rolf which at least identifies the problem here:

Looks like FB time is actually off by 1 year, as your timing system does not get year info.

There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.

I will look into this tomorrow. 

Quote:

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

 

  13508   Sat Jan 6 05:18:12 2018   Kevin | Update | PonderSqueeze | Displacement requirements for short-term squeezing

I have been looking into whether we can observe squeezing on a short timescale. The simulations I show here say that we can get 2 dBvac of squeezing at about 120 Hz using extreme signal recycling.

The parameters used here are

  • 100 ppm transmissivity on the folding mirrors giving a PRC gain of 40.
  • 10 kΩ series resistance for the ETMs; 15 kΩ series resistance for the ITMs.
  • 1 W incident on the back of PRM.
  • PD quantum efficiency 0.88.

The first attachment shows the displacement noise. The red curve labeled vacuum is the standard unsqueezed vacuum noise which we need to beat. The second attachment shows the same noise budget as a ratio of the noise sources to the vacuum noise.

This homodyne angle and SRC detuning give about the maximum amount of squeezing. However, there's quite a bit of flexibility and if there are other considerations, such as 100 Hz being too low, we should be able to optimize these angles (even with more pessimistic values of the above parameters) to see at least 0.2 dBvac around 400 Hz.

Attachment 1: displacement_noise.pdf
Attachment 2: noise_budget.pdf
  13509   Sat Jan 6 13:47:32 2018   rana | Update | PonderSqueeze | Displacement requirements for short-term squeezing
  • ought to tune for 210 Hz (in between power line harmonics) since 100 Hz is tough to work at due to scattering, etc.
  • rename DAC - I think what this curve shows is really the coil driver noise. The DAC noise we can always filter out with the dewhitening board; i.e. once we have 1000x attenuation between the DAC and the coil driver input, DAC noise is not dominant.
  13510   Sat Jan 6 18:27:37 2018   gautam | Update | General | power outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic. I've assumed the units are in um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room
    • Recover IFO alignment using combination of IR and Green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf
  13511   Sat Jan 6 23:25:18 2018   Kevin | Update | PonderSqueeze | Displacement requirements for short-term squeezing

 

Quote:
  • ought to tune for 210 Hz (in-between powerlines) since 100 Hz is tough to work due to scattering, etc.

We can get 1.1 dBvac at 210 Hz.

The first two attachments are the noise budgets for these optimized angles. The third attachment shows squeezing as a function of homodyne angle and SRC detuning at 210 Hz. To stay below -1 dBvac, the homodyne angle must be kept between 88.5 and 89.7 degrees and the SRC detuning must be kept between -0.04 and 0.03 degrees. This corresponds to fixing the SRC length to within a range of 0.07/360 * 1064 nm = 200 pm.
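Spelling out that conversion (assuming the convention that 360° of detuning corresponds to one wavelength of microscopic length change):

  \[ \Delta L = \frac{0.07^\circ}{360^\circ} \times 1064\,{\rm nm} \approx 207\,{\rm pm} \approx 200\,{\rm pm}. \]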

Attachment 1: displacement_noise.pdf
Attachment 2: noise_budget.pdf
Attachment 3: angles.pdf
  13512   Sun Jan 7 03:22:24 2018   Koji | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Interesting. My understanding is that this is close to signal recycling, rather than resonant sideband extraction. Is that correct?

For signal recycling, we need to change the resonant condition of the carrier in the SRC. Thus the macroscopic SRC length needs to be changed from ~5.4m to 9.5m, 6.8m, or 4.1m.
In the case of 6.8m, SRC length = PRC length. This means that we can use the PRM (T=5%) as the new SRM.

Does this T(SRM)=5% change the squeezing level?

  13513   Sun Jan 7 11:40:58 2018   Kevin | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).

For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.

  13514   Sun Jan 7 17:27:13 2018   gautam | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Maybe you've accounted for this already in the Optickle simulations - but in Finesse (software), the "tuning" corresponds to the microscopic (i.e. at the nm level) position of the optics, whereas the macroscopic lengths, which determine which fields are resonant inside the various cavities, are set separately. So it is possible to change the microscopic tuning of the SRC, which need not necessarily mean that the correct resonance conditions are satisfied. If you are using the Finesse model of the 40m I gave you as a basis for your Optickle model, then the macroscopic length of the SRC in that was ~5.38m. In this configuration, the f2 (i.e. 55MHz sideband) field is resonant inside the SRC while the f1 and carrier fields are not.

If we decide to change the macroscopic length of the SRC, there may also be a small change to the requirements on the RoCs of the RC folding mirrors. Actually, come to think of it, the difference in macroscopic cavity lengths explains the slight differences in mode-matching efficiencies between the arms and RCs I was seeing before.

Quote:

Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).

For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.

 

  13515   Sun Jan 7 20:11:54 2018   Koji | Update | PonderSqueeze | Displacement requirements for short-term squeezing

In fact, that is my point. If we use signal recycling instead of resonant sideband extraction, the "tuning" of the SRC is opposite to the current setup. We need to change the macro length of the SRC to make 55 MHz resonant with this tuning. And if we match the SRC macro length to the PRC macro length for this reason, we need to think again about the mode matching. Fortunately, we have the spare PRM (T=5%) which matches this curvature. This was the motivation for my question. We may also choose to keep the current SRM because of its higher T, and may want to evaluate the effect of the expected mode mismatch.

  13516   Mon Jan 8 20:50:01 2018   rana | Summary | Electronics | SR560: reworking

I replaced the NPD5565 with an LSK389 (SOIC-8 with DIP adapter). There was a noise reduction of ~30%, but not nearly as much as I expected. I wonder if I have to change the DC bias current on these to get the low-noise operation?

https://photos.app.goo.gl/hsMwsif7NLscsgpx1

  13517   Tue Jan 9 00:07:03 2018   johannes | Update | DAQ | etmx slow daq chassis

All parts received and assembly is near complete; a small problem was found because the two DSub connectors are too close together for two cables to fit at the same time. Gautam and I will make some additional slot panels tomorrow using a waterjet cutter, so we can spread the breakout boards out and remedy this.

Fast binary channels need to be spliced into DSub connectors; Aaron is on this. All other (slow) connections are already wired from before and have been tested for correct pins on the backplane DIN connectors.

 

The Acromag modules require only a positive supply voltage between +12 and +30 VDC and draw a maximum of 2.8 W. This raises the question of whether we want this supply rail to be regulated or to take the raw power from the Sorensens. Consulting with Ben Abbott: the Acromags in LIGO do not operate with regulated power. We could easily accommodate the standard regulator boards D1000217 in the chassis, which is probably a good idea if we want to place any custom electronics inside the chassis in the future, for example for whitening or active lowpass filtering.

  13518   Tue Jan 9 11:52:29 2018   gautam | Update | CDS | slow machine bootfest

Eurocrate key-turning reboots this morning for c1susaux, c1auxey and c1iscaux. These were responding to ping but not telnet-able. The usual precautions were taken to minimize the risk of ITMX getting stuck.

 

  13519   Tue Jan 9 21:38:00 2018   gautam | Update | ALS | ALS recovery
  • Aligned IFO to IR.
    • Ran dither alignment to maximize arm transmission.
    • Centered Oplev reflections onto their respective QPDs for ITMs, ETMs and BS, as DC alignment reference. Also updated all the DC alignment save/restore files with current alignment. 
  • Undid the first 5 bullets of elog13325. The AUX laser power monitor PD remains to be re-installed and re-integrated with the DAQ.
    • I stupidly did not refer to my previous elog of the changes made to the X end table, and so spent ages trying to convince Johannes that the X end green alignment had shifted; it turned out that the green locking wasn't working because of the 50 ohm terminator added to the X end NPRO PZT input. I am sorry for the hours wasted.
    • GTRY and GTRX at levels I am used to seeing (i.e. ~0.25 and ~0.5) now. I tweaked input pointing of green and also movable MM lenses at both ends to try and maximize this. 
    • Input green power into X arm after re-adjusting previously rotated HWP to ~100 degrees on the dial is ~2.2mW. Seems consistent with what I reported here.
    • Adjusted both GTR cameras on the PSL table to have the spots roughly centered on the monitors.
    • Will update shortly with measured OLTFs for both end PDH loops.
    • X end PDH seems to have UGF ~9kHz, Y end has ~4.5kHz. Phase margin ~60 degrees in both cases. Data + plotting code attached. During the measurement, GTRY ~0.22, GTRX~0.45.

Next, I will work on commissioning the BEAT MOUTH for ALS beat generation. 

Note: In the ~40 mins that I've been typing out these elogs, the IR lock has been stable for both the X and Y arms. But the X green has dropped lock twice, and the Y green has been fluctuating rather more, though it has managed to stay locked. I think the low frequency GTRY fluctuations are correlated with the arm cavity alignment drifting around. But the frequent X arm green lock dropouts - not sure what's up with that. Need to look at IR arm control signals and ALS signals at lock drop times to see if there is some info there.

Attachment 1: GreenLockStability.png
Attachment 2: ALS_OLTFs_20180109.pdf
Attachment 3: ALS_OLTF_data_20180109.tar.bz2
  13520   Tue Jan 9 21:57:29 2018   gautam | Update | Optimal Control | Oplev loop tuning

After some more tweaking, I feel like I may be getting closer to a cost-function definition that works.

  • The main change I made was to effectively separate the BR-bandstop filter poles/zeros and the rest of the poles and zeros.
  • So now the input vector is still a list of highest pole frequency followed by frequency separations, but I can specify much tighter frequency bounds for the roots of the part of the transfer function corresponding to the Bounce/Roll bandstops.
  • This in turn considerably reduces the swarming area - at the moment, half of the roots are for the notches, and in the (f0,Q) basis, I see no reason for the bounds on f0 to be wider than [10,30]Hz.

Some things to figure out:

  1. How the "force" the loop to be AC coupled without explicitly requiring it to be so? What should the AC coupling frequency be? From the (admittedly cursory) sensing noise measurement, it would seem that the Oplev error signal is above sensing noise even at frequencies as low as 10mHz.
  2. In general, the loops seem to do well in reducing sensing noise injection - but they seem to do this at the expense of the loop gain at ~1Hz, which is not what we want.
    • I am going to try and run the optimizer with an excess of poles relative to zeros
    • Currently, n(poles) = n(zeros), which is the condition required for elliptic low-pass filters; these achieve a fast transition between the passband and stopband, but we could just as well use a less rapid, more monotonic roll-off (compare the sketch after this list). The gain at 50 Hz might then be higher, but at 200 Hz we could perhaps do better with this approach.
  3. The loop shape between 10 and 30 Hz that the optimizer outputs seems a bit weird to me - it doesn't really converge to a bandstop. Need to figure that out.
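To illustrate the pole/zero-count point in item 2, a quick comparison of an elliptic low-pass (equal pole and zero count) against an all-pole Chebyshev design whose roll-off keeps falling at high frequency - Python/scipy here purely for illustration, since the optimizer itself is MATLAB:

  import numpy as np
  from scipy import signal

  f = np.logspace(0, 3, 1000)   # 1 Hz - 1 kHz
  w = 2 * np.pi * f
  fc = 2 * np.pi * 50           # 50 Hz corner for both designs

  # elliptic: equal pole and zero count, sharp transition, flat stopband
  b_e, a_e = signal.ellip(4, 1, 40, fc, analog=True)
  # Chebyshev type 1: excess of poles (no finite zeros), monotonic roll-off
  b_c, a_c = signal.cheby1(4, 1, fc, analog=True)

  for name, (b, a) in [("elliptic", (b_e, a_e)), ("cheby1", (b_c, a_c))]:
      _, h = signal.freqs(b, a, worN=w)
      idx = np.argmin(np.abs(f - 200))
      print(name, "gain at 200 Hz:", 20 * np.log10(np.abs(h[idx])), "dB")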
Attachment 1: loopOpt_180108_2232.pdf
  13521   Wed Jan 10 09:49:28 2018   Steve | Update | PEM | the rat is back

Five mechanical traps set inside boxes, with red-white warning tape on top of each.

Quote:

Last jump at rack Y2.

 

  13522   Wed Jan 10 12:24:52 2018   gautam | Update | CDS | slow machine bootfest

MC autolocker got stuck (judging by wall StripTool traces, it had been this way for ~7 hours) because c1psl was unresponsive, so I power cycled it. Now MC is locked.

  13523   Wed Jan 10 12:42:27 2018   gautam | Update | SUS | ETMX DC alignment

I've been observing this for a few days: ETMX's DC alignment seems to drift by so much that the previously well aligned X arm cavity is now totally misaligned.

The wall StripTool trace shows that both the X and Y arms were locked with arm transmissions around 1 till c1psl conked out - so in the attached plot, around 1400 UTC, the arm cavity was well aligned. So the sudden jump in the OSEM sensor signals is the time at which LSC control to the ETM was triggered OFF. But as seen in the attached plot, after the lockloss, the Oplev signals seem to show that the mirror alignment drifted by >50urad. This level of drift isn't consistent with the OSEM sensor signals - of course, the Oplev calibration could be off, but the tension in values is almost an order of magnitude. The misalignment seems real - the other Oplev spots have stuck around near the (0,0) points where I recentered them last night, only ETMX seems to have undergone misalignment.

Need to think about what's happening here. Note that this kind of "drift" behaviour seems to be distinct from the infamous ETMX "glitching" problem that was supposed to have been fixed in the 2016 vent.

 

Attachment 1: ETMXdrift.png
  13524   Wed Jan 10 14:17:57 2018   johannes | Configuration | Computer Scripts / Programs | autoburt no longer making backups

I was looking into setting up autoburt for the new c1auxex2 and found that it stopped making automatic backups for all machines after the beginning of the new year. There is no 2018 folder (it was the only one missing) in /opt/rtcds/caltech/c1/burt/autoburt/snapshots and the /latest/ link in /opt/rtcds/caltech/c1/burt/autoburt/ leads to the last backup of 2017 on 12/31/17 at 23:19.

The autoburt log file shows that the backup script last ran today, 01/10/18 at 14:19, as it should have, but doesn't show any errors and ends with "You are at the 40m".

I'm not familiar with the autoburt scheduling using cronjobs. If I'm not mistaken the relevant cronjob file is /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.cron which executes /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.pl

I've never used perl but there's the following statement when establishing the directory for the new backup:

  $yearpath = $autoburtpath."/snapshots/".$thisyear;
  # print "scanning for path $yearpath\n";
  if (!-e $yearpath) {
    die "ERROR: Year directory $yearpath does not exist\n";
  }

I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.
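If we want to guard against this recurring next January, a small Python script along these lines could be cron'd to pre-create the year directory (the snapshot path is as quoted above; the script itself is hypothetical, nothing like it is currently installed):

  #!/usr/bin/env python3
  # pre-create the autoburt year directory so autoburt.pl doesn't die on Jan 1
  import datetime, os

  SNAPSHOT_ROOT = "/opt/rtcds/caltech/c1/burt/autoburt/snapshots"

  yearpath = os.path.join(SNAPSHOT_ROOT, str(datetime.date.today().year))
  os.makedirs(yearpath, exist_ok=True)   # no-op if the directory already exists
  print("ensured", yearpath)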

  13525   Wed Jan 10 15:25:43 2018   johannes | Configuration | Computer Scripts / Programs | autoburt making backups again
Quote:

I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.

It worked. The first backup of the year is now from Wednesday, 01/10/18 at 15:19. Ten days of automatic backups are missing. Up until 2204 the year folders had been pre-emptively created, so why was 2018 missing?

gautam: this is a bit suspect still - the snapshot file for c1auxex at least seemed too light on channels recorded. This was before any c1auxex switching; to be investigated.

  13526   Wed Jan 10 16:27:02 2018   Steve | Configuration | SEI | load cell for weight measurement

We could use similar load cells to make the actual weight measurement on the Stacis legs. This seems practical in our case.

I have had bad experience with pneumatic Barry isolators.

Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.

Quote:

We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.

Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.

But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.

Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?

 

Attachment 1: stacis3LoadCells.png
  13527   Wed Jan 10 18:53:31 2018   gautam | Update | SUS | ETMX DC alignment

I should've included the SUSPIT and SUSYAW channels in the previous screenshot. I re-aligned ETMX till I could see IR flashes in the arm, and was able to lock the green beam on a TEM00 mode with reasonable transmission. As I suspected, this brought the Oplev spot back near the center of its QPD. But the answer to the question "how much did I move the ETM by?" still varies by ~1 order of magnitude, depending on whether you believe the OSEM SUSPIT and SUSYAW signals or the Oplev error signals - I don't know which, if any, of these are calibrated.

Attachment 1: ETMXdrift.png
  13528   Wed Jan 10 22:19:44 2018   rana | Update | SUS | ETMX DC alignment

Best to just calibrate the ETM OL in the usual way. I bet the OSEM outputs have a cal uncertainty of ~50% since the input matrix changes as a function of the DC alignment. Still, a 30 urad pitch mis-alignment gives a (30e-6 rad)(40 m) ~ 1 mm beam spot shift. This would be enough to flash other modes, but it would still be easy to lock on a TEM00 like this. I also doubt that the OL calibration is valid outside of some region near zero - can easily check by moving the ETM bias sliders.

Quote:

I should've included the SUSPIT and SUSYAW channels in the previous screenshot. I re-aligned ETMX till I could see IR flashes in the arm, and was able to lock the green beam on a TEM00 mode with reasonable transmission. As I suspected, this brought the Oplev spot back near the center of its QPD. But the answer to the question "how much did I move the ETM by?" still varies by ~1 order of magnitude, depending on whether you believe the OSEM SUSPIT and SUSYAW signals or the Oplev error signals - I don't know which, if any, of these are calibrated.

What we still don't know is if this is due to Johannes/Aaron working at the ETMX rack (bumping some of the flaky coil cables and/or bumping the blue beams which support the stack). Adding or subtracting weight from the stack supports will give us an ETM misalignment.

  13529   Wed Jan 10 22:24:28 2018   johannes | Update | DAQ | etmx slow daq chassis

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplace connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw moves yaw. ✓
  2. Coil enable and monitoring channels work. ✓
  3. Watchdog seems to work. ✓ We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint: it triggered.
  4. All channels initialize to "0" upon machine/server script restart. This means the watchdog happens to be OFF, which is good. ✓ It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way (see the sketch after this list).
  5. We got the local damping going. ✓
  6. There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. ✗ Something must be wrongly wired, as we confirmed by returning only the fast BIO wiring to the pre-Acromag state (with everything else still controlled by Acromag), after which the problem went away. Or some electrical connection is not made (I had to use gender changers on these connectors due to lack of proper cabling).
  7. The switches for the QPD gain stages did not work. ✗ I suspect a wiring problem, since the switching of the coil enables did work.

Arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.
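On item 4, one possible approach - a sketch only, with placeholder channel names and snapshot path rather than the actual c1auxex2 configuration - is to have the IOC startup script caput the last burt-saved values back onto the bias channels:

  #!/usr/bin/env python3
  # restore PIT/YAW biases from the latest autoburt snapshot at IOC startup,
  # instead of letting them initialize to 0 and kick the optic
  from epics import caput  # pyepics

  SNAP = "/opt/rtcds/caltech/c1/burt/autoburt/latest/c1auxex.snap"  # assumed path
  CHANNELS = ("C1:SUS-ETMX_PIT_BIAS", "C1:SUS-ETMX_YAW_BIAS")       # hypothetical names

  with open(SNAP) as f:
      for line in f:
          fields = line.split()
          # burt .snap rows look like: <channel> <count> <value>
          if len(fields) >= 3 and fields[0] in CHANNELS:
              caput(fields[0], float(fields[2]))
              print("restored", fields[0], "->", fields[2])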

  13530   Thu Jan 11 09:57:17 2018   Steve | Update | DAQ | acromag at ETMX

Good going Johannes!

Quote:

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplace connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw moves yaw. ✓
  2. Coil enable and monitoring channels work. ✓
  3. Watchdog seems to work. ✓ We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint: it triggered.
  4. All channels initialize to "0" upon machine/server script restart. This means the watchdog happens to be OFF, which is good. ✓ It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way.
  5. We got the local damping going. ✓
  6. There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. ✗ Something must be wrongly wired, as we confirmed by returning only the fast BIO wiring to the pre-Acromag state (with everything else still controlled by Acromag), after which the problem went away. Or some electrical connection is not made (I had to use gender changers on these connectors due to lack of proper cabling).
  7. The switches for the QPD gain stages did not work. ✗ I suspect a wiring problem, since the switching of the coil enables did work.

Arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.

 

Attachment 1: Acromg_in_action.png
  13531   Thu Jan 11 14:22:40 2018   gautam | Update | ALS | Fiber ALS assay

I did a cursory check of the ALS signal chain in preparation for commissioning the IR ALS system. The main elements of this system are shown in my diagram in the previous elog in this thread.

Questions I have:

  1. Does anyone know what exactly is inside the "Delay Line" box? I can't find a diagram anywhere.
    • Jessica's SURF report would suggest that there are just two 50 m cables in there.
    • There are two power splitters taped to the top of this box.
    • It is unclear to me if there are any active components in the box.
    • It is unclear to me if there is any thermal/acoustic insulation in there.
    • For completeness, I'd like to temporarily pull the box out of the LSC rack, open it up, take photos, and make a diagram unless there are any objections.
  2. If you believe the front panel labeling, then currently the "LO" input of the mixer is driven by the part of the ALS beat signal that goes through the delay line, while the direct (i.e. non-delayed) output of the power splitter goes to the "RF" input. The mixer used, according to the DCC diagram, is a PE4140; its datasheet says the LO power can range from -7 dBm to +20 dBm. For a -8 dBm beat from the IR beat PDs, with +24 dB gain from the ZHL-3A but -3 dB from the power splitter, and assuming 9 dB loss in the delay line cable (I don't know the actual loss, but according to a Frank Seifert elog the optimal loss is 8.7 dB, and I assume our delay line is close to optimal), we have ~4 dBm at the "LO" input of the demod board. The schematic says the nominal level the circuit expects is 10 dBm. If we used the non-delayed output of the power splitter, we would have, for a -8 dBm beat, (-8+24-3) dBm ~ 13 dBm, minus some cabling loss along the way, which would be closer to 10 dBm (a quick budget sketch follows). So should we use the non-delayed version for the LO signal? Is there any reason why the current wiring is done this way?
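Tabulating the RF budget from item 2 (dB quantities simply add; the 9 dB delay-line loss is my assumption, as stated above):

  # RF level budget for the ALS beat LO, per the numbers above
  beat_dBm    = -8    # IR beat note out of the beat PD
  zhl3a_dB    = +24   # ZHL-3A amplifier gain
  splitter_dB = -3    # power splitter
  delay_dB    = -9    # assumed delay-line cable loss (~optimal per F. Seifert)

  delayed_LO    = beat_dBm + zhl3a_dB + splitter_dB + delay_dB  # ~ +4 dBm
  nondelayed_LO = beat_dBm + zhl3a_dB + splitter_dB             # ~ +13 dBm, before cabling loss
  print(delayed_LO, nondelayed_LO)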

 

  13532   Thu Jan 11 14:47:11 2018   Steve | Update | PSL | shelf work for tomorrow

I have just received the scheduling of the PSL shelf work for tomorrow. Gautam and I agreed that, if needed, I will shut the laser off and cover the whole table with plastic.

  13533   Thu Jan 11 18:50:31 2018   gautam | Update | IOO | MC autolocker getting stuck

I've noticed this a couple of times today - when the autolocker runs the mcdown script, sometimes it doesn't seem to actually change the various gain sliders on the PSL FSS. There is no handshaking built into the autolocker at the moment, so the autolocker thinks the settings are correct for lock re-acquisition, but they are not. The PCdrive signal is often railing, as is the PZT signal. The autolocker just gets stuck waiting to re-acquire lock. This has happened ~3 times today, and each time the autolocker tried to re-acquire lock unsuccessfully for ~1 hour.

Perhaps I'll add a line or two to check that the signal levels indicate that mcdown executed successfully; a sketch of what that check might look like is below.
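A sketch of such a handshake with pyepics - the channel names and target values here are placeholders for whatever mcdown actually writes, not the real settings:

  from epics import caget  # pyepics

  # hypothetical mcdown settings to verify
  EXPECTED = {
      "C1:PSL-FSS_MGAIN": -10.0,
      "C1:PSL-FSS_FASTGAIN": 10.0,
  }

  def mcdown_succeeded(tol=0.1):
      # True if all FSS gain sliders sit at their mcdown values
      return all(abs(caget(ch) - val) < tol for ch, val in EXPECTED.items())

  if not mcdown_succeeded():
      print("mcdown settings not applied - re-run mcdown before waiting for lock")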

  13534   Thu Jan 11 20:51:20 2018   gautam | Update | ALS | Fiber ALS assay

After labeling cables I would disconnect, I pulled the box out of the LSC rack. Attachment #1 is a picture of the insides of the box - looks like it is indeed just two lengths of cabling. There was also some foam haphazardly stuck around inside - presumably an attempt at insulation/isolation.

Since I have the box out, I plan to measure the delay in each path, and also the signal attenuation. I'll also try and neaten the foam padding arrangement - Steve was showing me some foam we have, I'll use that. If anyone has comments on other changes that should be made / additional tests that should be done, please let me know.

20180111_2200: I'm running some TF measurements on the delay line box with the Agilent in the control room area (script running in tmux sesh on pianosa). Results will be uploaded later.

Quote:

For completeness, I'd like to temporarily pull the box out of the rack, open it up, take photos, and make a diagram unless there are any objections.

 

Attachment 1: IMG_5112.JPG
  13535   Thu Jan 11 20:59:41 2018   gautam | Update | DAQ | etmx slow daq chassis

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

  13536   Thu Jan 11 21:09:33 2018   gautam | Update | CDS | revisiting Acromag

We'd like to setup the recording of the PSL diagnostic connector Acromag channels in a more robust way - the objective is to assess the long term performance of the Acromag DAQ system, glitch rates etc. At the Wednesday meeting, Rana suggested using c1ioo to run the IOC processes - the advantage being that c1ioo has the systemd utility, which seems to be pretty reliable in starting up various processes in the event of the computer being rebooted for whatever reason. Jamie pointed out that this may not be the best approach however - because all the FEs get the list of services to run from their common shared drive mount point, it may be that in the event of a power failure for example, all of them try and start the IOC processes, which is presumably undesirable. Furthermore, Johannes reported the necessity for the procServ utility to be able to run the modbusIOC process in the background - this utility is not available on any of the FEs currently, and I didn't want to futz around with trying to install it.

One alternative is to connect the PSL Acromag also to the Supermicro computer Johannes has set up at the Xend - it currently has systemd setup to run the modbusIOC, so it has all the utilities necessary. Or else, we could use optimus, which has systemd, and all the EPICS dependencies required. I feel less wary of trying to install procServ on optimus too. Thoughts?

 

  13537   Fri Jan 12 10:02:05 2018   johannes | Update | DAQ | etmx slow daq chassis
Quote:

There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. ✗ Something must be wrongly wired, as we confirmed by returning only the fast BIO wiring to the pre-Acromag state (with everything else still controlled by Acromag), after which the problem went away. Or some electrical connection is not made (I had to use gender changers on these connectors due to lack of proper cabling).

The switches for the QPD gain stages did not work. ✗ I suspect a wiring problem, since the switching of the coil enables did work.

Both issues were fixed; in each case, two separate causes had prevented things from working.

The QPD gain stage switch software channels were assigned to wrong physical pins of the Acromag, and additionally their DSub cable was swapped with a different one.

The BIO switching had its signal and ground wires swapped on ALL connections, and part of it was also suffering from the cable mix-up.

All backplane signals are now routed through the Acromag chassis.

 

Gautam and I did notice that occasionally the ETMX alignment will start drifting, as evident from the OpLev. I want to set up a diagnostic channel to see if the DAC voltages coming from the Acromag are responsible for this.

  13538   Fri Jan 12 10:26:24 2018   Steve | Update | PSL | PSL shelf work schedule

Measurements for a good fit were made. The new shelf will be installed next Tuesday at 2pm.

The reference cavity ion pump is in the way, so the cavity will be moved 5" westward. The shelf height clearance will be 10"; the under-shelf working height to the optical table is 18".

Quote:

I have just received the scheduling of the PSL shelf work for tomorrow. Gautam and I agreed that, if needed, I will shut the laser off and cover the whole table with plastic.

 

  13539   Fri Jan 12 12:31:04 2018   gautam | Configuration | Computers | sendmail troubles on nodus

I'm having trouble getting the sendmail service going on nodus since the Christmas day power failure - for some reason, it seems like the mail server that sendmail uses to send out emails on nodus (mx1.caltech.iphmx.com, IP=68.232.148.132) is on a blacklist! Not sure how exactly to go about remedying this.

Running sudo systemctl status sendmail.service -l also shows a bunch of suspicious lines:

Jan 12 10:15:27 nodus.ligo.caltech.edu sendmail[6958]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 10:15:45 nodus.ligo.caltech.edu sendmail[6958]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+10:49:16, xdelay=00:00:39, mailer=esmtp, pri=5432408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 11:15:23 nodus.ligo.caltech.edu sendmail[10334]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 11:15:31 nodus.ligo.caltech.edu sendmail[10334]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+11:49:02, xdelay=00:00:27, mailer=esmtp, pri=5522408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 12:15:25 nodus.ligo.caltech.edu sendmail[13747]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 12:15:42 nodus.ligo.caltech.edu sendmail[13747]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+12:49:13, xdelay=00:00:33, mailer=esmtp, pri=5612408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable

 

Why is nodus attempting to email umakant.rapol@iiserpune.ac.in?

  13540   Fri Jan 12 16:01:27 2018   Koji | Configuration | Computers | sendmail troubles on nodus

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server, as it requires a lot of maintenance (security updates, configuration, etc.). If we can use an external mail service (like gmail) via the gmail API in python, that would ease our worries, I thought.
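A hedged sketch of what that could look like in Python with the Gmail API - this assumes OAuth credentials already exist in token.json and that google-api-python-client / google-auth are installed; the recipient address is a placeholder:

  import base64
  from email.mime.text import MIMEText

  from google.oauth2.credentials import Credentials
  from googleapiclient.discovery import build

  creds = Credentials.from_authorized_user_file(
      "token.json", scopes=["https://www.googleapis.com/auth/gmail.send"])
  service = build("gmail", "v1", credentials=creds)

  msg = MIMEText("test message from nodus")
  msg["to"] = "40m-list@example.org"   # placeholder address
  msg["subject"] = "elog notification"
  raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()

  service.users().messages().send(userId="me", body={"raw": raw}).execute()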

  13541   Fri Jan 12 18:08:55 2018   gautam | Update | General | pip installed on nodus

After much googling, I figured out how to install pip on SL7:

sudo easy_install pip

Next, I installed git:

sudo yum install git

Turns out, actually, pip can be installed via yum using

sudo yum install python-pip