ID Date Author Type Category Subject
  13511   Sat Jan 6 23:25:18 2018 KevinUpdatePonderSqueezeDisplacement requirements for short-term squeezing

 

Quote:
  • ought to tune for 210 Hz (in-between powerlines) since 100 Hz is tough to work due to scattering, etc.

We can get 1.1 dBvac at 210 Hz.

The first two attachments are the noise budgets for these optimized angles. The third attachment shows squeezing as a function of homodyne angle and SRC detuning at 210 Hz. To stay below -1 dBvac, the homodyne angle must be kept between 88.5 and 89.7 degrees and the SRC detuning must be kept between -0.04 and 0.03 degrees. This corresponds to fixing the SRC length to within a range of 0.07/360 * 1064 nm = 200 pm.
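For reference, the detuning-to-length conversion used above assumes the detuning is quoted as a one-way phase, so 360 degrees corresponds to one wavelength of SRC length change:

\Delta L_{\rm SRC} = \frac{\Delta\phi}{360^\circ}\,\lambda = \frac{0.07^\circ}{360^\circ} \times 1064\ {\rm nm} \approx 207\ {\rm pm} \approx 200\ {\rm pm}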

Attachment 1: displacement_noise.pdf
displacement_noise.pdf
Attachment 2: noise_budget.pdf
noise_budget.pdf
Attachment 3: angles.pdf
angles.pdf
  13510   Sat Jan 6 18:27:37 2018 gautamUpdateGeneralpower outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed up the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic. I've assumed the units are um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area:
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room:
    • Recovered IFO alignment using a combination of the IR and green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • The aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up the end PDH servos, and the alignment/mode-matching into the arms and into the fibers at the ends, etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf
Trillium_faulty.pdf
  13509   Sat Jan 6 13:47:32 2018 ranaUpdatePonderSqueezeDisplacement requirements for short-term squeezing
  • ought to tune for 210 Hz (in-between powerlines) since 100 Hz is tough to work due to scattering, etc.
  • rename DAC - I think what this curve shows is really the coil driver noise. The DAC noise we can always filter out with the dewhitening board; i.e. once we have 1000x attenuation between the DAC and the coil driver input, DAC noise is not dominant.
  13508   Sat Jan 6 05:18:12 2018 KevinUpdatePonderSqueezeDisplacement requirements for short-term squeezing

I have been looking into whether we can observe squeezing on a short timescale. The simulations I show here say that we can get 2 dBvac of squeezing at about 120 Hz using extreme signal recycling.

The parameters used here are

  • 100 ppm transmissivity on the folding mirrors giving a PRC gain of 40.
  • 10 kΩ series resistance for the ETMs; 15 kΩ series resistance for the ITMs.
  • 1 W incident on the back of PRM.
  • PD quantum efficiency 0.88.

The first attachment shows the displacement noise. The red curve labeled vacuum is the standard unsqueezed vacuum noise which we need to beat. The second attachment shows the same noise budget as a ratio of the noise sources to the vacuum noise.

This homodyne angle and SRC detuning give about the maximum amount of squeezing. However, there's quite a bit of flexibility; if there are other considerations, such as 100 Hz being too low, we should be able to optimize these angles (even with more pessimistic values of the above parameters) to see at least 0.2 dBvac around 400 Hz.

Attachment 1: displacement_noise.pdf
displacement_noise.pdf
Attachment 2: noise_budget.pdf
noise_budget.pdf
  13507   Fri Jan 5 22:19:53 2018 gautamUpdateGeneralpower outage - timing error

Just quoting the relevant line from Rolf's email, which at least identifies the problem here:

Looks like FB time is actually off by 1 year, as your timing system does not get year info.

There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.

I will look into this tomorrow. 

Quote:

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

 

  13506   Fri Jan 5 21:54:28 2018 ranaUpdateGeneralpower outage - timing error

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

Attachment 1: huh.png
huh.png
  13505   Fri Jan 5 19:19:25 2018 ranaConfigurationSEIBarry Controls 'air puck' instead of 'VOPO style' breadboard

We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.

Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.

But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put in a small compliant piece in there to give us some isolation in the 10-30 Hz band where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.

Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?
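(For a simple mass-on-spring estimate - my assumption of how the load enters - the isolator resonance is

f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m}}

so a factor of 2 error in the assumed per-STACIS load moves f_0 by \sqrt{2}; this is why the load measurement is needed to pick the SLM mount that actually lands in the 3-4 Hz range.)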

Attachment 1: mm_slm.jpg
mm_slm.jpg
Attachment 2: Screen_Shot_2018-01-05_at_7.25.47_PM.png
Screen_Shot_2018-01-05_at_7.25.47_PM.png
  13504   Fri Jan 5 17:50:47 2018 ranaConfigurationComputersmotif on nodus

I had to do 'sudo yum install motif' on nodus so that we could get libXm.so.4 and run MEDM. Works now.

  13503   Thu Jan 4 14:39:50 2018 gautamUpdateGeneralpower outage - timing error

As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.

Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...

I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/gpstime", line 9, in <module>
    load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
    ['__name__'])
  File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
    LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
    fetch_leapfile(leapfile)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
    r = requests.get(LEAPFILE_IETF)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))

 

 

  13502   Thu Jan 4 12:46:27 2018 gautamUpdateALSFiber ALS assay

Attachment #1 is the updated diagram of the Fiber ALS setup. I've indicated part numbers and power levels (optical and electrical). For the light power levels, numbers in green are for the AUX lasers, numbers in red are for the PSL.

I confirmed that the output of the power splitter is going to the "RF input" and the output of the delay line is going to the "LO input" of the demodulator box. Shouldn't this be the other way around? Unless the labels are misleading and the actual signal routing inside the 1U chassis is correctly done :/

  • Mode-matching into the fibers is rather abysmal everywhere.
  • In this diagram, only the power levels measured at the lasers and inputs of the fiber couplers are from today's measurements. I just reproduced numbers for inside the beat mouth from elog 13254.
  • Inside the beat mouth, the PD output actually goes through a 20dB coupler which is included in this diagram for brevity. Both the direct and coupled outputs are available at the front panel of the beat mouth. The latter is meant for diagnostic purposes. The quoted beat level of -8 dBm @ 30 MHz was measured using the direct output, not the coupled output.

Still facing some CDS troubles, will start ALS recovery once I address them.

Attachment #2 is the svg file of Attachment #1, which we can update as we improve things. I'll put it on the DCC 40m tree eventually.

Attachment 1: FiberALS.pdf
FiberALS.pdf
Attachment 2: FiberALS.svg.zip
FiberALS.svg.zip
  13501   Wed Jan 3 18:00:46 2018 gautamUpdatePonderSqueezeplan of action

Notes of stuff we discussed @ today's meeting, and afterwards, towards measuring ponderomotive squeezing at the 40m.

  1. Displacement noise requirements
    • Kevin is going to see if we can measure any kind of squeezing on a short timescale by tuning various parameters.
    • Specifically, without requiring a crazy ultra-low current noise level for the coil drivers.
  2. Investigate how much actuation range we need for lock acquisition and maintaining lock.
    • Specifically, for DARM.
    • We will measure this by having the arms controlled with ALS in the CARM/DARM basis.
    • Build up a noise budget for this, see how significant the laser noise contribution is.
  3. RC folding mirrors
    • In the present configuration, these are introducing ~2.5% RT loss in the RCs.
    • This affects PRG, and on the output side, measurable squeezing.
    • We want to see if we can relax the requirements on the RC folding mirrors such that we don't have to spend > 20 k$.
    • Specifically, consider spec'ing the folding mirror coatings to only have HR @1064 nm, and take what we get at 532 nm.
    • But still demand tolerances on RoC driven by mode-matching between the RCs and the arm cavities.
  4. ALS with Beat Mouth
    • Use the fiber coupled light from the ends to make the ALS signals.
    • Gautam will update diagram to show the signal chain from end-to-end (i.e. starting at AUX laser, ending at ADC input).
    • Make a noise budget for the same - preliminary analysis suggests a sensing noise floor of ~10 mHz/rtHz.

RXA:

  • For the ALS-DARM budget the idea is that we can do lock acquisition better, so we don't need to care about the acquisition requirements; i.e., we just need to set the ETM coil driver current range based on the DARM in-lock values.
    • To get the coil driver noise to be low enough to detect squeezing we need to use a ~10-15 kOhm series resistor.
    • We assume that all DAC and coil driver input noises can be sufficiently filtered.
    • We are assuming that we don't change the magnet sizes or the number of coil windings in the OSEMs.
    • The noise in the ITMs doesn't matter because we don't use them for any locking activity, so we can easily set the coil driver series resistors to 15 kOhm.
    • We will do the bias for the ETMs and ITMs using some HV circuit (not the existing ones on the coil driver boards) and doing the summation after the main coil driver series resistor. This HV bias module needs to handle the ~ (2 V / 400 Ohm) = 5 mA which is now used. This would require (5 mA) x (15 kOhm) = 60+ V drivers.
  • IF we can get away with doing the ALS beat note with just red (still using GREEN light from the end laser to lock to the arms from the ends), we will not have any requirements for the 532 nm transmission of any optics in the DRMI area.
    • Get some quotes for the new PR/SR mirrors having tight RoC tolerance, high R for 1064, and no spec for 532.
    • Check that the 1-way fiber noise for 1064 nm is < 100 mHz/rHz in the 50-1000 Hz band. If it's more, explore putting better acoustic foam around the fiber run.
    • Improve the mode-matching of the IR beam into the fibers at the ends. We want >80% to reduce the noise due to scattering; we don't really care about the amount of light available in the PSL - this is just to reduce the IR-ALS noise.
  13500   Wed Jan 3 16:25:32 2018 awadeUpdateOptimal ControlOplev loop tuning

Another cool feature is client-side pre-commit hooks. They can be used to run checks on the local version at commit time and refuse the commit until the check exits 0.

Can be the same as the Gitlab CI or just basic code quality checks. I use them to prevent jupyter notebooks being committed with uncleared cells. They need to be set up on the user's computer manually and are not automatically cloned with the repository; a script can be included in the repo to do this, to be run manually after the first clone.
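As an illustration of the idea, a minimal sketch of such a hook in Python (one possible implementation, not necessarily the one in use here), saved as .git/hooks/pre-commit and made executable:

#!/usr/bin/env python3
# Minimal pre-commit hook sketch: abort the commit if any staged .ipynb
# still contains cell outputs or execution counts.
import json, subprocess, sys

staged = subprocess.check_output(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=d"]).decode().split()

dirty = []
for path in staged:
    if not path.endswith(".ipynb"):
        continue
    nb = json.loads(subprocess.check_output(["git", "show", ":" + path]))
    if any(c.get("outputs") or c.get("execution_count") for c in nb.get("cells", [])):
        dirty.append(path)

if dirty:
    print("Commit rejected - clear notebook outputs in:", ", ".join(dirty))
    sys.exit(1)

A nonzero exit from the hook is what blocks the commit.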

Quote:

When putting code into git.ligo.org, one way to have automated testing is to use the Gitlab CI. This is an automated 'checker', much like the 'Travis' system used in GitHub. Essentially, you give it a makefile which it runs somewhere, and your GIT repo web page gets a little 'failed/passing' badge telling you if it's working. You can also browse the logs to see in detail what happened. This avoids the 'but it works on my computer!' thing that we usually hear.

 

  13499   Wed Jan 3 15:13:55 2018 SteveUpdateGeneralprojector light bulb replaced

Bulb  is replaced.

Quote:

I noticed this behaviour since ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off <1 minute after being manually turned on by the power button. AFAIK, there were no changes made to the projector/Zita. Perhaps this is some kind of in-built mechanism signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back, which says the last bulb replacement was on 15 Aug 2017).

 

  13498   Wed Jan 3 12:33:16 2018 ranaUpdateOptimal ControlOplev loop tuning

When putting code into git.ligo.org, one way to have automated testing is to use the Gitlab CI. This is an automated 'checker', much like the 'Travis' system used in GitHub. Essentially, you give it a makefile which it runs somewhere, and your GIT repo web page gets a little 'failed/passing' badge telling you if it's working. You can also browse the logs to see in detail what happened. This avoids the 'but it works on my computer!' thing that we usually hear.

Quote:

The current version of the code I am using is here: although I may not have included some of the data files required to run it, to be fixed...

 

  13497   Tue Jan 2 16:37:26 2018 gautamUpdateOptimal ControlOplev loop tuning

I've made various changes to the optimal loop design approach, but am still not having much success. A summary of changes made:

  1. Parametrization of filter - enforcing uniqueness
    • Previously, the input to the particle swarm was a vector of root frequencies and associated Q-factors.
    • This way of parametrization is not unique - permuting the order of the roots yields the same filter, but particles traversing the high (65) dimensional parameter space may have to go over very expensive regions in order to converge with the global minimum / best performing particle.
    • One way around this is to parametrize the filter by the highest pole/zero frequency, and then to specify the remaining roots by their cumulative separation from this highest root. This guarantees that a unique vector input to the particle swarm function specifies a unique filter.
    • To avoid negative frequencies, I manually set a particular element of the vector to 0 if the cumulative sum yields a negative frequency. I believe this is how MATLAB's particle swarm implements the "constraints" in the constrained optimization routines.
  2. Cost function - I've reformulated this into something that makes more sense to me, but probably can be improved further.
    • Term #1 - integral of the area (evaluated with MATLAB's trapz utility) between the in-loop (i.e. suppressed) error signal and the sensing noise spectrum (for the latter, I use the orange curve from this plot). This is a signed number, so that suppression below the sensing noise is penalized. Target value is 1 urad rtHz. One problem I see with this approach is that if we believe the sensing noise measurement, then even at 10mHz, it looks like sensing noise is below the out-of-loop error signal level. So the optimizer doesn't seem to want to make the loop AC coupled.
    • Term #2 - stability margin. I'm using this number, which is the distance-of-closest-approach to the point -1 in the Nyquist plot, instead of gain and phase margins, as this yields a more conservative robustness measure. Target value is 0.65.
    • Term #3 - A2L contribution of in-loop control signal. This contribution is calculated using measurements of A2L coupling for the DRMI. The actual term that goes into the cost function is the ratio of the area under the in-loop control signal to that under the seismic noise curve above 35Hz. Further, f>100Hz is given 10x the weight of 35Hz<f<100Hz (I've not really played around with this weighting function). The goal is to be as close to the seismic curve as possible, at which point this term becomes 1.
    • Terms #4 and #5 - the maximum open loop gain evaluated in a 1Hz wide bin centered around the bounce and roll resonances. The aim is to not exceed -40dB in these bins. Perhaps this needs to be reformulated, as the optimizer seems to be giving this term too much importance - the optimized loops have extremely deep bandstops around the BR resonances.
    • To normalize each term, I divide by the "target" value mentioned above, so as to make the various terms comparable.
    • Each term in the cost function has two regimes - one where it is rapidly varying close to the desired operating point, and one far away where the cost still increases monotonically, but slower (see Attachment #2).
    • A scalar cost function is evaluated by taking a weighted sum of the above terms (see the sketch after this list). The weights are chosen so as to make each term ~10 for the controller currently implemented.
    • All of the above are only applicable if the resulting loop is stable - else, a large cost is assigned (exponential of sum of real parts of poles of OLTF).
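For concreteness, here is a schematic sketch (in Python/numpy; the real code is MATLAB) of how the weighted, normalized scalar cost described above could be assembled. The bounce/roll bin frequencies, normalization targets, and input arrays here are placeholders, not the values used in the actual cost function:

import numpy as np

def loop_cost(f, olg, err_inloop, sens_noise, ctrl_inloop, seis, weights):
    """Schematic scalar cost. f: frequency vector [Hz]; olg: complex open-loop
    gain sampled on f; the remaining inputs are spectra sampled on f."""
    # Term 1: signed area between the in-loop error signal and the sensing noise
    # (suppression below the sensing noise is penalized); target ~1 urad rtHz.
    t1 = np.trapz(err_inloop - sens_noise, f) / 1.0
    # Term 2: stability margin = distance of closest approach to -1 on the
    # Nyquist plot; target 0.65, larger is better, so penalize the shortfall.
    t2 = 0.65 / max(np.min(np.abs(1.0 + olg)), 1e-6)
    # Term 3: in-loop control signal relative to the seismic curve above 35 Hz,
    # with f > 100 Hz weighted 10x; equals 1 when it sits on the seismic curve.
    mid, hi = (f > 35) & (f < 100), f >= 100
    t3 = (np.trapz(ctrl_inloop[mid], f[mid]) + 10 * np.trapz(ctrl_inloop[hi], f[hi])) / \
         (np.trapz(seis[mid], f[mid]) + 10 * np.trapz(seis[hi], f[hi]))
    # Terms 4 & 5: peak open-loop gain in 1 Hz bins around the bounce and roll
    # modes (placeholder frequencies); target -40 dB = 0.01.
    t4 = np.max(np.abs(olg[np.abs(f - 16.5) < 0.5])) / 0.01
    t5 = np.max(np.abs(olg[np.abs(f - 23.5) < 0.5])) / 0.01
    return float(np.dot(weights, [t1, t2, t3, t4, t5]))  # weighted sum -> scalar cost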

Attachment #1 shows the outcome of a typical optimization run - so while I am having some more success with this than before, where the PSO algorithm was stalling and terminating before any actual optimization was done, it seems like I need to re-think the cost function yet again...

Attachment #2 shows the current terms entering the cost function, and their "desired" values.

The current version of the code I am using is here: although I may not have included some of the data files required to run it, to be fixed...

Attachment 1: loopOpt_180102_1706.pdf
loopOpt_180102_1706.pdf
Attachment 2: globalCosts.pdf
globalCosts.pdf
  13496   Tue Jan 2 16:24:29 2018 gautamUpdatesafetyProjector periodically shuts itself off

I noticed this behaviour since ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off <1 minute after being manually turned on by the power button. AFAIK, there were no changes made to the projector/Zita. Perhaps this is some kind of in-built mechanism signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back, which says the last bulb replacement was on 15 Aug 2017).

  13495   Tue Jan 2 15:43:35 2018 SteveUpdateVACpumpdown after power outage

 

Quote:

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. 2W Innolight turned on and manual beam block placed in its beampath.

3 AC units turned on at room temp 84F

IFO pumped down from 44 mTorr to 9.6e-6 Torr with the Maglev backed by only TP3.

The aux drypump was helping our std drypump during this 1 hour period. TP3 reached 32 C and slowed down to 47k rpm.

The peak foreline pressure at P2  was ~3 Torr

Hornet cold cathode gauge settings (research mode, air):
    2830 HV, 1e-4 A at 9.6e-6 Torr
    [ 3110 HV, 8e-5 A at 7.4e-6 Torr one day later ]

Annuli are at 2 Torr, not pumped

Valve configuration:  vacuum normal, RGA is still off

PSL shutter is opened automatically. Manual block removed.

End IR lasers and doublers are turned on.

 

NOTE: the Maglev "rotation X" field on the vacuum MEDM screen is not working! The "C1:Vac-TP1_rot" channel was removed. Use "NORMAL X" for rotation monitoring.

*We removed this (i.e. rotation) field from the MEDM screen to avoid confusion.

Attachment 1: pumpdown_from_44_mTorr.png
pumpdown_from_44_mTorr.png
  13494   Sun Dec 31 12:43:50 2017 ranaSummaryElectronicsSR560: reworking

I have ordered some LSK389A (in both the SOIC-8 and TO-71 packages) to replace the SR560's default front end FET pair (NPD5565).

I'm going to rework s# 00619 once these new FETs come in. Also ordered 100 of the SOIC-8 to DIP-8 adapter boards from Digikey.

This plot shows the current performance compared to the Rai Low Noise box. I expect the FETs should let us get to ~1.5 nV/rHz with the SR560.

Attachment 1: Preamps.pdf
Preamps.pdf
  13493   Thu Dec 28 17:22:02 2017 gautamUpdateGeneralpower outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Past elogs suggest that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if the mx process is supposed to start automatically on FB at boot.

Attachment 1: 28.png
28.png
  13492   Tue Dec 26 17:24:24 2017 SteveUpdateGeneralpower outage

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. 2W Innolight turned on and manual beam block placed in its beampath.

3 AC units turned on at room temp 84F

Attachment 1: powerOutage.png
powerOutage.png
  13491   Fri Dec 22 09:40:19 2017 ranaSummaryGeneralDAC noise contribution to squeezing noise budget
  1. Should not count the ITMs. On those we can use big resistors / filters to cut out the noise.
  2. For the initial LIGO, we used 7 kOhm resistors and the mass was 10 kg. But... the output driver went +/- 150 V.

So we had a max F/m = (20 mA * 0.064 N/A)/(10 kg) ≈ 1e-4 m/s^2. For the 40m, to get the same thing, we would need 40x less current (~0.5 mA). At the moment we have (12 V / 400 Ohm) = 30 mA.
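Spelling this out (the ~0.25 kg 40m test-mass mass below is my inference from the quoted factor of 40, not something stated above):

a_{\rm max} = \frac{I_{\rm max}\,\alpha}{m} = \frac{(20\ {\rm mA})(0.064\ {\rm N/A})}{10\ {\rm kg}} \approx 1.3\times 10^{-4}\ {\rm m/s^2}, \qquad I_{\rm 40m} \approx \frac{a_{\rm max}\,(0.25\ {\rm kg})}{0.064\ {\rm N/A}} \approx 0.5\ {\rm mA}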

We need to get a spectrum and time series of the required coil current for acquiring and holding the DRMI, and also the single arm. Then we can see where to make noise reductions to allow this drastic force reduction.

Coil Driver Upgrade wiki here.

  13490   Thu Dec 21 19:25:48 2017 KevinSummaryGeneralDAC noise contribution to squeezing noise budget

Gautam and I redid our calculations, and the updated plot of squeezing as a function of DAC current noise per coil is shown in the attachment. The current noise is calculated as the maximum of the filtered DAC noise and the Johnson noise of the series resistor. The total noise is for four optics with four coils each.

The numbers are worse than we quoted before: according to these calculations we can get to 0 dBvac for current noise per coil of about 2.4 pA/rtHz at 100 Hz.
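A minimal sketch of the per-coil comparison described above (the 15 kOhm / 1 pA/rtHz numbers are purely illustrative, and the quadrature sum over 4 optics x 4 coils is my assumption about how the total was formed):

import numpy as np

kB, T = 1.380649e-23, 295.0          # Boltzmann constant [J/K], room temperature [K]

def coil_current_noise(dac_noise_filtered, R_series):
    """Per-coil current noise [A/rtHz]: the larger of the filtered DAC noise
    (already referred to the coil, in A/rtHz) and the series resistor's Johnson noise."""
    johnson = np.sqrt(4 * kB * T / R_series)
    return np.maximum(dac_noise_filtered, johnson)

# Illustrative numbers: 15 kOhm series resistor, DAC noise filtered down to 1 pA/rtHz
per_coil = coil_current_noise(1e-12, 15e3)   # ~1.0e-12 A/rtHz (Johnson-limited)
total = per_coil * np.sqrt(4 * 4)            # 4 optics x 4 coils, quadrature sum
print(per_coil, total)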

Quote:

Gautam and I looked into the DAC noise contribution to the noise budget for homodyne detection at the 40m. DAC noise is currently the most likely limiting source of technical noise.

Several of us have previously looked into the optimal SRC detuning and homodyne angle to observe ponderomotive squeezing at the 40m. The first attachment summarizes these investigations and shows the amount of squeezing below vacuum obtainable as a function of homodyne angle for an optimal SRC detuning including fundamental classical sources of noise (seismic, CTN, and suspension thermal). These calculations are done with an Optickle model. According to the calculations, it's possible to see 6 dBvac of squeezing around 100 Hz.

The second attachment shows the amount of squeezing obtainable including DAC noise as a function of current noise in the DAC electronics. These calculations are done at the optimal -0.45 deg SRC detuning and 97 deg homodyne angle. Estimates of this noise are computed as is done in elog 13146 and include de-whitening. It is not possible to observe squeezing with the current 400 Ω series resistor which corresponds to 30 pA/rtHz current noise at 100 Hz. We can get to 0 dBvac for current noise of around 10 pA/rtHz (1.2 kΩ series resistor) and can see 3 dBvac of squeezing with current noise of about 5 pA/rtHz at 100 Hz (2.5 kΩ series resistor). At this point it will be difficult to control the optics however.

So it seems reasonable to reduce the DAC noise to sufficient levels to observe squeezing, but we will need to think about the controls problem more.

 

Attachment 1: 40mDAC_squeezing.pdf
40mDAC_squeezing.pdf
  13489   Wed Dec 20 00:43:58 2017 KevinSummaryGeneralDAC noise contribution to squeezing noise budget

Gautam and I looked into the DAC noise contribution to the noise budget for homodyne detection at the 40m. DAC noise is currently the most likely limiting source of technical noise.

Several of us have previously looked into the optimal SRC detuning and homodyne angle to observe ponderomotive squeezing at the 40m. The first attachment summarizes these investigations and shows the amount of squeezing below vacuum obtainable as a function of homodyne angle for an optimal SRC detuning including fundamental classical sources of noise (seismic, CTN, and suspension thermal). These calculations are done with an Optickle model. According to the calculations, it's possible to see 6 dBvac of squeezing around 100 Hz.

The second attachment shows the amount of squeezing obtainable including DAC noise as a function of current noise in the DAC electronics. These calculations are done at the optimal -0.45 deg SRC detuning and 97 deg homodyne angle. Estimates of this noise are computed as is done in elog 13146 and include de-whitening. It is not possible to observe squeezing with the current 400 Ω series resistor which corresponds to 30 pA/rtHz current noise at 100 Hz. We can get to 0 dBvac for current noise of around 10 pA/rtHz (1.2 kΩ series resistor) and can see 3 dBvac of squeezing with current noise of about 5 pA/rtHz at 100 Hz (2.5 kΩ series resistor). At this point it will be difficult to control the optics however.

So it seems reasonable to reduce the DAC noise to sufficient levels to observe squeezing, but we will need to think about the controls problem more.

Attachment 1: 40m_squeezing.pdf
40m_squeezing.pdf
Attachment 2: 40mDAC_squeezing.pdf
40mDAC_squeezing.pdf
  13488   Mon Dec 18 20:37:18 2017 gautamUpdatePSLPMC MEDM cleanup

There are fewer lies on this screen now. For reference, the details of the electronics modifications made are in this elog.

  1. Error and control signals are now in units of nm, the appropriate filter switches have been SDF'ed.
  2. I think it's useful to see the control voltage to the PZT in volts as well, so I've made two readbacks available at the control point, one in V and one in nm.
  3. Indicated that the on-board LO mon readback, which reads "nan", is no longer meaningful, as the mixer is off the demod board.
  4. Indicated that the PMC Trans readback of "0" is because of a dead ADC.
Quote:

I think many of the readbacks on the PMC MEDM screen are now bogus and misleading since the PMC RF upgrade that Gautam did awhile ago. We ought to fix the screen and clearly label which readbacks and actuators are no longer valid.

 

Attachment 1: PMC_revamped.png
PMC_revamped.png
  13487   Mon Dec 18 17:48:09 2017 ranaUpdateComputersrossa: SL7.3 upgrade continues

Following instructions from LLO-CDS for the rossa upgrade. Last time there were some issues with not being able to access the LLO EPEL repos, but this time it seems to be working fine.

After adding font aliases, need to run 'sudo xset fp rehash' to get the new aliases to take hold. Afterwards, am able to use MEDM and sitemap just fine.

But diaggui won't run because of a lib-sasl error. Try 'sudo yum install gds-all'.

diaggui: error while loading shared libraries: libsasl2.so.2: cannot open shared object file: No such file or directory (have contacted LLO CDS admins)

X-windows keeps crashing with SL7 and this big monitor. Followed instructions on the internet to remove the generic 'Nouveau' driver and install the proprietary NVIDIA drivers by dropping to run level 3 and running some command line hoodoo to modify the X-files. Now I can even put the mouse on the left side of the screen and it doesn't crash.

  13486   Mon Dec 18 16:45:44 2017 gautamUpdateIOOIMC lockloss correlated with PRM alignment?

I stopped the test earlier today morning around 11:30am. The log file is located at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt. It contains the times at which the PRM was aligned/misaligned for lookback, and also the number of MC unlocks during every 30 minute period that the PRM alignment was toggled. This was computed by:

  • continuously reading the current value of the EPICS record for MC Trans.
  • comparing its current value to its value from 3 seconds ago.
  • If there is a downward step in this comparison greater than 5000 counts, increment a counter variable by 1.
  • Reset the counter at the end of each 30-minute period (see the sketch below).

I think this method is a pretty reliable proxy, because the MC autolocker certainly takes >3 seconds to re-acquire the lock (it has to run mcdown, wait for the next cavity flash, and run mcup in the meantime).
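A minimal sketch of this counting logic (not the actual script; the MC trans channel name is a guess, and pyepics stands in for whatever the script really calls):

import time
from collections import deque
from epics import caget            # pyepics

MC_TRANS = "C1:IOO-MC_TRANS_SUM"   # assumed channel name for the MC transmission
history = deque(maxlen=3)          # last ~3 seconds of 1 Hz samples
locklosses = 0

t_end = time.time() + 30 * 60      # one 30-minute counting period
while time.time() < t_end:
    now = caget(MC_TRANS)
    if now is None:                # skip dropped CA reads
        time.sleep(1)
        continue
    if len(history) == 3 and history[0] - now > 5000:
        locklosses += 1            # downward step of >5000 cts over 3 s = one lockloss
        history.clear()            # avoid double-counting the same event
    else:
        history.append(now)
    time.sleep(1)

print("MC locklosses in this 30 minute period:", locklosses)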

Preliminary analysis suggests no obvious correlation between MC lock duty cycle and PRM alignment.

I leave further analysis to those who are well versed in the science/art of PRM/IMC statistical correlations.

  13485   Fri Dec 15 19:09:49 2017 gautamUpdateIOOIMC lockloss correlated with PRM alignment?

Motivation:

To test the hypothesis that the IMC lock duty cycle is affected by the PRM alignment. Rana pointed out today that the input Faraday has not been tuned to maximize the output->input isolation in a while, so the idea is that perhaps when the PRM is aligned, some of the reflected light comes back towards the PSL through the Faraday and hence messes with the IMC lock.

A script to test this hypothesis is running over the weekend (in case anyone was thinking of doing anything with the IFO over the weekend).

Methodology:

I've made a simple script - the pseudocode is the following:

  • Align PRM
  • For the next half hour, look for downward transitions in the EPICS record for MC TRANS > 5000 cts - this is a proxy for an MC lockloss
  • At the end of 30 minutes, record number of locklosses in the last 30 minutes
  • Misalign PRM, repeat the above 3 bullets

The idea is to keep looping the above over the weekend, so we can expect ~100 datapoints, 50 each for PRM misaligned/aligned. The times at which the PRM was aligned/misaligned are also being logged, so we can make some spectrograms of PC drive RMS (for example) with the PRM aligned/misaligned. The script lives at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/FaradayIsolCheck.py. The script is being run inside a tmux session on pianosa; hopefully the machine doesn't crash over the weekend and MC1/CDS stays happy.

A more direct measurement of the input Faraday isolation can be made by putting a photodiode in place of the beam dump shown in Attachment #1 (borrowed from this elog). I measured ~100uW of power leaking through this mirror with the PRM misaligned (but IMC locked). I'm not sure what kind of SNR we can expect for a DC measurement, but if we have a chopper handy, we could put it in the leaked beam just before the PD (so as to still allow the IMC to be locked) and demodulate at the chop frequency for a cleaner measurement. This way, we could also measure the contribution from prompt reflections (up to the input side of the Faraday) by simply blocking the beam going into the vacuum. The window itself is wedged so that shouldn't be a big contributor.
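To illustrate the demodulation idea (a sketch only - it assumes the leaked-beam PD signal is digitized at some sample rate fs, and says nothing about the actual hardware):

import numpy as np

def lockin(signal, fs, f_chop, t_avg=10.0):
    """Software lock-in: demodulate `signal` (sampled at fs [Hz]) at the chopper
    frequency f_chop and average for t_avg seconds. Returns the amplitude of the
    component at f_chop, in the same units as `signal`."""
    n = int(t_avg * fs)
    x = np.asarray(signal[:n])
    t = np.arange(n) / fs
    i = 2 * np.mean(x * np.cos(2 * np.pi * f_chop * t))
    q = 2 * np.mean(x * np.sin(2 * np.pi * f_chop * t))
    return np.hypot(i, q)

Blocking the beam into the vacuum and repeating the same measurement would then isolate the prompt-reflection contribution, as described above.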

Attachment 1: PSL_layout.JPG
PSL_layout.JPG
  13484   Fri Dec 15 18:24:46 2017 ranaSummaryOptical LeversPRM

Today Angelina and I looked at the PRM OL with an eye towards installing a 2nd QPD. We want to try out using 2 QPDs for a single optic to see if there's a way to make a linear combination of them to reduce the sensitivity to jitter of the HeNe laser or acoustic noise on the table.

The power supply for the HeNe was gone, so I took one from the SP table.

There are WAY too many optics in use to get the beam from the HeNe into the vacuum and then back out. What we want is 1 steering mirror after the laser and then 1 steering mirror before the QPD. Even though there are rumors that this is impossible, I checked today and in fact it is very, very possible.

More optics = more noise = bad.

  13483   Fri Dec 15 18:23:03 2017 ranaUpdatePEMTrillium seismometer DC offset

UVW refers to the 3 internal, orthogonal velocity sensors which are not aligned with the vertical or horizontal directions. XYZ refers to the linear combinations of UVW which correspond to north, east, and up.
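For reference, the standard symmetric-triaxial (Galperin) combination is

X = \frac{2U - V - W}{\sqrt{6}}, \qquad Y = \frac{V - W}{\sqrt{2}}, \qquad Z = \frac{U + V + W}{\sqrt{3}}

I believe this is what the Trillium electronics implement, up to the manufacturer's sign and axis-ordering conventions.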

  13482   Fri Dec 15 17:05:55 2017 gautamUpdatePEMTrillium seismometer DC offset

Yesterday, while we were bringing the CDS system back online, we noticed that the control room wall StripTool traces for the seismic BLRMS signals did not come back to the levels we are used to seeing even after restarting the PEM model. There are no red lights on the CDS overview screen indicative of DAQ problems. Trending the DQ-ed seismometer signals (these are the calibrated (?) seismometer signals, not the BLRMS) over the last 30 days, it looks like

  1. On ~1st December, the signals all went to 0 - this is consistent with signals in the other models; I think this is when the DAQ processes crashed.
  2. On ~8 December, all the signals picked up a DC offset of a few hundred (counts? or um/s? this number is after a cts2vel calibration filter). I couldn't find anything in the elog on 8 December that may be related to this problem.

I poked around at the electronics rack (1X5/1X6) which houses the 1U interface box for these signals - on its front panel, there is a switch that has selectable positions "UVW" and "XYZ". It is currently set to the latter. I am assuming the former refers to velocities in the xyz directions, and the latter is displacement in these directions. Is this the nominal state? I didn't spend too much time debugging the signal further for now.

 

Attachment 1: Trillium.png
Trillium.png
  13481   Fri Dec 15 11:19:11 2017 gautamUpdateCDSCDS recovery, NFS woes

Looking at the dmesg on c1iscex for example, at least part of the problem seems to be associated with FB1 (192.168.113.201, see Attachment #1). The "server" can be unresponsive for O(100) seconds, which is consistent with the duration for which we see the MEDM status lights go blank, and the EPICS records get frozen. Note that the error timestamped ~4000 was from last night, which means there have been at least 2 more instances of this kind of freeze-up overnight.

I don't know if this is symptomatic of some more widespread problem with the 40m networking infrastructure. In any case, all the CDS overview screen lights were green today morning, and MC autolocker seems to have worked fine overnight.

I have also updated the wiki page with the updated daqd restart commands.

Unrelated to this work - Koji fixed up the MC overview screen such that the MC autolocker button is now visible again. The problem seems to have to do with me migrating some of the c1ioo EPICS channels from the slow machine to the fast system, as a result of which the EPICS variable type changed from "ENUM" to something that was not "ENUM". In any case, the button exists now, and the MC autolocker blinky light is responsive to its state.

Quote:

I don't think the problem is fb1.  The fb1 NFS is mostly only used during front end boot.  It's the rtcds mount that's the one that sees all the action, which is being served from chiara.

 

Attachment 1: NFS.png
NFS.png
Attachment 2: MCautolocker.png
MCautolocker.png
  13480   Fri Dec 15 01:53:37 2017 jamieUpdateCDSCDS recovery, NFS woes
Quote:

I would make a detailed post on how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to/correct me):

  1. There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
  2. Sometimes, when one machine on the dolphin network goes down, all 3 go down.
  3. The new FB/RCG means that some of the old commands now no longer work. Specifically, telnet fb 8087 followed by shutdown (to fix DC errors) no longer works. Instead, ssh into fb1 and run sudo systemctl restart daqd_*.

This should still work, but the address has changed.  The daqd was split up into three separate binaries to get around the issue with the monolithic build that we could never figure out.  The address of the data concentrator (DC) (which is the thing that needs to be restarted) is now 8083.

Quote:

UPDATE 8:20pm:

Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs?

I don't think the problem is fb1.  The fb1 NFS is mostly only used during front end boot.  It's the rtcds mount that's the one that sees all the action, which is being served from chiara.

  13479   Fri Dec 15 00:26:40 2017 johannesUpdateCDSRe: CDS recovery, NFS woes
Quote:

Didn't touch Xarm because we don't know what exactly the status of ETMX is.

The Xarm is currently in its original state, all cables are connected and c1auxex is hosting the slow channels.

  13478   Thu Dec 14 23:27:46 2017 johannesUpdateDAQaux chassis design

Made a front and back panel and slot panels for DSub and IDC breakouts. I want to send this out soon, are there any comments? Preferences for color schemes?

Attachment 1: auxdaq_40m_4U_front.pdf
auxdaq_40m_4U_front.pdf
Attachment 2: auxdaq_40m_4U_rear.pdf
auxdaq_40m_4U_rear.pdf
Attachment 3: auxdaq_40m_4U_DSub37x2.pdf
auxdaq_40m_4U_DSub37x2.pdf
Attachment 4: auxdaq_40m_4U_IDC50.pdf
auxdaq_40m_4U_IDC50.pdf
  13477   Thu Dec 14 19:41:00 2017 gautamUpdateCDSCDS recovery, NFS woes

[Koji, Jamie(remote), gautam]

Summary: The CDS system seems to be back up and functioning. But there seems to be some pending problems with the NFS that should be looked into.

We locked Y-arm, hand aligned transmission to 1. Some pending problems with ASS model (possibly symptomatic of something more general). Didn't touch Xarm because we don't know what exactly the status of ETMX is.

Problems raised in elogs in the thread of 13474 and also 13436 seem to be solved.


I would make a detailed post on how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to/correct me):

  1. There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
  2. Sometimes, when one machine on the dolphin network goes down, all 3 go down.
  3. The new FB/RCG means that some of the old commands now no longer work. Specifically, telnet fb 8087 followed by shutdown (to fix DC errors) no longer works. Instead, ssh into fb1 and run sudo systemctl restart daqd_*.
  4. Timing error on c1sus machine was linked to the mx_stream processes somehow not being automatically started. The "!mxstream restart" button on the CDS overview MEDM screen should run the necessary commands to restart it. However, today, I had to manually run sudo systemctl start mx_stream on c1sus to fix this error. It is a mystery why the automatic startup of this process was disabled in the first place. Jamie has now rectified this problem, so keep an eye out.
  5. c1oaf persistently reported DC errors (0x2bad) that couldn't be addressed by running mxstream restart or restarting the daqd processes on FB1. Restarting the model itself (i.e. rtcds restart c1oaf) fixed this issue (though of course I took the risk of having to go into the lab and hard-reboot 3 machines).
  6. At some point, we thought we had all the CDS lights green - but at that point, the END FEs crashed, necessitating Koji->EX and Gautam->EY hard reboots. This is a new phenomenon. Note that the vertex machines were unaffected.
  7. At some point, all the DC lights on the CDS overview screen went white - at the same time, we couldn't ssh into FB1, although it was responding to ping. After ~2mins, the green lights came back and we were able to connect to FB1. Not sure what to make of this.
  8. While trying to run the dither alignment scripts for the Y-arm, we noticed some strange behaviour:
    • Even when there was no signal (looking at EPICS channels) at the input of the ASS servos, the output was fluctuating wildly by ~20cts-pp.
    • This is not simply an EPICS artefact, as we could see corresponding motion of the suspension on the CCD.
    • A possible clue is that when I run the "Start Dither" script from the MEDM screen, I get a bunch of error messages (see Attachment #2).
    • Similar error messages show up when running the LSC offset script for example. Seems like there are multiple ports open somehow on the same machine?
    • There are no indicator lights on the CDS overview screen suggesting where the problem lies.
    • Will continue investigating tomorrow.

Some other general remarks:

  1. ETMX watchdog remains shutdown.
  2. ITMY and BS oplevs have been hijacked for HeNe RIN / Oplev sensing noise measurement, and so are not enabled.
  3. Y arm trans QPD (Thorlabs) has large 60Hz harmonics. These can be mitigated by turning on a 60Hz comb filter, but we should check if this is some kind of ground loop. The feature is much less evident when looking at the TRANS signal on the QPD.

UPDATE 8:20pm:

Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs?

Attachment 1: CDS_14Dec2017.png
CDS_14Dec2017.png
Attachment 2: CDS_errors.png
CDS_errors.png
  13476   Thu Dec 14 19:33:20 2017 gautamFrogsASSc1ass slow channel offloading scripts with small

I don't think this is really a problem - we offload to the fast channels and not to the slow (although we really should offload to the slow channels). I think the best approach is to use the ezcaservo utility to offload the DC part of the ASS control signals to the slow channels, so as to not waste fast channel DAC counts on DC offsets. In principle, this approach should be somewhat immune to the slow channel calibration not being perfect.
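A sketch of the offloading idea (this is essentially what ezcaservo does by integrating a readback into an actuator; the channel names, gain, and calibration below are hypothetical placeholders, not real channels):

import time
from epics import caget, caput          # pyepics

FAST_OUT  = "C1:SUS-ETMY_ASCPIT_OUT16"  # hypothetical: DC part of the fast ASS control signal
SLOW_BIAS = "C1:SUS-ETMY_PIT_COMM"      # hypothetical: slow (EPICS) alignment bias slider
GAIN = -0.01                            # small gain; sign must be chosen for negative feedback
CAL  = 1.0e-3                           # assumed fast-counts -> slow-slider calibration

for _ in range(600):                    # ~10 minutes at 1 Hz
    fast_dc = caget(FAST_OUT)
    if fast_dc is None:
        time.sleep(1)
        continue
    # Step the slow slider so as to null the DC of the fast output. An imperfect
    # CAL only changes the convergence speed, not the final operating point.
    caput(SLOW_BIAS, caget(SLOW_BIAS) + GAIN * CAL * fast_dc)
    time.sleep(1)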

Quote:

While staring at epics records all day I noticed something about the PIT/YAW offset sliders and the ASS scripts that offload offsets to the slow channels, which I'm not sure others are aware of, so I'll briefly discuss it in this post.

The PIT and YAW sliders directly control soft channels that are hosted on the slow machine. Secondary epics records disentangle them for the individual coils:

  • UL = PIT+YAW
  • LL = -PIT+YAW
  • UR = PIT-YAW
  • LR = -PIT-YAW

These channels are the direct input for the physical output channels that generate the control voltage.

The fast channels for PIT and YAW have a numerical correction factor built in that accounts for differences between the OSEMs, but the slow channels don't. This means that the slow PIT/YAW controls are not entirely orthogonal but have crosstalk on the order of 10 percent. This in itself is not that dramatic; however, the offset-offloading scripts for the dither alignment use the fast PIT/YAW values as inputs, which represent the necessary adjustments to the OSEMs only after the individual correction factors have been applied. The offloading to the slow channels knows nothing of this calibration difference between the OSEMs. The result is a ~10 percent error in the offset correction of the mirror alignment AFTER offloading. This will of course converge after a few iterations, but in any case it is advisable to run the dither alignment again after offloading and not offload the new offsets to the fast channels.

 

  13475   Thu Dec 14 08:59:17 2017 SteveUpdateGeneralwe are here
Attachment 1: 8_days.png
8_days.png
  13474   Thu Dec 14 07:07:09 2017 ranaUpdateIOOLots of red on the FE status screen

I had to key the c1psl crate to get the PMC locking again. Without this, it would still sort of lock, but it was very hard to turn on the loop; it would push itself off the fringe. So probably it was stuck in some state with the gain wrong. Since the RF stuff is now done in a separate electronics chain, I don't think the RF phase can be changed by this. Probably the sliders are just not effective until power cycling.

Quote:

Once the RT machines were back, we launched only the five IOPs. They had a bunch of red lights, but we continued to run essential models for the IFO. Some of the lights were fixed by "global diag reset" and "mxstream restart".

The suspensions were damped. We could restore the IMC lock. The locking became OK and the IMC was aligned. The REFL spot came back.

At least, I could confirm that the WFS ASC signals were not transmitted to c1mcs. There must be some disconnected IPC links.

I then tried to get the MC WFS back, but running rtcds restart --all would make some of the computers hang. For c1ioo I had to push the reset button on the computer and then did 'rtcds start --all' after it came up. Still missing IPC connections.

I'm going to get in touch with Rolf.

  13473   Thu Dec 14 00:32:56 2017 johannesUpdateASSAcromag new crate; c1auxex2 configured as gateway server for acromag

This splicing in of fast binary channels we discussed at yesterday's and today's meetings is getting messy with the current chassis. Cleaning up the cable mess was a key point, so I got a 4U height DEEP chassis from Rich and drew up a front panel for a modular approach that we can use at the other 40m locations as well. The front panel will have slots for smaller slot panels to which we can mount the breakout boards as before, so all the wiring that I've done can be transferred to this design. If some new connector standard is required it will be easy to draw a new slot panel from a template; for now I'll make some with two DSub37 and IDC50. Since this chassis is so huge it will have ample space for cross-connects.

I also moved the communication of c1auxex2 with the Acromag units off the martian network, connecting them with a direct cable connection out of the second ethernet port. To test if this works I configured the second ethernet port of c1auxex2 to have the IP address 192.168.114.1 and one of the acromag units to have 192.168.114.11, and initialized an IOC with some test channels. Much to my surprise this actually worked straight out of the box, and the test channels can be accessed from the control room computers without having a direct ethernet link to the acromag modules. huzzah!

Steve: it would be nice to have all plugs/connectors lockable

 

Attachment 1: fp_mod_4U.pdf
fp_mod_4U.pdf
Attachment 2: IMG_20171213_171541850_HDR.jpg
IMG_20171213_171541850_HDR.jpg
  13472   Wed Dec 13 17:46:08 2017 Udit KhandelwalSummaryGeneralSummary of Current Tasks

40m Lab CAD

1. 40m_bldg.dwg has a 2D drawing of the 40m building

  • After importing file as a 2D sketch into solidworks, make sure to retrace all the lines before performing any 3D extrusion stuff.
  • Made walls 3m high

2. 40m_VE.dwg has the Vacuum Envelope.

  • Divided the file into individual sketches for the tubes, test mass, and beam splitter chambers (so they can be individually modified later if required).

3. 40melev.dwg has the relative positioning between (1) and (2).

  • Using this file to position objects inside building cad.

4. All files can be found in Dropbox folder [40m SOS Modeling], which should be renamed to [40m CAD].

5. Next step would be to add the optical table and mirrors.

Tip-Tilt Suspension

1. Current objective: (refer to D070172) - Increase the length of the side arms (so it matches the dimensions of D960001), while keeping the test mass subassembly at the same height.
2. Future objective: Resonant frequency FEM of the frame (sans the test mass), and then change height to get the desired frequency.

Past Work

  • Completed solidworks model of SOS (D960001). I understand this is not the focus right now so this is for reference that the model is ready to be used.

Comments

  • I will be in India from 16th December until 6th January so this is my final visit for this year. I have enough material to work from home, and will correspond with Koji over email regarding Lab CAD and tip-tilt suspension.
  13471   Wed Dec 13 09:49:23 2017 johannesUpdateASSwiring diagram

I attached a wiring schematic from the slow DAQ to the eurocrate modules. Of these, pins 1-32 (or 1A-16C) and pins 33-64 (17A-32C) are on separate DSub connectors. Therefore the easiest solution is to splice the slow DIO channels into the existing breakouts so we can proceed with the transition. This will still remove a lot of the current cable salad. For the YEND we can start thinking about a more elegant solution (For example a connector on the front panel of the Acromag chassis for the fast DIO) now that the problem is better defined.

Attachment 1: 1Y9.pdf
1Y9.pdf
  13470   Fri Dec 8 23:31:31 2017 johannesFrogsASSc1ass slow channel offloading scripts with small

While staring at EPICS records all day I noticed something about the PIT/YAW offset sliders and the ASS scripts that offload offsets to the slow channels that I'm not sure others are aware of, so I'll briefly discuss it in this post.

The PIT and YAW sliders directly control soft channels that are hosted on the slow machine. Secondary EPICS records disentangle them into the individual coil signals:

  • UL = PIT+YAW
  • LL = -PIT+YAW
  • UR = PIT-YAW
  • LR = -PIT-YAW

These channels are the direct input for the physical output channels that generate the control voltage.

The fast channels for PIT and YAW have numerical correction factors built in that account for differences between the OSEMs, but the slow channels don't. This means that the slow PIT/YAW controls are not entirely orthogonal but have crosstalk on the order of 10 percent. This in itself is not that dramatic; however, the offset-offloading scripts for the dither alignment use the fast PIT/YAW values as inputs, and those values represent the necessary adjustments to the OSEMs only after the individual correction factors have been applied. The offloading to the slow channels knows nothing of this calibration difference between the OSEMs. The result is a ~10 percent error in the offset correction on the mirror alignment AFTER offloading. This will of course converge after a few iterations, but in any case it is advisable to run the dither alignment again after offloading and not to offload the new offsets to the fast channels.
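
As a rough numerical illustration of the effect (the per-coil correction factors below are invented, and the way the fast path applies them is deliberately simplified), consider offloading a set of fast PIT/YAW offsets through the ideal +/-1 matrix while the fast system would have scaled each coil individually:

import numpy as np

# Ideal PIT/YAW -> (UL, LL, UR, LR) mixing used by the slow soft channels:
M = np.array([[ 1,  1],    # UL =  PIT + YAW
              [-1,  1],    # LL = -PIT + YAW
              [ 1, -1],    # UR =  PIT - YAW
              [-1, -1]])   # LR = -PIT - YAW

# Hypothetical per-coil correction factors that only the fast path applies
# (numbers invented to mimic ~10% OSEM-to-OSEM differences):
g = np.array([1.00, 0.93, 1.08, 0.95])

p_fast = np.array([0.30, -0.10])        # example fast PIT/YAW offsets to offload

wanted  = g * np.dot(M, p_fast)         # coil drive the fast channels intended
applied = np.dot(M, p_fast)             # coil drive the uncalibrated slow sliders give

print("fractional per-coil error:", (applied - wanted) / wanted)
# -> errors at the several-percent level; each re-run of the dither alignment
#    only has to correct this residual, so repeated offloading converges.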

  13469   Fri Dec 8 12:06:59 2017 johannesOmnistructureComputersc1auxex2 ready - but need more cables

The new slow machine c1auxex2 is ready to deploy. Unfortunately we don't have enough 37-pin DSub cables to connect all channels. In fact, we need a total of 8, and I found only three male-male cables and one gender changer. I asked Steve to buy more.

Over the past week I have transferred all EPICS records - soft channels and physical ones - from c1auxex to c1auxex2, making changes where needed. Today I started the in-situ testing:

  1. Unplugged ETMX's satellite box.
  2. Unplugged the eurocrate backplane DIN cables from the SOS Driver and QPD Whitening filter modules (the ones that receive ao channels).
  3. Measured output voltages on the relevant pins for comparison after the swap.
  4. Turned off c1auxex by key and removed its ethernet cable.
  5. Started the modbus IOC on c1auxex2.
  6. The slow-machine indicator channels came online; the ETMX watchdog was responsive and reporting (but didn't have anything to do due to the missing inputs). The PIT/YAW sliders functioned as expected.
  7. Restoring the previous settings gave output voltages close to the previous values; in fact they were exactly the values requested (due to the fresh calibration). A sketch of this comparison is given after this list.
  8. The last step is to go live with c1auxex2 and confirm that the remaining channels work as expected.
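
The comparison in steps 3 and 7 can be done by hand, but a small script along these lines keeps it honest (the channel names and the pre-swap meter readings below are placeholders, not the real ETMX values, and it assumes the channels are in Volts):

from epics import caget

# Pre-swap pin voltages measured with the multimeter (placeholder numbers),
# keyed by the EPICS channel that requests them (placeholder names).
measured_before = {
    "C1:SUS-ETMX_PIT_BIAS": 1.372,
    "C1:SUS-ETMX_YAW_BIAS": -0.845,
}

for ch, before in measured_before.items():
    requested = caget(ch, timeout=2)
    if requested is None:
        print("%s: no response from the IOC" % ch)
        continue
    print("%-24s requested now %+.3f V, measured before swap %+.3f V, diff %+.3f V"
          % (ch, requested, before, requested - before))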

I copied the relevant files for starting the modbus server to /cvs/cds/caltech/target/c1auxex2, although I kept local copies in /home/controls/modbusIOC/, from which they are still run.

I wonder what the best practice for this is. Probably to store the database files centrally and load them over the network on server start?

  13468   Thu Dec 7 22:24:04 2017 johannesOmnistructureComputersAcromag XEND progress

 

Quote:
 
  • Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack
  • Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
  • The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the sos driver and whitening vme boards that get their binary control signals from the slow system.
  • Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121)
  • Setup remaining essential EPICS channels and confirm that dimensions are the same (as in both give the same voltage for the same requested value)
  • Disconnect DIN cables, attach adapter boards + DSUB cables
  • Testing

Getting the chassis ready took a little longer than anticipated, mostly because I had not looked into the channel list myself before and had forgotten about Lydia's post, which mentions that some of the switching controls have to be moved from the fast to the slow DAQ. We need a total of 5+5+4+8 = 22 binary outputs. With the existing Acromag units we have 16 sinking outputs and 8 sourcing outputs. I looked through all the eurocrate modules and confirmed that they all use the same switch topology, which has sourcing inputs.

While one can use a pull-down resistor to control a sourcing input with a sourcing output, pulling the MAX333A input low (the datasheet says logic low is <0.8V) requires a pull-down resistor of something like 100 Ohms, and that would mean ~150mA of current PER CHANNEL, which is unreasonable. Instead, I asked Steve to buy a second XT1111 and modified the chassis to accommodate more Acromag units.
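
The ~150 mA figure is just Ohm's law with the line sitting near the +15 V rail when driven high; a one-line sketch (the 15 V logic-high level is an assumption about the board):

# Back-of-the-envelope for the rejected pull-down option.
# Assumes the sourcing line sits near +15 V when driven high (assumption).
V_high     = 15.0    # V, level the pull-down has to fight against
V_low_max  = 0.8     # V, MAX333A logic-low threshold from the datasheet
R_pulldown = 100.0   # Ohm, roughly what is needed to stay below V_low_max

I_per_channel = V_high / R_pulldown
print("%.0f mA per channel" % (1e3 * I_per_channel))   # -> 150 mA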

I have now finished wiring the chassis (except for 8 remaining bypass controls to the whitening board, which need the second XT1111), calibrated all channels in use, and confirmed all pin locations via the existing breakout boards and the DCC drawings for the eurocrate modules; today Steve and I added more fuses to the DIN-rail power distribution for +20V and +15V.

There was not enough free space in the XEND rack to mount the chassis, so for now I have placed it next to the rack.

c1auxex2 is currently hosting all of the original physical c1auxex channels (not yet the calc records) under their original names with an _XT suffix appended to avoid duplicate channel names; c1auxex is still in control of ETMX. All EPICS channels hosted by c1auxex2 are in units of Volts. The plan for tomorrow is to take c1auxex off the grid, rename the c1auxex2-hosted channels, and transfer the ETMX controls to it, provided we can find enough (8) 37-pin DSub cables. I made 5 adapter boards for the 5 eurocrate modules that need to talk to the slow DAQ through their backplane connectors.
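
While the channels are duplicated, it is easy to cross-check the two systems against each other before the renaming; a minimal sketch (the channel names are placeholders, and it assumes both ends report Volts as stated above):

from epics import caget

# Compare each original c1auxex channel with its temporary _XT twin on c1auxex2.
# Channel names below are placeholders for the actual ETMX list.
originals = [
    "C1:SUS-ETMX_ULBiasSet",
    "C1:SUS-ETMX_LLBiasSet",
]

for ch in originals:
    old = caget(ch, timeout=2)            # served by c1auxex
    new = caget(ch + "_XT", timeout=2)    # served by c1auxex2
    if old is None or new is None:
        print("%s: one of the two IOCs did not respond" % ch)
        continue
    print("%-26s old %+.4f   new(_XT) %+.4f   diff %+.4f" % (ch, old, new, new - old))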

  13467   Thu Dec 7 16:28:06 2017 KojiUpdateIOOLots of red on the FE status screen

Once the RT machines were back, we launched only the five IOPs. They had a bunch of red lights, but we went on to run the essential models for the IFO. Some of the lights were fixed by "global diag reset" and "mxstream restart".

The suspensions were damped and we could restore the IMC lock. The locking became OK and the IMC was aligned. The REFL spot came back.

At least I could confirm that the WFS ASC signals were not being transmitted to c1mcs. There must be some disconnected IPC links.

  13466   Thu Dec 7 15:46:31 2017 johannesHowToComputer Scripts / ProgramsLots of red on the FE status screen

[Koji, Johannes]

The issue was partially fixed and the interferometer is in workable condition now.

What -probably- fixed it was restarting the DHCP server on chiara:

sudo service isc-dhcp-server restart

Afterwards the frontends were restarted one by one. SSH access was possible and the essential models for IFO operation were started.

c1iscex initially reported that no DAQ card was found, and inside the IO chassis the LED indicator strip was red. Turning off the machine, checking the cables, and rebooting fixed this.

Attachment 1: 04.png
04.png
  13465   Thu Dec 7 15:02:37 2017 KojiHowToComputer Scripts / ProgramsLots of red on the FE status screen

Once a realtime machine was rebooted, it did not come back. I suspect that the diskless hosts have difficulty booting up.

Attachment 1: DSC_0552.JPG
DSC_0552.JPG
  13464   Thu Dec 7 11:14:37 2017 johannesHowToComputer Scripts / ProgramsLots of red on the FE status screen

Since we're getting ready to put in the replacement slow DAQ for c1auxex, I wanted to bring the IFO back to operating condition after the PMC had not been locked for days. Something seems wrong with the CDS system though: many of the front-end models have a red background and don't seem to be responsive. I followed the instructions laid out in https://wiki-40m.ligo.caltech.edu/Computer_Restart_Procedures.

In the attached screenshot, initially all c1ioo models were red, and on c1iscex only c1x01 was blue, the others red. I was able to ssh into both machines and tried to restart individual models, which didn't work and instead turned their backgrounds white. Still following the wiki page, I restarted both machines, but they no longer respond to ping, so I cannot reach them over ssh. Not sure what to do, I also rebooted fb over telnet.

So far I couldn't find any records of how to fix this situation.

Attachment 1: 22.png
22.png
  13463   Mon Dec 4 22:06:07 2017 johannesOmnistructureComputersAcromag XEND progress

I wired up the power distribution and ethernet cables in the Acromag chassis today. For the time being it's all somewhat loose in there, but tomorrow the last parts should arrive from McMaster to put everything in its place. I had to unplug some of the wiring that Aaron had already done, but I labeled everything before I did so. I finalized the IP configuration via USB for all the units, which are now powered through the chassis and active on the network.

I started transcribing the database file ETMXaux.db, which is loaded by c1auxex, into the format required by the Acromags, and made sure that the new c1auxex2 properly functions as a server, which it does.

ToDo-list:

  • Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack
  • Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
  • The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the sos driver and whitening vme boards that get their binary control signals from the slow system.
  • Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121)
  • Setup remaining essential EPICS channels and confirm that dimensions are the same (as in both give the same voltage for the same requested value)
  • Disconnect DIN cables, attach adapter boards + DSUB cables
  • Testing

 

Quote:

[Aaron, Johannes]

We configured the AtomServer for the Martian network today. Hostname is c1auxex2, IP is 192.168.113.49. Remote access over SSH is enabled.

There will be 6 acromag units served by c1auxex2.

Hostname Type IP Address
c1auxex-xt1221a 1221 192.168.113.130
c1auxex-xt1221b 1221 192.168.113.131
c1auxex-xt1221c 1221 192.168.113.132
c1auxex-xt1541a 1541 192.168.113.133
c1auxex-xt1541b 1541 192.168.113.134
c1auxex-xt1111a 1111 192.168.113.135

Some hardware to assemble the Acromag box and adapter PCBs is still missing, and the wiring and channel definitions have to be finalized. The port driver initialization instructions and channel definitions are currently stored locally in /home/controls/modbusIOC/ but will eventually be migrated to a shared location; we need to decide how exactly we want to set up this infrastructure.

  • Should the new machines have the same hostnames as the ones they're replacing? For the transition we simply named it c1auxex2.
  • Because the communication of the server machine with the DAQ modules is happening over TCP/IP and not some VME backplane bus we could consolidate machines, particularly in the vertex area.
  • It would be good to use the fact that these SuperMicro servers have 2+ ethernet ports to separate CDS EPICS traffic from the modbus traffic. That would also keep the 30+ IPs for the Acromag thingies off the Martian host tables.
  13462   Sun Dec 3 17:01:08 2017 KojiConfigurationComputerssendmail installed on nodus

An email came in at 5 PM on Dec 3rd.

 
