ID | Date | Author | Type | Category | Subject
  13562 | Fri Jan 19 23:04:11 2018 | gautam | Update | ALS | Fiber ALS assay

[rana, kevin, udit, gautam]

quick notes of some discussions we had today:

  1. Earlier in the day, Udit and I measured (with a 20dB coupler and AG4395) ~20dBm of RF beat power at the input to the power splitter (just before the delay line box) at the LSC rack. This means that we have ~17dBm going into the LO input of the demod board. The AP1053 can only really handle a max of 16dBm at its input. After discussion with Rana, I put a 3dB attenuator at the input to the power splitter so as to preserve the LO/RF ratio in the demod circuit (see the power-budget sketch after this list).
  2. Need to make a detailed optical and RF power budget for both arms.
  3. The demod circuit board is configured to have gain of x100 post demod (conversion loss of the mixer is ~-8dB). This works well for the PDH cavity locking type of demod scheme, where the loop squishes the error signal in lock, so most of the time, the RF signal is tiny, and so a gain of x100 is good. For ALS, the application needs are rather different. So we lowered the gain of the "Audio IF amplifier" stage of the circuit from x100 to x10, by effecting the resistor swaps 10ohms->50ohms, 1kohm->500ohms (more details about this later).
  4. There is some subtlety regarding the usage of the whitening interface boards - I need to look at the circuit again and understand this better, but Rana advised against running with the whitening gain at low values. Point #3 above should have helped with this regard.
  5. I wanted to test the new signal chain (with 3dB attenuation and modified IF gain) but ETMX is not happy now, and is making it impossible to keep the X arm locked. Will try again tomorrow.
  6. Eventually: need to measure the mode of the fiber, and up the MM efficiency to at least 80%, which should be doable without using any fancy lenses/collimators.
  7. Udit and I felt that the back panel RF power monitor wasn't working as expected - I will re-investigate this when I have the board out again to make the IF gain change permanent with the right footprint SMD resistors.
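Here is the bookkeeping behind point #1 as a minimal sketch (Python); the numbers are the ones quoted above, and the 3dB splitter loss is the nominal loss of a 2-way split, not a measurement:

  # Power-budget sketch for point #1 (values quoted in this entry).
  beat_at_splitter_dBm = 20.0   # measured with 20 dB coupler + AG4395 at the LSC rack
  splitter_loss_dB     = 3.0    # nominal 2-way power splitter
  attenuator_dB        = 3.0    # attenuator added at the splitter input
  ap1053_max_dBm       = 16.0   # quoted maximum input level for the AP1053

  def lo_level(atten_dB):
      """RF level reaching the demod board LO input for a given input attenuation."""
      return beat_at_splitter_dBm - atten_dB - splitter_loss_dB

  for atten in (0.0, attenuator_dB):
      lo = lo_level(atten)
      status = "OK" if lo <= ap1053_max_dBm else "over the AP1053 max"
      print(f"attenuation {atten:3.1f} dB -> LO input {lo:4.1f} dBm ({status})")

Since the attenuator sits before the splitter, both splitter outputs (the delayed leg and the direct leg) drop by the same 3dB, which is what preserves the LO/RF ratio.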

RXA: 0805 size SMD thin film resistors have been ordered from Mouser, to be shipped on Monday. **note that these thin film resistors are black; i.e. it is NOT true that all black SMD resistors are thick film**

  13561 | Fri Jan 19 20:59:07 2018 | Udit Khandelwal | Update | General | Solidworks Rendering

Rendered the SOS assembly (D960001) with correct materials and all and it looks very nice. Will extend this to the building cad later.

  13560 | Fri Jan 19 15:22:19 2018 | Udit Khandelwal | Summary | General | 40m CAD update 2018/01/19

40m CAD Project

  1. All parts will be now named according to the numbering system in this excel sheet: LIGO 40m Parts List in dropbox folder [40mShare] > [40m_cad_models] > [40m Lab CAD]
  2. I've placed optical tables in the chambers at 34.82" from the bottom for now. This was chosen by aligning the centre of test mass of SOS assembly (D960001) with that of vacuum tube (Steve however pointed out last week they might not necessarily be concentric).

  13559 | Fri Jan 19 11:34:21 2018 | gautam | Update | ALS | Fiber ALS assay

I swapped the inputs to the ZHL-3A at the PSL table - so now the X beat RF signals from the beat mouth are going through what was previously the Y arm ALS electronics. From Attachment #1, you can see that the Y arm beat is now noisier than the X. The ~5kHz peak has also vanished.

So I will pursue this strategy of switching to try and isolate where the problem lies...


Somebody had forgotten to turn the HEPA variac on the PSL table down. It was set at 70. I set it to 20, and there is already a huge difference in the ALS spectra.

Quote:
 
  1. For Problem #1 - usual debugging tactic of switching X and Y electronics paths to see if the problem lies in the light or in the electronics. If it is in the electronics, we can swap around at various points in the signal chain to try and isolate the problematic component.
Attachment 1: IR_ALS_20180119.pdf
  13558 | Fri Jan 19 11:13:21 2018 | gautam | Update | CDS | slow machine bootfest

c1psl, c1susaux, and c1auxey today

Quote:

MC autolocker got stuck (judging by wall StripTool traces, it has been this way for ~7 hours) because c1psl was unresponsive so I power cycled it. Now MC is locked.

 

  13557 | Thu Jan 18 00:35:00 2018 | gautam | Update | ALS | Fiber ALS assay

Summary:

I am facing two problems:

  1. The X arm beat seems to be broadband noisier than the Y arm beat - see Attachment #1. The Y-axis calibration is uncertain, but at least the Y beat has the same profile as the reference traces, which are for the green beat from a time when we had ALS running. There is also a rather huge ~5kHz peak, which I confirmed isn't present in the PDH error/control signal spectra (with SR785).
  2. The Y-arm beat amplitude, at times, "breathes" in amplitude (as judged by the control room analyzer). Attachment #2 is a time-lapse of this behaviour (the left peak is the X arm beat, the right peak is the Y arm beat) - I caught only part of it; the beat note basically vanishes into the control room noise floor and then comes back up to almost the same level as the X beat. The scale is 10dB/div. During this time, the green (and IR for that matter) stay stably locked to the arm - you'll have to take my word for it as I have no way to sync my video with StripTool traces, but I was watching the DC transmission levels the whole time. The whole process happens over a few (1 < \tau < 5) minutes - I didn't time it exactly. I can't really say this behaviour is periodic either - after the level comes back up, it sometimes stays at a given level almost indefinitely.

More details:

  • Spent some time today trying to figure out losses in various parts of the signal chain, to make sure I wasn't in danger of saturating RF amplifiers. Cabling from PSL table -> LSC rack results in ~2dB loss.
  • I will upload the updated schematic of the Beat-Mouth based ALS - I didn't get a chance to re-measure the optical powers into the Beat Mouth, as someone had left the Fiber Power Meter unplugged, and it had lost all of its charge.
  • The Demod boards have a nice "RF/LO power monitor" available at the backplane of the chassis - we should hook these channels up to the DAQ for long term monitoring.
    • The schematic claims "120mV/dBm" into 50ohms at these monitoring pins.
    • I measured the signal levels with a DMM (teed with 50ohm), but couldn't really make the numbers jibe - converting the measured backplane voltage into dBm of input power gives me an inferred power level that is ~5dBm higher than the actual measured power levels (measured with the Agilent analyzer in Spectrum Analyzer mode). A conversion sketch follows this list.
  • Looking at the time series of the ALS I and Q inputs, the signals are large, but we are well clear of saturating our 16-bit ADCs.
  • In the brief periods when both beats were stable in amplitude (as judged by control room analyzer), the output of the Q quadrature of the phase tracker servo was ~12,000 cts - the number I am familiar with for the green days is ~2000cts - so naively, I would say we have ~6x the RF beat power from the Beat Mouth compared to green ALS.
  • I didn't characterize the conversion efficiency of the demod boards so I don't have a V (IF)/V (RF) number at the moment.
  • I confirmed that the various peaks seen in the X arm beat spectrum aren't seen in the control signal of the EX Green PDH, by looking at the spectrum on an SR785 (it is also supposedly recorded in the DAQ system, but I can't find the channel and the cable is labelled "GCX-PZT_OUT", which doesn't match any of our current channels).
    Note to self from the future: the relevant channels are: C1:ALS-X_ERR_MON_IN1 (green PDH error signal with x10 gain from an SR560) and C1:ALS-X_SLOW_SERVO_IN1 (green PDH control signal from monitor point - I believe this is DC coupled as this is the error signal to the slow EX laser PZT temp control). I've changed the cable labels at the X end to reflect this reality. At some point I will calibrate these to Hz.
  • The control room analyzer signals come from the "RF mon" outputs on the demod board, which supposedly couple off the RF input with a coupling factor of -23dB. These are then routed backwards through a power splitter to combine the X and Y signals, which is then plugged into the HP analyzer. The problem is not local to this path, as during the "breathing" of the Y beat RF amplitude, I can see the Q output of the phase tracker also breathing.
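Regarding the RF/LO power monitor discrepancy above, this is the back-conversion I'm doing, as a minimal sketch (Python); the 120mV/dBm slope is from the schematic, but the 0V-at-0dBm intercept and the example numbers are assumptions for illustration only, not measurements:

  # Back-conversion for the backplane RF/LO power monitor pins.
  # Slope is the schematic value quoted above; the intercept is an assumption.
  SLOPE_V_PER_DBM = 0.120

  def inferred_dBm(v_monitor, intercept_V=0.0):
      """Convert a monitor-pin voltage (DMM teed with 50 ohm) to an inferred input power."""
      return (v_monitor - intercept_V) / SLOPE_V_PER_DBM

  # Hypothetical comparison against a direct spectrum-analyzer measurement:
  v_meas = 1.2        # monitor voltage [V] (made-up example)
  p_analyzer = 5.0    # analyzer reading [dBm] (made-up example)
  print(f"inferred {inferred_dBm(v_meas):.1f} dBm vs measured {p_analyzer:.1f} dBm, "
        f"discrepancy {inferred_dBm(v_meas) - p_analyzer:+.1f} dB")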

Next steps (that I can think of, ideas welcome!):

  1. For Problem #1 - usual debugging tactic of switching X and Y electronics paths to see if the problem lies in the light or in the electronics. If it is in the electronics, we can swap around at various points in the signal chain to try and isolate the problematic component.
  2. For Problem #2 - hook up the backplane monitor channels to monitor RF amplitudes over time and see if the drifts are correlated with other channels.
  3. There is evidence of some acoustic peaks, which are possibly originating from the fibers - need to track these down, but I think for a first pass to try and get the red ALS going, we shouldn't be bothered by these.

 

 

Attachment 1: IR_ALS_20180118.pdf
Attachment 2: C2B4C1DD-6528-4067-9C13-6BD248629AD6.MOV
  13555 | Wed Jan 17 23:36:12 2018 | johannes | Configuration | General | AS port laser injection

Status of the AS-port auxiliary laser injection

  • Auxiliary laser with AOM setup exists, first order diffracted beam is coupled into fiber that leads to the AS table.
  • There is a post-PMC picked-off beam available that is currently just dumped (see picture). I want to use it for a beat note with the auxiliary laser pre-AOM so we can phaselock the lasers and then fast-switch the phaselocked light on and off.
  • I was going to use the ET3010 PD for the beat note unless someone else has plans for it.
  • I obtained a fixed triple-aspheric-lens collimator which is supposed to have a very small M^2 value for the collimation on the AS table. I still have the PSL-lab beam profiler and will measure its output mode.
  • Second attached picture shows the space on the AS table that we have for mode-matching into the IFO. Need to figure out the desired mode and how to merge the beams best.
Attachment 1: PSLbeat.svg.png
Attachment 2: ASpath.svg.png
  13554 | Wed Jan 17 22:44:14 2018 | johannes | Update | DAQ | Acromag checks

This happened because there are multiple ways to scale the raw value of an EPICS channel to the desired output range. In the CryoLab I was using one way, but the EPICS records I copied from c1auxex were doing it differently. Basically this:

DTYP: data type
LINR: "NO CONVERSION" vs "LINEAR"
RVAL: raw value
EGUF: engineering units full scale
EGUL: engineering units low
ASLO: manual scaling factor
AOFF: manual offset
VAL: value

If the "LINR" field is set to "LINEAR", the fields EGUF and EGUL are used to convert the raw value to the channel value VAL. To use them, one needs to enter the voltages that return the maximum and minimum values expected for the given data type. It used to be +10V and -10V, respectively, and was copied that way but that doesn't work with the data type required for the Acromag units. For -some- reason, while the the ADC range is -10V to +10V, this corresponds to values -20000 to +20000, while for the DAC channels it's -30000 to +30000. I had observed this before when setting up the DAQ in the CryoLab, but there we were using "NO CONVERSION", which skips the EGUF and EGUL fields, and used the ASLO and AOFF for manual scaling to get it right. When I mixed the records from there with the old ones from c1auxex this got lost in translation. Gautam and I confirmed by eye that this indeed explains the different levels well. This means that the VMon channelsfor the coils are also showing the wrong voltages, which will be corrected, but the readback still definitely works and confirms that the enable switches do their job.

Quote:
  1. I take back what I said about the OSEM PD mon at the meeting - there does seem to be some overall calibration factor (Attachment #1) that has scaled the OSEM PD readback channels, by a factor of (20000/2^15), which Johannes informs me is some strange feature of the ADC, which he will explain in a subsequent post.

 

  13553 | Wed Jan 17 14:32:51 2018 | gautam | Update | DAQ | Acromag checks
  1. I take back what I said about the OSEM PD mon at the meeting - there does seem to be some overall calibration factor (Attachment #1) that has scaled the OSEM PD readback channels, by a factor of (20000/2^15), which Johannes informs me is some strange feature of the ADC, which he will explain in a subsequent post.
  2. The coil readback fields on the MEDM screen have a "30Hz HPF" text field below them - I believe this is misleading. Judging by the schematic, what we are monitoring on the backplane (which is what these channels are reading back from) is the voltage to the coil with a gain of 0.5. We can reconfirm by checking the ETMX coil driver board, after which we should remove the misleading label on the MEDM screens.
Quote:

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

 

 

Attachment 1: OSEMPDmon_Acro.png
  13552 | Tue Jan 16 21:50:53 2018 | gautam | Update | ALS | Fiber ALS assay

With Johannes' help, I re-installed the box in the LSC electronics rack. In the end, I couldn't find a good solution to thermally insulate the inside of the box with foam - the 2U box is already pretty crowded with ~100m of cabling inside of it. So I just removed all the haphazardly placed foam and closed the box up for now. We can evaluate if thermal stability of the delay line is limiting us anywhere we care about and then think about what to do in this respect. This box is actually rather heavy with ~100m of cabling inside, and is right now mounted just by using the ears on the front - probably should try and implement a more robust mounting solution for the box with some rails for it to sit on.

I then restored all the cabling - but now, the delayed part of the split RF beat signal goes to the "RF in" input of the demod board, and the non-delayed part goes to the back-panel "LO" input. I also re-did the cabling at the PSL table, to connect the two ZHL3-A amplifier inputs to the IR beat PDs in the BeatMouth instead of the green BBPDs.

I didn't measure any power levels today; my plan was to try and get a quick ALS error signal spectrum - but it looks like there is too much beat signal power available at the moment, and the ADC inputs for both arm beat signals are overflowing often. The flat gain on the AS165 (=ALS X) and POP55 (=ALS Y) channels has been set to 0dB, but still the input signals seem way too large. The signals on the control room spectrum analyzer come from the "RF mon" ports on the demod board, which are marked as -23dB couplers. I looked at these peak heights with the end green beams locked to the arm cavities, as per the proposed new ALS scheme. Not sure how much cable loss we have from the LSC rack to the network analyzer, but assuming 3dB (which is the Google value for 100ft of RG58), and reading off the peak heights from the control room analyzer, I figure that we have ~0dBm of RF signal in the X arm, so I would expect ~3dBm of signal at the LO input. Both these numbers seem well within the range the demod board is designed to handle, so I'm not sure why we are saturating.

Note that the nominal (differential) I and Q demodulated outputs from the demod board come out of a backplane connector - but we seem to be using the front panel (single-ended) "MON" channels to acquire these signals. I also need to update my Fiber ALS diagram to indicate the power loss in cabling from the PSL table to the LSC electronics rack, expect it to be a couple of dB.

 

Quote:

After labeling cables I would disconnect, I pulled the box out of the LSC rack. Attachment #1 is a picture of the insides of the box - looks like it is indeed just two lengths of cabling. There was also some foam haphazardly stuck around inside - presumably an attempt at insulation/isolation.

Since I have the box out, I plan to measure the delay in each path, and also the signal attenuation. I'll also try and neaten the foam padding arrangement - Steve was showing me some foam we have, I'll use that. If anyone has comments on other changes that should be made / additional tests that should be done, please let me know.

20180111_2200: I'm running some TF measurements on the delay line box with the Agilent in the control room area (script running in tmux sesh on pianosa). Results will be uploaded later.

 

 

  13551 | Tue Jan 16 21:46:02 2018 | gautam | Update | PSL | PSL shelf - AOM power connection interrupted

Johannes informed me that he touched up the PMC REFL camera alignment. I am holding off on re-soldering the AOM driver power as I could use another pair of hands getting the power cable disentangled and removed from the 1X2 rack rails, so that I can bring it out to the lab and solder it back on.

Is anyone aware of a more robust connector solution for the type of power pins we have on the AOM driver?

Quote:

While moving the RefCav to facilitate the PSL shelf install, I bumped the power cable to the AOM driver. I will re-solder it in the evening after the shelf installation. PMC and IMC have been re-locked. Judging by the PMC refl camera image, I may also have bumped the camera as the REFL spot is now a little shifted. The fact that the IMC re-locked readily suggests that the input pointing can't have changed significantly because of the RefCav move.

 

 

  13550 | Tue Jan 16 16:18:47 2018 | Steve | Update | PSL | new PSL shelf in place

[ Johannes, Rana, Mark and Steve ]

On the second trial the shelf was installed. Plastic cover removed. South end door put back on and 2W Inno turned on.

Shelf 10 " below the existing one:   92" x 30" x 3/4" melamine (or MDF) covered with white Formica.  200 lbs it's max load. Working distance to top of the table 18"

Quote:

While moving the RefCav to facilitate the PSL shelf install, I bumped the power cable to the AOM driver. I will re-solder it in the evening after the shelf installation. PMC and IMC have been re-locked. Judging by the PMC refl camera image, I may also have bumped the camera as the REFL spot is now a little shifted. The fact that the IMC re-locked readily suggests that the input pointing can't have changed significantly because of the RefCav move.

 

 

Attachment 1: DSC00020.JPG
  13549 | Tue Jan 16 11:05:51 2018 | gautam | Update | PSL | PSL shelf - AOM power connection interrupted

While moving the RefCav to facilitate the PSL shelf install, I bumped the power cable to the AOM driver. I will re-solder it in the evening after the shelf installation. PMC and IMC have been re-locked. Judging by the PMC refl camera image, I may also have bumped the camera as the REFL spot is now a little shifted. The fact that the IMC re-locked readily suggests that the input pointing can't have changed significantly because of the RefCav move.

 

  13548 | Mon Jan 15 17:36:03 2018 | gautam | HowTo | Optical Levers | Oplev calibration

Summary:

I checked the calibration of the Oplevs for both ITMs, both ETMs and the BS. The table below summarizes the old and new cts->urad conversion factors, as well as the factor describing the scaling applied. Attachment #1 is a zip file of the fits performed to calculate these calibration factors (GPS times of the sweeps are in the titles of these plots). Attachment #2 is the spectra of the various Oplev error signals (open loop, so a measure of seismic induced angular motion for a given optic, and DoF) after the correction. Loop TF measurements post calibration factor update and loop gain adjustment to be uploaded tomorrow.

Optic, DoF  | Old calib [urad/ct] | New calib [urad/ct] | Correction factor [new/old]
ETMX, Pitch | 200                 | 175                 | 0.88
ETMX, Yaw   | 222                 | 175                 | 0.79
ITMX, Pitch | 122                 | 134                 | 1.1
ITMX, Yaw   | 147                 | 146                 | 1
BS, Pitch   | 130                 | 136                 | 1.05
BS, Yaw     | 170                 | 176                 | 1.04
ITMY, Pitch | 239                 | 254                 | 1.06
ITMY, Yaw   | 226                 | 220                 | 0.97
ETMY, Pitch | 140                 | 164                 | 1.17
ETMY, Yaw   | 143                 | 169                 | 1.18

Motivation:

We'd like for the Oplev calibration to be a reliable readback of the optic alignment. For example, a calibrated Oplev would be a useful diagnostic to analyze the drifting (?) ETMX.

Details:

  1. I locked and dither aligned the individual arms.
  2. I then used a 60 second ramp time to misalign <optic> in {ITMX, ITMY, BS, ETMX, ETMY} one at a time, and looked at the appropriate arm cavity transmission while the misalignment was applied. The amplitude of the misalignment was chosen such that in the maximally misaligned state, the arm cavity was still locked to a TEM00 mode, with arm transmission ~40% of the value when the cavity transmission was maximized using the dither alignment servos. The CDS ramp is not exactly linear, it looks more like a sigmoid near the start and end, but I don't think that really matters for these fits.
  3. I used the script OLcalibFactor.py (located at /opt/rtcds/caltech/c1/scripts/OL) to fit the data and extract calibration factors. This script downloads the arm cavity transmission and the OL error signal during the misalignment period, and fits a Gaussian profile to the data (X=oplev error signal, Y=arm transmission). Using geometry and mode overlap considerations, we can back out the misalignment in physical units (urad).
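Schematically, the fit in step 3 looks like the following (a simplified sketch, not the actual OLcalibFactor.py): fit the arm transmission against the Oplev error signal with a Gaussian; the fitted width in counts, combined with the width in urad implied by the geometry/mode-overlap argument, gives the urad/ct calibration.

  import numpy as np
  from scipy.optimize import curve_fit

  def gaussian(x, a, x0, sigma, offset):
      return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

  def ol_calib(ol_error_cts, arm_trans, sigma_urad):
      """sigma_urad: misalignment [urad] corresponding to the Gaussian width of the
      transmission curve, from geometry/mode-overlap (an input here, not derived)."""
      p0 = [arm_trans.max(), ol_error_cts[np.argmax(arm_trans)],
            0.5 * np.std(ol_error_cts), 0.0]
      popt, _ = curve_fit(gaussian, ol_error_cts, arm_trans, p0=p0)
      sigma_cts = abs(popt[2])
      return sigma_urad / sigma_cts   # calibration factor [urad/ct]

  # usage: calib = ol_calib(ol_error_data, transmission_data, sigma_urad=...)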

Comments:

  1. For the most part, the correction was small, of the order of a few percent. The largest corrections were for the ETMs. I believe the last person to do Oplev calibration for the TMs was Yutaro in Dec 2015, and since then, we have certainly changed the HeNes at the X and Y ends (but not for the ITMs), so this seems consistent.
  2. From Attachment #2, most of the 1Hz resonances line up quite well (around 1-3urad/rtHz), which gives me some confidence in this calibration.
  3. I haven't done a careful error analysis yet - but the fits are good to the eye, and the residuals look randomly distributed for the most part. I've assumed precision to the level of 1 urad/ct in setting the new calibration factors.
  4. I think the misalignment period of 60 seconds is sufficiently long that the disturbance applied to the Oplev loop is well below the lower loop UGF of ~0.2Hz, and so the in loop Oplev error signal is a good proxy for the angular (mis)alignment of the optic. So no loop correction factor was applied.
  5. I've not yet calibrated the PRM and SRM oplevs.

Now that the ETMX calibration has been updated, let's keep an eye out for a wandering ETMX.

Attachment 1: OLcalib_20180115.tar.bz2
Attachment 2: Oplevs.pdf
  13547 | Mon Jan 15 11:53:57 2018 | gautam | Update | IOO | MC autolocker getting stuck

Looks like this problem persisted over the weekend - Attachment #1 is the wall StripTool trace for PSL diagnostics; it seems like the control signals to the NPRO PZT and FSS EOM were all over the place, and saturated for the most part.

I traced the problem down to an unresponsive c1iool0. So it looks like for the IMC autolocker to work properly (on the software end), we need c1psl, c1iool0 and megatron to all be running smoothly: c1psl controls the FSS box gains through EPICS channels, c1iool0 controls the MC servo board gains through EPICS channels, and megatron runs the various scripts to set up the gains for either the lock acquisition or in-lock states. In this specific case, the autolocker was being foiled because the mcdown script wasn't running properly - it was unable to set the EPICS channel C1:IOO-MC_VCO_GAIN to its lock acquisition value of -15dB, and it was stuck at its in-lock value of +7dB. Curiously, the other EPICS channels on c1iool0 seemed readable and were reset by mcdown. Anyway, keying the c1iool0 crate seems to have fixed the problem.
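A minimal sketch of the kind of handshaking check that could be added to the autolocker (assuming pyepics is available on megatron; the -15dB value is the lock-acquisition setting quoted above, and the channel list and tolerance are placeholders):

  import epics

  EXPECTED_MCDOWN = {"C1:IOO-MC_VCO_GAIN": -15.0}   # extend with other mcdown settings

  def mcdown_settings_ok(expected=EXPECTED_MCDOWN, tol=0.1):
      """Return True only if all channels read back their lock-acquisition values."""
      for chan, want in expected.items():
          got = epics.caget(chan)
          if got is None or abs(got - want) > tol:
              print(f"{chan}: read {got}, expected {want} - mcdown may not have executed")
              return False
      return True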

Quote:

I've noticed this a couple of times today - when the autolocker runs the mcdown script, sometimes it doesn't seem to actually change the various gain sliders on the PSL FSS. There is no handshaking built in to the autolocker at the moment. So the autolocker thinks that the settings are correct for lock re-acquisition, but they are not. The PCdrive signal is often railing, as is the PZT signal. The autolocker just gets stuck waiting to re-acquire lock. This has happened today ~3 times, and each time, the Autolocker has tried to re-acquire lock unsuccessfully for ~1hour.

Perhaps I'll add a line or two to check that the signal levels are indicative of mcdown being successfully executed.

 

Attachment 1: MCautolkockerStuck.png
  13546 | Sat Jan 13 03:20:55 2018 | Koji | Configuration | Computers | sendmail troubles on nodus

I know it, and I don't like it. DokuWiki seems to allow us to use an external server for notification emails. That would be the way to go.

  13545 | Sat Jan 13 02:36:51 2018 | rana | Configuration | Computers | sendmail troubles on nodus

I think sendmail is required on nodus since that's how the dokuwiki works. That's why the dokuwiki was trying to send an email to Umakant.

  13544 | Fri Jan 12 20:35:34 2018 | Udit Khandelwal | Summary | General | 2018/01/12 Summary
  1. 40m Lab CAD
    1. Worked further on positioning vacuum tubes and chambers in the building.
    2. Next step would be to find some drawings for optical table positions and vibration isolation stack. Need help with this! 
  2. Tip Tilt Suspension (D070172)
    1. Increased the length of side arms. The overall height of D070172 assembly matches that of D960001.
    2. The files are present in dropbox in [40mShare] > [40m_cad_models] > [TT - Tip Tilt Suspension]
  13543 | Fri Jan 12 19:15:34 2018 | johannes | Update | DAQ | etmx slow daq chassis

Steve and I removed c1auxex from 1X9 today to make space for the DAQ chassis. Steve installed rails for mounting. To install the box I had to remove all cabling, for which I used the usual precautions (disconnect satellite box etc.)

On reconnect, c1auxex2 didn't initialize the physical EPICS channels (the 'actual' Acromag channels); apparently it had trouble communicating. A reboot fixed this. It's possible that this is because of the direct cable connection (without a network switch) that exists between the Acromags and c1auxex. The EPICS server was started automatically on reboot.

Currently the channel defaults need to be loaded manually after every EPICS server script restart with burt. I'm looking for a good way to automate this, but the only compiled burt binaries for x86 (that we can in principle run on c1auxex2 itself) on the martian network are from EPICS version 3.14.10 and throw a missing shared object error. Could be that simply some path variable is missing.

The burt binaries are not distributed by the lscsoft or cdssoft packages, so alternatively we would need to compile it ourselves for x86 or get it working with the older epics version.
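One possible stopgap (a sketch only, not a replacement for proper .snap handling; the file path and format are placeholders) would be to restore the defaults with pyepics from a plain "CHANNEL VALUE" text file:

  import epics

  def restore_defaults(path="/path/to/c1auxex_defaults.txt"):
      """Caput saved default values after an EPICS server restart."""
      with open(path) as f:
          for line in f:
              if not line.strip() or line.startswith("#"):
                  continue
              chan, value = line.split()[:2]
              epics.caput(chan, float(value), wait=True)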

  13542 | Fri Jan 12 18:22:09 2018 | gautam | Configuration | Computers | sendmail troubles on nodus

Okay I will port awade's python mailer stuff for this purpose.

gautam 14Jan2018 1730: Python mailer has been implemented: see here for the files. On shared drive, the files are at /opt/rtcds/caltech/c1/scripts/general/pizza/pythonMailer/

gautam 11Feb2018 1730: The python mailer had never once worked successfully in automatically sending the message. I realized this may be because I had put the script on the root user's crontab, but had setup the authentication keyring with the password for the mailer on the controls user. So I have now setup a controls user crontab, which for now just runs the pizza mailing. let's see if this works next Sunday...
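The idea is roughly the following (a minimal sketch, not the actual pythonMailer script; the SMTP server, account and recipient below are placeholders):

  import smtplib
  from email.message import EmailMessage

  def send_notification(subject, body, password):
      """Send a notification via an external SMTP server, bypassing local sendmail."""
      msg = EmailMessage()
      msg["From"] = "forty.meter.mailer@example.com"   # placeholder account
      msg["To"] = "40m-list@example.com"               # placeholder recipient
      msg["Subject"] = subject
      msg.set_content(body)
      with smtplib.SMTP_SSL("smtp.gmail.com", 465) as s:
          s.login(msg["From"], password)
          s.send_message(msg)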

Quote:

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server, as it requires a lot of maintenance (security updates, configuration, etc). If we can use an external mail service (like gmail) via the gmail API in python, that would ease our worries, I thought.

 

  13541 | Fri Jan 12 18:08:55 2018 | gautam | Update | General | pip installed on nodus

After much googling, I figured out how to install pip on SL7:

sudo easy_install pip

Next, I installed git:

sudo yum install git

Turns out, actually, pip can be installed via yum using

sudo yum install python-pip
  13540 | Fri Jan 12 16:01:27 2018 | Koji | Configuration | Computers | sendmail troubles on nodus

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server, as it requires a lot of maintenance (security updates, configuration, etc). If we can use an external mail service (like gmail) via the gmail API in python, that would ease our worries, I thought.

  13539 | Fri Jan 12 12:31:04 2018 | gautam | Configuration | Computers | sendmail troubles on nodus

I'm having trouble getting the sendmail service going on nodus since the Christmas day power failure - for some reason, it seems like the mail server that sendmail uses to send out emails on nodus (mx1.caltech.iphmx.com, IP=68.232.148.132) is on a blacklist! Not sure how exactly to go about remedying this.

Running sudo systemctl status sendmail.service -l also shows a bunch of suspicious lines:

Jan 12 10:15:27 nodus.ligo.caltech.edu sendmail[6958]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 10:15:45 nodus.ligo.caltech.edu sendmail[6958]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+10:49:16, xdelay=00:00:39, mailer=esmtp, pri=5432408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 11:15:23 nodus.ligo.caltech.edu sendmail[10334]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 11:15:31 nodus.ligo.caltech.edu sendmail[10334]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+11:49:02, xdelay=00:00:27, mailer=esmtp, pri=5522408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 12:15:25 nodus.ligo.caltech.edu sendmail[13747]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 12:15:42 nodus.ligo.caltech.edu sendmail[13747]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+12:49:13, xdelay=00:00:33, mailer=esmtp, pri=5612408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable

 

Why is nodus attempting to email umakant.rapol@iiserpune.ac.in?

  13538 | Fri Jan 12 10:26:24 2018 | Steve | Update | PSL | PSL shelf work schedule

Measurements for a good fit were made. The new shelf will be installed next Tuesday at 2pm.

The reference cavity ion pump is in the way, so the cavity will be moved 5" westward. The space above the shelf will be 10", and the working height from under the shelf to the optical table is 18".

Quote:

I have just received the schedule for the PSL shelf work tomorrow. Gautam and I agreed that if it is needed I will shut the laser off and cover the whole table with plastic.

 

  13537 | Fri Jan 12 10:02:05 2018 | johannes | Update | DAQ | etmx slow daq chassis
Quote:

There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. Something must be wrongly wired - we confirmed this by returning only the FAST BIO wiring to the pre-Acromag state (with everything else still controlled by the Acromag), and the problem went away. Or some electrical connection is not being made (I had to use gender changers on these connectors due to a lack of proper cabling).

The switches for the QPD gain stages did not work. I suspect a wiring problem, since the switching of the coil enables did work.

Both issues were fixed. In both cases it was two separate causes that prevented them from working.

The QPD gain stage switch software channels were assigned to wrong physical pins of the Acromag, and additionally their DSub cable was swapped with a different one.

The BIO switching had its signal and ground wires swapped on ALL connections, and part of it was also suffering from the cable-mixup.

All backplane signals are now routed through the Acromag chassis.

 

Gautam and I did notice that occasionally ETMX alignment will start drifting as evident from the OpLev. I want to set up a diagnostic channel to see if the DAC voltages coming from the Acromag are responsible for this.

  13536 | Thu Jan 11 21:09:33 2018 | gautam | Update | CDS | revisiting Acromag

We'd like to setup the recording of the PSL diagnostic connector Acromag channels in a more robust way - the objective is to assess the long term performance of the Acromag DAQ system, glitch rates etc. At the Wednesday meeting, Rana suggested using c1ioo to run the IOC processes - the advantage being that c1ioo has the systemd utility, which seems to be pretty reliable in starting up various processes in the event of the computer being rebooted for whatever reason. Jamie pointed out that this may not be the best approach however - because all the FEs get the list of services to run from their common shared drive mount point, it may be that in the event of a power failure for example, all of them try and start the IOC processes, which is presumably undesirable. Furthermore, Johannes reported the necessity for the procServ utility to be able to run the modbusIOC process in the background - this utility is not available on any of the FEs currently, and I didn't want to futz around with trying to install it.

One alternative is to connect the PSL Acromag also to the Supermicro computer Johannes has set up at the Xend - it currently has systemd setup to run the modbusIOC, so it has all the utilities necessary. Or else, we could use optimus, which has systemd, and all the EPICS dependencies required. I feel less wary of trying to install procServ on optimus too. Thoughts?

 

  13535 | Thu Jan 11 20:59:41 2018 | gautam | Update | DAQ | etmx slow daq chassis

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

  13534 | Thu Jan 11 20:51:20 2018 | gautam | Update | ALS | Fiber ALS assay

After labeling cables I would disconnect, I pulled the box out of the LSC rack. Attachment #1 is a picture of the insides of the box - looks like it is indeed just two lengths of cabling. There was also some foam haphazardly stuck around inside - presumably an attempt at insulation/isolation.

Since I have the box out, I plan to measure the delay in each path, and also the signal attenuation. I'll also try and neaten the foam padding arrangement - Steve was showing me some foam we have, I'll use that. If anyone has comments on other changes that should be made / additional tests that should be done, please let me know.

20180111_2200: I'm running some TF measurements on the delay line box with the Agilent in the control room area (script running in tmux sesh on pianosa). Results will be uploaded later.

Quote:

For completeness, I'd like to temporarily pull the box out of the rack, open it up, take photos, and make a diagram unless there are any objections.

 

Attachment 1: IMG_5112.JPG
  13533 | Thu Jan 11 18:50:31 2018 | gautam | Update | IOO | MC autolocker getting stuck

I've noticed this a couple of times today - when the autolocker runs the mcdown script, sometimes it doesn't seem to actually change the various gain sliders on the PSL FSS. There is no handshaking built in to the autolocker at the moment. So the autolocker thinks that the settings are correct for lock re-acquisition, but they are not. The PCdrive signal is often railing, as is the PZT signal. The autolocker just gets stuck waiting to re-acquire lock. This has happened today ~3 times, and each time, the Autolocker has tried to re-acquire lock unsuccessfully for ~1hour.

Perhaps I'll add a line or two to check that the signal levels are indicative of mcdown being successfully executed.

  13532 | Thu Jan 11 14:47:11 2018 | Steve | Update | PSL | shelf work for tomorrow

I have just received the schedule for the PSL shelf work tomorrow. Gautam and I agreed that if it is needed I will shut the laser off and cover the whole table with plastic.

  13531 | Thu Jan 11 14:22:40 2018 | gautam | Update | ALS | Fiber ALS assay

I did a cursory check of the ALS signal chain in preparation for commissioning the IR ALS system. The main elements of this system are shown in my diagram in the previous elog in this thread.

Questions I have:

  1. Does anyone know what exactly is inside the "Delay Line" box? I can't find a diagram anywhere.
    • Jessica's SURF report would suggest that there are just 2 50m cables in there.
    • There are two power splitters taped to the top of this box.
    • It is unclear to me if there are any active components in the box.
    • It is unclear to me if there is any thermal/acoustic insulation in there.
    • For completeness, I'd like to temporarily pull the box out of the LSC rack, open it up, take photos, and make a diagram unless there are any objections.
  2. If you believe the front panel labeling, then currently, the "LO" input of the mixer is being driven by the part of the ALS beat signal that goes through the delay line. The direct (i.e. non delayed) output of the power splitter goes to the "RF" input of the mixer. The mixer used, according to the DCC diagram, is a PE4140. Datasheet suggests the LO power can range from -7dBm to +20dBm. For a -8dBm beat from the IR beat PDs, with +24dB gain from the ZHL3A but -3dB from the power splitter, and assuming 9dB loss in the cable (I don't know what the actual loss is, but according to a Frank Seifert elog, the optimal loss is 8.7dB and I assume our delay line is close to optimal), this means that we have ~4dBm at the "LO" input of the demod board. The schematic says the nominal level the circuit expects is 10dBm. If we use the non-delayed output of the power splitter, we would have, for a -8dBm beat, (-8+24-3)dBm ~13dBm, plus probably some cabling loss along the way which would be closer to 10dBm. So should we use the non-delayed version for the LO signal? Is there any reason why the current wiring is done in this way?
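To keep the numbers in item 2 straight, here is the arithmetic as a small sketch (Python); the ~9dB delay-line loss is the assumed value stated above, not a measurement:

  # LO level at the demod board for the two possible routings (illustration only).
  beat_dBm       = -8.0    # IR beat PD output
  amp_gain_dB    = +24.0   # ZHL-3A
  splitter_dB    = -3.0    # 2-way power splitter
  delay_loss_dB  = -9.0    # assumed delay line loss
  nominal_LO_dBm = 10.0    # level the demod board schematic expects

  delayed_LO = beat_dBm + amp_gain_dB + splitter_dB + delay_loss_dB   # ~ +4 dBm
  direct_LO  = beat_dBm + amp_gain_dB + splitter_dB                   # ~ +13 dBm
  print(f"delayed leg as LO: {delayed_LO:+.0f} dBm, direct leg as LO: {direct_LO:+.0f} dBm "
        f"(nominal {nominal_LO_dBm:+.0f} dBm)")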

 

  13530 | Thu Jan 11 09:57:17 2018 | Steve | Update | DAQ | acromag at ETMX

Good going Johannes!

Quote:

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplane connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw sliders move yaw.
  2. Coil enable and monitoring channels work.
  3. Watchdog seems to work. We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint and triggered.
  4. All channels initialize with "0" upon machine/server script restart. This means the watchdog comes up OFF, which is good. It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way.
  5. We got the local damping going.
  6. There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. Something must be wrongly wired - we confirmed this by returning only the FAST BIO wiring to the pre-Acromag state (with everything else still controlled by the Acromag), and the problem went away. Or some electrical connection is not being made (I had to use gender changers on these connectors due to a lack of proper cabling).
  7. The switches for the QPD gain stages did not work. I suspect a wiring problem, since the switching of the coil enables did work.

The arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.

 

Attachment 1: Acromg_in_action.png
  13529 | Wed Jan 10 22:24:28 2018 | johannes | Update | DAQ | etmx slow daq chassis

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplane connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw sliders move yaw.
  2. Coil enable and monitoring channels work.
  3. Watchdog seems to work. We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint and triggered.
  4. All channels initialize with "0" upon machine/server script restart. This means the watchdog comes up OFF, which is good. It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way.
  5. We got the local damping going.
  6. There is some problem with the routing of the fast BIO channels through the new chassis - the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. Something must be wrongly wired - we confirmed this by returning only the FAST BIO wiring to the pre-Acromag state (with everything else still controlled by the Acromag), and the problem went away. Or some electrical connection is not being made (I had to use gender changers on these connectors due to a lack of proper cabling).
  7. The switches for the QPD gain stages did not work. I suspect a wiring problem, since the switching of the coil enables did work.

The arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.

  13528 | Wed Jan 10 22:19:44 2018 | rana | Update | SUS | ETMX DC alignment

Best to just calibrate the ETM OL in the usual way. I bet the OSEM outputs have a cal uncertainty of ~50% since the input matrix changes as a function of the DC alignment. Still, a 30 urad pitch mis-alignment gives a (30e-6 rad)(40 m) ~ 1 mm beam spot shift. This would be enough to flash other modes, but it would still be easy to lock on a TEM00 like this. I also doubt that the OL calibration is valid outside of some region near zero - can easily check by moving the ETM bias sliders.

Quote:

I should've put in the SUSPIT and SUSYAW channels in the previous screenshot. I re-aligned ETMX till I could see IR flashes in the arm, and was also able to lock the green beam on a TEM00 mode with reasonable transmission. As I suspected, this brought the Oplev spot back near the center of its QPD. But the answer to the question "How much did I move the ETM by?" still varies by ~1 order of magnitude, depending on whether you believe the OSEM SUSPIT and SUSYAW signals or the Oplev error signals - I don't know which, if any, of these are calibrated.

What we still don't know is whether this is due to Johannes/Aaron working at the ETMX rack (bumping some of the flaky coil cables and/or bumping the blue beams which support the stack). Adding or subtracting weight from the stack supports will give us an ETM misalignment.

  13527 | Wed Jan 10 18:53:31 2018 | gautam | Update | SUS | ETMX DC alignment

I should've put in the SUSPIT and SUSYAW channels in the previous screenshot. I re-aligned ETMX till I could see IR flashes in the arm, and was also able to lock the green beam on a TEM00 mode with reasonable transmission. As I suspected, this brought the Oplev spot back near the center of its QPD. But the answer to the question "How much did I move the ETM by?" still varies by ~1 order of magnitude, depending on whether you believe the OSEM SUSPIT and SUSYAW signals or the Oplev error signals - I don't know which, if any, of these are calibrated.

Attachment 1: ETMXdrift.png
  13526 | Wed Jan 10 16:27:02 2018 | Steve | Configuration | SEI | load cell for weight measurement

We could use similar load cells   to make the actual weight measurement on the Stacis legs. This seems practical in our case.

I have had bad experience with pneumatic Barry isolators.

Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.

Quote:

We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.

Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.

But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.

Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?

 

Attachment 1: stacis3LoadCells.png
  13525 | Wed Jan 10 15:25:43 2018 | johannes | Configuration | Computer Scripts / Programs | autoburt making backups again
Quote:

I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.

It worked. The first backup of the year is now from Wednesday, 01/10/18 at 15:19. Ten days of automatic backups are missing. Up until 2204 the year folders had been pre-emptively created so why was 2018 missing?

gautam: this is a bit suspect still - the snapshot file for c1auxex at least seemed to be too light on channels recorded. this was before any c1auxex switching. to be investigated.

  13524 | Wed Jan 10 14:17:57 2018 | johannes | Configuration | Computer Scripts / Programs | autoburt no longer making backups

I was looking into setting up autoburt for the new c1auxex2 and found that it stopped making automatic backups for all machines after the beginning of the new year. There is no 2018 folder (it was the only one missing) in /opt/rtcds/caltech/c1/burt/autoburt/snapshots and the /latest/ link in /opt/rtcds/caltech/c1/burt/autoburt/ leads to the last backup of 2017 on 12/31/17 at 23:19.

The autoburt log file shows that the backup script last ran today, 01/10/18 at 14:19, as it should have, but it doesn't show any errors and ends with "You are at the 40m".

I'm not familiar with the autoburt scheduling using cronjobs. If I'm not mistaken the relevant cronjob file is /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.cron which executes /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.pl

I've never used perl but there's the following statement when establishing the directory for the new backup:

  $yearpath = $autoburtpath."/snapshots/".$thisyear;
  # print "scanning for path $yearpath\n";
  if (!-e $yearpath) {
    die "ERROR: Year directory $yearpath does not exist\n";
  }

I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.
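A trivial guard like the following (a sketch, e.g. as a cron pre-step run before autoburt.pl; the path is the snapshot root quoted above) would avoid the missing-year-directory failure in the future:

  import os
  from datetime import datetime

  snap_root = "/opt/rtcds/caltech/c1/burt/autoburt/snapshots"
  year_dir = os.path.join(snap_root, str(datetime.now().year))
  os.makedirs(year_dir, exist_ok=True)   # no-op if the directory already exists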

  13523 | Wed Jan 10 12:42:27 2018 | gautam | Update | SUS | ETMX DC alignment

I've been observing this for a few days: ETMX's DC alignment seems to drift by so much that the previously well aligned X arm cavity is now totally misaligned.

The wall StripTool trace shows that both the X and Y arms were locked with arm transmissions around 1 till c1psl conked out - so in the attached plot, around 1400 UTC, the arm cavity was well aligned. So the sudden jump in the OSEM sensor signals is the time at which LSC control to the ETM was triggered OFF. But as seen in the attached plot, after the lockloss, the Oplev signals seem to show that the mirror alignment drifted by >50urad. This level of drift isn't consistent with the OSEM sensor signals - of course, the Oplev calibration could be off, but the tension in values is almost an order of magnitude. The misalignment seems real - the other Oplev spots have stuck around near the (0,0) points where I recentered them last night, only ETMX seems to have undergone misalignment.

Need to think about what's happening here. Note that this kind of "drift" behaviour seems to be distinct from the infamous ETMX "glitching" problem that was supposed to have been fixed in the 2016 vent.

 

Attachment 1: ETMXdrift.png
  13522 | Wed Jan 10 12:24:52 2018 | gautam | Update | CDS | slow machine bootfest

MC autolocker got stuck (judging by wall StripTool traces, it has been this way for ~7 hours) because c1psl was unresponsive so I power cycled it. Now MC is locked.

  13521 | Wed Jan 10 09:49:28 2018 | Steve | Update | PEM | the rat is back

Five mechanical traps set inside of boxes. Red-white warning tape on top of each.

Quote:

Last jump at rack Y2.

 

  13520 | Tue Jan 9 21:57:29 2018 | gautam | Update | Optimal Control | Oplev loop tuning

After some more tweaking, I feel like I may be getting closer to a cost-function definition that works.

  • The main change I made was to effectively separate the BR-bandstop filter poles/zeros and the rest of the poles and zeros.
  • So now the input vector is still a list of highest pole frequency followed by frequency separations, but I can specify much tighter frequency bounds for the roots of the part of the transfer function corresponding to the Bounce/Roll bandstops.
  • This in turn considerably reduces the swarming area - at the moment, half of the roots are for the notches, and in the (f0,Q) basis, I see no reason for the bounds on f0 to be wider than [10,30]Hz.
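To make the parameterization concrete, here is how a vector of (highest root frequency, downward separations) maps to root frequencies, with the bounce/roll bandstop roots carried as a separate, tightly bounded sub-vector (a simplified sketch, not the actual optimizer code; variable names are placeholders):

  import numpy as np

  def vector_to_freqs(x):
      """x = [f_highest, df1, df2, ...] -> descending root frequencies [Hz]."""
      return x[0] - np.concatenate(([0.0], np.cumsum(x[1:])))

  # General loop roots and BR-bandstop roots as separate sub-vectors; the latter
  # can then be bounded to land in the 10-30 Hz bounce/roll band.
  loop_freqs = vector_to_freqs(np.array([200.0, 150.0, 20.0]))   # -> [200., 50., 30.]
  br_freqs   = vector_to_freqs(np.array([24.0, 4.0, 4.0]))       # -> [24., 20., 16.]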

Some things to figure out:

  1. How the "force" the loop to be AC coupled without explicitly requiring it to be so? What should the AC coupling frequency be? From the (admittedly cursory) sensing noise measurement, it would seem that the Oplev error signal is above sensing noise even at frequencies as low as 10mHz.
  2. In general, the loops seem to do well in reducing sensing noise injection - but they seem to do this at the expense of the loop gain at ~1Hz, which is not what we want.
    • I am going to try and run the optimizer with an excess of poles relative to zeros
    • Currently, n(Poles) = n(Zeros), and this is the condition required for elliptic low pass filters, which achieve fast transition between the passband and stopband - but we could just as well use a less rapid, but more monotonic roll-off. So the gain at 50Hz might be higher, but at 200Hz, we could perhaps do better with this approach.
  3. The loop shape between 10 and 30Hz that the optimizer outputs seems a bit weird to me - it doesn't quite converge to a bandstop. Need to figure that out.
Attachment 1: loopOpt_180108_2232.pdf
  13519 | Tue Jan 9 21:38:00 2018 | gautam | Update | ALS | ALS recovery
  • Aligned IFO to IR.
    • Ran dither alignment to maximize arm transmission.
    • Centered Oplev reflections onto their respective QPDs for ITMs, ETMs and BS, as DC alignment reference. Also updated all the DC alignment save/restore files with current alignment. 
  • Undid the first 5 bullets of elog13325. The AUX laser power monitor PD remains to be re-installed and re-integrated with the DAQ.
    • I stupidly did not refer to my previous elog of the changes made to the X end table, and so spent ages trying to convince Johannes that the X end green alignment had shifted; it turned out that the green locking wasn't working because of the 50ohm terminator added to the X end NPRO PZT input. I am sorry for the hours wasted.
    • GTRY and GTRX at levels I am used to seeing (i.e. ~0.25 and ~0.5) now. I tweaked input pointing of green and also movable MM lenses at both ends to try and maximize this. 
    • Input green power into X arm after re-adjusting previously rotated HWP to ~100 degrees on the dial is ~2.2mW. Seems consistent with what I reported here.
    • Adjusted both GTR cameras on the PSL table to have the spots roughly centered on the monitors.
    • Will update shortly with measured OLTFs for both end PDH loops.
    • X end PDH seems to have UGF ~9kHz, Y end has ~4.5kHz. Phase margin ~60 degrees in both cases. Data + plotting code attached. During the measurement, GTRY ~0.22, GTRX~0.45.

Next, I will work on commissioning the BEAT MOUTH for ALS beat generation. 

Note: In the ~40 mins that I've been typing out these elogs, the IR lock has been stable for both the X and Y arms. But the X green has dropped lock twice, and the Y green has been fluctuating rather more, but has managed to stay locked. I think the low frequency Y-arm GTRY fluctuations are correlated with the arm cavity alignment drifting around. But the frequent X arm green lock dropouts - not sure what's up with that. Need to look at the IR arm control signals and ALS signals at lock drop times to see if there is some info there.

Attachment 1: GreenLockStability.png
Attachment 2: ALS_OLTFs_20180109.pdf
Attachment 3: ALS_OLTF_data_20180109.tar.bz2
  13518 | Tue Jan 9 11:52:29 2018 | gautam | Update | CDS | slow machine bootfest

Eurocrate key-turning reboots this morning for c1susaux, c1auxey and c1iscaux. These were responding to ping but not telnet-able. The usual precautions were taken to minimize the risk of ITMX getting stuck.

 

  13517 | Tue Jan 9 00:07:03 2018 | johannes | Update | DAQ | etmx slow daq chassis

All parts received and assembly near complete, small problem detected because the two DSub connectors are too close together for two cables to fit at the same time. Gautam and I will make some additional slot panels tomorrow using a waterjet cutter, so we can spread the breakout boards out and remedy this.

Fast binary channels need to be spliced into DSub connectors. Aaron is on this. All other, slow connections are already wired from before and have been tested for correct pins on the backplane DIN connectors.

 

The Acromag modules require only a positive supply voltage between +12 and +30 VDC and draw a maximum of 2.8W. This raises the question of whether we want this supply rail to be regulated, or to take the raw power from the Sorensens. Consulting with Ben Abbott: the Acromags in LIGO do not operate with regulated power. We could easily accommodate the standard regulator boards D1000217 in the chassis, which is probably a good idea if we want to place any custom electronics inside the chassis in the future, for example for whitening or active lowpass filtering.

  13516 | Mon Jan 8 20:50:01 2018 | rana | Summary | Electronics | SR560: reworking

I replaced the NPD5565 with a LSK389 (SOIC-8 with DIP adapter). There was a noise reduction of ~30%, but not nearly as much as I expected. I wonder if I have to change the DC bias current on these to get the low noise operation?

https://photos.app.goo.gl/hsMwsif7NLscsgpx1

  13515 | Sun Jan 7 20:11:54 2018 | Koji | Update | PonderSqueeze | Displacement requirements for short-term squeezing

In fact, that is my point. If we use signal recycling instead of resonant sideband extraction, the "tuning" of the SRC is opposite to the current setup. We need to change the macro length of the SRC to make 55MHz resonant with this tuning. And if we make the SRC macro length equal to the PRC macro length for this reason, we need to think again about the mode matching. Fortunately, we have the spare PRM (T=5%), which matches this curvature. This was the motivation of my question. We may also choose to keep the current SRM because of its higher T, and may want to evaluate the effect of the expected mode mismatch.

  13514 | Sun Jan 7 17:27:13 2018 | gautam | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Maybe you've accounted for this already in the Optickle simulations - but in Finesse (software), the "tuning" corresponds to the microscopic (i.e. at the nm level) position of the optics, whereas the macroscopic lengths, which determine which fields are resonant inside the various cavities, are set separately. So it is possible to change the microscopic tuning of the SRC, which need not necessarily mean that the correct resonance conditions are satisfied. If you are using the Finesse model of the 40m I gave you as a basis for your Optickle model, then the macroscopic length of the SRC in that was ~5.38m. In this configuration, the f2 (i.e. 55MHz sideband) field is resonant inside the SRC while the f1 and carrier fields are not.

If we decide to change the macroscopic length of the SRC, there may also be a small change to the requirements on the RoCs of the RC folding mirrors. Actually, come to think of it, the difference in macroscopic cavity lengths explains the slight differences in mode-matching efficiencies between the arms and the RCs that I was seeing before.

Quote:

Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).

For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.

 

  13513 | Sun Jan 7 11:40:58 2018 | Kevin | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).

For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.

  13512 | Sun Jan 7 03:22:24 2018 | Koji | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Interesting. My understanding is that this is close to signal recycling, rather than resonant sideband extraction. Is that correct?

For signal recycling, we need to change the resonant condition of the carrier in the SRC. Thus the macroscopic SRC length needs to be changed from ~5.4m to 9.5m, 6.8m, or 4.1m.
In the case of 6.8m, SRC length = PRC length. This means that we can use the PRM (T=5%) as the new SRM.
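As a sanity check on these numbers: the candidate macroscopic lengths are the current ~5.4 m shifted by odd multiples of a quarter "wavelength" of the 55 MHz sideband, i.e. the one-way length change that flips its round-trip resonance condition in the SRC:

  \frac{c}{4 f_2} = \frac{3\times 10^{8}\,\mathrm{m/s}}{4\times 55\,\mathrm{MHz}} \approx 1.36\,\mathrm{m},
  \qquad 4.1 \approx 5.4 - 1.36, \quad 6.8 \approx 5.4 + 1.36, \quad 9.5 \approx 5.4 + 3\times 1.36 \;\;\mathrm{(lengths\ in\ m)}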

Does this T(SRM)=5% change the squeezing level?
