  14625   Mon May 20 17:12:57 2019   gautam   Update   SUS   ETMY LL adjustment

Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little so that the signal level with the nominal DC bias voltage applied was closer to half the open voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to match more closely its position in a photo taken prior to the haphazard vent of the summer of 2018. For the SIDE OSEM, the theoretical "best" alignment in order to be insensitive to POS motion is with the shadow sensor beam horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.

While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels, by rotating the Coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.

In any case, this wasn't the most important objective so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT, let's see what the matrix looks like now.


Before starting this work, I had to key the unresponsive c1auxey VME crate.

  14627   Mon May 20 22:06:07 2019   gautam   Update   SUS   ITMY also kicked

For good measure:

The following optics were kicked:
ITMY
Mon May 20 22:05:01 PDT 2019
1242450319
  14628   Tue May 21 00:15:21 2019   gautam   Update   SUS   Main objectives of vent achieved (?)

Summary:

  1. ETMY now shows four suspension eigenmodes, with sensible phasing between signals for the angular DoFs. However, the eigenfrequencies have shifted by ~10% compared to 16 May 2019.
  2. PIT and YAW for ETMY as witnessed by the Oplev are now much better separated.
  3. ITMY can have its bias voltage set to zero and back to nominal alignment without it getting stuck.
  4. The sensing matrix for ETMY that I get doesn't make much sense to me. Nevertheless, the optic damps even with the "naive" input matrix.

So the primary vent objectives have been achieved, I think. 


Details:

  1. ETMY free-swinging data after adjusting LL and SIDE coils such that these were closer to half-light values
    • Attachment #1 - oplev witnessing the angular motion of the optic. PIT and YAW are well decoupled.
    • Attachment #2 - complex TF between the suspension coils. There is still considerable imbalance between coils, but at least the phasing of the signals makes sense for PIT and YAW now.
    • Attachment #3 - DoFs sensed using the naive and optimized sensing matrices.
    • Attachment #4 - sensing matrix that the free swinging data tells me to implement. If the local damping works with the naive input matrix but we get better diagonality in the actuation matrix, I think we may as well stick to the naive input matrix (a rough sketch of how such a matrix is extracted from the free-swinging data is given after this list).
  2. BR mode coupling minimization:
    • As alluded to in my previous elog, I tried to reduce the bounce mode coupling into the shadow sensor by rotating the OSEM in its holder.
    • However, I saw negligible change in the coupling, even going through a full pi radian rotation. I imagine the coupling will change smoothly so we should have seen some change in one of the ~15 positions I sampled in between, but I saw none.
    • The anomalously high coupling of the bounce mode to the shadow sensor readout is telling us something - I'm just not sure what yet.
  3. ITMY:
    • The offender was the LL OSEM, whose rotational orientation was causing the magnet to get stuck to the teflon part of the OSEM coil when the bias voltage was changed by a sufficiently large amount.
    • I rectified this (required adjustment of all 5 OSEMs to get everything back to half light again).
    • After this, I was able to zero the bias voltage to the PIT/YAW DoFs and not have the optic get stuck - huzzah 😀 
    • While I have the chance, I'm collecting the free-swinging data to see what kind of sensing matrix this optic yields.
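
As a footnote to item 1: schematically, the input matrix is extracted from the free-swinging data along the lines of the sketch below. This is only an illustration of the idea - the sample rate, eigenfrequencies, and normalization are placeholders, and the real analysis fits the peaks properly rather than picking single FFT bins.

import numpy as np

fs = 2048.0                                                          # placeholder sample rate [Hz]
eigenfreqs = {"POS": 0.95, "PIT": 0.75, "YAW": 0.80, "SIDE": 1.00}   # placeholder eigenfrequencies [Hz]

def complex_amplitude(x, f0, fs):
    """Complex amplitude of the FFT bin nearest f0 (crude single-bin estimate)."""
    spec = np.fft.rfft(x * np.hanning(len(x)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

def input_matrix(sensor_data):
    """sensor_data: dict mapping OSEM name (UL/UR/LR/LL/SD) to its free-swinging time series."""
    sensors = list(sensor_data.keys())
    M = np.zeros((len(sensors), len(eigenfreqs)))        # rows: sensors, columns: DoFs
    for j, (dof, f0) in enumerate(eigenfreqs.items()):
        amps = np.array([complex_amplitude(sensor_data[s], f0, fs) for s in sensors])
        amps = amps * np.exp(-1j * np.angle(amps[0]))    # reference the phases to the first sensor
        M[:, j] = amps.real / np.max(np.abs(amps))       # keep the relative signs/magnitudes per DoF
    # The input matrix maps sensor signals to DoFs: pseudo-inverse of the sensor response matrix.
    return np.linalg.pinv(M), sensors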

Tomorrow and later this week:

  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
  2. Confirm ETMY actuation makes sense.
    • Use the green beam for an ASS proxy implementation?
  3. High quality close out pictures of OSEMs and general chamber layout.
  4. Anything else? Any other tests we can do to convince ourselves the suspensions are well-behaved?

While we have the chance:

  1. Fix the IPANG alignment? Because the TT drift/hysteresis problem is still of unknown cause.
  2. Check that the AS beam is centered on OMs 1-6?
  3. Recover the 70% AS light that is being diverted to the OMC?

Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking. So it probably requires a hard reboot. I left the lab for tonight so I'll reboot it tomorrow, but no nds data access in the meantime... 

Attachment 1: etmy_oplevs_20190520.pdf
Attachment 2: ETMY_cplxTF.pdf
Attachment 3: ETMY_diagComp.pdf
Attachment 4: Screen_Shot_2019-05-21_at_12.37.08_AM.png
  14629   Tue May 21 21:33:27 2019   gautam   Update   SUS   ETMY HR face cleaned

[koji, gautam]

We executed this plan. Photos are here. Summary:

  1. Optic was EQ-stopped (face stops only), with the OSEMs in situ. We tried to do this as evenly as possible to avoid any magnets getting stuck on OSEMs.
  2. We used the specially procured acetone from Chub to drag wipe the HR face. This was a definite improvement; we should always get the correct grade of solvents when we attempt to clean optics.
  3. It was observed that drag-wiping did not really have the desired cleaning effect. So Koji went in with hemostat / lens tissue soaked in acetone and wiped the HR face. This improved the situation.
  4. Applied a layer of F.C. Waited for it to dry, and then peeled it off. Under the green flashlight, the optic still looks horrific - but we decided against further drag-wiping/first-contacting. If the loss is truly 50 ppm, this is totally not a show-stopper for now.
  5. Suspension cage was replaced. EQ stops were released. Bias voltages were adjusted to bring the Oplev spot back to the center of the QPD. Now a free-swinging data collection is ongoing...
The following optics were kicked:
ETMY
Tue May 21 22:58:18 PDT 2019
1242539916

So if nothing else, we got to practise this new wiping technique with OSEMs in situ successfully.

Quote:
 
  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
  14630   Wed May 22 11:53:50 2019   gautam   Update   SUS   ETMY EQ stops backed out

Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. Today morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM position in their respective holders to make the sensor voltages closer to half-light. Kicked the optic again just now, let's see if there is any change.

Remaining tasks:

  1. Check EY table leveling.
  2. Check EY actuation matrix diagonality using this technique.
  3. Check that IR resonances are seen (and all the usual pre-pumpdown alignment checks).
  4. Take close out pictures.
  5. Heavy doors on, pump down.

If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.

Update 1235: The eigenmodes are back to their positions from earlier this week. In fact, the POS and SIDE modes are now better separated! So, the OSEM/magnet and EQ-stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum.

Attachment 1: ETMY_eigenmodes.pdf
  14631   Wed May 22 22:50:13 2019   gautam   Update   VAC   Pumpdown prep

I did the following:

  1. Checked the ETMY OSEM sensing matrix and OSEM actuation matrix - more on this later, but everything seems much more reasonable than it was prior to this vent.
  2. Checked that the IMC could be locked with the low-power beam
  3. Aligned the Y-arm cavity using the green beam. Then tweaked the TT1/TT2 alignment until I saw IR flashes in TRY.
  4. Repeated #2 for the X arm, using the BS to control the beam pointing.
  5. Confirmed that the AS beam makes it out of the vacuum. It is only ~30uW in a large (~1cm dia) beam, so not the clearest spot on an IR card, but looks pretty clean, no evidence of clipping. I removed an ND filter on the AS port camera in order to better see the beam on the CRT monitor, this should be re-installed prior to ramping the input power to the IMC again.
  6. With the PRM aligned, I confirmed that I could see resonant flashes in the POP QPD.
  7. With the SRM aligned, I confirmed that I could see SRC cavity flashes on the AS camera.

I think this completes the pre-pumpdown alignment checks we usually do. The detailed plan for tomorrow is here: please have a look and lmk if I missed something. 

  14634   Thu May 23 15:30:56 2019   gautam   Update   VAC   Pumpdown underway - so far so good!

[chub, koji, gautam]

  1. We executed the pre-pumpdown tasks per the checklist - heavy doors were on by ~1030am.
  2. We were thwarted by the display of c1vac becoming unresponsive - the mouse cursor moves, but we could not interact with any screens. Connecting to c1vac by ssh with the -X option, we could interact with everything. Using top, we saw that the load average was reporting ~8 - this is pretty high! The most demanding processes were the modbus IOC and some python processes, presumably connected with the interlocks. We tried stopping the interlock systemctl process and kill -9ing the heavy processes, but to no avail. Next, we tried killing the X display process, but this also did not fix the problem. Finally, we did a soft reboot of c1vac - the machine came back up, but still no interactivity. So we moved asia, the EY laptop, to the vacuum station for this pumpdown. We will fix the situation once the vacuum is in the nominal state.
  3. The actual pumpdown commenced by first evacuating the EY and IY annular volumes with the roughing pump. There is an interlock condition that prevents V6 from being opened if the PRP gauge reports < 0.25 torr (this is to protect against oil backstreaming from the roughing pumps I believe). To get around this, we gave the roughing pumps some work by exposing the annular line to the atmospheric pressure of the EY and IY annuli. In a few minutes, both of these reported < 1 torr.
  4. Main volume pumping started around noon - we have been going down in pressure steadily at ~3 torr/min (Koji made a nice python utility that calculates the rate from the pressure channel; a rough sketch of the idea follows this list).
  5. At the time of writing, after ~3.5 hrs of pumping, we are at 25 torr. I will keep going till ~1 torr, and then valve off the main volume until tomorrow, when Chub and I will work on getting the turbo pumps exposed to the main volume. Pausing at 355pm while I go for the colloquium. Resumed later in the evening, stopping for today at 500 mtorr.
  6. In preparation for the increased load on TP2 and TP3, I spun them up to the "high RPM mode" from their nominal "Standby mode".
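
The rate calculation mentioned in step 4 is conceptually something like the following (this is not Koji's actual utility; the pyepics dependency and the channel name are assumptions):

import time
import epics

CHANNEL = "C1:Vac-P1a_pressure"   # placeholder main-volume pressure channel
INTERVAL = 60.0                   # seconds between samples

def monitor_rate():
    p_prev = epics.caget(CHANNEL)
    t_prev = time.time()
    while True:
        time.sleep(INTERVAL)
        p_now = epics.caget(CHANNEL)
        t_now = time.time()
        rate = (p_now - p_prev) / ((t_now - t_prev) / 60.0)   # torr per minute
        print(f"{time.ctime(t_now)}  P = {p_now:.2f} torr, dP/dt = {rate:+.2f} torr/min")
        p_prev, t_prev = p_now, t_now

if __name__ == "__main__":
    monitor_rate()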

Close up photos of the EY and IY chambers may be found here.


Update on the display manager of c1vac: I was able to get it working again by running sudo systemctl restart display-manager. Now I can interact with the MEDM screens on c1vac. It is a bit annoying that this machine doesn't have the users directory so I don't have access to the many convenient StripTool templates though - maybe I'll make local copies tomorrow for the pumpdown.

Attachment 1: pumpdownPres.png
  14636   Fri May 24 11:47:15 2019   gautam   Update   VAC   IFO is almost at nominal vacuum

[chub, gautam]

Overnight, the pressure of the main volume only rose by 10 mtorr, so there was no need to run the roughing pumps again. So we went straight to the turbos - hooked up the AUX drypump and set it up to back TP2. Initially, we tried having both TP2 and TP3 act as backing pumps for TP1, but the wimpy TP3 current kept exceeding the interlock threshold. So we decided to pump down with TP3 valved off, with only TP2 backing TP1. This went smoothly - we had to keep an eye on P2, to make sure it stayed below 1 torr. It took ~1 hour to go from 500 mtorr to 100 mtorr, but after that, I could almost immediately open up RV2 completely. A safe setting seems to be to have RV2 open by between 0.5 and 1 turn (out of the full range of 7 turns) until the pressure drops to ~100 mtorr. Then we can crank it open. We are, at the time of writing, at ~8e-5 torr and the pressure is coming down steadily.

I had to manually clear the IG error on the CC1 gauge and re-enable the High Voltage, so that we have a readback of the main volume pressure in that range. I made a script to do this (it enables the HV; the IG error still has to be cleared by pushing the appropriate buttons on the Hornet itself), and it lives at /opt/target/python/serial/turnHornetON.py. I guess it'll take a few days to hit 8e-6 torr, but I don't see any reason to not leave the turbos running over the weekend.
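
For the record, the script is a simple serial-port affair. A heavily hedged sketch of the kind of thing it does is below - the port, baud rate, and command string are placeholders (the gauge takes ASCII commands over its serial interface), so consult the actual script for the real syntax.

import serial

PORT = "/dev/ttyUSB0"          # placeholder serial port
CMD_IG_ON = b"#01IG1\r"        # placeholder ASCII command to enable the ion gauge HV

with serial.Serial(PORT, baudrate=19200, timeout=2) as ser:
    ser.write(CMD_IG_ON)       # send the enable command
    reply = ser.readline()     # gauge acknowledgement
    print("Gauge response:", reply)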

Remaining tasks are (i) disconnect the roughing pump line and (ii) pump down the annuli, which will be done later today. Both were done at ~2pm, now we are in the vacuum normal config. I'll turn the two small turbos to run on "Standby Mode" before I head home today. I think TP3 may be close to end-of-life - the TP3 current went up to 1A even while evacuating the small volume of the annular line (which was already at 1 torr) with the AUX drypump backing it. The interlock condition is set to trip at 1.2A, and this pump is nominally supposed to be able to back TP1 during the pumpdown of the main volume from 500 mtorr, which it wasn't able to do.

Attachment 1: pumpdown_20190524.png
  14637   Fri May 24 17:50:19 2019   gautam   Update   IOO   IFO recovery

At ~4pm, the main volume pressure (CC1) was reported to be ~5e-5 torr. So I replaced the HR mirror in the MC REFL path with the usual 10% beamsplitter, and aligned the beam onto MCREFL photodiode. I also replaced the ND filter on the AS port camera, and in front of the IPPOS QPD.

Then I turned up the power by HWP rotation - at the input to the IMC, I now measured 960 mW with the Coherent power meter, so the NPRO power has certainly decayed by ~10% from July 2018. The normal high-power IMC autolocker script was re-enabled on megatron (and the slow servo enable threshold raised from 1000 cts to 8000 cts). The IMC was readily locked; after some hand alignment, I got a maximum of 14500 cts transmission. I was then able to lock the Y-arm. The dither alignment servo did not work with the nominal settings, but by hand alignment, I was able to get TRY up to 0.6 (I didn't try too hard to optimize this in any systematic way). The X arm was also locked.

AUX drypump valved off and shutdown at ~610pm. I also switched both TP2 and TP3 to their lower rotation "standby" mode. So overall no major mishaps this time around. I am leaving the PSL shutter open over the long weekend. For in-air vs vacuum suspension spectra comparison, I kicked the ETMY optic at Fri May 24 18:26:10 PDT 2019.

  14640   Mon May 27 11:37:13 2019   gautam   Update   VAC   c1vac is unresponsive

I've been monitoring the status of the pumpdown remotely with ndscope lookbacks of C1:Vac-CC1_pressure. This morning, I saw that the channel was putting out a constant value (a signature of the EPICS server being frozen). caget did not work either. Then I tried ssh-ing into c1vac to see if there were any issues, but I was unable to. The machine isn't responding to ping either. The EPICS value has been frozen since ~1030pm PDT 26 May 2019.

I will try and head to campus later today to check on it. Isn't an email alert or something supposed to be sent out in such an event?
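
As far as I know, no such watchdog exists yet. Something along the lines of the sketch below would do it - the polling interval, SMTP setup, and addresses are all placeholders.

import subprocess, time, smtplib
from email.message import EmailMessage

CHANNEL = "C1:Vac-CC1_pressure"
CHECK_EVERY = 600          # seconds between polls
MAX_FROZEN = 6             # consecutive identical readings before alerting

def caget(channel):
    out = subprocess.check_output(["caget", "-t", channel])
    return float(out.decode())

def send_alert(text):
    msg = EmailMessage()
    msg["Subject"] = "40m vacuum channel frozen"
    msg["From"] = "controls@nodus.example"     # placeholder sender
    msg["To"] = "40m@example.org"              # placeholder recipient
    msg.set_content(text)
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

frozen = 0
last = caget(CHANNEL)
while True:
    time.sleep(CHECK_EVERY)
    now = caget(CHANNEL)
    frozen = frozen + 1 if now == last else 0
    if frozen >= MAX_FROZEN:
        send_alert(f"{CHANNEL} has read {now} for {frozen * CHECK_EVERY / 60:.0f} minutes.")
        frozen = 0
    last = now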

  14641   Tue May 28 09:51:33 2019   gautam   Update   VAC   c1vac hard-rebooted

The vacuum itself was fine - CC1 gauge reported a pressure of 1.3e-5 torr. Note to self: the C1:Vac-CC1_HORNET_PRESSURE channel, which is the analog readback of the Hornet gauge and which is hooked up to an Acromag ADC in the c1auxex chassis, is independent of the status of the c1vac machine, and so can serve as a diagnostic.

However, I was unable to interact with c1vac in any way, the monitor hooked up directly to it was showing a frozen display. So I hard-rebooted the system. It took a few minutes to come back online - but even after 10 minutes of waiting, still no display. In the process of the reboot, several valves were closed off - when the EPICS processes restart, there are momentary instances where the readback channels get an "undefined" value, which prompts the main interlock process to transition to a "SAFE" state. 

Running df -h, I saw that the /var partition was completely full. Maybe this was somehow interfering with the machine running smoothly? Two files in particular, daemon.log and daemon.log.1 were ~1GB each. The contents of these files seemed to be just the readbacks for the caget and caput commands. So I cleared both these files, and now the /var partition usage is only 26%. I also got the display back up and running on the physical monitor hooked up to the c1vac machine's VGA port. Let's see if this has improved the stability situation. The CPU load is still high (~6-7), with most of this coming from the modbus process. Why is this so high? c1susaux has more Acromag units but claims a much lower load of 0.71. Is the CPU of the c1vac machine somehow inferior?

In the meantime, I ssh-ed into c1vac and restored the "Vacuum normal" valve config. During this little escapade, the main volume pressure rose to ~6e-5 torr. It's coming back down smoothly.


Unrelated to this work: we had turned the RGA off for the vent, I powered it back on and re-initialized it this morning.

Attachment 1: Screen_Shot_2019-05-31_at_12.44.54_PM.png
  14642   Tue May 28 17:41:13 2019   gautam   Update   General   IFO status

[chub, gautam]

Today, we tried to resuscitate the c1iscaux2 channels by swapping the existing, failed VME crate with the newly freed up crate from c1susaux. In summary, the crate gets power, and the EPICS server gets started, but I am unable to switch the whitening gain on the whitening boards. I believe that this has to do with the FAIL LEDs that are on for the XVME-220 units. We were careful to preserve the location of the various cards in the VME crates during the swap. Rather than do a detailed debugging with custom RJ45 cables and terminal emulators, I think we should just focus the efforts on getting the Acromag system up and running.

Our work must have bumped a cable to the c1lsc expansion chassis in the same rack - the c1lsc FE had crashed. I rebooted it using the script - everything came back gracefully.

Attachment 1: IMG_7444.JPG
  14643   Wed May 29 18:13:25 2019   gautam   Update   ALS   Fiber beam-splitters are now PM

To maintain PM fibers all the way through to the photodiode, I had ordered some PM versions of the 50/50 fiber beamsplitters from AFW technologies. They arrived some days ago, and today I installed them in the BeatMouth. Before installation, I checked that the ends of the fibers were clean with the fiber microscope. I also did a little cleanup of the NW corner of the PSL table, where the 1um MZ setup was completely disassembled. We now have 4 non-PM fiber beamsplitters which may be useful for non-polarization-sensitive applications - they are stored in the glass-door cabinet slightly east of the IY chamber along the Y arm, together with all the other fiber-related hardware.

Anjali had changed the coupling of the beam to the slow axis for her experiment but I ordered beamsplitters which have the slow axis blocked (because that was the original config). I need to revert to this config, and then make a measurement of the ALS noise - if things look good, I'll also patch up the Y arm ALS. We made several changes to the proposed timeline for the summer but I'd like to see this ALS thing through to the end while I still have some momentum before embarking on the BHD project. More to follow later in the eve.

Quote:

Get a fiber BS that is capable of maintaining the beam polarization all the way through to the beat photodiode. I've asked AFW technologies (the company that made our existing fiber BS parts) if they supply such a device, and Andrew is looking into a similar component from Thorlabs.

  14645   Fri May 31 15:55:16 2019   gautam   Update   ALS   PSL + X beat restored

Coupling into the fast axis of the fiber:

The PM couplers I bought require that the light is coupled to the fast axis. The Thorlabs part that Andrew ordered, and which Anjali was using for the MZ experiment, was the opposite configuration, and so the input coupler K6XS mount was rotated to accommodate this polarization. The HWP had also been rotated to cut the power into the fiber. I undid these changes. Mode-matching is ~65% (2.42 mW/3.70 mW), which isn't stellar, but good enough. The PER is ~15 dB (the ratio of power in the fast axis to the slow axis is ~40), which I verified using another collimator at the output, and a PBS + two photodiodes. Again, this isn't stellar but good enough.

EX laser temperature adjustment:

Rana adjusted the temperature of the main laser to 30.61 C. According to the calibration, the EX laser temperature needed to be ~32.8 C. It was ~31.2 C. I made the change by rotating the dial on the front panel of the EX laser controller. Fine adjustment was done using the temperature slider on the ALS screen. With an offset of ~+610 counts, I found a beat at ~80 MHz.

First look at PM beamsplitters:

From my initial test, the beat amplitude was stable against my moving the fibers around. The NF1611 DC monitor reports 2.6 V DC with only the EX light, and 3.15 V DC with only the PSL light. So I should probably cut the PSL power a little to improve the contrast. Assuming the 10 kohm DC transimpedance spec can be believed, the expected signal level is 4*sqrt(260uA * 315uA)*700V/A ~ 0.8 Vpp, and I see ~0.9 Vpp, so roughly things add up (this is actually more consistent with an RF transimpedance of 800 V/A, which is maybe not unreasonable). The RF amp for routing this signal to the delay line has been borrowed for the 2um frequency noise experiment - I will reacquire it today and check the ALS noise performance.
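
As a sanity check, the arithmetic quoted above, written out:

import numpy as np

Z_dc = 10e3                  # DC transimpedance [V/A], from the NF1611 spec
I_ex  = 2.60 / Z_dc          # photocurrent with only the EX light  [A]
I_psl = 3.15 / Z_dc          # photocurrent with only the PSL light [A]

for Z_rf in (700.0, 800.0):  # candidate RF transimpedances [V/A]
    V_pp = 4 * np.sqrt(I_ex * I_psl) * Z_rf
    print(f"Z_RF = {Z_rf:.0f} V/A  ->  expected beat = {V_pp:.2f} Vpp")
# -> ~0.80 Vpp at 700 V/A and ~0.92 Vpp at 800 V/A, vs the ~0.9 Vpp observed.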

So overall, I am happy with the performance of the current iteration of the BeatMouth.

  14647   Mon Jun 3 16:46:31 2019   gautam   Update   IOO   IMC not locking

Since ~ 2 hours ago, the IMC autolocker has not been able to keep the IMC locked. I don't see any obvious trends in the wall StripTool that may point to what's going on. For the brief periods in which a TEM00 mode is locked, the PC Drive RMS level is ~5x what the nominal level is, and while the autolocker is trying to lock the IMC, the PC drive RMS level is hovering around 4V DC, which is high. The PMC Error and Control signal spectra show huge 60 Hz (and harmonics) peaks, and indeed this is visible in the time domain signals as well (on ndscope or on the oscilloscope on the PSL table), but this is not a new feature in the last two hours. Usually, this kind of problem signals that either/both the c1psl or c1iool0 slow machines need to be power-cycled, but I confirmed that both machines are online and telnet-able. Possibilities: (i) some card in the c1psl / c1ioo crates have failed or (ii) something in the MC/FSS electronics chain has failed or (iii) there is a huge amount of excess high-frequency noise from the NPRO.

I am leaving the PSL shutter closed.

Attachment 1: PCdrive_RMS.png
  14652   Tue Jun 4 00:17:15 2019   gautam   Update   BHD   Preliminary BHD calculations

​Summary:

Attachment #1 shows the RIN and phase noise requirements for the 40m BHD for measuring Ponderomotive squeezing.

Some details:

  1. The interferometer topology is not systematically optimized - I just picked values which are likely close to what we will eventually choose. Namely, P_{\mathrm{PRM}} = 8\,\mathrm{W}, \phi_{\mathrm{SRC}} = 0.275^{\circ}, \zeta_{\mathrm{homodyne}} = 88^{\circ}, \mathcal{L}_{\mathrm{rt}}^{\mathrm{arm}} = 30\,\mathrm{ppm}, and G_{\mathrm{PRC}} \approx 40. Nevertheless, I think these requirements will not change by more than 30% for changes to the interferometer config.
  2. The requirements are evaluated using the following criterion: assuming that the dominant noises are (i) coil driver at mid-frequencies and (ii) quantum noise at high frequencies, what do the RIN and phase noise on the LO have to be such that the equivalent displacement noise is a factor of 10 below? I opted for a safety factor of 10, this can be relaxed. 
  3. An unknown is how much contrast defect light we will end up having due to the mismatch between arms. I assumed a few representative values.
  4. The calculations were done analytically. This paper provides a good summary of the relations - although my RIN requirement is more stringent because of the safety factor of 10, and phase noise requirement is less stringent (despite the same safety factor) because we plan to read out at nearly the amplitude quadrature.
  5. Since we are discussing the possibility of delivering the LO field using a fiber-coupled pickoff of the laser prior to RF sidebands being added, these requirements do not benefit from passive filtering from the cavity transfer functions. Consequently, the requirements are pretty challenging I think.

Conclusions:

  1. The RIN requirement looks very challenging - we will need a shot noise limited ISS with 100 mW DC sensing light, and will likely have to relax the safety factor depending on how much contrast defect light we end up having. This actually sets some requirement on the amount of filtering we need from the OMC (next step).
  2. The phase noise requirement also looks very challenging - I need to look up what is possible with the double-pass through fiber technique.

Next steps:

  1. Evaluate the pointing stability requirement on the LO field (IFO output is filtered by the OMC).
  2. We still need to think of a control scheme for the LO phase - likely, I think we will need a suspended optic, with some kind of length actuation capability, between the fiber collimator delivering the light and the BHD setup. 
  3. Numerical validation of this analytic study. I believe Finesse is still missing some capabilities that allow us to calculate these couplings, but I'll ask the experts to be sure.
  4. Build up the requirements on the OMC cavity:
    • Backscatter requirement (related = OFI isolation requirement, relative length noise between SRM and OMC, OFI and SRM). Does the OFI also have to be suspended?
    • Filtering requirement
    • Pointing stability requirement
    • Length noise requirement 
Attachment 1: LOreqs.pdf
  14653   Tue Jun 4 10:56:31 2019   gautam   Update   IOO   IMC diagnostics

I briefly managed to lock the IMC today - it stayed locked for ~10 minutes. Attachment #1 shows spectra of a few error and control signals for today's lock, and from a stretch yesterday before the problems surfaced*. The 60 Hz lines are much bigger, and MC_F shows broadband excess noise above a few Hz. I suspect a problem somewhere in the electronics.

*I confess the comparison isn't entirely valid because I had to tweak the FSS FAST gain from its nominal value of 22 to 25 in order to get the PC drive RMS down to the ~1.5V level. At the nominal gain setting, with the laser frequency locked to the cavity length, the PC Drive RMS was ~4 V. Still, indicative of something being off in the electronics.

Attachment 1: IMCdiag.pdf
  14655   Tue Jun 4 23:41:13 2019   gautam   Update   Cameras   Steps to interact with GigE

caget/caput probably does the job.
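
i.e., something like the sketch below, where the PV name is a placeholder and should be checked against the camera's actual EPICS database:

import subprocess

EXPOSURE_PV = "C1:CAM-MC2_EXPOSURE"   # placeholder PV name - check the camera database

def set_exposure(usec):
    subprocess.check_call(["caput", EXPOSURE_PV, str(usec)])

def get_exposure():
    return float(subprocess.check_output(["caget", "-t", EXPOSURE_PV]).decode())

set_exposure(400)       # e.g. the 400 us exposure used in the quoted entry
print(get_exposure())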

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

  14658   Thu Jun 6 18:49:22 2019   gautam   Update   BHD   Preliminary BHD calculations

Summary:

I did some more calculations based on our discussions at the meeting yesterday. Posting preliminary results here for comments.

Details:

Attachment #1 - Schematic illustration for the scattering scenarios. For all three scenarios, we would like for the scattered field to be lower than unsqueezed vacuum (safety factor to be debated).

Attachment #2 - Requirements on a fraction \epsilon_{\mathrm{bs}} = 10 \, \mathrm{ppm} of the counter-propagating resonant mode of the OMC scattering back into the antisymmetric port, as a function of RIN and phase noise on this field (y-axis) and amount of field (depends on the amount of contrast defect light which can become resonant in the counter propagating mode). I don't encode any frequency dependence here.

Attachment #3 - Requirements on the direct scatter from the arm cavity resonant field (assumed to dominate any contribution from the PRC) onto the OMC DCPDs, for some assumed phase noise (y-axis) and fraction of the field that makes it onto the OMC DCPDs. This is a pretty stringent requirement. But the probability is low (it is the product of three presumably small numbers, (i) probability of the beam scattering out of the TEM00 mode, (ii) BRDF of the scattering surface, (iii) probability of scattering back towards the DCPDs), so maybe feasible? I didn't model any RIN on this field, which would be an additional noise term to contend with. The range of the y-axis was chosen because I think these are reasonable amplitudes for chamber wall / other scattering surface motion at acoustic frequencies.

Attachment 1: darkPortScatter.pdf
Attachment 2: OMCbackscatter.pdf
Attachment 3: directScatter.pdf
  14687   Sun Jun 23 08:09:53 2019   gautam   Update   IOO   NPRO diagnostics

Summary:

Over the last few days, I've been doing some (complementary) measurements to what Aaron and Koji have been looking at. The motivation was to identify if the problems we are seeing are optical (i.e. imprinted on the PSL light) or electronic. My findings:

  1. 60 Hz line noise in PMC REFL and PMC TRANS is heavily dependent on whether I connect cables between the measuring PDs and Acromag ADC or not - but even with the Acromag cable disconnected, the 60 Hz RIN is HUGE - 10 mVpp out of 670 mV DC, and lines are much dirtier if you have connections to the SLOW ADCs. Measurement was made by looking at the time-domain signals on a battery powered Tektronix oscilloscope. See Attachment #1. I believe this line noise is higher than it was. Cause is unknown to me at this point.
  2. The NPRO noise eater seems to function as advertised. The measured RIN with the noise eater enabled (our nominal operating condition) is in line with what the manual tells us it should be. See Attachment #2.
  3. There isn't strong evidence of excess frequency noise (measured with PLL) out to 100 kHz. I didn't measure the high-frequency part yet, but maybe I'm doing something wrong with the PLL setup which should be first corrected. See Attachments #3, #4.
  4. The beat note frequency between the free-running PSL and EX NPRO's is definitely slewing more than the quadrature sum of the advertised 1 MHz/min slewing per the manual.

Evidence:

Attachment #1: Time domain look at PMC Refl and Trans signals under various operating conditions. During this work, I took the chance to remove ~4 BNC T connectors that were connected on the PMC TRANS photodiode (Thorlabs). Now, there is one cable going to the Acromag ADC, and one going to the Oscilloscope used to monitor these signals. Any further T-ing can be done at the oscilloscope.

Attachment #2: RIN measurement of the NPRO light. I opted to place a Thorlabs PDA55 in the IR ALS pickoff light path. This is before the light sees the PMC. A DC block was inserted between the PDA55 and the AG4395 used to make the measurement. DC level of the PD output was 3.1 V into high-Z and I used half this value to normalize the measurement made by the 50-ohm input AG4395 into RIN units. The measurement was made with the PZT and slow temperature controls to the NPRO connected/disconnected, but I saw no significant difference. 

Attachment #3: Frequency noise measurement via PLL. This shows the loop transfer function for the PLL. Some details of the setup:

  • The beat note for locking the PLL was made between the PSL NPRO and the EX NPRO (output of the IR ALS BeatMouth). ~4dBm beatnote.
  • Local oscillator was sourced by a Marconi, f_carrier=33 MHz, RF level = +10dBm.
  • Level 7 Mixer and LB1005 controller from the mode-spectroscopy PLL setup.
  • PLL control signal routed to EX NPRO PZT via Heliax cable running along south arm. 
  • Why EX and not PSL or Marconi FM? Latter has limited range, ~1/10th of that offered by NPRO PZT. PSL PZT has a 2.9 Hz corner freq Pomona box. I could disconnect this for the purpose of PLL locking, but I thought it may be interesting to see if there’s any hints of the problem being electrical, by looking at PLL spectra with / without Pomona box. The expected delay due to cabling is only 400 ns, so not really a limiting factor for the PLL bandwidth.
  • LB 1005 settings:
    • PI corner = 3 kHz.
    • G = 2.30 (I could not increase this further - with the PSL+Lightwave NPRO PLL, we could achieve a UGF of ~60 kHz, but in this setup, I can't do much better than ~7kHz before the loop starts oscillating, not sure if the fact that the PZT actuation coefficient for the Innolight is ~5x lower than for the Lightwave is enough to explain this?).
    • LFGL = 90 dB.
  • Mixer output had a maximum value of 800 mVpp => PLL discriminant is 400 mV/rad.
  • The "eye fit" is just the transfer function of two poles at DC (one for frequency to phase conversion in the PLL and one for the LB1005 integrator), and a zero at 3kHz (PI corner). I scaled the gain till the "fit" and measurement lined up, and then used this model to undo the loop suppression of the error signal to extract the frequency noise without worrying about the frequency vector of the measurement being limited.
  • Once again, slow temperature control and PZT controls to the PSL NPRO were disconnected so this measurement was made with two free-running NPROs.

Attachment #4: Frequency noise measurement via PLL. This shows the frequency noise. I've overlaid the expected frequency noise between 2 free-running NPROs, model used is in the text box in the plot. There isn't strong evidence of excess high frequency noise in this measurement. The fact that the "LB 1005 input terminated" trace is below all the others supports the hypothesis that I'm measuring real frequency noise. The bump around a few kHz could indicate some gain peaking?

However, I'm unable to find good agreement between the measured frequency noise using the error point and the control point. For the former, I used the PLL discriminant mentioned above of 400 mV/rad, and undid the loop suppression, and for the latter I used a PZT discriminant of 1.7 MHz/V. However, there is still a constant scale difference between these two traces. So I'm doing something wrong?
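
For concreteness, the conversion I am applying is roughly the sketch below, using the numbers quoted above. The eye-fit loop model is only approximate and the real comparison should use the measured OLTF - an error in one of these steps could well be the source of the constant scale difference.

import numpy as np

K_phi = 0.400      # PLL discriminant [V/rad], from the mixer output measurement above
K_pzt = 1.7e6      # EX NPRO PZT actuation coefficient [Hz/V]
f_pi  = 3e3        # LB1005 PI corner [Hz]
f_ugf = 7e3        # approximate PLL UGF [Hz], used to normalize the eye-fit model

def G(f):
    """Eye-fit open-loop TF: two poles at DC and a zero at the PI corner, scaled so |G(f_ugf)| = 1."""
    shape = lambda x: (1 + 1j * x / f_pi) / (1j * x) ** 2
    return shape(f) / np.abs(shape(f_ugf))

def nu_from_error(f, v_err_asd):
    """Free-running frequency noise [Hz/rtHz] from the error-point ASD [V/rtHz]."""
    return v_err_asd / K_phi * np.abs(1 + G(f)) * f      # V -> rad, undo loop suppression, rad -> Hz

def nu_from_control(f, v_ctl_asd):
    """Free-running frequency noise [Hz/rtHz] from the control-point ASD [V/rtHz]."""
    return v_ctl_asd * K_pzt * np.abs(1 + G(f)) / np.abs(G(f))   # ~ v * K_pzt below the UGF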

Next steps:

  1. More interpretation of the PLL measurement results required.
  2. Measure the PLL error signal spectrum to higher frequencies using the AG4395. 
  3. ???

I've not disturbed the PLL setup in case anyone else wants to repeat these measurements, but I have restored the normal electrical connections to the PSL PZT and temperature control.

Some other activity:

  1. Alignment into the PMC was tweaked.
  2. NPRO laser pump current was increased from 1.9 A to 2.0 A.
  3. PMC servo gain was changed from +18 to +17 to prevent the servo from oscillating.
Attachment 1: consolidatedOscopeScreenCaps.pdf
Attachment 2: RINcomp.pdf
Attachment 3: PLL_OLTF.pdf
Attachment 4: PLLnoise.pdf
  14688   Sun Jun 23 09:36:32 2019   gautam   Update   IOO   IMC is locking normally again

After typing up the elog, I decided to try locking the IMC again - now it locks again with the "OLD" gain settings. I tested it ~5 times, the autolocker brings the lock back and the PC drive levels are normal. IMC transmission and MC REFL DC light levels in lock are normal. The PC Drive RMS voltage is <1V. What's more, there is no longer any evidence of 60 Hz line harmonics any more in the PMC diagnostics channels. Compare attachment #1 to this elog.

WTF.

I undid the changes Koji made to the autolocker gains, and am trying the old settings again. Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table?

Anyway, this helps Kruthi and Milind.

Attachment 1: PMCdiag.pdf
  14690   Mon Jun 24 08:12:10 2019   gautam   Update   IOO   IMC is locking normally again

Over the last 24 hours, the IMC autolocker was able to keep the MC locked ~60% of the time. This is not particularly good, but is an improvement on ~2 weeks ago when the IMC couldn't be locked.

There are two periods, which I've indicated by vertical cursors, between which the autolocker was doing something strange - usually this kind of trend is caused by one or more of the VME crates being unresponsive and the autolocker gets stuck, but I confirmed that both c1psl and c1iool0 are telnet-able. So I conclude that the stability and reliability of the IMC loop is still not as good as it used to be.

Note also that while the PC drive RMS level mostly hovers around 1 V, there are several excursions above that level. This in itself isn't a new phenomenon. I will do some more characterization by measuring the in-loop error signal spectrum and maybe the OLTF of the IMC locking loop.

Quote:
 

Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table? Or the NPRO decided to be less noisy on Sunday.

Attachment 1: IMCdutycycle.png
  14691   Mon Jun 24 11:48:35 2019   gautam   Update   IOO   IMC in-loop error spectra and OLTF

Attachment #1 - In loop error spectra, measured as Koji posted end of last week.

  • Main difference is that the line noise seems much lower.
  • For the "dark" measurement, I set the IN1 gain of the servo board to the value of +4 dB, which is what it is in lock.
  • As Koji mentioned, this isn't an apples-to-apples comparison as the IMC loop will squish the plotted orange trace.
  • Nevertheless, the fact that the blue trace is above orange everywhere gives confidence that we are in fact measuring frequency noise.
  • For the higher frequency measurement, I used the AG4395 analyzer, which has 50 ohm input impedance. So to get the measurements with the SR785 to line up, I multiplied these by x2.
  • For the frequency axis calibration, I used the value of 13 kHz/V for the PDH discriminant, which was what I measured it to be last year (but I didn't check again today).
  • Note that the IMC locking loop OLTF has not been undone, so this isn't the actual laser frequency noise on the transmitted beam. In order to measure the latter, we'd have to use (for example) an arm cavity as an analyzer.

Attachment #2 - OLTF of the IMC loop.

  • Measurement was made using the IN1/IN2 method, injection was done at the "A EXC" front panel BNC input.
  • For comparison, I've overlaid a measurement from the 2017 IMC loop investigations. Doesn't seem to be significantly different.
  • UGF and phase margin are in the ballpark of what they were reported to be in the past.

Attachment #3 - Photo courtesy Koji showing the bank of BNC connectors used for these measurements.

Clearly, these measurements were taken in a time when the IMC was "well behaved". How to characterize what's happening when this isn't the case?

Attachment 1: IMCfreqNoise.pdf
Attachment 2: IMC_OLTF.pdf
Attachment 3: IMC_CMboard.jpg
  14696   Tue Jun 25 15:32:16 2019   gautam   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Aaron complained to me earlier that the PMC could not be locked. It turned out to be a classic sticky slider problem: I keyed the c1psl VME crate, and did the usual burtrestore trick. After that, I could immediately lock the PMC and IMC with the nominal gain settings.

I also looked at the wiring at the rack. An SLP250 was installed at the mixer IF output, in parallel with a 50 ohm terminator to ground. I removed this, because as Aaron pointed out, the PMC servo card "FP1 TEST" input is already 50 ohm, and has two cascaded LC filter stages immediately after to filter out the 2f component, so the extra low-pass filtering is superfluous (in any case, 250 MHz is much too high a cutoff to be using for cutting out the 2f component which will be at ~70 MHz).

Finally, in the last ~2 weeks, we have been running with the PMC servo gain of +17 (as opposed to +18 from before). The old gain is too high, as noted by Milind. But the MEDM field for this gain goes RED unless the gain is +18. I adjusted the value of the  C1:PSL-STAT_PMC_NOM_GAIN channel to +17, so that this is no longer the case. I also edited the PMC MEDM screen to get rid of my comment that the "SLOW ADC IS DEAD" for the PMC TRANS field, since I have now hooked up the PMC trans photodiode to our temporary Acromag box.

Attachment 1: PMCctrl.png
  14703   Wed Jun 26 20:45:03 2019   gautam   Update   Cameras   Field of view options

For the beam spot position tracking, I am wondering if there is any benefit to going for a wider field of view and getting the OSEMs in the frame? It may provide some "anchor points" against which the tracking algorithm can calibrate the spot position. But there are also several point scatterers visible in the current view, and perhaps the Gaussian beam profile moving over them and tracking the scattered intensity from these point scatterers serves the same function? I don't know of a good solution to have a "switchable" field of view configuration in the already cramped camera enclosure though.

Also, I think it may be useful to have a cron job take a picture of MC2 and archive it (once a week? or daily?) to have some long term diagnostic of how the scattered light received by the camera changes over several months.

Quote:

The GigE is focused now and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs

  14704   Wed Jun 26 21:01:26 2019   gautam   Update   LSC   POX and POY locking

Now that the IMC is remaining locked for extended periods of time, the next problem to attack is the ASS dither alignment system. For a start, I decided to try and get the POX and POY locking working again, as we have not fully recovered the interferometer alignment after the most recent pumpdown. I spent a couple of hours tweaking the alignment of the arm cavity mirrors, BS, and TTs to try and recover the maximum possible TRX and TRY - however, my best efforts only yielded TRX~0.8, TRY~0.75. Moreover, the beam axis is such that the spot is significantly off in YAW on both ETMs, as evidenced by the camera views (also true but less obvious on the ITMs). However, trying to bring the beam back to the center of the optics yields TRY and TRX values lower than the above reported maxima. The EX green beam is currently unavailable to verify the arm cavity alignment because of my hijacking the EX NPRO's PZT control for PLL investigations, but with the Y arm, I'm able to lock a TEM00 mode. Probably just needs more careful systematic alignment, but I'm not pursuing this tonight.

  14705   Thu Jun 27 14:28:12 2019   gautam   Update   LSC   POX and POY locking

After a more systematic alignment effort, I was able to get the spots better centered on the optics (judged by eye from the analog camera views). TRY ~0.7, TRX~1.15. The X-arm dither alignment system seems to work out-of-the-box with the existing settings, I was able to run it and maximize the X-arm transmission.

Other work: I also cleaned up the area around MC2 a little - the laptop on top of the vacuum chamber was removed and a rogue ethernet cable was also removed. This resulted in some misalignment of the IMC, which I corrected by manual alignment. Now the IMC is locked again with nominal transmission levels.

On the PSL table, I re-routed the RF output from the BeatMouth to the regular IR-ALS electronics chain (it was hijacked for PLL investigations). At EX, I disconnected the cable running from the LB1005 to the EX NPRO laser PZT (again was being used for PLL locking), and re-connected the output from the Green uPDH box to allow for some ALS tests to be done. I could then lock the EX green beam to the X-arm, and achieved GTRY ~ 0.35 using the ASX system. More to follow on ALS tests later today.

  14716   Mon Jul 1 20:27:44 2019   gautam   Update   ASC   ASX tuning

Summary:

To practise the dither alignment servo tuning, I decided to make the ASX system work again (mainly because it has fewer DoFs and so I thought it'd be easier to manage). Setup is: dither PZT mirrors on EX table-->demodulate green transmission at the dither frequencies-->Servo the error signals to 0 by an integrator.

Details:

  1. Started by checking the dither lines are showing up with good SNR in GTRX. They are, see Attachment #1. The dither lines are at 18.23 Hz, 27.13 Hz, 53.49 Hz and 41.68 Hz, and all of them show up with SNR ~100.
  2. Hand-aligned the beam till I got a maximum of GTRX ~ 0.35. This is lower than the usual ~0.5 I am used to - possibilities are (i) in the process of plugging in the BNC cable to the rear of the EX laser for my PLL investigations, I disturbed the alignment into the SHG crystal ever so slightly and I now have less green light going into the cavity or (ii) there is an iris on the EX table just before the green beam goes into the vacuum on which it is getting clipped. IIRC, I had centered the GTRX camera view such that the spot was well centered in the field of view, but now I see substantial mis-centering in pitch. So the cavity alignment for IR could also be sub-optimal (although I saw TRX ~1.15). Anyways, I decided to push on.
  3. Introduced a deliberate offset in a given DoF, e.g. M1 PIT. Then I looked at the demodulated error signals (filtered through an RLP0.5 filter post demodulation, so the 2f component should be attenuated by 100 dB at least), and tuned the demod phase until most of the signal appeared in the I-phase, which is what is used for servoing (see the sketch after this list). The Q-phase signals were ~x10 lower than their I-phase counterparts after the tuning.
  4. Checked the linearity of the error signal in response to misalignment of a given DoF. I judged it to be sufficiently linear for all four DoFs about the quadratic part of the GTRX variation.
  5. Tweaked the overall servo gains to have the error signals be driven to 0 in ~10 seconds.
  6. There was quite significant cross-coupling between the DoFs - why should this be? I can understand the PIT->YAW coupling because of imperfect mounting of the PZT mounted mirror in a rotational sense, but I don't really understand the M1->M2 coupling.
  7. Nevertheless, the servo appears to work - see Attachment #2.
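
The demod-phase tuning of item 3 amounts to the rotation sketched below (the actual tuning was done by hand via the demod phase fields on the MEDM screen; this just spells out the arithmetic):

import numpy as np

def optimal_demod_phase(i_sig, q_sig):
    """Rotation (degrees) that puts the averaged coherent response into the I-phase."""
    z = np.mean(i_sig) + 1j * np.mean(q_sig)     # averaged demodulated response to the offset
    return np.degrees(np.angle(z))

def rotate_iq(i_sig, q_sig, phase_deg):
    """Apply the rotation: new I carries the signal, new Q should be ~10x smaller."""
    z = (np.asarray(i_sig) + 1j * np.asarray(q_sig)) * np.exp(-1j * np.radians(phase_deg))
    return z.real, z.imag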

The adjusted demod phases and servo gains were saved to the .snap file which gets called when we run the "DITHER ON" script. Also updated the StripTool template.

I plan to repeat similar characterization on the IR dither alignment servos. I think the tuning of the ASS settings can be done independently of figuring out the mystery of why the TRY level is so low.

Attachment 1: ASX_ditherlines.pdf
Attachment 2: ASX.png
  14718   Tue Jul 2 12:30:53 2019   gautam   Update   Electronics   Acromag crate switched to Sorensens

[chub, gautam]

We crossed off another couple of bullets today.

It took me ~1 hour to realize that c1susaux requires sudo /sbin/ifup eth0 to be run in order to see the martian network - why???

Activity:

  1. Stopped the c1susaux machine:
    • Moved alignment sliders of ITMX and ITMY to 0 as a precaution.
    • Shutdown the c1susaux machine so that it doesn't become unhappy with the missing Acromags when we power the unit down.
  2. Dialled down supply voltages on the +/- 15 V and +/- 20 V DC Sorensens. Current draw became 0 A on the front panel indicators.
  3. Chub tapped some new terminal blocks for +15 V DC and +20 V DC
    • This required some additional daisy chaining, which is why we dialled down the Sorensens.
    • New cables were made using the "standard" LIGO color scheme, which isn't really applicable in this case because we are using +15 V DC (orange sheath wire) and + 20 V DC (yellow sheath wire) whereas the closest LIGO standard voltages are +18 V DC and +24 V DC.
    • A test cable, presumably meant to be used in the electronics area (orange for +15 V DC) was destroyed for this work as we opted for speed rather than making a new cable.
  4. Disconnected bench power supplies that were powering the Acromags, and connected the new cables.
    • I opted to use 5 A fuses in the terminal blocks for these supplies as the current draw is pretty significant.
  5. Dialled the Sorensens back up to the nominal voltages:
    • Attachment #1 shows the front panels of the Sorensens before and after this work.
    • The current limit on the +20 V DC Sorensen had to be raised, because the Acromag box draws ~2.3 A on its own, whereas the previous current draw was 2.8 A.
  6. Brought the c1susaux machine back online. Took me a while to get to the bottom of why I wasn't able to see c1susaux on the martian, but eventually, I figured out the whole sbin/ifup thingy. 

I don't understand the exact chain of causation, but during this work, the fast c1sus model crashed. I had to go through a few iterations of the scripted vertex machine rebooting, but things seem to be back in a normal state now, see Attachment #2. Should probably run the IFO test suite to make sure everything is a-okay, but for now, I am able to lock the IMC so I'm moving on.

The main task remaining here is to take new pictures of everything and upload to the wiki. Also, need to update the Sorensen labels to reflect their current values, some of them are outdated.

Quote:
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics.(verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: 1X5Sorensens.pdf
Attachment 2: CDS_20190702.png
  14719   Tue Jul 2 16:57:09 2019   gautam   Update   CDS   c1sus is flaky

Since the work earlier this morning, the fast c1sus model has crashed ~5 times. Tried rebooting vertex FEs using the reboot script a few times, but the problem is persisting. I'm opting to do the full hard reboot of the 3 vertex FEs to resolve this problem.

Judging by Attachment #1, the processes have been stable overnight.

Attachment 1: c1sus_timing.png
  14720   Tue Jul 2 17:34:54 2019   gautam   Update   LSC   Irides opened up on EY table

In preparation for the ASS debugging, I decided to check out the beam path on the EY table. In order to be able to do this, I had to setup the POY locking to trigger on AS110 instead of TRY (as is usual for this kind of debugging). Then I could poke an IR card in the beam path without destroying the lock.

There are two irides in the beam path immediately between the vacuum window and the harmonic separator that splits off the IR and green beams. I found that the beam was in fact getting clipped on both of them. It was also somewhat off center on a 2" beamsplitter that sends half of the light to the QPD (currently decommissioned). The purpose of these irides is (I think) to eliminate some ghost reflections of the green beam and also the Oplev beam. I opened up the irides until I felt that there wasn't any more clipping of the IR beam, but the appropriate ghost beams were still getting caught.

I also re-aligned the beam onto the TRY Thorlabs PD so as to better center it on the active area. In summary, the result of this work was that the TRY level went from ~0.6 to ~0.93. There may still be some scope for optimizing this - I tried running the Y-arm ASS scripts, and already, the loops don't run away any more. I'll do the systematic analysis of the servo anyways. But given that the IMC Trans level used to be ~15,500 counts and is now ~14,500 counts, I think a ~7% drop in TRY level is in line with what we "expect" (assuming the pre-power-degradation TRY level was 1.000).

Note that these irides were installed (I think) by Yuki, and so cannot explain the ASS anomalies of July 2018 (i.e. it does not exonerate in-vacuum clipping of the beam, as Koji had already verified that the in-air path was clean back then).

  14722   Wed Jul 3 11:47:36 2019   gautam   Update   BHD   PRC filtering

A question was raised as to how much passive filtering we benefit from if we pick off the local oscillator beam for BHD from the PRC. I did some simplified modeling of this. For the expected range of arm cavity round trip losses (20-50 ppm), I think that the 40m CARM pole will be between 75-85 Hz. The corresponding recycling gain will be 40-50, with the current PRM. I assumed 1000 ppm loss inside the PRC. The net result is that, assuming the single pole coupled cavity response, we will get ~8-9 dB of filtering at ~200 Hz of the intensity noise of the input laser field to the interferometer if we pick the LO beam off from the PRC (e.g. PR2 transmission), instead of picking it off before.
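
For reference, the 8-9 dB number is just the single-pole coupled-cavity response evaluated at 200 Hz:

import numpy as np

f = 200.0                                  # frequency of interest [Hz]
for f_cc in (75.0, 85.0):                  # estimated range of the CARM (coupled cavity) pole [Hz]
    atten_db = 20 * np.log10(np.sqrt(1 + (f / f_cc) ** 2))
    print(f"CARM pole {f_cc:.0f} Hz -> {atten_db:.1f} dB of RIN filtering at {f:.0f} Hz")
# -> ~9.1 dB and ~8.2 dB respectively.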

The next questions are: (i) can we do a sufficiently good job of achieving the required RIN stability on the LO field for BHD without relying on the passive filtering action of the PRC? and (ii) is the benefit of the PRC filtering ruined in the process of routing the LO field from wherever the pickoff happens to the BHD setup?

Attachment 1: PRCfiltering.pdf
  14736   Tue Jul 9 08:33:31 2019   gautam   Summary   SUS   ETMX PIT bias voltage changed by ~1V

After this activity, the DC bias voltage required on ETMX to restore good X arm cavity alignment has changed by ~1.3 V. Assuming a full actuation range of 30 mrad for +/- 10 V, this implies that the pitch alignment of the stack has changed by ~2 mrad? Or maybe the suspension wires shifted in the standoff grooves by a small amount? This is ~x10 larger than the typical change imparted while working on the table, e.g. during a vent.
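
For the record, the arithmetic behind the ~2 mrad number:

full_range_mrad = 30.0          # assumed full actuation range over the -10 V to +10 V bias span
delta_V = 1.3                   # observed change in the DC bias voltage [V]
print(delta_V / 20.0 * full_range_mrad, "mrad")   # -> ~1.95 mrad, i.e. ~2 mrad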

Main point is that this kind of range requirement should probably be factored in when thinking about the high-voltage coil driver actuation.

Quote:

We unstuck ETMX by shaking the stack. Most effective was to apply large periodic human sized force to the north STACIS mounts.

  14738   Tue Jul 9 18:06:05 2019 gautamUpdateLSCY-arm ASS in a workable state

The Y-arm ASS was tuned to be in a workable state. Basically, I followed Koji's recipe.

The SNR of the dither lines in the TRY and YARM control signals was checked - Attachment #1. The dither frequencies are marked with vertical dashed lines (I can't figure out how to add 4 cursors in DTT, so there are two in each row for a total of 4). A couple of days ago, when I was doing some preliminary checks, I found that the oscillator at 24.91 Hz caused a broadband increase in the TRY noise between DC and ~100 Hz. But today I saw no evidence of such behaviour. So I decided against changing the frequency.

The linearity of the demodulated error signals around the quadratic maxima of the TRY level was checked. I did not, however, investigate in detail the frequency-dependent offset Koji has reported in his elog. 

After this work, the TRY level is at 0.95. This is commensurate with the MC trans level being lower by ~7% relative to July 2018. Furthermore, the ASS servo is able to return to TRY~0.95 with a time-constant of ~5 seconds in response to misalignment of the cavity optics. After I investigate the X-arm ASS, I will reset the normalization for TRX and TRY.

Update 645pm: In the spirit of general IFO recovery, I re-centered the ITM and ETM oplev spots, and also the IR beam on the IPPOS QPD to mark the new input pointing alignment (the spot is slightly lower on the AS camera than what I remember). I then tweaked the X arm alignment to maximize its transmission, and re-set the TransMon normalization. I edited the normalization script to comment out the normalizing of the TransMon QPD gains as the QPDs are in some kind of indeterminate state now. Attachment #2 shows the current status; you can also see the normalization being reset. LSC mode disabled for the night.

Once the XARM ASS is also checked out, I propose moving back to locking the DRMI / PRFPMI configs. 

Attachment 1: ditherFreqs.pdf
ditherFreqs.pdf
Attachment 2: transRenorm.png
transRenorm.png
  14739   Tue Jul 9 18:17:48 2019 gautamUpdateGeneralProjector lightbulb blown out

Last documented replacement in Nov 2018, so ~7 months, which I believe is par for the course. I am disconnecting its power supply cable.

  14740   Tue Jul 9 18:42:15 2019 gautamUpdateALSEX green doubling oven temperature controller power was disconnected

There was no green light even though the EX NPRO was on. I checked the doubling oven temperature controller and found that its power cable was loose on the rear. I reconnected it, and now there is green light again. 

  14742   Wed Jul 10 10:04:09 2019 gautamUpdateSUSTip-Tilt moved from South clean cabinet to bake lab cleanroom

Arnaud and I moved one of the two spare TT suspensions from the south clean cabinet to the bake lab clean room. The main purpose was to inspect the contents of the packaging. According to the label, this suspension was cleaned to Class A standards, so we tried to be clean while handling it (frocks, gloves, masks etc). We found that the foil wrapping contained one suspension cage, with what looked like all the parts in a semi-assembled state. There were no OSEMs or electronics together with the suspension cage. Pictures were taken and uploaded to gPhoto. Arnaud is going to plan his tests, so in the meantime, this unit has been stored in Cabinet #6 in the bake lab cleanroom.

  14745   Wed Jul 10 16:53:22 2019 gautamUpdateSUSPRM watchdog condition modified

[koji, gautam]

We noticed that the PRM watchdog was tripping frequently. This is a period of enhanced seismic activity. The reason PRM in particular trips often is that its SIDE OSEM has a 5x higher transimpedance. We implemented a workaround by modifying the watchdog tripping condition to scale the SD channel RMS by a factor of 0.2 (relative to the UL and LL channels). We restarted the modbus process on c1susaux and tested that the new logic works. Here is the relevant snippet of code:

# Disable fast DAC if variation tests too high
# PRM Side is special, see elog 14745
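# Inputs: A = UL PD variance, C = LL PD variance, D = SD PD variance (scaled by 0.2),
# B = maximum allowed variance; LOGIC evaluates to 1 only while all three stay below B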
record(calc,"C1:SUS-PRM_LOGIC")
{
    field(DESC,"Tests whether RMS too high")
    field(SCAN,"1 second")
    field(PHAS,"1")
    field(PREC,"0")
    field(HOPR,"1")
    field(LOPR,"0")
    field(CALC,"(A<B)&(C<B)&(0.2*D<B)")
    field(INPA,"C1:SUS-PRM_ULPD_VAR  NPP  NMS")
    field(INPB,"C1:SUS-PRM_PD_MAX_VAR  NPP  NMS")
    field(INPC,"C1:SUS-PRM_LLPD_VAR  NPP  NMS")
    field(INPD,"C1:SUS-PRM_SDPD_VAR  NPP  NMS")
}

The db file has a note about this as well so that future debuggers aren't mystified by a factor of 0.2.

  14747   Thu Jul 11 12:42:35 2019 gautamSummaryCDSP2 interface board

I looked into the design of the P2 interface board. The main difficulty here is geometric - we have to somehow accommodate a sufficient number of D-sub connectors in the tight space between the two P-type connectors.

I think the least painful option is to stick with Johannes' design for the P1 connector. For the CM board, the P2 connector only uses 6 pairs of conductors for signals. So we can use a D-15 connector instead of 2 D-37 connectors. Then we can change the PCB shape such that the P1 connector can be accommodated (see Attachment #1). The other alternative would be to have 2 P-type connectors and 3 D-subs on the same PCB, but then we have to be extra careful about the relative positioning of the P-type connectors (otherwise they won't fit onto the Eurocrate). So I opted to still have two separate PCBs.

I took a first pass at the design, the files may be found here. I just auto-routed the connections - since this is just an electrical feedthrough, I don't think we need to be too concerned about the PCB trace routing? If this looks okay, we should send out the piece for fab ASAP.

I will work on putting together the EPICS server machine (SuperMicro) this afternoon.

Quote:

2. D040180 / D1500308 Common Mode Board

CM servo board itself doesn't need any modification. The CM board uses P1 and P2. So we need to manufacture a special connector for CM Board P2. (cf The adapter board for P1 T1800260). See also D1700058.

Attachment 1: IMG_7728.JPG
IMG_7728.JPG
  14753   Thu Jul 11 17:58:38 2019 gautamUpdateEquipment loanTT suspension --> Downs

Arnaud has taken 1 TT suspension from the 40m clean lab to Downs for modal testing. Estimated time of return is tomorrow evening.

  14754   Thu Jul 11 18:15:22 2019 gautamSummaryElectronicsPSL/IOO rack checkout

I looked at the PSL/IOO racks to check which boards, if any, require an additional P2 interface, so that we can try and design a generic one for the IMC/CM boards and whatever else may require it. While searching the elog, I saw that Koji and Johannes had already done this; see Koji's elog in this thread. Some remarks:

  1. D990155 seems to be unused in both PSL and IOO racks. The one in the PSL rack has some LEMO cables plugged in to the front panel, but they go nowhere. So I think that both of these are redundant (in the assessment below, only one was marked redundant).
  2. In the PSL rack, the "TTFSS Interface", "PSL PMC SERVO", and "DAQ INTERFACE" (which I think is obsolete) cards all have their P2 connectors daisy chained together, going to a cross-connect. Kruthi and I traced this to a cross-connect marked "J23-PSLRACK-CCP". In the PSL wiring diagram, of which we have a hardcopy in the control room, it looks like these channels are related to the RefCav? So I think this does not need to be interfaced to our new Acromag DAQ system. 

Conclusion: Only the IMC Servo and CM boards need their P2 connectors connected to Acromag. It would be helpful to remove the TTFSS Interface board and figure out what exactly the pin-mapping for the backplane connectors is, but I didn't do this today because there is a "High Voltage" line going to the Interface Board and I'm not actually sure of the signal chain for the FSS servo.

  14755   Fri Jul 12 07:37:48 2019 gautamUpdateSUSM4.9 EQ in Ridgecrest

All suspension watchdogs were tripped ~90 mins ago. I restored the damping. IMC is locked.

ITMX was stuck. I set it free. But notice that the UL Sensor RMS is higher than the other 4? I thought ITMY UL was problematic, but maybe ITMX has also failed, or maybe it's a coincidence? Something for IFOtest to figure out, I guess. I don't think there is a cable switch between ITMX/ITMY, as when I move the ITMX actuators, the ITMX sensors respond and I can also see the optic moving on the camera.

It took me a while to figure out what's going on because we don't have the seis BLRMS - I moved the usual projector striptool traces to the TV screen for better diagnostic ability.

Update 16 July 1515: Even though the RMS is computed from the slow readback channels, for diagnosis, I looked at the spectra of the fast PD monitoring channels (i.e. *_SENSOR_*) for ITMX - it looks like the increased UL RMS is coming from enhanced BR-mode coupling and not from any issues with the whitening switching (which seems to work as advertised, see Attachment #3, where the LL traces are meant to be representative of LL, LR, SD and UR channels).

Attachment 1: 56.png
56.png
Attachment 2: ITMXunstick.png
ITMXunstick.png
Attachment 3: ITMX_UL.pdf
ITMX_UL.pdf
  14762   Mon Jul 15 18:55:05 2019 gautamUpdateIOOMegatron hard-rebooted

[koji, gautam]

In addition to c1psl needing a reboot, megatron was un-ssh-able (although it was responding to ping). Clue was that the NPRO PZT control voltage was drifting a lot on the StripTool trace. Koji hard-rebooted the machine. Now IMC is locked, and FSS slow servo is also running.

  14763   Tue Jul 16 15:00:03 2019 gautamUpdateSUSMultiple small EQs

There were several small/medium earthquakes in Ridgecrest and one medium one in Blackhawk CA at about 2000 UTC (i.e. ~ 2 hours ago), one of which caused BS, ITMY, and ETM watchdogs to trip. I restored the damping just now.

  14765   Tue Jul 16 16:00:01 2019 gautamUpdateCDSc1iscaux Supermicro setup

I worked on preparing for the c1iscaux upgrade a bit today.

  1. Attachment #1: This shows where the 120 GB solid-state hard-drive and the 2 RAM cards (2GB each) are installed.
    • I found that it required considerable application of force to get the RAM cards into their slots.
    • Note: the 4GB RAM is broken up into two separate physical cards, each 2GB. The labeling is a bit confusing, as each card suggests it is by itself 4GB.
  2. OS install for c1iscaux:
    • I followed Jon's instructions (and added some of mine to the wiki page to hopefully make this process even less thinking-intensive).
    • To be able to use the IP address 192.168.113.83, I removed "bscteststand" from chiara's martian.hosts and rev.113.168.192.in-addr.arpa, as the last mention I could find of this machine was from 2009 (and I'm pretty sure it isn't an active unit anymore). I then restarted the bind9 process. 
    • The hostname for this machine is currently "c1iscaux3" for testing purposes; I will change it once we do the actual install.
    • There was an error in the installation instructions to allow incoming ssh connections - it is openssh-server that is required, not openssh-client. This has now been fixed on the wiki page instructions.
  3. Acromag static IP assignment:
    • Assigned 2 ADCs (XT1221), 5 DACs (XT1541) and 5 sinking BIO units (XT1111) static IP addresses (and labelled them for easy reference) using the windows laptop and the Acromag IP config utility.
    • I saw no reason not to use the 192.168.114.yyy scheme for the Acromag subnet on this machine, even though c1auxex and c1vac both have subnets with this addressing prefix. For reasons unknown to me, Jon opted to use 192.168.115.yyy for the c1susaux Acromag subnet.
  4. Followed the excellent step-by-step to install EPICS, Modbus and Asyn.
    • This took a while, ~1 hour, dominated by the building of EPICS. The other two took only a couple of minutes each.
    • The same combination suggested on Jon's wiki, of Modbus R2-11, EPICS base-7.0.1 and asyn4-33, is the most current at the time of installation.
    • Couple of typos that prevented straight up copy-pasting were fixed on the wiki.
  5. Playground for testing new database files:
    • made a directory /cvs/cds/caltech/target/c1iscaux3 and copied over the .db files from /cvs/cds/caltech/target/c1iscaux and /cvs/cds/caltech/target/c1iscaux2.
    • Johannes said he did not develop any code to automate the process of translating the old .db files into the new ones for the Acromag - I won't invest the time in developing any either as I think just manually editing the files will be faster. 
    • I think I will follow the c1susaux convention of grouping .db files by the physical electronics system where possible (e.g. REFL11 channels in one file, CM channels in one file etc), as I think this makes for easier debugging.
    • There is an old "PZT_AI.db" file which I think consists completely of obsolete channels.
  6. Next steps:
    • Wire up the crate [Chub]
    • Make the database files and modbus files for talking to the Acromags on the internal subnet [Gautam], check the .db files [Koji]
    • Wiring of whitening switching from P1 to P2 connector, Issue #1 in this elog (this will also require the installation of the DIN shrouds) [Koji]
    • Soldering of P2 interface boards [Gautam]
    • Bench testing [Gautam, Koji, Chub]
    • Installation and in-situ testing [Gautam, Koji, Chub]

All the required additional parts should be here by the end of the week - I'd like to aim for Wednesday 7/24 for the installation in 1Y3 and in-situ testing. While talking to Rana, I realized that we should also fold the c1aux slow channels into this Acromag crate - there is no need for a separate machine to handle the shutters and illuminators. But let's not worry about that for now; those channels can simply be added later.

Attachment 1: IMG_7769.JPG
IMG_7769.JPG
  14769   Wed Jul 17 21:22:41 2019 gautamUpdateCDSCM board Latch Enable subtlety

[koji, gautam]

Koji pointed out an important subtlety pertaining to the "LATCH ENABLE" signal line on the CM board. The purpose of this line is to smoothly facilitate the transition of a change in the "multi-bit-binary-outputs", a.k.a. "mbbo", that are controlled by MEDM gain sliders, to the analog electronics on the CM board. Why is this necessary? Imagine changing the gain from 7dB (=0111 in mbbo representation) to 8dB (=1000 in mbbo representation). In order to realize this change, all 4 bits have to change their state. But this almost certainly doesn't happen synchronously, because our EPICS interface isn't synchronous. So at some intermediate times, the mbbo representation could be 0100 (=4dB), or 1111 (=15dB), or many other possible values, which are all significantly different from either the initial value or the desired final state. This is clearly undesirable.

In order to protect against this kind of error, a Latched output part, 74ALS573, is used to buffer the physical digital logic levels from the switches in the analog gain stages. So in the default state, the "LATCH ENABLE" signal line is held "LOW". When a change happens in the EPICS value corresponding to a gain slider, the "LATCH ENABLE" state is quickly toggled to "HIGH", so as to enable the appropriate analog gain stages to be switched, and then again to "LOW", at which point the latch holds its output state. This logic is currently implemented by a piece of code called "latch.o", which is the compiled version of "latch.st"; it may be found in /cvs/cds/caltech/target/c1iool0, where it presumably was written for the IMC servo board, but not in /cvs/cds/caltech/target/c1iscaux, which is where the CM board database files reside. The only elog reference I can find pertaining to this particular piece of code is from Alan, and it doesn't say anything about the actual logic.

For the new c1iscaux, we need to implement this logic somehow. After discussion between Koji and me, we feel that a piece of python code is sufficient. This would continuously run in the background on the supermicro server machine. The channel hierarchy for each gain channel is as follows (I've taken the example of C1:LSC-CM_REFL1_GAIN):

  • C1:LSC-CM_REFL1_GAIN ------ this is the channel tied to an MEDM slider, and so is a "soft" channel
  • C1:LSC-CM_REFL1_SET ------- this is a "soft" channel that gets converted to an mbbo
  • C1:LSC-CM_REFL1_BITS ------ this is a channel that actually controls (multiple) physical binary outputs on the Acromag

So the logic will be that it continuously scans the EPICS channel C1:LSC-CM_REFL1_GAIN  for a change in set value. When a change is detected, it has to update the C1:LSC-CM_REFL1_SET channel. In the next EPICS refresh cycle, this would result in the mbbo bits, C1:LSC-CM_REFL1_BITS , all changing to the appropriate values. After these changes have happened, we need to toggle the LATCH ENABLE in order to allow the changes to propagate to the analog gain stage switches. Need to think about what's the best way to do this.
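As a starting point, here is a minimal sketch of that logic in python (pyepics assumed; the name of the LATCH ENABLE channel and the settling delays are placeholders, and a real implementation would of course watch all of the gain sliders, not just one):

import time
import epics   # pyepics

GAIN  = "C1:LSC-CM_REFL1_GAIN"    # soft channel tied to the MEDM slider
SET   = "C1:LSC-CM_REFL1_SET"     # soft channel that gets converted to the mbbo
LATCH = "C1:LSC-CM_LATCH_ENABLE"  # hypothetical name for the LATCH ENABLE binary output

def watch_gain(poll=0.1):
    last = epics.caget(GAIN)
    while True:
        val = epics.caget(GAIN)
        if val != last:
            epics.caput(SET, val, wait=True)   # mbbo bits update on the next EPICS cycle
            time.sleep(poll)                   # allow the Acromag BIO outputs to settle
            epics.caput(LATCH, 1, wait=True)   # latch transparent - gain stages switch
            time.sleep(poll)
            epics.caput(LATCH, 0, wait=True)   # close the latch, holding the new state
            last = val
        time.sleep(poll)

if __name__ == "__main__":
    watch_gain()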

  14771   Thu Jul 18 10:46:04 2019 gautamUpdateCDSDatabase files made

I completed the translation of the .db files for the EPICS database records from the VME notation to the Acromag/Modbus/Asyn notation. The channels are now organized into 5 database files, located in /cvs/cds/caltech/target/c1iscaux3/,  for convenience:

  1. C1_ISC-AUX_LSCPDs.db -------- This handles whitening gain, AA enable/bypass, Demodulator FE, and PD Interface Board channels for REFL11, REFL55, REFL33, REFL165, POP22, POP110, POX11, POY11, AS55 and AS110 photodiodes.
  2. C1_ISC-AUX_CM.db -------------- This handles all channels for the CM board. The mbbo addressing notation needs to be checked.
  3. C1_ISC-AUX_QPDs.db ----------- This handles all channels for the IPPOS QPD.
  4. C1_ISC-AUX_ALS.db ------------- This handles all channels for the IR ALS DFD LO and RF power monitoring.
  5. C1_ISC-AUX_SPARE.db ---------- This handles the unused channels for the various whitening, AA and PD interface boards.

For reasons unknown to me, the database files in the other Acromag system target directories (e.g. c1susaux, c1auxex) all had 755 permissions - maybe this is required for systemctl to handle the EPICS serving? Anyway, I upgraded the permissions of the above 5 files using chmod.

There are almost certainly typos / other errors, and I may have missed copying over some soft/calibrated channels, but I hope that this way of grouping by subsystem will make the debugging less painful. Once Chub connects up the power lines to the Acromags, I will run the soft tests. For this purpose, I've also made a C1_ISC-AUX.cmd file and a C1_ISC-AUX.env file in the above target directory, and also made the modbusIOC.service file in /etc/systemd/system on the supermicro.
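For the soft tests, something along these lines should be enough (a sketch, not a finished script, assuming pyepics on the server; it just greps the record names out of the new .db files and confirms each one is being served):

import re, glob
import epics   # pyepics

pattern = re.compile(r'record\s*\(\s*\w+\s*,\s*"([^"]+)"\s*\)')
for dbfile in sorted(glob.glob("/cvs/cds/caltech/target/c1iscaux3/*.db")):
    with open(dbfile) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                chan = m.group(1)
                val = epics.caget(chan, timeout=2)
                print("%-12s %s" % ("OK" if val is not None else "NO RESPONSE", chan))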

  14773   Thu Jul 18 19:58:56 2019 gautamUpdateCDSWork on Acromag chassis

Now that the .db files had been prepared, I wanted to test for errors. So I did the following:

  1. Acromags were mounted on the DIN rails. Attachment #1 shows the grouping of ADC, DAC and BIO units. They are labelled with their IP addresses.
  2. Wiring of power:
    • Chub had already prepared the backplane with the power connectors, switches and indicator LEDs.
    • So I just had to daisy chain the +24 V (RED) and GND (BLACK) terminals for all the Acromags together, which I did using 24 AWG wire (we may want to use a heavier gauge given the current draw).
  3. Ethernet cables were used to daisy chain the network connectivity between the various units.  Attachment #1 shows the current state of the chassis box.
  4. Front panel pieces were attached and labelled, see Attachment #2
    • I found it was sufficient to use the front - we may use the rear panel slots when we want to add connections for controlling the c1aux machine channels.
    • The D15 P2 connector panel for the CM board will arrive tomorrow and will be installed then.
  5. Entire setup was connected to power and ethernet, see Attachment #3
    • As usual, the current draw is significant for the collection of Acromags; I got around this problem by setting the bench supply to "Parallel" mode to enhance the current driving capacity.
    • For the ethernet connection, I used the office space port #6, which I connected at the network rack end to the eth1 port of the Supermicro.

All the Acromags are seen on the 192.168.114 subnet on c1iscaux3 - however, when I run the modbusIOC process, I see various errors in the logfile, so more debugging is required. Nevertheless, progress.

Update 2245: Turns out the errors were indeed due to a copy/paste error - I had changed the IP addresses for the ADCs from the .115 subnet c1susaux was using, but forgot to do so for the DACs and BIOs. Now, if I turn off the existing c1iscaux so that there aren't any EPICS clashes, the EPICS server initializes correctly. There are still some errors in the log file - these pertain to (i) the mbbo notation, which I have to figure out, and (ii) the fact that this version of EPICS, 7.0.1, does not support channel descriptions longer than 28 characters (we have several that exceed this threshold). I think the latter isn't a serious problem.
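To deal with (ii), a quick scan like the following (my sketch; the 28-character limit is the one reported by the IOC) will list the offending descriptions so they can be shortened if we ever care to:

import re, glob

LIMIT = 28   # characters, as reported in the IOC log
for dbfile in sorted(glob.glob("/cvs/cds/caltech/target/c1iscaux3/*.db")):
    with open(dbfile) as f:
        for n, line in enumerate(f, 1):
            m = re.search(r'field\s*\(\s*DESC\s*,\s*"([^"]*)"', line)
            if m and len(m.group(1)) > LIMIT:
                print("%s:%d: DESC is %d chars: %s" % (dbfile, n, len(m.group(1)), m.group(1)))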

Getting closer... Note that I turned off the c1iscaux VME crate to prevent any EPICS server clashes. I will turn it back on tomorrow.

Attachment 1: IMG_7771.JPG
IMG_7771.JPG
Attachment 2: IMG_7770.JPG
IMG_7770.JPG
Attachment 3: IMG_7772.JPG
IMG_7772.JPG
  14776   Fri Jul 19 12:50:10 2019 gautamUpdateSUSDC bias actuation options for SOS

Rana and I talked about some (genius) options for the large range DC bias actuation on the SOS, which do not require us to supply high-voltage to the OSEMs from outside the vacuum.

What we came up with (these are pretty vague ideas at the moment):

  1. Some kind of thermal actuation.
  2. Some kind of electrical actuation where we supply normal (+/- 10 V) from outside the vacuum, and some mechanism inside the chamber integrates (and hence also low-pass filters) the applied voltage to provide a large DC force without injecting a ton of sensor noise.
  3. Use the blue piers as a DC actuator to correct for the pitch imbalance --- Kruthi and Milind are going to do some experiments to investigate this possibility later today.

For the thermal option, I remembered that (exactly a year ago to the day!) when we were doing cavity mode scans, once the heaters were turned on, I needed to apply a significant correction to the DC bias voltage to bring the cavity alignment back to normal. The mechanism of this wasn't exactly clear to me - furthermore, we don't have a FLIRcam picture of where the heater radiation pattern was centered prior to my re-centering of it on the optic earlier this year, so we don't know what exactly we were heating. Nevertheless, I decided to look at the trend data from that night's work - see Attachment #1. This is a minute trend of some ETMY channels from 0000 UTC on 18 July 2018, for 24 hours. Some remarks:

  1. We did multiple trials that night, both with the elliptical reflector and the cylindrical setup that Annalisa and Terra implemented. I think the most relevant part of this data starts at 1500 UTC (i.e. ~8am PDT, which is around when we closed shop and went home). So that's when the heaters were turned off, and the subsequent drift of PIT/YAW is, I claim, due to whatever thermal transients were at play.
  2. Just prior to that time, we were running the heater at close to its maximum rated current - so this relaxation is indicative of the range we can get out of this method of actuation.
  3. I had wrongly claimed in my discussion with Rana this morning that the change in alignment was mostly in pitch - in fact, the data suggests the change is almost equal in the two DoFs. Oplev and OSEMs report different changes though, by almost a factor of 2....
  4. The timescale of the relaxation is ~20 minutes - what part(s) of the suspension take this timescale to heat up/cool down? Unlikely to be the wire/any metal parts because the thermal conductivity is high? 
  5. In the optimistic scenario, let's say we get 100 urad of actuation range - over 40m, this corresponds to a beam spot motion of ~8mm, which isn't a whole lot. Since the mechanism of what is causing this misalignment is unclear, we may end up with significantly less actuation range as well.
  6. I will repeat the test (i.e. drive the heater and look for drift in the suspension alignment using OSEMs/Oplev) in the afternoon - now I claim the radiation pattern is better centered on the optic, so maybe we will have a better understanding of what mechanisms are at play.

Also see this elog by Terra.

Attachment #2 shows the results from today's heating. I did 4 steps, which are obvious in the data - I=0.6A, I=0.76A, I=0.9A, and I=1.05A.


In science, one usually tries to implement some kind of interpretation, so as to translate the natural world into meaning.

Attachment 1: heaterPitch_2018.pdf
heaterPitch_2018.pdf
Attachment 2: Screenshot_from_2019-07-19_16-39-21.png
Screenshot_from_2019-07-19_16-39-21.png
  14777   Fri Jul 19 15:51:55 2019 gautamUpdateGeneralProjector lightbulb blown out

[chub, gautam]

Bulb replaced. Projector is back on.
