40m Log
ID | Date | Author | Type | Category | Subject
  14145 | Wed Aug 8 20:56:11 2018 | Koji | Update | PSL | EOM measurement preparation

Rich and I worked on the EOM measurement. After the measurement, the setup was reverted to the nominal state:

  • AUX PLL mixer was restored to ZAD-6
  • The PLL gain was restored to 3.10
  • The main PSL Marconi is connected to the frequency generator again. Using the beat note, I've confirmed that the modulations are applied on the beam.
  • The PSL HEPA was reduced from 100 to 30.
  14146 | Wed Aug 8 23:03:42 2018 | gautam | Update | CDS | c1lsc model started

As part of this slow but systematic debugging, I am turning on the c1lsc model overnight to see if the model crashes return.

  14147 | Wed Aug 8 23:06:59 2018 | gautam | Update | SUS | Another low noise bias path idea

Today while Rich Abbott was here, Koji and I had a brief discussion with him about the HV amplifier idea for the coil driver bias path. He gave us some useful tips, perhaps the most useful being a topology that he used and tested for an aLIGO ITM ESD driver, which we can adapt to our application. It uses a PA95 high voltage amplifier, which differs from the PA91 mainly in the output voltage range (up to 900V for the former, "only" 400V for the latter). He agrees with the overall design idea of

  • Having a LN opamp with the HV amp inside the feedback loop for better voltage noise at low frequencies.
  • Having a passive RC network at the output of the HV amp to filter out noise at high frequencies.

He also gave some useful suggestions like 

  • Using the front panel of the box as a heatsink for the HV amps.
  • Testing the stability of the nested opamp loop by "pinging" the output of the opamp with some pulses from a function generator and monitoring the response to this perturbation on a scope.

I am going to work on making a prototype version of this box for 5 channels that we can test with ETMX. I have been told that the coupling from side coil to longitudinal motion is of the order of 1/30, in which case maybe we only need 4 channels.
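
To put rough numbers on the design idea above, here is a minimal sketch; the series-resistance and RC values are illustrative assumptions for this entry, not design values:

import numpy as np

k_B, T = 1.38e-23, 295           # Boltzmann constant [J/K], room temperature [K]
R_series = 15e3                  # assumed coil-driver series resistance [ohm]
R_filt, C_filt = 10e3, 10e-6     # assumed passive RC output filter

# Johnson current noise of the series resistor (the floor the bias path should aim for)
i_johnson = np.sqrt(4 * k_B * T / R_series)       # [A/rtHz], ~1 pA/rtHz here
print('series-R Johnson current noise: {:.2e} A/rtHz'.format(i_johnson))

# corner frequency of the passive RC that rolls off the HV amp noise at high frequency
print('output RC filter corner: {:.2f} Hz'.format(1 / (2 * np.pi * R_filt * C_filt)))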

  14148 | Thu Aug 9 02:12:13 2018 | gautam | Update | COC | South East or West?

Summary:

For operating the SRC in the "Signal-Recycled" tuning, the SRC macroscopic length needs to be ~4.04 m (compared to the current value of ~5.399 m), assuming we don't do anything fancy like changing the modulation frequencies or not transmitting through the IMC. We're putting together a notebook with all the calculations, but today I was thinking about what the signal extraction path should be, specifically which chamber the SRM should be in. I'm just noting down the thoughts I had here while they're fresh in my head; all this has to be fleshed out, and maybe I'm making this out to be more of a problem than it actually is.

Details:

  • For the current modulation frequencies, if we want the resonance conditions such that the f2 sideband is resonant in the SRC (but not f1, i.e. the small Schnupp asymmetry regime) while the carrier is resonant in the arms (required for good sensing of the SRC length), the macroscopic length of the SRC needs to be changed to ~4.04 m.
  • Practically, this means that the folded SRC would only have one folding mirror (SR2).
  • There is a shorter SRC length of ~1.something metres which would work, but that would involve changing the relative position between ITMs and BS (currently ~2.3m) so I reject that option for now.
  • So the SR2 would be roughly where it is right now, ~20cm from the BS.
  • The question then becomes, where do we direct the reflection from the SR2? We need an optical path length of ~1.5m from SR2. So options are 
    • ITMY table (East)
    • ITMX table (South)
    • IMC table (West)
  • Moreover, after the SRM, we have to accommodate:
    • Some kind of pickoff for in-air PDs.
    • OFI.
    • OMC MMT.
    • OMC.
  • Some kind of CBA (as of now I think going to the ITMY table is the best option):
Option: ITMY (East)

  • Advantages:
    • Easy to direct the beam from the BS/PRM chamber to the ITMY table (i.e. we don't have to worry too much about avoiding other optics in the path, etc.).
    • Ease of access to the chamber, ease of working in there.
    • The ITMY table probably has the most room to work out an OFI + OMC MMT + OMC solution.
  • Disadvantages:
    • AS beam extraction to air will be more complicated; we may have to do it on the ITMY optical table.
    • Not sure if the ITMY table can accommodate all of the output optics subsystems listed above.
    • Routing the LO beam to this table would be tricky, I guess.

Option: ITMX (South)

  • Advantages:
    • Routing the LO beam for homodyne detection is probably easiest in this chamber.
    • Allows a small AoI on the folding mirror, reducing the impact of astigmatism.
  • Disadvantages:
    • Pain to work in this chamber because of the IMC tube.
    • Steering the beam from SR2 to the ITMX table possibly means threading the needle between PRM and PR3.

Option: IMC (West)

  • Advantages:
    • Probably allows the use of (almost) the entire existing OMC chamber for the output optics (OFI, OMC MMT, OMC).
  • Disadvantages:
    • The IMC table is crowded (2 SOS towers, several steering optics for the input beam, input Faraday).
    • Not sure how the performance of the seismic isolation stacks on these tables compares to the larger optical tables.
    • Painful to work in these smaller chambers.
  14149 | Thu Aug 9 12:31:13 2018 | gautam | Update | CDS | CDS status update

The model seems to have run without issues overnight. Not completely related, but the MC1 shadow sensor signals also don't show any abnormal excursions to negative values in the last 48 hours. I'm thinking about re-connecting the satellite box (but preserving the breakout setup at 1X6 for a while longer) and re-locking the IMC. I'll also start c1ass on the c1lsc frontend. I would say that the other models on c1lsc (i.e. c1oaf, c1cal, c1daf) aren't really necessary for basic IFO operation.

Quote:

As part of this slow but systematic debugging, I am turning on the c1lsc model overnight to see if the model crashes return.

  14150 | Thu Aug 9 12:40:14 2018 | gautam | Update | SUS | ETMX trip follow-up

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.
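
For reference, a minimal sketch of the kind of RMS-threshold check the watchdog performs (the threshold value below is an illustrative assumption, not the actual c1susaux setting):

import numpy as np

def watchdog_tripped(osem_counts, rms_threshold=150.0):
    # toy version of the watchdog logic described above: trip if the RMS of the
    # (mean-subtracted) shadow sensor signal exceeds a set threshold [counts]
    ac = osem_counts - np.mean(osem_counts)
    return np.sqrt(np.mean(ac**2)) > rms_threshold

# a ~50 ms, few-thousand-count UL-style glitch on 2 s of 2k data easily trips this
fs = 2048
data = 5.0 * np.random.randn(2 * fs)
data[1000:1100] += 3000.0
print(watchdog_tripped(data))   # -> True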

Attachment 1: ETMXglitch.png
ETMXglitch.png
  14151 | Thu Aug 9 22:50:13 2018 | gautam | Update | SUS | AlignSoft script modified

After this work of increasing the series resistance on ETMX, there have been numerous occasions where insufficient misalignment of ETMX has caused problems in locking the vertex cavities. Today, I modified the script (located at /opt/rtcds/caltech/c1/medm/MISC/ifoalign/AlignSoft.py) to avoid such problems. The way the misalign script works is to write an offset value to the "TO_COIL" filter bank (accessed via the "Output Filters" button on the suspension master MEDM screen - not the most intuitive place to put an offset, but okay). So I just increased the value of this offset from 250 counts to 2500 counts (for ETMX only). I checked that the script works: now, when both ETMs are misaligned, the AS55Q signal shows a clean Michelson-like sine wave as it fringes, instead of also having the arm cavity PDH fringes.
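
For concreteness, a minimal sketch of what the modified misalign step amounts to (the real logic lives in AlignSoft.py at the path above; the specific TO_COIL matrix element used here is an assumed example, not necessarily the one the script writes to):

from epics import caput

# misalignment offsets in counts; ETMX raised from 250 to 2500 per this entry
MISALIGN_OFFSET = {'ETMX': 2500, 'ETMY': 250}

def misalign(optic):
    # the offset goes into an element of the suspension's TO_COIL output filter
    # bank (the "Output Filters" screen); the _1_1 element here is hypothetical
    caput('C1:SUS-{}_TO_COIL_1_1_OFFSET'.format(optic), MISALIGN_OFFSET.get(optic, 250))

misalign('ETMX')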

Note that the svn doesn't seem to work on the newly upgraded SL7 machines: svn status gives me the following output.

svn: E155036: Please see the 'svn upgrade' command
svn: E155036: Working copy '/cvs/cds/rtcds/userapps/trunk/cds/c1/medm/MISC/ifoalign' is too old (format 10, created by Subversion 1.6)

 Is it safe to run 'svn upgrade'? Or is it time to migrate to git.ligo.org/40m/scripts?

Attachment 1: MichelsonFringing.png
MichelsonFringing.png
  14152 | Fri Aug 10 01:10:56 2018 | gautam | Update | LSC | Some vertex locking restored

For the first time after the whirlwind vent, I managed to lock the PRMI.

  • First, I did POX/POY locking, dither aligned the arms to maximize TRX and TRY.
  • Next, I misaligned the ETM and tested the Michelson locking
    • Since we've lost ~70% of power on the AS55 PD, I set the whitening gain for AS55 I and Q channels to +6dB (old value was 0dB).
    • worked alright. In this new config, the peak-to-peak Michelson fringe count is ~80 cts, while I reported ~60cts-pp a couple of months ago, so all seems good on that front.
    • But the config script in the IFOconfigure MEDM screen somehow doesn't set the AS55_Q ----> MICH_A element in the LSC input matrix anymore.
    • I edited the .snap file for this configuration to set the relevant matrix element EPICS channel to +1.0 (see the sketch after this list).
    • I also edited the overall loop gain for this configuration from +30 to +2 (for bright fringe, use -2 for dark fringe).
  • Feeling adventurous, I decided to try PRMI in the carrier resonant tuning (to be clear, PRCL on REFL11_I, MICH on AS55_Q).
    • Finding the REFL spot on the camera took a while since the PRM has been macroscopically misaligned for the mode-scanning.
    • Went out to the table and centered the REFL beam onto REFL11 and REFL55 PDs - didn't need much tweaking, which is a good sign, since we shouldn't have screwed anything up on the symmetric side by any of the vent activities.
    • Restored PRMI locking using the IFOconfigure MEDM screen - lock caught almost immediately.
    • Ran the dither alignment servos for MICH and PRCL - BS needed a bit of encouragement to make the dark spot dark, but POP has been pretty stable over ~15mins.
    • I didn't take any loop transfer functions, to do.
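
As an aside, a minimal illustration of the matrix-element fix described above (the channel names follow the usual C1:LSC conventions, but the exact names and row/column indices are assumptions, not read from the .snap file):

from epics import caput

# route AS55_Q into MICH with unity gain, and set the overall MICH loop gain
caput('C1:LSC-PD_DOF_MTRX_2_5', 1.0)   # hypothetical AS55_Q ---> MICH_A element
caput('C1:LSC-MICH_GAIN', 2.0)         # +2 for the bright fringe, -2 for the dark fringe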

I don't have the energy to make a DRMI attempt tonight - but the signs are encouraging. I'd like to use the IFO in the next few days to try and recover DRMI locking. The main concern is that the optical path length of the AS beam has changed by ~0.3 m, I estimate. So the demod phase for AS55 may need to be adjusted, but the change due to the optical path length alone should be ~10 degrees, so DRMI locking with the old settings should still work. Perhaps we also want to scan the PRC and SRC with the phase information from the Trans/Refl transfer functions.


Don't want to jinx it, but the c1lsc FE models have been stable. Tomorrow, I'd like to re-enable c1cal, since it has some useful channels for NBing. Could c1daf/c1oaf which have significant amounts of custom C code be the culprits?

Attachment 1: PRMIcarrier.png
PRMIcarrier.png
  14156 | Mon Aug 13 09:56:23 2018 | Steve | Update | SUS | ETMX trip follow-up

Here is another big one.

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

Attachment 1: ETMXglitch.png
ETMXglitch.png
Attachment 2: ETMXgltch.png
ETMXgltch.png
  14157 | Mon Aug 13 11:44:32 2018 | gautam | Update | Computer Scripts / Programs | Patch updates on nodus

Larry W said that some security issues were flagged on nodus. So I ran

sudo yum upgrade --exclude=elog-3.1.3-2.el7.x86_64

on nodus. The exclude flag is because there were some conflicts related to that particular package. Hopefully this has fixed the problem. It's been a while since the last update, which was in January of this year.

controls@nodus|~> sudo yum history
Loaded plugins: langpacks
ID     | Command line             | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
    29 | upgrade --exclude=elog-3 | 2018-08-13 11:36 | E, I, U        |  136 EE
    28 | install yum-utils        | 2018-08-13 11:31 | Update         |    1   
    27 | install nmap             | 2018-06-29 01:57 | Install        |    2   
    26 | install grace            | 2018-05-31 16:52 | Install        |   11   
    25 | install https://dl.fedor | 2018-05-31 16:51 | Install        |    1   
    24 | install perl-Digest-SHA1 | 2018-05-31 15:34 | Install        |    1   
    23 | install python-devel     | 2018-01-13 15:33 | Install        |    1   
    22 | install gcc              | 2018-01-13 15:32 | Install        |    6   
    21 | install git              | 2018-01-12 18:11 | Install        |    4   
    20 | update                   | 2018-01-12 18:01 | I, U           |   39   
    19 | install motif            | 2018-01-05 17:35 | Install        |    3   
    18 | install sendmail sendmai | 2017-12-03 05:11 | Install        |    6   
    17 | install vim              | 2017-11-21 18:12 | Install        |    3   
    16 | reinstall mod_dav_svn    | 2017-11-21 17:40 | Reinstall      |    1   
    15 | install mod_dav_svn      | 2017-11-21 17:39 | Install        |    1   
    14 | install subversion       | 2017-11-21 15:36 | Install        |    2   
    13 | -y install php           | 2017-11-20 22:15 | Install        |    4   
    12 | install links            | 2017-11-20 19:10 | Install        |    2   
    11 | install openssl098e.i686 | 2017-11-18 18:28 | Install        |    1   
    10 | install openssl-libs.i68 | 2017-11-18 18:26 | Install        |   11   
history list
  14159 | Mon Aug 13 20:21:10 2018 | aaron | Update | OMC | New DAC for the OMC

[aaron, gautam]

We finished up making the new c1omc model  (screenshot attached).

The new channels are just four DAC channels for ASC into the OMC, and one DAC channel for the OMC length:

C1:OMC-ASC_PZT1_PIT
C1:OMC-ASC_PZT1_YAW
C1:OMC-ASC_PZT2_PIT
C1:OMC-ASC_PZT2_YAW
C1:OMC-PZT
 
The model compiles and we can change the channel values, so we are all set to do this OMC scan on the software side.
Attachment 1: c1omcSCREENSHOT.png
c1omcSCREENSHOT.png
  14160 | Tue Aug 14 00:27:55 2018 | gautam | Update | LSC | Locking prep

In preparation for attempting some DRMI locking, I did the following:

  • Slow machine reboots for unresponsive c1psl, c1susaux and c1iscaux. The latter required a manual burtrestore to recover the usual LSC PD whitening settings.
  • Shuttered AUX laser (which was on Standby anyways) - we should really install a remotely controllable shutter for this on the AS table.
  • Re-aligned PMC (half turn of knob in yaw, full turn in pitch) - IMC transmission 15,000cts ---> 15,600cts.
  • Squished sat. box cables at ITMX and ETMX.

Not related to this work, but I turned the Agilent NA off since we aren't using it immediately.

  14161 | Tue Aug 14 00:50:32 2018 | gautam | Update | ASS | X arm ASS still not quite right?

While working on the single arm alignment, I noticed that today I was able to get the X arm transmission back to ~1.22, and the GTRX to 0.52. These are closer to the values I remember from before the vent. Running the dither alignment promptly degrades both the green and IR transmissions. Since the pianosa SL7 upgrade, I can't use the sensoray to capture images, but to me the spot looks a little off-center in yaw on ETMX in this configuration; I've tried to show this in the phone grab (Atm #2). Maybe indicative of clipping somewhere upstream of ITMX?

Anyways, I'm pushing onwards for now, something to check out in the daytime.

Quote:

[koji, gautam]

After I effected the series resistance change for ETMX, the X arm ASS didn't work (i.e. IR transmission would degrade if the servo was run). Today, we succeeded in recovering a functional ASS servo.

We then tried to maximize GTRX using the PZT mirrors, but were only successful in reaching a maximum of 0.41. The value I remember from before the vent was 0.5, and indeed, with the IR alignment not quite optimized before we began this work, I saw GTRX of 0.48. But the IR dither servo signals indicate that the cavity axis may have shifted (the spot position on the ITM, which is uncontrolled, seems to have drifted significantly; the pitch signal doesn't stay on the StripTool scale anymore). So we may have to double check that the transmitted beam isn't falling off the GTRX DC PD.

Attachment 1: POXPOY.png
POXPOY.png
Attachment 2: IMG_7108.JPG
IMG_7108.JPG
  14162 | Tue Aug 14 02:01:12 2018 | gautam | Update | LSC | DRMI locking - partial success

After tweaking the AS55 demod phase, SRM alignment, and triggering settings, I got a few brief DRMI locks in tonight, so I'm calling it a success (though this isn't really robust yet). The main things to do now are:

  • turn on all the boosts on the LSC loops - today I only managed to trigger the PRCL boost filters successfully without blowing up the lock.
  • measure all 3 loops, tweak gain as necessary.
  • Run some sensing lines, tune the demod phase.
  • The SRCL triggering is strange to me - the SRCL loop is currently triggered on POP22_I, but the 2f1 buildup on the symmetric side does not say anything about the linearity of the SRCL error signal? Or are we just hoping the SRM is in the correct place and engaging the servo? Anyway, this setting seems to work, but perhaps once the locking is more robust the triggering can be fixed (a toy illustration of the trigger logic is sketched after this list).
  • do a quick NB - I expect the main change to be that the AS55_Q dark noise contribution would have gone up on account of the reduced amount of light at this port.
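
Referring to the SRCL triggering point above, a toy illustration of how LSC loop triggering works (the threshold numbers are made up; the real triggering runs in the c1lsc front end):

def loop_trigger(trig_signal, engaged, on_thresh=50.0, off_thresh=10.0):
    # enable a loop (e.g. SRCL) when its trigger signal (e.g. POP22_I) exceeds
    # an ON threshold; release only below a lower OFF threshold (hysteresis)
    if not engaged and trig_signal > on_thresh:
        return True
    if engaged and trig_signal < off_thresh:
        return False
    return engaged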

I think the main IFO characterization remaining to be done to determine the status of the IFO post vent is to measure the losses of the arm cavities. IMO, we will need to certainly fix the clipping at ETMY before we attempt some serious locking.

Attachment 1: DRMI.png
DRMI.png
  14163 | Tue Aug 14 23:14:24 2018 | aaron | Update | OMC | OMC scanning/aligning script

I made a script to scan the OMC length at each setpoint for the two TTs steering into the OMC. It is currently located on nodus at /users/aaron/OMC/scripts/OMC_lockScan.py.

I haven't tested it and used some ez.write syntax that I hadn't used before, so I'll have to double check it.

My other qualm is that I start with all PZTs set at 0, and step through alternating +/- values on each PZT at the same magnitude (for example, at some value of PZT1_PIT, PZT1_YAW, PZT2_PIT, I'll scan PZT2_YAW=1, then PZT2_YAW=-1, then PZT2_YAW=2). If there's strong hysteresis in the PZTs, this might be a problem.
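
A minimal sketch of the nested scan described above (pyepics shown here instead of ez.write; the channel names are from the previous entry, while the setpoint grid and the length-scan step are placeholders). Stepping each PZT monotonically from its most negative to most positive setpoint, as below, would also sidestep the hysteresis worry:

import itertools
from epics import caput

PZT_CHANNELS = ['C1:OMC-ASC_PZT1_PIT', 'C1:OMC-ASC_PZT1_YAW',
                'C1:OMC-ASC_PZT2_PIT', 'C1:OMC-ASC_PZT2_YAW']
STEPS = range(-2, 3)   # placeholder setpoint grid for each PZT

def scan_omc_length():
    # placeholder: sweep C1:OMC-PZT and record the transmission at this setpoint
    pass

for setpoints in itertools.product(STEPS, repeat=len(PZT_CHANNELS)):
    for chan, val in zip(PZT_CHANNELS, setpoints):
        caput(chan, val)
    scan_omc_length()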

  14164 | Wed Aug 15 12:15:24 2018 | gautam | Update | COC | Macroscopic SRC length for SR tuning

Summary:

It looks like we can have a stable SRC of length 4.044 m without getting any new mirrors, so this is an option to consider in the short-term.

Details:

  • The detailed calculations are in the git repo
  • The optical configuration is:
    • A single folding mirror approximately at the current SR3 location.
    • An SRM that is ~1.5m away from the above folding mirror. Which table the SRM goes on is still an open question, per the previous elog in this thread. 
  • The SRC length is chosen to be 4.044 m, which is what the modeling tells us we need for operating in the SR tuning instead of RSE.
  • Using this macroscopic length, I found that we could use a single folding mirror in the SRC, and that the existing (convex) G&H folding mirrors, which have a curvature of -700m, happily combine with our existing SRM (concave with a curvature of 142m) to give reasonable TMS and mode-matching to the arm cavity.
  • The existing SRM transmission of 10% may not be optimal but Kevin's calculations say we should still be able to see some squeezing (~0.8 dB) with this SRM.
  • Attachment #1 - corner plot of the distribution of TMS for the vertical and horizontal modes, as well as the mode-matching (averaged between the two modes) between the SRC and arm cavity.
  • Attachment #2 - histograms of the distributions of RoCs and lengths used to generate Attachment #1. The distributions were drawn from i.i.d. Gaussian pdfs (a toy version of this sampling is sketched below).
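
As a heavily simplified illustration of the sampling behind these plots, here is a toy Monte Carlo that treats the SRC as a bare two-mirror cavity (SRM RoC 142 m, G&H folding mirror RoC -700 m, length 4.044 m) and draws i.i.d. Gaussian RoCs and lengths. The 1-sigma values are assumptions, and the real calculation in the git repo models the full folded cavity and the mode matching to the arms, so the numbers here are only indicative:

import numpy as np

c = 299792458.0
N = 10000

R_srm  = np.random.normal(142.0, 0.005 * 142.0, N)    # SRM RoC [m], assumed 0.5% spread
R_fold = np.random.normal(-700.0, 0.05 * 700.0, N)    # folding mirror RoC [m], assumed 5% spread
L_src  = np.random.normal(4.044, 0.02, N)             # SRC length [m], assumed 2 cm spread

g1, g2 = 1 - L_src / R_srm, 1 - L_src / R_fold        # two-mirror cavity g-factors
stable = (g1 * g2 > 0) & (g1 * g2 < 1)
fsr = c / (2 * L_src[stable])
tms = fsr * np.arccos(np.sqrt(g1[stable] * g2[stable])) / np.pi   # transverse mode spacing [Hz]

print('stable fraction: {:.3f}'.format(stable.mean()))
print('TMS = {:.2f} +/- {:.2f} MHz'.format(tms.mean() / 1e6, tms.std() / 1e6))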

gautam 2:45 pm: Koji pointed out that the G&H mirrors are coated for normal incidence, but looking at the measurement, it looks like the optic has T~75ppm at 45 degrees incidence, which is maybe still okay. Alternatively, we could use the -600m SR3 as the single folding mirror in the SRC, at the expense of slightly reduced mode-matching between the arm cavity and SRC.

Attachment 1: SRC_MCMC_shortTerm.pdf
SRC_MCMC_shortTerm.pdf
Attachment 2: SRC_dists_shortTerm.pdf
SRC_dists_shortTerm.pdf
  14165 | Wed Aug 15 19:18:07 2018 | gautam | Update | SUS | Another low noise bias path idea

I took another pass at this. Here is what I have now:

Attachment #1: Composite amplifier design to suppress voltage noise of PA91 at low frequencies.

Attachment #2: Transfer function from input to output.

Attachment #3: Top 5 voltage noise contributions for this topology.

Attachment #4: Current noises for this topology, comparison to current noise from fast path and slow DAC noise.

Attachment #5: LISO file for this topology.

Looks like this will do the job. I'm going to run this by Rich and get his input on whether this will work (this design has a few differences from Rich's design), and also on how to best protect from HV incidents.

Attachment 1: HV_Bias.pdf
HV_Bias.pdf
Attachment 2: HVamp_TF.pdf
HVamp_TF.pdf
Attachment 3: HVamp_noises.pdf
HVamp_noises.pdf
Attachment 4: currentNoises.pdf
currentNoises.pdf
Attachment 5: HVamp.fil.zip
  14166 | Wed Aug 15 21:27:47 2018 | gautam | Update | CDS | CDS status update

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14167 | Thu Aug 16 07:50:28 2018 | Steve | Update | VAC | pumpdown 81 at day 30

 

 

Attachment 1: pd81d30.png
pd81d30.png
  14168 | Thu Aug 16 14:48:14 2018 | Steve | Update | VAC | why do we need a root pump?

Basic Pump Throughput Concepts

What is Pump Throughput?

The manufacturer of a vacuum pump supplies a chart for each pump showing pumping speed (volume in unit time) vs pressure. The example, for a fictitious pump, shows the pumping speed is substantially constant over a large pressure range.

By multiplying the pumping speed by the pressure at which that pumping speed occurs, we get a measure called pump throughput. We can tabulate those results, as shown in the table below, or plot them as a graph of pressure vs pump throughput. As is clear from the chart, pump throughput (which might also be called mass flow) decreases proportionally with pressure, at least over the pressure range where pumping speed is constant.

 

Pumping Speed | Pressure   | Pressure x Pumping Speed
100 L/sec     | 10 torr    | 1000 torr.liter/sec
100 L/sec     | 1 torr     | 100 torr.liter/sec
100 L/sec     | 0.1 torr   | 10 torr.liter/sec
100 L/sec     | 0.01 torr  | 1 torr.liter/sec
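
The throughput arithmetic in the table is simple enough to sketch in a couple of lines (constant pumping speed assumed, as in the fictitious example):

S = 100.0                        # pumping speed [L/s]
for P in [10, 1, 0.1, 0.01]:     # pressure [torr]
    print(P, 'torr ->', S * P, 'torr.L/s')   # throughput falls proportionally with pressure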

The roughing pump speed will actually reach 0 l/s at its ultimate pressure.

Our roughing pump's pumping speed will slowly drop as the chamber pressure drops. Below 10 Torr this decrease accelerates and eventually bottoms out. This is where the root pump can help. See the NASA evaluation of dry rough pumps... What is a root pump?

We have been operating successfully with a narrow margin. The danger is that the Maglev foreline pressure peaks at 4 Torr. This puts load on the small turbos TP2 & TP3 and the large TP1.

The temperature of the TP2 & TP3 70 l/s drag turbos goes up to 38 C, and their rotation speed slows from 50k rpm to 45k rpm, because of the large volume (33,000 liters).

High temperature, low rotation speed, or prolonged overloading of the drag turbos can shut down the small turbo pumps - meaning: stop pumping and wait until they cool down.

The manual gate valve we installed helped lower the peak temperature to 32 C; it just took too long.

We have been running with 2 external fans [one on TP1 & one on TP3] for cooling, and one auxiliary dry pump to help lower the foreline pressure of TP2 & TP3.

The vacuum control upgrade should include adding a root pump to cover the range where the roughing pump speed approaches zero.

 

Atm1: Pump speed chart: TP1 turbo (red), root pump (blue), and mechanical pump (green). Note that the green curve here represents an oily rotary pump; our small dry pumps [SH-100] typically run above 100 mTorr. They are the forepumps of TP2 & TP3. Our pumpdown procedure: oily Leybold rotary pumps (with a safety orifice, 350 mTorr to atm) rough to 500 mTorr; here we switch over to TP2 & TP3 running at 50k rpm, backed by the SH-100 dry pumps plus the aux TriScroll; TP1 (the Maglev) is rotating at full speed when V1 is opened to the full volume at 500 mTorr.

History: the original design of the early 1990s had no dry scroll pumps. Oil-free dry scrolls replaced the oily forepumps of TP2 & TP3 in ~2002, at the cost of somewhat degrading the foreline pressure. We had two temperature-related Maglev failures, on 2005 Aug 8 and 2006 April 5; Osaka advised us to use an aux fan to cool TP1, which helped.

Atm2: Wanted root pump - Leybold EcoDry 65 plus.

Atm3: Typical 8 hr pumpdown from 2007 with TP2 & TP3.

Atm4: Last pumpdown, zoomed in from 400 mTorr to 1 mTorr with the throttled gate valve, took 9 hrs. The foreline pressure of TP1 peaked at 290 mTorr; TP3 temperature peaked at 32 C. This technique is workable, but 9 hrs is too long.

Atm5: The lowest pressure achieved in the 40m vacuum envelope was 5e-7 Torr, with the Maglev (~300 l/s), Cryo (1500 l/s) and 3 ion pumps of 500 l/s [in April 2002, at pumpdown 53, day 7], with annuli at ~10 mTorr.

Atm6: Osaka TG390MCAB throughput with screen, ~300 L/s with a 12 cfm backing pump.

Attachment 1: PUMPSPEED_CHAR.pdf
PUMPSPEED_CHAR.pdf
Attachment 2: Leybold_Broschuere_8Seiten_EN_ANSICHT.pdf
Leybold_Broschuere_8Seiten_EN_ANSICHT.pdf
Attachment 3: pd65.jpg.png
pd65.jpg.png
Attachment 4: pd81completed.png
pd81completed.png
Attachment 5: best_.pdf
best_.pdf
Attachment 6: Osaka390.pdf
Osaka390.pdf
  14169 | Thu Aug 16 23:06:50 2018 | gautam | Update | SUS | Another low noise bias path idea

I had a very fruitful discussion with Rich about this circuit today. He agreed with the overall architecture, but made the following suggestions (Attachment #1 shows the circuit with these suggestions incorporated):

  1. Use an Op27 instead of LT1128, as it is a friendlier part, especially in these composite amplifier topologies. I confirmed that this doesn't affect the output voltage noise at 100 Hz; we will still be limited by the Johnson noise of the 15 kohm series resistor.
  2. Take care of voltage distribution in the HV feedback path
    • I overlooked the fact that the passive filtering stage means that the DC current we can drive in the configuration I posted earlier is 150V / 25kohm = 6mA, whereas we'd like to be able to drive at least 10 mA, and probably want the ability to do 12 mA to leave some headroom.
    • At the same time, the feedback resistance shouldn't be too small such that the PA91 has to drive a significant current in the feedback path (we'd like to save that for the coil).
    • Changing the supply voltage of the PA91 from 150 V to 320 V, and changing the gain to x30 instead of x15 (by changing the feedback resistor from 14kohm to 29kohm), we can still drive 12 mA through the 25 kohms of series resistance. This will require getting new HV power supplies, as the KEPCO ones we have cannot handle these numbers.
    • The current limiting resistor is chosen to be 25ohms such that the PA91 is limited to ~26 mA. Of this, 300V / 30kohm ~ 10 mA will flow in the feedback path, which means under normal operation, 12 mA can safely flow through the coils.
    • Rich recommended using metal film resistors in the high voltage feedback path. However, these have a power rating, and also a voltage rating. By using 6x 5 kohm resistors, the max power dissipated in each resistor is 50^2 / 5000 ~ 0.5 W, so 0.6 W (or 1 W?) rated resistors should do the job (these numbers are checked in the sketch after this list). I think the S102K or S104K series will work.
  3. Add a voltage monitoring capability.
    • This is implemented via a resistive voltage divider at the output of the PA91.
    • We can use an amplifier stage with whitening if necessary, but I think simply reading off the voltage across the terminating resistor in the ladder will be sufficient since this circuit will only have DC authority.
  4. Make a Spice model instead of LISO, to simulate transient effects.
    • I've made the model, investigating transients now.
  5. High voltage precautions:
    • When doing PCB layout, ensure the HV points have more than the default clearance. Rich recommends 100 mils.
    • Use a dual-diode (Schottky) as input protection for the Op27 (not yet implemented in Spice model).
    • Use a TVS diode for the monitoring circuit (not yet implemented in Spice model).
    • Make sure resistors and capacitors that see high voltage are rated with some safety margin.
  6. Consider using the PA95 (which Rich has tested and approves of) instead of the PA91. Does anyone have any opinions on this?
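
A small sketch of the arithmetic behind item 2 above (supply, feedback and series-resistance values copied from the list; the ~0.65 V PA91 current-limit sense voltage is an assumption):

R_series   = 25e3     # coil series resistance [ohm]
R_fb_total = 30e3     # feedback divider, 6 x 5 kohm [ohm]
R_cl       = 25.0     # current-limit resistor [ohm]
V_out      = 300.0    # usable output swing with a 320 V supply [V]

I_coil  = V_out / R_series        # ~12 mA available for the coil
I_fb    = V_out / R_fb_total      # ~10 mA flowing in the feedback path
I_limit = 0.65 / R_cl             # ~26 mA total output current limit (assumed sense voltage)
P_each  = (V_out / 6)**2 / 5e3    # ~0.5 W dissipated in each 5 kohm feedback resistor

print(I_coil, I_fb, I_limit, P_each)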

If all this sounds okay, I'd like to start making the PCB layout (with 5 such channels) so we can get a couple of trial boards and try this out in a couple of weeks. Per the current threat matrix and noises calculated, coil driver noise is still projected to be the main technical noise contribution in the 40m PonderSqueeze NB (more on this in a separate elog).

Quote:

Looks like this will do the job. I'm going to run this by Rich and get his input on whether this will work (this design has a few differences from Rich's design), and also on how to best protect from HV incidents.

Attachment 1: HVamp_schem.PDF
HVamp_schem.PDF
Attachment 2: Hvamp.zip
  14171 | Mon Aug 20 15:16:39 2018 | Jon | Update | CDS | Rebooted c1lsc, slow machines

When I came in this morning no light was reaching the MC. One fast machine was dead, c1lsc, and a number of the slow machines: c1susaux, c1iool0, c1auxex, c1auxey, c1iscaux. Gautam walked me through resetting the slow machines manually and the fast machines via the reboot script. The computers are all back online and the MC is again able to lock.

  14173 | Tue Aug 21 09:16:23 2018 | Steve | Update | Wiki | AP table layout 20180821

 

 

Attachment 1: 20180821.JPG
20180821.JPG
  14176 | Wed Aug 22 08:44:09 2018 | Steve | Update | General | earthquake

A 6.2M earthquake near Bandon, OR did not trip any suspensions.

 

Attachment 1: yesterday_EQs.png
yesterday_EQs.png
  14178 | Thu Aug 23 08:24:38 2018 | Steve | Update | SUS | ETMX trip follow-up

Glitch, small amplitude, 350 counts  &  no trip.

Quote:

Here is an other big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

Attachment 1: ETMX-UL_glitch.png
ETMX-UL_glitch.png
Attachment 2: PEM_4d.png
PEM_4d.png
  14179 | Thu Aug 23 15:26:54 2018 | Jon | Update | IMC | MC/PMC trouble

I tried unsuccessfully to relock the MC this afternoon.

I came in to find it in a trouble state with a huge amount of noise on C1:PSL-FSS_PCDRIVE visible on the projector monitor. Light was reaching the MC but it was unable to lock.

  • I checked the status of the fast machines on the CDS>FE STATUS page. All up.
  • Then I checked the slow machine status. c1iscaux and c1psl were both down. I manually reset both machines. The large noise visible on C1:PSL-FSS_PCDRIVE disappeared.
  • After the reset, light was no longer reaching the MC, which I take to mean the PMC was not locked. On the PSL>PMC page, I blanked the control signal, re-enabled it, and attempted to relock by adjusting the servo gain as Gautam had shown me before. The PMC locks were unstable, with each one lasting only a second or so.
  • Next I tried restoring the burt states for c1iscaux and c1psl from a snapshot taken earlier today, before the machine reboots. That did not solve the problem either.
  14180 | Thu Aug 23 16:05:24 2018 | Koji | Update | IMC | MC/PMC trouble

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

  14181 | Thu Aug 23 16:10:13 2018 | not Koji | Update | IMC | MC/PMC trouble

Great, thanks!

Quote:

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

 

  14182 | Fri Aug 24 08:04:37 2018 | Steve | Update | General | small earthquake

 

 

Attachment 1: small_EQ.png
small_EQ.png
  14183 | Fri Aug 24 10:51:23 2018 | Steve | Update | VAC | pumpdown 81 at day 38

 

 

Attachment 1: d38.png
d38.png
  14184 | Fri Aug 24 14:58:30 2018 | Steve | Update | SUS | ETMX trips again

The second big glitch tripped the ETMX suspension. There were small earthquakes around the glitches. Its damping was recovered.

Quote:

Glitch, small amplitude, 350 counts  &  no trip.

Quote:

Here is an other big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

 

Attachment 1: glitches.png
glitches.png
  14185 | Mon Aug 27 09:14:45 2018 | Steve | Update | PEM | small earthquakes

Small earthquakes and suspensions. Which one is the most free and most sensitive? ITMX.

 

Attachment 1: small_EQs_vs_SUSs.png
small_EQs_vs_SUSs.png
  14187 | Tue Aug 28 18:39:41 2018 | Jon | Update | CDS | C1LSC, C1AUX reboots

I found c1lsc unresponsive again today. Following the procedure in elog #13935, I ran the rebootC1LSC.sh script to perform a soft reboot of c1lsc and restart the epics processes on c1lsc, c1sus, and c1ioo. It worked. I also manually restarted one unresponsive slow machine, c1aux.

After the restarts, the CDS overview page shows the first three models on c1lsc are online (image attached). The above elog references c1oaf having to be restarted manually, so I attempted to do that: I connected via ssh to c1lsc and ran the script startc1oaf. This failed as well, however.

In this state I was able to lock the MICH configuration, which is sufficient for my purposes for now, but I was not able to lock either of the arm cavities. Are some of the still-dead models necessary to lock in resonant configurations?

Attachment 1: CDS_FE_STATUS.png
CDS_FE_STATUS.png
  14188 | Wed Aug 29 09:20:27 2018 | Steve | Update | SUS | local 4.4M earthquake

All suspensions tripped. Their damping was restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

Attachment 1: 4.4_La_Verne.png
4.4_La_Verne.png
Attachment 2: 3.4_&_4.4M_EQ.png
3.4_&_4.4M_EQ.png
  14189 | Wed Aug 29 09:56:00 2018 | Steve | Update | VAC | Maglev controller needs service

The TP-1 Osaka maglev controller [model TC010M, ser. V3F04J07] needs maintenance. The alarm LED is on, indicating that we need Level 2 service.

The turbo and the controller are in good working order.

*****************************

Hi Steve,

Our maintenance level 2 service price is $...... It consists of a complete disassembly of the controller for internal cleaning of all ICB’s, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks.

  RMA 5686 has been assigned to Caltech's returning TC010M controller. Attached please find our RMA forms. Complete and return them to us via email, along with your PO, prior to shipping the controller.

Best regards,

Pedro Gutierrez

Osaka Vacuum USA, Inc.

510-770-0100 x 109

*************************************************

Our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo?

The Osaka maglev turbopumps are designed with a 100,000-hour (~10 operating years) life span, but as you know most of our end-users run their Osaka maglev turbopumps in excess of 10-15+ years continuously. The 100,000-hour design value is based upon the Al material being rotated at the given speed, but the design fudge factor has somehow elongated the practical life span.

We should have the cost of a new maglev & controller in next year's budget. I put the quote into the wiki.

  14190 | Wed Aug 29 11:46:27 2018 | Jon | Update | SUS | local 4.4M earthquake

I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.

The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.

Quote:

All suspension tripped. Their damping restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

 

  14191 | Wed Aug 29 14:51:05 2018 | Steve | Update | General | tomorrow morning

An electrician is coming to fix one of the fluorescent light fixture holders in the east arm tomorrow morning at 8am. He will be out by 9am.

The job did not get done. There was no scaffolding or ladder to reach the troubled areas.

  14192 | Tue Sep 4 10:14:11 2018 | gautam | Update | CDS | CDS status update

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

Quote:

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14193 | Wed Sep 5 10:59:23 2018 | gautam | Update | CDS | CDS status update

Rolf came by today morning. For now, we've restarted the FE machine and the expansion chassis (note that the correct order in which to do this is: turn off computer--->turn off expansion chassis--->turn on expansion chassis--->turn on computer). The debugging measures Rolf suggested are (i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Another tip from Rolf: if the c1lsc FE is responsive but the models have crashed, then doing sudo reboot by ssh-ing into c1lsc should suffice* (i.e. it shouldn't take down the models on the other vertex FEs, although if the FE is unresponsive and you hard-reboot it, this may still be a problem). I've modified the c1lsc reboot script accordingly.

* Seems like this can still lead to the other vertex FEs crashing, so I'm leaving the reboot script as is (so all vertex machines are softly rebooted when c1lsc models crash).

Quote:

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

  14194 | Thu Sep 6 14:21:26 2018 | gautam | Update | CDS | ADC replacement in c1lsc expansion chassis

Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and have returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and Yarm is locked for Jon to work on his IFOcoupling study. We will monitor the stability in the coming days.

Quote:

(i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Attachment 1: CDSoverview.png
CDSoverview.png
  14195 | Fri Sep 7 12:35:14 2018 | gautam | Update | CDS | ADC replacement in c1lsc expansion chassis

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

Attachment 1: Screenshot_from_2018-09-07_12-34-52.png
Screenshot_from_2018-09-07_12-34-52.png
  14196 | Mon Sep 10 12:44:48 2018 | Jon | Update | CDS | ADC replacement in c1lsc expansion chassis

Gautam and I restarted the models on c1lsc, c1ioo, and c1sus. The LSC system is functioning again. We found that only restarting c1lsc as Rolf had recommended did actually kill the models running on the other two machines. We simply reverted the rebootC1LSC.sh script to its previous form, since that does work. I'll keep using that as required until the ongoing investigations find the source of the problem.

Quote:

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

 

  14197 | Wed Sep 12 22:22:30 2018 | Koji | Update | Computers | SSL2.0, SSL3.0 disabled

LIGO GC notified us that nodus had SSL2.0 and SSL3.0 enabled. This has been disabled now.
The details are described on 40m wiki.

  14198 | Mon Sep 17 12:28:19 2018 | gautam | Update | IOO | PMC and IMC relocked, WFS inputs turned off

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

  14199 | Tue Sep 18 14:02:37 2018 | Steve | Update | safety | safety training

Yuki Miyazaki received 40m specific basic safety training.

 

  14200 | Tue Sep 18 17:56:01 2018 | not gautam | Update | IOO | PMC and IMC relocked, WFS inputs turned off

I restarted the LSC models in the usual way via the c1lsc reboot script. After doing this I was able to lock the YARM configuration for more noise coupling scripting.

Quote:

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

 

  14201 | Thu Sep 20 08:17:14 2018 | Steve | Update | SUS | local 3.4M earthquake

The M3.4 Colton shake did not trip any suspensions.

 

Attachment 1: local_3.4M.png
local_3.4M.png
  14202 | Thu Sep 20 11:29:04 2018 | gautam | Update | CDS | New PCIe fiber housed

[steve, yuki, gautam]

The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, that seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3)

Attachment 1: PCIeFiberSwap.png
PCIeFiberSwap.png
Attachment 2: PCIeFiberSwap_FBrebooted.png
PCIeFiberSwap_FBrebooted.png
  14203 | Thu Sep 20 16:19:04 2018 | gautam | Update | CDS | New PCIe fiber install postponed to tomorrow

[steve, gautam]

This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, while routing the fiber on the overhead cable tray we switched the ends (because the "HOST" end of the cable is close to the reel and we felt it would be easier to do the routing the other way).

Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.

Quote:

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

  14206 | Fri Sep 21 16:46:38 2018 | gautam | Update | CDS | New PCIe fiber installed and routed

[steve, koji, gautam]

We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.

Attachment 1: PCIeFiber.png
PCIeFiber.png
Attachment 2: IMG_5878.JPG
IMG_5878.JPG