ID   Date   Author   Type   Category   Subject
  14180   Thu Aug 23 16:05:24 2018   Koji   Update   IMC   MC/PMC trouble

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

  14181   Thu Aug 23 16:10:13 2018   not Koji   Update   IMC   MC/PMC trouble

Great, thanks!

Quote:

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

 

  14182   Fri Aug 24 08:04:37 2018   Steve   Update   General   small earth quake

 

 

Attachment 1: small_EQ.png
small_EQ.png
  14183   Fri Aug 24 10:51:23 2018   Steve   Update   VAC   pumpdown 81 at day 38

 

 

Attachment 1: d38.png
d38.png
  14184   Fri Aug 24 14:58:30 2018   Steve   Update   SUS   ETMX trips again

The second big glitch tripped the ETMX suspension. There were small earthquakes around the glitches. Its damping was recovered.

Quote:

Glitch, small amplitude, 350 counts  &  no trip.

Quote:

Here is an other big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

 

Attachment 1: glitches.png
glitches.png
  14185   Mon Aug 27 09:14:45 2018   Steve   Update   PEM   small earth quakes

Small earthquakes and the suspensions. Which one is the most free and most sensitive? ITMX.

 

Attachment 1: small_EQs_vs_SUSs.png
small_EQs_vs_SUSs.png
  14187   Tue Aug 28 18:39:41 2018   Jon   Update   CDS   C1LSC, C1AUX reboots

I found c1lsc unresponsive again today. Following the procedure in elog #13935, I ran the rebootC1LSC.sh script to perform a soft reboot of c1lsc and restart the epics processes on c1lsc, c1sus, and c1ioo. It worked. I also manually restarted one unresponsive slow machine, c1aux.

After the restarts, the CDS overview page shows the first three models on c1lsc are online (image attached). The above elog references c1oaf having to be restarted manually, so I attempted to do that. I connected via ssh to c1lsc and ran the script startc1oaf, but this failed as well.

In this state I was able to lock the MICH configuration, which is sufficient for my purposes for now, but I was not able to lock either of the arm cavities. Are some of the still-dead models necessary to lock in resonant configurations?

Attachment 1: CDS_FE_STATUS.png
CDS_FE_STATUS.png
  14188   Wed Aug 29 09:20:27 2018   Steve   Update   SUS   local 4.4M earth quake

All suspensions tripped. Their damping was restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

Attachment 1: 4.4_La_Verne.png
4.4_La_Verne.png
Attachment 2: 3.4_&_4.4M_EQ.png
3.4_&_4.4M_EQ.png
  14189   Wed Aug 29 09:56:00 2018   Steve   Update   VAC   Maglev controller needs service

TP-1 Osaka maglev controller [ model TCO10M, ser V3F04J07 ] needs maintenance. The alarm LED is on, indicating that we need Level 2 service.

The turbo and the controller are in good working order.

*****************************

Hi Steve,

Our maintenance level 2 service price is $...... It consists of a complete disassembly of the controller for internal cleaning of all ICB’s, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks.

  RMA 5686 has been assigned to Caltech’s returning TC010M controller. Attached please find our RMA forms. Complete and return them to us via email, along with your PO, prior to shipping the cont

Best regards,

Pedro Gutierrez

Osaka Vacuum USA, Inc.

510-770-0100 x 109

*************************************************

Our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo?

                        The Osaka maglev turbopumps are designed with a 100,000-hour (or ~10 operating years) life span, but as you know most of our end-users are running their Osaka maglev turbopumps in excess of 10+, 15+ years continuously. The 100,000-hour design value is based upon the AL material being rotated at the given speed. But the design fudge factor has somehow elongated the practical life span.

We should have the cost of a new maglev & controller in next year's budget. I put the quote into the wiki.

 

                         

 

  14190   Wed Aug 29 11:46:27 2018   Jon   Update   SUS   local 4.4M earth quake

I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.

The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.

Quote:

All suspension tripped. Their damping restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

 

  14191   Wed Aug 29 14:51:05 2018   Steve   Update   General   tomorrow morning

An electrician is coming to fix one of the fluorescent light fixture holders in the east arm tomorrow morning at 8am. He will be out by 9am.

The job did not get done. There was no scaffolding or ladder to reach troubled areas.

  14192   Tue Sep 4 10:14:11 2018   gautam   Update   CDS   CDS status update

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

Quote:

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14193   Wed Sep 5 10:59:23 2018   gautam   Update   CDS   CDS status update

Rolf came by today morning. For now, we've restarted the FE machine and the expansion chassis (note that the correct order in which to do this is: turn off computer--->turn off expansion chassis--->turn on expansion chassis--->turn on computer). The debugging measures Rolf suggested are (i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Another tip from Rolf: if the c1lsc FE is responsive but the models have crashed, then doing sudo reboot by ssh-ing into c1lsc should suffice* (i.e. it shouldn't take down the models on the other vertex FEs, although if the FE is unresponsive and you hard reboot it, this may still be a problem). I've modified the c1lsc reboot script accordingly.

* Seems like this can still lead to the other vertex FEs crashing, so I'm leaving the reboot script as is (so all vertex machines are softly rebooted when c1lsc models crash).

Quote:

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

  14194   Thu Sep 6 14:21:26 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and have returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with the red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and the Y arm is locked for Jon to work on his IFO coupling study. We will monitor the stability in the coming days.

Quote:

(i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Attachment 1: CDSoverview.png
CDSoverview.png
  14195   Fri Sep 7 12:35:14 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

Attachment 1: Screenshot_from_2018-09-07_12-34-52.png
Screenshot_from_2018-09-07_12-34-52.png
  14196   Mon Sep 10 12:44:48 2018   Jon   Update   CDS   ADC replacement in c1lsc expansion chassis

Gautam and I restarted the models on c1lsc, c1ioo, and c1sus. The LSC system is functioning again. We found that only restarting c1lsc as Rolf had recommended did actually kill the models running on the other two machines. We simply reverted the rebootC1LSC.sh script to its previous form, since that does work. I'll keep using that as required until the ongoing investigations find the source of the problem.

Quote:

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

 

  14197   Wed Sep 12 22:22:30 2018   Koji   Update   Computers   SSL2.0, SSL3.0 disabled

LIGO GC notified us that nodus had SSL2.0 and SSL3.0 enabled. This has been disabled now.
The details are described on 40m wiki.

  14198   Mon Sep 17 12:28:19 2018   gautam   Update   IOO   PMC and IMC relocked, WFS inputs turned off

The PMC and IMC were unlocked. Both were re-locked, and the alignment of both cavities was adjusted so as to maximize MC2 trans (by hand, input alignment to the PMC tweaked on the PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. The c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

  14199   Tue Sep 18 14:02:37 2018   Steve   Update   safety   safety training

Yuki Miyazaki received 40m specific basic safety training.

 

  14200   Tue Sep 18 17:56:01 2018   not gautam   Update   IOO   PMC and IMC relocked, WFS inputs turned off

I restarted the LSC models in the usual way via the c1lsc reboot script. After doing this I was able to lock the YARM configuration for more noise coupling scripting.

Quote:

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

 

  14201   Thu Sep 20 08:17:14 2018   Steve   Update   SUS   local 3.4M earth quake

The M3.4 Colton earthquake did not trip any suspensions.

 

Attachment 1: local_3.4M.png
local_3.4M.png
  14202   Thu Sep 20 11:29:04 2018   gautam   Update   CDS   New PCIe fiber housed

[steve, yuki, gautam]

The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, this seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3)

Attachment 1: PCIeFiberSwap.png
PCIeFiberSwap.png
Attachment 2: PCIeFiberSwap_FBrebooted.png
PCIeFiberSwap_FBrebooted.png
  14203   Thu Sep 20 16:19:04 2018   gautam   Update   CDS   New PCIe fiber install postponed to tomorrow

[steve, gautam]

This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, we switched the ends while routing the fiber on the overhead cable tray (because the "HOST" end of the cable is close to the reel and we felt it would be easier to do the routing the other way).

Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.

Quote:

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

  14206   Fri Sep 21 16:46:38 2018   gautam   Update   CDS   New PCIe fiber installed and routed

[steve, koji, gautam]

We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.

Attachment 1: PCIeFiber.png
PCIeFiber.png
Attachment 2: IMG_5878.JPG
IMG_5878.JPG
  14207   Fri Sep 21 16:51:43 2018   gautam   Update   VAC   c1vac1 is unresponsive

Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.

However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.

  14208   Fri Sep 21 19:50:17 2018   Koji   Update   CDS   Frequent time out

Multiple realtime processes on c1sus are suffering from frequent time outs. It eventually knocks out c1sus (process).

Obviously this has started since the fiber swap this afternoon.

gautam 10pm: there are no clues as to the origin of this problem on the c1sus frontend dmesg logs. The only clue (see Attachment #3) is that the "ADC" error bit in the CDS status word is red - but opening up the individual ADC error log MEDM screens show no errors or overflows. Not sure what to make of this. The IOP model on this machine (c1x02) reports an error in the "Timing" bit of the CDS status word, but from the previous exchange with Rolf / J Hanks, this is down to a misuse of ADC0 Ch31 which is supposed to be reserved for a DuoTone diagnostic signal, but which we use for some other signal (one of the MC suspension shadow sensors iirc). The response is also not consistent with this CDS manual - which suggests that an "ADC" error should just kill the models. There are no obvious red indicator lights in the c1sus expansion chassis either.

Attachment 1: 33.png
33.png
Attachment 2: 49.png
49.png
Attachment 3: Screenshot_from_2018-09-21_21-52-54.png
Screenshot_from_2018-09-21_21-52-54.png
  14210   Sat Sep 22 00:21:07 2018   Koji   Update   CDS   Frequent time out

[Gautam, Koji]

We had another crash of c1sus and Gautam did a full power cycle of c1sus. It was a struggle to recover all the frontends, but this solved the timing issue.

We went through full reset of c1sus, and rebooting all the other RT hosts, as well as daqd and fb1.

Attachment 1: 23.png
23.png
  14211   Sun Sep 23 17:38:48 2018   yuki   Update   ASC   Alignment of AUX Y end green beam was recovered

[ Yuki, Koji, Gautam ]

The alignment of the AUX Y end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5.

  14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps here; for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Philips screw on the DB37 connector labelled "MAG BRG", whose head was completely worn out. We had to make a cut in this screw using a saw blade, and use a flathead ("-") screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the maintained controller. 

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

Attachment 1: beforeReboot.png
beforeReboot.png
Attachment 2: afterReboot.png
afterReboot.png
Attachment 3: CC1.png
CC1.png
  14217   Wed Sep 26 10:07:16 2018   Steve   Update   VAC   why reboot c1vac1

Precondition: all c1vac1 & c1vac2 LED warning lights are green [ atm3 ], the only error message is in the gauge readings showing NO COMM, for which dataviewer will plot zero [ atm1 ], and the valves are operational.

When our vacuum gauges read " NO COMM ", our INTERLOCKS do NOT communicate either.

So the V1 gate valve and PSL output shutter can not be triggered to close if the IFO pressure goes up.

   [ only the CC1_HORNET_PRESSURE reading is working in this condition because it goes to a different computer ]

Quote:

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional ones, for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Philips screw on the DB37 connector labelled "MAG BRG", which had all its head worn out. We had to make a cut in this screw using a saw blade, and use a "-" screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one, we will replace it when re-installing the maintaiend controller. 

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

 

Attachment 1: NOcomm.png
NOcomm.png
Attachment 2: Reboot_&_sawp.png
Reboot_&_sawp.png
Attachment 3: c1vac1&2_.jpg
c1vac1&2_.jpg
  14223   Mon Oct 1 22:20:42 2018   gautam   Update   SUS   Prototyping HV Bias Circuit

Summary:

I've been plugging away at Altium, prototyping the high-voltage bias idea; this is meant to be a progress update.

Details:

I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:

  1. The top-level diagram: this is meant to show how all this fits into the coil driver electronics chain.
    • The way I'm imagining it now, this (2U) chassis will perform the summing of the fast coil driver output to the slow bias signal using some Dsub connectors (existing slow path series resistance would simply be removed). 
    • The overall output connector (DB15) will go to the breakout board which sums in the bias voltage for the OSEM PDs and then to the satellite box.
    • The obvious flaw in summing in the two paths using a piece of conducting PCB track is that if the coil itself gets disconnected (e.g. we disconnect cable at the vacuum flange), then the full HV appears at TP3 (see pg2 of schematic). This gets divided down by the ratio of the series resistance in the fast path to slow path, but there is still the possibility of damaging the fast-path electronics. I don't know of an elegant design to protect against this.
  2. Ground loops: I asked Johannes about the Acromag DACs, and apparently they are single ended. Hopefully, because the Sorensens power Acromags, and also the eurocrates, we won't have any problems with ground loops between this unit and the fast path.
  3. High-voltage precautions: I think I've taken the necessary precautions in protecting against HV damage to the components / interfaced electronics using dual-diodes and TVSs, but someone more knowledgeable should check this. Furthermore, I wonder if a Molex connector is the best way to bring in the +/- HV supply onto the board. I'd have liked to use an SHV connector but can't find a compatible board-mountable connector.
  4.  Choice of HV OpAmp: I've chosen to stick with the PA95, but I think the PA91 has the same footprint so this shouldn't be a big deal.
  5.  Power regulation: I've adapted the power regulation scheme Rich used in D1600122 - note that the HV supply voltage doesn't undergo any regulation on the board, though there are decoupling caps close to the power pins of the PA95. Since the PA95 is inside a feedback loop, the PSRR should not be an issue, but I'll confirm with LTspice model anyways just in case.
  6. Cost: 
    • ​​Each of the metal film resistors that Rich recommended costs ~$15.
    • The voltage rating on these demand that we have 6 per channel, and if this works well, we need to make this board for 4 optics.
    • The PA95 is ~$150 each, and presumably the high voltage handling resistors and capacitors won't be cheap.
    • Steve will update about his HV supply investigations (on a secure platform, NOT the elog), but it looks like even switching supplies cost north of $1200.
    • However, as I will detail in a separate elog, my modeling suggests that among the various technical noises I've modeled so far, coil driver noise is still the largest contribution which actually seems to exceed the unsqueezed shot noise of ~ 8e-19 m/rtHz for 1W input power and PRG 40 with 20ppm RT arm losses, by a smidge (~9e-19 m/rtHz, once we take into account the fast and slow path noises, and the fact that we are not exactly Johnson noise limited).

I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit, I'll try and get some input from Rich.

*Updated with the output current noise (Attachment #2) for this topology, with a series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I included the Johnson noise contribution of the 25 kohm in quadrature. For this choice, we are below 1 pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution due to (assumed flat in ASD) ripple in the HV power supply (i.e. voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
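As a cross-check of that quadrature sum, here is a minimal Python sketch (not the actual LTspice model) of the Johnson-noise current of the 25 kohm series resistor; the value used for the rest of the path is a placeholder to be read off Attachment #2, not a measured number.

    import numpy as np

    # Johnson (thermal) current noise of the slow-path series resistor: i_n = sqrt(4*kB*T/R)
    kB = 1.380649e-23   # J/K
    T  = 298.0          # K, room temperature
    R  = 25e3           # Ohm, slow-path series resistance

    i_johnson = np.sqrt(4 * kB * T / R)   # ~0.8 pA/rtHz
    i_rest    = 0.3e-12                   # A/rtHz, PLACEHOLDER for the modeled (noiseless-R) path noise

    i_total = np.sqrt(i_johnson**2 + i_rest**2)
    print("Johnson noise of 25 kOhm: %.2f pA/rtHz" % (i_johnson * 1e12))
    print("Total slow-path current noise: %.2f pA/rtHz" % (i_total * 1e12))

With any plausible placeholder below ~0.5 pA/rtHz for the rest of the path, the total stays under 1 pA/rtHz, consistent with the statement above.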

Attachment 1: CoilDriverBias.pdf
CoilDriverBias.pdf CoilDriverBias.pdf CoilDriverBias.pdf
Attachment 2: currentNoise.pdf
currentNoise.pdf
Attachment 3: PSRR.pdf
PSRR.pdf
  14225   Tue Oct 2 23:57:16 2018   gautam   Update   PonderSqueeze   Squeezing scenarios

[kevin, gautam]

We have been working on double checking the noise budget calculations. We wanted to evaluate the amount of squeezing for a few different scenarios that vary in cost and time. Here are the findings:

Squeezing scenarios

Sqz [dBvac]   fmin [Hz]   PPRM [W]   PBS [W]   TPRM [%]   TSRM [%]
-0.41         215         0.8        40        5.637      9.903
-0.58         230         1.7        80        5.637      9.903
-1.05         250         1.7        150       1          17
-2.26         340         10         900       1          17

All calculations done with

  • 4.5kohm series resistance on ETMs, 15kohms on ITMs, 25kohm on slow path on all four TMs.
  • Detuning of SRC = -0.01 deg.
  • Homodyne angle = 89.5 deg.
  • Homodyne QE = 0.9. 
  • Arm losses are 20 ppm RT.
  • LO beam assumed to be extracted from PR2 transmission, and is ~20ppm of circulating power in PRC.

Scenarios:

  1. Existing setup, new RC folding mirrors for PRG of ~45.
  2. Existing setup, send Innolight (Edwin) for repair (= diode replacement?) and hope we get 1.7 W on back of PRM.
  3. Repair Innolight, new PRM and SRM, former for higher PRG, latter for higher DARM pole.
  4. Same as #3, but with 10 W input power on back of PRM (i.e. assuming we get a fiber amp).

Remarks:

  • The errors on the small dB numbers are large - a 1% change in model parameters (e.g. arm losses, PRG, coil driver noise etc.) can mean no observable squeezing. 
  • Actually, this entire discussion is moot unless we can get the RIN of the light incident on the PRM lower than the current level (estimated from MC2 transmission, filtered by CARM pole and ARM zero) by a factor of 60dB.
    • This is because even if we have 1mW contrast defect light leaking through the OMC, the beating of this field (in the amplitude quadrature) with the 20mW LO RIN (also almost entirely in the amplitude quad) yields significant noise contribution at 100 Hz (see Attachment #1).
    • Actually, we could have much more contrast defect leakage, as we have not accounted for asymmetries like arm loss imbalance.
    • So we need an ISS that has 60dB of gain at 100 Hz. 
    • The requirement on LO RIN is consistent with Eq 12 of this paper.
  • There is probably room to optimize SRC detuning and homodyne angle for each of these scenarios - for now, we just took the optimized combo for scenario #1 for evaluating all four scenarios.
  • OMC displacement noise seems to only be at the level of 1e-22 m/rtHz, assuming that the detuning for s-pol and p-pol is ~30 kHz if we were to lock at the middle of the two resonances
    • This assumes 0.02 deg difference in amplitude reflectivity b/w polarizations per optic, other parameters taken from aLIGO OMC design numbers.
    • We took OMC displacement noise from here.

Main unbudgeted noises:

  • Scattered light.
  • Angular control noise reinjection (not sure about the RP angular dynamics for the higher power yet).
  • Shot noise due to vacuum leaking from sym port (= DC contrast defect), but we expect this to not be significant at the level of the other noises in Atm #1.
  • Osc amp / phase.
  • AUX DoF cross coupling into DARM readout.
  • Laser frequency noise (although we should be immune to this because of our homodyne angle choice).

Threat matrix has been updated.

Attachment 1: PonderSqueeze_NB_LORIN.pdf
PonderSqueeze_NB_LORIN.pdf
  14229   Thu Oct 4 08:25:50 2018   Steve   Update   VAC   rga scan pd81 at day 78

 

 

Attachment 1: pd81d78.png
pd81d78.png
  14243   Thu Oct 11 13:40:51 2018   yuki   Update   Computer Scripts / Programs   loss measurements
Quote:

This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM):

  1. Dither-align the interferometer with both arms locked. Freeze outputs when done.
  2. Misalign ETMY + ITMY.
  3. ITMY needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.
  4. Start the script, which does the following:
    1. Resume dithering of the XARM
    2. Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
    3. Freeze dithering
    4. Start a new set of averages on the scope, wait T_WAIT (5 seconds)
    5. Read data (= ASDC power and MC2 trans) from scope and save
    6. Misalign ETMX and wait 5s
    7. Read data from scope and save
    8. Repeat desired amount of times
  5. Close the PSL shutter and measure the PD dark levels

Information for the armloss measurement:

  • Script which gets the data:  /users/johannes/40m/armloss/scripts/armloss_scope/armloss_dcrefl_asdcpd_scope.py
  • Script which calculates the loss: /users/johannes/40m/armloss/scripts/misc/armloss_AS_calc.py
  • Before doing the procedure Johannes wrote, you have to prepare as follows:
    • put a PD in the anti-symmetric beam path to get the ASDC signal.
    • put a PD in the MC2 box to get the transmitted light of the IMC. It is used to normalize the beam power.
    • connect those 2 PDs to the oscilloscope and connect an Ethernet cable to it.
  • Usage: python2 armloss_dcrefl_asdcpd_scope.py [IP address of Scope] [ScopeCH for AS] [ScopeCH for MC] [Num of iteration] [ArmMode]

Note: The script uses the httplib2 module. You have to install it if you don't have it.

Locked arms are needed to calculate the arm loss, but the alignment of the PMC is very bad now, so first I will align it. (Gautam aligned it and the PMC is locked now.) 

gautam: The PMC alignment was fine, the problem was that the c1psl slow machine had become unresponsive, which prevented the PMC length servo from functioning correctly. I rebooted the machine and undid the alignment changes Yuki had made on the PSL table.

  14244   Fri Oct 12 08:27:05 2018   Steve   Update   VAC   drypump

Gautam and Steve,

Our TP3 drypump seal is at 360 mT [0.25 A load on the small turbo] after one year. We tried to swap in an old spare drypump with a new tip seal. It was blowing its fuse, so we could not do it.

The noisy aux drypump was turned on and opened to the TP3 foreline [ two drypumps are in the foreline now ]. The pressure is 48 mT with 0.17 A load on the small turbo.

Attachment 1: forepump.png
forepump.png
  14245   Fri Oct 12 12:29:34 2018   yuki   Update   Computer Scripts / Programs   loss measurements

With Gautam's help, the Y arm was locked. Then I ran the script "armloss_dcrefl_asdcpd_scope.py", which gets the signals from the oscilloscope. It ran and got data, but I found some problems:

  1. It seemed that the process in the script which misaligns the arm cavity didn't work.
  2. The script "armloss_dcrefl_asdcpd_scope.py" gets the signal and the other script "armloss_AS_calc.py" calculates the arm loss, but the output file the former makes doesn't match the format the latter requires. A script that converts the format is needed.

Anyway, I got the data needed so I will calculate the loss after converting the format.

  14247   Fri Oct 12 17:37:03 2018   Steve   Update   VAC   pressure gauge choices

We want to measure the pressure gradient in the 40m IFO.

Our old MKS cold cathodes are out of order. The existing working gauge at the pumpspool is an InstruTech CCM501.

The plan is to purchase 3 new gauges for ETMY, BS and MC2 location.

Basic cold cathode     or    Bayard-Alpert Pirani

    

 

  14248   Fri Oct 12 20:20:29 2018   yuki   Update   Computer Scripts / Programs   loss measurements

I ran the script for measuring the arm loss and calculated a rough Y-arm round-trip loss for now. The result was 89.6 ppm. (The error should be considered later.)

The measurement was done as follows:

  1. install hardware
    1. Put a PD (PDA520) in anti-symmetric beam path to get ASDC signal.
    2. Use a PD (PDA255) in the MC2 box to get the transmitted light of the IMC. It is used to normalize the beam power.
    3. Connect those 2 PDs to the oscilloscope (IP: 192.168.113.25) and connect an Ethernet cable to it.
  2. measure DARK noise
    1. Block beam going into PDs with dampers and turn off the room light.
    2. Run the script "armloss_dcrefl_acdcpd_scope.py" using "DARK" mode.
  3. measure the ASDC power when Y-arm locked and misaligned
    1. Remove dampers and turn off the room light.
    2. Dither-align the interferometer with both arms locked. Freeze outputs when done. (Click C1ASS.adl>!MoreScripts>ON and click C1ASS.adl>!MoreScripts>FreezeOutputs.)
    3. Misalign ETMX + ITMX. (Just click "Misalign" button.)
    4. Further misalign ITMX with the slider. (see previous study: ITMX needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.)
    5. Start the script "armloss_dcrefl_acdcpd_scope.py" using "ETMY" mode, which does the following:
      1. Resume dithering of the YARM.
      2. Check the YARM dither error signal rms with CDS. If it is calm enough, proceed. (In the previous study the rms threshold was 0.7. Now the "ETM_YAW_L_DEMOD_I" signal was about 15 (noisy), so the threshold was set to 17.)
      3. Freeze dithering.
      4. Start a new set of averages on the scope, wait T_WAIT (5 seconds).
      5. Read data (= ASDC power and MC2 trans) from scope and save.
      6. Misalign ETMY and wait 5 s. (I added a code which switches the LSC mode ON and OFF.)
      7. Read data from scope and save.
      8. Repeat desired amount of times.
  4. calculate the arm loss
    1. Start the script "armloss_AS_calc.py", whose content is follows:
      • requires given parameters: mode-matching efficiency, modulation depth, transmissivity. I used the same values as Johannes did last year, which are (huga)
      • reads the datafile of beam power at ASDC and MC2 trans, which is created by "armloss_dcrefl_acdcpd_scope.py".
      • calculates arm loss from the equation (see 12528 and 12854).

Result:

YARM
('AS_DARK =', '0.0019517200000000003') #dark noise at ASDC 
('MC_DARK =', '0.02792') #dark noise at MC2 trans
('AS_LOCKED =', '2.04293') #beam power at ASDC when the cavity was locked 
('MC_LOCKED =', '2.6951620000000003')
('AS_MISALIGNED =', '2.0445439999999997') #beam power at ASDC when the cavity was misaligned
('MC_MISALIGNED =', '2.665312')

\hat{P} = \frac{P_{AS}-P_{AS}^{DARK}}{P_{MC}-P_{MC}^{DARK}} #normalized beam power 

\hat{P}^{LOCKED}=0.765,\ \hat{P}^{MISALIGNED}=0.775,\ \mathcal{L}=89.6\ \mathrm{ppm}
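As a sanity check, the normalized powers quoted above follow directly from the dark-subtracted scope readings, per the definition of \hat{P} above. A minimal Python sketch (the loss itself then follows from the equations in elogs 12528 and 12854, which are not reproduced here):

    # Reproduce the normalized powers from the raw scope readings listed above.
    AS_DARK, MC_DARK = 0.00195172, 0.02792
    AS_LOCKED, MC_LOCKED = 2.04293, 2.695162
    AS_MISALIGNED, MC_MISALIGNED = 2.044544, 2.665312

    def p_hat(p_as, p_mc):
        """Dark-subtracted ASDC power, normalized by the MC2 transmission."""
        return (p_as - AS_DARK) / (p_mc - MC_DARK)

    print(p_hat(AS_LOCKED, MC_LOCKED))          # ~0.765
    print(p_hat(AS_MISALIGNED, MC_MISALIGNED))  # ~0.775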

Comments:

  • "ETM_YAW_L_DEMOD_I_OUTPUT" was little noisy even when the arm was locked.
  • The reflected beam power when locked was higher than when misaligned. It seemed strange for me at first. Johannes suggested that it was caused by over-coupling cavity. It is possible when r_{ETMY}>>r1_{ITMY}.
  • My first (wrong) measurement said the arm loss was negative(!). That was caused by lack of enough misalignment of another arm mirrors. If you don't misalign ITMX enough then the beam or scattered light from X-arm would bring bad. The calculated negative loss would be appeared only when \frac{\hat{P}^{LOCKED}}{\hat{P}^{MISALIGNED}} > 1 + T_{ITM}
  • Error should be considered.
  • Parameters given this time should be measured again. 
  14251   Sat Oct 13 20:11:10 2018   yuki   Update   Computer Scripts / Programs   loss measurements
Quote:

the script "armloss_AS_calc.py",

  • "ETM_YAW_L_DEMOD_I_OUTPUT" was little noisy even when the arm was locked.
  • The reflected beam power when locked was higher than when misaligned. It seemed strange for me at first. Johannes suggested that it was caused by over-coupling cavity. It is possible when r_{ETMY}>>r1_{ITMY}.

Some changes were made in the script that gets the beam power signals:

  • The script watches "C1:ASS-X(Y)ARM_ETM_PIT/YAW_L_DEMOD_I_OUTPUT" and waits until the signals become small; however, some offset could be on the signal. So I changed it to wait until (DEMOD - OFFSET) becomes small. (Yesterday I wrote that ETM_YAW_L_DEMOD_I_OUTPUT was about 15 and a little noisy. I was wrong; that was just an offset value.)
  • I added code which stops the script when the transmitted IR beam power is low. You can set this threshold. The nominal value of "C1:LSC-TRX(Y)_OUT16" is 1.2 (1.0), so the threshold is set to 0.8 now. (A sketch of these two checks is below.)
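For illustration, here is a minimal sketch of the two checks described above; it is not the actual script. It assumes pyepics for EPICS access (the real script may use a different client), the YARM channel names are substituted from the patterns quoted above, and the timeout is an arbitrary choice.

    from epics import caget   # pyepics, assumed available on the workstations
    import time

    TR_CHAN    = 'C1:LSC-TRY_OUT16'                      # transmitted IR power (YARM)
    DEMOD_CHAN = 'C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OUTPUT'  # dither error signal
    TR_THRESH  = 0.8   # abort below this; nominal TRY is ~1.0
    ERR_THRESH = 0.7   # "calm enough" threshold on (DEMOD - OFFSET)

    def wait_for_settle(offset, timeout=120):
        """Wait until the offset-subtracted dither error is small; abort if the arm unlocks."""
        t0 = time.time()
        while time.time() - t0 < timeout:
            if caget(TR_CHAN) < TR_THRESH:
                raise RuntimeError('Arm transmission below threshold -- lock lost, aborting.')
            if abs(caget(DEMOD_CHAN) - offset) < ERR_THRESH:
                return True
            time.sleep(1)
        raise RuntimeError('Dither error signal did not settle within the timeout.')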

In yesterday's measurement the ASDC beam power was higher when locked than when misaligned, and I wrote that this was maybe caused by an over-coupled cavity. I then did the following calculation to explain this:

  • assume the power transmissivities of the ITM and ETM are 1.4e-2 and 1.4e-5.
  • assuming loss-less mirrors, you can calculate the amplitude reflectivities of the ITM and ETM.
  • consider a loss-less cavity consisting of the two mirrors; then \frac{E_{r}}{E_{in}} = \frac{-r_1+r_2e^{i\phi}}{1-r_1r_2e^{i\phi}} holds, where r1 and r2 are the amplitude reflectivities of the ITM and ETM, and E is the electric field.
  • Then you can calculate the reflected beam power on resonance and on anti-resonance. The ratio of these values is \frac{P_{RESONANT}}{P_{ANTI-RESO}} = 0.996, which is smaller than 1 (see the sketch below).
  • I found this calculation was wrong! The above calculation only holds when the cavity is aligned, not when misaligned. 99.04% of the incident beam power reflects when locked, and (100-1.4)% reflects when misaligned. The ratio is P(locked)/P(misaligned) = 1.004, higher than 1.
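For reference, a minimal Python sketch of the loss-less two-mirror reflection calculation described above, using the assumed transmissivities T_ITM = 1.4e-2 and T_ETM = 1.4e-5; it reproduces the resonant/anti-resonant ratio of ~0.996 quoted in the list.

    import numpy as np

    # Loss-less two-mirror cavity: E_r/E_in = (-r1 + r2*exp(i*phi)) / (1 - r1*r2*exp(i*phi))
    T_ITM, T_ETM = 1.4e-2, 1.4e-5
    r1, r2 = np.sqrt(1 - T_ITM), np.sqrt(1 - T_ETM)

    def refl_power(phi):
        e = np.exp(1j * phi)
        return abs((-r1 + r2 * e) / (1 - r1 * r2 * e))**2

    P_res  = refl_power(0.0)     # on resonance (locked)
    P_anti = refl_power(np.pi)   # anti-resonant
    print(P_res / P_anti)        # ~0.996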

 

  14253   Sun Oct 14 16:55:15 2018   not gautam   Update   CDS   pianosa upgrade

DASWG is not what we want to use for config; we should use the K. Thorne LLO instructions, like I did for ROSSA.

Quote:

pianosa has been upgraded to SL7. I've made a controls user account, added it to sudoers, did the network config, and mounted /cvs/cds using /etc/fstab. Other capabilities are being slowly added, but it may be a while before this workstation has all the kinks ironed out. For now, I'm going to follow the instructions on this wiki to try and get the usual LSC stuff working.

  14254   Mon Oct 15 10:32:13 2018   yuki   Update   Computer Scripts / Programs   loss measurements

I used these values for measuring armloss:

  • Transmissivity of ITM = 1.384e-2 * (1 +/- 1e-2) 
  • Transmissivity of ETM = 13.7e-6 * (1 +/- 5e-2)
  • Mode-Matching efficiency of XARM = 0.912 * (1 +/- 2e-2)
  • Mode-Matching efficiency of YARM = 0.867 * (1 +/- 2e-2)
  • modulation depth m1 (11MHz) = 0.179 * (1 +/- 2e-2)
  • modulation depth m2 = 0.226 * (1 +/- 2e-2),

then the uncertainties reported by the individual measurements are on the order of 6 ppm (~6.2 for the XARM, ~6.3 for the YARM). This accounts for fluctuations of the data read from the scope and uncertainties in mode-matching and modulation depths in the EOM. I made histograms for the 20 datapoints taken for each arm: the standard deviation of the spread is over 6ppm. We end up with something like:

XARM: 123 +/- 50 ppm
YARM: 152 +/- 50 ppm

This result has about 40% uncertainty for the XARM and 33% for the YARM (too big).

In the previous measurement, the fluctuation of each power was 0.1% and the fluctuation of P(Locked)/P(misaligned) was also 0.1%, so the uncertainty was small. In my measurement, on the other hand, the fluctuation of each power is about 2% and the fluctuation of P(Locked)/P(misaligned) is 2%. That's why the uncertainty became large.

We want to measure a tiny loss (~100 ppm), so the fluctuation of P(Locked)/P(misaligned) must be smaller than 1.6%.

(Edit on 10/23)
I think the error is dominated by systematic error in the scope. The beam power data had only 3 digits. If P(Locked) and P(misaligned) have 2% error, then
\frac{P_L}{P_M}\frac{1}{1+T_{\mathrm{ITM}}} = 0.99(3).
You have to check the scope configuration.

Attachment 1: XARM_20181015_1500.pdf
XARM_20181015_1500.pdf
Attachment 2: YARM_20181015_1500.pdf
YARM_20181015_1500.pdf
  14255   Mon Oct 15 12:52:54 2018   yuki   Update   Computer Scripts / Programs   additional comments
Quote:

but there's one weirdness: It get's the channel offset wrong. However this doesn't matter in our measurement because we're subtracting the dark level, which sees the same (wrong) offset.

When you do this measurement with the oscilloscope, take care of two things:

  1. Set the y-range of the scope so that every signal fits in the display; otherwise the data sent from the scope will be saturated.
  2. Set the y-position of the scope to the center and don't change it; otherwise some offset will appear in the data.
  14256   Mon Oct 15 13:59:42 2018   Steve   Update   VAC   drypump replaced

Steve & Bob,

Bob removed the head cover from the housing to inspect the condition of the tip seal. The tip seal was fine, but the Viton cover seal had a bad hump. This misaligned the tip seal and did not allow it to rotate.

It was repositioned and carefully tightened. It worked. Its starting current transient measured 28 A, and 3.5 A in operational mode.

This load is normal for an old pump. For comparison, the brand new DIP7 spare drypump drew 25 A at start and 3.1 A in operational mode. It is amazing how much punishment a slow-blow ceramic 10 A fuse can take [ 0215010.HXP ].

In the future one should measure the current pickup [ transient <100 ms ] after the seal change with a Fluke 330 Series current clamp.

 

It was swapped in and the foreline pressure dropped to 24 mTorr after 4 hours. This is very good. The TP3 rotational drive current is 0.15 A at 50k rpm, 24C.

Quote:

Gautam and Steve,

Our TP3 drypump seal is at 360 mT [0.25A load on small turbo]  after one year.  We tried to swap in old spare drypump with new tip  seal. It was blowing it's fuse, so we could not do it.

Noisy aux drypump turned on and opened to TP3 foreline [ two drypumps are in  the foreline now ]  The pressure is 48 mT and 0.17A load on small turbo.

 

Attachment 1: drypump_swap.png
drypump_swap.png
  14258   Tue Oct 16 00:44:29 2018   yuki   Update   Computer Scripts / Programs   loss measurements

The scripts for measuring armloss are in the directory "/opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_scope".

  • armloss_dcrefl_asdcpd_scope.py: gets the data and makes an ASCII file.
  • armloss_AS_calc.py: calculates the arm loss from a selected set of files.
  • armloss_calc_histogram.py: calculates the arm loss from selected files and makes a histogram.
  14259   Wed Oct 17 09:31:24 2018   Steve   Update   PSL   main laser off

The main laser went off when the PSL doors were opened and closed. It was turned back on and the PSL is locked.

Attachment 1: Inno2wFlipped_off.png
Inno2wFlipped_off.png
  14261   Thu Oct 18 00:27:37 2018   Koji   Update   SUS   SUS PD Whitening board inspection

[Gautam, Koji]

As part of the preparation for the replacement of c1susaux with Acromag, I inspected the coil-OSEM transfer function measurements for the vertex SUSs.

The TFs showed the typical f^-2 with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to MAX333s are as listed below.

       Whitening ON   Whitening OFF
UL     ~12V           ~8.6V
LL     0V             15V
UR     0V             15V
LR     0V             15V
SD     0V             15V

The switching voltage for UL is obviously incorrect. We thought this came from a broken BIO board and thus swapped the corresponding board, but the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?

Initially, we thought that the BIO can't drive the 5 kOhm pull-up resistor from 15V to 0V (= 3 mA of current), so I replaced the pull-up resistor with 30 kOhm. But this did not help. These 30 k's are left on the board.
 

Attachment 1: 43.png
43.png
  14262   Mon Oct 22 15:19:05 2018   Steve   Update   VAC   Maglev controller serviced

Gautam & Steve,

Our controller is back with the Osaka maintenance completed. We swapped it in this morning.

Quote:

TP-1 Osaka maglev controller  [  model TCO10M,  ser V3F04J07 ]  needs maintenance. Alarm led  on indicating  that we need Lv2 service.

The turbo and the controller are in good working order.

*****************************

Hi Steve,

Our maintenance level 2 service price is $...... It consists of a complete disassembly of the controller for internal cleaning of all ICB’s, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks.

  RMA 5686 has been assigned to Caltech’s returning TC010M controller. Attached please find our RMA forms. Complete and return them to us via email, along with your PO, prior to shipping the cont

Best regards,

Pedro Gutierrez

Osaka Vacuum USA, Inc.

510-770-0100 x 109

*************************************************

our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo?

                        The Osaka maglev turbopumps are designed with a 100,000 hours(or ~ 10 operating years) life span but as you know most of our end-users are

                        running their Osaka maglev turbopumps in excess of 10+, 15+ years continuously.     The 100,000 hours design value is based upon the AL material being rotated at

                        the given speed.   But the design fudge factor have somehow elongated the practical life span.  

We should have the cost of new maglev & controller in next year budget. I  put the quote into the wiki.

 

                         

 

 

Attachment 1: our_controller_is_back.png
our_controller_is_back.png
  14263   Thu Oct 25 16:17:14 2018   Steve   Update   safety   safety training

Chub Osthelder received 40m specific basic safety training today.

  14264   Wed Oct 31 17:54:25 2018   gautam   Update   VAC   CC1 hornet power connection restored

Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.

Attachment 1: cc1_Hornet.png
cc1_Hornet.png
  14266   Fri Nov 2 10:24:20 2018   Steve   Update   PEM   roof cleaning

Physical plant is cleaning our roof and gutters today.
