The 40m experienced a building-wide power failure for ~30 seconds at ~7:38 pm today.
Thought that might be important...
I'm checking the status from home.
P1 is 8e-4 torr
nodus did not feel the power outage (is it UPS supported?)
linux1 booted automatically
c1ioo booted automatically.
c1sus, c1lsc, c1iscex, c1iscey need manual power button push.
9:11pm closed PSL shutter, turned Innolight 2W laser on,
turned 3 IFO air conditioners on,
CC1 5.1e-5 torr, V1 is closed, Maglev has failed, valve configuration is "Vacuum Normal" with V1 & VM1 closed, RGA not running, c1vac1 and c1vac2 were saved by UPS,
(Maglev is not connected to the UPS because it is running on 220V)
reset & started Maglev.........I can not open V1 without the 40mars running...........
Rossa is the only computer running in the control room,
Nodus and Linux1 were saved by UPS,
turned on IR lasers at the ends, green shutters are closed
It is safe to leave the lab as is.
As far as I know the system is running as usual. I had the IMC locked and one of the arms flashing.
But the other arm had no flash, and neither arm was locked before lunch time.
This morning Steve and I went around the lab to turn on the realtime machines.
Also, we took advantage of this opportunity to shut down linux1 and nodus
to replace the extension cables for their AC power.
I also installed a 3TB hard disk on linux1. This was to provide a local daily copy of our
working area. But I could not make the disk recognized by the OS.
It seems that there is a "2TB" barrier: disks bigger than 2.2TB can't be recognized
by the older machines. I'll wait for the upgrade of the machine.
Rebooting the realtime machines did not help FB to talk with them. I fixed them.
Basically what I did was:
- Stop all of the realtime codes by running rtcds kill all on c1lsc, c1ioo, c1sus, c1iscex, c1iscey
rtcds kill all
- run sudo ntpdate -b -s -u pool.ntp.org on c1lsc, c1ioo, c1sus, c1iscex, c1iscey, and fb
sudo ntpdate -b -s -u pool.ntp.org
- restart realtime codes one by one, to check which code makes FB unhappy. But in reality
FB was happy with all of them running. (A consolidated sketch of these commands follows below.)
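For the record, the whole sequence amounts to something like this (a rough sketch only, assuming passwordless ssh as controls to each machine; host list as above). In practice I restarted the models one at a time as described above.
for FE in c1lsc c1ioo c1sus c1iscex c1iscey; do ssh controls@$FE "rtcds kill all"; done
for FE in c1lsc c1ioo c1sus c1iscex c1iscey fb; do ssh controls@$FE "sudo ntpdate -b -s -u pool.ntp.org"; done
for FE in c1lsc c1ioo c1sus c1iscex c1iscey; do ssh controls@$FE "rtcds start all"; done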
Then slow machines except for c1vac1 and c1vac2 were burtrestored.
Zach reported that svn was down. I went to the 40m wiki and searched "apache".
There are instructions there on how to restart apache.
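For reference, the restart generally boils down to one of the following, depending on the OS / init system on nodus (the wiki page has the authoritative procedure; the exact command here is an assumption):
sudo apachectl restart
sudo /etc/init.d/apache2 restart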
Recovery work: now arms are locking as usual
- FB is failing very frequently. Every time I see red signals in the CDS summary, I have to run "sudo ntpdate -b -s -u pool.ntp.org"
- PMC was aligned
- The main Marconi had returned to its initial state. Changed the frequency and amplitude back to the nominal values labeled on the unit
- The SHG oven temp controllers were disabled. I visited all three units and pushed "enable" buttons.
- Y arm was immediately locked. It was aligned using ASS.
- X arm did not show any flash. I found that the scx model was not successfully burtrestored yesterday.
The settings were restored using the Mar 22 snapshot.
- After a little tweak of the ETMX alignment, a decent flash was achieved. But still it could not be locked.
- Ran s/LSC/LSCoffset.py. This immediately made the X arm lock.
- Checked the green alignment. The X arm green is beating with the PSL at ~100MHz but is misaligned beyond the PZT range.
The Y arm green is locked on TEM00 and is beating with the PSL at ~100MHz.
Chiara reports an uptime of >195 days, so its UPS is working fine
FB, megatron, optimus booted via front panel button.
Jetstor RAID array (where the frames live) was beeping, since its UPS failed as well. The beep was silenced by clicking on "View Events/Mute Beeper" at 192.168.113.119 in a browser on a martian computer. I've started a data consistency check via the web interface, as well. According to the log, this was last done in July 2015, and took ~19 hrs.
Frontends powered up; models don't start automatically at boot anymore, so I ran rtcds start all on each of them.
rtcds start all
All frontends except c1ioo had a very wrong datetime, so I ran sudo ntpdate -b -s -u pool.ntp.org on all of them, and restarted the models (just updating the time isn't enough). There is an /etc/ntp.conf in the frontend filesystem that points to nodus, which is set up as an NTP server, but I guess this isn't working.
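For reference, the relevant part of a frontend /etc/ntp.conf pointing at nodus would look roughly like this (a sketch only; the server name and driftfile path here are assumptions, check the actual file before touching it):
server nodus iburst
driftfile /var/lib/ntp/ntp.drift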
PMC locking was hindered by sticky sliders. I burtrestored the c1psl.snap from Friday, and the PMC locked up fine. (One may be fooled by the unchanged HV mon when moving the offset slider into thinking the HV KEPCO power supplies need to be brought down and up again, but it's just the sliders)
Mode cleaner manually locked and somewhat aligned. Based on my memory of PMC camera/transmission, the pointing changed; the WFS need a round of MC alignment and WFS offset setting, but the current state is fine for operation without all that.
10:15 power glitch today. ETMX Lightwave and air conditioning turned back on
The CDS situation was not as catastrophic as the last time, it was sufficient for me to ssh into all the frontends and restart all the models. I also checked that monit was running on all the FEs and that there was no date/time issues like we saw last week. Everything looks to be back to normal now, except that the ntpd process being monitored on c1iscex says "execution failed". I tried restarting the process a couple of times, but each time it returns the same status after a few minutes.
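A quick way to poke at it from the frontend is something like the following (a sketch, assuming monit is managing the ntpd entry directly):
ssh controls@c1iscex
sudo monit summary
sudo monit restart ntpd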
I was able to realign the arms, lock them, and have run the dither align to maximize IR transmission - looks like things are back to normal now. For the Y-end, I used the green beam initially to do some coarse alignment of the ITM and ETM, till I was able to see IR flashes in the control room monitors. I then tweaked the alignment of the tip-tilts till I saw TEM00 flashes, and then enabled LSC. Once the arm was locked, I ran the dither align. I then tweaked ITMX alignment till I saw IR flashes in the X arm as well, and was able to lock it with minimal tweaking of ETMX. The LSC actuation was set to ETMX when the models were restarted - I changed this to ITMX actuation, and now both arms are locked with nominal IR transmissions. I will center all the Oplev spots tomorrow before I start work on getting the X green back - I've left the ETM Oplev servos on for now.
While I was working, I noticed that frame builder was periodically crashing. I had to run mxstream restart a few times in order to get CDS back to the nominal state. I wonder if this is a persistent effect of the date/time issues we were seeing earlier today?
Sun Feb 28 18:23:09 2010
Hi. This is Alberto. It's Sun Feb 28 19:23:09 2010
Monday, March 1, 9:00 2010 Steve turns on PSL-REF cavity ion pump HV at 1Y1
At 11:13 am there was a ~2-3 second interruption of all power at the 40m.
I checked that nobody was in any of the lab areas at the time of the outage.
I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something.
I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one, the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also
Most of the cds channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow the procedures here to get some useful channels.
I found the HVACs for the ends were off. They were turned back on.
The IMC alignment was restored and the IMC is nicely locking.
Once the vacuum level recovered to P<1e-4 torr, the PSL shutter could be opened.
The IMC was still flashing, so the lock to TEM00 was possible.
Once it was locked, the MC2 alignment was tweaked and the autolocker and the WFS kicked in to help the locking/alignment.
The transmission is ~13k and seems reasonable considering the low transmission of the PMC (0.672)
=== Observation ===
=== Initial Recovery of TP2/TP3 ===
=== Rough pumping down ===
=== Towards main volume pumping ===
Now the main volume and the annuli are pumped down with TP1 and TP2/TP3, with RP AUX backing.
The attachment shows the pressure glitch for the main volume.
[Paco, Tega, JC, Yehonathan]
We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.
We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a 0 limit. We turned both off.
We did some damping tests and observed that the PIT and YAW motions were overdamped. We tuned the gains of the filters in the following way:
These actions seem to make things better.
[JC, Tega, Paco ]
I would like to mention that during the vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure decreased. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this may be an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?
Took the backup (snapshot) of /home/export as of Aug 12, 2022
controls@nodus> cd /cvs/cds/caltech/nodus_backup
controls@nodus> rsync -ah --progress --delete /home/export ./export_220812 >rsync.log&
As the last backup was just a month ago (July 8), rsync finished quickly (~2min).
After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.
I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines so I will wait for Koji before debugging further. All the "Danger" signs at the VEA entry points aren't on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The Red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...
I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way so I have restored the interlock connections until we decide the next course of action...
[Gautam, Aaron, Koji]
The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.
- The schematic diagram of the interlock system D1200192
- We have opened the interlock box. Immediately we found that the DC switching supply (OMRON S82K-00712) is not functioning anymore. (Attachment #1)
- We could not remove the module as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered, with no DC output).
- Instead, we brought a DC supply adapter from somewhere and chopped off its connector so that we could hook it up to the crimp-type quick connects. In Attachment #1, the gray line is +12V, and the orange and black lines are GND.
- Upon inspection, the wires of the "door interlock reset button" fell off, and the momentary switch (GRAYHILL 30-05-01-502-03) got broken. So it was replaced with another momentary switch, which is unfortunately much smaller than the original. (Attachments 2 and 3)
- Once the DC supply adapter was plugged into an AC tap, we heard the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.
- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of its green lamps were gone. We found that we have plenty of spare lamps and relays inside the box, so we replaced the bulbs, and now the A.C. lights are functioning. (Attachments 4 & 5)
I made the first trial of locking a power-recycled single arm.
This is NOT part of the mainstream work,
but it gives us some prospects towards the full lock and perhaps some useful thoughts.
Lock Acquisition Steps
Actual Time Series
Below is a plot of the actual lock acquisition sequence in time series.
Assumptions on the parameter estimations
I constructed a regulator board that can take ±24 V and supply a regulated ±15 V or ±5 V. I followed the schematics from LIGO-D1000217-v1.
I was going to make 2 boards, one for ±15 V and one for ±5 V, but Chub just gave me a second assembled board when I asked him for the parts to construct it.
We have decided that, rather than replacing the power source for the amplifiers that are on the rack, and leaving the Thorlabs PD as POP22/110, we will remove all of the temporary elements, and put in something more permanent.
So, I have taken the broadband PDs from Zach's Gyro experiment in the ATF. We will figure out what needs to be done to modify these to notch out unwanted frequencies, and amplify the signal nicely. We will also create a pair of cables - one for power from the LSC rack, and one for signal back to the LSC rack. Then we'll swap out the currently installed Thorlabs PD and replace it with a broadband PD.
I looked at some DC signals for the buildup of the carrier and sideband fields in various places. The results are shown in Attachments #1 and #2.
This is very interesting. Do you have the ASDC vs PRG (~TRX or TRY) plot? That gives you insight into what is the cause of the low recycling gain.
Attachments #1 and #2 are in the style of elog15356, but with data from a more recent lock. It'd be nice to calibrate the ASDC channel (and in general all channels) into power units, so we have an estimate of how much sideband power we expect, and the rest can be attributed to carrier leakage to ASDC.
On the basis of Attachment #1, the PRG is ~19, and at times, the arm transmission goes even higher. I'd say we are now in the regime where the uncertainty of the losses in the recycling cavity (maybe beamsplitter clipping?) is important in using this info to try and constrain the arm cavity losses. I'm also not sure what to make of the asymmetry between TRX and TRY. Allegedly, the Y arm is supposed to be lossier.
Gautam and I were talking about some modulation and demodulation and wondered what is the power combining situation for the triple resonant EOM installed 8 years ago. And we noticed that the current setup has additional ~5dB loss associated with the 3-to-1 power combiner. (Figure a)
N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N). You can think about the reciprocal process (power splitting) (Figure b). The 2W input coming into the 2-port power splitter gives us two 1W outputs. The opposite process is power combining, as shown in Figure c. In this case, the two identical signals are constructively added in the combiner, but the output is not 20Vpk but 14Vpk. Considering the linearity, when one of the ports is terminated, the output is going to be a half. So we expect 27dBm output for a 30dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner like a diplexer or a triplexer.
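To put numbers on it: the intrinsic loss of an N-to-1 broadband combiner is 10 log10(N), i.e. ~3 dB for N=2 and 10 log10(3) ≈ 4.8 dB for the 3-to-1 unit (hence the "~5 dB" above). In the Figure c example (two identical inputs, presumably 10 Vpk each), each path is attenuated by 1/sqrt(2) in voltage, so the coherent sum is 2 x 10/sqrt(2) ≈ 14 Vpk rather than 20 Vpk. With one port terminated, the output power is half the input, i.e. 30 dBm - 3 dB = 27 dBm, as in Figure d.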
So actually, it was the C1PSL channels that had died. We did the following to get them back:
Looks like there was a power glitch at around 10am today.
All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).
Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.
GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)
[Koji, Rana, Gautam]
The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:
Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error?
Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.
Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params.
Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.
Attachment #4 - Warning lights on C1IOO
Now IFO work like fixing ASS can continue...
I measured the power incident on REFL11 and REFL55. Steve was concerned that it is too high. Going by this elog, the incident power levels were REFL11: 30 mW and REFL55: 87 mW (assuming an efficiency of ~0.8 A/W @ 1064nm for the C30642 PD). However, there is currently a combination of polarising BS and half-waveplate with which we have attenuated the power incident on the REFL PDs. We now have (with the PRM misaligned):
REFL11: Power incident = 7.60 mW ; DC out = 0.330 V => efficiency = 0.87 A/W
REFL55: Power incident = 23 mW ; DC out = 0.850 V => efficiency = 0.74 A/W
and with the PRM aligned:
REFL11: DC out = 0.35 V => 8 mW is incident
REFL55: DC out = 0.975 V => 26 mW is incident
These power levels may go up further when everything is working well.
The max rated photo-current is 100mA => max power 125mW @0.8 A/W.
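For reference, the efficiencies quoted above follow from I = V_DC / R, with the DC transimpedance apparently R ≈ 50 Ohm (inferred from the numbers, not independently checked):
REFL11: 0.330 V / 50 Ohm = 6.6 mA; 6.6 mA / 7.60 mW ≈ 0.87 A/W
REFL55: 0.850 V / 50 Ohm = 17.0 mA; 17.0 mA / 23 mW ≈ 0.74 A/W
PRM aligned: 0.35 V → 7.0 mA → 7.0 / 0.87 ≈ 8 mW; 0.975 V → 19.5 mA → 19.5 / 0.74 ≈ 26 mW
Max rating: 100 mA / 0.8 A/W = 125 mW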
What is the power level on the MC REFL PDs and WFS when the MC is not locked?
Kiwamu, Nancy, and I restored the power into the MC today:
We found many dis-assembled Allen Key sets. Do not do this! Return tools to their proper places or else you are just wasting everyone's time!
What was the point:
I twiddled with several different things this evening to increase the power into the Mode Cleaner. The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.
Summed-up list of what I did:
* Found the MC nicely aligned. Did not ever adjust the MC suspensions.
* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.
* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there. This undoes the temporary change from elog 3878. Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.
* Noticed that the MCWFS path is totally wrong. Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done. We will need to fix this.
* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.
* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S. I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one. According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it. This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.
* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg. The rotation was to keep the laser power down while we were at atmosphere. I put the HWP back to the place that Kevin had determined was best in his elog 3818.
* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX. I might be seeing IR flashes (I blocked the green beam on the ETMX table so I wouldn't be confused. I unblocked it before finishing for the night) on the CCD for the Xarm, but that might also be wishful thinking. There's definitely something lighting up / flashing in the ~center of ETMX on the camera, but I can't decide if it's scatter off of a part of the suspension tower, or if it's really the resonance. Note to self: Rana reminds me that the ITM should be misaligned while using BS to get beam on ETM, and then using ETM to get beam on ITM. Only then should I have realigned the ITM. I had the ITM aligned (just left where it had been) the whole time, so I was making my life way harder than it should have been. I'll work on it again more today (Tuesday).
What happened in the end:
The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000). When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD. I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea?
Xarm not yet locked.
Can't really see flashes on the Test Mass cameras.
- Previously MC TRANS was 9000~10000 when the alignment was good. This means that the MC TRANS PD is saturated if the full power is given.
==> Transimpedance must be changed again.
- Y1-45S has 4% transmission. Definitely we would like to use a Y1-0 or something else. The mirror that got replaced must still be around somewhere.
I think Suresh replaced it, so he must remember where it is.
- We must confirm the beam pointing on the MC mirrors with A2L.
- We must check the MCWFS path alignment and configuration.
- We should take a picture of the new PSL setup in order to update the photo on the wiki.
The power buildup in the MC is ~400, so 100mW of incident power would give about 40W circulating in the mode cleaner.
Rana points out that the ATF had a 35W beam running around the table in air, with a much smaller spot size than our MC has, so 40W should be totally fine in terms of coating damage.
I have therefore increased the power into the vacuum envelope to ~75mW. The MC REFL PD should be totally fine up to ~100mW, so 75mW is plenty low. The MC transmission is now a little over 1000 counts. I have changed the low power mcup script to not bring the VCO gain all the way up to 31dB anymore. Now it seems happy with a VCO gain of 15dB (which is the same as normal power).
The ETMY laser was operating at 1.5 A current and 197 mW power.
For the efficient frequency doubling of the AUX laser beam at the ETMY table, a higher power is required.
Steve and I changed the current level of the laser from 1.5 A to 2.1 A in steps of 0.1 A and noted the corresponding power output. The graph is attached here.
The laser has been set to a current of 1.8 A. At this current, the power of the output beam just near the laser output is measured to be 390 mW.
The power of the beam being coupled into the optical fibre is measured to be between 159 mW and 164 mW (the power meter was showing fluctuating readings).
The power of the beam coming out of the fibre far end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths on the ETMY table with the beams labelled with their respective powers.
Next we are going to adjust the green alignment on the ETMY and then measure the power of the beam.
At the output end of the fibre on the PSL, a power meter has been put to dump the beam for now as well as to help with the alignment at the ETMY table.
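(For reference: 72 mW out of the fibre for ~160 mW going in implies a fibre coupling + transmission efficiency of roughly 72/160 ≈ 45%.)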
For the phase locking or beat note measurement we only need ~1 mW. It's a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to using such high powers.
Well, the plan is to put a neutral density filter in the beam path before it enters the fibre. But before I could do that, I set up the camera on the PSL table to look at the fiber output. I will need it while I realign the beam after putting in the neutral density filter. I have attached the ETMY layout with the neutral density filter in place herewith.
After all realignment is finished, here are the powers at several positions:
There was a power outage ~30 mins ago that knocked out CDS, PSL etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted), I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".
The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temp controllers stayed on, usually they are taken out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices survived.
The control room workstation, zita, which is responsible for the IFO status StripTool display on the large TV screen, has some display driver issues I think - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyways, after a couple of power cycles, the wall StripTools are up once again.
Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.
I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.
c1vac1 is not responding as of now. All other computers have come back and are alive.
IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.
PMC is fine. There are sliders in the Phase Shifter screen (accessible from the PMC screen) that also needed touching.
PSL shutter is still closed until Steve is happy with the vacuum system - I guess we don't want to let high power in, in case we come all the way up to atmosphere and particulates somehow get in and get fried on the mirrors.
We are pumping again. This is a temporary configuration. The annuli are at atmosphere. The reset/reboot of c1vac1 and c1vac2 opened everything except the valves that were disconnected.
TP2 lost its vent solenoid power supply and dry pump during the power outage.
They were replaced, but the new small turbo controller is not set up the way the old TP2 controller was, so it does not allow V4 to open.
Tomorrow I will swap back the old controller, pump down the annuli and close off the ion pumps.
I removed the beam block from the PSL table and opened the shutter. CC4 shows the real pressure, 2e-5 Torr.
CC1 is not real.
I touched up the PMC alignment.
While bringing back the MC, I realized IOO got a really old BURT restore again... Restored from midnight last night. WFS still working.
Now aligning IFO for tonight's work
TP2 is controlled by the old controller. Annuli pumped down. Valve configuration: "Vacuum Normal"
Ion pumps closed at <1e-4 mT