ID | Date | Author | Type | Category | Subject
15299
|
Tue Apr 7 10:56:39 2020 |
Jon | Update | BHD | BHD front-end complication |
Quote: |
I have a query out to Dolphin asking:
- Have they done any testing of these old drivers on Linux kernel 4.x (e.g., Debian 9/10)?
- Is there any way to buy modern IPC cards for the two new machines and interface them with our existing Gen1 network?
|
Answers from Dolphin:
- No, and kernel 4.x (modern Linux) definitely will not work with the Gen1 cards.
- No, cards using different PCIe chipsets cannot be mixed.
Since upgrading every front end is out of the question, our only option is to install an older OS (Linux kernel < 4.x) on the two new machines. Based on Keith's advice, I think we should go with Debian 8. (Link to Keith's Debian 8 instructions.) |
15300
|
Tue Apr 7 15:30:40 2020 |
Jon | Summary | NoiseBudget | 40m noise budget migrated to pygwinc |
In the past year, pygwinc has expanded to support not just fundamental noise calculations (e.g., quantum, thermal) but also any number of user-defined noises. These custom noise definitions can do anything, from evaluating an empirical model (e.g., electronics, suspension) to loading real noise measurements (e.g., laser AM/PM noise). Here is an example of the framework applied to H1.
Starting with the BHD review-era noises, I have set up the 40m pygwinc fork with a working noise budget which we can easily expand. Specific actions:
- Updated the 40m fork to the latest pygwinc version (while preserving the commit history).
- Added a directory ./CIT40m containing the 40m-specific noise budget files (created by GV).
- Added an ipython notebook CIT40m.ipynb at the root level showing how to generate a noise budget.
- Integrated our DAC and seismic noise estimators into pygwinc.
- Marked the old 40m NB repo as obsolete (last commit > 2 yrs ago). Many of these noise estimates are probably stale, but I will work with GV to identify which ones can be migrated.
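For reference, generating a budget from the new CIT40m directory (as the notebook above does) boils down to something like the following. This is a minimal sketch assuming the current pygwinc API (gwinc.load_budget / Budget.run); see CIT40m.ipynb for the real usage and plotting details.

import numpy as np
import gwinc

freq = np.logspace(1, 4, 1000)                      # 10 Hz - 10 kHz
budget = gwinc.load_budget('./CIT40m', freq=freq)   # load the 40m budget directory
trace = budget.run()                                # evaluate all noise terms
trace.plot()                                        # stacked noise-budget plot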
I set up our fork in this way to keep the 40m separate from the main pygwinc code (i.e., not added as a built-in IFO type). With the 40m code all contained within one root-level directory (with a 40m-specific name), we should always be able to upgrade to the latest pygwinc without creating intractable merge conflicts. |
15305
|
Thu Apr 16 21:13:20 2020 |
Jon | Update | BHD | BHD optics specifications |
Summary
I've generated specifications for the new BHD optics. This includes the suspended relay mirrors as well as the breadboard optics (but not the OMCs).
To design the mode-matching telescopes, I updated the BHD mode-matching scripts to reflect Koji's draft layout (Dec. 2019) and used A La Mode to optimize ROCs and positions. Of the relay optics, only a few have an AOI small enough to allow curvature (to limit astigmatism), and most of those do not have much room to move. This considerably constrained the optimization.
These ROCs should be viewed as a first approximation. Many of the distances I had to eyeball from Koji's drawings. I also used the Gaussian PRC/SRC modes from the current IFO, even though both recycling cavities will change slightly. I set up a running list of items like these that we still need to resolve in the BHD README.
Optics Specifications
At a glance, all the specifications can be seen in the optics summary spreadsheet.
LO Telescope Design
The LO beam originates from the PR2 transmission (POP), near ITMX. It is relayed to the BHD beamsplitter (and mode-matched to the OMCs) via the following optical sequence:
- LM1 (ROC = +10 m, AOI ≈ 3°)
- LM2 (Flat, AOI ≈ 45°)
- MMT1 (Flat, AOI ≈ 5°)
- MMT2 (ROC = +3.5 m, AOI ≈ 5°)
The resulting beam profile is shown in Attachment 1.
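For orientation, the kind of Gaussian-beam calculation behind these profiles can be sketched with ABCD matrices. This is only an illustrative sketch: the ROCs are the LO-path values listed above, but the propagation distances and waists are placeholder numbers, not the layout values (the actual design was optimized with A La Mode).

import numpy as np

lam = 1064e-9  # wavelength [m]

def space(d):
    # free-space propagation by distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def mirror(R):
    # curved mirror at small AOI, radius of curvature R (R = inf for flat)
    return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

def propagate(q, *elements):
    # apply the ABCD elements to the complex beam parameter q, in order
    for M in elements:
        A, B, C, D = M.ravel()
        q = (A * q + B) / (C * q + D)
    return q

def mode_overlap(q1, q2):
    # power coupling between two Gaussian modes evaluated at the same plane
    return 4 * abs(q1.imag) * abs(q2.imag) / abs(np.conj(q1) - q2) ** 2

# input beam: placeholder waist at the starting plane (PR2 transmission)
w0 = 1.0e-3
q = 1j * np.pi * w0**2 / lam

d1, d2, d3, d4, d5 = 0.5, 1.0, 1.0, 0.5, 0.3   # placeholder distances [m]
q = propagate(q,
              space(d1), mirror(10.0),     # LM1, ROC = +10 m
              space(d2), mirror(np.inf),   # LM2, flat
              space(d3), mirror(np.inf),   # MMT1, flat
              space(d4), mirror(3.5),      # MMT2, ROC = +3.5 m
              space(d5))                   # to the OMC input coupler

q_omc = 1j * np.pi * (0.5e-3)**2 / lam     # placeholder OMC eigenmode
print('mode matching to OMC:', mode_overlap(q, q_omc))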
AS Telescope Design
The AS beam is relayed from the SRM to the BHD beamsplitter (and mode-matched to the OMCs) via the following sequence:
- AS1 (ROC = +1.5 m, AOI ≈ 3°)
- AS2 (Flat, AOI ≈ 45°)
- Lens (FL = -125 mm)
A lens is used because there is not enough room on the BHD breadboard for a pair of (low-AOI) telescope mirrors, like there is in the LO path. The resulting beam profile is shown in Attachment 2. |
Attachment 1: LO_Beam_Calc-v1.pdf
|
|
Attachment 2: AS_Beam_Calc-v1.pdf
|
|
15311
|
Thu Apr 23 09:52:02 2020 |
Jon | Update | Cameras | GigE w/ better NIR sensitivity |
Nice, and we should also permanently install the camera server (c1cam) which is still sitting on the electronics bench. It is running an adapted version of the Python 2/Debian 8 site code. Maybe if COVID continues long enough I'll get around to making the Python 3 version we've long discussed.
Quote: |
There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.
|
|
15334
|
Fri May 15 09:18:04 2020 |
Jon | Update | BHD | BHD telescope designs accounting for ASC |
Hang and I have reanalyzed the BHD telescope designs, with the goal of identifying sufficiently non-degenerate locations for ASC actuation. Given the limited room to reposition optics and the requirement to remain insensitive to small positioning errors, we conclude it is not possible to put sufficient Gouy phase separation between the AS1/AS2 and LO1/LO2 locations. However, we can make the current layout work if we instead actuate AS1/AS4 and LO1/LO4. This would require actuating one optic on the breadboard for each relay path. If possible, we believe this offers the simplest solution (i.e., the least modification to the current layout).
LO Telescope Design (Attachment 1)
Radii of curvature:
- LO1: +10 m
- LO2: flat
- LO3: +15 m
- LO4: flat
AS Telescope Design (Attachment 2)
Radii of curvature:
- AS1: +3 m
- AS2: flat
- AS3: -1 m
- AS4: flat
|
Attachment 1: LOpath.pdf
|
|
Attachment 2: ASpath.pdf
|
|
15379
|
Sat Jun 6 14:07:30 2020 |
Jon | Update | BHD | Stock-Part Mode-Matching Telescope Option |
Summary
For the initial phase of BHD testing, we recently discussed whether the mode-matching telescopes could be built with 100% stock optics. This would allow the optical system to be assembled more quickly and cheaply at a stage when having ultra-low loss and scattering is less important. I've looked into this possibility and conclude that, yes, we do have a good stock-optics option. In fact, it achieves comparable performance to our optimized custom-curvature design [ELOG 15357]. I think it is certainly sufficient for the initial phase of BHD testing.
Vendor
It turns out our usual suppliers (e.g., CVI, Edmunds) do not have enough stock options to meet our requirements. This is for two reasons:
- For sufficient LO1-LO2 (AS1-AS4) Gouy phase separation, we require a very particular ROC range for LO1 (AS1) of 5-6 m (2-3 m).
- We also require a 2" diameter for the suspended optics, which is a larger size than most vendors stock for curved reflectors (for example, CVI has no stock 2" options).
However, I found that Lambda Research Optics carries 1" and 2" super-polished mirror blanks in an impressive variety of stock curvatures. Moreover, they are polished to tolerances comparable to those I had specified for the custom low-scatter optics [DCC E2000296]: irregularity < λ/10 PV, 10-5 scratch-dig, ROC tolerance ±0.5%. They can be coated in-house for 1064 nm to our specifications.
From modeling Lambda's stock curvature options, I find it still possible to achieve mode-matching of 99.9% for the AS beam and 98.6% for the LO beam, if the optics are allowed to move ±1" from their current positions. The sensitivity to the optic positions is slightly increased compared to the custom-curvature design (but by < 1.5x). I have not run the stock designs through Hang's full MC corner-plot analysis which also perturbs the ROCs [ELOG 15339]. However for the early BHD testing, the sensitivity is secondary to the goal of having a quick, cheap implementation.
Stock-Part Telescope Designs
The following tables show the best telescope designs using stock curvature options, assuming the optics are free to move ±1" from their current positions. For comparison, the values from the custom-curvature design are also given in parentheses.
AS Path
The AS relay path is shown in Attachment 1:
- AS1-AS4 Gouy phase separation: 71°
- Mode-matching to OMC: 99.9%
Optic | ROC (m) | Distance from SRM AR (m)
AS1 | 2.00 (2.80) | 0.727 (0.719)
AS2 | Flat (Flat) | 1.260 (1.260)
AS3 | 0.20 (-2.00) | 1.864 (1.866)
AS4 | 0.75 (0.60) | 2.578 (2.582)
LO Path
The LO relay path is shown in Attachment 2:
- LO1-LO2 Gouy phase separation: 67°
- Mode-matching to OMC: 98.6%
Optic | ROC (m) | Distance from PR2 AR (m)
LO1 | 5.00 (6.00) | 0.423 (0.403)
LO2 | 1000 (1000) | 2.984 (2.984)
LO3 | 0.50 (0.75) | 4.546 (4.596)
LO4 | 0.15 (-0.45) | 4.912 (4.888)
Ordering Information
I've created a new tab in the BHD procurement spreadsheet ("Stock MM Optics Option") listing the part numbers for the above telescope designs, as well as their fabrication tolerances. The total cost is $2.8k + the cost of the coatings (I'm awaiting a quote from Lambda for the coatings). The good news is that all the curved substrates will receive the same HR/AR coatings, so I believe they can all be done in a single coating run. |
Attachment 1: ASpathStock.pdf
|
|
Attachment 2: LOpathStock.pdf
|
|
15382
|
Mon Jun 8 17:40:22 2020 |
Jon | Update | BHD | Astigmatism and scattering plots |
MM_total = (MM_vert + MM_horiz) / 2.
The large astigmatic MM loss in the LO case is mainly due to the strong LO4 curvature (R=0.15m) with a 10 deg AOI. I looked again at whether LO1 could be increased from R=5m to the next higher stock value of 7.5m, as this would allow weaker curvatures on LO3 and LO4. However, no, that is not possible---it reduces the LO1-LO2 Gouy phase separation to only 18 deg.
There is, however, a good stock-curvature option if we want to reconsider actuating LO4 instead of LO2 (attachment 1). It achieves 99.2% MM with the OMCs, allowing positions to vary +/-1" from the current design. The LO1-LO4 Gouy phase separation is 72 deg.
Optic | ROC (m) | Distance from PR2 AR (m)
LO1 | 10 | 0.378
LO2 | 1000 | 2.984
LO3 | 10 | 4.571
LO4 | 7.5 | 4.926
Alternatively, we could look at reducing the AOI on LO3 and LO4 (keeping LO1-LO2 actuation). |
Attachment 1: LOpathStock2.pdf
|
|
15384
|
Mon Jun 8 21:45:47 2020 |
Jon | Update | BHD | Astigmatism and scattering plots |
Hmm? T1300364 suggests MM_total = Sqrt(MM_Vert * MM_Horiz) |
15386
|
Tue Jun 9 14:55:43 2020 |
Jon | Update | BHD | MM telescope actuation range requirements |
I don't think we ever discussed why the angular RMS of the ETMs is so much higher than the ITMs. Maybe that's a separate matter because, even assuming the worst case, the actuation range requirement is
(0.82 μrad RMS) x (15 μrad/μrad) x (10 safety factor) = 0.12 mrad
which is still only order 1% of the pitch/yaw pointing range of the Small Optic Suspensions, according to P1600178 (sec. IV. A). Can we check this requirement off the list?
Quote: |
We computed the required actuation range for the telescope design in elog:15357. The result is summarized in the table below. Here we assume we misalign an IFO mirror by 1 urad, and then compute how many urad we need to move the (AS1, AS4) or (LO1, LO2) mirrors to simultaneously correct for the two Gouy phases.
Actuation requirement in urad per urad misalignment
[urad/urad] | ITMX | ITMY | ETMX | ETMY | BS | PRM | PR2 | PR3 | SR3 | SRM
AS1 | 1.9 | 2.1 | -5.0 | -5.5 | 0.5 | 0.5 | -0.3 | 0.2 | 0.1 | 0.6
AS4 | 2.9 | 2.0 | -8.8 | -5.5 | -5.9 | -0.7 | 1.3 | -0.7 | -0.5 | 0.7
LO1 | -4.0 | -3.9 | 11.0 | 10.4 | 1.9 | -0.4 | -0.2 | 0.1 | 0.0 | -1.1
LO2 | -5.0 | -3.7 | 15.1 | 10.4 | 8.7 | 0.8 | 1.9 | 1.1 | 0.7 | -1.3
The most demanding ifo mirrors are the ETMs and the BS: for every 1 urad of misalignment, the telescope needs to move 10-15 urad to correct for it. However, it is unlikely that those mirrors move more than 100 nrad for a locked ifo with ASC engaged. Thus a few urad of actuation should be sufficient. For the recycling mirrors, every 1 urad misalignment also requires ~1 urad of actuation.
As a result, if we could afford 10 urad actuation range for each telescope suspension, then the gouy phase separations we have should be fine.
================================================================
Edits:
We looked at the oplev spectra from gps 1274418500 for 512 sec. This should be a period when the ifo was locked in the PRFPMI state according to elog:15348. We just focused on the yaw data for now. Please see the attached plots. The solid traces are for the ASD, and the dotted ones are the cumulative rms. The total rms for each mirror is also shown in the legend.
I am now confused... The ITMs looked somewhat reasonable in that at least the < 1 Hz motion was suppressed. The total rms is ~ 0.1 urad, which was what I would expect naively (~ x100 times worse than aLIGO).
There seems to be no low-freq suppression on the ETMs though... Is there no arm ASC at the moment???
|
|
15389
|
Thu Jun 11 09:37:38 2020 |
Jon | Update | BHD | Conclusions on Mode-Matching Telescopes |
After further astigmatism/tolerance analysis [ELOG 15380, 15387], our conclusion is that the stock-optic telescope designs [ELOG 15379] are sufficient for the first round of BHD testing. However, for the final BHD hardware we should still plan to procure the custom-curvature optics [DCC E2000296]. The optimized custom-curvature designs are much more error-tolerant and have a high probability of achieving < 2% mode-matching loss. The stock-curvature designs can only guarantee about 95% mode-matching.
Below are the final distances between optics in the relay paths. The base set of distances is taken from the 2020-05-21 layout. To minimize the changes required to the CAD model, I was able to achieve near-maximum mode-matching by moving only one optic in each relay path. In the AS path, AS3 moves inwards (towards the BHDBS) by 1.06 cm. In the LO path, LO4 moves backwards (away from the BHDBS) by 3.90 cm.
AS Path
Interval | Distance (m) | Change (cm)
SRMAR-AS1 | 0.7192 | 0
AS1-AS2 | 0.5405 | 0
AS2-AS3 | 0.5955 | -1.06
AS3-AS4 | 0.7058 | -1.06
AS4-BHDBS | 0.5922 | 0
BHDBS-OMCIC | 0.1527 | 0
LO Path
Interval | Distance (m) | Change (cm)
PR2AR-LO1 | 0.4027 | 0
LO1-LO2 | 2.5808 | 0
LO2-LO3 | 1.5870 | 0
LO3-LO4 | 0.3691 | +3.90
LO4-BHDBS | 0.2573 | +3.90
BHDBS-OMCIC | 0.1527 | 0
|
15402
|
Tue Jun 16 13:35:03 2020 |
Jon | Update | VAC | Temporary vac fix / IFO usable again |
[Jon, Jordan, Koji]
Today Jordan reconfigured the vac system to allow pumping of the main volume to resume, with Jon and Koji advising remotely. All clear to resume normal IFO activities. However, the vac system is operating in a temporary configuration that will have to be reverted once we locate replacement components. Details below.
Procedure
Since serial readback of the TP2 controller seems to be failing, we reconfigured the system with TP3 now backing for TP1. TP2 was valved off (at V4) and shut down until we can replace its controller.
TP3 has its own problems, however. It was valved off in January after its temperature readback began glitching and spuriously triggering the interlocks [ELOG 15140]. However, the problem appears to be limited to only one readback (rotation speed, current, and voltage are fine), and there is enough redundancy in the pump-dependent interlock conditions to safely connect it to the main volume.
We also discovered that sometime since January, the TP3 dry pump has failed. The foreline pressure had risen to 165 torr. Since the TP2 and TP3 dry pumps are not interchangeable (Agilent vs. Varian), we instead valved in the auxiliary dry pump and disconnected the failed dry pump using a KF blank. This is a temporary arrangement until the permanent dry pump can be repaired. Jordan removed it to replace the tip seals and will test it in the bake lab before reinstalling.
With this configuration in place, we proceeded to pump down the main volume without issue (attachment 1). We monitored the pumpdown for about 45 min., until the pressure had reached ~1E-5 torr and TP3 had been transitioned to standby (low-speed) mode.
Summary of topology changes:
- TP2 valved off and shut down until controller can be replaced
- TP3 temporarily backing for TP1
- Auxiliary dry pump temporarily backing for TP3
- TP3 dry pump has been removed for repairs
|
Attachment 1: Pumpdown.png
|
|
15406
|
Thu Jun 18 11:00:24 2020 |
Jon | Update | VAC | Questions/comments on vacuum |
Quote: |
- Isn’t it true that we didn’t digitally monitor any of the TP diagnostic channels before 2018 December? I don’t have the full history but certainly there wasn’t any failure of the vacuum system connected to pump current/temp/speed from Sep 2015-Dec2018, whereas we have had 2 interruptions in 6 months because of flaky serial communications.
|
Looking at images of the old vac screens, the TP2/3 rotation speed and status string were digitally monitored. However I don't know if there were software interlocks predicated on those.
Quote: |
- According to the manuals, the turbo-pumps have their own internal logic to shut off the pump when either bearing temperature exceeds 60C or current exceeds 1.5A. I agree its good to have some redundancy, but do we really expect that our outer interlock loops will function if the internal ones fail?
|
The temperature and current interlocks are implemented precisely because the pumps can shut themselves off. The concern is not about damaging the pumps (their internal logic protects against that); it's that a pump could automatically shut down and back-vent the IFO to atmosphere. Another interlock (e.g., the pressure differentials) might catch it, but it would depend on the back-vent rate and the scenario has never been tested. The temperature and current interlocks are set to trip just before the pump reaches its internal shut-down threshold.
One way we might be able to reduce our reliance on the flaky serial readbacks is to implement rotation-speed hardware interlocks. The old vac documentation alludes to these, but as far as Chub and I could determine in 2018, they never actually existed. The older turbo controllers, at least, had an analog output proportional to speed which could be used to control a relay to interrupt the V4/5 control signals. I'll look into this for the new controllers. If it could be done, we could likely eliminate the layer of serial-readback interlocks altogether.
|
- I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.
|
That would be awesome if you're willing to volunteer. I agree this would be great to have. |
15408
|
Thu Jun 18 14:13:03 2020 |
Jon | Update | VAC | Questions/comments on vacuum |
I agree there were MEDM fields, but I can't find any record of these channels being recorded till 2018 December, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.
Right, I doubt they were ever recorded or used for interlocks. But the readbacks did work at one point in the past. There's a photo of the old vac monitor screen on p. 19 of E1500239 (last updated 2017) which shows the fields alive at one point.
Sorry but I'm having trouble imagining a scenario how the pressure gauges wouldn't register this before the IFO volume is compromised. Is there some back of the envelope calculations I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are being monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature / speed / current threshold tripped but NOT have this be registered on all the pressure gauges, seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope that would be caught by the differential pressure interlocks?​
I don't disagree that the pressure gauges would register the change. What I'm not sure about is whether the change would violate any of the existing interlock conditions, triggering a shutdown. Looking at what we have now, the only non-pump-related conditions I see that might catch it are the diffpres conditions:
- abs(P2 - PTP2) > 1 torr (for a TP2 failure)
- abs(P3 - PTP3) > 1 torr (for a TP3 failure)
- abs(P1a - P2) > 1 torr (for either a TP2 or TP3 failure)
For the P1a-P2 differential, the threshold of 1 torr is the smallest value that in practice still allows us to pump down the IFO without having to disable the interlocks (P1a-P2 is the TP1 intake/exhaust differential). The purpose of the P2-PTP2/P3-PTP3 differentials is to prevent V4/5 from opening and suddenly exposing the spinning turbo to high pressure. I'm not aware of a real damage threshold calculation that anyone has done; I think < 1 torr is lore passed down by Steve.
If a turbo pump fails, the rate at which it would backstream is unknown (to me, at least) and likely depends on the failure mode. The scenario I'm concerned about is the backstream rate being slow compared to the conduction time through the pumpspool and into the main volume. In that case, the pressure gauges will rise more or less together all the way up to atmosphere, likely never crossing the 1 torr differential pressure thresholds.
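For concreteness, the three conditions above amount to a check like the following (a sketch only; the gauge channel names are illustrative guesses, not the actual c1vac channel list):

from epics import caget

def diffpres_tripped():
    p1a  = caget('C1:Vac-P1a_pressure')    # hypothetical channel names
    p2   = caget('C1:Vac-P2_pressure')
    p3   = caget('C1:Vac-P3_pressure')
    ptp2 = caget('C1:Vac-PTP2_pressure')
    ptp3 = caget('C1:Vac-PTP3_pressure')
    return (abs(p2 - ptp2) > 1.0 or        # TP2 failure
            abs(p3 - ptp3) > 1.0 or        # TP3 failure
            abs(p1a - p2) > 1.0)           # TP1 intake/exhaust differential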
For the email alert, can you expose a soft channel that is a flag - if this flag is not 1, then the service will send out an email.
There's already a channel C1:Vac-error_status, where if the value is anything other than an empty string, there is an interlock tripped. Does that work? |
15412
|
Thu Jun 18 22:33:57 2020 |
Jon | Omnistructure | VAC | Vac hardware purchase list |
Replacement Hardware Purchase List
I've created a purchase list of hardware needed to restore the aging vacuum system. This wasn't planned as part of the BHD upgrade, but I've added it to the BHD procurement list since hardware replacements have become necessary.
The list proposes replacing the aging TP3 Varian turbo pump with the newer Agilent model which has already replaced TP2. It seems I was mistaken in believing we already had a second Agilent pump on hand. A thorough search of the lab has not turned it up, and Steve himself has told me he doesn't remember ordering a second one. Fortunately Steve did leave us a detailed Agilent parts list [ELOG 14322].
It also proposes replacing the glitching TP2 Agilent controller with a new one. The existing one can be sent back for repair and then retained as a spare. Considering that one of these controllers is already malfunctioning after < 2 years, I think it's a very good idea to have a spare on hand.
Known Hardware Issues
Below is our current list of vacuum hardware issues. Items that this purchase list will address (limited to only the most urgent) are highlighted in yellow.
- Replace the UPS
- Need a 240V socket for TP1 (currently TP1 is not protected from power loss)
- Need RS232/485 comms with the interlock server (current UPS: serial readbacks have failed, battery is failing)
- Remove/replace the failed pressure gauges (~5)
- Add more cold cathode sensors to the main volume for sensor redundancy (currently the main-volume interlocks rely on only 1 working sensor)
- Replace TP3 (controller is failing)
- Replace TP2 controller (serial interface has failed)
- Remove RP2
- Dead and also not needed. We already have to throttle the pumpdown rate with only two roughing pumps
- Remove/refurbish the cryopump
- Contamination risk to have it sitting connectable to the main volume
|
15413
|
Fri Jun 19 07:40:49 2020 |
Jon | Update | VAC | Questions/comments on vacuum |
I think we should discuss interlock possibilities at a 40m meeting. I'm reluctant to make the system more complicated, but perhaps we can find ways to reduce the reliance on the turbo pump readbacks. I agree they've proven to be the least reliable.
While we may be able to improve the tolerance to certain kinds of hardware malfunctions (and if so, we should), I don't see interlocks triggering on abnormal behavior of critical equipment as the root problem. As I see it, our bigger problem is with all the malfunctioning, mostly end-of-lifetime pieces of vacuum equipment still in use. If we can address the hardware problems, as I'm trying to do with replacements [ELOG 15412], I think that in itself will make the interlocking much less of an issue.
Quote: |
So why not just have a special mode for the interlock code during pumpdown and venting, and during normal operation we expect the main volume pressure to be <100uTorr so the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely to have some operator supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of its flakiness?). The Pirani gauges are not ultra-reliable, they have failed in the past, but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they don't signal a TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature increase and pump speed decrease, so it's not the individual channel values that should be determining if an interlock is tripped.
|
Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.
Quote: |
It would be better to have a flag channel, might be useful for the summary pages too. I will make it if it is too much trouble.
|
|
15421
|
Mon Jun 22 10:43:25 2020 |
Jon | Configuration | VAC | Vac maintenance at 11 am |
The vac system is going down at 11 am today for planned maintenance:
- Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
- Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
We will advise when the work is completed. |
15424
|
Mon Jun 22 20:06:06 2020 |
Jon | Configuration | VAC | Vac maintenance complete |
This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.
For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
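For reference, the notification itself boils down to something like the following. This is only a sketch: the addresses and SMTP host are placeholders, and the actual mailer in the vacuum git repo may be organized differently; it only assumes the password is stored in the system keyring, as described above.

import smtplib
import keyring
from email.message import EmailMessage

def notify_interlock(error_string):
    # Compose and send a short notice that an interlock condition has tripped
    msg = EmailMessage()
    msg['Subject'] = '[c1vac] Vacuum interlock tripped: ' + error_string
    msg['From'] = 'c1vac@example.org'       # placeholder sender address
    msg['To'] = '40m-list@example.org'      # placeholder list address
    msg.set_content('Interlock condition: ' + error_string)
    with smtplib.SMTP_SSL('smtp.example.org') as s:   # placeholder SMTP host
        s.login('c1vac', keyring.get_password('smtp', 'c1vac'))
        s.send_message(msg)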
Edit: The new interlock flag channel is named C1:Vac-interlock_flag.
Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.
The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍
Quote: |
The vac system is going down at 11 am today for planned maintenance:
- Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
- Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
|
|
Attachment 1: Pumpdown-6-22-20.png
|
|
15446
|
Wed Jul 1 18:03:04 2020 |
Jon | Configuration | VAC | UPS replacements |
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
- Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
- Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager (see the sketch below this list).
- Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.
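As an illustration of the second option above, the interlock-manager side could be as simple as a UDP listener like this (a sketch only; the port and the keyword matched in the message text are assumptions about how PowerAlert would be configured):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 514))                 # standard syslog port (binding it requires privileges)
while True:
    data, addr = sock.recvfrom(1024)
    msg = data.decode(errors='replace')
    if 'on battery' in msg.lower():  # assumed keyword in the UPS event text
        print('UPS event:', msg)     # here the interlock manager would take action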
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant. |
15456
|
Mon Jul 6 15:10:40 2020 |
Jon | Summary | BHD | 40m --> A+ BHD design analysis |
As suggested last week, Hang and I have reviewed the A+ BHD status (DRD, CDD, and reviewers' comments) and compiled a list of key unanswered questions which could be addressed through Finesse analysis.
In anticipation of others helping with this modeling effort, we've tried to break questions into self-contained projects and estimated their level of difficulty. As you'll see, they range from beginner to Finesse guru. |
15462
|
Thu Jul 9 16:02:33 2020 |
Jon | HowTo | CDS | Procedure for setting up BHD front-ends |
Here is the procedure for setting up the three new BHD front-ends (c1bhd, c1sus2, and a replacement c1ioo). This plan is based on technical advice from Rolf Bork and Keith Thorne.
The overall topology for each machine is shown here. As all our existing front-ends use (obsolete) Dolphin PCIe Gen1 cards for IPC, we have elected to re-use Dolphin Gen1 cards removed from the sites. Different PCIe generations of Dolphin cards cannot be mixed, so the only alternative would be to upgrade every 40m machine. However, the drivers for these Gen1 cards were last updated in 2016. Consequently, they do not support the latest Linux kernel (4.x), which forces us to install a near-obsolete OS (Debian 8) for compatibility.
Hardware
- IPC cards: Dolphin DXH510-A0 (PCIe x4 Gen1) [LLO will provide; I've asked Keith Thorne to ship them]
Software
- OS: Debian 8.11 (Linux kernel 3.16)
- IPC card driver: Dolphin DX 4.4.5 [works only with Linux kernel 2.6 to 3.x]
- I/O card driver: None required, per the manual
Install Procedure
- Follow Keith Thorne's procedure for setting up Debian 8 front-ends
- Apply the real-time kernel patches developed for Debian 9, but modified for kernel 3.16 [these are UNTESTED against Debian 8; Keith thinks they may work, but they weren't discovered until after the Debian 9 upgrade]
- Install the PCIe expansion cards and Dolphin DX driver (driver installation procedure)
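As a quick sanity check after the install (not part of Keith's procedure, just a sketch), the running kernel can be confirmed to fall in the 2.6.x-3.x range supported by the Dolphin DX 4.4.5 driver:

import platform

release = platform.release()                          # e.g. '3.16.0-11-amd64'
major, minor = (int(x) for x in release.split('.')[:2])
supported = (major == 2 and minor >= 6) or major == 3
print('kernel {}: Dolphin DX driver supported = {}'.format(release, supported))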
|
15465
|
Thu Jul 9 18:00:35 2020 |
Jon | Configuration | VAC | UPS replacements |
Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).
They will arrive within the next two weeks.
Quote: |
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
- Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
- Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
- Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.
|
|
15499
|
Thu Jul 23 15:58:24 2020 |
Jon | Summary | VAC | Vacuum controls refurbishment plan |
This year we've struggled with vacuum controls unreliability (e.g., spurious interlock triggers) caused by decaying hardware. Here are details of the vacuum refurbishment plan I described on the 40m call this week.
☑ Refurbish TP2 and TP3 dry pumps. Completed [ELOG 15417].
☑ Automated notifications of interlock-trigger events. Email to 40m list and a new interlock flag channel. Completed [ELOG 15424].
☐ Replace failing UPS.
- Two new Tripp Lite units on order, 110V and 230V [ELOG 15465].
- Jordan will install them in the vacuum rack once received.
- Once installed, Jon will come test the new units, set up communications, and integrate them into the interlock system following this plan [ELOG 15446].
- Jon will move the pumps and other equipment to the new UPS units only after completing the above step.
☐ Remove interlock dependencies on TP2/TP3 serial readbacks. Due to persistent glitching [ELOG 15140, ELOG 15392].
Unlike TP2 and TP3, the TP1 readbacks are real analog signals routed to Acromags. As these have caused us no issues at all, the plan is to eliminate dependence on the TP2/3 digital readbacks in favor of the analog controller outputs. All the digital readback channels will continue to exist, but the interlock system will no longer depend on them. This will require adding 2 new sinking BI channels each for TP2 and TP3 (for a total of 4 new channels). We have 8 open Acromag XT1111 channels in the c1vac system [ELOG 14493], so the new channels can be accommodated. The below table summarizes the proposed changes.
Channel | Type | Status | Description | Interlock
C1:Vac-TP1_current | AI | exists | Current draw (A) | keep
C1:Vac-TP1_fail | BI | exists | Critical fault has occurred | keep
C1:Vac-TP1_norm | BI | exists | Rotation speed is within +/-10% of set point | new
C1:Vac-TP2_rot | soft | exists | Rotation speed (krpm) | remove
C1:Vac-TP2_temp | soft | exists | Temperature (C) | remove
C1:Vac-TP2_current | soft | exists | Current draw (A) | remove
C1:Vac-TP2_fail | BI | new | Critical fault has occurred | new
C1:Vac-TP2_norm | BI | new | Rotation speed is >80% of set point | new
C1:Vac-TP3_rot | soft | exists | Rotation speed (krpm) | remove
C1:Vac-TP3_temp | soft | exists | Temperature (C) | remove
C1:Vac-TP3_current | soft | exists | Current draw (A) | remove
C1:Vac-TP3_fail | BI | new | Critical fault has occurred | new
C1:Vac-TP3_norm | BI | new | Rotation speed is >80% of set point | new
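Based on the channel assignments above, the corresponding software interlock predicate would look roughly like this (a sketch only; the valve-request channel names are placeholders, not actual c1vac channels):

from epics import caget, caput

def tp_ok(pump):
    # True if the pump reports no critical fault and a normal rotation speed
    ok_speed = bool(caget('C1:Vac-{}_norm'.format(pump)))
    no_fault = not bool(caget('C1:Vac-{}_fail'.format(pump)))
    return ok_speed and no_fault

def enforce_valve_interlocks():
    if not tp_ok('TP2'):
        caput('C1:Vac-V4_request', 0)   # placeholder channel: close V4 if TP2 is abnormal
    if not tp_ok('TP3'):
        caput('C1:Vac-V5_request', 0)   # placeholder channel: close V5 if TP3 is abnormal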
|
15501
|
Mon Jul 27 15:48:36 2020 |
Jon | Summary | VAC | Vacuum parts ordered |
To carry out the next steps of the vac refurbishment plan [ELOG 15499], I've ordered parts necessary for interfacing the UPS units and the analog TP2/3 controller outputs with c1vac. The purchase list is appended to the main BHD list and is located here. Some parts we already had in the boxes of Acromag materials. Jordan is gathering what we do already have and staging it on the vacuum controls console table - please don't move them or put them away.
Quote: |
☐ Replace failing UPS.
☐ Remove interlock dependencies on TP2/TP3 serial readbacks. Due to persistent glitching [ELOG 15140, ELOG 15392].
|
|
15502
|
Tue Jul 28 12:22:40 2020 |
Jon | Update | VAC | Vac interlock test today 1:30 pm |
This afternoon Jordan is going to carry out a test of the V4 and V5 hardware interlocks. To inform the interlock improvement plan [15499], we need to characterize exactly how these work (they pre-date the 2018 upgrade). I have provided him a sequence of steps for each test and will also be backing him up on Zoom.
We will close V1 as a precaution but there should be no other impact to the IFO. The tests are expected to take <1 hour. We will advise when they are completed. |
15504
|
Tue Jul 28 14:11:14 2020 |
Jon | Update | VAC | Vac interlock test today 1:30 pm |
This test has been completed. The IFO configuration has been reverted to nominal.
For future reference: yes, both the V4 and V5 hardware interlocks were found to still be connected and working. A TTL signal from the analog output port of each pump controller (TP2 and TP3) is connected to an auxiliary relay inside the main valve relay box. These relays serve to interrupt the (Acromag) control signal to the primary V4/5 relay. The interrupt is triggered by each pump's R1 setpoint signal, which is programmed to go low when the rotation speed falls below 80% of the low-speed setting.
Quote: |
This afternoon Jordan is going to carry out a test of the V4 and V5 hardware interlocks. To inform the interlock improvement plan [15499], we need to characterize exactly how these work (they pre-date the 2018 upgrade). I have provided him a sequence of steps for each test and will also be backing him up on Zoom.
We will close V1 as a precaution but there should be no other impact to the IFO. The tests are expected to take <1 hour. We will advise when they are completed.
|
|
15525
|
Fri Aug 14 10:03:37 2020 |
Jon | Update | CDS | Timing distribution slot availability |
That's great news: we won't have to worry about a new timing fanout for the two new machines, c1bhd and c1sus2. And there's no plan to change the Dolphin IPC drivers. The plan is only to install the same (older) version of the driver on the two new machines and plug into free slots in the existing switch.
Quote: |
The new dolphin eventually helps us. But the installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.
|
|
15526
|
Fri Aug 14 10:10:56 2020 |
Jon | Configuration | VAC | Vacuum repairs today |
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up. |
15527
|
Sat Aug 15 02:02:13 2020 |
Jon | Configuration | VAC | Vacuum repairs today |
Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.
I did not get to setting up the new UPS units. That will have to be scheduled for another day.
Quote: |
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.
|
|
15528
|
Sat Aug 15 15:12:22 2020 |
Jon | Configuration | VAC | Overhaul of small turbo pump interlocks |
Summary
Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.
Interlock signal
Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, the TP2(3) controllers instead output an energized 24V signal intended to drive such a relay (output circuit pictured below). I hadn't appreciated this difference, and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted to the second DIN rail, opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.
[Figure: TP2(3) controller output circuit]
Signal routing
The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.
[Figure: TP2(3) normal-speed signal routing]
Interlock conditions
The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.
Channel | Type | New? | Interlock-triggering condition
C1:Vac-TP1_norm | BI | No | Rotation speed < 90% nominal setpoint (29 krpm)
C1:Vac-TP1_fail | BI | No | Critical fault occurrence
C1:Vac-TP1_current | AI | No | Current draw > 4 A
C1:Vac-TP2_norm | BI | Yes | Rotation speed < 80% nominal setpoint (52.8 krpm)
C1:Vac-TP3_norm | BI | Yes | Rotation speed < 80% nominal setpoint (40 krpm)
There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well.
The new analog readbacks have been added to the MEDM controls screens, circled below:
[Figure: MEDM screen with the new analog readback channels circled]
Other incidental repairs
- I replaced the (dead) LED monitor at the vac controls console. In the process of finding a replacement, I came across another dead spare monitor as well. Both have been labeled "DEAD" and moved to Jordan's desk for disposal.
- I found the current TP3 Varian V70D controller to be just as glitchy in its analog outputs. That likely indicates a problem with the microprocessor itself, not just the serial communications card, as I had thought might be the case. I replaced the controller with the spare unit which was mounted right next to it in the rack [ELOG 13143]. The new unit has not glitched since I installed it around 10 pm last night.
|
Attachment 1: small_tp_signal_routing.png
|
|
Attachment 3: small_tp_signal_routing.png
|
|
Attachment 4: medm_screen.png
|
|
15537
|
Mon Aug 24 08:13:56 2020 |
Jon | Update | VAC | UPS installation |
I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time. |
15538
|
Mon Aug 24 11:25:07 2020 |
Jon | Update | VAC | UPS installation |
I'm leaving the lab shortly. We're not ready to switch over the vac equipment to the new UPS units yet.
The 120V UPS is now running and interfaced to c1vac via a USB cable. The unofficial tripplite python package is able to detect and connect to the unit, but then read queries fail with "OS Error: No data received." The firmware has a different version number from what the developers say is known to be supported.
The 230V UPS is actually not correctly installed. For input power, it has a standard C14 inlet, which is currently plugged into a 120V power strip. However, this unit has to be powered from a 230V outlet. We'll have to identify and buy the correct adapter cable.
With the 120V unit now connected, I can continue to work on interfacing it with python remotely. The next implementation I'm going to try is item #2 of this plan [ELOG 15446].
Quote: |
I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.
|
|
15556
|
Fri Sep 4 15:26:55 2020 |
Jon | Update | VAC | Vac system UPS installation |
The vac controls are going down now to pull and test software changes. Will advise when the work is completed. |
15557
|
Fri Sep 4 21:12:51 2020 |
Jon | Update | VAC | Vac system UPS installation |
The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.
Quote: |
The vac controls are going down now to pull and test software changes. Will advise when the work is completed.
|
|
15558
|
Sat Sep 5 12:01:10 2020 |
Jon | Update | VAC | Vac system UPS installation |
Summary
Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follow below.
Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.
New EPICS channels
Four new soft channels per UPS have been created, although the interlocks are currently predicated on only C1:Vac-UPS120V_status.
Channel | Type | Description | Units
C1:Vac-UPS120V_status | stringin | Operational status | -
C1:Vac-UPS120V_battery | ai | Battery remaining | %
C1:Vac-UPS120V_line_volt | ai | Input line voltage | V
C1:Vac-UPS120V_line_freq | ai | Input line frequency | Hz
C1:Vac-UPS240V_status | stringin | Operational status | -
C1:Vac-UPS240V_battery | ai | Battery remaining | %
C1:Vac-UPS240V_line_volt | ai | Input line voltage | V
C1:Vac-UPS240V_line_freq | ai | Input line frequency | Hz
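As described in the Summary, the interlock manager uses C1:Vac-UPS120V_status to start a controlled shutdown after an extended outage. The core of that logic is roughly the following (a sketch; the ~30 s timeout and polling details are illustrative). NUT reports 'OL' for online and 'OB' for on-battery.

import time
from epics import caget

def wait_for_extended_outage(timeout=30):
    # Return once the UPS has reported on-battery ('OB') continuously for > timeout seconds
    t_onbatt = None
    while True:
        status = caget('C1:Vac-UPS120V_status', as_string=True) or ''
        if 'OB' in status:
            if t_onbatt is None:
                t_onbatt = time.time()
            elif time.time() - t_onbatt > timeout:
                return             # the caller would then close valves and spin down pumps
        else:
            t_onbatt = None
        time.sleep(2)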
These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1.
Continuing issues with 230V UPS
Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15P) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated 120 deg in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested to try powering it instead from a single-phase 240V outlet (L6-20R). However we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.
This UPS nominally requires 230V single-phase. I don't understand well enough how the line-noise-isolation electronics work internally, so I can think of three possible explanations:
- 208V AC is insufficient to power the unit.
- The unit requires a true neutral wire (i.e., not a split-phase configuration), in which case it is not compatible with the U.S. power grid.
- The unit is defective.
I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.
@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I already have developed the code to interface those.
UPS-host computer communications
Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.
I document the full set-up procedure below, as we may want to use this with more USB devices in the future.
How to set up
First, install the NUT package and its Python binding:
$ sudo apt install nut python-nut
This automatically creates (and starts) a set of systemd services which fail, as expected, since we have not yet set up the config files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:
$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload
Next copy the NUT config. files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.
$ sudo cp /opt/target/services/nut/* /etc/nut
Now we are ready to start the NUT server, and then enable it to automatically start after reboots:
$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service
If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with
$ upsc 120v
which will print to the terminal screen something like
battery.charge: 100
battery.runtime: 1215
battery.type: PbAC
battery.voltage: 13.5
battery.voltage.nominal: 12.0
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.productid: 2010
driver.parameter.vendorid: 09ae
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 60.1
input.voltage: 120.3
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage.nominal: 120
ups.beeper.status: enabled
ups.delay.shutdown: 20
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
ups.power.nominal: 1000
ups.productid: 2010
ups.status: OL
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0
Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.
If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings are provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage. |
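For completeness, the readout amounts to something like the following (a sketch only: this version shells out to the upsc client rather than using the Python binding, and the parsing and channel writes are illustrative):

import subprocess
from epics import caput

def read_ups(name='120v'):
    # Query NUT via the upsc client and parse the 'key: value' lines into a dict
    out = subprocess.check_output(['upsc', name]).decode()
    return dict(line.split(': ', 1) for line in out.splitlines() if ': ' in line)

vals = read_ups('120v')
caput('C1:Vac-UPS120V_status', vals.get('ups.status', ''))
caput('C1:Vac-UPS120V_battery', float(vals.get('battery.charge', 'nan')))
caput('C1:Vac-UPS120V_line_volt', float(vals.get('input.voltage', 'nan')))
caput('C1:Vac-UPS120V_line_freq', float(vals.get('input.frequency', 'nan')))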
Attachment 1: vac_medm.png
|
|
15560
|
Sun Sep 6 13:15:44 2020 |
Jon | Update | DAQ | UPS for framebuilder |
Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min. and possibly as long as 30 min., depending on the exact load.
@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11. |
15561
|
Sun Sep 6 14:17:18 2020 |
Jon | Update | Equipment loan | Zurich Instruments analyzer |
On Friday, I grabbed the Zurich Instruments HF2LI lock-in amplifier and brought it home. As time permits, I will work towards developing a similar readout script as we have for the SR785. |
15567
|
Thu Sep 10 15:43:22 2020 |
Jon | Update | BHD | Input noise spectra for A+ BHD modeling |
As promised some time ago, I've obtained input noise spectra from the sites calibrated to physical units. They are located in a new subdirectory of the BHD repo: A+/input_noises. I've heavily annotated the notebook that generates them (input_noises.ipynb) with aLOG references, to make it transparent what filters, calibrations, etc. were applied and when the data were taken. Each noise term is stored as a separate HDF5 file, which are all tracked via git LFS.
So far there are measurements of the following sources:
- L1 SRCL
- H1 SRCL
- L1 DHARD PIT
- L1 DSOFT PIT
- L1 CSOFT PIT
- L1 CHARD PIT
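Each of these is stored as a separate HDF5 file under A+/input_noises. Loading one for use in a model looks roughly like this (a sketch; the file and dataset names here are hypothetical, see input_noises.ipynb for the actual layout):

import h5py
import numpy as np

with h5py.File('A+/input_noises/L1_SRCL.h5', 'r') as f:   # hypothetical filename
    freq = np.array(f['freq'])   # frequency vector [Hz] (hypothetical dataset name)
    asd  = np.array(f['asd'])    # calibrated noise spectrum (hypothetical dataset name)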
These can be used, for example, to make Hang's bilinear noise modeling [ELOG 15503] and Yehonathan's Monte Carlo simulations [ELOG 15539] more realistic. Let me know if there are other specific noises of interest and I will try to acquire them. It's a bit time-consuming to search out individual channel calibrations, so I will have to add them on a case-by-case basis. |
15577
|
Wed Sep 16 12:03:07 2020 |
Jon | Update | VAC | Replacing pressure gauges |
Below is the assembled list of dead pressure gauges. Their locations are also circled in Attachment 1.
Gauge | Type | Location
CC1 | Cold cathode | Main volume
CC3 | Cold cathode | Pumpspool
CC4 | Cold cathode | RGA chamber
CCMC | Cold cathode | IMC beamline near MC2
P1b | Pirani | Main volume
PTP1 | Pirani | TP1 foreline
For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would save money overall by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.
For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls. |
Attachment 1: vac_gauges.png
|
|
15692
|
Wed Dec 2 12:27:49 2020 |
Jon | Update | VAC | Replacing pressure gauges |
Now that the new Agilent full-range gauges (FRGs) have been received, I'm putting together an installation plan. Since my last planning note in Sept. (ELOG 15577), two more gauges appear to be malfunctioning: CC2 and PAN. Those are taken into account, as well. Below are the proposed changes for all the sensors in the system.
In summary:
- Four of the FRGs will replace CC1/2/3/4.
- The fifth FRG will replace CCMC if the 15.6 m cable (the longest available) will reach that location.
- P2 and P3 will be moved to replace PTP1 and PAN, as they will be redundant once the new FRGs are installed.
Required hardware:
- 3x CF 2.75" blanks
- 10x CF 2.75" gaskets
- Bolts and nut plates
Volume | Sensor Location | Status | Proposed Action
Main | P1a | functioning | leave
Main | P1b | local readback only | leave
Main | CC1 | dead | replace with FRG
Main | CCMC | dead | replace with FRG*
Pumpspool | PTP1 | dead | replace with P2
Pumpspool | P2 | functioning | replace with 2.75" CF blank
Pumpspool | CC2 | intermittent | replace with FRG
Pumpspool | PTP2 | functioning | leave
Pumpspool | P3 | functioning | replace with 2.75" CF blank
Pumpspool | CC3 | dead | replace with FRG
Pumpspool | PTP3 | functioning | leave
Pumpspool | PRP | functioning | leave
RGA | P4 | functioning | leave
RGA | CC4 | dead | replace with FRG
RGA | IG1 | dead | replace with 2.75" CF blank
Annuli | PAN | intermittent | replace with P3
Annuli | PASE | functioning | leave
Annuli | PASV | functioning | leave
Annuli | PABS | functioning | leave
Annuli | PAEV | functioning | leave
Annuli | PAEE | functioning | leave
Quote: |
For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would net save money by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.
For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls.
|
|
15703
|
Thu Dec 3 14:53:58 2020 |
Jon | Update | VAC | Replacing pressure gauges |
Update to the gauge replacement plan (15692), based on Jordan's walk-through today. He confirmed:
- All of the gauges being replaced are mounted via 2.75" ConFlat flange. The new FRGs have the same footprint, so no adapters are required.
- The longest Agilent cable (50 ft) will NOT reach the CCMC location. The fifth FRG will have to be installed somewhere closer to the X-end.
Based on this info (and word from Gautam that the PAN gauge is still working), I've updated the plan as follows. In summary, I now propose we install the fifth FRG in the TP1 foreline (PTP1 location) and leave P2 and P3 where they are, as they are no longer needed elsewhere. Any comments on this plan? I plan to order all the necessary gaskets, blanks, etc. tomorrow.
Volume | Sensor Location | Status | Proposed Action
Main | P1a | functioning | leave
Main | P1b | local readback only | leave
Main | CC1 | dead | replace with FRG
Main | CCMC | dead | remove; cap with 2.75" CF blank
Pumpspool | PTP1 | dead | replace with FRG
Pumpspool | P2 | functioning | leave
Pumpspool | CC2 | dead | replace with FRG
Pumpspool | PTP2 | functioning | leave
Pumpspool | P3 | functioning | leave
Pumpspool | CC3 | dead | replace with FRG
Pumpspool | PTP3 | functioning | leave
Pumpspool | PRP | functioning | leave
RGA | P4 | functioning | leave
RGA | CC4 | dead | replace with FRG
RGA | IG1 | dead | remove; cap with 2.75" CF blank
Annuli | PAN | functioning | leave
Annuli | PASE | functioning | leave
Annuli | PASV | functioning | leave
Annuli | PABS | functioning | leave
Annuli | PAEV | functioning | leave
Annuli | PAEE | functioning | leave
|
15724
|
Thu Dec 10 13:05:52 2020 |
Jon | Update | VAC | UPS failure |
I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.
From the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac), and that the system was down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., many per minute) stopping abruptly at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. All of this is consistent with a sudden UPS failure.
According to the Tripp Lite manual, the FAULT icon indicates that "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this: after the dry pump failed, the rising pressure in the TP2 foreline drove TP2's current draw well above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide problem is that this overloaded the UPS before tripping TP2's internal protection circuitry, which would otherwise have shut down the pump and triggered interlocks and auto-notifications.
Preventing this in the future:
First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we now have in hand) was to relieve the smaller 120V unit. I envision moving the turbo pumps, gauge controllers, etc. to the 5 kVA unit and reserving the 1 kVA unit for the c1vac computer and its peripherals. We should make it a priority to get the new UPS installed.
Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting that machine's alive status. I don't think they're currently monitored by any auto-notification program running on a central machine, but they could be; maybe code already exists that could be co-opted for this purpose (a minimal sketch of such a monitor follows below). There is an MEDM screen displaying the slow-machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is currently the only way I know to catch sudden failures of the control computer itself. |
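Here is a minimal watchdog sketch, assuming pyepics is available on whichever central machine runs it; the heartbeat channel name below is a placeholder, not a real channel:

# Minimal blinker watchdog sketch. The channel name is a placeholder --
# substitute the real 1 Hz heartbeat channels for c1vac and the other hosts.
import time
from epics import caget

HEARTBEATS = ['C1:VAC-BLINKER']   # placeholder channel name(s)

def is_alive(ch, n=4, dt=0.7):
    """A healthy 1 Hz blinker should change value at least once over ~3 s."""
    vals = []
    for _ in range(n):
        vals.append(caget(ch))
        time.sleep(dt)
    return None not in vals and len(set(vals)) > 1

while True:
    for ch in HEARTBEATS:
        if not is_alive(ch):
            print(f"ALERT: {ch} is not blinking -- host may be down")  # hook in mail/SMS here
    time.sleep(60)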
Attachment 1: TP2_time_history.png
Attachment 2: slow_controls_monitors.png
15729
|
Thu Dec 10 17:12:43 2020 |
Jon | Update | | New SMA cables on order |
As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.
They're expected to arrive mid next week. |
15738
|
Fri Dec 18 22:59:12 2020 |
Jon | Configuration | CDS | Updated CDS upgrade plan |
Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:
- Existing FEs stay where they are (they are not moved to a single rack)
- Dolphin IPC remains PCIe Gen 1
- RFM network is entirely replaced with Dolphin IPC
Please send me any omissions or corrections to the layout. |
Attachment 1: CDS_2020_Dec.pdf
Attachment 2: CDS_2020_Dec.graffle
15739
|
Sat Dec 19 00:25:20 2020 |
Jon | Update | | New SMA cables on order |
I re-ordered the cables quoted below, this time going with flexible, double-shielded RG316-DS. Jordan will pick up and return the RG-405 cables after the holidays.
Quote: |
As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.
|
|
15764
|
Thu Jan 14 12:19:43 2021 |
Jon | Update | CDS | Expansion chassis from LHO |
That's fine, we didn't actually request those. We bought, and already have in hand, new PCIe x4 cables for the chassis-host connection. They're 3 m copper cables, which was based on the assumption at the time that host and chassis would be installed in the same rack.
Quote: |
- Regarding the fibers - one of the fibers is pre-2012. These are known to fail (according to Rolf). One of the two that LHO shipped is from 2012 (judging by S/N, I can't find an online lookup for the serial number), the other is 2011. IIRC, Rolf offered us some fibers so we may want to take him up on that. We may also be able to use copper cables if the distances b/w server and expansion chassis are short.
|
|
15766
|
Fri Jan 15 15:06:42 2021 |
Jon | Update | CDS | Expansion chassis from LHO |
Koji asked me to assemble a detailed breakdown of the parts received from LHO, which I have done based on the high-res photos that Gautam posted of the shipment.
Parts in hand:
Qty | Part | Note(s)
2 | Chassis body |
2 | Power board and cooling fans | As noted in 15763, these have the standard LIGO +24V input connector which we may want to change
2 | IO interface backplane |
2 | PCIe backplane |
2 | Chassis-side OSS PCIe x4 card |
2 | CX4 fiber cables | These were not requested and are not needed
Parts still needed:
Qty | Part | Note(s)
2 | Host-side OSS PCIe x4 card | These were requested but missing from the LHO shipment
2 | Timing slave | These were not originally requested, but we have recently learned they will be replaced at LHO soon
Issue with PCIe slots in new FEs
Also, I looked into the mix-up regarding the number of PCIe slots in the new Supermicro servers. The motherboard actually has six PCIe slots and is on the CDS list of boards known to be compatible. The mistake (mine) was in selecting a low-profile (1U) chassis that only exposes one of these slots. But at least it's not a fundamental limitation.
One option is to install an external PCIe expansion chassis that would be rack-mounted directly above the FE. It is automatically configured by the system BIOS, so it doesn't require any special drivers. It also supports hot-swapping of PCIe cards. There are also cheap ribbon-cable riser cards that would allow more cards to be connected for testing, although these are less suitable for permanent mounting.
It may still be better to use the machines offered by Keith Thorne from LLO, as they're more powerful anyway. But if there is going to be an extended delay before those can be received, we should be able to use the machines we already have in conjunction with one of these PCIe expansion options. |
15770
|
Tue Jan 19 13:19:24 2021 |
Jon | Update | CDS | Expansion chassis from LHO |
Indeed T1800302 is the document I was alluding to, but I completely missed the statement about >3 GHz speed. There is an option for 3.4 GHz processors on the X10SRi-F board, but back in 2019 I chose against it because it would have doubled the cost of the systems. At the time I thought I had saved us $5k. Hopefully we can get the LLO machines in the near term. But if not, I wonder whether it's worth testing one of these to see if the performance is tolerable.
Can you please provide a link to this "list of boards"? The only document I can find is T1800302....
|
I confirm that PCIe 2.0 motherboards are backwards compatible with PCIe 1.x cards, so there's no hardware issue. My main concern is whether the obsolete Dolphin drivers (requiring linux kernel <=3.x) will work on a new system, albeit one running Debian 8. The OSS PCIe card is automatically configured by the BIOS, so no external drivers are required for that one.
Please also confirm that there are no conflicts w.r.t. the generation of PCIe slots, and the interfaces (Dolphin, OSSI) we are planning to use - the new machines we have are "PCIe 2.0" (though i have no idea if this is the same as Gen 2).
|
|
15771
|
Tue Jan 19 14:05:25 2021 |
Jon | Configuration | CDS | Updated CDS upgrade plan |
I've produced updated diagrams of the CDS layout, taking the comments in 15476 into account. I've also converted the 40m's diagrams from OmniGraffle ($150/license) to the free, cloud-based platform draw.io. I had never heard of draw.io, but I found that it has most of the same functionality. It also integrates nicely with Google Drive.
Attachment 1: The planned CDS upgrade (2 new FEs, fully replace RFM network with Gen 1 Dolphin IPC)
Attachment 2: The current 40m CDS topology
The most up-to-date diagrams are hosted at the following links:
Please send me any further corrections or omissions. Anyone logged in with LIGO.ORG credentials can also directly edit the diagrams. |
Attachment 1: 40m_CDS_Network_-_Planned.pdf
Attachment 2: 40m_CDS_Network_-_Current.pdf
15842
|
Wed Feb 24 22:13:47 2021 |
Jon | Update | CDS | Planning document for front-end testing |
I've started writing up a rough testing sequence for getting the three new front-ends operational (c1bhd, c1sus2, c1ioo). Since I anticipate this plan undergoing many updates, I've set it up as a Google doc which everyone can edit (log in with LIGO.ORG credentials).
Link to planning document
Please have a look and add any more tests, details, or concerns. I will continue adding to it as I read up on CDS documentation. |
15872
|
Fri Mar 5 17:48:25 2021 |
Jon | Update | CDS | Front-end testing |
Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.
I/O Chassis Assembly
- LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
- Timing slave installed
- Contec DO-1616L-PE card installed for timing control
- One 16-bit ADC and one 32-channel DO module were installed for testing
The chassis was then powered on, and LEDs illuminated indicating that all the components have power. The assembled chassis is pictured in Attachment 2.
Chassis-Host Communications Testing
Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:
07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
I/O behind bridge: 00002000-00002fff
Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
Capabilities: [40] Power Management version 2
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [60] Express Downstream Port (Slot+), MSI 00
Capabilities: [80] Subsystem: Device 0000:0000
Kernel driver in use: pcieport
However, the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now, with it connected to the IO chassis, the card is still not detected. On the chassis-side OSS card there is a red LED illuminated indicating "HOST CARD RESET", as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done (a quick detection-check sketch follows below). |
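For the record, the detection check above boils down to looking for the expected devices in lspci output. A small helper along these lines makes the pass/fail obvious; the vendor search strings are my assumptions about how the cards identify themselves (the Dolphin card shows up as "Stargen Inc." above), so adjust them to match the actual output:

# Quick pass/fail check for the expected PCIe adapter cards in lspci output.
# The search strings are assumptions -- edit them to match the real hardware IDs.
import subprocess

lspci = subprocess.run(['lspci'], capture_output=True, text=True).stdout.lower()
for name, pattern in [('Dolphin host adapter', 'stargen'),
                      ('OSS host adapter', 'one stop systems')]:
    print(f"{name}: {'detected' if pattern in lspci else 'NOT detected'}")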
Attachment 1: image_67203585.JPG
Attachment 2: image_67216641.JPG
Attachment 3: image_17185537.JPG