I've checked the state of the laser interlock switch and everything looked normal.
Sigh... hard lock
[Mirko / Kiwamu]
The resonant box has been installed together with a 3 dB attenuator.
The demodulation phase of the MC lock was readjusted and the MC is now happily locked.
We needed more modulation depth at each modulation frequency, so we installed the resonant box to amplify the signal levels.
Since the resonant box isn't well impedance matched, it creates some RF reflection (#5339).
To reduce the RF reflection somewhat, we decided to put a 3 dB attenuator between the generation box and the resonant box.
(what we did)
+ attached the resonant box directly to the EOM input with a short SMA connector.
+ put stacked black plates underneath the resonant box to support the weight of the box and to relieve the strain on the cable between the EOM and the box.
+ put a 3 dB attenuator just after the RF power combiner to reduce RF reflections.
+ readjusted the demodulation phase of the MC lock.
(Adjustment of MC demodulation phase)
The demodulation phase was readjusted by adding more cable length in the local oscillator line.
After some iterations an additional cable length of about 30 cm was inserted to maximize the Q-phase signal.
So for the MC lock we are using the Q signal, the same as before.
Before the installation of the resonant box, the amplitude of the MC PDH signal was measured at the demodulation board's monitor pins.
The amplitude was about 500 mV peak-peak (see the attached oscilloscope pictures of the I-Q projection). After the installation, the amplitude decreased to 400 mV peak-peak.
Therefore the amplitude of the PDH signal decreased by 20%, which is not as bad as I expected, since the previous measurement indicated a 40% reduction (#2586).
Gautam and I were talking about modulation and demodulation and wondered what the power-combining situation is for the triple-resonant EOM installed 8 years ago. We noticed that the current setup has an additional ~5 dB loss associated with the 3-to-1 power combiner. (Figure a)
N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N) dB. You can think about the reciprocal process, power splitting (Figure b): a 2 W input to a 2-port power splitter gives two 1 W outputs. The opposite process is power combining, as shown in Figure c. In this case the two identical signals are constructively added in the combiner, but the output is not 20 Vpk but 14 Vpk. Considering the linearity, when one of the ports is terminated, the output power is halved, so we expect a 27 dBm output for a 30 dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner such as a diplexer or a triplexer.
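The bookkeeping above can be checked in a few lines. A minimal sketch; the dB/Vpk numbers are just the examples from the figures:

```python
import math

def combiner_loss_db(n):
    """Intrinsic loss of an N-to-1 broadband power combiner: 10*log10(N) dB."""
    return 10.0 * math.log10(n)

# 3-to-1 combiner: the ~5 dB extra loss noted above
print(round(combiner_loss_db(3), 1))        # 4.8 dB

# Figure c: two identical coherent 10 Vpk inputs combine to ~14 Vpk, not 20 Vpk
v_in = 10.0
v_out = v_in * math.sqrt(2)                 # ~14.1 Vpk

# Figure d: one port terminated -> output power is halved (3 dB down)
p_in_dbm = 30.0
p_out_dbm = p_in_dbm - combiner_loss_db(2)  # ~27 dBm
```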
One of the differences between the direct POY and the CM_SLOW POY is the presence of the CM Servo gain stages. So this might mean that you need to move some of the whitening gain to the CM IN1 gain.
I got confused. Why don't we see that too-high-Q pole in the OLTF?
Not sure what's wrong, but the workstation desk is freezing cold again and the room temp is 18degC (64degF).
This measurement tells you how the gain balance between the SLOW_CM and AO paths should be. Basically, what you need is to adjust the overall gain before the branch of the paths.
Except for the presence of the additional pole-zero in the optical gain because of the power recycling.
You have compensated this with a filter (z=120Hz, p=5kHz) for the CM path. However, the AO path still doesn't know about it. Does this change the behavior of the crossover?
If the servo is not unconditionally stable when the AO gain is set low, can we just turn on the AO path at the nominal gain? This causes some glitch but if the servo is stable, you have a chance to recover the CARM control before everything explodes, maybe?
Check out this elog: ELOG 4354
If this summing box is still used as is, it is probably giving the demod phase adjustment.
We are going to replace the old Sun c1ioo with a modernized Supermicro. At that opportunity, remove the DAC and BIO cards so they can be used with the new machines. BTW I also have ~4 32ch BIO cards in my office.
I opened the packages sent from Syracuse.
- The components are not vacuum clean. We need C&B.
- Some large parts are there, but many parts are missing to build complete SOSs.
- No OSEMs.
- Left and right panels for 6 towers
- 3 base blocks
- 1 suspension block
- 8 OSEM plates. (1 SOS needs 2 plates)
- The parts look like old versions. The side panels need insert pins to hold the OSEMs in place. We need to check what needs to be inserted there.
- An unrelated tower was also included.
I was in the lab at the time. But did not notice anything (like turbo sound etc). I was around ETMX/Y (1X9, 1Y4) rack and SUS rack (1X4/5), but did not go into the Vac region.
We want to migrate the end shutter controls from c1aux to the end acromags. Could you include them in the list if they aren't there yet?
This will let us remove c1aux from the rack, I believe.
[Larry (on site), Koji & Gautam (remote)]
Network recovery (Larry/KA)
Asked Larry to get into the lab.
14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus.
Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.
Nodus recovery (KA)
Apr 12, 22:43 nodus was restarted.
Apache (dokuwiki, svn, etc.) was recovered with the systemctl command described on the wiki
The ELOG was recovered by running the script
Control Machines / RT FE / Acromag server Status
Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).
KA imagines that FB took some finite time to come up, but the RT machines need FB up in order to download their OS; that left the RTs down. If so, all we need to do is power cycle them.
Acromag: unknown state
The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The outage lasted a few minutes.
[Koji / Gautam (Remote)]
sudo /sbin/ifdown eth0
sudo /sbin/ifup eth0
End RTS recovery
rtcds start --all
Vertex RTS recovery
sudo /sbin/ifup eth1
sudo systemctl start modbusIOC.service
sudo /sbin/ifdown eth1
sudo systemctl start modbusIOC.service
RTS recovery ~ part 2
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
sudo systemctl start MCautolocker.service
sudo systemctl start FSSSlow.service
Yes, we are supposed to have a few spare PI PZTs.
Is \eta_A the roundtrip loss for an arm?
Thinking about the PRG=10 you saw:
- What's the current PR2/3 AR? 100ppm? 300ppm? The beam double-passes them. So (AR loss)x4 is added.
- Average arm loss is ~150ppm?
Does this explain PRG=10?
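A back-of-the-envelope check with a simple two-mirror model of the PRC (PRM against a single lumped loss). All numbers below are assumed round values for illustration, not measured 40m parameters:

```python
import math

def prg(T_prm, L_rt):
    """Power recycling gain of a two-mirror model: PRM (power transmission
    T_prm) against the rest of the cavity, lumped into a single round-trip
    power loss L_rt."""
    r_p = math.sqrt(1.0 - T_prm)
    r_c = math.sqrt(1.0 - L_rt)
    return T_prm / (1.0 - r_p * r_c) ** 2

# Assumed round numbers (not measured values):
T_prm = 0.056    # PRM power transmission
ar    = 300e-6   # PR2/PR3 AR loss per pass; double-passed -> x4 total
arm   = 150e-6   # average arm round-trip loss
T_itm = 0.014    # ITM power transmission
# A resonant, overcoupled arm reflects with power loss ~ 4*arm/T_itm,
# so the loss the PRC sees is amplified by the arm cavity buildup.
L_rt = 4 * ar + 4 * arm / T_itm
print(round(prg(T_prm, L_rt), 1))   # lossy PRG
print(round(prg(T_prm, 0.0), 1))    # lossless PRG for comparison
```

With these assumed numbers the model still predicts a PRG above 20, so losses of this size alone do not obviously explain PRG=10.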
Two ITM spares (ITMU01/ITMU02) and five new PR3 mirrors (E1800089 Rev 7-1~Rev7-5) were transported to Downs for phasemap measurement
This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into what is causing the low recycling gain.
My speculation for the worse RIN is:
- Unoptimized alignment -> Larger linear coupling of the RIN with the misalignment
- PRC TT misalignment (~3Hz)
Can you check the correlation between the POP QPD and the arm RIN?
I see. At the 40m, we have the direct transition from ALS to RF. But it's hard to compare them as the storage time is very different.
Which 1f signals are you going to use? PRCL has sign flipping at the carrier critical coupling. So if the IFO is close to that condition, 1f PRCL suffers from the sign flipping or large gain variation.
GariLynn worked on the measurement of the E1800089 mirrors.
The result of the data analysis, as well as the data and the codes, have been summarized here:
We can limit the EPICS values by giving some parameters to the channels. cf https://epics.anl.gov/tech-talk/2012/msg00147.php
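For an analog output (ao) record, the DRVL/DRVH fields clamp what the record will drive. A sketch of the database entry; the channel name here is made up for illustration:

```
# Drive limits clamp the record's output range (channel name is illustrative)
record(ao, "C1:SUS-MC1_OFFSET") {
    field(DRVL, "-10")
    field(DRVH, "10")
}
```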
But this does not solve the MC1 issue. The only thing we can do right now is, for example, to halve the output resistor.
Can you describe the mode matching in terms of the total MM? Is MM_total = sqrt(MM_vert * MM_horiz)?
1. I agree that it's likely that it was the temp signal glitch.
Recom #2: I approve reopening the valves to pump down the main volume. As long as there are no frequent glitches, we can just bring the vacuum back to normal with the current software setup.
2. Recom #1 is also reasonable. You can use simple logic: if we register 10 consecutive samples that exceed the threshold, we activate the interlock. I feel we should still keep the temp interlock. Switching between pumping mode and normal operation may cause the interlocks to be omitted unexpectedly when they are needed.
3. We should purchase the UPS battery / replacement rotary TIP seal. Once they are in hand, we can stop the vacuum and execute the replacement. Can one person (who?) accomplish everything with some remote help?
4. The lab temp: you mean, 12degC swing with the AC on!?
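The consecutive-sample logic in Recom #1 could be as simple as this sketch (the threshold and sample values are placeholders, not real channel settings):

```python
def make_glitch_filtered_trip(threshold, n_consecutive=10):
    """Return an update(sample) function that returns True (trip the
    interlock) only after n_consecutive samples in a row exceed
    threshold, so an isolated sensor glitch is ignored."""
    count = 0
    def update(sample):
        nonlocal count
        count = count + 1 if sample > threshold else 0
        return count >= n_consecutive
    return update

trip = make_glitch_filtered_trip(threshold=1e-3)  # e.g. pressure in torr
samples = [2e-3] * 9 + [1e-5] + [2e-3] * 10       # single glitch resets the counter
results = [trip(s) for s in samples]              # only the final sample trips
```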
Jon and Koji remotely supported Jordan's resetting the TP2 controller.
From the operator's console in front of the vac rack:
Open a terminal window (click the LXTerminal icon on the desktop)
Type "control" + enter to open the vac controls screen
Toggle all the open valves closed (edit by KA: and manually close RV2 by rotating the gate valve handle )
Turn OFF TP2 by clicking the "Off" button. Make sure the status changes and the rotation speed falls to zero (you'll also hear the pump spinning down)
The other pumps (TP1, TP3) can be left running
Once TP2 has stopped spinning, go to the back of the rack and locate the ethernet cable running from the back of the TP2 controller to the IOLAN server (near the top of the rack). Disconnect and reconnect the cable at each end, verifying it is firmly locked in place.
From the front of the rack, power down the TP2 controller (I don't quite remember for the Agilent, but you might have to move the slider on the front from "Remote" to "Local" first)
Wait about 30 seconds, then power it back on. If you had to move the slider to shut it down, revert it back to the "Remote" position.
Go back to the controls screen on the console. If the pump came back up and is communicating serially again, its status will say something other than "NO COMM"
Turn TP2 back on. Verify that it spins up to its nominal speed (66 kRPM)
At this point you can reopen any valves you initially closed (any that were already closed before, leave closed)
TP2 was stopped, and at that moment the glitches were gone. Jordan power-cycled the TP2 controller and we brought TP2 back up to full speed.
However, the glitches came back as before. Obviously we couldn't go on from there, so we decided to stop the recovery process for today.
- We left TP1/2/3 running while the valves including RV2 were closed.
- When Jordan is back in the lab next week, we'll try to use TP3 to back TP1 so that we can resume the main-volume pumping.
- Currently, TP3 does not have interlocking and that is a risk. Jon is going to implement it.
- Meanwhile, we will try to replace the controller of TP2. We are supposed to have this in the lab. Ask Chub about the location.
- Once we confirm the stability of the diagnostic signals for TP2, we will come back to the nominal pumping scheme.
The vacuum safety policy and design are not clear to me, and I don't know what the first and second lines of defense are. Since we had limited time and bandwidth during the remotely-supported recovery work today, we wanted to work step by step.
The pressure rise rate is 20 mtorr/day, and turning on TP3 early next week will resume the main-volume pumping without too much hassle. If you need the IFO time now, contact Jon and use TP3 for backing.
ITMU01 / ITMU02 as well as the five E1800089 mirrors came back to the 40m. Instead, the two ETM spares (ETMU06 / ETMU08) were delivered to GariLynn.
Jordan worked on transportation.
Note that the E1800089 mirrors are together with the ITM container in the precious optics cabinet.
Sigh. Do we have a spare sat box?
> Can't we offload this DC signal to the laser crystal temperature servo?
No. PSL already follows the MC length. So this offset is coming from the difference between the MC length and the CARM length.
What you can do is to offload the MC length to the CARM DC if this helps.
The usual technique is to keep the IFO locked with the old set of signals and measure the relative gain/TF between the conventional and new signals in-lock, so that you can calibrate the new gain/demod-phase settings.
It happened before too. Doesn't it say it has occasional self-testing or something?
I believe we will use two new chassis at most. We'll replace c1ioo, swapping the Sun machine for a Supermicro, but we'll recycle the existing timing system.
Grrr. Let's repair the unit. Let's get help from Chub & Jordan.
Do you have a second unit in the lab to survive for a while?
When I tested Q3000 for aLIGO, the failure rate was pretty high. Let's get 10pcs.
The new Dolphin will eventually help us. But its installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.
Teledyne AP1053 etc were transported from Rich's office to the 40m. The box is placed on the shelf at the entrance.
My record says there are 7 AP1053 units in the box. I did not check the number this time.
2PM: Arrived at the 40m. Started the work for the coupling of the RF modulated LO beam into a fiber. -> I left the lab at 10:30 PM.
The fiber coupling setup for the phase-modulated beam was made right next to the PSL injection path. (See attachment 1)
- Loose fiber coupler: Sorry about that. I did not notice that anything was loose there, although some of the locks were not tightened.
- S incident instead of P: Sorry about that too. I completely missed that the IMC takes S-pol.
- PSL HEPA was running at 33% and is now at 100%
- South End HEPA was not on and is now running
- Yarm Portable HEPA was not running and is now running at max speed: the power was taken from beneath the ITMY table. It is better to unplug it when one uses the IFO.
- Yend Portable HEPA was not running and is now running (presumably) at max speed
Particle Levels: (Not sure about the unit. The convention here is to multiply the reading by 10.)
Before running the HEPAs at their maximum
9/10/2020 15:30 / 0.3um 292180 / 0.5um 14420
(cf 9/5/2020 / 0.3um 94990 / 0.5um 6210)
After running the HEPAs at their maximum
The numbers gradually went down and are now constant at about half of the initial values
9/10/2020 19:30 / 0.3um 124400 / 0.5um 7410
M4.5 EQ in LA 2020-09-19 06:38:46 (UTC) / -1d 23:38:46 (PDT) https://earthquake.usgs.gov/earthquakes/eventpage/ci38695658/executive
I only checked the watchdogs. All watchdogs were tripped. ITMX and ETMY seemed stuck (or have the OSEM magnet issue). They were left tripped. The watchdogs for the other SUSs were reloaded.
I came to the campus and Gautam notified that he just had received the alert from the vac watchdog.
I checked the vac status at c1vac. PTP3 went up to 10 torr-ish and this made the diff pressure for TP3 over 1torr. Then the watchdog kicked in.
To check the TP3 functionality, AUX RP was turned on and the manual valve (MV in the figure) was opened to pump the foreline of TP3. This easily made PTP3 <0.2 torr and TP3 happy (I didn't try to open V5 though).
So the conclusion is that RP for TP3 has failed. Presumably, the tip-seal needs to be replaced.
Right now TP3 was turned off and is ready for the tip-seal replacement. V5 was closed since the watchdog tripped.
I supplied a bottle of hand soap. Don't put water in the bottle to dilute it, as that makes the soap vulnerable to contamination.
There were two SUSs which didn't look normal.
- ITMX was easily released using the bias slider -> shake the pitch slider and, while all the OSEM values are moving, turn on the damping control (with a x10 larger watchdog threshold)
- ETMY has 0 V output on the UR OSEM. This means there is no light, and it didn't change at all with the slider moves.
- Went to the Y table and tried to look at the coils. It seems that the UR magnet is detached from the optic and stuck in the OSEM.
We need a vent to fix the suspension, but until then what we can do is to redistribute the POS/PIT/YAW actuations to the three coils.
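The redistribution amounts to re-solving the coil output matrix for the three live coils. A sketch with numpy, using a generic face-coil sign convention; the signs and coil ordering are assumptions for illustration, not the actual 40m output matrix:

```python
import numpy as np

# Rows: DOFs (POS, PIT, YAW). Columns: coils (UL, UR, LL, LR).
# Generic sign convention, assumed for illustration:
M = np.array([[1.0,  1.0,  1.0,  1.0],   # POS: all coils push together
              [1.0,  1.0, -1.0, -1.0],   # PIT: top row vs bottom row
              [1.0, -1.0,  1.0, -1.0]])  # YAW: left column vs right column

alive = [0, 2, 3]        # UR (index 1) magnet is stuck -> coil unusable
M3 = M[:, alive]         # 3 DOFs x 3 live coils, still full rank
out = np.linalg.inv(M3)  # coil drives per unit DOF request

# e.g. a pure PIT request with no POS/YAW cross-coupling:
pit_drive = out @ np.array([0.0, 1.0, 0.0])
```

With three DOFs and three live coils the 3x3 matrix is square and invertible, so each DOF can still be actuated without cross-coupling (in this idealized model).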
I came to the lab. The control room AC was off -> Now it is on.
Here is the setting of the AC meant for continuous running
Gautam reported that the PSL HEPA stopped running (ELOG 15592). So I came in today and started troubleshooting.
It looks like the AC power reaches the motors; however, neither motor runs. The problem seems to be in the capacitors, the motors, or both.
Parts specs can be found in the next ELOG.
Attachment 1 is the connection diagram of the HEPA. The AC power is distributed by the breaker panel. The PSL HEPA is assigned to the M22 breaker (Attachment 2). I checked the breaker switch and it was (and is) ON. The power goes to the junction box above the enclosure (Attachment 3). A couple of wires go to the HEPA switch (right above the enclosure light switch), and the output goes to the variac. The inside of the junction box looked like this (Attachment 4).
By the way, the wires were just twisted and screwed into metal threaded (but insulated) caps (Attachment 5). Is this legit? Shouldn't we use proper crimping? Anyway, there was nothing wrong with the caps' connections for now.
I could easily trace the power up to the variac. The variac output was just fine (Attachment 6). The cord going from the variac to the junction box (and then to the HEPAs) looked scorched. The connection from the plug to the HEPAs was still OK, but it should eventually be replaced. For now the cable was unplugged after the following tests, for safety reasons.
The junction box for each HEPA unit was opened to check the voltage. The supply voltage reached the junction boxes and was just fine. In Attachments 8 & 9, the voltages look low, but this is because I had only turned the variac up a little.
At the (main) junction box, the resistances of the HEPAs were checked with the Fluke. As the HEPA units are connected to the AC in parallel, the resistances were individually checked as follows.
The coils were not disconnected (... I wonder if the wiring of South HEPA was flipped? But this is not the main issue right now.)
After removing the pre-filters, the motors were inspected (Attachments 10 & 11). At least the north HEPA motor was warm, indicating there had been some current. One capacitor is connected per motor. When the variac was turned up a bit, one side of the capacitor saw the voltage. I could not judge whether the issue is in the capacitor or the motor.
Dimensions / Specs
- HEPA unit dimensions
- HEPA unit manufacturer
Here is the timeline. This suggests TP2 backing RP failure.
1st line: TP2 foreline pressure went up. Accordingly TP2 P, current, voltage, and temp went up. TP2 rotation went down.
2nd line: TP2 temp triggered the interlock. The TP2 foreline pressure was still high (10 torr), so TP2 struggled and was running at 1 torr.
3rd line: Gautam's operation. TP2 was isolated and stopped.
Between the 1st line and 2nd line, the TP2 pressure (= TP1 foreline pressure) went up to 1 torr. This increased the TP1 current from 0.55 A to 0.68 A (not shown in the plot), but the TP1 rotation was not affected.