I believe I finally have the N2 gauge working correctly. The wiring is unchanged from its original state and the controller has been recalibrated.
After letting the line pressure drop to 0 PSI as indicated by the analog gauge in the drill-press room, I recorded the number of counts read by the Omega controller. Then I pressurized the line to 80 PSI, again as indicated by the analog gauge, and recorded the Omega counts again. I entered these two reference points into the controller (which automatically determines the gain and offset from them), then confirmed that the readings agreed with the analog gauge as I varied the line pressure.
The two reference points are:
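For reference, the gain/offset determination from two such points amounts to a linear fit. A minimal sketch in python, with hypothetical placeholder count values (not the ones actually entered into the controller):

# two-point linear calibration: pressure = gain * counts + offset
# the count values below are hypothetical placeholders
p1, c1 = 0.0, 1000.0     # 0 PSI reference point (counts hypothetical)
p2, c2 = 80.0, 9000.0    # 80 PSI reference point (counts hypothetical)

gain = (p2 - p1) / (c2 - c1)
offset = p1 - gain * c1

def counts_to_psi(counts):
    return gain * counts + offset

print(counts_to_psi(5000.0))  # mid-scale check: 40 PSI with these placeholder numbers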
The latest installment in this puzzler: it turns out that the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out may be real, and may be a feature of the way our two N2 cylinder lines/regulators are set up (for the automatic switching between cylinders when one runs out). To test this hypothesis, we'd like the line pressure to be 0 initially, and then have just one cylinder hooked up.
At 11:13 am there was a ~2-3 second interruption of all power at the 40m.
I checked that nobody was in any of the lab areas at the time of the outage.
I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something.
I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one; the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also
Most of the CDS channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow them to get some useful channels.
After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.
Bob, Aaron, and I removed the door from the OMC chamber this morning. Everything went well.
I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and the interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines, so I will wait for Koji before debugging further. None of the "Danger" signs at the VEA entry points are on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The red lights around the PSL enclosure, which are supposed to signal when the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...
I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide the next course of action...
[Gautam, Aaron, Koji]
The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.
- The schematic diagram of the interlock system D1200192
- We have opened the interlock box. Immediately we found that the DC switching supply (OMRON S82K-00712) is not functioning anymore. (Attachment #1)
- We could not remove the module as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered, with no DC output).
- Instead, we brought a DC supply adapter from somewhere and chopped off its connector so that we could hook it up to the crimp-type quick connects. In Attachment #1, the gray line is +12V, and the orange and black lines are GND.
- Upon inspection, the wires of the "door interlock reset button" fell off, and the momentary switch (GRAYHILL 30-05-01-502-03) was broken. It was replaced with another momentary switch, which is unfortunately much smaller than the original. (Attachments 2 and 3)
- Once the DC supply adapter was plugged into an AC tap, we heard the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.
- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of its green lamps were out. We found plenty of spare lamps and relays inside the box, so we replaced the bulbs, and now the AC lights are functioning. (Attachments 4 & 5)
Edit: It was not a 4TB disk but a 6TB disk, in fact. (We actually ordered a 4TB disk...)
I think the problem with the backup disk was the flaky power supply for the external drive.
I swapped the drive for a new HGST 4TB one, but it was neither recognized nor spun up with the external power supply we had. So I decided to put both the new and old drives in the PC chassis to power them with the internal power supply. I tested the old disk via a USB-SATA cable, but it was not recognized. I also noticed that the disk was not an HGST 4TB but a Seagate 3TB. Is that possible? I thought it was 4TB... Did I miss something?
Once the new 4TB was connected via the USB-SATA cable, it mounted very smoothly. The disk is now mounted as /media/40mBackup as before, and /etc/fstab was modified with the new UUID. All the command logs are below.
Let's see how the morning backup goes. It will take a while to copy everything to the new disk, so it was actually very nice to have it set up by Friday midnight.
controls@chiara|~> sudo mkfs -t ext4 /dev/sdd1
controls@chiara|~> sudo emacs -nw /etc/fstab
controls@chiara|~> cat /etc/fstab
controls@chiara|~> sudo mount -a
The local backup was done at 18:18 after 11h18m of running.
2018-12-15 07:00:01,699 INFO Updating backup image of /cvs/cds
2018-12-15 18:17:56,378 INFO Backup rsync job ran successfully, transferred 5717707 files.
The Auxiliary DAQ Chassis, or Acromag box, is now wired and ready for testing. I will be sorting the cables at the vacuum rack to make connection to the box easier.
We are ready to put the heavy doors back on the chambers and do some test pumpdowns tomorrow morning if Jon gives us the go-ahead. Also, Koji made the OMC resonate some of the AUX beam light we send into it.
The OMC input optics layout is attached.
Checked the spot position on OMMT-FM1. It was off center, which was also causing the spot on OMMT1 to be off center. This was fixed with the steering mirror for the AUX laser.
The beam alignment onto the OMC was tweaked with OMC-SM1 and OMC-SM2. This was the painful part. We had to make a sensor card that could get into the narrow space of the OMC. (Attachment 2 right)
Attachment 2 left shows the naming convention of the OMC mirrors.
For the alignment, we fed 5Vpp triangle waves at 3.1Hz into the input of the PZT amp so that the cavity is continuously scanned. First, check the rough spot positions on OMC-CM1 and OMC-CM2. If you use the card carefully, you can check whether the beam is returning to OMC-IC. This return beam should have roughly the same height as the incident beam; this can be adjusted with either of the steering mirrors.
Once the beam is going around the mirrors multiple times, the spot alignment can be checked at OMC-CM1. Bring a card right in front of CM1. If the card is lifted slightly above the incident spot, this automatically allows the outgoing beam to pass. Depending on the pitch alignment, the next roundtrip (1RT) will be seen on the card. As you lift the card further, you will be able to see more round-trip beams (e.g. 2RT, 3RT in the figure). If the yaw alignment is perfect, these spots will be lined up vertically, so you can align the horizontal direction with the steering mirrors. The vertical alignment can then be done with the pitch knobs.
At this point you should be able to see some very high-order transmission at the OMC trans. We stopped here for today, as we had already run out of range on multiple knobs. This is because the beam height in the mode matching telescope was not right, and the steering mirrors had to work beyond their range.
Took the opportunity of the power glitch to take care of the disk situation on chiara.
- Unmounted /cvs/cds from nodus. This did not affect the services on nodus, as they don't use /cvs/cds.
- Went to chiara, shut it down, and physically checked the labels of the drives:
root = 0.5TB
/cvs/cds = 4TB HGST
backup of /cvs/cds = 6TB HGST
- These three disks are internally mounted and connected via SATA. Previously, the 6TB was on USB.
- There were two other drives (2TB and 3TB), but they seemed logically or physically broken, so they were removed from chiara. (They came back online after reformatting on a Mac, so they still seem physically alive.)
df: `/var/lib/lightdm/.gvfs': Permission denied
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 461229088 10690932 427109088 3% /
udev 15915020 4 15915016 1% /dev
tmpfs 3185412 848 3184564 1% /run
none 5120 0 5120 0% /run/lock
none 15927044 144 15926900 1% /run/shm
/dev/sdb1 5814346836 1783407788 3737912972 33% /media/40mBackup
/dev/sdc1 3845709644 1884187232 1766171536 52% /home/cds
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 446.9G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 18.9G 0 part [SWAP]
sdb 8:16 0 5.5T 0 disk
└─sdb1 8:17 0 5.5T 0 part /media/40mBackup
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part /home/cds
sr0 11:0 1 1024M 0 rom
- Rebooted the machine, and it just came back without any error. This time the control room machines were not shut down; they simply recovered the NFS mount once chiara came back.
Here is a list of tasks I think we should prioritize for the next two weeks. The idea is to get back to the previous state of being able to do single arm, PRMI-on-carrier and DRMI locking, before making further changes.
Once the new folding mirrors arrive, I'd like to modify the SRC length to allow locking in the signal-recycled config as opposed to RSE. Still need to do the detailed layout, but I think the in-vacuum layout will work. In that case, I'd like to also move the OMC and OMMT to the IY table, and also move the in-air AS photodiodes to the IY in-air optical table. This is why I've omitted the OMC alignment from this near-term list, but if we want to not move the OMC, then we probably should add alignment of the AS beam to the OMC to this list.
The pumpdown seems to be progressing smoothly, so I think we are going to stick with the plan decided on Wednesday, and vent the IFO on Monday at 8am. I decided to do some checks of the IFO alignment.
I turned on the PSL again (so goggles are advisable again inside the VEA until this work is done), re-locked the PMC, and opened the PSL shutter into the vacuum (still low power 100 mW beam going into vacuum). The IMC alignment required minor tweaking, but I recovered ~1300 cts transmission which is what it was --> so we didn't macroscopically change the input pointing into the IMC while working on the IOO table.
With the ITMY oplev spot centered, there is a spot on the AS camera roughly centered on the control room monitor, so the TT pointing must also be pretty close.
Then I centered the ITMY oplev spot to check how well aligned (or otherwise) the Michelson was - the BS has no oplev, so there was considerable angular motion of the Michelson spot, but it looked like, on average, it was swinging through a well-aligned place. I saved the slow bias voltages for the ITMs and BS in this configuration.
Then I re-aligned ETMX and checked the green transmission - it was okay, ~0.3, and I was able to increase it to ~0.4 using the EX green PZT mirrors. So far so good.
Finally, I tried to lock the X-arm on IR - after zeroing the offsets on the transmission QPD, there seem to be a few flashes as the cavity swings through resonances, but no discernible PDH error signal. Moreover, the input pointing of the IR into the X arm is controlled by the BS, which is swinging around all over the place right now, so perhaps locking is hopeless, but the overall alignment of the IFO seems not too bad. Once ETMY is cleaned and put back in place, perhaps the Y arm can be locked.
I shuttered the PSL and inserted a manual beam block, and also turned off the EX laser so that we can vent on Monday without laser goggles.
*Not directly related to this work: we still have to implement the vacuum interlock condition that closes the PSL shutter in the event of a vacuum failure. It's probably fine now while the PSL power is attenuated, but once we have the high power beam going in, it'd be good to revert to the old standard.
VEA is now a laser hazard area as usual, several 1064nm lasers in the lab have been turned back on. Apart from this
In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are
All the tests I have done so far (looking at free-swinging data, resonant frequencies in the Oplev error signals, etc.) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
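For the record, roughly how one pulls the free-swinging data and looks for the peaks - a minimal sketch; the NDS server/port and the channel name here are assumptions for illustration, not necessarily what I actually used:

import nds2
import numpy as np
from scipy import signal

# server/port and channel name are assumptions
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
start = 1235366912 + 300   # GPS kick time from above, skipping the initial transient
bufs = conn.fetch(start, start + 1800, ['C1:SUS-ETMY_SENSOR_UL'])
fs = bufs[0].channel.sample_rate
f, psd = signal.welch(bufs[0].data, fs=fs, nperseg=int(64 * fs))  # 64 s segments
asd = np.sqrt(psd)
# the suspension eigenmodes show up as sharp peaks around ~1 Hz;
# compare their frequencies against the Oplev error signal spectra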
Watchdogs restored at 10 AM PST
The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.
[Gautam/Chub/Koji] ~ Mini discussion
Maintenance / Upgrade Items
(Priority high to low)
Apr 16, 2019
Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019)
Apr 19, 2019
Borrowed from the 40m:
- Universal camera mount
- 50mm CCD lens
- zoom CCD lens (Returned Apr 29, 2019)
- Olympus SP-570UZ (Returned Apr 29, 2019)
- Special Olympus USB Cable (Returned Apr 29, 2019)
The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week. Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building. If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented". On a positive note, we should have a quieter environment soon!
Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.
The 40M jib cranes all passed inspection!
I believe this completes the non-Chub portions of the pre-vent checklist; we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.
Main goal of this vent is to investigate the oddness of the YARM suspensions. I am leaving the PSL NPRO on overnight in the interest of data gathering; it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM.
IMC was locked, MC2T ~ 1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.
The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.
Today, we tried to resuscitate the c1iscaux2 channels by swapping the existing, failed VME crate with the newly freed-up crate from c1susaux. In summary, the crate gets power and the EPICS server gets started, but I am unable to switch the whitening gain on the whitening boards. I believe that this has to do with the FAIL LEDs that are on for the XVME-220 units. We were careful to preserve the location of the various cards in the VME crates during the swap. Rather than do detailed debugging with custom RJ45 cables and terminal emulators, I think we should just focus our efforts on getting the Acromag system up and running.
Our work must have bumped a cable to the c1lsc expansion chassis in the same rack - the c1lsc FE had crashed. I rebooted it using the script - everything came back gracefully.
There was a magnitude 6.6 earthquake just a few minutes ago. I am attaching photographs of the monitor feeds for reference here. Is there a standard protocol to be followed in this situation? I'm looking through the wiki now.
Further, the IMC seems to be misaligned and is not locking! As Koji has let me know, I really hope this is not too serious and can be fixed easily.
Last documented replacement in Nov 2018, so ~7 months, which I believe is par for the course. I am disconnecting its power supply cable.
In fact the projector is still working. The lamp timer showed ~8200hrs. I just reset the timer, but not sure it was the cause of the shutdown. I also set the fan mode to be "High Altitude" to help cooling.
I heard a popping sound in the control room; the projector lightbulb has blown out.
Optical chopper borrowed from CryoLab to 40m
Bulb replaced. Projector is back on.
The control room UPS started making a beeping noise, indicating that the batteries need replacement. I hit the "Test" button and the beeping went away. According to the label on it, the batteries were last replaced in March 2016, so it is probably time for a replacement. @Chub, please look into this.
For some reason, rossa's X display won't start up anymore. This happened right after the UPS reset. Koji and I tried ~1.5 hours of debugging and got nowhere.
SnapPy scripts made to work on Pianosa.
Of course rossa was the only machine in the lab that could run the python scripts to interface with the GigE camera. And it is totally bricked now. Lame.
So I installed several packages. The key was to install pypylon - if you go to the Basler webpage, pypylon 1.4.0 does not offer python2.7 support for the x86_64 architecture, so I installed pypylon 1.3.0 (a quick sanity check of the install is sketched after the package list below). Here are the relevant lines from the changelog:
gstreamer-plugins-bad-0.10.23-5.el7.x86_64 Sat 20 Jul 2019 11:22:21 AM PDT
gstreamer-plugins-good-0.10.31-13.el7.x86_64 Sat 20 Jul 2019 11:22:11 AM PDT
gstreamer-plugins-ugly-0.10.19-31.el7.x86_64 Sat 20 Jul 2019 11:20:08 AM PDT
gstreamer-python-devel-0.10.22-6.el7.x86_64 Sat 20 Jul 2019 10:34:35 AM PDT
pygtk2-devel-2.24.0-9.el7.x86_64 Sat 20 Jul 2019 10:34:34 AM PDT
pygobject2-devel-2.28.6-11.el7.x86_64 Sat 20 Jul 2019 10:34:33 AM PDT
pygobject2-codegen-2.28.6-11.el7.x86_64 Sat 20 Jul 2019 10:34:33 AM PDT
gstreamer-devel-0.10.36-7.el7.x86_64 Sat 20 Jul 2019 10:34:32 AM PDT
gstreamer-python-0.10.22-6.el7.x86_64 Sat 20 Jul 2019 10:34:31 AM PDT
gtk2-devel-2.24.31-1.el7.x86_64 Sat 20 Jul 2019 10:34:30 AM PDT
libXrandr-devel-1.5.1-2.el7.x86_64 Sat 20 Jul 2019 10:34:28 AM PDT
pango-devel-1.42.4-1.el7.x86_64 Sat 20 Jul 2019 10:34:27 AM PDT
harfbuzz-devel-1.7.5-2.el7.x86_64 Sat 20 Jul 2019 10:34:26 AM PDT
graphite2-devel-1.3.10-1.el7_3.x86_64 Sat 20 Jul 2019 10:34:26 AM PDT
pycairo-devel-1.8.10-8.el7.x86_64 Sat 20 Jul 2019 10:34:25 AM PDT
cairo-devel-1.15.12-3.el7.x86_64 Sat 20 Jul 2019 10:34:25 AM PDT
mesa-libEGL-devel-18.0.5-3.el7.x86_64 Sat 20 Jul 2019 10:34:24 AM PDT
libXi-devel-1.7.9-1.el7.x86_64 Sat 20 Jul 2019 10:34:24 AM PDT
pygtk2-doc-2.24.0-9.el7.noarch Sat 20 Jul 2019 10:34:23 AM PDT
atk-devel-2.28.1-1.el7.x86_64 Sat 20 Jul 2019 10:34:21 AM PDT
libXcursor-devel-1.1.15-1.el7.x86_64 Sat 20 Jul 2019 10:34:20 AM PDT
fribidi-devel-1.0.2-1.el7.x86_64 Sat 20 Jul 2019 10:34:20 AM PDT
pixman-devel-0.34.0-1.el7.x86_64 Sat 20 Jul 2019 10:34:19 AM PDT
libXinerama-devel-1.1.3-2.1.el7.x86_64 Sat 20 Jul 2019 10:34:19 AM PDT
libXcomposite-devel-0.4.4-4.1.el7.x86_64 Sat 20 Jul 2019 10:34:19 AM PDT
libicu-devel-50.1.2-15.el7.x86_64 Sat 20 Jul 2019 10:34:18 AM PDT
gdk-pixbuf2-devel-2.36.12-3.el7.x86_64 Sat 20 Jul 2019 10:34:17 AM PDT
pygobject2-doc-2.28.6-11.el7.x86_64 Sat 20 Jul 2019 10:34:16 AM PDT
pygtk2-codegen-2.24.0-9.el7.x86_64 Sat 20 Jul 2019 10:34:15 AM PDT
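A quick sanity check that the pypylon install can actually talk to a camera - a minimal sketch, not the camera server code itself:

from pypylon import pylon

# grab a single frame from the first camera pypylon can find
tlf = pylon.TlFactory.GetInstance()
camera = pylon.InstantCamera(tlf.CreateFirstDevice())
camera.Open()
result = camera.GrabOne(1000)       # 1 s timeout, in ms
if result.GrabSucceeded():
    frame = result.Array            # numpy array of pixel values
    print(frame.shape, frame.dtype)
camera.Close()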
The camera server is running in a tmux session on pianosa, but it keeps throwing gstreamer warnings/errors and periodically (~every 20 mins) crashes. Kruthi tells me that this behavior was seen on rossa as well, so whatever the problem is, it doesn't seem to be because I missed installing some packages on pianosa. Moreover, while the server is running, I am able to take a snapshot - but the camera client does not run.
"Bricked" means that it has the functionality of a brick and can be tossed. But rossa seems to have just suffered some software config corruption. I spent a couple of hours reinstalling SL7 today as per my previous elog notes, and the X display works as before.
i.e. it was fine with the default setup, except for the old "X crashes if the mouse goes to the left side of the screen" issue. As before, I
left side of screen is safe again
This time I installed SL7.6 and followed the K. Thorne wiki. But it's having trouble installing cds-root because it can't find ROOT.
I want to collect some data with the arms locked to investigate the possibility/usefulness of having seismic feedforward implemented for the arms (it is already known to help the IMC length and PRC angular stability at low frequencies). To facilitate diagnostics, I modified the file /users/Templates/Seismic/Seismic_vs_TRXTRYandMC.xml to have the correct channel names in light of Lydia's channel name changes in 2016. Looking at the coherence data, the alignment between the Cartesian coordinate system of the seismometers at the ends and the global interferometer coordinate system could be improved.
I don't know whether, for the MISO filter design, there is any difference between using TRX/TRY as the target and using the arm length control signal.
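Whichever target is chosen, the filter computation itself is the same least-squares problem. A minimal time-domain sketch - not the actual design pipeline; it assumes the witness (seismometer) and target time series are already loaded as numpy arrays at a common sample rate:

import numpy as np

def miso_wiener_fir(witnesses, target, ntaps=256):
    # predict target[n] from the most recent ntaps samples of each witness
    N = len(target) - ntaps + 1
    cols = [w[i:i + N] for w in witnesses for i in range(ntaps)]
    X = np.column_stack(cols)     # design matrix, shape (N, n_witness*ntaps)
    y = target[ntaps - 1:]        # aligned target samples, length N
    taps, *_ = np.linalg.lstsq(X, y, rcond=None)
    return taps.reshape(len(witnesses), ntaps)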
Data collection started at 1249018179. I've set up a script running in a tmux session to turn off the LSC enable in 2 hours.
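The timed disable is trivial; something like the following (pyepics assumed, and the channel name is a hypothetical placeholder for whichever LSC enable switch is actually used):

import time
from epics import caput

time.sleep(2 * 3600)              # let the data collection run for 2 hours
caput('C1:LSC-MODE_ENABLE', 0)    # hypothetical channel name: turn off the LSC enable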
When I put away the lenses we had used for measuring the RF transfer functions of the QPD heads, I saw that I'd removed them from the cabinet containing green endtable optics, but hadn't noticed the sign forbidding their removal. I'll talk with Koji/Gautam about what happened and what should be done.
Once Koji is done with his checkout of the whitening electronics, I will try and lock the PRMI.
I propose the following re-organization of the PDFR measurement breadboard. We have all the parts on hand; it just needs ~30 mins of setup work and some characterization afterwards. The fiber beamsplitter will not be PM, but for this measurement, I don't think that matters (the patch fiber from the diode laser head isn't PM anyway). We have one spare 1 GHz BW NF1611 that is fiber coupled (it used to live on the ITMY in-air table and is (conveniently) labelled "REF DET", but I'm not sure what its function was). In any case, we have at least one free-space NF1611 photodiode available as well. I suggest confirming that the FC version works as expected by calibrating against the free-space PD first.
Update 2:45pm: Implemented; see Attachment #2. Aaron is testing it now, and will post the characterization results.
I'm curious to see if we really need the 1611, or if we can calibrate the diode laser vs. the 1611 one time and then just use that calibration to get the absolute cal for the DUT.
I'm afraid that the RF modulation of the laser is nonlinear, and the electrical and optical response depends on the LD pumping current and the RF input power. So I feel safer if we keep the reference PD. Of course, this is my feeling, and it should be quantitatively tested.
I measured the RF response of the fiber-coupled NewFocus 1611, calibrating out the cable delay. The laser current was set to 20.0 mA, and the RF power going into the splitter was -10 dBm. The DC voltage was 1.87 V, and Gautam and I measured the power from the fiber at 344uW.
Something still looks very wrong -- the PD is supposed to be flat out to 1GHz, and physical units pending, need food.
The 1GHz PD has a somewhat flatter response, but the laser and the driving network have more frequency dependence, as you saw.
I think the metric of interest here is the consistency of the AC transimpedance of the proposed new "Reference PD" (= fiber coupled NF1611) vs the old reference (free space NF1611), since everything will be calibrated against that.
The fiber-coupled PD seems to have a factor of ~1.5 difference in responsivity compared to the free-space PD. There are some differences in the two ways I made the measurement that I don't yet understand.
I measured the relative responsivities of the fiber-coupled and free-space-coupled NewFocus 1611 PDs (scaled by the Jenne AM transfer function).
I made the measurement in two ways (see attachment three). In attachment one, I show the response from separately measuring the two PDs relative to a pickoff of the source (two-port thru calibration). In attachment two, I measure the relative responses directly, without picking off a reference (three-port calibration). I scaled the transfer functions by their DC voltages; both PDs have transimpedances of 700 V/A.
However, there are some clear differences in the response (an overall 0.5 dB offset that may be explained by a miscalibrated DC level; an apparent periodicity in attachment one) that I don't yet understand. The free path of the non-fiber PD is ~5-6 inches, which accounts for the ~45 degrees of phase advance of the fiber-coupled relative to the free-space-coupled PD signal (12.7 cm / (c / 300 MHz) * 360 degrees ~ 45 degrees).
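For the record, the phase-advance arithmetic (pure back-of-the-envelope, using the numbers quoted above):

c = 299792458.0     # m/s
L = 0.127           # ~5 in of free-space path, in meters
f = 300e6           # Hz, top of the measurement band
print(L / (c / f) * 360.0)   # ~45.7 degrees, consistent with the observed offset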
[Jon, Yehonathan, Gautam, Aaron, Shruti, Koji]
We got together on Wednesday afternoon to clean the lab. In particular, we collected e-waste: VME crates, VME modules, old slow-control cables, and other old/broken electronics. They are piled up in the office area and the cage outside right now (Attachments 1/2). We asked Liz to come pick them up (in coordination with either Gautam or Koji). Eventually this will free up two office desks.
Also, we organized the Acromag components into plastic boxes. (Attachment 3)
The UPS is now incessantly beeping. I cannot handle this constant sound so I shut down all the control room workstations and moved the power strip hosting the 4 CPUs to a wall socket for tonight. Chub and I will replace the UPS batteries tomorrow.
[Liz, Gautam, Chub, Jordan, Koji]
We removed a significant amount of e-waste from the lab. The garbage was moved to the e-waste station in WB SB and is waiting for disposal.
Batteries + power cables replaced, and computers back on UPS from today ~3pm.
Some ideas that would help increase the locking duty-cycle in the short term.