PEM config file was also lacking a section named "summary", which is necessary for all parent tabs; this has now been solved. I have deactivated the MEDM pages because Praful's screencap script seemed to be broken (I should have logged this, I apologize).
Pages still not working: PEM and MEDM blank.
I think I fixed the DC error issue
1. I added the leap second entry for 2016/12/31, 23:59:60 UTC to daqdrc
Before: set gps_leaps = 820108813 914803214 1119744016;
After:  set gps_leaps = 820108813 914803214 1119744016 1167264018;
2. Restarted FB and all realtime models
Now I don't see any RED light.
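For reference, the new GPS value can be cross-checked against the UTC rollover after the leap second; a quick sanity check (assuming astropy is available, which is not part of the daqd setup) is:

from astropy.time import Time
# GPS counts seconds without leap-second jumps, so the GPS second at the
# UTC rollover 2017-01-01 00:00:00 should come out as 1167264018
print(int(Time('2017-01-01T00:00:00', format='isot', scale='utc').gps))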
After Koji's leap second fix, we were playing around with the X arm locking. In particular, we experimented with the limit value on the X arm LSC filter bank - the nominal value is 4000, and we wanted to see if we could increase this without kicking the optic while acquiring arm lock. We initially increased it to 8000, and then turned it off altogether. Then we rapidly turned the output of the servo ON/OFF, and looked at the arm transmission to see if it came back to the level before unlocking, as an indication of whether the optic was kicked.
These trials suggested a value of 8000 for the limiter was OK, so we left the LSC mode on with the limiter set to 8000. But just as we were about to leave for the night, I noticed on the wall Striptool that the X arm was unlocked. Investigating, we found that the green wasn't even locking to a HOM. Further investigation of the Oplev spot showed that ETMX had received a large kick (both pitch and yaw errors were ~200urad). ITMX was unaffected.
We initially tried lowering the LSC limit value back to 4000, then used first the Oplev spot and then the green to align the arm. But turning on LSC misaligned the arm after acquiring lock. So we decided to leave LSC off, thinking that the notorious ETMX suspension problems had resurfaced. As a diagnostic, we figured we'd leave the watchdog tripped, and use the Oplev to see if the optic was getting kicked. But the act of turning the watchdog off kicked the optic again (WHY?!).
Looking at the ETMX sus screen, turning off all the damping and LSC (but watchdog on) still leaves a non-zero offset in the "Vmon" field, between 0.02-0.05V depending on the coil. Turning the watchdog OFF takes all these to 0.009V, although I can see the LR value fluctuating between 0.004V and 0.009V. I went to the Xend and squished all the cables on the Sat. Box, but the problem persisted.
At this time, I can't think of any explanation, so I am giving up for the night. To avoid unnecessarily kicking the optic, I am going to unplug the suspension from the Sat. Box and leave one of our tester boxes plugged in; let's see if that sheds any light on the situation...
Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? It seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trends.
controls@nodus|frames > l
drwx------ 2 root root 16384 Jun 8 2009 lost+found/
drwxr-xr-x 2 controls controls 4096 Jul 14 2015 tmp/
-rw-r--r-- 1 controls controls 0 Jul 14 2015 test-file
drwxr-xr-x 5 controls controls 4096 Apr 7 2016 trend/
drwxr-xr-x 4 root root 4096 Apr 11 2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
drwxr-xr-x 258 controls controls 3342336 Jul 6 2015 minute_raw/
drwxr-xr-x 387 controls controls 36864 Nov 5 2015 minute/
drwxr-xr-x 969 controls controls 36864 Jan 13 19:49 second/
Yes, writing minute trends causes hourly FB crashes in the current state of things. The "raw" minute trending is turned on, but I think that these are unknown to nds.
Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".
I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.
All OSEM spectra after the switch are shown on the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT PDF landscape output I typed this:
pdftk BS-white.pdf cat 1N output BSwhite.pdf
"if you see something, do something"
During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:
Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from Sat. Box to whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.
Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.
Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.
Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough; Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine; they were screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.
Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not...
Some misc points:
PSL shutter is closed, MC1 watchdog is shutdown for the night.
I tried to follow these instructions today to make the Simulink Webview accessible:
controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/
The story is: we currently don't expose the whole /users/public_html folder. Instead, we are symlinking the folders from public_html to /export/home/ on nodus, which is where apache looks for things.
So, I fixed the links on the Core Optics page by running:
controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/
But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?
Seems like this stops working every ~2 years. It's been busted since early 2016 according to cron, so I fixed up the paths and restored some missing files and committed things to the SVN (with comments!) and now it's working and grabbing the Web viewable versions of the front end models. Just need to restore its viewability and then the world can watch our models any time.
Back in 2011, JoeB wrote some entries on how to automatically update the Simulink webview stuff.
Somehow, the cron broke down over the years. I reran the matlab file by hand today and it worked fine, so now you can see the up-to-date models using the internet.
> After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.
The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.
PSL shutter remains closed
Last night, I plugged the ETMX suspension coils back into the satellite box. Tonight, we turned on the damping loops for ETMX. Rana centered the Oplev so we can use that as an additional diagnostic to see if the optic gets kicked around overnight. We will re-assess the situation tomorrow.
Sometime earlier today, Lydia noticed that the +/- 5V Sorensens at the X end were not displaying their nominal voltage/current values (as per the stickers on them). She corrected this.
Summary pages show no kicking in the ETMX watchdogs from midnight to 6 AM (0800 - 1400 UTC):
After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.
A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS; I am not sure if there is a DQ'ed version of the coil outputs).
Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.
But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?
Never mind - the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down)...
This link works for me: https://nodus.ligo.caltech.edu:30889/FE/c1als_slwebview.html. The problem with just /FE/ is that there is no index.html, and we have turned off automatic directory listings.
IIRC, this arrangement was due to the fact that authentication of some of the folders (maybe the wikis) was broken during the nodus upgrade, so there was sensitive information being publicly displayed. This setup gives us discretion over what gets exposed.
As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.
I suppose before directory listings were turned off we should have fixed the script to make an index.html, but that's how it goes with "up"-grades. How about re-allowing directory listings so that our old links for webview work again?
EQ: https://nodus.ligo.caltech.edu:30889/FE is live
In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.
Brief Summary: I am currently looking at the acoustic noise around both arms to see if there are any frequencies from machinery around the lab that stand out and to see what we can remove/change. I am using a Bluebird microphone suspended with surgical tubing from the cable trays to isolate it from vibrations. I am also using a preamp and the SR785 spectrum analyzer, taking 6 sets of data every 1.5 meters (0 to 200Hz, 200Hz to 400Hz, 400Hz to 800Hz, 800Hz to 3200Hz, 3.2kHz to 12kHz, 12kHz to 100kHz).
· Attachment 1 is a PSD of the first 3 measurements (from 0 to 12kHz) that I took every 1.5 meters along the x arm with the preamp and spectrum analyzer
· Attachment 2 is a blrms color map of the first 6 sets of data I took (from 2.4m to 9.9m)
· Attachment 3 is a picture of the microphone setup with the surgical tubing
Problems that occurred: the preamp settings made the first set of data I took significantly smaller than the data I took with the 0dB button off, and the spectrum analyzer was reading only from -50 to -50 dBVpk.
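For reference, the BLRMS color map (Attachment 2) can be assembled from the exported analyzer traces roughly like this (the file names and two-column format here are assumed for illustration, not the actual analysis script):

import numpy as np
import matplotlib.pyplot as plt

positions_m = np.arange(2.4, 10.0, 1.5)   # microphone positions along the X arm
bands = [(0, 200), (200, 400), (400, 800), (800, 3200), (3200, 12000), (12000, 100000)]  # Hz

blrms = np.zeros((len(bands), len(positions_m)))
for j, pos in enumerate(positions_m):
    # assumed two-column ASCII export from the analyzer: frequency [Hz], ASD [V/rtHz]
    f, asd = np.loadtxt('mic_scan_%.1fm.txt' % pos, unpack=True)
    for i, (f1, f2) in enumerate(bands):
        sel = (f >= f1) & (f < f2)
        blrms[i, j] = np.sqrt(np.trapz(asd[sel]**2, f[sel]))   # band-limited RMS

plt.pcolormesh(positions_m, np.arange(len(bands)), blrms, shading='auto')
plt.xlabel('Position along X arm [m]')
plt.ylabel('Frequency band index')
plt.colorbar(label='Band-limited RMS [V]')
plt.show()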
Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached plot shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.
Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
This was done by adding "Options +Indexes" to /etc/apache/sites-available/nodus
I've added a little more info about the apache configuration on the wiki: ApacheOnNodus
Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.
Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the Satellite Box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.
Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.
I need to think about whether this is just coincidence, or if my re-enabling the damping has something to do with the recurrence of the glitching...
Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail; I will continue to debug this.
We should be able to connect to this station
Gautam and Steve,
It is hanging in the middle of the PSL enclosure. Wired to 1X1 to get plus and minus 15V through a fuse. Its output is connected to FB c17 input.
GV: C17 corresponds to "MIC 1" in the PEM model. So the output is saved as "C1:PEM-MIC_1_OUT_DQ"
I added an EM172 to my soldered circuit and it seems to be working so far. I have taken spectra using the EM172 in ambient noise in the control room as well as in white noise from Audacity. My computer's speakers are not very good so the white noise results aren't great, but this was mainly to confirm that the microphone is actually working.
Two day plot of glitching suspensions: MC3, ITMY and ETMX
On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum, and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.
ETMY He/Ne 1103P body temp is ~45 C. The laser was seated loosely in the V-mount with black rubber padding.
The enclosure has a stinky plastic smell from this black plastic. This laser was installed on Oct 5, 2016. See the 1 year plot.
Oplev servo turned off. Thermocouple attached to the He/Ne.
It will be replaced tomorrow morning.
System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today.
Steve, Craig, Gautam
Today Steve replaced the ETMY He/Ne OpLev laser (sr P919645) with sr P947049, and Craig realigned it using new AR coated lenses.
Attached are the RIN of the OpLev QPD Sum channels. The ETMY OpLev RIN is much lower than when Gautam took the same measurement yesterday.
Also attached are the pitch and yaw OLG TFs to ensure we still have acceptable phase margins at the UGF.
The last three plots show the optical layout of the ETMY OpLev, a QPD reflection blocker we added to the table, and green light to ETMY not being blocked by any changes to the OpLev.
> ETMY He/Ne body temp is ~45 C. The laser was seated loosely in the V-mount with black rubber padding.
This is probably just a confirmation of something we discussed a couple of weeks back, but I wanted to get more familiar with using the multi-coherence (using EricQ's nice function from the pynoisesub package) as an indicator of how much feedforward noise cancellation can be achieved. In particular, in light of our newly improved WFS demod/whitening boards, I wanted to see if there was anything to be gained by adding the WFS to our current MCL feedforward topology.
I used a 1 hour data segment - the channels I looked at were the vertex seismometer (X, Y, Z) and the pitch and yaw signals of the two WFS - and computed the coherence of the uncorrelated part of these multiple witnesses with MCL. I tried a few combinations to see what the theoretical best achievable subtraction is.
The attached plot suggests that there is negligible benefit from adding the WFS in any combination to the MCL feedforward, at least from the point of view of theoretical achievable subtraction.
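For reference, a bare-bones version of the multi-coherence estimate (this is just a sketch of the underlying math, not EricQ's pynoisesub function; target/witness channels are assumed to be numpy arrays sampled at fs):

import numpy as np
from scipy import signal

def multi_coherence(target, witnesses, fs, nperseg=2**12):
    # Welch PSD of the target and all cross/auto spectra of the witnesses
    f, Pxx = signal.welch(target, fs=fs, nperseg=nperseg)
    n = len(witnesses)
    Pxy = np.zeros((n, len(f)), dtype=complex)
    Pyy = np.zeros((n, n, len(f)), dtype=complex)
    for i in range(n):
        _, Pxy[i] = signal.csd(target, witnesses[i], fs=fs, nperseg=nperseg)
        for j in range(n):
            _, Pyy[i, j] = signal.csd(witnesses[i], witnesses[j], fs=fs, nperseg=nperseg)
    # multiple coherence gamma^2 = Pxy^H Pyy^-1 Pxy / Pxx at each frequency
    gamma2 = np.zeros(len(f))
    for k in range(len(f)):
        gamma2[k] = np.real(Pxy[:, k].conj() @ np.linalg.solve(Pyy[:, :, k], Pxy[:, k])) / Pxx[k]
    # best-case residual spectrum after ideal subtraction of these witnesses
    return f, gamma2, Pxx * (1.0 - gamma2)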
I also wanted to put up a plot of the current FF filter performance, for which I collected 1 hour of data tonight with the FF on. While the feedforward does improve the MCL spectrum, I expected better performance judging by previous entries in the elog, which suggest that the FIR implementation almost saturates the achievable lower bound. The performance seems to have degraded particularly around 3Hz, despite the multi-coherence being near unity at these frequencies. Perhaps it is time to retrain the Wiener filter? I will also look into installation of the accelerometers on the MC2 chamber, which we have been wanting to do for a while now...
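As for retraining, a rough single-witness illustration of the least-squares FIR Wiener fit (the real MCL feedforward filters are made with a more careful multi-witness routine; this just shows the idea):

import numpy as np

def train_fir_wiener(witness, target, ntaps=512):
    # design matrix of lagged witness samples; h minimizes |target - W h|^2
    N = len(target)
    rows = N - ntaps + 1
    W = np.zeros((rows, ntaps))
    for k in range(ntaps):
        W[:, k] = witness[ntaps - 1 - k : N - k]
    h, *_ = np.linalg.lstsq(W, target[ntaps - 1:], rcond=None)
    return h   # FIR taps; the feedforward prediction is np.convolve(witness, h)[:N]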
LDAS has not recovered from maintenance causing the pages to remain unavailable until further notice.
> System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today.
We rebooted c1psl, c1iscaux and c1aux which were all showing the typical symptom of responding to ping but not to telnet (and also blanked out epics fields on the MEDM screens). Keyed all these crates.
Restored burt snapshots for c1psl, PMC locked fine, and IMC is also locked now.
Johannes forgot to elog this yesterday, but he rebooted c1susaux following the usual procedure to avoid getting ITMX stuck.
To measure the modulation depth of the 29.5 MHz sideband, we plan to connect a bidirectional coupler between the EOM and the triple resonant circuit box. This will let us measure the power going into the EOM and the power in the reflection. According to the manual for the EOM (Newport 4064), the modulation depth is 13 mrad/V at a wavelength of 1000 nm. Before disconnecting these we will turn off the Marconi.
Hopefully we can be gentle enough that the EOM can be realigned without too much trouble. Before touching anything we'll measure the beam power before and after the EOM so we know what to match after.
If anyone has an objection to this plan, speak now or we will proceed tomorrow morning.
I'm afraid that the bidirectional coupler, designed to be 50ohm in/out, disturbs the resonant circuit designed for the EOM which is almost purely capacitive.
One possible way could be to measure the transfer function using the active FET probe from the triple resonant input to the output with the EOM attached.
Another way: how about measuring the reflection before the resonant circuit? Then, of course, there is the triple resonant interface circuit between the power combiner and the EOM. In this case, we will see how much power is consumed in the EOM and the resonant circuit. Then we can use the previous measurement to see the conversion factor between the power consumption and the modulation depth. Kiwamu may give us his measurement.
Just collecting some links from my elog searching today here for easy reference later.
I couldn't find any details of the actual measurement technique, though perhaps I just didn't look for the right keywords. But Koji's suggestion of measuring powers with the bi-directional coupler before the triple resonant circuit (but after the power combiner) should be straightforward.
The 3 pieces of Sapphire v-groove test cuts are back. They look good. The suspension wire 0.0017" ( 43 micron ) is shown in some of the pictures.
Very nice! I got excited.
Rebooted c1iscaux, c1auxex and c1auxey which were all not responding to telnet. The watchdogs for the ETMs were turned off and then I keyed all 3 crates. All slow machines are responding to telnet now. Both green lasers locked to the arms so I didn't do any burt restore.
Just FYI I'm running a test of updated daqd code on fb1.
fb1 has its own fiber to the daq network switch, so nothing had to be modified to do this test. This *should* not affect anything in the rest of the system, but as we all know these are famous last words.... If something is going haywire, and you can't get in touch with me and can't figure out what else to do, you can just log on to fb1 and shut it down. It's not writing any data to any of the network filesystems.
The daqd code under test is from the latest advLigoRTS 3.2.1 tag, which has daqd stability fixes that will hopefully address the problems we were seeing last time I tried this upgrade. We'll see...
I'm going to let it run over the weekend, and will check in periodically.
I'm not sure if this is related, but since today morning, I've noticed that the data concentrator errors have returned. Looking at daqd.log, there is a 1 second timing mismatch error that is being generated. Usually, manually running ntpdate on the front ends fixes this problem, but it did not work today.
The coil and PD BLRMS are useful tools in identifying when glitches occur in the PD readout, so I thought it would be good to install them for ITMY, ETMX and SRM (since I plan to switch the MC3 satellite box, which we suspect to be problematic, with the SRM one). For this purpose, I had to install some IPC SHMEM blocks in C1SUS and recompile. 24 IPC channels were added to pipe the coil, PD and Oplev signals from C1SUS to C1PEM - the recompilation went smoothly, and it doesn't look like the model computation time has increased significantly or that the model is any closer to timing out.
However, I was unable to install the BLRMS blocks in C1PEM: when I tried to compile the model with BLRMS for these extra 24 channels, I got a compilation error saying that I had exceeded the maximum allowed 499 testpoints per channel. Is there any workaround to this? It would be possible to create a custom BLRMS block that doesn't have all those testpoints; maybe this is the way to go? Especially if we want to install these channels for all our SOS optics, and also replace the current Seismic BLRMS with this scheme for consistency?
GV edit: I have implemented this scheme - after backing up the original BLRMS_2k part, I made a new one with no testpoints and only EPICS readouts. Doing so allowed me to recompile c1pem without any issues; the CPU time seems to have gone up by 3us from ~55us to ~58us. So the BLRMS data record is only available at 16Hz, since there are no DQ channels in the BLRMS block - do we want these in any case? Let's see how this does over the weekend...
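For anyone who hasn't looked inside the block: an offline sketch of what each BLRMS channel computes (bandpass, square, low-pass average, square root) - this is not the RCG block itself, just the equivalent math:

import numpy as np
from scipy import signal

def blrms(x, fs, f_lo, f_hi, f_avg=0.1):
    # bandpass the raw time series to the band of interest
    sos_bp = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    xb = signal.sosfilt(sos_bp, x)
    # square, then low-pass to form a smoothed mean-square estimate
    sos_lp = signal.butter(2, f_avg, btype='lowpass', fs=fs, output='sos')
    ms = signal.sosfilt(sos_lp, xb**2)
    return np.sqrt(np.clip(ms, 0, None))

# e.g. a >30 Hz band of a 2k sensor channel: blrms(sensor_data, fs=2048, f_lo=30, f_hi=100)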
We set out to measure the 29.5 MHz power going to the EOM today but decided to start by looking at the output of the RF AM stabilizer box first. We wanted to measure the AM noise with a mixer, so we needed to know the power it was giving. We looked at the output that goes to the power combiner on the PSL table and found it was putting out only -2.0 dBm (~0.5 Vpp)! This was measured by taking a spectrum with the AG4395 and confirmed by looking on a scope.
To find out if this could be adjusted, we found an old MEDM screen (/opt/rtcds/caltech/c1/medm/c1lsc/master/C1LSC_RFADJUST.adl) and moved the 29.5 MHz EOM Mod Index Adjust slider while measuring the voltage coming in to the MOD CONTROL connection on the front of the AM stabilizer box. Moving the slider from 0 to 10 changes the input voltage linearly from -10 V to 10 V measured with a DMM at the cross-connects as we couldn't find an appropriate adapter for the LEMO cable. The 29.5 MHz modulation only appeared for slider values between 0 and 5, after which it abruptly shuts off. However, changing the slider value between 0 and 5 (Voltage from -10 to 0) does not change the amplitude of the output.
This seems like a problem; further investigation into the AM stabilizer box is necessary. This DCC document outlines how to test the box, but we can't find a schematic. Since we don't have any mixers that can handle signals as small as -2 dBm, we gave up trying to measure the AM noise and will attempt to measure that and the reflection power from the EOM + resonant circuit once this problem has been diagnosed and fixed.
GV: After some digging, I found the schematic for the RF AM stabilization box (updated wiki and added it to the 40m document tree). According to it, there should be up to +22dBm of RF AM stabilized output to the EOM available, though we measured -2dBm yesterday, and could not vary this level by adjusting the EPICS voltage value. Neglecting losses in the cabling and the power combiner on the PSL, this translates to a paltry 0.178Vrms*0.6*8mrad/Vrms ~ 0.85 mrad of modulation depth (gain at 29.5 MHz of the triple resonant circuit taken from this elog)... I think we need to pull this 1U chassis out and debug more thoroughly...
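For the record, the arithmetic behind those numbers (the 0.6 x 8 mrad/Vrms factor is the one quoted from the referenced elog, not re-measured here):

import numpy as np

P_dBm = -2.0                          # measured RF level into 50 ohm
P_W = 1e-3 * 10**(P_dBm / 10.0)       # ~0.63 mW
V_rms = np.sqrt(P_W * 50.0)           # ~0.178 Vrms
V_pp = 2 * np.sqrt(2) * V_rms         # ~0.50 Vpp, consistent with the scope measurement
m = V_rms * 0.6 * 8e-3                # rad; ~0.85 mrad modulation depth
print(V_rms, V_pp, m)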
Some more details of our investigation:
If this problem started before ~4pm on Friday then it's probably unrelated, since I didn't start any of these tests until after that. If unexplained problems persist then we can try shutting off the fb1 daqd and see if that helps.
I just aborted the fb1 test and reverted everything to the nominal configuration. Everything looks to be operating nominally. Front ends are mostly green except for c1rfm and c1asx which are currently not being acquired by the DAQ, and an unknown IPC error with c1daf. Please let me know if any unusual problems are encountered.
The behavior of daqd on fb1 with the latest release (3.2.1) was not improved. After turning on the full pipe it was back to crashing every 10 minutes or so when the full and second trend frames were being written out. lame. back to the drawing board...
We pulled out the RF AM stabilization box from the 1X2 rack. PSL shutter was closed, marconi output, RF distribution box and RF AM stabilization box were turned off in that order. We had to remove the 4 rack nut screws on the RF distribution box because of the stiff cables which prevented the RF AM stabilization box extraction. I've left the marconi output and the RF distribution boxes off, and have terminated all open SMA connections with 50 ohm terminators just in case. Rack nuts for RF distribution box have been removed, it is currently sitting on a metal plate that is itself screwed onto the rack. I deemed this a stable enough ledge for the box to sit on in the short run, while we debug the RF AM stabilization box. We will work on the debugging and re-install the box as soon as we are done...
We looked at the RF AM stabilizer box to see if we could find out 1) Why the output power is so low, and 2) Why it can't be changed with the DC input "MOD CONT IN." Details to follow, attached is the annotated schematic from DCC document D000037.
We are not returning the box tonight so the PSL shutter remains closed.
I think this cron job is running on NODUS (our gateway) instead of our scripts machine:
*/1 * * * * /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh >> /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log 2>&1
Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected.
The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)
Moreover, this script has a 90MB log file full of errors from not finding its channel.
I wish this script was in python instead of BASH and I wish it would run on megatron instead of nodus (why can't megatron send us email too?) and I wish that this log file would get wiped out once in a while. Currently it's been spitting out errors since at least a month ago:
Tue Jan 31 14:10:02 PST 2017 : N2 Pressure:
Channel connect timed out: 'C1:Vac-N2pres' not found.
(standard_in) 1: syntax error
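For what it's worth, a python version of the check could look something like this (a hypothetical sketch using pyepics caget; the channel name is the one from the log above, and the addresses/threshold would come from the real script):

import smtplib
from email.message import EmailMessage
from epics import caget

THRESHOLD_PSI = 60.0
pressure = caget('C1:Vac-N2pres', timeout=5.0)
if pressure is None:
    # channel not reachable -- report once rather than filling the log with bc errors
    print('C1:Vac-N2pres not reachable')
elif pressure < THRESHOLD_PSI:
    msg = EmailMessage()
    msg['Subject'] = 'N2 pressure low: %.1f psi' % pressure
    msg['From'] = 'controls@nodus'            # placeholder addresses
    msg['To'] = 'someone@example.com'
    msg.set_content('N2 pressure is %.1f psi (below %.0f psi).' % (pressure, THRESHOLD_PSI))
    with smtplib.SMTP('localhost') as s:
        s.send_message(msg)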