c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.
Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.
Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with the red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and the Yarm is locked for Jon to work on his IFO coupling study. We will monitor the stability in the coming days.
The plan is (i) to replace the old-generation ADC card in the expansion chassis, which has a red indicator light always on, and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.
Looks like the ADC was not to blame, same symptoms persist.
The PMC and IMC were unlocked. Both were re-locked, and the alignment of both cavities was adjusted so as to maximize MC2 trans (by hand: input alignment to the PMC tweaked on the PSL table, IMC alignment tweaked using the slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. The c1lsc models have crashed again, but I am not worrying about that for now.
9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.
[steve, yuki, gautam]
The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...
Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...
** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, this seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.
This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, we switched the ends while routing the fiber on the overhead cable tray (because the "HOST" end of the cable is close to the reel, and we felt it would be easier to do the routing the other way).
Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.
[steve, koji, gautam]
We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.
Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.
However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.
Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.
While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps; for the various valve closures, see the slow machine reset procedure elog):
However, we were foiled by a Philips screw on the DB37 connector labelled "MAG BRG", whose head was completely stripped. We had to make a cut in this screw using a saw blade and use a flathead screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request that the company send us a new one; we will replace it when re-installing the controller after its maintenance.
Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.
We need to set up a copy of the c1asx model (which currently runs on c1iscex), to be named c1asy, on c1iscey for the green steering PZTs. The plan discussed at the meeting last Wednesday was to rename the existing model c1tst into c1asy, and recompile it with the relevant parts copied over from c1asx. However, I suspect this will create some problems related to the "dcuid" field in the CDS params block (I ran into this issue when I tried to use the dcuid for an old model which no longer exists, called c1imc, for the c1omc model).
From what I can gather, we should be able to circumvent this problem by deleting the .par file corresponding to the c1tst model, living at /opt/rtcds/caltech/c1/target/gds/param/, renaming the model to c1asy, and recompiling it. But I thought I should post this here to check if anyone knows of other potential conflicts that will need to be managed before I start poking around and breaking things. Alternatively, there are plenty of cores available on c1iscey, so we could just set up a fresh c1asy model...
I've been plugging away in Altium, prototyping the high-voltage bias idea; this is meant to be a progress update.
I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:
I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit, I'll try and get some input from Rich.
*Updated with the output current noise (Attachment #2) for this topology, with a series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I included the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1 pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution of ripple (assumed flat in ASD) in the HV power supply (i.e. the voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
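As a sanity check on the quadrature sum described above, a minimal sketch (the LTspice number here is a made-up stand-in for the exported .noise ASD, not the real simulation output):

import numpy as np

kB = 1.380649e-23   # Boltzmann constant [J/K]
T  = 298            # assumed ambient temperature [K]
R  = 25e3           # series resistance [ohm]

# Johnson current noise of the 25k series resistor [A/rtHz]
i_johnson = np.sqrt(4 * kB * T / R)   # ~0.8 pA/rtHz

# stand-in for the LTspice-modeled output current noise ASD at some
# frequency in the band of interest (placeholder, NOT the real export)
i_ltspice = 0.3e-12                   # [A/rtHz]

# total output current noise: resistor Johnson noise added in quadrature
i_total = np.hypot(i_ltspice, i_johnson)
print(f"Johnson: {i_johnson*1e12:.2f} pA/rtHz, total: {i_total*1e12:.2f} pA/rtHz")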
We have been working on double checking the noise budget calculations. We wanted to evaluate the amount of squeezing for a few different scenarios that vary in cost and time. Here are the findings:
All calculations done with
Main unbudgeted noises:
Threat matrix has been updated.
What about just copying the Xend layout? I think it has good MM (per calculations), reasonable (in)sensitivity to component positions, good Gouy phase separation, and I think it is good to have the same layout at both ends. Since the green waist has the same size and location in the doubling crystal, it should be possible to adapt the X end solution to the Yend table pretty easily I think.
The setup I designed is here. It achieves 100% mode-matching and good separation of the TEM01 degrees of freedom; however, I found a problem. The setup is pictured in Attachment #3. You can see that the reflection angles at Y7 and Y8 are not appropriate. I will reconsider the schematic.
To facilitate Yuki's alignment of the EY green beam into the Yarm cavity, I have changed the LSC triggering and PowNorm settings to use only the reflected light from the cavity to do the locking of Arm Cavity length to PSL. Running the configure script should restore the usual TRY triggering settings. Also, the X arm optics were macroscopically misaligned in order to be able to lock in this configuration.
While pointing Yuki to the c1asx servo system, I noticed that the filter file for c1asx is missing from the usual chans directory. Why? Backups for it exist in the filter_archive subdirectory, but there is no current file. Clearly this doesn't seem to affect the realtime code execution, as the ASX model seems to run just fine. I copied the latest backup version from the archive area into the chans directory for now.
Setting up c1asy:
Now Yuki can work on copying the simulink model (copy c1asx structure) and implementing the autoalignment servo.
Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.
Some facts which should be considered when doing this measurement and the associated uncertainty:
This result has an uncertainty of about 40% for the XARM and 33% for the YARM (so big...).
The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. AFAIK, Aaron was the last person to work on the IFO, so I'm not taking any further debugging steps so as to not disturb his setup.
Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see whether the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues, and the c1vac1 computer is still responsive.
But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.
Is there a reason why extender cards shouldn't be stuck into eurocrates?
The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work at optical tables with. The actual battery diagnostics (using upower) don't report any errors.
Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey).
The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD. In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue anyways soon, we can do a touch-up on IMC WFS.
I removed the DC PD used for loss measurements. I found that the AS beam path was disturbed - the alignment will need to be redone, which just makes it more work to get back to IFO locking, as I have to check the alignment onto the AS55 and AS110 PDs.
Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup.
PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good, locks hold for ~20 minutes at a time and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config.
After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attempted DRMI locking, but had little success; I'm going to try a little later in the evening, so I'm leaving the IFO with the DRMI flashing about, LSC mode off.
I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC.
[Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3.
[Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night.
[Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD.
Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet).
What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop?
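For what it's worth, a minimal sketch of the time-domain Wiener option (my own toy implementation, assuming a stationary coupling; witness = SRCL error, target = MICH error):

import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(witness, target, ntaps):
    # FIR Wiener filter: predict `target` from `witness` by solving the
    # normal equations R a = p, where R is the witness autocorrelation
    # (Toeplitz) matrix and p the witness-target cross-correlation.
    n = len(witness)
    r = np.array([np.dot(witness[:n-k], witness[k:]) for k in range(ntaps)]) / n
    p = np.array([np.dot(witness[:n-k], target[k:]) for k in range(ntaps)]) / n
    return solve_toeplitz((r, r), p)

# toy usage: witness leaks into target through a 2-tap coupling
rng = np.random.default_rng(0)
w = rng.standard_normal(100000)
d = 0.5 * w + 0.2 * np.roll(w, 1) + 0.05 * rng.standard_normal(100000)
a = wiener_fir(w, d, ntaps=8)
residual = d - np.convolve(w, a)[:len(d)]
print(np.std(d), np.std(residual))   # residual should be much smaller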
This problem resurfaced. I'm doing the debugging.
6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.
With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively; the cancellation condition is written out after the numbers below). Then I transferred these to the LSC output matrix (old numbers in brackets).
MICH--> PRM = -0.335 (-0.2655)
MICH--> SRM = -0.35 (+0.25)
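For the record, the condition being tuned here, in my own notation (x denotes the drive applied to each optic):

\[
\frac{\partial\,{\rm PRCL}}{\partial x_{\rm BS}} + M_{{\rm MICH}\to{\rm PRM}}\,\frac{\partial\,{\rm PRCL}}{\partial x_{\rm PRM}} = 0
\quad\Longrightarrow\quad
M_{{\rm MICH}\to{\rm PRM}} = -\,\frac{\partial\,{\rm PRCL}/\partial x_{\rm BS}}{\partial\,{\rm PRCL}/\partial x_{\rm PRM}}
\]

and similarly for MICH --> SRM with SRCL.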
I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - it looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.
The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output.
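Roughly, the offline demodulation looks like this (a sketch; the sampling rate and line frequency here are placeholders, not the actual values used):

import numpy as np
from scipy.signal import windows

fs = 16384       # sampling rate [Hz] (assumed)
f_line = 311.1   # drive line frequency [Hz] (placeholder)
tchunk = 5       # chunk length [s]

def demod_chunks(x, fs, f_line, tchunk):
    # mix a time series against a cosine/sine LO and average per
    # Tukey-windowed chunk; each element of the output is the complex
    # line amplitude/phase estimate for one chunk
    nchunk = int(tchunk * fs)
    nseg = len(x) // nchunk
    win = windows.tukey(nchunk, alpha=0.5)
    t = np.arange(nchunk) / fs
    lo = np.cos(2 * np.pi * f_line * t) - 1j * np.sin(2 * np.pi * f_line * t)
    out = []
    for i in range(nseg):
        seg = x[i*nchunk:(i+1)*nchunk] * win
        out.append(2 * np.mean(seg * lo) / np.mean(win))
    return np.array(out)

# e.g. demod_chunks(mich_err, fs, f_line, tchunk) over ~5 min gives ~60 points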
Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.
I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.
[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.
[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.
[Attachment #3] - Histogram of the data in Attachment #2.
[Attachment #4] - Spectrogram of the duration in which the data in #2 and #3 were collected, to investigate the occurrence of fast glitches.
Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)
Measurement details and next steps:
Attachments #2 and #3
This problem resurfaced, which I noticed when I couldn't get the single arm locks going.
The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.
Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.
Prep for this work:
I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore. I'll try fixing this during the daytime.
I've begun prepping the IFO for the vent, and completed most of the IFO related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go ahead. PSL shutter is closed for tonight.
Following the checklist, I did these:
@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19).
As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.
I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay, after minor touchup, the MC Trans was ~1200 cts which is roughly what it was pre-vent. I've closed the PSL shutter again.
Attachment #1 is a block diagram depicting the pathway by which the vertex DOF control signals can couple into DARM (adapted from a similar diagram in Gabriele's Virgo note on the subject). I've also indicated some points where noise can couple into either loop. In general, there are sensing noises that couple in at the error point of the loop, and actuation noises that couple in at the control point. In this linear picture, each block represents a (possibly time varying) transfer function. So we can write out the node-to-node transfer functions and evaluate the various couplings.
The motivation is to see if we can first simulate with some realistic noise and time-varying couplings (and then possibly test on the realtime system) the effectiveness of the filter denoted by "FF" in canceling out the shot noise from the auxiliary loop being re-injected into the DARM loop via the DARM sensor. Does this look correct?
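To make the question concrete, here is my reading of the cancellation condition in the linear picture above (my own notation, so please correct me if the topology is different): if the auxiliary control signal u_aux reaches the DARM error point through a coupling transfer function C, and the FF filter feeds u_aux to the DARM actuator A_D, then the re-injected sensing noise at the DARM error point scales as (C + FF*A_D) u_aux, which vanishes for

\[
{\rm FF} = -\,C / A_{\rm D}.
\]

If C drifts in time, FF has to track it - hence the interest in adaptive FF.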
With Chub's help, I've setup a mini cleanroom at EY - Attachment #1. The HEPA unit is running on high now. All surfaces were wiped with isopropanol, we can wipe everything down again on Monday and replace the foil.
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because of the lack of plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are also some LEMO/BNC cables on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done, and what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
[koji, gautam, jon, steve]
I wanted to set up an RTCDS model to understand this problem better. Attachment #1 is the simulink diagram of the signal flow. The idea will be to put the appropriate filter shapes into the various filter blocks denoting the DARM and auxiliary DoF plants, controllers and actuators, and then use awggui / diaggui to inject some noises and see if, in this idealized model, I can achieve good subtraction. Then we can build up to applying a time-varying cross coupling between DARM and the vertex DoF, and see how well the adaptive FF works. I still need to set up some MEDM screens to make working with the test system easier.
I figured c1omc would be the least invasive model to set this up on, without risking the loss of any of our IR/green alignment references. Compile and install went smoothly, see Attachment #2. The c1omc model was clocking 4us before; now it's using 7us.
Attachment #3 shows the top level of the OMC model, while Attachment #4 shows the MEDM screen.
* Note to self: when closing a loop inside the realtime model, there has to be a delay block somewhere in the loop, else a compilation error is thrown.
Recently we wondered at the meeting what the IMC round-trip loss was. I had done several ringdowns in the winter of 2017, but because the incident light on the cavity wasn't being extinguished completely (the AOM 0th-order beam is used), the full Isogai et al. analysis could not be applied (there were FSS-induced features in the reflection ringdown signal). Nevertheless, I fitted the transmission ringdowns. They looked like clean exponentials, and judging by the reflection signals (see previous elogs in this thread), the first ~20us of data is a clean exponential, so I figured we may get some rough value of the loss by just fitting the transmission data.
The fitted storage time is ~60 usec. However, this number isn't commensurate with the 40m IMC spec of a critically coupled cavity with 2000 ppm transmissivity for the input and output couplers.
Attachment #1: Expected storage time for a lossless cavity, with round-trip length ~27m. MC2 is assumed to be perfectly reflecting. The IMC length is known to better than 100 Hz uncertainty because the Marconi RF modulation signal is set accordingly. For the 40m spec, I would expect storage times of ~40 usec, but I measure almost 30% longer, at ~60 usec.
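For reference, the small-loss expression I'm using relates the (field amplitude) storage time to the round-trip time and the total round-trip attenuation; note the power ringdown decays twice as fast:

\[
\tau \;\approx\; \frac{2\,T_{\rm rt}}{T_1 + T_2 + \Lambda} \;=\; \frac{2\,L_{\rm rt}/c}{T_1 + T_2 + \Lambda}
\]

With L_rt ~ 27 m (T_rt ~ 90 ns) and the spec T1 = T2 = 2000 ppm, Lambda = 0, this gives tau ~ 45 usec. A measured tau of ~60 usec would instead imply T1 + T2 + Lambda ~ 3000 ppm, i.e. less total round-trip attenuation than the bare spec transmissivities - which is why the fitted number is puzzling, since loss can only add to the denominator.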
Attachment #2: Fits and residuals from the 10 datasets I had collected. This isn't a super informative plot because there are 10 datasets and fits, but to the eye the fits are good, and the diagonal elements of the covariance matrix output by scipy's curve_fit back this up. The function used to fit the t > 0 portions of these signals (the light was extinguished at t = 0 by actuating on the AOM) is A*exp(-t/tau), where A and tau are the fitted parameters. In the residuals, the same artefacts visible in the reflection signal are seen.
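The fit itself is the standard scipy recipe; a self-contained sketch, with synthetic data standing in for a real ringdown:

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau):
    # single exponential fitted to the t > 0 transmission data
    return A * np.exp(-t / tau)

# synthetic stand-in for one of the 10 transmission datasets
t = np.linspace(0, 300e-6, 2000)
data = ringdown(t, 1.0, 60e-6) + 0.01 * np.random.randn(t.size)

popt, pcov = curve_fit(ringdown, t, data, p0=(1.0, 40e-6))
tau, tau_err = popt[1], np.sqrt(pcov[1, 1])   # 1-sigma from the covariance diagonal
print(f"tau = {tau*1e6:.1f} +/- {tau_err*1e6:.2f} us")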
Attachment #3: Scatter plot of the data. The widths of the circles are proportional to the fit errors on the individual measurements (I just scaled the marker size arbitrarily to be able to visually see the difference in uncertainty; the width doesn't exactly indicate the error), while the dashed lines are the global mean and +/- 1 sigma levels.
Attachment #4: Cavity pole measurement. Using this, I get a much more believable estimate of the loss.
In the latest installment in this puzzler: turns out that maybe the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out is real, and is a feature of the way our two N2 cylinder lines/regulators are set up (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then have just 1 cylinder hooked up. When we went into the drill-press area, we heard a hiss; it turns out that one of the cylinders is leaking (to be fair, this was labelled, but I thought it isn't great to have a higher N2 concentration in an enclosed space). Since we don't need any actuation ability, I valved off the leaky cylinder, and disconnected the other, properly functioning one. Attachment #1 shows the current state.
I started putting together some code to implement some ideas we discussed at the Tuesday meeting here. The pipeline isn't set up yet, but I think it's commented okay, so if people want to play around with it, the code lives on the 40m gitlab.
Initial results and conclusions:
There still seems to be some data quality issues with the ringdown data I have, so I don't think we really gain anything from running this analysis on the data I have already collected - but in the future, we can do the ringdown with complete extinguishing of the input light, and repeat the analysis.
As for whether we should clean the IMC mirrors - I'm going to see how much power comes out at the REFL port (with PRM aligned) this afternoon, and compare to the input power. This technique suffers from uncertainty in the Faraday insertion loss, isolation and IMC parameters, but I am hoping we can at least set a bound on what the IMC loss is.
Both were measured using the FieldMate power meter. I was hesitant to use the Ophir power meter as there is a label on it that warns against exceeding 100 mW. I can't find anything in the elog/wiki about the measured insertion loss / isolation of the input Faraday, but this seems like a pretty low amount of light to get back from the PRM. The IMC visibility using the MC_REFL DC values is ~87%. Assuming perfect transmission of the 87% of the 97 mW that's coupled into the IMC, and assuming a further 5% loss between the Faraday rejected port and the AP table, the Faraday insertion loss would be ~30%. Realistically, the IMC transmission is lower. There is also some part of the light picked off for IPPOS. Judging by the shape of the REFL spot on the camera, it doesn't look clipped to me.
Either way, it seems like we are only getting ~half of the 1 W we send in back from the PRM. So maybe it's worth investigating the situation in the IOO chamber during this vent.
The c1psl, c1susaux, c1iool0, and c1aux crates were keyed. Also, the physical shutter on the PSL NPRO, which was closed last Monday for the Sundance crew filming, was opened and the PMC was locked. The PMC remains locked, but there is no light going into the IMC.
Disclaimer: This is almost certainly some user error on my part.
I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.
I wanted to measure some transfer functions in the simulated model I set up.
To see if this is just a feature of the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and ran into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and got the same error, so this must be something I'm doing wrong in the way the measurement is being set up / run. I couldn't find any mention of similar problems in the SimPlant elogs I looked through; does anyone have an idea as to what's going on here?
* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)? But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.
The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary, since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that, while the measurement ran, the foton TF matches the DTT-measured counterpart.
11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency etc, but run into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).
NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:
sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests
I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages library.
Let's install Jamie's new Data Viewer
I found that the BS/PRM OL SUM channels were reading close to 0. So I went to the optical table, and found that there was no beam from the HeNe. I tried power-cycling the controller; there was no effect. From the trend data, it looks like there was a slow decay over ~400000 seconds (~5 days) and then an abrupt shutoff. This is not ideal, because we would have liked to use the Oplevs as a DC alignment reference during the vent. I plan to use the AS camera to recover some sort of good Michelson alignment, and then if we want to, we can switch out the HeNe.
*How can I export PDF from NDscope?
After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.
I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines so I will wait for Koji before debugging further. All the "Danger" signs at the VEA entry points aren't on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The Red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...
I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide on the next course of action...
In order to see the AS beam a bit more clearly in our low-power config, I swapped out the ND=1.0 filter on the AS camera for ND=0.5.
We looked into the /frames situation a bit tonight. Here is a summary:
Plan of action:
BTW - the last chiara (shared drive) backup was October 16, 6 am. dmesg showed a bunch of errors; Koji is now running fsck in a tmux session on chiara, so let's see if that repairs the errors. We missed the opportunity to swap in the 4TB backup disk, so we will do this at the next opportunity.
I'm running a script that moves TT1 and TT2 randomly in some restricted P/Y space to try and find an alignment that gets some light onto the TRY PD. Test started at gpstime 1228967990, should be done in a few hours. The IMC has to remain locked for the duration of this test. I will close the PSL shutter once the test is done. Not sure if the light level transmitted through the ITM, which I estimate to be ~30uW, will be enough to show up on the TRY PD, but worth a shot I figure.
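For the record, the script is essentially a random search of the following form (a sketch only: the channel names, range, and settle time here are hypothetical stand-ins, not the actual values used):

import time
import numpy as np
from epics import caput, caget   # pyepics

# hypothetical channel names for the TT pitch/yaw offsets and the TRY PD
CHANS = ["C1:IOO-TT1_PIT_OFFSET", "C1:IOO-TT1_YAW_OFFSET",
         "C1:IOO-TT2_PIT_OFFSET", "C1:IOO-TT2_YAW_OFFSET"]
TRY_CHAN = "C1:LSC-TRY_OUT16"
LO, HI = -1.0, 1.0               # restricted P/Y search range (placeholder)

rng = np.random.default_rng()
best_try, best_pt = -np.inf, None
for _ in range(1000):
    pt = rng.uniform(LO, HI, size=4)
    for ch, v in zip(CHANS, pt):
        caput(ch, v)             # apply the candidate alignment
    time.sleep(5)                # let the TTs settle
    val = caget(TRY_CHAN)        # check for light on the TRY PD
    if val is not None and val > best_try:
        best_try, best_pt = val, pt
print(best_try, best_pt)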
Test was completed and PSL shutter was closed at 1228977122.
I made a node to collect drawings/schematics for the 40m OMC, added the length drive for now. We should collect other stuff (TT drivers, AA/AI, mechanical drawings etc) there as well for easy reference.
Some numbers FTR: