40m Lab CAD:
Updated the dimensions of the chambers and fleshed them out in greater detail, referring to the engineering drawings that Steve gave me. I have scanned and uploaded most of these drawings to Dropbox in [40mShare]>[40m_cad_models]>[Vacuum Chamber Drawing Scans]. The Excel file "LIGO 40m Parts List" in the [40m Lab CAD] folder also lists the Steve drawings I referenced for the dimensions of each part.
1. Finish details of all chambers.
2. Start placing representative blocks on the optical table.
c1mcs had died for some reason. Looking at dmesg, I see:
None of the other EPICS processes died. Not sure what to make of this. I was at the PSL table working, and had closed the PSL shutter to avoid MC autolocker trying to keep the MC locked while I was mucking about, but this shouldn't have had any effect on an EPICS process?
Anyway, I just logged into c1sus, stopped and restarted the model. IMC locks fine now.
After discussing with Koji, I decided to try to align the input beam polarization at the PSL fiber coupler to one of the principal axes of the PM fiber. The motivation is to try and narrow down the source of the large RF beatnote amplitude drift I noticed and reported last night.
The setup for doing so is shown in Attachment #1 - essentially, I set up one of the newly purchased couplers in a mount, set up a PBS, and placed two photodiodes at the S and P ports of the PBS. The idea is to rotate the input coupler in its mount, thereby maximizing the PER (monitored on two Thorlabs PDA520s - I didn't check their gain balance).
I spent ~30 mins doing some preliminary trials just now, and I was able to achieve a PER of ~1/20. But I think much better numbers were reported in this SURF project (although I'm not entirely sure I understand that measurement). I will spend a little more time tweaking the alignment. The procedure is tricky, as at some point simply rotating the mount reduces the mode-matching efficiency into the fiber so much that it is not possible to get a meaningful PER measurement from the photodiodes. I'm adjourning for now, more to follow...
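For bookkeeping, a minimal sketch of the conversion between the measured power ratio and the PER in dB; V_main and V_leak stand for the (assumed gain-matched) PDA520 voltages at the two PBS ports:

import numpy as np

def per_dB(V_main, V_leak):
    # polarization extinction ratio from the two PD readings
    return 10 * np.log10(V_main / V_leak)

print(per_dB(20.0, 1.0))   # the ~1/20 ratio above corresponds to ~13 dB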
Current configuration of PSL free-space to fiber coupling is:
I had noticed that the RF beat amplitude was fluctuating by up to 20dB as viewed on the control room analyzer. As detailed in my earlier elog, I suspected this to be because of random polarization drift between the PSL and EX fields incident on the fiber-coupled PDs. Since I am confident the problem is optical (as opposed to something funny in the electronics), we'd like to be able to isolate which of the many fiber segments dominates the contribution to this random polarization drift.
Some useful references:
Procedure and details:
Last night I worked at the PSL table on the modulation depth measurement for an aLIGO EOM. Let me know if the IFO behavior is unusual.
What I did was:
aLIGO EOM crystal replacement
aLIGO EOM test: Setup
I have more or less finished cadding the test mass chamber by referring to the drawings Steve gave me. Finer details like lugs and bolts and window flaps can be left for later. Here's a quick render:
This required multiple hard reboots, but it seems like all the RT models are back for now. The only indicator I can't explain is the red DC field on c1oaf. Also, the SUS model seems to be overrunning its cycle time more frequently than usual, though I can't be sure. The "timing" field of this model's state word is RED, while the other models all seem fine. Not sure what could be going on.
Will debug further tomorrow, when I probably will have to do all this again as I'll need to recompile c1lsc for the ALS electronics test with the new ADC card from the differential AA board.
As I had found before, restarting the c1oaf model fixed the DC error. There is, however, still a pesky red indicator light on "ADC0" in c1oaf. Trying to open the ADC MEDM screen to investigate this further leads to the blank screen on the bottom right of Attachment #1. This probably has something to do with the fact that the model has an ADC block (because every model needs one?) but no signals are actually being piped into the model directly from the ADC.
Another observation, though I don't have any hypothesis as to why this was happening: on the c1sus machine, the c1sus model would frequently overrun its cycle time, and then eventually crash. I observed this behaviour at least 3 times between last night and now. The other models seemed fine though; in fact, the IMC stayed locked. Why should this have been the case? It remains to be seen if this was somehow connected to the red DC indicator on c1oaf, though why should that be? Isn't the DC just concerned with writing data to frames? Any sort of IPC should be independent.

Attachment #2 shows that there's been a definite increase in the maximum time on the c1sus clock cycle since yesterday (it's a 10 day minute trend plot of the model clock-cycle timing and also the maximum time). Why? Koji and I did switch off all the Sorensens at the LSC rack for about 30 mins, but why should this affect anything at 1X6? There are no red lights in either the c1lsc or c1sus expansion chassis.

Curiously, the PRM also seems to be glitchy - as I'm sitting in the control room, I sporadically see a spot flashing vertically across the REFL CRT monitor. Note that nominally, with PRM misaligned, the REFL CRT should be dark. dmesg on c1sus doesn't shed any light on the issue.
Seems like some high level voodoo.
I was forced into a simultaneous power-cycle rebooting of the three vertex FEs just now. I took the opportunity to completely disconnect the c1sus expansion chassis from all power and then restart it.
Everything is back up right now, and the weird timing issues I noticed in the sus model seem to be gone now (I'll need a longer baseline to be sure and I'll post a trend of the CPU timing tomorrow). It's disconcerting that apparently the only way to get everything back up and running is the nuclear option of power-cycling all FE related electronics. I was considering borrowing an ADC adapter card from the Y end and measuring the calibrated IR ALS noise with the digital system, but if I'm going to have to go through this whole dance each time I do a model recompile on c1lsc (which I'm going to have to in order to get the extra ADC recognized), I'm wondering if it's just better to wait till we get the new adapter cards we ordered. I think I'm going to work on tuning the input coupling into the fiber at EX in the next couple of days instead.
1. Optical Table Layout
I had discussed with Koji a way to record the coordinates of optical table equipment in a text file and load them into SolidWorks. The goal is to make it easier to move things around on the table in the CAD. While I have succeeded in importing coordinates through txt files, there is still a lot of tedium in converting these points into sketches. Furthermore, the task has to be redone every time a coordinate is added to or changed in the txt file. Koji and I think that this can all be automated through SolidWorks macros, so I will explore that option for the next two weeks.
2. Vacuum Chamber CADs
Steve will help find manufacturing drawings of the BS chamber. I have completed the ETM chambers, while the ITM ones are identical to them so I will reuse parts for the CAD.
Bulb went out ~10am today. Looks like the lifetime of this bulb was <100 days.
Steve: bulb is arriving next week
Bulb is replaced.
Todd informed me that the ADC timing adaptor boards we had ordered arrived today. I had to solder on the components and connectors as per the schematic, though the main labor was in soldering the high-density connectors. I then proceeded to shut down all models on c1lsc (and then the FE itself). Then the classic problem of all vertex machines crashing when unloading models on c1lsc happened (actually, Koji noticed that this was happening even on c1ioo). Anyway, this was nothing new, so I decided to push ahead.
I had to get a cable from Downs that connects the actual GS ADC card to this adaptor board. I powered off the expansion chassis, installed the adaptor board, connected it to the ADC card, and restarted the expansion chassis and the FE. I also reconnected the SCSI cable from the AA board to the adaptor card. It was a bit of a struggle to get all the models back up and running again, but everything eventually came back (after a few rounds of hard rebooting). I then edited the c1x04 and c1lsc simulink models to reflect the new path for the X arm ALS error signals. Seems to work alright.
At some point in the afternoon, I noticed a burning smell concentrated near the PSL table. Koji traced the smell down to the c1lsc expansion chassis. We immediately powered the chassis off. But Steve later informed me that he had already noticed an odd burning smell in the morning, before I had done any work at the LSC rack. Looking at the newly installed adaptor card, there wasn't any visual evidence of burning. So I decided to push ahead and try to reboot all models. Everything came back up normally eventually, see Attachment #1.

Particle count in the lab seems a little higher than usual (although, according to my midnight measurement, counts are a factor of ~10 lower than Steve's 8am measurements), but Steve didn't seem to think we should read too much into this. Let's monitor the situation over the coming days; Steve should comment on the large variance seen in the particle counter output, which seems to span 2 orders of magnitude depending on the time of day the measurement is made...

Also note that there is a BIO card in the c1lsc expansion chassis that is powered by a lab power supply unit. It draws 0 current, even though the label on it says otherwise. I am not sure if the observed current draw is in line with expectations.
The spare (unstuffed) adaptor cards we ordered, along with the necessary hardware to stuff them, are in the Digital FE hardware cabinet along the east arm.
Steve: particle count in the 40m follows the outside count, wind direction, weather conditions, etc. The lab particle count is NOT logged! This is bad practice.
MCRefl is absent; it is under investigation. I removed a bunch of hardware; note all the spare optics along the edges.
I've been developing an idea for making a direct measurement of the SRC Gouy phase at RF. It's a very different approach from what has been tried before. Prior to attempting this at the sites, I'm interested in making a proof-of-concept measurement demonstrating the technique on the 40m. The finesse of the 40m SRC will be slightly higher than at the sites due to its lower-transmission SRM. Thus if this technique does not work at the 40m, it almost certainly will not work at the sites.
The idea is, with the IFO locked in a signal-recycled Michelson configuration (PRM and both ETMs misaligned), to inject an auxiliary laser from the AS port and measure its reflection from the SRC using one of the pre-OMC pickoff RFPDs. At the sites, this auxiliary beam is provided by the newly-installed squeezer laser. Prior to injection, an AM sideband is imprinted on the auxiliary beam using an AOM and polarizer. The sinusoidal AOM drive signal is provided by a network analyzer, which sweeps in frequency across the MHz band and demodulates the PD signal in-phase to make an RF transfer function measurement. At the FSR, there will be an AM transmission resonance (reflection minimum). If HOMs are also present (created by either partially occluding or misaligning the injection beam), they too will generate transmission resonances, but at a frequency shift proportional to the Gouy phase. For the theoretical 19 deg one-way Gouy phase at the sites, this mode spacing is approximately 300 kHz. If the transmission resonances of two or more modes can be simultaneously measured, their frequency separation will provide a direct measurement of the SRC Gouy phase.
The above figure illustrates this measurement configuration. An attached PDF gives more detail and the expected response based on Finesse modeling of this IFO configuration.
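As a sanity check of the quoted mode spacing, a minimal sketch; the ~56 m one-way SRC length is my assumption, and only the 19 deg one-way Gouy phase comes from the discussion above:

c = 299792458.0       # speed of light [m/s]
L_src = 56.0          # assumed one-way SRC length at the sites [m]
psi_deg = 19.0        # one-way Gouy phase [deg]

fsr = c / (2 * L_src)            # free spectral range [Hz]
f_hom = fsr * psi_deg / 180.0    # HOM offset = FSR * (round-trip Gouy)/(360 deg)
print(fsr / 1e6, f_hom / 1e3)    # ~2.7 MHz FSR, ~280 kHz mode spacing

This reproduces the ~300 kHz spacing quoted above.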
I have been working on the aux beat setup on the PSL table between 9PM-3AM.
This work involved:
- Turning off the main Marconi
- Turning off the freq generation unit (incl IMC modulation)
- Closing the PSL shutter
After the work, these were reverted and the IMC and both arms have been locked.
The new matching circuit was tested.
f_nominal   f_actual   response    required mod.   driving power
  [MHz]       [MHz]    [mrad/V]        [rad]       needed [dBm]
    9.1        9.1        55           0.22        =>   22
  118.3      118.2        16           0.01        =>    6
   45.5       45.4        45           0.28        =>   25
   24.1        N/A        2.1          0.014       =>   27
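For reference, the "driving power needed" column follows from the measured response and the required modulation depth, assuming a 50 Ohm load; a minimal sketch:

import numpy as np

def drive_dBm(resp_mrad_per_V, mod_rad):
    V_pk = mod_rad / (resp_mrad_per_V * 1e-3)   # peak drive voltage [V]
    P_W = V_pk**2 / (2 * 50.0)                  # average power into 50 Ohm
    return 10 * np.log10(P_W / 1e-3)

for freq, resp, mod in [(9.1, 55, 0.22), (118.2, 16, 0.01),
                        (45.4, 45, 0.28), (24.1, 2.1, 0.014)]:
    print(f"{freq} MHz: {drive_dBm(resp, mod):.1f} dBm")

This reproduces the table to within ~1 dBm (rounding). It also shows that backing the 45MHz modulation off to 0.2 rad brings the requirement down to ~23 dBm, i.e. right at the nominal driver maximum mentioned below.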
- 9.1MHz and 118.3MHz: They are just fine.
- 24.1MHz: Definitely better (>x3) than the previous trial to combine 118MHz & 24MHz.
We got about the same modulation as with the 50Ohm-terminated bare crystal (for port 1).
So, this is sort of the best we can do for the 24.1MHz with the current approach.
The driving power of 27dBm is required at 24.1MHz
- About the 45MHz
- The driving power of 25dBm is required at 45.5MHz (see the table above)
- The maximum driving power with the AM stabilized driver is nominally 23dBm.
- I wonder how we can reduce resistance (and capacitance) of the 45MHz further...?
- I also wonder if the IFO can be locked with reduced modulation (0.28 rad->0.2 rad)
- Can the driver max power be boosted a bit? (i.e. adding an attenuator in the RF power detection path)
I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.
In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.
Steve, the pictures you posted are not the AA board I was referring to. The attached pictures show the board which is sitting beneath the GPS time server.
Using Gautam's Finesse file and the CAD files for the 40m optical setup, I propagated the arm mode out of the AS port. For the location of the 3.04 mm waist I used the average distance to the ITMs, which is 11.321 m from the beam spot on the 2 inch mirror on the AS table close to the viewport. The 2 inch lens focuses the IFO mode to an 82.6 μm waist at a distance of 81 cm, which is what we have to match the aux laser fiber output to.
I profiled the fiber output and obtained a waist of 289.4 μm at a distance of 93.3 cm from the front edge of the base of the fiber mount. Next step is to figure out the lens placement and how to merge the beam paths. We could use a simple mirror if we don't need AS110 and AS55, we could use a polarizing BS and work with s polarization, or we find a Faraday Isolator.
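For reference, a minimal sketch of the q-parameter propagation behind these numbers; the focal length of the 2 inch lens is my assumption (0.8 m happens to roughly reproduce the quoted solution), while the waist and distance inputs are the ones quoted above:

import numpy as np

lam = 1064e-9          # wavelength [m]
w0 = 3.04e-3           # waist of the IFO mode out of the AS port [m]
d = 11.321             # waist-to-lens distance [m]
f_lens = 0.80          # ASSUMED focal length of the 2 inch lens [m]

zR = np.pi * w0**2 / lam              # Rayleigh range of the input mode
q = d + 1j * zR                       # complex beam parameter at the lens
q2 = q / (1 - q / f_lens)             # thin-lens ABCD transformation
z_w = -q2.real                        # distance from lens to the new waist [m]
w_w = np.sqrt(lam * q2.imag / np.pi)  # new waist radius [m]
print(f"{w_w*1e6:.1f} um waist at {z_w*100:.1f} cm")   # ~83 um at ~81 cm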
While doing a beam scan with the razor blade method I noticed that the aux laser has significant intensity noise. This is seen on the New Focus 1611 that is used for the beat signal between PSL and aux laser, as well as on the fiber output PD. There is a strong oscillation around 210 kHz. The oscillation frequency decreases when the output power is turned down, the noise eater has no effect. Koji suggested it could be light scattering back into the laser because I couldn't find a usable Faraday Isolator back when I installed the aux laser in the PSL enclosure. I'll have to investigate this a little further, look at the spectrum, etc. This intensity noise will appear as amplitude noise of the beat note, which worries me a little.
For the arm cavity ringdowns, I guess we don't need AS55/AS110 (although I think the camera will still be useful for alignment). But for something like RC Gouy phase characterization, I'd imagine we need the AS detectors to lock various cavities. So I think we should go for a solution that doesn't disturb the AS PD beams.
It's hard to tell from the plot in the manual (pg 52) what exactly the relaxation oscillation frequency is, but I think it's closer to 600 kHz (is this characteristic of Nd:YAG NPROs?). Is the high RIN present on the light straight out of the NPRO?
We could use a simple mirror if we don't need AS110 and AS55, we could use a polarizing BS and work with s polarization, or we find a Faraday Isolator.
There is a strong oscillation around 210 kHz. The oscillation frequency decreases when the output power is turned down, the noise eater has no effect.
I suspect that the LD of the aux laser is dying.
- The max power we obtain from this laser (700mW NPRO) is 33mW. Yes, 33mW. (See attachment 1)
- The intensity noise is likely to be the relaxation oscillation, and the frequency is so low because the pump power is low. When the ADJ is adjusted to 0, the peak moved even lower. (Attachment 2, compare purple and red)
- What is the NE (noise eater) doing? Almost nothing. I suspect the ISS gain is too low because of the low output power. (Attachment 2, compare green and red)
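A minimal sketch of the standard rate-equation scaling behind this argument; the lifetimes are assumed values, typical for an Nd:YAG NPRO, and r is the pump level relative to threshold:

import numpy as np

tau_f = 230e-6   # assumed upper-state lifetime of Nd:YAG [s]
tau_c = 5e-9     # assumed cavity photon lifetime [s]

def f_relax(r):
    # relaxation oscillation frequency for pump ratio r = P_pump / P_threshold
    return np.sqrt((r - 1) / (tau_f * tau_c)) / (2 * np.pi)

for r in [1.1, 1.5, 3.0]:
    print(f"r = {r}: f_RO ~ {f_relax(r) / 1e3:.0f} kHz")

Since f_RO scales as sqrt(r - 1), a weakly pumped (dying) LD pushes the peak down; for these assumed lifetimes, r = 3 lands right around the observed ~210 kHz, and the peak moves lower as the pump is reduced.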
Aidan called at ~3:45pm saying nodus was down; I had still been able to access it at ~3:30pm. I couldn't ssh in from my machine or the control room ones. So I went to 1X7 and plugged a monitor into nodus. It was totally unresponsive. Since the machine wasn't responding to ping either, I decided to hard-reboot it. The machine seemed to come back up smoothly. I had trouble getting the elog started - it wasn't clear to me that the web ports were closed by default, so even though the startELOGD.sh script was running fine, port 8080 wasn't open to the outside world. Anyway, once I figured this out, I was able to start the elog. DokuWiki also seems to be up and running now...
I found megatron in a similar state to the one nodus was in yesterday. Clued in by the fact that MCautolocker wasn't executing the mc scripts (as was evident from looking at the wall StripTool trace), I tried ssh-ing into megatron, but was unable to (despite it being responsive to ping requests). So I went into the VEA and plugged a monitor into megatron - saw nothing on it. With no soft reboot options available, I power-cycled the machine via the front panel button. It came back up smoothly. I manually restarted the autolocker, FSSslow and EX thermal control processes (the former two with initctl, while the latter runs in a tmux session). Everything seems alright for now. Not sure how long megatron had been dead.
In September 2017 I measured ~150mW output power, which was already kind of low. What are the chances of getting this one repaired? Steve, can you please check the serial number? It's probably too old like the other ones.
I suspect that the LD of the aux laser is dying.
- The max power we obtain from this laser (700mW NPRO) is 33mW. Yes, 33mW. (See attachment 1)
Steve was calibrating the load cells at the EY table with the crane - we didn't get through the full procedure today, so the area near the EY table is kind of obstructed. The 100kg donut is resting on the floor on the North side of the EY table, with stopper plates underneath it, and is still connected to the crane. Steve has placed cones around the area too. The crane has been turned off.
We'd like to know how much actuation is required on the ETMs to lock the DARM degree of freedom. The "disturbance" we are trying to cancel is the seismic driven length fluctuation of the arm cavity. In order to try and estimate what the actuation required will be, we can use data from POX/POY locks. I'd collected some data on Friday which I looked at today. Here are the results.
If this approach looks legit, I will compute the control signal that is required to stabilize this level of disturbance using the DARM control loop, and see what is the maximum permissible series resistance we can use in order to realize this stabilization. We can then compare various scenarios like different whitening schemes, with/without Barry puck etc, and look at coil driver noise levels for each of them.
Here is an updated plot - the main difference is that I have added a trace that is the frequency-domain Wiener filter subtraction of the coherent power between the L_X and L_Y time series. I tried reproducing the calculation with a time-domain Wiener filter subtraction as well, using half of the time series (i.e. 5 mins) to train the Wiener filter (with L_X as target and L_Y as witness), but I don't get any subtraction above 5 Hz on the half of the data that is the test data set. Probably I am not doing the pre-filtering correctly - I downsampled the signal to 1 kHz and weighted it by low-passing above 40 Hz before training the Wiener filter on the resulting time series. But this frequency-domain Wiener filter subtraction should be at least a lower bound on DARM - indeed, it is slightly lower everywhere than simply taking the time-domain subtraction of the two data streams.
Putting a slightly cleaned up version of this plot in now - I'm only including the coherence-inferred DARM estimate now instead of the straight-up time-domain subtraction. So this is likely to be an underestimate. At low (<10 Hz) frequencies, the time-domain computation lines up fairly well, but I suspect that I am getting huge amounts of spectral leakage (see Attachment #2) in the way I compute the spectrum using scipy's filtering routine (once the Wiener filter has been computed). Note that in Attachment #2, I didn't break up the data into a training/testing set, as in this case we just care about the one-off offline performance in order to get an estimate of DARM.
The python version of the Wiener-filter-generating code only supports [b,a] output of the digital filter; an sos filter might give better results. Need to figure out the least painful way of implementing the low-noise digital filtering in python...
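A minimal sketch of both pieces - sos prefiltering with scipy (avoiding the [b,a] numerical issues) and the coherence-based (frequency-domain Wiener) residual. The stand-in data and sample rate are assumptions; L_X is the target and L_Y the witness, as above:

import numpy as np
from scipy import signal

fs = 1024.0                                   # assumed post-downsampling rate [Hz]
rng = np.random.default_rng(0)
common = rng.standard_normal(int(600 * fs))   # stand-in for the coherent motion
L_X = common + 0.1 * rng.standard_normal(common.size)
L_Y = common + 0.1 * rng.standard_normal(common.size)

# zero-phase sos low-pass at 40 Hz for the pre-filter weighting
sos = signal.butter(4, 40.0, btype='low', fs=fs, output='sos')
L_X_f = signal.sosfiltfilt(sos, L_X)
L_Y_f = signal.sosfiltfilt(sos, L_Y)

# ideal Wiener residual: target PSD with the coherent power removed
f, Pxx = signal.welch(L_X_f, fs=fs, nperseg=4096)
_, Cxy = signal.coherence(L_X_f, L_Y_f, fs=fs, nperseg=4096)
asd_res = np.sqrt(Pxx * (1.0 - Cxy))          # lower bound on subtracted DARM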
Instead of trying to couple the fiber output into the interferometer, I'm doing the reverse and maximizing the amount of interferometer light going into the fiber. I set up the mode-matching solution shown in attachment #1 and started tweaking the lens positions. Attachment #2 shows the setup on the AS table. After the initial placement I kept moving the lenses in the green arrow directions and got more and more light into the fiber.
When I stopped this work yesterday I measured 86% of the AS port light coming out the other fiber end, and I have not yet reached a turning point with moving the lenses, so it's possible I can tickle out a little more than that.
It occurred to me though that I may have been a little hasty with the placement of the mirror that, in attachment #2, redirects the beam which would ordinarily go to AS55. For my arm ringdown measurements this doesn't matter; I could actually place it even before the 50/50 beamsplitter that sends light onto AS110 and double the amount of light going into the IFO. What signals are needed for the Gouy phase measurement? Is AS110 sufficient, or do we need AS55?
I think we need AS55 for locking the configuration Jon suggested - AS55 I and Q were used to lock the SRMI previously, and so I'd like to start from those settings but perhaps there is a way to do this with AS110 I and Q as well.
What signals are needed for the Gouy phase measurement? Is AS110 sufficient, or do we need AS55?
Using the Wiener filter estimate of the DARM disturbance we will have to cancel, I computed what the control signal would look like for a few scenarios. Our DACs are 16-bit, +/-10V (i.e. +/-32,768 cts-pk, or ~23,000 cts RMS for a full-scale sinusoid, 32768/sqrt(2)). We need to consider the shape of the de-whitening filter to conclude whether it is feasible to increase the series resistance by x10 or not.
Note that in this first computation, I have not considered
While doing this calculation, I have accounted for the fact that right now, the analog de-whitening filters in the ETM drive chain have a x3 gain which we will remove. Actually, this is an assumption; I have not yet measured a transfer function, so maybe I'll measure one channel at EY to confirm. Also, the actuator gains themselves need to be confirmed.
As I was looking at the coil driver schematic more closely, I realized that there are actually two separate series resistances, one for the fast controls path, and another for the DC bias voltage from the slow ADCs. So I think we have been underestimating the Johnson noise of the coil drivers by sqrt(2). I've also attached screenshots of the IFOalign and MCalign screens. The two ITMs and ETMX have pitch DC bias values that are compatible with a x10 increase of the series resistance. But even so, we will have ~3pA/rtHz per coil from the two resistances.
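A quick, hedged check of that number, assuming ~4.5 kOhm per path (the resistance value being considered below) and T = 300 K:

import numpy as np

kB, T, R = 1.380649e-23, 300.0, 4.5e3
i_single = np.sqrt(4 * kB * T / R)   # one resistor: ~1.9 pA/rtHz
i_total = np.sqrt(2) * i_single      # two equal, uncorrelated paths add in quadrature
print(f"{i_single*1e12:.1f} pA/rtHz per path, {i_total*1e12:.1f} pA/rtHz total")

This gives ~2.7 pA/rtHz, consistent with the ~3 pA/rtHz per coil quoted above, and makes the sqrt(2) bookkeeping explicit.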
gautam 8pm May8: Seems like I had confirmed the x3 gain in the EX de-whitening board when Johannes and I were investigating the AI board offset.
example of plots illustrating DAC range / saturation
There was an earthquake, all watchdogs were tripped, ITMX was stuck, and c1psl was dead so MCautolocker was stuck.
Watchdogs were reset (except ETMX which remains shutdown until we finish with the stack weight measurement), ITMX was unstuck using the usual jiggling technique, and the c1psl crate was keyed.
There is no beam going into the IFO at the moment. There was definitely a spot on the AS camera after I restored the suspensions yesterday, as you can see from the ASDC level in Attachment #1. But at around 2pm Pacific yesterday, the ASDC level went to 0. I suspect the TTs. There is no beam on the REFL camera either when PRM is aligned, and PRM's DC alignment is good as judged by the Oplev.
Normally, I am able to recover the beam by scanning the TTs around with some low frequency sine waves, but not today. We don't have any readback (Oplev/OSEM) of the TT alignment, and the DC bias values haven't jumped abnormally around the time this happened, judging by the OUT16 monitor points (see Attachment #2). The IMC was also locked at the time when this abrupt drop in ASDC level happened. Unfortunately, we don't have a camera on the Faraday, so I don't know where the misalignment is happening, but the beam is certainly not making it to the BS. All the SOS optics (e.g. BS, ITMX and ITMY) are well aligned as judged by their Oplevs.
Being debugged now...
As suspected - the problem was with the TTs. I tested the TT signal chain by driving a low frequency sine wave using AWG and looking at the signal on an o'scope. But I saw nothing, neither at the AI board monitor point, nor at the actual coil driver mon point. I decided to look at the IOP testpoints for the DAC channels, to see if the signals were going through okay on the digital side. But the IOP channels were flatlined, as viewed on dataviewer (see Attachment #1). This despite the fact that the DAC output monitor screen in the model itself was showing some sensible numbers, again see Attachment #1.
Looking at the CDS overview screen, there were no red flags. But there was a red indicator sneakily hidden away in the IOP model's CDS status screen, the "DAC" field in the state word is red. As Attachment #2 shows, a change in the state word is correlated with the time ASDC went to 0.
Note that there are also no errors on the c1lsc frontend itself, judging by dmesg. I want to do a model restart, but (i) this will likely require reboots of all vertex FEs and (ii) I want to know if any CDS experts want to sniff for clues to what's going on before a model restart wipes out some secret logfiles. I'm a little confused that the rtcds isn't throwing up any errors and causing models to crash if the values are not being written to the registers of the DAC. It may also be that the DAC card itself is dead. To re-iterate, all the EPICS readbacks were suggesting that I am injecting a signal right up to the IOP.
Quoting from the runtime diagnostics note:
20180508 4:49am: 4.5M Cabazon earthquake, 79 miles away. ETMX is in load cell measurement condition.
Looking at Steve's plot, I was reminded of the ITMY UL OSEM issue. The numbers don't make sense to me though - 300um of DC shift in UL with negligible shifts in the other coils should have made a much bigger DC shift in the Oplev spot position.
See Attachment #1 for the projected control signal ASDs. The main assumption in the above is that all other control loops can be low-passed sufficiently such that even with anti-dewhitening, we won't run into saturation issues.
DARM control loop:
De-whitening and Anti-De-whitening:
It remains to add the control signals for Oplev, local damping, and ASC to make sure we have sufficient headroom, but given that current projections predict using only ~1000cts of the ~23000cts (RMS) available from the DAC, I think it is likely we won't run into saturations. Need to also figure out what the implication of the reduced actuation range will be on handling the locking transient.
I think "OLG" trace is not labeled right; it would be good to see the actual OLG in addition to whatever that trace actually is.
Based on the first plot, however, my conclusion is that:
I was a bit hasty in posting the earlier plots. In the earlier plot, the "OLG" trace was actually OLG * anti-dewhitening, as Rana pointed out.
Here are the updated ones, and a cartoon (Attachment #5) of the loop topology I assumed. I've excluded things like violin filters, AA/AI etc. The overall gain scaling I mentioned in the previous elog amounts to changing the optical sensing response in this cartoon. I now also show the DARM suppression (Attachment #4) for this OLG and the DARM linewidths for RSE. I don't think the conclusions change.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz. I assume we will cancel this in the digital controller and so can achieve a similar OLG shape. This would modify the control signal spectrum a little around 150Hz. But for a UGF on the loop of ~150 Hz, we should still be able to roll-off the control signal at high frequencies and so the RMS shouldn't be dramatically affected.
Steve is looking into acquiring 4.5 kohm Vishay wirewound resistors with 1% tolerance. The plan is to install two in parallel (so that we get ~2 kohm effective resistance) and then snip one off once we are convinced we won't have any actuation range issues. Do these look okay? They're ~$1.50 ea on Mouser assuming we get 100. Do we need the non-inductive winding?
Good question! I've never calculated what the resonance frequency would be if we had an inductive resistor with our cable capacitance (~50 pF/m, I guess).
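A back-of-envelope sketch with loudly assumed numbers - the winding inductance and cable length are guesses, and only the ~50 pF/m figure comes from above:

import numpy as np

L = 10e-6           # assumed parasitic inductance of the wirewound resistor [H]
C = 50e-12 * 10     # ~50 pF/m times an assumed 10 m of cable [F]
f_res = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"f_res ~ {f_res/1e6:.1f} MHz")   # ~2.3 MHz for these guesses

So for plausible parasitics, the LC resonance would sit in the low MHz, well above the servo band, but worth keeping in mind for the RF side.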
I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I was not able to bring back any of the models on any of the machines; during the restart procedure, they all fail. The logfile reads (for the c1ioo front end, but they all behave the same):
Not sure what is going on here, or what "Corrutped EPICS data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model, it fails giving the error message
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-<version>_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-<version>_long/base. Why should this have changed?
I've shutdown all watchdogs until this is resolved.
As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once Johannes restored the paths, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.
Anyway, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes; they are absent from all the other processes. How can this be debugged further?
I think the root of the problem is that the /opt/rtapps/ and /cvs/cds/rtapps/ mounting locations point to the same directory on the nfs server. Gautam and I were cleaning up the /cvs/cds/caltech/target/ directory, placing the previous contents of /cvs/cds/caltech/target/c1auxex/, including database files and startup instructions, in /cvs/cds/caltech/target/c1auxex_oldVME/, and then moved /cvs/cds/caltech/target/c1auxex2/, which has the channel database and initialization files for the Acromag DAQ, to /cvs/cds/caltech/target/c1auxex/.
This also required updating the systemd entries on c1auxex to point to the changed directory. While confirming that everything worked as before, we noticed that upon startup the EPICS IOC complains about not being able to find the caRepeater binary. This was not new and has not limited DAQ functionality in the past, but we wanted to fix it, as it seemed to be a simple PATH issue. While the paths are all correctly defined in the user login shell, systemd runs on a lower level and doesn't know about them. One thing we tried was to let systemd execute /cvs/cds/rtapps/epics-<version>_long/etc/epics-user-env.sh to initialize EPICS. It was strange that the content of that file was pointing to /opt/rtapps/epics-<version>_long/base, which is not mounted on the slow machines, so we changed /opt/ to /cvs/cds/, not realizing that the frontends read from the same directory (as Gautam said, /cvs/cds does not exist as a mount point on the frontends). It ended up not working this way, and apparently I forgot to change it back during cleanup. But worse, I never elogged it!
In the end, we managed to give systemd the correct path definitions by explicitly calling them out in /cvs/cds/caltech/target/c1auxex/ETMXenv, to which a reference was added in the systemd service file. The caRepeater warning no longer appears. A sketch of the arrangement is below.
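For concreteness, a sketch of the arrangement (the unit name and variable values here are illustrative assumptions; only the ETMXenv path is from the text above):

# /etc/systemd/system/modbusIOC.service on c1auxex (unit name assumed):
[Service]
EnvironmentFile=/cvs/cds/caltech/target/c1auxex/ETMXenv
ExecStart=...

# ETMXenv then holds plain KEY=value lines (systemd does no shell expansion), e.g.:
# EPICS_BASE=/cvs/cds/rtapps/epics-<version>_long/base
# PATH=/cvs/cds/rtapps/epics-<version>_long/base/bin/linux-x86:/usr/bin:/bin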
Since we think we already know the stack mass to ~25% (i.e. 5000 +/- 1000 lbs), we decided to restore the ETMX stack. Procedure followed was:
I will upload the photos to the PICASA page and post the link here later.
In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
Since there have been various software/hardware activity going on (stack weighing, AUX laser PLL, computing timing errors etc etc), I decided to do a check on the state of the IFO.
The final setup of the stack measurement with 3 load cells and 4 leveling wedge mounts is shown in Atm 1.
Sensor voltages BEFORE and AFTER this attempt.