There were many locklosses from the point where the arm powers were somewhat stabilized. Attachments #1 and #2 show two individual locklosses. I think what is happening here is that the BS seismometer X channel is glitching, and creating a transient in the angular feedforward filter that blows the lock. The POP QPD based feedback loop cannot suppress this transient, apparently. For now, I get around this problem by boosting the POP QPD feedback loop a bit, and then turning the feedforward filters off. The fact that the other seismometer channels don't report any transient makes me think the problem is either with the seismometer itself, or the readout electronics. The seismometer masses were recently recentered, so I'm leaning towards the latter.
I didn't explicitly check the data, but I am reasonably certain the same effect is responsible for many PRMI locklosses even with the arms held off resonance (though the tolerance to excursions there is higher). Pity really, the feedforward filters were a big help in the lock acquisition...
The CARM loop now has a UGF of ~12 kHz with a phase margin of ~60 degrees, so the conventional stability indicators look good. The CARM optical gain that best fits the measurements is 9 MW/m.
I've been working on understanding the loop better, here are the notes.
Attachment #1 shows a block diagram of the loop topology.
Attachment #2 shows the OLGs of the two actuation paths.
Attachment #3 and #4 show the model, overlaid with measurements of the loop OLG and crossover TF respectively.
Attachment #5 shows the evolution of the CARM OLG at a few points in the lock acquisition sequence.
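As a sanity check on the quoted numbers, here is a minimal sketch (not the actual analysis code; the file name and column format are made up) of how the UGF and phase margin can be read off a measured OLG like the one in Attachment #3:

import numpy as np

# Hypothetical file: frequency [Hz], real(OLG), imag(OLG) from the swept-sine measurement
f, re, im = np.loadtxt('carm_olg_meas.txt', unpack=True)
olg = re + 1j * im

# UGF: measurement point where |OLG| is closest to unity (crude, no interpolation)
idx = np.argmin(np.abs(np.log10(np.abs(olg))))
f_ugf = f[idx]
# Phase margin: 180 deg plus the OLG phase at the UGF
pm = 180.0 + np.degrees(np.angle(olg[idx]))
print(f"UGF ~ {f_ugf/1e3:.1f} kHz, phase margin ~ {pm:.0f} deg")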
Now that I have a model I believe, I need to think about whether there is any benefit to changing some of these loop shapes. I've already raised the possibility of changing the shape of the boosts on the CM board, with which we could get a bit more suppression in the 100 Hz - 1 kHz region (a noise budget of laser frequency noise --> DARM is required to see if this is necessary).
Attachments #1 and #2 are in the style of elog15356, but with data from a more recent lock. It'd be nice to calibrate the ASDC channel (and in general all channels) into power units, so we have an estimate of how much sideband power we expect, and the rest can be attributed to carrier leakage to ASDC.
On the basis of Attachment #1, the PRG is ~19, and at times, the arm transmission goes even higher. I'd say we are now in the regime where the uncertainty in the losses in the recycling cavity (beamsplitter clipping, maybe?) matters when using this info to try and constrain the arm cavity losses. I'm also not sure what to make of the asymmetry between TRX and TRY; allegedly, the Y arm is supposed to be lossier.
This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into the cause of the low recycling gain.
I implemented an ASC servo for the PRC, with the POP QPD as a sensor, and the PRM as the actuator. This has improved the stability of the lock (longer locks are possible), and also reduced the RIN of the arm transmission.
Attachment #1 shows the in-loop error signal suppression, and some out-of-loop monitors (POP22 and POPDC).
Attachment #2 compares the arm transmission RIN with the PRFPMI locked, with and without PRC ASC. The 3 Hz bump is definitely squished, but I think we can do better yet.
Attachments #3-5 are in the style of elog15361. No Oplev signals yet, I'll add them soon.
I guess what this means is that the stability of the lock could be improved by turning on some POP QPD based feedback control. I'll give it a shot.
The 40m summary pages have been revived. I've not had to make any manual interventions in the last 5 days, so things seem somewhat stable, but I'm sure there will need to be multiple tweaks made. The primary use of the pages right now is for vacuum, seismic and PSL diagnostics.
For these initial attempts, I was just trying to transition MICH to REFL55Q. I agree, the PRCL situation may be more complicated.
Which 1f signals are you going to use? PRCL has a sign flip at the carrier critical coupling. So if the IFO is close to that condition, 1f PRCL suffers from the sign flip or large gain variation.
I am inclined to believe that the arm cavity losses are such that the IFO is overcoupled. Some calculations, validated with Finesse modeling also suggest that there isn't a sign change for the CARM error signal when the IFO goes from being undercoupled to overcoupled, but I may have made a mistake here?
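To make the overcoupled/undercoupled question concrete, below is a toy plane-wave calculation (not the Finesse model) of the on-resonance carrier reflectivity of the PRC-arm coupled cavity as a function of arm round-trip loss. The mirror transmissions are assumed, roughly 40m-like numbers, so treat the output as illustrative only; the sign flip of the 1f PRCL signal mentioned above corresponds to r_prc crossing zero.

import numpy as np

# Assumed, roughly 40m-like transmissions - illustrative only, not measured values
T_prm, T_itm, T_etm = 0.056, 0.014, 15e-6
r_p, r_i = np.sqrt(1 - T_prm), np.sqrt(1 - T_itm)

for L_arm in [50e-6, 100e-6, 200e-6, 400e-6]:   # arm round-trip loss
    r_e = np.sqrt((1 - T_etm) * (1 - L_arm))
    # arm reflectivity on resonance, seen from the PRC side (lossless ITM assumed)
    r_arm = (r_e - r_i) / (1 - r_i * r_e)
    # coupled-cavity carrier reflectivity on resonance (lossless PRM, no PRC internal loss)
    r_prc = (r_arm - r_p) / (1 - r_p * r_arm)
    print(f"arm loss {L_arm*1e6:4.0f} ppm: r_arm = {r_arm:+.4f}, r_prc = {r_prc:+.3f}")

With these assumed parameters the zero crossing sits at a couple of hundred ppm of round-trip arm loss; the overcoupled claim is just the statement that we are on the low-loss side of that crossing. The sign of the CARM error signal is a separate, frequency-dependent calculation, which is what the Finesse check above was for.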
Thoughts from others?
This EQ seems to have knocked all suspensions out. ITMX was stuck. It is now released, and the IMC is locked again. It looks like there are some serious aftershocks going on so let's keep an eye on things.
I found that there is an issue with the MC1 slow bias voltages.
I usually offload the DC part of the output voltage from the WFS servos to the slow bias voltage sliders, so as to preserve maximum actuation range from the fast system. However, today, I found that this servo wasn't working well at all. So I dug a little deeper. Looking at the EPICS database records:
Around 5pm local time, the three vertex FEs crashed. AFAIK, no one was in the lab or working on anything CDS related, so this is worrying.
There appears to have been some sort of vacuum failure.
ldas-pcdev1 was down, so the summary pages weren't being generated. I have now switched over to ldas-pcdev6. I suspect some forepump failure, will check up later today unless someone else wants to take care of this.
There was no interlock action, and I don't check the vacuum status every half hour, so there was a period of time last night when there was high circulating power in the arm cavities while the main volume pressure was higher than nominal. I have now closed the PSL shutter until the issue is resolved.
It looks like the main vacuum interlock was tripped due to a serial communication error from the TP2 controller. With Rana/Koji's permission, I will open V1 and expose the main volume to TP1 again (#2 in last section).
Recommended course of action:
The pumpspool UPS has its "Replace Battery" indicator light on. Might be a good chance to change the UPS, but at the very least, we should put in fresh batteries (last replacement was in Aug 2017).
I'll say this again - the pumpspool area is noisier than I remember it being, I think one / both of the roughing pumps backing TP2 / TP3 need tip-seal replacements.
BTW, EX is 5C hotter than EY, by virtue of the tarmac outside? In fact, judging by Steve's thermometers, EX reports a 12C swing in 24 hours between 30 C and 18 C (so almost no temperature control) while EY reports a 5C swing between 20 C and 25 C. This is borne out by the ETM Oplev data I think...
I still don't understand why restoring the vacuum is contingent on this functionality working. All the TPs have their own internal logic to shut down the pump if some damage threshold is exceeded. Plus, we have the pressure-sensor based interlocks to protect the main volume as well as the pumps. While the extra redundancy from the readbacks from the controller is useful, clearly it isn't the first line of defense?
The main volume pressure is currently ~10mTorr. If we pump down before this reaches 500mTorr, the procedure is pretty straightforward. Otherwise, we have to do the dance with the manual throttling valve (judging by current rate of increase, unlikely to exceed this over the weekend, but I lose IFO time).
Obviously I don't want to rush this and have some permanent damage, so I'll stay out of this unless otherwise instructed.
Didn't mean to sound whiny. I will wait until the vacuum team tells me it is okay.
The vacuum safety policy and design are not clear to me, and I don't know what the first and second defense is. Since we had limited time and bandwidth during the remotely-supported recovery work today, we wanted to work step by step.
The pressure rise rate is 20 mTorr/day, and turning on TP3 early next week will resume the main-volume pumping without too much hassle. If you need the IFO time now, contact Jon and use TP3 for backing.
I missed the vacuum discussion on the call today, but I have some questions/comments:
At the very least, I think we should consider making the interlock code have levels (like interrupts on a microcontroller). So if the pressure gauges are communicating and are reporting acceptable pressure readings, we should be able to reject unphysical readbacks from the TP controllers.
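To illustrate what I mean by levels, here is a minimal sketch (purely illustrative pseudo-logic, not the actual interlock code): TP controller readbacks only get a vote when the higher-priority pressure gauges are unavailable.

def interlock_should_trip(gauges_ok, pressures_ok, tp_readbacks_ok):
    """Toy priority logic, illustrative only.
    Level 0: pressure gauges communicating and in range -> highest authority.
    Level 1: TP controller serial readbacks -> only trusted if level 0 is unavailable.
    Returns True if the interlock should trip (close valves)."""
    if gauges_ok:
        # Gauges are talking: trust them, ignore flaky TP serial glitches
        return not pressures_ok
    # Gauges not communicating: fall back to the TP controller readbacks
    return not tp_readbacks_ok

# Example: gauges healthy and in range, TP serial readback glitched -> no trip
print(interlock_should_trip(gauges_ok=True, pressures_ok=True, tp_readbacks_ok=False))  # False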
I still don't understand why TP2 can't back TP1 if we just disable all the software interlock conditions contingent on TP2 readbacks. This pump is far newer than TP3, and unless I've misunderstood something major about the vacuum infrastructure, I don't really see why we should trust these flaky serial readbacks for any actionable interlocks, at least without some AND logic (since temperature, current and speed aren't really independent variables).
I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.
This might also be a good reminder to get the documentation in order about the new vacuum system.
I agree there were MEDM fields, but I can't find any record of these channels being recorded until December 2018, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.
Sorry, but I'm having trouble imagining a scenario in which the pressure gauges wouldn't register this before the IFO volume is compromised. Are there some back-of-the-envelope calculations I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are being monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature / speed / current threshold tripped but NOT have this registered on all the pressure gauges - it seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope, which would be caught by the differential pressure interlocks?
For the email alert, can you expose a soft channel that acts as a flag? If this flag is not 1, then the service will send out an email.
So why not just have a special mode for the interlock code during pumpdown and venting, while during normal operation we expect the main volume pressure to be <100 uTorr, and the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely that some operator will be supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of their flakiness?). The Pirani gauges are not ultra-reliable - they have failed in the past - but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they would fail to signal a real TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature and a decrease in pump speed, so it's not the individual channel values that should determine whether an interlock is tripped.
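To be explicit about the AND logic, I mean something like the coincidence condition sketched below (the thresholds are made-up placeholders, not real settings), rather than tripping on any single TP diagnostic channel:

def tp_failure(current_A, temp_C, speed_krpm, i_max=2.0, t_max=50.0, v_min=30.0):
    """Toy coincidence condition for declaring a turbopump failure (placeholder thresholds).
    A real failure should show up in all three diagnostics at once:
    current up AND temperature up AND rotation speed down."""
    return (current_A > i_max) and (temp_C > t_max) and (speed_krpm < v_min)

# A single glitched serial readback (e.g. an absurd temperature) no longer trips anything
print(tp_failure(current_A=0.7, temp_C=999.0, speed_krpm=33.6))  # False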
I definitely think that protecting the vacuum envelope is a priority - but I don't think it should be at the expense of commissioning time. But if you think these extra interlocks are essential to the safety of the vacuum system, I withdraw my request.
It would be better to have a flag channel; it might be useful for the summary pages too. I will make it if it is too much trouble.
For this particular email service, ideally the email should be sent out as soon as the interlock is tripped, so this would require a line of code to be added to the main interlock code. Which I guess would require a restart of the interlock service. So let me know when you guys plan to do the dry-pump tip seal replacement operation (when I presume valves will be closed anyways) so that we can do this in a minimally invasive way.
Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.
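For reference, the alert service on the receiving end could be as simple as the sketch below - the channel name, addresses and SMTP host are placeholders, and it assumes pyepics is available - polling the proposed flag channel and emailing once when it drops to 0:

import time
import smtplib
from email.message import EmailMessage
import epics  # pyepics

FLAG_CH = 'C1:Vac-interlock_flag'  # placeholder name for the proposed soft channel

def send_alert():
    msg = EmailMessage()
    msg['Subject'] = '40m vacuum interlock tripped'
    msg['From'] = 'vac-alert@example.org'  # placeholder
    msg['To'] = '40m-list@example.org'     # placeholder
    msg.set_content('The vacuum interlock flag channel went to 0. Check the vac MEDM screen.')
    with smtplib.SMTP('localhost') as s:   # assumes a local mail server
        s.send_message(msg)

alerted = False
while True:
    flag = epics.caget(FLAG_CH)
    if flag is not None and flag != 1 and not alerted:
        send_alert()
        alerted = True  # send once; reset by hand after the interlock is cleared
    time.sleep(10)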
In ELOG 15368, I had claimed that the POP QPD based feedback servo actuating on the PRM stabilized the lock. I now believe this scheme of sensing using the POP QPD and feeding back to the PRM is not a good topology for stabilizing the PRC angular motion.
I would also like to bring up the topic of implementing some WFS for the interferometer fields again, there doesn't seem to be any mention of this in the procurement/planning for the BHD. It is not obvious to me yet that we need WFS and not just DC QPDs from a noise point of view, but at least we should discuss this.
I want some input about what the short-term (next two weeks) commissioning goals should be.
Before the vacuum fracas, the locking was pretty robust. With some human servoing of the input beam, I could maintain locks for ~1 hour. My primary goals were:
I didn't succeed in either so far.
I guess apart from this, we want to run the ALS scan to try and infer something about the absorption-induced thermal lens. I guess at this point, the costs outweigh the benefits in trying to bring in the SRC as well, since we will be changing the SRC config?
The PSL shutter was closed from the vacuum interlock trip. Today, I did the following:
All looks good for now. I will probably get back to PRFPMI locking Monday.
The machine needed a hard reboot as it was un-ssh-able.
The exact time that the machine went down is unknown because the blinkys were not DQ-ed. I've now added these to the EDCU to make these channels actually useful, and we may look back on the reliability (or otherwise) of the Acromag system. To my memory, this is the ~5th time one of the new Acromag servers has needed a hard reboot. While this may be less frequent (?) than the VME machines, perhaps there is some other reason for these dropouts. Maybe something to do with the martian network?
Anyway the machine is back up and running now.
Per the discussion at the meeting today, the plan of action is:
If I missed something, please add here.
This earthquake tripped all suspensions and ITMX got stuck. The watchdogs were restored and the stuck optic was released. The IFO was re-aligned, POX/POY and PRMI on carrier locking all work okay.
I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
As expected, the requested voltage no longer exceeds the Acromag DAC range; it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages, preserving the fast ADC range for other actuation.
The Johnson noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to 12.8 pA/rtHz. Assuming a current to force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
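For the record, here is the arithmetic above written out (the actuator coefficient and round-trip length are the values assumed in the estimate, not freshly measured numbers):

import numpy as np

i_n   = 12.8e-12              # A/rtHz, Johnson current noise with the new series resistance
alpha = 0.064                 # N/A, actuator coefficient used in the estimate above
m     = 0.25                  # kg, effective pendulum mass
f     = 100.0                 # Hz
L_rt  = 27.0                  # m, assumed IMC round-trip length
nu    = 299792458.0 / 1064e-9 # Hz, laser frequency

x_n  = i_n * alpha / m / (2 * np.pi * f)**2  # m/rtHz, well above the pendulum resonance
nu_n = nu * x_n / L_rt                       # Hz/rtHz
print(f"{x_n:.1e} m/rtHz  ->  {nu_n*1e6:.0f} uHz/rtHz at {f:.0f} Hz")
# ~8e-18 m/rtHz and of order the 80 uHz/rtHz quoted above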
The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.
But this does not solve the MC1 issue. The only thing we can do right now is, for example, to halve the output resistor.
While the vacuum system was knocked out, I measured the RF transimpedance (using the AM laser setup, didn't do the shot noise intercept current measurement for now) of all the RFPDs (except PMC REFL). At the very least, the following photodiodes are suspect:
For the remaining photodiodes, I measure a transimpedance that is within ~20% of what is on the wiki page. The notches may benefit from some retuning. While I have the data, I will fit this and post a more complete report on the wiki.
Update July 6 1145am: WFS response plots now have legends mapping quadrants, and I've also added the response of a spare PDA10CF (which is now the new POP22/POP110 photodiode).
Judging by the summary pages, some 18 hours after this change was made and the board re-installed, the MC1 shadow sensors began to report frequent glitches. I can't think of a plausible causal connection, especially given the 18 hour time lag, but it's also hard to believe there isn't one. As a result, the IMC is no longer able to stay locked for extended periods of time. I did the usual cable squishing, and also took off the lid to see if that helps the situation.
While the reduced series resistance means there is more current flowing through the slow path,
The attached FLIR camera image reinforces what we already know: the thermal environment inside the satellite box is horrible. The absolute temperature calibration may be off, but it was difficult to touch the components with a bare finger, so I'd say it's definitely > 70 C.
Hmm I can't seem to export with the colorbar, might be just my phone though. I tried to add some "cursors" with the temperature at a few spots, but the font color contrast is poor so you have to squint really hard to see the temperatures in the photo I attached.
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
does the FLIR have an option to export an image with a colorbar?
How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?
There was no improvement to the situation overnight. So, I did the following today:
IMC is now locked again, I will monitor for glitching/stability.
Update 6pm PDT: as shown in Attachment #1, there is a huge difference in the stability of the lock after the sat box swap. Let's hope it stays this way for a while...
A more comprehensive report has been uploaded here. I'll zip the data files and add them there too. In summary:
I'll upload the data and analysis notebook + liso fit files to the wiki as well shortly. The data, a Jupyter notebook making the plots, and the LISO fit files have been uploaded here.
I didn't do it this time but it'd be nice to also do the noise measurement and get an estimate for the shot-noise intercept current.
While I have the data, I will fit this and post a more complete report on the wiki.
I injected some sensing lines and measured their responses in the various photodiodes, with the interferometer in a few different configurations. The results are summarized in Attachments #1 - #3. Even with the PRMI (no arm cavities) locked on 1f error signals, the MICH and PRCL signals show up in nearly the same quadrature in the REFL port photodiodes, except REFL165. I am now wondering if the output (actuation) matrix has something to do with this - part of the MICH control signal is fed back to the PRM in order to minimize the appearance of the MICH dither in the PRCL error signal, but maybe this matrix element is somehow horribly mistuned?
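One way to check that suspicion directly would be to drive the MICH dither, demodulate the PRCL error signal at the line frequency, and adjust the MICH-->PRM output matrix element until the line is nulled. A minimal sketch of just the demodulation step (channel names and numbers are placeholders, not the actual tuning procedure):

import numpy as np

def line_amplitude(x, fs, f_line, t_avg=30.0):
    """Complex amplitude of a sensing line in time series x, via digital demodulation."""
    n = int(t_avg * fs)
    t = np.arange(n) / fs
    return 2 * np.mean(x[:n] * np.exp(-2j * np.pi * f_line * t))

# Hypothetical usage: prcl_err is the PRCL error signal time series sampled at fs,
# f_line is the MICH dither frequency. Step the MICH->PRM matrix element and pick the
# value that minimizes abs(line_amplitude(prcl_err, fs, f_line)).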
Some other mysteries that I will investigate further:
I blew the long lock last night because I cleared the ASS offsets (which I should not have done) while trying to find the right settings for running the ASS system at high power. Will try again tonight...
Lock the PRMI on carrier and measure the sensing matrix, see if the MICH and PRCL signals look sensible in 1f and 3f photodiodes.
This problem reared its ugly head again. I am inclined to believe the problem is electronic and not on the light, since the POY channels seem immune to this issue (see Attachment #1). I will investigate in the daytime tomorrow. Note that while the POX photodiode head has ~twice the transimpedance of POY (per measurement), the POY signal gets amplified by a ZHL-500-HLN amplifier before heading to the demod electronics (nominal gain is 19 dB = x9). There is also some imbalance in the light level at the photodiodes I guess, because overall, the PDH fringe is ~twice as large for the Y arm as for the X arm. Basically, the y-axes of the attached plot cannot be directly compared between POX and POY.
Mostly this is an annoyance - right now, the POX signal is only used for locking and dither aligning the X arm cavity, and so once that is done, the locking can proceed (as long as the other channels, e.g. REFL11, aren't glitching as well...)
I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80 m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a 1 person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20 m), and also mount the pre-amp/power units in a rack instead of leaving them on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are not connected to the accelerometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.
I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.
Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?
Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too...
In an effort to make a second usable workstation, I did the following (remotely) on rossa today (not necessarily in this order, I wasn't maintaining a live log so I forgot):
So, in summary, rossa is now all set up for use during lock acquisition. However, until this machine has undergone a few months of testing, we should freeze the pianosa config and not mess with it.
Note that this version of the "crtools" is rather new. Please use them, and if there is an issue, report the errors! I am going to occasionally try lock acquisition using rossa.
wiped and installed Debian 10 on rossa today
still to be done: configure it as a CDS workstation
please don't try to "fix" it in the meantime
This is strange - I was definitely able to launch medm when I was working on this machine remotely on Friday. But now, there does seem to be a problem with this shared library being missing.
First of all, I installed mlocate to find where the shared library files are installed. Then I made the symlink, and now sitemap seems to work again.
Weirdly, my changes to /etc/resolv.conf got overwritten somehow. Was this machine rebooted? Uptime suggests it's only been running for ~6 hours at the time of writing of this elog.
sudo apt install mlocate
sudo ln -s /usr/lib/x86_64-linux-gnu/libreadline.so.7 /usr/lib/x86_64-linux-gnu/libreadline.so.6
when I try 'sitemap' on rossa I get:
medm: error while loading shared libraries: libreadline.so.6: cannot open shared object file: No such file or directory
Indeed, this is now fixed by following instructions from here. I rebooted rossa at ~1250 PDT and confirmed that resolv.conf didn't get overwritten. The resolv.conf file also now has the following useful lines at the head:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
yes, I rebooted yesterday to fix the 'streaking white lines' problem in the video/display
maybe we're supposed to edit something besides resolv.conf since that gets over-written on boot for some linux OS
Last Tuesday evening, while attempting the PRFPMI locking, I noticed a strange feature in the LSC signals, which is shown in Attachment #1 (the PDF exported by dataviewer is 14MB so I upload the jpeg instead). As best as I can tell, the REFL33 and POP22 channels show an abrupt jump in the signal levels, while the other channels do not. POP110 shows a slight jump at around the same time, and the large excursion in AS110_Q actually occurs a few seconds later, and is probably some angular excursion of the PRC/BS. I'm struggling to interpret how this can be explained by some interferometric mechanism, but haven't come up with anything yet. The LO for the 3f error signals is the 2f field, but then why doesn't the POP110 channel show a similar jump if there is some abrupt change in the resonant condition? Is such a change even feasible from a cavity length change point of view? Or did the sideband frequency somehow abruptly jump? But if so, why is the jump much more clearly visible in one sideband than the other?
Does anyone have any ideas as to what could be going on here? This may give some clue as to what's up with the weird sensing matrices, but may also be something boring like broken electronics...
I want to be able to run the dither alignment servo with the PRFPMI locked - I've been thinking about what the scheme should be, and I list here some questions I had while thinking about this.
I wanted to try using rossa as my locking workstation today. However, a few problems became quickly evident. Basically, any of our scripts that rely on the cdsutils package (there are MANY) will not work on rossa, because of some library error. This machine is running Debian 10, while the cdsutils package is being loaded from a pre-compiled install on the shared drive, so perhaps this isn't surprising?
Digging a little more, I found that a version of cdsutils that works with python3 is shipped with the standard cds-workstation meta-package. This is great news, and we should try and use this where possible I guess. Deferring further debugging for daytime work.
Anyway, I added a symlink: sudo ln -s /usr/lib/x86_64-linux-gnu/libncurses.so.6 /usr/lib/x86_64-linux-gnu/libncurses.so.5, and installed wmctrl using sudo apt install wmctrl.
I noticed these streaky lines again today (but they were not a problem last night). It is annoying if we have to reboot this machine all the time. I wonder if this has something to do with missing drivers. When I ran sudo apt update && sudo apt upgrade, I got several lines like (this isn't the whole stack trace)
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_unload.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_load.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/unload_bl.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/bl.bin for module nouveau
Is this indicative of the graphics drivers being installed incorrectly? I am hesitant to mess with this because I think in the past, it was always trying to update some graphics driver that crashed the whole machine into some weird state where we have to wipe the drive and do a fresh re-install of the OS.
Should we just follow these instructions? The graphics card is apparently Quadro P400, which is one of the supported ones according to the list of supported devices.
Or just swap donatella and rossa monitors and defer the problem for later?
We can probably learn something about the interferometer / top level BHD plan with an in-air BHD setup, even if the noise is bad. Here are some thoughts about how we would do it.
For this first attempt, we don't really care about the PRC filtering. So possible places to pick off an LO beam are:
In all cases, I think the easiest option is to route whatever beam we choose into a fiber, and then bring it over to whatever cavity we choose to use for an OMC. I'm assuming whatever phase control technique we end up using can cancel the fiber phase noise at relevant frequencies.
LO phase control
There is a question about the range, but I think these are the only two realistic options we can implement on a reasonable time scale.
Again, there are a few options. Here are some pros and cons that come to my mind.
If we can do a vent (we'd just need a single chamber open), I'd go for the option of getting the copper OMC out and using that. Attachment #1 shows the approximate sizes of the various components (OMMT, OMC cavity, DCPDs), while Attachment #2 shows a rough sketch of where things would go on the AP table, with the rectangles approximately to scale.
I'd made a c1omc model sometime ago. Basically, I think we have sufficient ADC/DAC channels in the c1ioo machine for any of the options listed above - but using the copper OMC and associated peripherals would allow the easiest interfacing.
More tomorrow, but I tried the following tonight:
I was looking at some signals from last night, see Attachment #1.
Attachment #2 shows some ASC metrics. My conclusion here is that running the PRCL and MICH dither alignment servos (the former demodulating REFLDC and the latter ASDC to get an error signal) and hand tuning the arm ASC loop offsets improves the mode matching to the IFO, because:
The REFLDC behavior needs a bit more interpretation I think, because if the IFO is overcoupled (as I claim it is), then better alignment would at some point actually result in REFLDC increasing.
All the DC signals recorded by the fast system come from the backplane P2 connector of the PD interface boards. According to the schematic, these signals have a voltage gain of 2. The LSC photodiodes themselves have a nominal DC transimpedance of 50 ohms. So, the conversion from power to digital counts is: 0.8 A/W * 50 V/A * 2 * 3276.8 cts/V * whtGain. Inverting, I get 3.8 uW/ct for a whitening gain of 1. This is power measured at the photodiode - optical losses upstream of the photodiode will have to be accounted for separately.
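The same arithmetic in code form (a sketch; the whitening gain is entered in dB and converted to a linear voltage gain):

resp = 0.8      # A/W, photodiode responsivity
z_dc = 50.0     # V/A, DC transimpedance
g_pd = 2.0      # backplane P2 buffer gain (per the schematic)
adc  = 3276.8   # cts/V (16-bit ADC over +/- 10 V)

def uW_per_count(wht_gain_dB=0.0):
    g_wht = 10 ** (wht_gain_dB / 20.0)
    return 1e6 / (resp * z_dc * g_pd * adc * g_wht)

print(uW_per_count(0))   # ~3.8 uW/ct at 0 dB whitening gain
print(uW_per_count(18))  # ~0.48 uW/ct at the 18 dB ASDC setting mentioned below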
Assuming a modulation depth of 0.2, the 55 MHz sideband power should be ~20 mW. The Schnupp asymmetry is supposed to give us O(1) transmission of this field to the AS port. Then, the SRM will attenuate the field by a factor of 10, so we expect ~2 mW at the AS port. Let's assume 80% throughput of this field to the AP table, and then there is a 50/50 beamsplitter dividing the light between the AS55 and AS110 photodiodes. So, we expect there to be ~700 uW of power in the TEM00 mode 55 MHz sideband field. This corresponds to 1600 cts according to the above calibration (the ASDC whitening gain is set to 18 dB). The fact that much smaller numbers were seen for ASDC indicates that (i) the Schnupp asymmetry is not so perfectly tuned and the actual transmission of the sideband field to the dark port is smaller, or (ii) one or more of the optical splitting fractions assumed above is wrong. If the former is true, we can still probably infer the contrast defect if we can somehow get an accurate measurement of the sideband transmission to the dark port.
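And the budget above written out, with every throughput number being the assumption stated in the text:

from scipy.special import jv  # Bessel functions for the modulation sidebands

P_in      = 1.0   # W, assumed input power
gamma     = 0.2   # 55 MHz modulation depth (assumed)
P_sb      = 2 * jv(1, gamma)**2 * P_in  # both first-order sidebands, ~20 mW
T_schnupp = 1.0   # O(1) sideband transmission to the AS port (assumed)
T_srm     = 0.1   # SRM attenuates the power by ~10x
thru_AP   = 0.8   # assumed throughput to the AP table
bs_split  = 0.5   # 50/50 beamsplitter between AS55 and AS110

P_AS55 = P_sb * T_schnupp * T_srm * thru_AP * bs_split
print(f"expected 55 MHz sideband power on the AS55 diode: {P_AS55*1e6:.0f} uW")
# ~0.8 mW, i.e. the ~700 uW quoted above to within rounding

# Expected ASDC counts at 18 dB whitening gain, using the 3.8 uW/ct figure from above
cts = P_AS55 * 1e6 / 3.8 * 10 ** (18 / 20.0)
print(f"expected ASDC level: ~{cts:.0f} cts")  # ~1600 cts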
After some consultation with Erik von Reis at LHO, this workstation is progressing towards being usable for most commissioning tasks. DTT, awggui, foton, and MEDM are all now working well. The main limitation now comes from the fact that many of our python scripts are written for python2, and rossa doesn't have many dependencies installed for python2. I see no reason to build these dependencies on rossa for python2; we should not have to work with an unsupported language. But at the same time, I don't want to completely wipe all our python2 scripts and make them python3, because this would involve a lot of tedious testing that I'm not prepared to undertake at the moment (the problem is compounded by the fact that pianosa does not have many dependencies installed for python3).
So what I have done in the interim is make python3 versions of the most important scripts I need to get the PRFPMI locking working - they are in the scripts directory and have the same names as their python2 counterparts, but with a 3 appended to their names. So when working on rossa, these are the scripts that are called. Eventually, after a lot more testing, we can deprecate the old scripts. Currently, where applicable, the MEDM screens allow for either the python2 or python3 version of the script to be called.
Please, for the time being, do not try and install any new packages on rossa unless you are prepared to debug any problems caused and return the machine to a workable state. If you find some issue with a missing package on rossa, (i) make a note of it on the elog, and (ii) if possible, set up your own conda environment for testing and install dependencies to that environment only.
Main goals tonight were:
After some hunting, I found this old SURF report with the WFS head measurements. The y-axes don't make much sense to me, and I can't find the actual data anywhere (her wiki page doesn't actually exist). So I think it's still unknown if these heads ever had the advertised transimpedance gain, or if the measured transimpedance of ~1kohm was what it always was.
In fact, all these utilities are now available in python3. There may be some bugs (e.g. this), but I've checked basic functionality and things look usable enough for development to proceed. While we can have a python2 env on rossa, I think it's unnecessary.
I, too, would prefer py3 for everything, but aren't all the cdsutils / guardian things still python2?
I tried using the POX_I error signal for the DC CARM_B path today a couple of times. Got to a point where the AO path could be engaged and the arm powers stabilized somewhat, but I couldn't turn the CARM_A path off without blowing the lock. Now the IMC has entered a temperamental state, so I'm abandoning efforts for tonight, but things to try tomorrow are:
I have some data from a couple of days ago when the PRFPMI was locked as usual (CARM_B on REFL for both DC and AO paths), and the sensing lines were on, so I can measure the relative strength of the sensing lines in POX/REFL and get an estimate of what the correct digital gain should be.
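The gain matching is then just a ratio of the CARM line amplitude as seen in the two sensors. A sketch, with the channel names, line frequency and sample rate as placeholders:

import numpy as np

fs, f_carm = 16384.0, 309.21  # placeholders: sample rate and CARM sensing line frequency

def line_amp(x):
    """Amplitude of the CARM line, from an FFT of the windowed time series."""
    win = np.hanning(len(x))
    spec = 2 * np.abs(np.fft.rfft(x * win)) / np.sum(win)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_carm))]

# With refl_i and pox_i pulled from frames (e.g. via nds2) during a lock with the line on,
# the digital gain for the POX-based CARM_B path scales as:
#   g_pox = g_refl * line_amp(refl_i) / line_amp(pox_i)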
The motivation here is to see if the sensing matrix looks any different with a modified locking scheme.