[koji, steve, gautam]
We debugged this in the following way:
So for now, the power cable to the box is disconnected on the back end. We have to pull it out and debug it at some point.
Apart from this, megatron was un-sshable, so I had to hard-reboot it and restart the MCautolocker, FSSslowPy and nds2 processes on it. I also restarted the modbusIOC processes for the PSL channels on c1auxex (for which the physical Acromag units sit in 1X5 and hence were affected by our work), mainly so that the FSS_RMTEMP channel worked again. Now the IMC autolocker is working fine, the arms are locked (we can recover TRX and TRY ~ 1.0), and everything seems to be back to a nominal state. Phew.
The trillium interface box was removed from the rack.
The problem was the use of under-spec TVS (Transient Voltage Suppression) diodes (~ semiconductor fuses) in the protection circuit.
The TVS diodes we had were specified with breakdown voltages lower than the supplied voltages of +/-20V. This over-voltage eventually caused the catastrophic breakdown of one of the diodes.
I don't see any particular reason to have these diodes for laboratory use of the interface. Therefore, I've removed the TVS diodes and left them unreplaced. The circuit was tested on the bench and returned to the rack. All the cables are hooked up, and the BLRMS now look as usual.
- The board version was found to be D1000749-v2
- There were obvious signs of burning or thermal stress around components D17 and D14. The solder on D17 was so brittle that a finger touch was enough to remove the component.
- These D components are TVS (Transient Voltage Suppression) diodes manufactured by Littelfuse Inc. They are a sort of surge/overvoltage protector, protecting the rest of the circuit from exposure to excess voltage. The specified component for D17/D14 was 5.0SMDJ20A, with a reverse standoff voltage (~operating voltage) of 20V and a breakdown voltage of 22.20V(min)~24.50V(max). However, the datasheet says that the marking of the proper component must be "5BEW" rather than "DEM," which is what is visible on the component. Some searching revealed that the installed component was an SMDJ15A, which has a breakdown voltage of 16.70V~18.50V. This spec is way too low compared to the supplied voltage of +/-20V.
The idea we are going with to push the coil driver noise contribution down is to simply increase the series resistance between the coil driver board output and the OSEM coil. But there are two paths, one for fast actuation and one that provides a DC current for global alignment. I think the simplest way to reduce the noise contribution of the latter, while preserving reasonable actuation range, is to implement a precision DC high-voltage source. A candidate that I pulled off an LT application note is shown in Attachment #1.
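To make the scaling concrete, here is a minimal sketch (my own illustrative numbers, not the actual noise budget; the driver/DAC voltage noise e_n is an assumed placeholder) of how the current noise injected into the coil falls with increasing series resistance, until the Johnson noise of the resistor itself becomes the floor:

import numpy as np

kB, T = 1.380649e-23, 298.0                 # Boltzmann constant [J/K], room temp [K]

def coil_current_noise(R, e_n=5e-9):
    """Current noise [A/rtHz] through series resistance R [ohm]."""
    i_driver = e_n / R                      # driver voltage noise -> current, falls as 1/R
    i_johnson = np.sqrt(4 * kB * T / R)     # Johnson noise of R itself, falls as 1/sqrt(R)
    return np.sqrt(i_driver**2 + i_johnson**2)

for R in [100, 1e3, 10e3]:
    print(f"R = {R:>7.0f} ohm : {coil_current_noise(R) * 1e12:6.2f} pA/rtHz")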
If all this seems reasonable, I'd like to prototype this circuit and test it with ETMX, which already has the high series resistance for the fast path. So I will ask Steve to order the OpAmp and transistors.
Bah! Too complex.
The wall StripTool indicated that the IMC wasn't too happy when I came in today. Specifically:
The last time this happened, it was due to the Sorensens not spitting out the correct voltages. This time, there were no indications on the Sorensens that anything was funky. So I just disabled the MCautolocker and figured I'd debug later in the evening.
However, around 5pm, the shadow sensor values looked nominal again, and when I re-enabled the local damping, the MC REFL spot suggested that the damping was working just fine. I re-enabled the MCautolocker, and the MC re-locked almost immediately. To re-iterate, I did nothing to the electronics inside the VEA. Anyways, this enabled us to work on the X arm ASS (next elog).
Independent of the problems the vertex machine has been having (I think, unless it's something happening over the shared memory network), I noticed on Friday that the ETMX watchdog was tripped. Today, once again, the ETMX watchdog was tripped. There is no evidence of any abnormal seismic activity around that time, and anyway, none of the other watchdogs tripped. Attachment #1 shows that this happened at ~8:38am PT this morning. Attachment #2 shows the 2k sensor data around the time of the trip. If the latter is to be believed, there was a big impulse in the UL shadow sensor signal which may have triggered the trip. I'll squish cables and see if that helps - Steve and I did work at the EX electronics rack (1X9) on Friday, but this problem precedes our working there...
OK, how about this:
The question still remains of how to combine the fast and bias paths in this proposed scheme. I think the following approach works for prototyping at least:
In the longer term, perhaps the Satellite Box revamp can accommodate a bias voltage summation connector.
I have neglected many practical concerns. Some things that come to mind:
Today while Rich Abbott was here, Koji and I had a brief discussion with him about the HV amplifier idea for the coil driver bias path. He gave us some useful tips, perhaps the most useful being a topology that he used and tested for an aLIGO ITM ESD driver, which we can adapt to our application. It uses a PA95 high voltage amplifier, which differs from the PA91 mainly in the output voltage range (up to 900V for the former, "only" 400V for the latter). He agrees with the overall design idea of
He also gave some useful suggestions like
I am going to work on making a prototype version of this box for 5 channels that we can test with ETMX. I have been told that the coupling from side coil to longitudinal motion is of the order of 1/30, in which case maybe we only need 4 channels.
A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.
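For concreteness, here is a minimal sketch of the trip logic as I understand it (the threshold value and signal conditioning are assumptions, not the actual RTS code):

import numpy as np

def tripped_channel(sensors, threshold=150.0):
    """sensors: dict mapping OSEM name -> array of shadow sensor counts.
    Returns the first channel whose RMS about its mean exceeds the
    (assumed) threshold, else None."""
    for name, x in sensors.items():
        if np.sqrt(np.mean((x - np.mean(x)) ** 2)) > threshold:
            return name
    return None

# A few-thousand-count glitch in UL trips the watchdog even though the
# optic barely moves:
ul = np.zeros(2048)
ul[1000:1100] = 3000.0               # ~50 ms glitch at 2k sampling
print(tripped_channel({'UL': ul}))   # -> 'UL'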
Here is another big one
I took another pass at this. Here is what I have now:
Attachment #1: Composite amplifier design to suppress voltage noise of PA91 at low frequencies.
Attachment #2: Transfer function from input to output.
Attachment #3: Top 5 voltage noise contributions for this topology.
Attachment #4: Current noises for this topology, comparison to current noise from fast path and slow DAC noise.
Attachment #5: LISO file for this topology.
Looks like this will do the job. I'm going to run this by Rich and get his input on whether this will work (this design has a few differences from Rich's design), and also on how to best protect from HV incidents.
I had a very fruitful discussion with Rich about this circuit today. He agreed with the overall architecture, but made the following suggestions (Attachment #1 shows the circuit with these suggestions incorporated):
If all this sounds okay, I'd like to start making the PCB layout (with 5 such channels) so we can get a couple of trial boards and try this out in a couple of weeks. Per the current threat matrix and noises calculated, coil driver noise is still projected to be the main technical noise contribution in the 40m PonderSqueeze NB (more on this in a separate elog).
Glitch, small amplitude, 350 counts & no trip.
The second big glitch tripped the ETMX sus. There were small earthquakes around the glitches. Its damping was recovered.
All suspensions tripped. Their damping was restored. The MC is locked.
ITMX-UL & side magnets are stuck.
I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.
The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.
M3.4 Colton shake did not trip sus.
I've been plugging away at the Altium prototyping of the high-voltage bias idea; this is meant to be a progress update.
I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:
I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit, I'll try and get some input from Rich.
*Updated with current noise (Attachment #2) at the output for this topology, with a series resistance of 25 kohm in this path. The modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I added the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1 pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution of (assumed flat in ASD) ripple in the HV power supply (i.e. the voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
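To make the quadrature bookkeeping explicit, a quick numeric check (only the 25 kohm comes from the modeling above; the simulated-trace value is a placeholder):

import numpy as np

kB, T, R = 1.380649e-23, 298.0, 25e3
i_johnson = np.sqrt(4 * kB * T / R)   # ~0.81 pA/rtHz for the 25 kohm
i_model = 0.2e-12                     # placeholder for the noiseless-R LTspice trace
print(f"total: {np.sqrt(i_model**2 + i_johnson**2) * 1e12:.2f} pA/rtHz")  # < 1 pA/rtHz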
As part of the preparation for the replacement of c1susaux with Acromags, I inspected the coil-OSEM transfer function measurements for the vertex SUSs.
The TFs showed the typical f^-2 shape with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to the MAX333s are as listed below.
The switching voltage for UL is obviously incorrect. We thought this came from the broken BIO board and thus swapped the corresponding board, but the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?
Initially, we thought that the BIO couldn't drive the pull-up resistor of 5 kOhm from 15V to 0V (= sinking 3 mA of current). So I replaced the pull-up resistors with 30 kOhm, but this did not help. These 30Ks are left on the board.
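For reference, the sink-current arithmetic behind that change:

# Current the BIO channel must sink to pull the line from 15V to 0V, I = V/R:
for R in (5e3, 30e3):
    print(f"{R / 1e3:.0f}k pull-up to 15V -> {15.0 / R * 1e3:.1f} mA")
# 5k  -> 3.0 mA (the original concern)
# 30k -> 0.5 mA (still didn't fix the switching, so the problem lies elsewhere)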
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because of the lack of plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are also some LEMO/BNC cables on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done, and what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
[chub, bob, gautam]
We took the heavy door off the EY chamber at ~9:30am.
Waiting for the table to level off now. Plan for later today / tomorrow is as follows:
While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.
Chamber work by Chub and gautam:
Y arm was locked at low power in air.
We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.
* Paola, our vertex laptop, and indeed most of the laptops inside the VEA, are not ideal for working on this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens,
Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector...
I don't think it was used. It is not on the diagram either. You can remove it.
After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.
I suspected that the problem with the OSEMs hasn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ringdown. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel seems to saturate at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.
For now, I'll start by repeating the ringdown with a switched out Satellite Box (SRM) and see if that fixes the problem.
Short update on latest Satellite box woes.
What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and this was confirmed to be an electronic problem as opposed to some magnet skullduggery using the tester box. Once we get to the bottom of the ETMY sat box, we will look at SRM. This is more or less the last thing to look at for this vent - once we are happy the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.
While Chub is making new cables for the EY satellite box...
While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.
If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday.
All raw images in this elog have been uploaded to the 40m google photos.
In preparation for the FC cleaning, I did the following:
Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything listed in the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, put on the heavy doors, and at least rough the main volume down to 1 torr on Friday.
The attached photo shows the two optics with FC applied.
My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.
Since we may want to close up tomorrow, I did the following prep work:
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.
Procedure tomorrow [comments / suggestions welcome]:
All photos have been uploaded to google photos.
Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.
[koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.
[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.
[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.
[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.
Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.
The first task is to fix the damping of ETMY.
[jon, koji, gautam]
I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...
I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation.
[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1 ct ~ 1 um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to resolve two closely spaced peaks, so it looks like, for some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz (a quick sketch of the resolution requirement follows this list).
[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also came to think that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~4 dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank; once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.
[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.
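As referenced above, a sketch of the resolution argument (the sampling rate and segment length are assumptions, and the data is a random-noise stand-in): resolving two modes separated by <1 mHz requires FFT segments longer than ~1000 s.

import numpy as np
from scipy import signal

fs = 2048.0                   # Hz, the 2k sensor data
T_seg = 2048.0                # s per FFT segment -> df ~ 0.5 mHz
x = np.random.randn(int(3 * fs * T_seg))    # stand-in for a free-swinging OSEM signal
f, Pxx = signal.welch(x, fs=fs, nperseg=int(fs * T_seg))
print(f"frequency resolution: {f[1] - f[0]:.2e} Hz")   # ~4.9e-04 Hz, i.e. < 1 mHz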
We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.
I did some tests of the electronics chain today.
We hypothesised a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, I guess we want the damping close to critical - overdamping imparts excess displacement noise to the optic, while underdamping doesn't work either. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for. These flaky connectors are proving pretty troublesome; let's start testing out some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought-out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
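For whoever does that systematic investigation, here is a hedged sketch of estimating the damped Q from a kick-and-ringdown (synthetic data; the envelope threshold is arbitrary):

import numpy as np
from scipy.signal import hilbert

def estimate_Q(t, x, f0):
    """Q = pi * f0 * tau, where tau is the 1/e amplitude decay time."""
    env = np.abs(hilbert(x))                  # amplitude envelope of the ringdown
    mask = env > 0.05 * env.max()             # stay above the noise floor
    slope, _ = np.polyfit(t[mask], np.log(env[mask]), 1)
    return np.pi * f0 * (-1.0 / slope)

t = np.arange(0, 30, 1.0 / 256)
x = np.exp(-t / 1.7) * np.sin(2 * np.pi * 0.95 * t)   # synthetic mode with Q ~ 5
print(f"Q ~ {estimate_Q(t, x, 0.95):.1f}")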
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.
The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.
However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.
Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.
To make the lower 96-pin DIN connector available for this, we needed DIN 41612 (96-pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be mounted in either of two orientations (a 180 deg rotation); its direction was matched with the ones on the upper connectors.
Now the sus PD whitening boards are ready for moving the backplane connectors to the lower row and plugging the Acromag interface board into the upper row.
Sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels somewhere else. For this purpose, the boards were modified to duplicate the fast signals onto the lower DIN96 connector.
The modification was done on the back layer of the board (Attachment 1).
The 28A~32A and 28C~32C pins of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.
After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.
The functionality of the 40 (8 sus x 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.
The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked, and they returned to their values from before the modification, except for the BS UL PD. As the fast version of the signal had returned to its previous value, the monitor circuit was suspect. Therefore the op-amp of the monitor channel (LT1125) was replaced, and the value came back to the previous one (Attachment 3).
Will advise when I'm finished, will be by 1 pm for ALS work to begin.
Testing is finished.
In anticipation of needing to test hundreds of suspension signals after the c1susaux upgrade, I've started developing a Python package to automate these tests: susPython
The core of this package is not any particular test, but a general framework within which any scripted test can be "nested." Built into this framework is extensive signal trapping and exception handling, allowing actuation tests to be performed safely. Namely it protects against crashes of the test scripts that would otherwise leave the suspensions in an arbitrary state (e.g., might destroy alignment).
The package is designed to be used as a standalone from the command line. From within the root directory, it is executed with a single positional argument specifying the suspension to test:
$ python -m suspython ITMY
Currently the package requires Python 2 due to its dependence on the cdsutils package, which does not yet exist for Python 3.
So far I've implemented a cross-consistency test between the DC-bias outputs to the coils and the shadow sensor readbacks. The suspension is actuated in pitch, then in yaw, and the changes in the PDMon signals are measured. The expected sign of the change in each coil's PDMon is inferred from the output filter matrix coefficients. I believe this test is sensitive to two types of signal-routing errors: no change in PDMon response (actuator is not connected), and an incorrect sign in the pitch or yaw response, or in both (two actuators are cross-wired).
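A minimal sketch of what this test could look like (the channel and offset names follow the usual C1:SUS- conventions but are assumptions on my part; the real susPython code will differ):

import time
import numpy as np
import epics   # pyepics, for EPICS reads/writes

COILS = ['UL', 'UR', 'LL', 'LR']
EXPECT = {'PIT': [+1, +1, -1, -1], 'YAW': [+1, -1, +1, -1]}   # naive sign pattern

def pdmons(optic):
    return np.array([epics.caget('C1:SUS-%s_%sPDMon' % (optic, c)) for c in COILS])

def check_dof(optic, dof, step=50.0, settle=5.0):
    """Step the DC bias in one DoF, restore it no matter what happens, and
    compare the signs of the PDMon changes against the expected pattern."""
    bias_ch = 'C1:SUS-%s_%s_OFFSET' % (optic, dof)   # assumed channel name
    before = pdmons(optic)
    nominal = epics.caget(bias_ch)
    try:
        epics.caput(bias_ch, nominal + step)
        time.sleep(settle)
        delta = pdmons(optic) - before
    finally:
        epics.caput(bias_ch, nominal)    # never leave the alignment shifted
    return np.all(np.sign(delta) == EXPECT[dof]), delta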
The next test I plan to implement is a test of the slow system using the fast system. My idea is to inject a 3-8 Hz excitation into the coil output filter modules (either bandlimited noise or a sine wave), with all coil outputs initially disabled. One coil at a time will be enabled and the change in all VMon signals monitored, to verify the correct coil readback senses the excitation. In this way, a signal injected from the independent and unchanged fast system provides an absolute reference for the slow system.
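Again as a hedged outline (the switch and monitor channel names are assumptions), the fast-to-slow check might look like:

import time
import numpy as np
import epics

def vmon_rms(optic, coil, n=50, dt=0.05):
    """Crude RMS of a coil's VMon readback, sampled over a few seconds."""
    samples = []
    for _ in range(n):
        samples.append(epics.caget('C1:SUS-%s_%sVMon' % (optic, coil)))
        time.sleep(dt)
    return np.std(samples)

def check_routing(optic, coils=('UL', 'UR', 'LL', 'LR', 'SD'), factor=5.0):
    """With the 3-8 Hz excitation running and all coil outputs disabled,
    enable one coil at a time and confirm only its VMon RMS rises."""
    quiet = dict((c, vmon_rms(optic, c)) for c in coils)
    for c in coils:
        sw = 'C1:SUS-%s_%s_SW_ENABLE' % (optic, c)   # assumed switch channel
        epics.caput(sw, 1)
        loud = vmon_rms(optic, c)
        epics.caput(sw, 0)
        print('%s: %s' % (c, 'OK' if loud > factor * quiet[c] else 'CHECK WIRING'))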
I'm also aware of ideas for more advanced tests, which go beyond testing the basic signal routing. These too can be added over time within the susPython framework.
Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:
There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit matrices:
There isn't a solution to the matrix equation diag(g_UL, g_UR, g_LL, g_LR) * M_naive = M_meas, i.e. we cannot simply redistribute the actuation vectors we found as gains to the coils and preserve the naive actuation matrix. What this means is that in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for PIT, YAW and POS. Instead, we can put these custom eigenvectors into the output matrix, but I'm struggling to think of what the physical implication is. I.e. what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled, but also non-orthogonal (though still linearly independent) at ~10 Hz, which is well above the resonant frequencies of the pendulum? The PIT and YAW eigenvectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
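To make the non-orthogonality concrete, the angle can be computed directly from the tuned actuation vectors (the vectors below are made-up placeholders; the measured pair gave ~40 degrees):

import numpy as np

pit = np.array([1.0, 0.6, -1.0, -0.6])   # placeholder tuned PIT vector [UL, UR, LL, LR]
yaw = np.array([1.0, -0.4, 0.5, -1.1])   # placeholder tuned YAW vector
cosang = pit @ yaw / (np.linalg.norm(pit) * np.linalg.norm(yaw))
print(f"angle(PIT, YAW) = {np.degrees(np.arccos(cosang)):.1f} deg")  # 90 deg if orthogonal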
So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC.
let us have 3 by 4, nevermore
so that the number of columns is no less
and no more
than the number of rows
so that forevermore we live as 4 by 4
I'm struggling to think
I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals.
The measured output matrix has rows corresponding to the coils in the order [UL, UR, LL, LR] and columns to the DOFs in the order [POS, PIT, YAW, Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple gain scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix by which we must multiply the "ideal" output matrix to get the measured output matrix, has a condition number of 134 (a condition number of 1 would signify closeness to the identity matrix).
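For reference, the condition number quoted above can be computed like this (the measured matrix here is a random stand-in, since the actual numbers are not reproduced in this entry):

import numpy as np

# Naive output matrix: rows = coils [UL, UR, LL, LR], cols = DOFs [POS, PIT, YAW, Butterfly]
M_ideal = np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)
M_meas = M_ideal + 0.3 * np.random.randn(4, 4)    # stand-in for the measured matrix
A = M_meas @ np.linalg.inv(M_ideal)               # the "adjustment matrix"
print(f"condition number: {np.linalg.cond(A):.1f}")   # 1 would mean identity-like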