This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However, chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected.
I thought I'd get started on some of the tests tonight. But I found that this problem had resurfaced. I don't know what's so special about the REFL55 photodiode - as far as I can tell, other photodiodes at the REFL port are running with comparable light incident on them, similar flat whitening gain, and so on. The whitening electronics are known to be horrible because they use the quad LT1125 - but why is only this channel problematic? To describe the problem in detail:
I request Koji to look into this, time permitting, tomorrow. In the slightly longer term, we cannot run the IFO like this - the frequency of occurrence is much too high, and the "fix" seems random to me: why should sweeping the whitening gain fix the problem? There was some suggestion of cutting the PCB trace and putting in a resistor to limit the current draw on the preceding stage, but this PCB is ancient and I believe some traces are buried in internal layers. At the same time, I am guessing it's too much work to completely replace the whitening electronics with the aLIGO style units. Anyone have any bright ideas?
Anyway, I managed to lock the PRMI (ETMs misaligned) using REFL165I/Q. Then, instead of using the BS as the MICH actuator, I used the two ITMs (equal magnitude, opposite sign in the LSC output matrix).
I didn't get around to running any of the other tests tonight, will continue tomorrow.
Update Mar 26: Attachments #2 and #3 show that there is clearly something wrong with the whitening electronics associated with the REFL55 channels - with the PSL shutter closed (so the only "signal" being digitized should be the electronics noise at the input of the whitening stage), the I and Q channels don't show similar profiles, and moreover, are not consistent (the two screenshots are from two separate sweeps). I don't know what to make of the parts of the sweep that don't show the expected "steps". Until ndscope gets a log-scaled y-axis option, we have to live with the poor visualization of the gain steps, which are spaced in dB (rather than linearly). For this particular case, StripTool isn't an option either, because the Q channel has a negative offset, and I opted against futzing with the cabling at 1Y2 to give a small fixed positive voltage instead. I will emphasize that on Friday, this problem was not present, because the gain balance of the I and Q channels was good to within 1 dB.
I finished prewiring the new c1auxey Acromag chassis (see attached pictures). I connected all grounds to the DIN rail to save some wiring. The power switches and LEDs work as expected.
I configured the DAQ modules using the old windows machine. I configured the gateway to be 192.168.114.1. The host machine still needs to be set up.
Next, the feedthroughs need to be wired and the channels need to be bench tested.
It might be a good idea to configure this box for the new suspension config - modern Satellite Amp, HV coil driver etc. It's a good opportunity to test the wiring scheme, "cross-connect" type adapters etc.
Redoing magnet measurement of envelope 1:
Moving on to inspect and measure envelope 3 (the last one):
The servos are almost certainly not optimal - but we have the IFO sort of working, so before we make any changes, let's make a strong case for it. Once the loop TFs and noises (e.g. the sensing noise reinjection you maybe saw) are fully characterized and a new loop is shown to perform better, then we can make the changes, but until then, let's continue using the "nominal" configuration and keep all the WFS loops on. I turned everything back on.
BTW, MC2_ASCPIT_IN1 isn't the correct channel to measure the sensing noise re-injection; you need some other sensor, e.g. whether the MC transmission is (de)stabilized. 0-20 Hz is where I expect the WFS is actually measuring above the sensing noise.
I want to measure the spot positions on the IMC mirrors. We know that they can't be too far off center. Basically, I did the bare minimum to get these scripts in /opt/rtcds/caltech/c1/scripts/ASS/MC/ running on rossa (python3 mainly). I confirmed that I get some kind of spot measurement from this, but I'm not sure of the data quality / calibration to convert the demodulated response into mm of decentering on the MC mirrors. Perhaps it's something the MC suspension team can look into - it seems implausible to me that we are off by 5 mm in PIT and YAW on MC2? The spot positions I get are (in mm from the center):
   MC1 P       MC2 P       MC3 P       MC1 Y       MC2 Y       MC3 Y
0.640515   -5.149050    0.476649   -0.279035    5.715120   -2.901459
A future iteration of the script should also truncate the number of significant figures per a reasonable statistical error estimation.
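I haven't dug into exactly what these scripts do internally, but the standard A2L logic for this measurement would be: dither the mirror in angle at a known frequency, lock-in demodulate the (calibrated) cavity length signal at that frequency, and divide by the angular dither amplitude to get the lever arm, i.e. the spot offset. A minimal sketch, with all names mine rather than the script's:

import numpy as np

def spot_offset(length_sig, fs, f_dither, theta_amp):
    """Estimate spot decentering d from an angular dither.
    An angle dither of amplitude theta_amp [rad] on a mirror whose
    spot is off-center by d couples into the cavity length as
    dL = d * theta, so the demodulated length response divided by
    theta_amp gives d directly. Assumes the demodulation phase has
    been tuned so the response is in the real quadrature."""
    t = np.arange(len(length_sig)) / fs
    lo = np.exp(-2j * np.pi * f_dither * t)
    resp = 2 * np.mean(length_sig * lo)   # complex lock-in output
    return np.real(resp) / theta_amp      # offset, in (length units)/rad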
I think the only parts missing for assembly now are four 2U chassis. The PA95s need to be soldered on as well (they didn't arrive in time to send to SC). The stuffed boards are stored under my desk. I inspected one board and it looks fine, but of course we will need to run some actual bench tests to be sure.
I measured some of the dowel pins we got from McMaster with a caliper.
One small pin is 0.093" in diameter and 0.376" in length. The other sampled small pin has the same dimensions.
One big pin is 0.187" in diameter and 0.505" in length. The other is 0.187" in diameter and 0.506" in length.
The dowels meet our requirements.
We ran the coil balancing procedure 4 times while iterating through the output matrix optimization.
Attachment 1, pages 1 to 4, shows the progression of cross coupling from the current output matrix (the theoretical ideal) to the latest iteration. We plot the ASDs of the sensed DOFs, which we used to determine the cross coupling, while LOCKIN1 feeds a 13 Hz oscillation of 200 counts amplitude along the vector defined in the output matrix. That means that when we change the output matrix, we also change the excitation direction along with it in subsequent tests.
Unfortunately, we don't see very good optimization over the iterations. While we see some peaks going down in sensed PIT and sensed POS (through MC_F), we instead see an increase in cross coupling in the sensed YAW.
For this technique to work, (i) the WFS loops must be well tuned and (ii) the beam must be well centered on MC2. I am reasonably certain neither is true. For MC2 coil balancing, you can use a HeNe, there is already one on the table (not powered), and I guess you can use the MC2 trans QPD as a sensor, MC won't need to be locked so you can temporarily hijack that QPD (please don't move anything on the table unless you're confident of recovering everything, it should be possible to do all of this with an additional steering mirror you can install and then remove once your test is done). Then you can do any variant of the techniques available once you have an optical lever, e.g. single coil drive, pringle mode drive etc to do the balancing.
I think Hang had some technique he tried recently as well, maybe that is an improvement.
I think there's been some mis-communication. There's no updated Hang procedure, but there is the one that Anchal, Paco and I discussed, which is different from what is in the elog.
We'll discuss again, and try to get it right, but no need to make multiple forks yet.
I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.
I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.
I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup.
Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.
I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.
Set-up was carried out via the following procedure:
$ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
$ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
Then, in /etc/default/grub, set the kernel command line to isolate CPU cores 1-11 for the real-time models:
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
$ sudo update-grub
$ sudo reboot now
I need to talk to Chris before I can take the setup further.
I repeated the usual whitening board characterization test of:
Attachment #1 suggests that the steps are equal (3 dB) in size, but note that the "Q" channel shows only ~half the response of the I channel. The drive is derived from a channel of an unused AI+dewhite board in 1Y2, split with a BNC Tee, and fed to the two inputs on the whitening filter. The impedance is expected to be the same on each channel, and so each channel should see the same signal, but I see a large asymmetry. All of this checked out a couple of weeks ago (since we saw ellipses and not circles), so I'm not sure what changed in the meantime, or if this is symptomatic of some deeper problem.
Usually, doing this and then restoring the cabling returns the signal levels of REFL55 to nominal levels. Today it did not - at the nominal whitening gain setting of +18dB flat gain, when the PRMI is fringing, the REFL55 inputs are frequently reporting ADC overflows. Needless to say, all my attempts today evening to transition the length control of the vertex from REFL165 to REFL55 failed.
I suppose we could try shifting the channels to (physical) Ch5 and Ch6 which were formerly used to digitize the ALS DFD outputs and are currently unused (from Ch3, Ch4) on this whitening filter and see if that improves the situation, but this will require a recompile of the RTCDS model and consequent CDS bootfest, which I'm not willing to undertake today. If anyone decides to do this test, let's also take the opportunity to debug the BIO switching for the delay line.
We poked around the c1auxex chassis (looked in situ with a flashlight, without disturbing any connections) to better understand the wiring scheme.
To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.
I also did some rewiring in the Acromag chassis to improve its reliability. In particular, I removed the ground wires from the DIN rail and connected them using crimp-on butt splices.
This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.
The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps) shared with the other test-stand machines mounted without issue.
Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.
Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/. This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:
$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
The model compilation then succeeded:
controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1bhd$ make clean-c1lsc
controls@c1bhd$ make c1lsc
Parsing the model c1lsc...
Building EPICS sequencers...
Building front-end Linux kernel module c1lsc...
make: Warning: File 'GNUmakefile' has modification time 28830 s in the future
make: warning: Clock skew detected. Your build may be incomplete.
RCG source code directory:
The following files were used for this build:
Successfully compiled c1lsc
Compile Warnings, found in c1lsc_warnings.log:
As did the installation:
controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing start and stop scripts
Updating testpoint.par config file
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
Installing auto-generated DAQ configuration file
Installing Epics MEDM screens
Running post-build script
We are ready to start building and testing models.
I've worked on packing the components for the following chassis:
- 5 16bit AI chassis
- 4 18bit AI chassis
- 7 16bit AA chassis
- 8 HAM-A coil driver chassis
They are "almost" ready for shipment. Almost means some small parts are missing. We can ship the boxes to the company while we wait for these small parts.
And some additional items to replenish the emptying stock.
A cross-coupling test has been set to trigger at 05:00 am on April 1st, 2021. The script is waiting in tmux session 'cB' on pianosa; /scripts/SUS/OutMatCalc/MC2crossCoupleTest.py is being used here. The script will switch on the oscillator in LOCKIN1 of MC2 at 13 Hz and 200 counts, and will send it along the POS, PIT, and YAW vectors of the output matrix one by one, each for 2 minutes. It will take data from C1:IOO-MC_F_DQ, C1:IOO-MC_TRANS_PIT_ERR and C1:IOO-MC_TRANS_YAW_ERR and use it to measure the 'sensing matrix' S. The sensing matrix S is defined as the cross-coupling between the excited and sensed DOFs, and we ideally want it to be an identity matrix. The code will use the measured S to create a guess matrix A which, on being multiplied by the ideal coil output matrix, gives a rotated coil output matrix O. This guess O will be applied and the measurement repeated. On each iteration, the next A matrix is defined by:
This recursive algorithm converges A to the inverse of the initial S. The above relation is derived by noticing that, in steady state, the update leaves A unchanged, which requires SA = I. I've taken this idea from a mathematics paper I found on some more complex stuff (c.f. https://doi.org/10.31219/osf.io/yrvck).
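The update equation itself was rendered as an image and didn't survive here, so the sketch below is my reconstruction rather than the exact expression in the script: a Newton-Schulz-style recursion has exactly the fixed point described above (S @ A = I), with a relaxation gain b (promoted to a matrix in a later run):

import numpy as np

def update_guess(S, A, b=1.0):
    """One iteration of the recursion. The update leaves A unchanged
    only when S @ A = I, i.e. when A has converged to the inverse of
    the initial sensing matrix. With b = 1 this is the classic
    Newton-Schulz iteration for a matrix inverse."""
    I = np.eye(S.shape[0])
    return A + b * A @ (I - S @ A)

# Toy check with a fixed S. (In the real test, S is re-measured on
# each iteration after applying the rotated output matrix O = O_ideal @ A.)
S = np.array([[1.00, 0.10, 0.00],
              [0.05, 1.00, 0.20],
              [0.00, -0.10, 1.00]])   # hypothetical measured sensing matrix
A = np.eye(3)
for _ in range(20):
    A = update_guess(S, A)
print(np.allclose(S @ A, np.eye(3)))   # True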
At each iteration, all three matrices A, O and S will be stored in a text file for analysis later.
The code has error-catching capability and will restore the optic to the status quo if an error occurs or the watchdogs trip due to an earthquake.
I spent some time investigating the PRM this evening, trying out some of the stuff we discussed in the meeting.
Basically, my finding tonight was that I could not improve things (make the pringle-mode actuation witnessed by the Oplev QPD smaller) by perturbing the butterfly actuation with +/- 0.05%, 0.5%, and 1% of PIT (I didn't try YAW, or other values of PIT, as none of these seemed to do any good). It seems highly unlikely that the existing coil gains (these come after the output matrix) and the actual coil/magnet pairs are so perfectly tuned, so there must be something wrong with my method. I'll try more combos tomorrow. Separately, I verified that the naive PIT (YAW) moves the optic mainly (i.e., to the eye) in PIT (YAW), as judged by the REFL spot on the camera and the readback of the Oplev QPD.
For this work, I made a few changes to filter banks:
I noticed that the filters/switch states/gains for LOCKIN1 and LOCKIN2 are not consistent within either the PRM or BS suspension, or across suspensions. Several filter INs/OUTs were also disabled - something for the SUSdiag team to note: whenever this is scripted, the script should check that the signal is indeed making it end-to-end.
The coil balancing attempt failed. The off-diagonal values in the measured sensing matrices either remained the same or increased.
The attempt in the morning was too slow. By the time we arrived, it had only reached iteration 7 and was still nowhere near the optimum sensing matrix. We still needed to see if the optimum would eventually be reached given more iterations.
<Radhika came to shadow us and learn about the 40m>
So we worked a bit on speeding up the data loading process and then ran the code again, which was now running much faster. Still, within 1 hr or so, we saw it had only reached iteration 7 with no sign of the sensing matrix getting any better.
<Paco left for vaccination>
To determine if the method would work in principle, I decided to stop the current run and start a 0.5 Hz bandwidth run (2 s segments of the 8 s duration data, so about 7 averages with 50% overlap and the Welch method). This completed 20 iterations before Gautam came. But it was clear by now that the method is not converging to a better solution. We need to find a bug in the implementation of the algorithm mentioned in the last post, or find a better algorithm.
Attachment 1 is the plot of how the sensing matrix's distance from the identity matrix increased over iterations in the last run.
Attachment 2 is the plot for different off-diagonal terms in the sensing matrix. It is clear that POS->PIT,YAW coupling is not being measured properly as it remains constant.
Attachment 3: Gautam told us that there is some naming error in nds, and MC_TRANS_PIT/YAW can instead be read through the C1:IOO-MC_TRANS_PIT_ERR and C1:IOO-MC_TRANS_YAW_ERR channels. To test if they indeed point to the same values, we excited the YAW degree of freedom through LOCKIN1 and checked that the peaks are visible in these channels. This was also done to give Radhika an opportunity to do something I could confidently mentor about, and to experience using diaggui.
After fixing a few things we felt were wrong in our implementation of the algorithm, we ran the coil balancing for 12 iterations with just 11 s per excitation, still taking the CSD with 0.1 Hz bandwidth. This time we saw the distance of the sensing matrix from identity going down.
In these results, can you also include the new matrix and what the relative imbalances were?
A longer measurement is set to trigger at 5:00 tomorrow on April 2nd, 2021. This measurement will run for 35 iterations with an excitation duration of 120s and bandwidth for CSD measurement set to 0.1 Hz. The script is set to trigger in a tmux session named 'cB' on pianosa.
We could not find problems with any individual piece of the REFL55 electronics chain, from photodiode to ADC. Nevertheless, the PRMI fringes witnessed by REFL55 are ~10x higher than ~two weeks ago, when the PRMI could be repeatably and reliably locked using REFL55 signals (ETMs misaligned).
Discussion and next steps:
Q: Koji asked me what is the problem with this apparent increased optical gain - can't we just compensate by decreasing the whitening gain?
A: I am unable to transition control of the PRMI (no ETMs) from 3f to 1f, even after reducing the whitening gain on the REFL55 channels to prevent the saturation. So I think we need to get to the bottom of whatever the problem is here.
Q: Why do we need to transfer the control of the vertex to the 1f signals at all?
A: I haven't got a plot in the elog, but from when I had the PRFPMI locked last year, the DARM noise between 100 Hz-1 kHz had high coherence with the MICH control signal. I tried some feedforward to try and cancel it but never got anywhere. It isn't a quantitative statement, but the 1f signals are expected to be cleaner?
Koji pointed out that the MICH signal is visible in the REFL55 channels even when the PRM is misaligned, so I'm gonna look back at the trend data to see if I can identify when this apparent increase in the signal levels occurred and if I can identify some event in the lab that caused it. We also discussed using the ratio of MICH signals in REFL and AS to better estimate the losses in the REFL path - the Faraday losses in particular are a total unknown, but in the AS path, there is less uncertainty since we know the SRM transmission quite precisely, and I guess the 6 output steering mirrors can be assumed to be R=99%.
The last run gave similar results to the quick run we did earlier. The code has been unable to strike out couplings with POS. We found the bug causing this: the sampling rate of the MC_F channel is different from that of the test-point channels used for PIT and YAW. Even though we were aware of it, we made an error in handling it while calculating the CSD. Due to this, the CSD calculation with POS data was performed by the code with zero padding, which made it think that no PIT/YAW <-> POS coupling exists. Hence our code was only able to fix the PIT <-> YAW couplings.
We'll need to do another run with this bug fixed. I'll update this post with details of the new measurement.
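For reference, a minimal sketch of the intended fix, assuming scipy (whose signal.csd silently zero-pads the shorter of its two inputs, consistent with the failure described above); the channel names and rates here are illustrative:

import numpy as np
from scipy import signal

def csd_matched(x, fs_x, y, fs_y, bw=0.1):
    """CSD of two channels recorded at different sampling rates
    (e.g. MC_F vs. the test-point channels). Decimate the faster
    channel down to the slower rate (with anti-aliasing) before
    forming the CSD, instead of letting the shorter array be
    zero-padded. Assumes the rate ratio is an integer."""
    fs = min(fs_x, fs_y)
    if fs_x > fs:
        x = signal.decimate(x, int(round(fs_x / fs)))
    if fs_y > fs:
        y = signal.decimate(y, int(round(fs_y / fs)))
    nperseg = int(round(fs / bw))   # bw = desired bin width [Hz]
    return signal.csd(x, y, fs=fs, nperseg=nperseg)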
How should I try to understand why PIT and YAW are so different?
I wanted to put my optomechanical instability hypothesis to the test. So I decided to cut the input power to the IMC by ~half and try locking the PRFPMI. However, this did not improve the stability of the buildup in the arm cavities, while the control was solely on the ALS error signal.
Basically, with some tweaks to loop gains, it worked - see Attachment #1. Note that the lower right axis shows the IMC transmission and is ~7500 cts, vs the nominal ~15,000 cts.
Cutting the input power did not have the effect I hoped it would. Basically, I was hoping to zero the optical CARM offset while the IFO was entirely under ALS control, and have the arm transmission be stable (or at least, stay in the linear regime of REFL11). However, the observation was that the IFO did the usual "buzzing" in and out of the linear regime. Right now, this is not at all a problem - once the IR error signal is blended in, and DC control authority is transferred to that signal, the lock acquisition can proceed just fine. And I guess it is cool that we can lock the IFO at ~half the input power, something to keep in mind when we have the remote controlled waveplate, maybe we always want to lock at the lowest power possible such that optomechanical transients are not a problem.
I also don't think this test directly disputes my claim that the residual CARM noise when the arm cavities are under purely ALS control is smaller than the CARM linewidth.
What does this mean for my hypothesis? I still think it is valid, maybe the power has to be cut even further for the optomechanics to not be a problem. In Finesse (see Attachment #2), with 0.3 W input power to the back of the PRM, and with best guesses for the 40m optical losses in the PRC and arms, I still see that considerable phase can be eaten up due to the optomechanical resonance around ~100 Hz, which is where the digital CARM loop UGF is. So I guess it isn't entirely unreasonable that the instability didn't go away?
After this work, I undid all the changes I made for the low power lock test. I confirmed that IMC locking, POX/POY locking, and the dither alignment systems all function as expected after I reverted the system.
Came in a little bit after 8 and found the MC unlocked and struggling to lock for the past 3 hours. Looking at the SUS overview, both MC1 and ITMX Watchdogs had tripped so we damped the suspensions and brought them back to a good state. The autolocker was still not able to catch lock, so we cleared the WFS filter history to remove large angular offsets in MC1 and after this the MC caught its lock again.
Looks like two EQs came in at around 4:45 AM (Pacific), as suggested by a couple of spikes in the seismic rainbow, and this.
Since it seems like the entire electronics chain has no obvious failure, I decided to compensate for the apparent increased optical gain by turning the flat whitening gain down from +18dB to 0dB. Then, after some fiddling around with alignment, settings etc, I was able to lock the PRMI once again, with the ETMs misaligned, using REFL55_I to sense PRCL, and REFL55_Q to sense MICH. Some sensing matrices attached. Some notes:
So there is clearly something funky with the nominal MICH actuation scheme (MICH suspension, PRM suspension or both), which we should get to the bottom of before trying any low noise locking. I think using the ITMs as the MICH actuator in the full lock will not be a good low noise strategy, as we would then be "polluting" all our suspended optics with our control loops, which seems highly suboptimal for technical noise sources like coil driver noise etc.
Yesterday Chris and I completed setup of the Supermicro machine that will serve as a dedicated host for developing and testing RTCDS sim models. It is currently sitting in the stack of machines in the FE test stand, though it should eventually be moved into a permanent rack.
It turns out the machine cannot run 10 user models, only 4. Hyperthreading was enabled in the BIOS settings, which created the illusion of there being 12 rather than 6 physical cores. Between Chris and Ian's sim models, we already have a fully-loaded machine. There are several more of these spare 6-core machines that could be set up to run additional models. But in the long term, and especially in Ian's case where the IFO sim models will all need to communicate with one another (this is a self-contained cymac, not a distributed FE system), we may need to buy a larger machine with 16 or 32 cores.
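A quick way to see the logical-vs-physical distinction from the OS (a sketch, assuming psutil is available):

import psutil

print(psutil.cpu_count(logical=True))    # 12 with hyperthreading enabled
print(psutil.cpu_count(logical=False))   # 6 physical cores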
IPMI was set up for the c1sim cymac. I assigned the IPMI interface a static IP address on the Martian network (192.168.113.45) and registered it in the usual way with the domain name server on chiara. After updating the BIOS settings and rebooting, I was able to remotely power off and back on the machine following these instructions.
Yesterday I installed all the available ADC/DAC/BIO modules and adapter boards into the new I/O chassis (c1bhd, c1sus2). We are still missing three ADC adapter boards and six 18-bit DACs. A thorough search of the FE cabinet turned up several 16-bit DACs, but only one adapter board. Since one 16-bit DAC is required anyway for c1sus2, I installed the one complete set in that chassis.
Below is the current state of each chassis. Missing components are highlighted in yellow. We cannot proceed to loopback testing until at least some of the missing hardware is in hand.
To enable remote access to the machines on the test stand subnet, one machine must function as a gateway server. Initially, I tried to set this up using the second network interface of the chiara clone. However, having two active interfaces caused problems for the DHCP and FTS servers and broke the diskless FE booting. Debugging this would have required making changes to the network configuration that would have to be remembered and reverted, were the chiara disk ever to be used in the original machine.
So instead, I simply grabbed another of the (unused) 1U Supermicro servers from the 1Y1 rack and set it up on the subnet as a standalone gateway server. The machine is named c1teststand. Its first network interface is connected to the general computing network (ligo.caltech.edu) and the second to the test-stand subnet. It has no connection to the Martian subnet. I installed Debian 10.9 anticipating that, when the machine is no longer needed in the test stand, it can be converted into another docker-cymac to run additional sim models.
Currently, the outside-facing IP address is assigned via DHCP and so periodically changes. I've asked Larry to assign it a static IP on the ligo.caltech.edu domain, so that it can be accessed analogously to nodus.
We got some dumbbells from Re-Source Manufacturing (see attached). I picked 3 at random and measured their dimensions:
1. 0.0760" in diameter, 0.0860" in length
2. 0.0760" in diameter, 0.0860" in length
3. 0.0760" in diameter, 0.0865" in length
These are in accordance with the schematics.
As mentioned in the last post, we had earlier made an error in ensuring that all time-series arrays go into the CSD calculation with the same sampling rate. When we fixed that, our recursive method just blew up in all the efforts since then.
We suspect a major issue is that our measured sensing matrix (the cross-coupling matrix between different degrees of freedom on excitation) has significant imaginary parts. We discard the imaginary values and only use the real parts for the iterative method, but we think this is not the solution.
Here we present the cross-spectral densities of the different channels representing the three sensed DOFs (normalized by the ASD of no-excitation data for each involved component), and the sensing matrix (TF estimate) calculated by normalizing the first set of cross-spectral densities column-wise by the diagonal values. These are measured with the existing ideal output matrix but with the new input matrix, to get an idea of how these elements look when we use them.
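A minimal sketch of that normalization (the array name is mine; the real-part step reflects the caveat above about discarding the imaginary component):

import numpy as np

def sensing_matrix(csd_at_exc):
    """Sensing-matrix (TF) estimate from the matrix of CSD values at
    the excitation frequency: csd_at_exc[i, j] is the CSD between
    sensed DOF i and excited DOF j. Dividing each column by its
    diagonal element puts 1's on the diagonal; taking the real part
    discards the (worryingly large) imaginary component."""
    S = csd_at_exc / np.diag(csd_at_exc)[np.newaxis, :]
    return np.real(S)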
Note that we used only 10 seconds of data in this run, with a bin width of 0.25 Hz. When we used a bin width of 0.1 Hz, we found that the peaks were broad and highest at 13.1 Hz instead of 13 Hz, which is the excitation frequency used in these measurements.
Basically I went around all the chambers and all the DB25 flanges to check the in-vac cable configurations. I also took more time to check the coil Rs and Ls.
Exceptions are the TTs. To avoid unexpected misalignment of the TTs, I didn't try to disconnect the TT cables from the flanges.
Upon disconnection of the SOS cables, the following steps were taken to avoid a large impact to the SOSs:
After the measurement, the IMC was locked and aligned. The two arms were locked and aligned with ASS. And the PRM alignment (when "misalign" was disengaged) was checked with the REFL CCD.
So I believe the SOSs are functioning as before, but if you find anything, please let me know.
Adding the chamfer around the edge of the optic ring did not significantly change the center of mass relative to the plane defined by the suspension wires.
The CoM was .0003" away from the plane. Adding the chamfer moved it closer by .0001". See the attached photo.
I've also attached the list of the Moments of Inertia of the SOS Assembly.
To test if our method works at all, we went for the simpler case of just uncoupling PIT and YAW. This is also because these two degrees of freedom are sensed by the same device (the MC Trans WFS).
We saw a successful decrease in cross-coupling between PIT and YAW over the first 50 iterations that we tried. Here are some results:
I spent an hour today evening checking out the remote waveplate operation. Basic remote operation was established 👍 . To run a test on the main beam (or any beam for that matter), we need to lay out some long cabling, and install the controller in a rack. I will work with Jordan in the coming days to do these things. Apart from the hardware, some EPICS channels will need to be added to the c1ioo.db file, and a python script will need to be set up as a service to allow remote operation.
Satisfied that the unit works basically as expected, I decided to stop for today. My thinking is that we can have the ESP300 installed in 1X1 or 1X2 (depending on where space is more readily available). I have uploaded a cartoon here so people can comment if they like/dislike my plan.
Once everything is installed, we can run some tests to see if the rotary motion disturbs the PSL in any meaningful way. Photos here.
Today, we finally crossed the last hurdle and got a successful converging coil balancing run.
5x 16bit ADC adapter boards (D0902006) assembled.
We ran this method again, but with the 'b' parameter as a matrix instead. This provides more gain on some off-diagonal terms than others. This gave us better convergence, with the code reaching the tolerance level provided (0.01 distance of the S matrix from identity) within 16 iterations (~17 mins).
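One plausible reading of the matrix-b variant, sketched below as an elementwise (Hadamard) gain on the update from the earlier post; the actual weighting we used may differ, and the numbers are placeholders:

import numpy as np

B = np.array([[0.5, 1.0, 1.0],
              [1.0, 0.5, 2.0],
              [1.0, 2.0, 0.5]])   # hypothetical: extra gain on the stubborn terms

def update_guess_matrix_gain(S, A, B):
    I = np.eye(S.shape[0])
    return A + B * (A @ (I - S @ A))   # '*' is elementwise here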
Attachment 1 again shows how the off-diagonal terms go down and how the overall distance of the sensing matrix from identity decreases. This is the 'cross-coupling budget' of the coils as the iterations move forward.
The convergence is great.
Next we want to get the F2A filters made, since most of the IMC control happens at f < 3 Hz. Once you have the SUS state-space model, you should be able to see how this can be done using only the free-swinging eigenfrequencies. Then you should get the closed-loop model, including the F2A filters and the damping filters, to see what the closed-loop behavior is like.
Today I assembled the skeleton of 6 towers, without clamps and sensor assembly (attachment 1).
Some of the side plates have a weird hole that doesn't fit any of the suspension blocks (attachment 2). I didn't notice this when I counted the parts, and now there are exactly enough side plates to assemble 7 towers.
Also found that one of the stiffener plates has broken threading.
We will need more parts to go beyond the necessary 7 SOSs. I will do the recounting later.
Things to do next:
1. Find the capped spring plungers and send them to C&B.
2. Assemble the clamps onto the suspension blocks.
3. Push some Viton tips into the vented screws we got to make safety stops.
4. More C&B: magnets, dumbbells, dowel pins, OSEMs.
5. Push clean dowel pins into the last suspension block.
6. Assemble 7th Tower.
7. Assemble safety stops and clamps.
8. Glue magnets to dumbbells.
I installed three of the 16-bit ADC adapter boards assembled by Koji. Now, the only missing hardware is the 18-bit DACs (quantities below). As I mentioned this week, there are 2-3 16-bit DACs available in the FE cabinet. They could be used if more 16-bit adapter boards could be procured.
I think I misspoke about the balancing channels before. The ~20 Hz balancing could go into either the COIL banks or the SUS output matrix.
I believe it's conceptually cleaner to do this as gains in the output matrix, and leave the coil gains as +/- 1; i.e., we would only use the coil gains to compensate for coil/magnet actuation strength.
Then the high-frequency balance goes into the output matrix. The F2A and A2L decoupling filters would then be generated with a high-frequency gain of 1.
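A toy sketch of this bookkeeping, with all numbers hypothetical:

import numpy as np

# Columns: POS, PIT, YAW; rows: UL, UR, LL, LR coils.
out_mat_ideal = np.array([[1,  1,  1],
                          [1,  1, -1],
                          [1, -1,  1],
                          [1, -1, -1]])

hf_balance = np.diag([1.00, 0.97, 1.02, 0.99])   # ~20 Hz balancing gains
coil_gains = np.array([1, -1, 1, -1])            # +/-1 only, applied in the COIL banks

out_mat = hf_balance @ out_mat_ideal             # balanced output matrix
# The F2A / A2L decoupling filters then multiply these static elements
# and are generated with unity gain at high frequency.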
Yesterday I resurrected the 40m's LSC simPlant model, c1lsp. It is running on c1sim, a virtual, self-contained cymac that Chris and I set up for developing sim models (see 15997). I think the next step towards an integrated IFO model is incorporating the suspension plants. I am going to hand development largely over to Ian at this point, with continued support from me and Chris.
This model dates back to around 2012 and appears to have last been used in ~2015. According to the old CDS documentation:
Here XEP, YEP, and VSP are respectively the x-end, y-end, and vertex suspension plant models. I haven't found any evidence that these were ever fully implemented for the entire IFO. However, it looks like SUS plants were later implemented for a single arm cavity, at least, using two models named c1sup and c1spx (appear in more recent CDS documentation). These suspension plants could likely be updated and then copied for the other suspended optics.
To represent the optical transfer functions, the model loads a set of SOS filter coefficients generated by an Optickle model of the interferometer. The filter-generating code and instructions on how to use it are located here. In particular, it contains a Matlab script named opt40m.m which defines the interferometer. It should be updated to match the parameters in the latest 40m Finesse model, C1_w_BHD.kat. The calibrations from Watts to sensor voltages will also need to be checked and likely updated.
For future reference, below are the steps followed to port this model to the virtual cymac.
$ cd ~/docker-cymac
$ ./start_cymac debug
The optional debug flag will print the full set of compilation messages to the terminal. If compilation fails, search the traceback for lines containing "ERROR" to determine what is causing the failure.
Accessing MEDM screens. Once the model is running, a button should be added to the sitemap screen (located at c1sim:/home/controls/docker-cymac/userapps/medm/sitemap.adl) to access one or more screens specific to the newly added model.
Custom-made screens should be added to c1sim:/home/controls/docker-cymac/userapps/medm/x1lsp (where the final subdirectory is the name of the particular model).
The set of available auto-generated screens for the model can be viewed by entering the virtual environment:
$ cd ~/docker-cymac
$ ./login_cymac #drops into virtual shell
# cd /opt/rtcds/tst/x1/medm/x1lsp #last subdirectory is model name
# ls -l *.adl
# exit #return to host shell
The sitemap screen and any subscreens can link to the auto-generated screens in the usual way (by pointing to their virtual /opt/rtcds path). Currently, for the virtual path resolution to work, an environment script has to be run prior to launching sitemap, which sets the location of a virtual MEDM server (this will be auto-scripted in the future):
$ cd ~/docker-cymac
$ eval $(./env_cymac)
One important auto-generated screen that should be linked for every model is the CDS runtime diagnostics screen, which indicates the success/fail state of the model and all its dependencies. T1100625 details the meaning of all the various indicator lights.
I'm not sure I understand what F2A is? I couldn't find a description of this filter anywhere and don't remember if you have already explained it. Can you describe what needs to be done again, please? We will keep the SUS state-space model and the seismic transfer function calculations ready meanwhile.
Today, I screwed the plungers on the sensor plates and installed them on the Towers. I also installed the wire clamps on the suspension blocks (attachment).
I ran into problems in 2 separate suspension blocks: one had a dowel pin that was slightly too fat for the wire clamp. In the other, the tapped holes were too short, so the 4-40 screws couldn't be screwed in all the way.