Behind the X arm tube
Attachment 5: The RF delay line was rehoused in 1X3B. (KA)
As marked up in the photos.
Attachment 5: The electronics units removed. Cleaning is halfway done. (KA)
Attachment 6: Moved most of the units to 1X3B rack ELOG 17125 (KA)
Salvage these (and anything else useful). Wrap and double-pack them nicely, attach labels, store them, and record the location. Tell JC the location.
We received the remaining 3 front-ends from LHO today. They each have a timing card and an OSS host adapter card installed. We also received 3 dolphin DX cards. As with the previous packages from LLO, each box contains a rack mounting kit for the supermicro machine.
Took finer measurements of the x-arm aux laser actuator transfer function (10 kHz - 1 MHz, 1024 pts/decade) using the Moku.
I took finer measurements using the Moku by splitting the sweep into 4 sections (10-32 kHz (32 kHz ~ 10^4.5 Hz), 32-100 kHz, 100-320 kHz, 320-1000 kHz) and then stitching them together. I took 25 measurements of each (+ a bonus in case my counting was off), plotted them in the attached notebook, and calculated/plotted the standard deviation of the magnitude (normalized for DC offset). I could not upload the PDFs to the ELOG directly, but they are in the .zip file.
Next steps are to do the same stdev calculation for phase, which shouldn't take long, and to use the vectfit of this better data to create a PZT inversion filter.
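For reference, a minimal sketch of that averaging/stdev step (the file pattern, column order, and '%' comment character are assumptions about the Moku CSV export):

import glob
import numpy as np

# Load all repeated sweeps for one section (hypothetical file pattern).
files = sorted(glob.glob("moku_section1_run*.csv"))
runs = [np.loadtxt(f, delimiter=",", comments="%") for f in files]

freq = runs[0][:, 0]                               # common frequency axis
mag = np.array([r[:, 1] for r in runs])            # magnitude, one row per run
ph = np.array([np.unwrap(np.deg2rad(r[:, 2])) for r in runs])  # unwrapped phase [rad]

mag_norm = mag / mag[:, :1]                        # normalize out run-to-run DC offset
mag_mean, mag_std = mag_norm.mean(axis=0), mag_norm.std(axis=0)
ph_mean, ph_std = ph.mean(axis=0), ph.std(axis=0)  # same statistic, now for phase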
We got the 3 front-ends from LLO today. The contents of each box are:
The machine shop looked a mess this morning, so I cleaned it up. All power tools are now placed in the drawers in the machine shop. Let me know if there are any questions of where anything here is placed.
@tega This looks great, thank you for putting this together. The rack drawing in particular is great. Two notes:
I think most of this work can be done with very little downtime.
Here is a proposal for what we would like to do in terms of reshuffling a few pieces of rack-mounted equipment for the CDS upgrade.
I believe all of this can be done in one go followed by CDS validation. Please comment so we can improve the plan. Should we move FB1 to 1X7 and remove old FB & JetStor during this work?
Attachment 1: Reshuffling proposal
Attachment 2: Front of 1X7 Rack
Attachment 3: Rear of 1X7 Rack
Attachment 4: Front of 1X6 Rack
Attachment 5: Rear of 1X6 Rack
Attachment 6: Martian switch connections
Here is an update on how the resonance fitting is going - I've been modifying parameters by hand and watching the effect on the fit. Still a work in progress. The magnitude fits pretty well; the phase is very confusing. I attempted vectfit again, but I can't constrain the number of poles and zeros with the code I have, and I still get nonsensical output with 20 poles and 20 zeros. Here is a plot with my fit so far, and a zip file with my Moku data of the resonances and the code I'm using to plot.
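For what it's worth, here is a sketch of one way to force a fixed pole/zero count: fit a 2-pole/2-zero model directly by least squares instead of letting vectfit pick the order. The file name and column layout are hypothetical:

import numpy as np
from scipy.optimize import least_squares

# Hypothetical file: freq [Hz], magnitude [V/V], phase [deg] columns.
f, mag, ph = np.loadtxt("pzt_resonance.csv", delimiter=",", unpack=True)
H_meas = mag * np.exp(1j * np.deg2rad(ph))
s = 2j * np.pi * f

def model(p):
    # 2 zeros / 2 poles as complex-conjugate pairs: gain, zero freq/Q, pole freq/Q
    k, fz, qz, fp, qp = p
    wz, wp = 2 * np.pi * fz, 2 * np.pi * fp
    return k * (s**2 + wz / qz * s + wz**2) / (s**2 + wp / qp * s + wp**2)

def resid(p):
    r = model(p) - H_meas
    return np.concatenate([r.real, r.imag])  # fit real and imaginary parts jointly

p0 = [1.0, 2.5e5, 10.0, 2.0e5, 10.0]  # rough guess near the 200-300 kHz features
fit = least_squares(resid, p0)
print(fit.x)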
I have now added code/data to my github repository. (it's the little victories)
I tried to lock the Y/X arms to take some noise budget data. However, we noticed that TRX/Y were oscillating coherently together (by tens of percent), suggesting some input optics, most likely PR2/PR3, were swinging. There was no way I could do noise budgeting in this situation.
I set out to debug these optics. First, I noticed that the side motion of PR2 was very weakly damped.
The gain of the side damping loop (C1:SUS-PR2_SUSSIDE_GAIN) was increased from 10 to 150, which seems to have fixed the issue. Attachment 1 shows the current step response of the PR2 DOFs. The residual Qs look good, but there is still some cross-coupling, especially when kicking POS. Need to do some balancing there.
PR3 fixing was less successful in the beginning. I increased the following gains:
C1:SUS-PR3_SUSPOS_GAIN: 0.5 -> 30
C1:SUS-PR3_SUSPIT_GAIN: 3 -> 30
C1:SUS-PR3_SUSYAW_GAIN: 1 -> 30
C1:SUS-PR3_SUSSIDE_GAIN: 10 -> 50
But the residual Q was still > 10. Then I checked the input matrix and noticed that UL->PIT is -0.18 while UR->PIT is 0.39. I changed UL->PIT (C1:SUS-PR3_INMATRIX_2_1) to +0.18. Now the Q is 7. I continued optimizing the gains.
Was able to increase C1:SUS-PR3_SUSSIDE_GAIN: 50 -> 100.
Attachment 2 shows the step response of PR3. The change to the input matrix entry was ad hoc; it would probably be good to run a systematic tuning. I have to leave now, and the IFO is in a very misaligned state. PR3/2 should be moved to bring it back.
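For the record, a minimal sketch of how the residual Q can be pulled out of a kicked step response by fitting a damped sinusoid (the exported file name is a placeholder):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical export: time [s] and detrended sensor signal after the kick.
t, y = np.loadtxt("pr3_pit_step.txt", unpack=True)

def ringdown(t, a, tau, f0, phi):
    # exponentially damped sinusoid; tau is the amplitude decay time
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t + phi)

p0 = [np.ptp(y) / 2, 10.0, 0.8, 0.0]          # guess: ~0.8 Hz pendulum mode
popt, _ = curve_fit(ringdown, t, y, p0=p0)
a, tau, f0, phi = popt
Q = np.pi * f0 * tau                           # Q = pi * f0 * tau for amplitude decay
print(f"f0 = {f0:.2f} Hz, Q = {Q:.1f}")        # we aim for a residual Q of ~5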
[Paco, Chris Stoughton, Leo -- remote]
This morning Chris came over to the 40m lab to help us get the RFSoC board going. After checking out our setup, we decided to do a very basic series of checks to see if we can at least get the ADCs to run coherently (independent of the DACs). For this I borrowed the Marconi 2023B from inside the lab and set its output to 1.137 GHz, 0 dBm. Then, I plugged it into ADC1 and just ran the usual spectrum analyzer notebook on the rfsoc jupyter lab server. Attachments #1-2 show the screen-captured PSDs for ADCs 0 and 1, respectively, with the 1137 MHz peaks present as expected.
The fast ADCs are indeed reading our input signals.
Before this simple test, we actually reached out to Leo at Fermilab for some remote assistance on building up our minimal working firmware. For this, Chris started a new Vivado project on his laptop and realized the rfsoc 2x2 board files are not included in it by default. To add them, we had to go into Tools > Settings and add the 2020.1 Vivado Xilinx board store repository path for the rfsoc2x2 v1.1 files. After a bit of struggling, uninstalling, reinstalling, and restarting Vivado, we managed to get into the actual overlay design. There, with Leo's assistance, we dropped in the Zynq MPSoC core (this includes the main interface drivers for the rfsoc 2x2 board). We then dropped in an rf converter (rfdc) IP block, which we customized to use the right PLL settings: in the System Clocking tab, the Reference Clock was changed to 409.6 MHz (the default was 122.88 MHz). This was not straightforward, as the default sampling rate of 2.00 GSPS is not an integer multiple of the new reference, so we also had to update it to 4.096 GSPS. Then we saw that the maximum available Clock Out option was 256 MHz (we need >= 409.6 MHz), so Leo suggested we drop in a Clocking Wizard block to provide a 512 MHz clock input for the rfdc. The rfdc settings are captured in Attachment #3. The Clocking Wizard was added and configured, on its Output Clocks tab, to provide a Requested Output Freq of 512 MHz; its final settings are captured in Attachment #4. Finally, we connected the blocks as shown in Attachment #5.
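For the record, the integer relations that make this clock plan hang together (just arithmetic from the numbers above, not output from the Vivado tool):

f_ref = 409.6e6    # rf converter reference clock [Hz]
f_s   = 4.096e9    # ADC sampling rate [Hz]
f_out = 512.0e6    # Clocking Wizard output driving the rfdc [Hz]
print(f_s / f_ref)    # 10.0 -> sampling rate is an integer multiple of the reference
print(f_s / f_out)    # 8.0  -> rfdc clock divides evenly from the sampling rate
print(f_out >= f_ref) # True -> satisfies the >= 409.6 MHz Clock Out requirement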
We will continue with this design tomorrow.
Some more measurements of the PZT resonances (now zoomed in!). I'm adjusting parameters in our model to try to fit it by hand; it definitely still needs improvement, but not bad for a 2-pole, 2-zero fit for now. I don't have a way to get coherence data from the Moku yet, but I've got a variety of measurements and will hopefully use the standard deviation to get a good error estimate...
This morning, while attempting to align the IFO to continue with noise-budgeting, we noted the XARM lock was not stable and showed glitches in C1:LSC-TRX_OUT (arm cavity transmission). Inspecting the SUS screens, we found the ULSEN rms ~6 times higher than that of the other coils, so we opened an ndscope with the four face OSEM signals and overlaid the XARM transmission. We immediately noticed that the ULSEN input is noisy, jumping around randomly, with the bigger glitches correlating with the arm cavity transmission glitches. This can be seen in Attachment #1.
We'll do a full signal investigation on ITMX SUS electronics to try and narrow down the issue, but it seems the glitches come and go... Is this from the gold satamp box? ...
We measured the TF of the X-arm laser PZT using the Moku so we can begin fitting to that data and hopefully creating a digital filter to cancel out PZT resonances.
We calculated the DFD calibration (V/Hz) using:
Vrf = 0.158 V (-6 dBm), Km = 1 (K_phi = Km*Vrf), cable length = 45 m, tau = cable length/(0.67*3*10^8 m/s) ~ 220 ns.
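In code, the same arithmetic (using the small-angle slope of a delay-line discriminator, V ~ K_phi * 2*pi*tau * df):

import numpy as np

Vrf = 0.158                      # V (peak) for -6 dBm into 50 ohm
Km = 1.0                         # mixer constant
K_phi = Km * Vrf                 # phase-to-voltage gain [V/rad]
tau = 45.0 / (0.67 * 3e8)        # cable delay [s], ~224 ns
cal = K_phi * 2 * np.pi * tau    # small-signal slope dV/df [V/Hz]
print(f"tau = {tau*1e9:.0f} ns, calibration = {cal:.2e} V/Hz")   # ~2.2e-7 V/Hz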
We've taken some preliminary data and can see the resonances around 200-300 kHz.
Next steps are taking more data around the resonances specifically, calibrating the data using the DFD calibration we calculated, and adjusting parameters in our model so we can model the TF.
[JC, Tega, Chris]
After moving the test stand front-ends, chiara (name server), and fb1 (boot server) to the new rack behind 1X7, we powered everything up and checked that we can reach c1teststand via pianosa and that the front-ends are still able to boot from fb1. After confirming these tests, we decided to start the software upgrade to Debian 10. We installed buster on fb1 and are now in the process of setting up diskless boot. I have been looking around for CDS instructions on how to do this and found the CdsFrontEndDebian10 page, which contains most of the info we require. The page suggests that it may be cleaner to start the Debian 10 installation on a front-end that is connected to an I/O chassis with at least 1 ADC and 1 DAC card, and then move the installation disk to the boot server and continue from there. So I moved the disk from fb1 to one of the front-ends, but had trouble getting it to boot. I decided instead to do a clean install on another disk on the c1lsc front-end, which has a host adapter card that can be connected to the c1bhd I/O chassis. We can then mount this disk on fb1 and use it to set up the diskless boot OS.
A new tool box has been placed at the Y-end! Each drawer has a label, so PLEASE put the tools back in their correct locations. In addition, each tool has an assigned tool box, so PLEASE RETURN all tools to their designated tool box. The tools can be distinguished by writing or heat shrink that corresponds to the color of the tool chest or location. Photo #2 is an example of how the tools have been marked.
Each toolbox from now on will contain a drawer for each of the following: Measurements, Allen Keys, Pliers and Cutters, Screwdrivers, Zipties and Tapes, Allen Ball Drivers, Crescent Wrenches, Clamps, and Torque Wrenches/Ratchets.
Moved the rack to the location of the test stand just behind 1X7 and plan to remove the other two small test stand racks to create some space there. We then mounted the c1bhd I/O chassis and 4 front-end machines on the test stand (see attachment 1).
Installed the dolphin IX cards on all 4 front-end machines: c1bhd, c1ioo, c1sus, c1lsc. I also removed the dolphin DX card that was previously installed on c1bhd.
Found a single OneStop host card with a mini PCI slot mounting plate in a storage box (see attachment 2). Since this only fits into the dual PCI riser card slot on c1bhd, I swapped out the full-length PCI slot OneStop host card on c1bhd and installed it on c1lsc, (see attachments 3 & 4).
For damping and OL loops, we typically don't measure the TF like this, because it takes forever and we don't need that detailed info for anything. Just use step responses in the way we discussed at the meeting 2 weeks ago; there are multiple elog entries from me and others illustrating this. The measurement time is then only ~30 sec per optic, and you get the cross-coupling for free. No need for test-point channels and overloading; just use the existing DQ channels and read back the response from the frames after the excitations are complete.
I made measurements of the old optics' OLTFs today. I have reduced the file sizes of the plots and data now. Interestingly, we are allowed to read 9 channels simultaneously from the c1mcs or c1sus models, even from both together. The situation with c1su2 is less clear: I was earlier able to measure 6 channels at once from c1su2, but now I can't read more than 1 channel at a time. This suggests that the limit is set by how heavily a single model is loaded, not by how many channels we read in total. So if we split c1su2 into two models, we might be able to read more optics simultaneously, saving time and letting us measure for longer.
Attached are the results for all the core optics. Inferences will be made later in the week.
Note: Some measurements have very low coherence in the IN2 channels over most of the damping frequency region; these loops need to be excited harder (e.g. PIT, POS, YAW on ITMs and ETMs).
When Juan and I were working on the suspension measurement, I found that CHA didn't settle down well.
I inspected it and found that CHA's + input seemed broken and physically flaky. For Juan's measurements, I plugged the + inputs (for CH A/B) and used the - inputs instead. This seemed to work, but I wasn't sure the SR785 performed as expected in terms of noise level.
We need to inspect the inputs a bit more carefully and send it back to SRS if necessary.
How many SR785's do we have in the lab right now? Measurement instruments like the SR785 are still the heart of our lab; please be kind to them...
The setup was (at least partially) cleared.
As a first step to characterize all the local damping loops, we ran an open loop transfer function measurement for all BHD optics, taking transfer functions using band-limited (0.3 Hz to 10 Hz) Gaussian noise injected at the error points of the different degrees of freedom. Plots are in the git repo; I'll make them lighter and post them here.
We have also saved coherence of excitation at the IN1 test points of different degrees of freedom that may be later used to determine the cross-coupling in the system.
The test ran automatically using the measSUSOLTF.py script. In principle the script can run the test on all suspensions in parallel, but not in practice, because cdsutils.getdata apparently has a limit on how many real-time channels one can read simultaneously (we think it is 8). We could get around this by defining these test points as DQ channels, but that would probably upset the rtcds model as well. Maybe the thing to do is to split the c1su2 model into two models handling 3 and 4 suspensions. But we are not sure whether the limitation is due to fb or the DAQ network (in which case it will persist even if we reduce the number of testpoints per model) or due to the load on a single core of the FE machines.
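A minimal sketch of the chunked-read workaround, assuming cdsutils.getdata accepts a channel list, a duration, and an optional GPS start time:

import cdsutils

MAX_CH = 8  # empirical limit on simultaneous real-time (testpoint) reads

def getdata_chunked(channels, duration, start=None):
    """Fetch testpoint channels a few at a time to stay under the limit."""
    buffers = []
    for i in range(0, len(channels), MAX_CH):
        buffers.extend(cdsutils.getdata(channels[i:i + MAX_CH], duration, start))
    return buffers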
The data is measured and stored here. We can do periodic tests and update data here.
The overlapping plot of the calibrated error and control signals gives you a reasonably good estimate of the freerun fluctuation, particularly when the open-loop gain G is much larger or much smaller than unity.
However, when G is close to unity, both signals are affected by the servo bump, and neither represents the freerun fluctuation around that frequency.
To avoid this, the open-loop gain needs to be measured every time the noise budget is calculated. In the beginning, it is necessary to measure the open-loop gain over a large frequency range so that you can refine your model. Once you gain sufficient confidence about the shape of the open-loop gain, you can just measure it at a single frequency and adjust for the gain variation (in most cases it comes from the optical gain).
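In loop algebra: the calibrated error signal is e = x_free/(1+G), so the freerun motion is recovered as x_free = (1+G)*e once G is known. A minimal sketch, assuming the error-signal ASD and the complex OLTF are available on a common frequency axis (file names hypothetical):

import numpy as np

f, asd_err = np.loadtxt("err_asd.txt", unpack=True)      # calibrated in-loop ASD
_, G_re, G_im = np.loadtxt("oltf.txt", unpack=True)      # measured OLTF on the same axis
G = G_re + 1j * G_im

# Undo the loop suppression.  Near the UGF |1+G| can dip below 1 (servo bump),
# which is exactly where the raw error and control signals are both misleading.
asd_free = np.abs(1 + G) * asd_err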
I am saying this because I once saw a significant, project-wide error in sensitivity estimation caused by omitting this step.
TL;DR: When the laser has good lock, the OLTF moves up and the UGF moves over!
Figured out with Paco yesterday that when the laser is locked but kind of weakly (mirrors on the optical table sliiightly out of alignment, for example), we would get a UGF around 5 kHz, but when we had a very strong lock (adjusting the mirrors until the spot was brightest) we would get a UGF around 13-17 kHz. Attached are some plots of us going back and forth (you can kind of tell from the coherence/error that the one with the lower UGF is more weakly locked, too). Error on the plots is propagated using the coherence data (see Bendat and Piersol, Random Data, Table 9.6 for the formula).
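For reference, the Table 9.6 formula as we use it: the normalized random error of the FRF magnitude estimate is sqrt(1 - gamma^2)/(|gamma|*sqrt(2*n_d)), where gamma^2 is the magnitude-squared coherence and n_d the number of averages. A sketch:

import numpy as np

def frf_mag_rel_err(coh, n_avg):
    """Normalized random error of |H| (Bendat & Piersol, Random Data, Table 9.6).
    coh is the magnitude-squared coherence (gamma^2); n_avg the number of averages."""
    coh = np.asarray(coh)
    return np.sqrt(1.0 - coh) / (np.sqrt(coh) * np.sqrt(2.0 * n_avg))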
Want to take data next week to quantitatively compare optical gain to UGF!
We wrote a notebook, found at Git/40m/measurements/LSC/FPMI/NoiseBudget/FPMISensitivity.ipynb, for calculating the MICH, DARM (currently XARM), and CARM (currently YARM) sensitivities in the FPMI lock; it can be run daily.
The IN and OUT channels of each DOF are measured at a chosen GPS time and calibrated using the optical gains and actuator calibrations measured in the previous post.
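Schematically, what the notebook does for each DOF (channel names, gains, and file names below are placeholders; the real values live in the notebook):

import numpy as np
import scipy.signal as sig

fs = 16384.0                          # Hz; placeholder sampling rate
x_err = np.loadtxt("darm_in.txt")     # hypothetical exports of the IN and OUT
x_ctl = np.loadtxt("darm_out.txt")    # channels at the chosen GPS time
H_opt = 1e12                          # placeholder optical gain [cts/m]
A = 1e-9                              # placeholder actuator gain [m/cts]
                                      # (frequency dependence, e.g. 1/f^2, omitted)

f, P_err = sig.welch(x_err, fs, nperseg=int(8 * fs))
_, P_ctl = sig.welch(x_ctl, fs, nperseg=int(8 * fs))

asd_res = np.sqrt(P_err) / H_opt      # residual (in-loop) displacement [m/rtHz]
asd_fb = np.sqrt(P_ctl) * A           # displacement written back by the actuator
sensitivity = np.sqrt(asd_res**2 + asd_fb**2)   # freerun estimate away from the UGF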
Attachment shows the results.
The UGFs for MICH and DARM (currently XARM) match the previous estimates (100 Hz for MICH, 120 Hz for DARM), but the CARM UGF, previously estimated at 250 Hz, here appears to be > 1 kHz.
Indeed, one can also see that the peaks in the CARM plot don't match that well. Calculation shows that at 250 Hz the OUT channel is 6 times larger than the IN channel. The CARM calibrations should be checked.
MICH sensitivity using REFL55 at high frequencies is not much better than what was measured with AS55.
DARM sensitivity at 10Hz is a factor of a few better than the single arm lock sensitivity.
Now it is time to do the budgeting.
We want to be able to run SimPlant on the teststand, test our new controls algorithms, test watchdogs, and try any other software upgrades. Ideally, in the steady state it will run some plants with suspensions and cavities, and we will develop our measurement scripts there as well (e.g. IFOtest).
I keep getting confused about the purpose of the teststand. The view I am adopting going forward is to use it as a platform for testing the compatibility of the new hardware upgrades, rather than as an independent system that works with the old hardware.
TL;DR: Got the x-arm aux laser locked again and took more data - my transfer function fits need improvement, and my new method for finding coherence doesn't work, so I went back to the first way! See the attached file for an example of data runs with poor fits. The first one has the questionable coherence data; the second one has more sensible coherence. (Ignore the dashed lines.)
- Disk full
I updated the configuration file '/etc/logrotate.d/rsyslog' to set a file size limit of 50M on 'syslog' and 'daemon.log', since these are the two log files that capture caget & caput terminal outputs. I also reduced the number of backup files to 2.
controls@c1vac:~$ cat /etc/logrotate.d/rsyslog
/var/log/syslog
/var/log/daemon.log
{
        rotate 2
        size 50M
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}
# (stanzas for the other rsyslog-managed logs unchanged)
- Vacuum gauge
The XGS-600 can handle 6 FRGs and we currently have 5 of them connected. Yes, having a spare would be good. I'll see about placing an order for these then.
- Disk Full: Just use the usual /etc/logrotate thing
I'd rather not replace P1a. We used to have Piranis and CCs because neither covered the entire pressure range. However, the new FRGs (= Full Range Gauges) do cover from 1 atm down to 4 nTorr.
Why don't we have a couple of FRG spares, instead?
Questions to Tega: How many FRGs can our XGS-600 controller handle?
- Better suspension damping HIGH
- Investigate ITMX input matrix diagonalization (40m/16931)
- Output matrix diagonalization
* FPMI lock is not stable, only lasts a few minutes or so. MICH fringe is too fast; 5-10 fringes/sec in the evening.
- Noise budget HIGH
- Calibrate error signals (actually already done with sensing matrix measurement 40m/17069)
- Make a sensitivity curve using error and feedback signals (actuator calibration 40m/16978)
* See if optical gain and actuation efficiency makes sense. REFL55 error signal amplitude is sensitive to cable connections.
- FPMI locking
- Use CARM/DARM filters, not XARM/YARM filters
- Remove FM4 belly
- Automate lock acquisition procedure
- Initial alignment scheme
- Investigate which suspensions drift the most
- Scheme compatible with BHD alignment
* These days, we have to align almost from scratch every morning. Empirically, TT2 seems to recover LO alignment and PR2/3 seem to recover Yarm alignment (40m/17056). Xarm seems to be stable.
- Install alignment PZTs for Yarm
- Restore ALS CARM and DARM
* Green seems to be useful also for initial alignment of IR to see if arms drifted or not (40m/17056).
- Suspension output matrix diagonalization to minimize pitch-yaw coupling (current output matrix is pitch-yaw coupled 40m/16915)
- Balance ITM and ETM actuation first so that ASS loops will be understandable (40m/17014)
- Suspension calibrations
- Calibrate oplevs
- Calibrate SUSPOS/PIT/YAW/SIDE signals (40m/16898)
* We need better understanding of suspension motions. Also good for A2L noise budgeting.
- CARM servo with Common Mode Board
- Do it with single arm first
- Better suspension damping HIGH
- Investigate LO2 input matrix diagonalization (40m/16931)
- Output matrix diagonalization (almost all new suspensions 40m/17073)
* BHD fringe speed is too fast (~100 fringes/sec?), LO phase locking saturates (40m/17037).
- LO phase locking
- With better suspensions
- Measure open loop transfer function
- Try dither lock with dithering LO or AS with MICH offset (single modulation)
- Modify c1hpc/c1lsc so that it can modulate BS and do double demodulation, and try double demodulation
- Noise Budget HIGH
- Calibrate MICH error signal and AS-LO fringe
- Calibrate LO1, LO2, AS1, AS4 actuation using ITM single bounce - LO fringe
- Check BHD DCPD signal chain (DCPD making negative output when fringes are too fast; 40m/17067)
- Make a sensitivity curve using error and feedback signals
- AS-LO mode-matching
- Model what could be causing funny LO shape
- Model if having low mode-matching is bad or not
* Measured mode-matching of 56% sounds too low to explain with errors in mode-matching telescope (40m/16859, 40m/17067).
- WFS loops too fast (40m/17061)
- Noise Budget
- Investigate MC3 damping (40m/17073)
- MC2 length control path
Juan and I built an analog setup to measure some transfer functions of the MOS suspension. The setup is blocking the walkway around the PD test bench.
Excuse us for the inconvenience. It will be removed/cleared by the end of the week.
The initial plan of clearing 1X7 cannot go ahead for now, because I missed the deadline for providing a detailed enough plan before Monday's power-up of the lab. So we are going to use the new rack as initially intended and test the latest hardware and software there.
We mounted the DAQ, subnet, and dolphin IX switches; see attachment 1. The mounting ears that came with the dolphin switch did not fit and so could not be used. We looked around the lab and decided to use one of the NavePoint mounting brackets, which we found next to the teststand; see attachment 2.
We plan to move the new rack to the current location of the teststand and use the power connection from there. It is also closer to 1X7, so moving the front-ends and switches to 1X7 should be straightforward after we complete all CDS upgrade testing.
[Anchal, Paco, Tega]
c1vac was showing the /var disk to be full. We moved all gunzipped backup logs to /home/controls/logBackUp. This freed up 36% of the space on /var. Ideally we shouldn't log this much; we need a solution for reducing these log sizes or monitoring them automatically.
We were unable to open the PSL shutter due to the interlock with C1:Vac-P1a_pressure. We found that C1:Vac-P1a_pressure was not being written by the serial_MKS937a service on c1vac. The issue was that the sensor itself has gone bad and needs to be replaced. We believe the "L 0E-04" in the status message (C1:Vac-P1a_status) indicates a malfunctioning sensor.
We removed the writing of C1:Vac-P1a_pressure and C1:Vac-P1a_status from MKS937a and moved them to XGS600, which uses sensor 1 on the main volume. See this commit.
Now we are able to open PSL shutter. The sensor should be replaced ASAP and this commit can be reverted then.
All steps taken have been recorded here:
(Report on Aug 12, 2022)
We went around the lab for the final check. Here are the additional notes.
I declare that now we are ready for the power outage.
Our first step in preparing for the Shutdown was to center all the OpLevs. Next is to prepare the Vacuum System for the shutdown.
Took the backup (snapshot) of /home/export as of Aug 12, 2022
controls@nodus> cd /cvs/cds/caltech/nodus_backup
controls@nodus> rsync -ah --progress --delete /home/export ./export_220812 >rsync.log&
As the last backup was just a month ago (July 8), rsync finished quickly (~2min).
TL;DR: Have successfully measured the UGF of the AUX laser on my Red Pitaya! Attached is one of my data runs (pdf + txt file).
# frequency start: 500.0
# frequency stop: 50000.0
# samples: 50
# amplitude: 0.01
# cycles: 500
# max fs: 125000000.0
# N: 16384
# UGF: 9264.899326705621
# Frequency[Hz] Magnitude[V/V] Phase[rad] Coherence
4.999999999999999432e+02 5.216612299292965105e+01 -7.738468629291910261e-01 7.660920305860696722e-02
5.492705709937790743e+02 3.622076363933444298e+01 -5.897393740774580229e-01 3.183076012979469405e-01
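The UGF in the header can be reproduced from the sweep itself by interpolating where the magnitude crosses unity; a sketch, assuming the column layout above (file name hypothetical):

import numpy as np

f, mag, ph, coh = np.loadtxt("ugf_run.txt", unpack=True)

lf, lm = np.log10(f), np.log10(mag)          # interpolate log|H| vs log(f)
i = np.flatnonzero(np.diff(np.sign(lm)))[0]  # first 0 dB crossing
ugf = 10 ** np.interp(0.0, [lm[i + 1], lm[i]], [lf[i + 1], lf[i]])
print(f"UGF ~ {ugf:.0f} Hz")                 # should reproduce the header value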
We had several problems with our NDS2 server configuration. It runs on megatron, but I think it may have had issues since perhaps not everyone was aware of it running there.
Since Megatron is currently running the "Shanghai" quad-core Opteron processor from ~2009, it's about time to replace it with something more up to date. I'll check with Neo to see if he has any old LDAS leftovers that are better.
Here is a summary of what needs doing following the chat with Jamie today.
Jamie brought over the KVM switch shown in the attachment and I tested all 16 ports and 7 cables and can confirm that they all work as expected.
1. Do a rack space budget to get a clear picture of how many front-ends we can fit into the new rack
2. Look into what needs doing and how much effort would be needed to clear rack 1X7 and use that instead of the new rack. The power down on Friday would present a good opportunity to do this work on Monday, so get the info ready before then.
3. Start mounting front-ends, KVM and dolphin network switch
4. Add the BOX rack layout to the CDS upgrade page.
We diagnosed the suspension damping of the IMC/BHD/recycling optics by kicking the various degrees of freedom (DOFs) and then tuning the gains to get a residual Q of approximately 5 where this can be achieved.
MC2: SIDE-YAW coupling, but OK
MC3: Too much coupling between dofs, NEEDS ATTENTION
AS1: POS-PIT coupling, close to oscillation, cnt2um off, NEEDS ATTENTION
AS4: PIT-YAW coupling, cannot increase YAW gain because of coupling, No cnt2um, No Cheby, NEEDS ATTENTION
PR2: No cnt2um, No Cheby
PR3: POS-PIT coupling, cannot increase POS/PIT/YAW gain because of coupling, No cnt2um, No Cheby, NEEDS ATTENTION
SR2: No cnt2um
During the cleaning today, we found many legacy lab items. Here are some policies on what should be kept and what should be disposed of.