  13270   Tue Aug 29 20:04:09 2017 | rana | Update | PSL | PSL table auxiliary NPRO

I don't understand why the 1st order diffracted beam doesn't go to zero when you shut off the drive. My guess is that the standing acoustic wave in the AO crystal needs some time to decay: f = 40 MHz, tau = 1 usec... Q ~ 100. Perhaps the crystal is damped by the PZT, and the output impedance of the Mini-Circuits switch is different from that of the AO driver.
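A quick numerical check of that estimate (a sketch, assuming the usual relation Q ~ pi*f*tau with tau the amplitude decay time):

import numpy as np

f = 40e6     # acoustic drive frequency [Hz]
tau = 1e-6   # assumed amplitude decay time of the standing wave [s]
Q = np.pi * f * tau
print(round(Q))   # ~126, consistent with the Q ~ 100 quoted above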

In any case, if you need a faster shut-off, or want something that goes to zero more cleanly, there is a large (~1 cm) aperture Pockels cell that Frank Siefert was using for making pulses to damage photodiodes. There is a DEI Pulser unit near the entrance to the QIL in Bridge which can drive it.

  13269   Tue Aug 29 15:41:17 2017 | Kira | Summary | PEM | heater circuit

I worked with Kevin and Gautam to create a heater circuit. The first attachment is Kevin's schematic of the circuit. The op-amp output connects to the gate of the power MOSFET, the power supply connects to the drain, and the source drives the heater. We set the power supply voltage to 22V and varied the input voltage to the op-amp. At 6V input, we got a current of 0.35A flowing through the heater and resistor. This was the peak current, because the op-amp was saturated (an increase in either power supply did not change the current), but when we increased the op-amp supply rails from 15V to 20V, we got a current of 0.5A. We want a higher current than this, so we will need a different op-amp with a higher maximum voltage rating, and a resistor that can take more power than the current one (it currently takes 5W, and is the best one we could find).

Kevin and I created a simulation of this circuit in CircuitLab to understand why the current was so low (second attachment). The horizontal axis is the voltage we supply to the op-amp. The blue line shows the voltage at the point between the op-amp output and the MOSFET gate. The orange line is the voltage at the point between the MOSFET source and the heater. The brown line is the voltage at the point between the heater and the resistor. We can see that saturation occurs at an input of about 2.1V. At that point, the gate-source voltage is the difference between the blue and orange curves, about 4V, which is what we measured. Likewise, the voltage across the heater is the difference between the orange and brown curves, around 8V, which we also measured. Lastly, the voltage across the resistor is the brown curve itself, about 2V, matching our observations. The circuit works as it should, but saturates too early to deliver a high enough current.

Gautam noted that it is important to measure the current correctly. We can't just place an ammeter across the resistor or heater, because the internal resistance of the ammeter (~0.5 ohm) is comparable to the resistance we want to measure: the current splits between the circuit and the ammeter, and we get an equivalent resistance of 1/R = 1/R0 + 1/Ra, where R0 is the resistance of the part we want to measure the current through, and Ra is the ammeter resistance. The combined resistance is lower, so the ammeter shows a higher current than is actually flowing in the device. To measure the current accurately, we must place the ammeter in series with the part of interest. We initially got a 1A reading across the heater, which was not correct, and our setup basically did not heat up at all. With the ammeter in series with the heater, we got only 0.35A.
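A one-line illustration of that loading effect (values assumed for the sketch):

R0 = 1.0                # resistance of the element under test [ohm] (assumed)
Ra = 0.5                # internal resistance of the ammeter [ohm]
R = 1 / (1/R0 + 1/Ra)   # equivalent resistance with the meter in parallel
print(R)                # 0.33 ohm: lower than R0, so the indicated current reads high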

The last two images show the setup for testing the heater. We wrapped it around an aluminum piece and covered it with a few layers of insulating material. We can stick a thermometer between the insulation and the heater to see the temperature change. In later tests, we may insulate the whole piece so that less heat is dissipated. In addition, we secured the MOSFET to a heat sink with thermal paste, as it got very hot.

Our next steps will be to get a resistor and an op-amp better suited to our purposes. We will also run simulations with the components we choose to make sure the circuit can provide the desired current of 1A (the maximum output of the power supply is 24V and the heater is 24 ohm, so the maximum current is 1A); a rough sizing check is sketched below. Kevin is working on that now.
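A rough version of that sizing check, using the numbers measured above (the sense resistance is inferred from the 2V / 0.35A observation, so treat it as approximate):

R_heater = 24.0        # nominal heater resistance [ohm]
R_sense = 2.0 / 0.35   # sense resistor inferred from the measurements above [ohm]
V_gs = 4.0             # measured gate-source drop near saturation [V]
I_target = 1.0         # desired heater current [A]

V_gate = I_target * (R_heater + R_sense) + V_gs   # op-amp output must reach ~34V
P_sense = I_target**2 * R_sense                   # ~5.7W, above the 5W rating
print(V_gate, P_sense)

This supports both conclusions above: the op-amp output, not the 24V supply, is the binding constraint, and the resistor needs a higher power rating.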

Attachment 1: heater_circuit.pdf
Attachment 2: simulation.png
Attachment 3: heater_setup.jpg
Attachment 4: IMG_20170829_131126.jpg
  13268   Tue Aug 29 15:35:19 2017 | Steve | Update | VAC | vacuum pump specifications & manuals

RP1 and RP3 roughing pumps: Leybold D30A oily rotary pump manual

Fore pump of TP2 & TP3: Varian SH-100 Dry Scroll

TP2 and TP3 small turbo drag pumps: Varian 969-9361

TP2 and TP3 turbo controllers: Varian 969-9505

TP1 magnetically suspended turbo pump: Osaka TG390MCAB, sn 360, and controller TC010M. Note: this pump runs on 208VAC single phase. It is not on the UPS!

Osaka Maglev Manual and Osaka Controller Communication Wiring

VC1 cryo pump: CTI-Cryogenics Cryo-Torr 8, sn 8g23925. SAFETY note: the compressor is single-phase 208VAC and the head driver is 3-phase 208VAC. Compressor and driver each have a separate power cord!

Also installed at the 40m wiki.

Quote:

The V1 gate valve specs are posted on the 40m wiki page. VAT model number 10846-UE44-0007. Our main volume pumping goes through this 8" id gate valve V1 to the Maglev turbo, or to the Cryo pump through VC1.

The ion pumps have 6" id gate valves: VAT 10844-UE44-AAY1, pneumatic actuator with position indicator and double-acting solenoid valve, 115V 60Hz. Purchased 1999 Dec 22.

UHV gate valves 2.5" id: VAT 10836-UE44, pneumatic actuator with position indicator and double-acting solenoid valve, 115V 60Hz. IFO to RGA: VM1; RGA to Maglev: VM2.

Mini UHV gate valve 1.5" id: VAT 01032-UE01 (2016 catalogue, page 14), manual, no position indicator. VM4, next to the manually adjustable fine leak valve to the RGA.

UHV angle valve 1.5" id, model VAT 28432-GE41: Viton plate seal, pneumatic actuator with position indicator & solenoid valve 115V, single-acting closing spring. MEDM screen: VM3, VC2, V3, V4, V5, V6, VA6, V7 & annuli. Each chamber annulus has 2 valves.

UHV angle valve 1.5" id, model VAT 57132-GE05 (page 208): metal tip seal, manual actuation only, with position indicator. MEDM screen: roughing RV1 and venting VV1; a hand wheel is needed to close to torque spec.

UHV angle valve 1.5" id, model VAT 28432-GE01: Viton plate seal, manual operation only, at the IT gauges (Hornet & Super Bee) and the ion pump roughing ports. These are not labeled.

The Cryo pump interlock wiring was added too.

Note: all moving valve plate seals are single.

 

  13267   Tue Aug 29 15:04:59 2017 | gautam | Update | SUS | ETMY Oplev PIT loop gain changed

Last night, while we were working on the ALS, I noticed the GTRY spot moving around (in pitch) on the CCD monitor in the control room at ~1-2Hz. The operating condition was that the arm was locked to the IR and the PSL green shutter was closed, so that only the arm transmissions were visible on the CCD screens. There was no such noticeable movement of the GTRX spot. Looking at the out-of-loop ALS noise in this configuration (but now with the PSL green shutter open, of course), the Y arm ALS noise at low frequencies was much worse than the X arm.

Today, I looked into this a little more. I first checked that the Y-endtable enclosure was closed off as usual (as I had done some tweaking to the green input pointing some days ago). There are various green ghost beams on the Y-endtable. When time permits, we should make an effort to cleanly dump these. But the enclosure was closed as usual.

Then I looked at the in-loop Oplev error signal spectra for the ITMY and ETMY Oplev loops. There was high coherence between the ETMY PIT Oplev error signal and GTRY. So I took a loop transfer function measurement - the upper UGF was around 3.5Hz. I increased the loop gain such that the upper UGF was around 4.5Hz, with a phase margin of ~30 degrees. Doing so visibly reduced the angular movement of the GTRY spot on the CCD. Attachment #1 shows the Oplev loop TF after the gain increase, while Attachment #2 compares the GTRX and GTRY spectra (the DC value is approximately the same for both, around 0.4). GTRY still seems a bit noisier at low frequencies, but the out-of-loop ALS noise for the Y arm now lines up much more closely with its reference trace from a known good time.

Quote:
 

Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated.

 

Attachment 1: ETMY_OLPIT.pdf
Attachment 2: GTR_comparison.pdf
  13266   Tue Aug 29 02:08:39 2017 | gautam | Update | ALS | Fiber ALS noise measurement

I was having a chat with EricQ about this today, just noting some points from our discussion down here so that I remember to look into this tomorrow.

  • I believe that currently, the channel C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ and its Y arm analog read out the frequency of the green beat, in Hz.
  • In the comparison I plotted, I WRONGLY divided the spectrum of the IR beat by 2, instead of multiplying it by 2, which is what should actually be done for an apples-to-apples comparison.
  • The deeper question is, what should this channel actually readout?
  • Looking at my code from past arm scans etc., I see that I divide the downloaded data by 2 in order to convert the X-axis of these scans to "IR Hz" - and this really is all we should care about.
  • So I think I will have to re-do the cts-to-Hz calibration in the ALS models. It should be possible to do ~10 FSR scans with the IR beat, and then use the sideband resonances (presumably the sideband frequencies are known with better precision than the arm length, and hence the FSR) to calibrate the phase tracker - see the sketch after this list.
  • I don't think this changes the fact that the Fiber ALS situation has been improved - but I will have to repeat the measurement to be sure. The improvement may not be as stellar as I tried to sell in my previous elog sad.
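A sketch of that sideband-based calibration idea (the count values below are hypothetical placeholders, and the exact modulation frequency should be taken from the IFO settings):

f_mod = 11.066e6       # modulation frequency [Hz], known to high precision
cts_carrier = 1.23e4   # phase tracker counts at a carrier resonance (made up)
cts_sideband = 5.67e4  # counts at the adjacent upper-sideband resonance (made up)

cal_IR = f_mod / (cts_sideband - cts_carrier)   # IR Hz per count
cal_green = 2 * cal_IR                          # green beat moves at twice the IR rate
print(cal_IR, cal_green)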

    Other thoughts: 

  • Can we make use of the JETSTOR RAID array for some kind of consolidated 40m CDS backup system? Once we've gotten everything of interest out of it...

  13265   Tue Aug 29 01:52:22 2017 | gautam | Update | SUS | Test mass actuator calibration

[ericq, gautam]

Tonight, we decided to double-check the POX counts-to-meters conversion.

It is unclear when this was last done, and since I modified the coil driver electronics for the ITMs and BS recently, I figured it would be useful to get this calibration done. The primary motivation was to see if we could resolve the discrepancy between the current ALS noise (using POX as a sensor) and the Izumi et al. plot.

Because we are planning to change the coil driver electronics further soon anyway, we decided to do the calibration at a single frequency for tonight. For future reference, the extension of this method to calibrate the actuator over a wider range of frequencies is here. The procedure followed, and the relevant numbers from tonight, are as follows.

Procedure:

  1. Set dark offsets on all DCPDs and LSC PDs.
  2. Look at the free swinging Michelson signal on ASDC.
    • For tonight's test, ASDC was derived from the AS55 photodiode.
    • The AS110 photodiode actually has more light on it, but we think that the ADC that the DCPD board is interfaced to is running on 0-2V rather than 0-10V, as the signal seemed to saturate around 2000 counts. It is unclear whether the actual photodiode is saturating; to be investigated.
    • So we decided to use ASDC from AS55 photodiode with 15dB whitening gain.
    • There is also some issue with the whitening filter (not whitening gain) on ASDC - engaging the whitening shifts the DC offset. This has to be investigated while we get stuck into the LSC electronics.
  3. Look at the peak-to-peak swing of ASDC. Use the algebraic expression for the reflected power from a Michelson interferometer to calibrate the ASDC slope at the half-fringe point (see the sketch after this list). For tonight's test, ASDC_max = 1026 counts, ASDC_min = 2 counts.
  4. Lock the Michelson at half-fringe, with ASDC as the error signal.
    • Zero out the MICH elements in the RFPD input matrix.
    • Set the matrix element from ASDC to MICH in the DCPD LSC input matrix to 1.
    • The servo gain used was +0.005 on the MICH_A servo path.
    • A low-frequency boost was turned on.
  5. Use the sensing matrix infrastructure to drive a line in the optic of interest.
    • Tonight, we looked at ITMX and ITMY.
    • The line was driven at 311.1Hz, and the amplitude was 300 counts.
    • Download 60secs of ASDC data, demodulate at the driven frequency to find the peak height in counts, and using the slope of ASDC (in cts/m) at the Michelson half-fringe, calculate the actuator gain in m/cts.
    • ITMY: 2.55e-9 / f^2 m/count
    • ITMX: 2.65e-9 / f^2 m/count
    • These numbers kind of make sense - the previous numbers were ~5nm/f^2 /ct, but I removed an analog gain of x3 in this path. Presumably there has been some change in the N/A conversion factor - perhaps because of a change in the interaction between the optics' face magnets and the static magnetic field in the OSEMs?
  6. Lock the arms with POX/POY, and drive the newly calibrated ITMs.
    • So we know how many meters we are driving the ITMs by.
    • Looking at POX/POY, we can calibrate these into meters/count.
    • Both POX and POY were whitened.
    • POX whitening gain = +30dB, POY whitening gain = +18dB.
    • ITMX and ITMY were driven at 311.1Hz, with amplitude = 2counts.
    • Download 60 secs of data, demodulate at the drive frequency to find the peak height, and use the known ITM actuator gains to calibrate POX and POY.
    • POX: 7.34e-13 m / count (approx. 5 times less than the number in the Foton filter bank in the C1:CAL-CINV model).
    • POY: 1.325e-13 m / count
    • We did not optimize the demod phases for POX/POY tonight. 
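For steps 3 and 5, here is a minimal sketch of the arithmetic (the sample rate and data are placeholder assumptions; this is not the actual analysis script). The AS port power of a Michelson goes as P = Pmin + (Pmax - Pmin)*sin^2(2*pi*dx/lambda) for arm length difference dx, so the slope at half fringe is (Pmax - Pmin)*2*pi/lambda:

import numpy as np

lam = 1064e-9
ASDC_max, ASDC_min = 1026.0, 2.0                 # counts (measured above)
slope = (ASDC_max - ASDC_min) * 2 * np.pi / lam  # ASDC slope [cts/m] at half fringe

# Demodulate 60 s of ASDC at the drive frequency to get the line height.
fs, f_drive, drive_cts = 16384.0, 311.1, 300.0   # assumed sample rate; drive params
t = np.arange(0, 60, 1/fs)
asdc = np.random.randn(t.size)                   # placeholder for the downloaded data
line = 2 * np.abs(np.mean(asdc * np.exp(-2j*np.pi*f_drive*t)))  # line height [cts]
actuator_gain = (line / slope) / drive_cts       # actuator gain [m/count] at f_drive
print(actuator_gain)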

Once these calibrations were updated, we decided to control the arms with ALS, and look at the POX spectrum. Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated. But the X arm ALS noise looked pretty good.

Seems like updating the calibration did the job; see the attached comparison plot.

Attachment 1: ALS_comparison.pdf
  13264   Mon Aug 28 23:22:56 2017 | johannes | Update | PSL | PSL table auxiliary NPRO

I moved the auxiliary NPRO to the PSL table today and started setting up the optics.

The Faraday isolator was showing a pretty unclean mode at the output, so I took the polarizers off to look through them, and found that the front polarizer is either out of place or damaged (there is a straight edge visible right in the middle of the aperture, but the way the polarizer is packaged prevents me from inspecting it more closely). I proceeded without it, but left space so an FI can be added in the future. The same goes for the broadband EOM.

There are two spare AOMs (ISOMET and IntraAction, both resonant at 40MHz) available before we have to resort to the one currently installed in the PSL.

I installed the IntraAction AOM first and looked at the switching speed of its first-order diffracted beam, using both its commercial driver and a combination of Mini-Circuits components. Both show similar behavior: the fall time of the initial step is ~110ns in both cases, but the light doesn't decay promptly to zero - there is a slower exponential tail. Need to check the zeroth-order beam and also the other AOM.

IntraAction driver

Mini-Circuits driver

  13263   Mon Aug 28 17:13:57 2017 | ericq | Update | CDS | 40m files backup situation

In addition to bootable full-disk backups, it would be wise to make sure the important service configuration files from each machine are version controlled in the 40m SVN: things like the apache files on nodus, the martian hosts and DHCP files on chiara, the nds2 configuration and init scripts on megatron, etc. This would make future OS/hardware upgrades easier too.

  13262   Mon Aug 28 16:20:00 2017 | gautam | Update | CDS | 40m files backup situation

This elog is meant to summarize the current backup situation of critical 40m files.

What are the critical filesystems? For each one, I've indicated the size of the disk, the volume currently used, a description, and the current backup situation.

FB1 root filesystem (1.7TB / 2TB)
  • FB1 is the machine that hosts the diskless root for the front end machines
  • Additionally, it runs the daqd processes which write data from realtime models into frame files
  • Backup status: Not backed up

/frames (up to 24TB)
  • This is where the frame files are written
  • Need to set up a wiper script that periodically clears older data so that the disk doesn't overflow
  • Backup status: Not backed up locally. LDAS pulls files from nodus daily via rsync, so there's no cron job for us to manage; we just allow incoming rsync.

Shared user area (1.6TB / 2TB)
  • /home/cds on chiara
  • This is exported over NFS to 40m workstations, FB1 etc.
  • Contains user directories, scripts, realtime models etc.
  • Backup status: Local backup to /media/40mBackup on chiara via daily cronjob; remote backup to ldas-cit.ligo.caltech.edu::40m/cvs via daily cronjob on nodus

Chiara root filesystem (11GB / 440GB)
  • This is the root filesystem for chiara
  • Contains the nameserver for the martian network; responsible for rsyncing /home/cds
  • Backup status: Not backed up

Megatron root filesystem (39GB / 130GB)
  • Boot disk for megatron, which is our scripts machine
  • Runs the MC autolocker, FSS loops etc.
  • Also the NDS server for facilitating data access from outside the martian network
  • Backup status: Not backed up

Nodus root filesystem (77GB / 355GB)
  • This is the boot disk for our gateway machine
  • Hosts the Elog, SVN, and wikis
  • Supposed to be responsible for sending email alerts for NFS disk usage and vacuum system N2 pressure
  • Backup status: Not backed up

JETSTOR RAID array (12TB / 13TB)
  • Old /frames
  • Archived frames from DRFPMI locks
  • Long term trends
  • Backup status: Currently mounted on megatron, not backed up

Then there is Optimus, but I don't think there is anything critical on it. 

So, based on my understanding, we need to back up a whole bunch of stuff, particularly the boot disks and root filesystems for chiara, megatron and nodus. We should also test that the backups we make are usable (i.e. that we can recover the current operating state in the event of a disk failure).

Please edit this elog if I have made a mistake. I also don't know whether there is any sort of backup of the slow computing system code.

  13261   Mon Aug 28 10:51:21 2017 | steve | Update | SUS | ETMX damping recovered
Attachment 1: ETMX_restored.png
  13260   Mon Aug 28 10:28:07 2017 | Steve | Update | VAC | RGA scan 20 days after 17 Torr

The RGA was turned on 7 days ago. It's at 46 C now. The X-arm room temperature is ~20 C.

IFO pressure is 6.5e-6 Torr at the IT-Hornet gauge. Valve configuration: vacuum normal.

Attachment 1: 20d_after_17_torr.png
  13258   Mon Aug 28 08:47:32 2017 | Jamie | Summary | LSC | First cavity length reconstruction with a neural network
Quote:

Phenomenal!

truly.

  13257   Sun Aug 27 11:57:31 2017 | rana | Update | ALS | Fiber ALS noise measurement

It seems like the main contribution to the RMS comes from the high-frequency bump. When using the ALS loop to lock the arm to the beat, only the stuff below ~100 Hz will matter. It will be interesting to see what the noise budget shows. Perhaps the discrepancy between in-loop and out-of-loop will go down.

  13256   Sat Aug 26 09:56:34 2017 | Gabriele | Summary | LSC | First cavity length reconstruction with a neural network

Update

I included the 55 MHz sideband and higher-order modes in my training examples. To keep things simple, I just assumed there are higher-order modes up to n+m=4 in the input beam. The power in each HOM is drawn from a Gaussian distribution with width determined from experimental cavity scans. I used a value of 0.913 +/- 0.01 rad for the Gouy phase (again estimated from cavity scans, but in reasonable agreement with the nominal radius of curvature of ETMX).

Results are improved. The plots below show the performance of the neural network on 100 s of experimental data.

For reference, the plots below show the performance of the same network on simulated data (which includes sensing noise but no higher-order modes).

  13255   Fri Aug 25 17:11:07 2017 | rana | Update | ALS | Fiber ALS noise measurement

Is it better to mount the box in the PSL under the existing shelf, or in a nearby PSL rack?

Quote:

 

Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table

 

  13254   Fri Aug 25 15:54:14 2017 | gautam | Update | ALS | Fiber ALS noise measurement

[Kira, gautam]

Attachment #1 - Photo of the revamped beat setup. The top panel has to be installed. New features include:

  • Regulated power supply via D1000217.
  • Single power switch for both PDs.
  • Power indicator LED.
  • Chassis ground isolated from all other electronic grounds. For this purpose, I installed all the electronics on a metal plate which is only connected to the chassis via nylon screws. The TO220-package power regulator ICs have been mounted with TO220 mounting kits, which provide a thin piece of plastic that electrically insulates their grounds from the chassis ground.
  • PD outputs routed through 20dB coupler on front panel for diagnostic purposes.
  • Fiber routing has been cleaned up a little. I installed a winding fixture I got from Johannes, but perhaps we can install another one of these on top of the existing one to neaten up the fiber layout further.
  • 90-10 light splitter (meant for diagnostic purposes) has been removed because of space constraints. 

Attachment #2 - Power budget inside the box. Some of these FC/APC connectors seem to not offer good coupling between the two fibers; specifically, the one on the front panel meant to accept the PSL light input fiber seems particularly bad. Right now, the PSL light enters the box through one of the front panel connectors marked "PSL + X out". I've also indicated the beat amplitude measured with an RF analyzer. I need to do the math now to confirm whether these match the expected amplitudes based on the measured power levels; a sketch of that check is below.
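A sketch of that check (the powers, responsivity, and efficiency below are placeholder assumptions, not the measured numbers from Attachment #2):

import numpy as np

P1, P2 = 100e-6, 200e-6   # optical power from each beam on the PD [W] (assumed)
resp = 0.75               # assumed InGaAs responsivity at 1064 nm [A/W]
Z = 50.0                  # load impedance [ohm]
eta = 0.5                 # assumed heterodyne (mode overlap) efficiency

I_beat = 2 * eta * resp * np.sqrt(P1 * P2)        # beat photocurrent amplitude [A]
P_dBm = 10 * np.log10((I_beat**2 / 2) * Z / 1e-3)
print(P_dBm)                                      # expected beat power into 50 ohm [dBm]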

Attachment #3 - We repeated the measurement detailed here. The X arm (locked to IR) was used for this test. The "X" delay line electronics were connected to the X green beat PD, while the "Y" delay line electronics were connected to the X IR beat PD. I divided the phase tracker Hz calibration factor by 2 to get IR Hz for the Y arm channels. IR beat was at ~38MHz, green beat was at ~76MHz. The broadband excess noise seen in the previous test is no longer present. Indeed, below ~20Hz, the IR beat seems less noisy. So seems like the cleaning / electronics revamp did some good. 

Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table.

Quote:

Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow. 

GV Edit: I've added better photos to the 40m Google Photos page. I've also started a wiki page for this box / the proposed IR ALS  system. For the moment, all that is there is the datasheet to the Fiber Couplers used, I will populate this more as I further characterize the setup.

Attachment 1: IMG_7497.JPG
Attachment 2: FOL_schematic.pdf
Attachment 3: 20170825_IR_ALS.pdf
  13253   Fri Aug 25 11:11:26 2017 | gautam | Update | General | MC1 kicked again

Looks like MC1 got another big kick just under 4 hours ago. None of the other optics show any evidence of a glitch, so it seems unlikely that this was some sort of global event. It had been well behaved for ~2 weeks. The IMC was unlocked. I manually re-aligned MC1, at which point the autolocker was able to lock the IMC.

Looking at this plot, it seems that the LR and UL coils saw the largest kicks; UR barely saw it. Not sure what (if anything) to make of this - apparently the optic moved by ~20 urad with the UR magnet acting approximately as the pivot.

Attachment 1: MC1_glitch.png
  13252   Fri Aug 25 01:20:52 2017 | gautam | Update | LSC | DRMI locking attempt

I tried some DRMI locking again tonight, but had no success. Here is the story.

  • I started out by going to the AS table and measuring the light level on the REFL55 photodiode (with PRM aligned and the PRC flashing, but LSC disabled).
    • The Ophir power meter reads 13mW
    • The DC output of the photodiode shows ~500mV on an oscilloscope.
    • Both of these numbers line up well with measurements I made in April/May.
  • Returned to the control room and aligned the IFO for DRMI locking - but LSC servos remained disabled.
    • At the nominal REFL55 whitening level of +18dB, the REFL 55 signals saturated the ADC (confirmed by looking at the traces on dataviewer).
    • But the signals still looked like PDH error signals.
    • Lowering the whitening gain to 6dB makes the PDH error signal horns peak around 20,000 counts.
    • Could this be indicative of problems with either the analog whitening gain switching or the LSC Demod Boards? To be investigated.
  • Tried enabling LSC servos with same settings with which I had success right up till a couple of months ago, but had no success.
    • If it is true that the REFL55 signal is getting amplified because of some gain stage not being switched correctly, I should still have been able to lock the SRC with a lowered loop gain - but even lowering the gain by a factor of 10 had no effect on the locking success rate.

Looks like I will have to embark on the REFL55 LSC electronics investigation. I was able to successfully lock the PRC on carrier and sideband, and the Michelson lock also seems to work fine, all of which seem to point to a hardware problem with the REFL55 signal chain.

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 

  13251   Thu Aug 24 18:51:57 2017 | Koji | Summary | LSC | First cavity length reconstruction with a neural network

Phenomenal!

  13250   Thu Aug 24 18:02:16 2017 | Gabriele | Summary | LSC | First cavity length reconstruction with a neural network

1) Introduction

In brief, I trained a deep neural network (DNN) to reconstruct the cavity length, using as input only the transmitted power and the reflection PDH signals. The training was performed with simulated data, computed along 0.25s-long trajectories sampled at 8kHz, with random ending points in the [-lambda/4, lambda/4] unique region and with random velocities.

The goal of this work is to validate the whole approach of length reconstruction with a DNN in the Fabry-Perot case, by comparing the DNN reconstruction with the ALS cavity length measurement. The final target is to deploy a system to lock PRMI and DRMI. Actually, the Fabry-Perot cavity problem is harder for a DNN: the cavity linewidth is quite narrow, forcing me to use a very high sampling frequency (8kHz) to capture at least a few samples at each resonance crossing. I'm using a recurrent neural network (RNN) in the input layers of the DNN, trained using truncated backpropagation through time (TBPTT): during training, each RNN layer is unrolled into as many copies as there are input time samples (8192 * 0.25 = 2048). So in practice I'm training a DNN with >2000 layers! The limit here is computational, mostly GPU memory; that's why I'm not able to use longer data stretches.

But in brief, the DNN reconstruction is performing well for the first attempt.
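To make the setup above concrete, here is a minimal sketch of the training idea (a toy model, not the actual code: the cavity simulation, network size, and training parameters are all placeholder assumptions):

import numpy as np
import torch
import torch.nn as nn

fs, T = 8192, 0.25                 # sampling rate [Hz] and trajectory length [s]
nsamp = int(fs * T)                # 2048 samples per example
lam = 1064e-9
finesse = 450                      # assumed arm cavity finesse

def simulate(batch):
    """Constant-velocity trajectories and toy (TRA, PDH-like) signals."""
    v = np.random.uniform(0.1e-6, 5e-6, (batch, 1))       # velocity [m/s]
    x_end = np.random.uniform(-lam/4, lam/4, (batch, 1))  # ending point [m]
    t = np.arange(nsamp) / fs
    x = x_end + v * (t[None, :] - T)                      # position along trajectory
    phi = 4 * np.pi * x / lam                             # round-trip phase
    tra = 1.0 / (1 + (2 * finesse / np.pi * np.sin(phi / 2))**2)  # Airy peak
    pdh = tra * np.sin(phi)                               # crude stand-in for POX I
    sig = np.stack([tra, pdh, pdh * np.cos(phi)], axis=-1)
    return (torch.tensor(sig, dtype=torch.float32),
            torch.tensor(x_end, dtype=torch.float32))

class Reconstructor(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=64, batch_first=True)
        self.out = nn.Linear(64, 1)
    def forward(self, sig):
        h, _ = self.rnn(sig)          # unrolled over all 2048 time samples
        return self.out(h[:, -1, :])  # length estimate at the trajectory end

net = Reconstructor()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):               # toy-sized loop; the real training is much longer
    sig, target = simulate(64)
    loss = nn.functional.mse_loss(net(sig), target)
    opt.zero_grad(); loss.backward(); opt.step()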

2) Training simulation

In the results shown below, I'm using a pre-trained network with parameters that do not match the actual data very well, in particular the distribution of mirror velocities and the sensing noises. I'm working on improving the training.

I used the following parameters for the Fabry-Perot cavity:

The uncertainty is assumed to be the 90% confidence level of a Gaussian distribution. The DNN is trained on 100000 examples, each one a 0.25s-long trajectory sampled at 8kHz, with random velocity between 0.1 and 5 um/s, and with ending points distributed as follows: 33% uniform on the [-lambda/4, lambda/4] region, plus 33% from a Gaussian distribution peaked at the center with 5 nm width. In addition there are 33% more static examples, distributed near the center.

For each point along the trajectory, the signals TRA, POX11_I and POX11_Q are computed and used as input to the DNN.

3) Experimental data

Gautam collected about 10 minutes of data with the free swinging cavity, with ALS locked on the arm. Some more data were collected with the cavity driven, to increase the motion. I used the driven dataset in the analysis below.

3.1) ALS calibration

The ALS signal is calibrated in green Hz. After converting it to meters, I checked the calibration by measuring the distance between carrier peaks. It turned out that the ALS signal was undercalibrated by about 26%. After correcting for this, I found a small non-linearity in the ALS response over multiple FSRs. So I binned the ALS signal over its entire range and averaged the TRA power in each bin, to get the transmission signal as a function of ALS position (in nm), shown below:

I used a peak detection algorithm to extract the carrier and 11 MHz sideband peaks, and compared them with their nominal positions. The difference between the expected and measured peak positions as a function of the ALS signal is shown below, with a quadratic fit that I used to improve the ALS calibration.

The result is

z_initial = 1e9 * L * lambda / c * 1.26 * ALS
z_corrected = 2.1e-06 * z^2 - 1.9e-02 * z - 6.91e+02

The ALS-calibrated z error from the peak positions is of the order of 3 nm (one sigma).

3.2) Mirror velocity

Using the calibrated ALS signal, I computed the cavity length velocity. The histogram below shows that it is well described by a Gaussian with a width of about 3 um/s. In my DNN training I used a different velocity distribution, but this shouldn't have a big impact. I'm retraining with a different distribution.

4) DNN results

The plot below shows a stretch of time-domain DNN reconstruction, compared with the ALS-calibrated signal. The DNN output is limited to the [-lambda/4, lambda/4] region, so the ALS signal is also wrapped into the same region. In general the DNN reconstruction follows the real motion reasonably well, mostly failing when the velocity is small and the cavity is simultaneously out of resonance. This is a limitation that I also see in simulation, and it is due to the short training time of 0.25s.

I did not hand-pick a good period; this is representative of the average performance. To get a better understanding of the performance, here's a histogram of the error for 100 seconds of data:

The central peak was fitted with a Gaussian, just to give a rough idea of its width, although the tails are much wider. A more interesting plot is the histogram below of the reconstructed position as a function of the ALS position. Ideally one would expect a perfect diagonal; the result isn't too far from that expectation:

The largest off-diagonal peak is at (-27, 125), marked with the red cross. Its origin is clearer in the plot below, which shows the mean, RMS and maximum error as a function of the cavity length. The second peak corresponds to where the 55 MHz sidebands resonate. In my training model there were no 55 MHz sidebands, nor higher-order modes.

5) Conclusions and next steps

The DNN reconstruction performance is already quite good, considering that the DNN couldn't be trained optimally because of computation power limitations. This is a validation of the whole idea of training the DNN offline on a simulation and then deploy the system online.

I'm working to improve the results by

  • training on a more realistic distribution of velocity
  • adding the 55 MHz sidebands
  • maybe adding HOMs
  • tune the DNN architecture

However I won't spend too much time on this, since I think the idea has been already validated.

 

  13249   Thu Aug 24 17:36:11 2017 | gautam | Update | CDS | FSS Slow Python maintenance

A couple of weeks ago, I was trying to modernize the Python version of the FSS slow temperature control loop when I accidentally ended up deleting it. There was no SVN backup, so the old Perl PID script has been running for the last few days.

Today, I checked out the latest version that Andrew and co. have running in the PSL lab. I had to make some important modifications for the script to work for the 40m setup.

  1. The script is conveniently setup in a way that the channels it needs to read from / write to are read in from an .ini file. I renamed all the channels to match the appropriate 40m ones.
  2. We don't have a soft EPICS channel in which to define the setpoint for our PID servo (which is 0). Rather than poke around with slow machine EPICS records, I simply commented out this line in the script and hard-coded the value 0. When we modernize to the Acromag era, we can set up an EPICS channel + MEDM slider for the setpoint.
  3. The way the Perl script was set up, the error signal was pre-scaled by a factor of 0.01, supposedly to make the PID gains be of order 1. For consistency, I re-inserted this scaling, which awade and co. had removed (see the sketch after this list).
  4. Modified the FSSslowPy.init file to call the script in accordance with the new syntax:
python FSSSlow.py -i FSSSlowPy.ini
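For reference, a minimal sketch of the PID update including the 0.01 pre-scaling from item 3 (the channel names and gains here are illustrative assumptions, not the values in FSSSlowPy.ini):

import time
import epics   # pyepics

SETPOINT = 0.0        # hard-coded setpoint, as described in item 2
PRESCALE = 0.01       # legacy error pre-scaling so the gains stay of order 1
KP, KI, KD = -0.03, -1.2, 0.0   # illustrative gains
dt = 1.0
integral, last_err = 0.0, 0.0

while True:
    err = PRESCALE * (epics.caget('C1:PSL-FSS_FAST') - SETPOINT)   # assumed channel
    integral += err * dt
    deriv = (err - last_err) / dt
    last_err = err
    epics.caput('C1:PSL-FSS_SLOWDC', KP*err + KI*integral + KD*deriv)  # assumed channel
    time.sleep(dt)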

Then I stopped the Perl process on megatron by running

sudo initctl stop FSSslow

and started the Python process by running

sudo initctl start FSSslowPy

I have now committed the files FSSSlow.py and FSSSlowPy.ini to the 40m SVN. Things have been stable for the last 20 mins or so; let's keep an eye on this though - although we had been running the Python PID loop for some months, this version is a slightly modified one.

The initctl stuff still isn't very robust - I think both the Autolocker and the FSS slow servo have to be manually restarted if megatron is shut down or restarted for whatever reason. It doesn't seem to be a problem with the initctl routine itself - looking at the logs, I can see that init is trying to start both processes, but fails each time. To be investigated. The wiki procedure to restart this process is up to date.

GV Edit 0000 25 Aug 2017: I had to add a line to the script that checks MC transmission before enabling the PID loop. Change has been committed to svn. Now, when the MC loses lock or if the PSL shutter is kept closed for an extended period of time, the temperature loop doesn't rail.

  13248   Thu Aug 24 00:39:47 2017 | gautam | Update | LSC | DRMI locking attempt

Since the single arm locking and dither alignment seemed to work alright after the CDS overhaul, I decided to try some recycling cavity locking tonight.

  • First, I locked single arms, ran dither alignment servos, and centered all test mass Oplevs. Note: the X arm dither alignment doesn't seem to work if we use the High-Gain Thorlabs PD as the Transmission PD. The BS loops just seem to pick up large offsets and the alignment actually degrades over a couple of minutes. This needs to be investigated.
  • Next, to get good PRM alignment, I manually moved the EPICS sliders till the REFL spot became roughly centered on the CCD screen.
  • Then I tried locking PRMI on carrier using the usual C1IFOConfigure script - the lock was caught within ~30 seconds.
  • The PRCL and MICH dither servo scripts also ran fine.
    • Centered PRM Oplev.
  • Next, I tried enabling the PRC angular feedforward.
    • OAF model does not automatically revert to its safe.snap configuration on model reboot, so I first manually did this such that the correct filter banks were enabled.
    • I was able to turn on the angular feedforward without disturbing the PRMI carrier lock. The angular motion of the POP spot on the CCD monitor was visibly reduced.
  • At this point I decided to try DRMI locking.
    • I centered the beam on the AS PDs with the simple Michelson.
    • Centered the beam on the REFL PDs with PRM aligned and PRC flashing through resonances.
    • Restored SRM alignment by eye with EPICS sliders.
    • Cavity alignment seemed alright - so I tried to lock DRMI with the old settings (i.e. from DRMI 1f locking a couple of months ago). But I had no success.
    • The behaviour of REFL55 (used for SRCL control) has changed dramatically - the analog whitening gain for this PD used to be +18dB, but at this setting, there are frequent ADC overflows. I had to reduce the whitening gain to +6dB to stop the ADC overflows. I also checked to make sure that the whitening setting was "manual" and not triggered.

Why should this have changed? I was just on the AS table and did re-center the beam onto the REFL55 RFPD, but I had also done this in April/May when I was last doing DRMI locking, and I can't explain the apparent factor of ~4 increase in light level. I think I have some measurements of the light levels at various PDs from April 2017; I will see how the present levels line up.

Of course dataviewer won't cooperate when I am trying to monitor testpoints.

I may be missing something obvious, but I am quitting for tonight, will look into this more tomorrow.


Unrelated to this work: looking at the GTRY spot on the CCD monitor, there seems to be some excess angular motion. Not sure where this is coming from. In the past, this sort of problem has been symptomatic of something going wonky with the Oplev loops. But I took loop measurements for ITMY and ETMY PIT and YAW, they look normal. I will investigate further when I am doing some more ALS work.

  13246   Wed Aug 23 17:22:36 2017 | gautam | Update | ALS | Fiber ALS - reinstalled

I completed the revamp of the box and re-installed it on the PSL table today. I think it would be ideal to install this in one of the electronics racks, perhaps 1X2. We would have to re-route the fibers from the PSL table to 1X2, but I think they have sufficient length, and this way the whole arrangement would be much cleaner.

Did a quick check to make sure I could see beat notes for both arms. I will now attempt to measure the ALS noise with this revamped box, to see if the improved power supply and grounding arrangement, as well as fiber cleaning, has had any effect.

Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow. 

For quick reference: here is the AM/PM measurement done when we re-installed the repaired Innolight NPRO on the new X endtable.

  13245   Wed Aug 23 10:11:46 2017 | Steve | Update | VAC | vacuum valve specifications

The V1 gate valve specs are posted on the 40m wiki page. VAT model number 10846-UE44-0007. Our main volume pumping goes through this 8" id gate valve V1 to the Maglev turbo, or to the Cryo pump through VC1.

The ion pumps have 6" id gate valves: VAT 10844-UE44-AAY1, pneumatic actuator with position indicator and double-acting solenoid valve, 115V 60Hz. Purchased 1999 Dec 22.

UHV gate valves 2.5" id: VAT 10836-UE44, pneumatic actuator with position indicator and double-acting solenoid valve, 115V 60Hz. IFO to RGA: VM1; RGA to Maglev: VM2.

Mini UHV gate valve 1.5" id: VAT 01032-UE01 (2016 catalogue, page 14), manual, no position indicator. VM4, next to the manually adjustable fine leak valve to the RGA.

UHV angle valve 1.5" id, model VAT 28432-GE41: Viton plate seal, pneumatic actuator with position indicator & solenoid valve 115V, single-acting closing spring. MEDM screen: VM3, VC2, V3, V4, V5, V6, VA6, V7 & annuli. Each chamber annulus has 2 valves.

UHV angle valve 1.5" id, model VAT 57132-GE05 (page 208): metal tip seal, manual actuation only, with position indicator. MEDM screen: roughing RV1 and venting VV1; a hand wheel is needed to close to torque spec.

UHV angle valve 1.5" id, model VAT 28432-GE01: Viton plate seal, manual operation only, at the IT gauges (Hornet & Super Bee) and the ion pump roughing ports. These are not labeled.

The Cryo pump interlock wiring was added too.

Note: all moving valve plate seals are single.

  13244   Tue Aug 22 23:27:14 2017 | rana | Update | ALS | ALS OLTFs

Didn't someone look at what the OLG requirement should be for these servos at some point? I wonder if we can make a parallel digital path that we switch on after green lock. Then we could make this a simple 1/f box and just add in the digital path (take the analog control signal into an ADC, filter, and then sum into the control point further down the path to the laser) for the low-frequency boost.

  13243   Tue Aug 22 18:36:46 2017 | gautam | Update | Computers | All FE models compiled against RCG3.4

After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.

To do so:

  • I did rtcds make and rtcds install for all the models.
  • Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom).
  • During the compilation process (i.e. rtcds make), I got some compilation warnings for some of the models. I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
  • c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
  • Doing so took down the models running on c1sus and c1ioo - but these FEs themselves did not have to be rebooted.
  • Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
  • Then I ssh-ed into FB1, and restarted the daqd processes - but c1lsc and c1ioo CDS indicators were still red.
  • Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
  • I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1).

IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.

GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the x3 gain on the coil driver filter banks has to be turned on manually at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis, as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.

Quote:

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet: we can access actual testpoint data (not just zeros, as was the case before) using DTT, and also with dataviewer if DTT is used to open the testpoint first, but DV by itself still can't open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure exactly how long, but something like 5 mins), the testpoint is automatically closed.

 

Attachment 1: CDS_Aug22.png
  13242   Tue Aug 22 17:11:15 2017 | gautam | Update | Computers | c1iscex model restarts

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet: we can access actual testpoint data (not just zeros, as was the case before) using DTT, and also with dataviewer if DTT is used to open the testpoint first, but DV by itself still can't open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure exactly how long, but something like 5 mins), the testpoint is automatically closed.

  13241   Tue Aug 22 16:56:54 2017 | johannes | Summary | General | AS laser existing components inventory

I surveyed the lab today to see what we may need to buy for the AS laser setup.

We have:

NPRO 200 mW + Driver

Faraday Isolator from cabinet

ISOMET Model 1201E: This is a free-space AOM I found in the modulator cabinet. It needs to be driven at 40MHz (to be confirmed) with ~6W of electrical power. For a 500 micron beam it can allegedly achieve rise times of '93' [units not specified, could this be nanoseconds?]. I did not find a dedicated driver for it; however, there was a 5W Mini-Circuits amplifier ZHL-5W-1 in the RF cabinet and a switch ZSDR-230, which has a typical switch time of 2 microseconds, though I'm not sure how this translates to rise/fall times of the deflected power. It seems we have everything to set this up, so we'll see by the end of the week whether we can use a combination of these things or whether we need to buy additional driver electronics.
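A sanity check on the '93' figure (a sketch; the acoustic velocity below is a typical value, the ISOMET datasheet may differ):

d = 500e-6   # beam diameter [m]
v = 4.2e3    # assumed acoustic velocity in the AOM medium [m/s]
t_rise = 0.64 * d / v        # usual rise-time estimate from the acoustic transit time
print(t_rise * 1e9, "ns")    # ~76 ns, so nanoseconds for '93' is plausible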

New Focus model 4004 broadband phase modulator, which is labeled as dusty, and is in fact quite dirty when one looks through it. We should attempt to clean this thing; maybe we can use it here or at the ends.

Probably all the optics we need for the PSL table setup.

 

We need:

Beat PD: How about one of these: EOT ET-3000A? I didn't find a broadband PD for the beat with the PSL

Fiber Stuff: coupler & polarization maintaining fiber 20m & collimator. There are a couple options here, which we can discuss in the meeting.

Faraday Isolator: If we want to inject P-polarization. If S is okay we can use a polarizing plate beamsplitter instead.

Possibly some large lenses for mode-matching to IFO (TBD)

 

 

  13240   Tue Aug 22 15:40:06 2017 | gautam | Update | Computers | Old frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions.

With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen, so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models - but the indicator lights were still red. Apparently the mx processes weren't running on c1iscey; the fix is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote:

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied the archived DRFPMI frame files over to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

 

  13239   Tue Aug 22 15:17:19 2017 | ericq | Update | Computers | Old frames accessible again

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied the archived DRFPMI frame files over to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

  13238   Tue Aug 22 02:19:11 2017 | gautam | Update | ALS | ALS OLTFs

Attachment #1 shows the results of my measurements tonight (SR785 data in Attachment #2). Both loops have a UGF of ~10kHz, with ~55 degrees of phase margin.

Excitation was injected via an SR560 at the PDH error point; the amplitude was 35mV. According to the LED indicators on these boxes, the low-frequency boost stages were ON. The gain knob of the X end PDH box was at 6.5, that of the Y end PDH box at 4.9. I need to check the schematics to interpret these numbers. GV Edit: According to this elog, these numbers mean that the overall gain of the X end PDH box is approx. 25dB, while that of the Y end PDH box is approx. 15dB. I believe the Y end Lightwave NPRO has an actuator discriminant of ~5MHz/V, while the X end Innolight is more like 1MHz/V.

Not sure what to make of the X PDH loop measurement being so much noisier than the Y end; I need to think about this.

More detailed analysis to follow.
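For the record, a sketch of how the UGF and phase margin in Attachment #1 can be read off the SR785 data in Attachment #2 (the file name and column layout are assumptions):

import numpy as np

f, mag_db, phase_deg = np.loadtxt('ALS_OLTF_X.txt', unpack=True)  # assumed format
i = np.argmin(np.abs(mag_db))      # frequency point closest to unity gain (0 dB)
ugf = f[i]
phase_margin = 180.0 + phase_deg[i]
print(ugf, phase_margin)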

Quote:

 

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.

 

Attachment 1: ALS_OLTFs.pdf
Attachment 2: ALS_OLTF_Aug2017.zip
  13237   Mon Aug 21 23:38:55 2017 | gautam | Update | ALS | ALS out-of-loop noise

I worked a little bit on the Y arm ALS today. 

  • Started by locking the Y arm to IR with POY, and then ran the dither alignment script to maximize Y arm transmission.
  • Green TRY DC monitor was around 0.16, whereas I have seen ~0.45 when we were doing DRFPMI locking.
  • So I went to the Y end table and tweaked the steering mirrors a little. I was able to get GTRY to ~0.42. I think this can be tweaked a little further but I decided to push on for tonight.
  • The beat amplitude on the network analyzer in the control room is comparable to the X arm beat now.
  • Adjusted the gain of the phase tracker servos, cleared phase history.
  • Looking at the ALS beat noise with the arms locked to IR and the slow ALS temperature control loops ON (see Attachment #1), the current measurements line up quite well with the reference traces.

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.


Some weeks ago, I moved some of the green steering optics on the PSL table around, in order to flip some mirror mounts and get angles of incidence closer to ~45 deg on some of the steering mirrors. As a result of this work, I can see some light on the GTRY CCD when the X green shutter is open. It is unclear if there is also some scattered light on the RFPDs. I will post pictures + a more detailed investigation of the situation on the PSL table later; there are multiple stray green beams on the PSL table which should probably be dumped.


As I was writing this elog, I saw the X green lock drop abruptly. During this time, the X arm stayed locked to the IR, and the Y arm beat on the control room network analyzer did not jump (at least not by an amount visible to the eye). Toggling the X end shutter a few times, the green TEM00 lock was re-acquired, but the beatnote had moved on the control room analyzer by ~40MHz. On Friday evening, however, the X green lock held for >1 hour. Need to keep an eye on this.

Attachment 1: ALS_21082017.pdf
  13236   Mon Aug 21 21:26:41 2017 | gautam | Summary | General | Loss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time. 

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as well as possible, and then use some technique to disturb the input mode (like the dental-tooth-scraper technique from Chris Mueller's thesis)?

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? (A sketch of the mode-matching estimate is below.) The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the high-gain Thorlabs PD should be used as the transmission PD.
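A sketch of that estimate from scan peak heights (the numbers below are made up): the mode matching is the carrier TEM00 power divided by the total carrier power summed over the identified transverse modes.

peaks = {'00': 1.0, '1st order': 0.04, '2nd order': 0.01}   # fitted carrier peak heights
mode_matching = peaks['00'] / sum(peaks.values())
print(mode_matching)   # ~0.95 for these numbers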

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 

  13235   Mon Aug 21 20:11:25 2017 | johannes | Summary | General | Loss measurements plan

There are three methods we (will soon) have available to evaluate the round-trip dissipative losses in the arms that do not suffer from the ITM loss dominance:

  • DC reflection method:
    • Compare reflected light levels from [ITM only] vs [arm cavity on resonance]
  • Basler CCDs:
    • Infer large (or small) angle scatter loss with calibrated CCDs
  • Reflection ringdowns:
    • Need AS port light injection, principle is similar to DC method but better (?)

DCREFL

The DC method of comparing reflectivities has been used in the past and is relatively easy to do. After the recent vacuum troubles, the first step should be to re-perform these measurements as CDS permits (this needs some ASS functionality, and of course the MC to behave). It wouldn't hurt to know, with better certainty, the parameters this depends on, i.e. mode overlap and modulation depths. Maybe the SURF mode-spectroscopy scripts can be applied?
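For concreteness, the arithmetic behind the DC method can be sketched as follows, assuming a lossless ITM and lumping the round-trip loss into an effective ETM reflectivity. The T values and the measured ratio below are placeholders, not 40m measurements:

import numpy as np
from scipy.optimize import brentq

T1, T2 = 0.014, 15e-6    # placeholder ITM / ETM power transmissivities

def refl_dc(L_rt):
    # On-resonance DC power reflectivity of the arm cavity, with the
    # round-trip loss L_rt folded into the end-mirror amplitude reflectivity.
    r1 = np.sqrt(1 - T1)
    r2 = np.sqrt((1 - T2) * (1 - L_rt))
    return ((r1 - r2) / (1 - r1 * r2))**2

# Measured: P(arm locked on resonance) / P(ETM misaligned, ITM-only reflection)
ratio_meas = 0.990       # made-up number for illustration
f = lambda L: refl_dc(L) / (1 - T1) - ratio_meas
print('round-trip loss ~ %.0f ppm' % (1e6 * brentq(f, 1e-7, 1e-3)))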

CCDs

With the new CCD cameras calibrated, we can determine the magnitude of the large-angle scatter loss (assuming isotropic scatter) of ETMX and possibly ETMY before we vent. Can we look past ETMX/ETMY from the viewports? Then we can probably also look at the small-angle scatter of ITMX and ITMY. If not, once we open one of the chambers, there is the option of installing mirrors as close as possible to the main beam path. The easiest is probably to look at ITMX, since there is plenty of space in the XEND chamber, and the camera is already installed.
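For scale, the conversion from a calibrated CCD image to a scatter-loss number is just a solid-angle extrapolation; every number in this sketch is a placeholder, and the 2*pi factor encodes the isotropic-scatter assumption:

import numpy as np

P_img  = 5e-9     # [W] power collected on the CCD (counts x calibration)
d      = 1.0      # [m] distance from the scattering spot to the camera lens
a      = 0.025    # [m] lens aperture radius
P_circ = 100.0    # [W] circulating arm power

dOmega = np.pi * a**2 / d**2            # solid angle subtended by the lens
P_scat = P_img * (2 * np.pi / dOmega)   # total, if isotropic over the hemisphere
print('large-angle scatter loss ~ %.2f ppm' % (1e6 * P_scat / P_circ))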

ASPORT

This requires a lot of up-front work. We decided to use the spare 200 mW NPRO. It will be placed on the PSL table and injected into an optical fiber, which terminates on the AS table. The beam, once back in free space there, needs to be approximately mode-matched into the SRC ("approximately" because mode-spectroscopy). We want to be able to phaselock this secondary beam to the PSL with at least a couple of kHz of bandwidth, and also to completely extinguish the beam on time scales of a few microseconds. We will likely need to purchase a few components beyond those we can salvage from other labs; I'm still going through the inventory and will know more soon (more detailed post to follow). We also need to settle on the polarization we want to send in from the back.

 

Tentative Schedule (aggressive)

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

Week Aug 28 - Sep 3:

  • Re-evaluate modulation indices if necessary
  • Optical beat AS Port Auxiliary Laser (ASAL) - PSL
  • PLL setup
  • CCD large angle prep work

Week Sep 4 - Sep 10:

  • PLL CDS integration
  • Amplitude-modulation preparation
  • CCD large angles

Week Sep 11 - Sep 17:

  • Fiber-injection
  • AS table preliminary mode-matching
  • Faraday setup
  • CCD small angle prep work

Week Sep 18 - Sep 24:

  • ASAL amplitude switching
  • CCD small angles

Week Sep 25 - Oct 1:

  • AS port ringdowns

 

  13234   Mon Aug 21 16:35:48 2017 gautamUpdateVACUPS checkup

[steve, gautam]

At Rolf/Rich Abbott's request, we performed a check of the UPS today.

Steve believed that the UPS was functioning as it should, and that the recent accidental vent happened because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to test the UPS again.

We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord to the gate valve V1 - we are looking for a replacement right now, but it seems to be an odd size. It is cable-tied for now.]

The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.

Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to actually be initiated, which seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS show that the load is on the UPS batteries. The test itself lasts ~5 seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).

In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).

So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.


Quote:
 

Never hit O on the Vacuum UPS !

Note: the " all off " configuration should be all valves closed ! This should be fixed now.

In case of emergency, you can close V1 by disconnecting its actuating power, as shown in Atm3, provided you have pneumatic pressure of 60 PSI.

 

  13233   Mon Aug 21 14:53:32 2017 gautamUpdateVACRGA reset

[gautam, steve]

In the aftermath of the accidental vent, it looks like the RGA was shut down.

We followed the instructions in this elog to restart the RGA.

Seems to be working now, Steve says we just need to wait for it to warm up before we can collect a reliable scan.

Quote:

We have a good RGA scan now. There was no scan for 3 months.

 

  13232   Mon Aug 21 13:07:08 2017 KiraUpdatePEMtemp sensor PCB

On Friday, I cleaned up the circuit so that there are only three connections needed (+15V, -15V, GND) and a BNC connector for reading the output. Today, I added bypass capacitors. The small yellow ones are 0.1 microF ceramic, and the large ones are 100 microF electrolytic. They are used to stabilize the +15V and -15V inputs to the OP amp and minimize fluctuations, since the circuit doesn't have a regulator for stability. I have also attached the circuit diagram for the OP amp only, where (1) marks the electrolytic capacitors and (2) the ceramic ones. The temperature is still about 2 degrees off, but if that difference is constant for all temperatures in our range, we can just calibrate it out later.

Here is a helpful link on bypass capacitors (thanks to Kevin for sending it to me).

As a note, the electrolytic capacitors do have a polarity, so it is important to place them correctly (the negative side is towards the lower voltage potential, and not always towards ground).
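As an aside on why the two capacitor values are paired: comparing the idealized impedances Z = 1/(2*pi*f*C) shows the 100 microF part dominating at low frequencies, while in practice its ESR/ESL make it ineffective at MHz frequencies - that is what the small ceramic, mounted close to the pins, handles. A quick check with ideal capacitors:

import numpy as np

for f in [1e2, 1e4, 1e6]:   # Hz
    Z_bulk    = 1 / (2 * np.pi * f * 100e-6)   # 100 uF electrolytic, ideal
    Z_ceramic = 1 / (2 * np.pi * f * 0.1e-6)   # 0.1 uF ceramic, ideal
    print('%.0e Hz: 100uF -> %.3g ohm, 0.1uF -> %.3g ohm' % (f, Z_bulk, Z_ceramic))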

Quote:

Got it to work. One of the connections was faulty. I decided to check the temperature measured against a thermometer. The sensor showed 26.1 C, but the thermometer showed 25.8 C after I let them both cool down after heating them up. The temperature of the thermometer was dropping at the time of measurement, but the temperature of the sensor was not. This is still a rough version of the final sensor, so I'm not sure what exactly causes this discrepancy.

Quote:

Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that would allow them to be connected to the PCB. From the first picture, the first component is AD586, the second is AD590, and the third is LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested out this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be one, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to its adaptor, so the issue may lie there as well. I'll also check that each component is in working order as well.

Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.

Note on the resistor: I measured all the resistors and chose three that had exactly 10.00k Ohm. The voltage detected is dependent on the resistor, so if we are to take three identical copies, I ensured that there would be no error due to the resistors being a little different.

 

 

Attachment 1: IMG_20170821_124121.jpg
Attachment 2: IMG_20170821_124429~2.jpg
Attachment 3: IMG_20170821_124108.jpg
  13231   Mon Aug 21 09:08:54 2017 SteveUpdateSUStiny glitches

They are synchronised tiny glitches. They are not mechanical.
 

Attachment 1: glitches.png
  13230   Sat Aug 19 01:35:08 2017 ericqUpdateALSX Arm ALS lock

My motivation tonight was to get an up-to-date, calibrated spectrum of the out-of-loop displacement of an arm locked on ALS (using the PDH signal as the out-of-loop sensor), to compare the performance of the ALS control noise with the Izumi et al. green locking paper.

I was able to fish the PSD from the paper out of the 40m svn, but the comparison as plotted looks kind of fishy. I don't see why the noise from 10-60 Hz should be so different/worse. We updated the POX counts-to-meters conversion by looking at the Hz-calibrated ALSX signal and a ~800 Hz line injected on ETMX.
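Schematically, the update amounts to taking the ratio of the injected line's height in the two spectra, assuming the ALS signal is calibrated in Hz of the green beat (so dL = df * L / nu_green). The arm length and green frequency below are approximate, and the time series are synthetic stand-ins for the recorded channels:

import numpy as np
from scipy.signal import welch

fs, f_line = 16384, 800.0          # [Hz] sample rate, injected line frequency
L_arm, nu_g = 37.8, 563.5e12       # [m] arm length, [Hz] green laser frequency
t = np.arange(0, 64, 1 / fs)

# Stand-ins for the recorded POX11 error signal [cts] and ALSX beat [Hz]:
rng = np.random.default_rng(0)
pox  = 50 * np.sin(2 * np.pi * f_line * t) + rng.standard_normal(t.size)
alsx = 20 * np.sin(2 * np.pi * f_line * t) + rng.standard_normal(t.size)

f, P_pox = welch(pox, fs, nperseg=fs)
_, P_als = welch(alsx, fs, nperseg=fs)
i = np.argmin(np.abs(f - f_line))

# Beat Hz -> meters at the line, then the ratio gives the POX calibration
line_m = np.sqrt(P_als[i]) * L_arm / nu_g
print('POX calibration ~ %.3g m/ct' % (line_m / np.sqrt(P_pox[i])))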

Attachment 1: ALS_comparison.pdf
  13229   Fri Aug 18 23:59:53 2017 gautamUpdateALSX Arm ALS lock

[ericq, gautam]

  • I was just getting the IFO aligned, and single arm lock going, when EricQ came in and asked if we could get some ALS data.
  • ALS beats seemed fine, in particular the X-Arm. The broad hump around ~70Hz that was present in my previous ALS update was nowhere to be seen - reasons unknown.
  • Copied over /opt/rtcds/caltech/c1/scripts/YARM/Lock_ALS_YARM.py to /opt/rtcds/caltech/c1/scripts/XARM/Lock_ALS_XARM.py. Could be useful when we want to do arm cavity scans.
  • Made appropriate changes to allow ALS locking of the X arm - the testpoint inaccessibility makes things a little annoying, but for tonight we just used DQ channels in their place (or slow channels when DQ channels were not available)
  • Calibration of X arm error signal seemed off - so we fixed it by driving a line in ETMX and matching up the peaks in the ALS error signal and POX11. We then updated the gain of the filter in the CINV filter bank accordingly.
  • Got some decent data - the X arm stayed locked on ALS for >60 mins, during which time the Y arm stayed locked on POY11, and the Y green also remained locked. There was no evidence of the X arm 00 mode randomly dropping out of lock tonight.
  • EQ will update with a sick comparison plot - today we looked at the ALS noise from the perspective of the Izumi et al. green locking paper.
  • Y arm ALS noise didn't look so hot tonight - to be investigated...

Leaving LSC mode OFF for now while CDS is still under investigation


Not really related to this work: we saw that the safe.snap file for c1oaf seems to have gotten overwritten at some point. I restored the EPICS values from a known good time, and overwrote the safe.snap file.

  13228   Fri Aug 18 21:58:35 2017 gautamUpdateGeneralSUS model ASC input weirdness

I spent some time today trying to debug this issue.

Jamie and I had opened up the c1sus frontend to try to replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I might have accidentally returned the cables to the wrong positions - but all the status indicator lights suggest that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.

Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper - although I did see NaNs in some of the ETMX ASC channels as well, which are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?

GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
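For anyone who hasn't been bitten by this before, the IEEE 754 behaviour is easy to demo:

held = float('nan')     # a filter output frozen at NaN by "hold output"
gain = 0.0              # the output matrix element routing it to the SUS
print(held * gain)      # -> nan, not 0.0: NaN propagates through any product
print(held == held)     # -> False: NaN also fails ordinary equality checks,
                        #    which is plausibly why an SDF-style diff misses it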


As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.

Toggling that switch made the NaNs in all the SUS ASC channels disappear. Mysterious.

All the points above stand - the CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? It seems like a bug in any case if a model restarts with a field set to "nan".

Anyway, the problem seems to have been resolved, so I'm going to try locking and dither-aligning the arms now.

Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.

Quote:
 

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

 

Attachment 1: ITMXP.png
Attachment 2: ASC_model_outmatrix.png
Attachment 3: ASC_medm.png
Attachment 4: ASC_outMat.png
  13227   Thu Aug 17 22:54:49 2017 ericqUpdateComputersTrying to access JetStor RAID files

The JetStor RAID unit that we had been using for frame writing before the fb meltdown has some archived frames from DRFPMI locks that I want to get at. I spent some time today trying to mount it on optimus, with no success.

The unit was connected to fb via a SCSI cable to a SCSI-to-PCI card inside of fb. I moved the card to optimus, and attached the cable. However, no mountable device corresponding to the RAID seems to show up anywhere.

The RAID unit can tell that it's hooked up to a computer, because when optimus restarts, the RAID event log says "Host Channel 0 - SCSI Bus Reset."

The computer is able to get some sort of signal from the RAID unit, because when I change the SCSI ID, the syslog will say 'detected non-optimal RAID status'.

The PCI card is ID'd fine in lspci as "06:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev c1)"

'lsscsi' does not list anything related to the unit.

Using 'mpt-status -p', which is somehow associated with this kind of thing, returns the disheartening output:

Checking for SCSI ID:0
Checking for SCSI ID:1
Checking for SCSI ID:2
Checking for SCSI ID:3
Checking for SCSI ID:4
Checking for SCSI ID:5
Checking for SCSI ID:6
Checking for SCSI ID:7
Checking for SCSI ID:8
Checking for SCSI ID:9
Checking for SCSI ID:10
Checking for SCSI ID:11
Checking for SCSI ID:12
Checking for SCSI ID:13
Checking for SCSI ID:14
Checking for SCSI ID:15
Nothing found, contact the author
 
I don't know what to try at this point.
  13226   Thu Aug 17 17:33:01 2017 gautamUpdateSUSMC1 <--> MC3 switched back

That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last ms of lock, when it was all messed up. Instead, it would be best to have a slow (~mHz) relief script that takes the WFS control signals and puts them onto the MC SUS sliders. This would, however, re-align the MC to the input beam rather than the input beam to the MC, which is not the best idea.
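For concreteness, a minimal sketch of such a relief loop, whatever its merits - the channel names, gain, and step time are all hypothetical:

import time
from epics import caget, caput   # pyepics

WFS_OUT = 'C1:IOO-WFS1_PIT_OUTPUT'   # hypothetical servo-output readback
SLIDER  = 'C1:SUS-MC2_PIT_BIAS'      # hypothetical alignment slider
GAIN, DT = 1e-3, 10.0                # offload fraction per step, seconds per step

while True:
    out = caget(WFS_OUT)
    caput(SLIDER, caget(SLIDER) + GAIN * out)   # bleed WFS output onto the slider
    time.sleep(DT)                              # keeps the relief at ~mHz bandwidth

A real script would also need output limits and a kill condition; this is just the core loop.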

Quote:

Seems like this modification didn't really work.

 

  13225   Thu Aug 17 11:17:49 2017 gautamUpdateSUSMC1 <--> MC3 switched back

Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.

Quote:

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days; let's see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and the WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfsoff script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

 

 

Attachment 1: MC1_misaligned.png
Attachment 2: MC1_glitch.png
  13224   Thu Aug 17 10:41:58 2017 KiraUpdatePEMtemp sensor PCB

Got it to work. One of the connections was faulty. I decided to check the temperature measured against a thermometer. The sensor showed 26.1 C, but the thermometer showed 25.8 C after I let them both cool down after heating them up. The temperature of the thermometer was dropping at the time of measurement, but the temperature of the sensor was not. This is still a rough version of the final sensor, so I'm not sure what exactly causes this discrepancy.
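For reference, the readout conversion for this circuit (AD590 sourcing 1 uA per kelvin into the 10k resistor, and ignoring any fixed offset from the voltage reference) is just:

def ad590_temp_C(v_out, R=10e3):
    # AD590 sources 1 uA/K, so V = T[K] * 1e-6 * R across the resistor
    return v_out / (1e-6 * R) - 273.15

print(ad590_temp_C(2.991))   # -> ~25.95 C; a constant offset like the 0.3 C
                             # seen here can be absorbed into the calibration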

Quote:

Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that would allow them to be connected to the PCB. From the first picture, the first component is AD586, the second is AD590, and the third is LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested out this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be one, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to its adaptor, so the issue may lie there as well. I'll also check that each component is in working order as well.

Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.

Note on the resistor: I measured all the resistors and chose three that had exactly 10.00k Ohm. The voltage detected is dependent on the resistor, so if we are to take three identical copies, I ensured that there would be no error due to the resistors being a little different.

 

Attachment 1: IMG_20170817_095917.jpg
  13223   Thu Aug 17 08:42:27 2017 SteveUpdatePSLPSL HEPA

The PSL HEPA was running noisily at 100V - the bearing is wearing out. I turned it down to 30V, where it is quiet.

  13222   Wed Aug 16 20:24:23 2017 gautamUpdateALSFiber ALS

Today, with Johannes' help, I cleaned the fiber tips of the photodiodes. The effect of the cleaning was dramatic - see Attachments #1-4, which show the X beat PD with axial illumination, the X beat PD with oblique illumination, the Y beat PD with axial illumination, and the Y beat PD with oblique illumination. They look much cleaner now, and the feature that looked like a scratch has vanished.

The cleaning procedure followed was:

  • Blow clean air over the fiber tip
  • First, we tried cleaning with the Q-tip like tool, but the results weren't great. The way to use it is to dip the tip in the cleaning solvent for a few seconds, hold the tip to the fiber taking into account the angled cut, and apply 10 gentle quarter turns.
  • Next, we tried cleaning with the wipes. We peeled out an approximately 5" section of the wipe, and laid it out on the table. We then applied cleaning solvent liberally on the central area where we were sure we hadn't touched the wipe. Then you just drag the fiber tip along the soaked part of the wipe. If you get the angle exactly right, the fiber glides smoothly along the surface, but if you are a little misaligned, you get a scratchy sensation. 
  • Blow dry and inspect.

I will repeat this procedure for all fiber connections once I start putting the box back together - I'm almost done with the new box, just waiting on some hardware to arrive.

 

Quote:

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

 

Attachment 1: IMG_7476.JPG
Attachment 2: IMG_7477.JPG
Attachment 3: IMG_7478.JPG
Attachment 4: IMG_7479.JPG
  13221   Wed Aug 16 20:01:03 2017 gautamUpdateGeneralSUS model ASC input weirdness

I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally). I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the inputs to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).

Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

GV Edit, 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.

Attachment 1: ITMX_ASC.png
  13220   Wed Aug 16 19:50:17 2017 gautamUpdateSUSMC1 <--> MC3 switched back

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days; let's see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and the WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfsoff script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.
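In pseudo-form, the hold/unhold pair does something like the following - the real scripts live in /opt/rtcds/caltech/c1/scripts/MC/WFS, and the switch channels here are schematic (the actual CDS filter-module switches are bitmask fields):

from epics import caput   # pyepics

# Schematic list of the WFS servo filter banks -- check the real names
FMS = ['C1:IOO-WFS1_PIT', 'C1:IOO-WFS1_YAW',
       'C1:IOO-WFS2_PIT', 'C1:IOO-WFS2_YAW']

def mcwfshold():
    for fm in FMS:
        caput(fm + '_HOLD', 1)   # freeze the output at its last value
        caput(fm + '_INEN', 0)   # turn the servo input off

def mcwfsunhold():
    for fm in FMS:
        caput(fm + '_INEN', 1)   # re-enable the input...
        caput(fm + '_HOLD', 0)   # ...and release the held output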

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

Quote:

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

 

  13219   Wed Aug 16 18:50:58 2017 JamieUpdateCDSfront-end/DAQ network down for kernel upgrade, and timing errors
Quote:

The remaining issues are:

  • RFM network down. The IOP models on all hosts on the RFM network are not detecting their RFM cards. Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM. He and Rolf are looking into it for us.

RFM network is back!  Everything green again.

Use of RFM has been turned off in the advLigoRTS trunk in favor of the new long-range PCIe networking being developed for the sites. Rolf provided a single-line patch that re-enables it:

controls@c1sus:/opt/rtcds/rtscore/trunk 0$ svn diff
Index: src/epics/util/feCodeGen.pl
===================================================================
--- src/epics/util/feCodeGen.pl    (revision 4447)
+++ src/epics/util/feCodeGen.pl    (working copy)
@@ -122,7 +122,7 @@
 $diagTest = -1;
 $flipSignals = 0;
 $virtualiop = 0;
-$rfm_via_pcie = 1;
+$rfm_via_pcie = 0;
 $edcu = 0;
 $casdf = 0;
 $globalsdf = 0;
controls@c1sus:/opt/rtcds/rtscore/trunk 0$

This patch was applied to the RTS source checkout we're using for the FE builds (/opt/rtcds/rtscore/trunk, which is r4447, and is linked to /opt/rtcds/rtscore/release). The following models that use RFM were re-compiled, re-installed, and re-started:

  • c1x02
  • c1rfm
  • c1x03
  • c1als
  • c1x01
  • c1scx
  • c1asx
  • c1x05
  • c1scy
  • c1tst

The re-compiled models now see the RFM cards (dmesg log from c1ioo):

[24052.203469] c1x03: Total of 4 I/O modules found and mapped
[24052.203471] c1x03: ***************************************************************************
[24052.203473] c1x03: 1 RFM cards found
[24052.203474] c1x03:     RFM 0 is a VMIC_5565 module with Node ID 180
[24052.203476] c1x03: address is 0xffffc90021000000
[24052.203478] c1x03: ***************************************************************************

This cleared up all RFM transmission error messages.

CDS upstream are working to make this RFM usage switchable in a reasonable way.
