Last night I switched off all the fans in the lab, and we have reached the lowest beatnote noise ever recorded.
Latest BN Spectrum: CTN_Latest_BN_Spec.pdf
Daily BN Spectrum: CTN_Daily_BN_Spec.pdf
CTN: 2551 : Comparison between Out of loop vs In loop RIN
We realized that the half-wave plates before the EOAMs probably had no real function in the setup, so we proceeded to remove the one from the north path at (39,121), i.e., row 39, column 121 of the optical layout.
After this was done, we had to re-adjust the quarter-wave plate (39,112) after the EOAM (39,115) to make sure that the EOAM was still operating about the 50% transmission point. The beam going into the PMC was also re-aligned by adjusting the two mirrors at (32,92) and (37,92). Finally, the mirror at (43,88) was adjusted to align the beam reflected from the PMC onto the photodiode.
We were able to re-lock the north PMC and north cavity after increasing the power in that path by adjusting some waveplates.
As may be expected, the sign of the ISS feedback had to be inverted. The ISS actuates on the EOAM; removing the half-wave plate would have switched the circularity of the polarization of the beam entering the PBS at (39,110), so the sign of the voltage that would have previously caused the transmission to increase would now cause it to decrease and vice versa.
Today, I added a new out-of-loop transmission PD (Thorlabs PDA10CS) for the south path. This will be helpful in future measurements of RIN coupling to beatnote noise. This PD is added at (1, 40) using the dumped light. The optical layout will be updated in a few days. I've confirmed that this photodiode is reading the same RIN as read earlier in CTN:2555. I've also connected the Acromag channel for South Transmission DC to this photodiode, so the transmitted power channels and the mode matching percentage channel of the South Cavity are meaningful again.
Today, I added a new out-of-loop transmission PD (Thorlabs PDA10CS) for the north path. This will be helpful in future measurements of RIN coupling to beatnote noise. This PD is added at (8, 42) using the dumped light. The optical layout will be updated in a few days. I've also connected the Acromag channel for North Transmission DC to this photodiode, so the transmitted power channels and the mode matching percentage channel of the North Cavity are meaningful again.
ISS gain for the north side has been increased to 2x10000, since half of the light is now being used by the OOL PD.
On March 13th around 7:30 pm, I started a super measurement of the beatnote spectrum for over 2 days. The script superBNSpec.py took a beatnote spectrum every 15 minutes for a total of 250 measurements. The experiment was stable throughout the weekend with no lock loss or drift of the beatnote frequency. All data with respective experimental configuration files are present in the Data folder. HEPA filters were on during this measurement.
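For reference, the scheduling logic of such a script can be sketched as below (the measurement function and bookkeeping are hypothetical stand-ins; the real superBNSpec.py also saves the experimental configuration alongside each spectrum):

```python
import time

def run_super_measurement(take_spectrum, n_measurements=250, interval_s=15 * 60):
    """Take a spectrum every `interval_s` seconds, `n_measurements` times.

    take_spectrum: callable that triggers one beatnote spectrum measurement
    and returns its data; in the real script this would also save the data
    and the experimental configuration files to the Data folder.
    """
    results = []
    for i in range(n_measurements):
        t0 = time.time()
        results.append(take_spectrum())
        # Sleep only for the remainder of the interval after the measurement,
        # so the cadence stays close to one spectrum per interval.
        leftover = interval_s - (time.time() - t0)
        if leftover > 0 and i < n_measurements - 1:
            time.sleep(leftover)
    return results
```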
I found out that since the slow PID of the FSS uses the cavity reflection DC level, it is important that this path remain rigid and undisturbed by any testing. In CTN:2559, when Shruti and I were realigning the North PMC, we used a BNC-T (which was permanently attached to the rack) to pick off the North cavity reflection DC signal. While repeatedly connecting and disconnecting the cable, we loosened the BNC-T itself, and the connection to the Acromag card became buggy.
I have fixed this by connecting the Acromag cards directly to the cable coming from the table behind the rack. Now connections/disconnections at the front of the rack shouldn't disturb this vital connection. Still, we need to be careful about this in the future. I had no idea what went wrong and was about to start a full-scale investigation into the PMC and FSS loops; thankfully, I figured out this problem before that.
It seems like this was never properly solved. On Friday, the same problem was back again. After trying to relock the PMC and FSS on the north path without any luck, I switched the laser to standby mode, restarted it after a minute, and the problem went away. I have a strong suspicion that this problem has something to do with the laser temperature controller on the laser head itself. During the unstable state, I see a spike that starts a large surge in the error signal of the FSS loop, occurring every 1 second (so something at 1 Hz). The loop actually kills the spike successfully within 600-700 ms, but then it comes back again. I'm not sure what's wrong, but if this goes on and a lockdown is enforced due to the coronavirus, I won't even be able to observe the experiment from a distance :(. I have no idea what went wrong here.
Today, almost magically, the North Path was found to be locking nicely without the noise. I was waiting for the beatnote to reach the detector's peak frequency when, in about 40 min, it started going haywire again. No controls were changed to trigger any of this and, as far as I know, nobody entered the lab. Something is flaky, or suddenly some new environmental noise is coupling into the experiment. Attached are the striptool screenshot of the incident and the dumped data. In the attached screenshot, channel names are self-explanatory; all units on the y axis are mW on the left plot (note the shifted region, but same scale, of North Cavity Transmission Power) and MHz on the right plot for Beatnote Frequency.
I know for sure that everything up to the PMC is good, since when only the PMC is locked, I do not see huge low-frequency noise in the laser power transmitted or reflected from the PMC. But whatever this effect is, it makes the FSS loop unstable; eventually it unlocks, then locks again, and repeats.
Since this morning at least, I'm not seeing the North Path instability (see CTN:2565), and the beatnote is stable and calm at the setpoint. Maybe the experiment just needed some distance from me for a few days.
So today I took a general single-shot measurement, and even with the HEPA filters on at 'Low', the measurement is the lowest ever, especially in the low-frequency region. This might be due to reduced seismic activity around the campus. I have now started another super beatnote measurement, which takes a measurement every 10 min if the transmission power from the cavities looks stable enough to the code.
There is a new broad bump, though, around 250-300 Hz which was not present before. But I can't really do noise hunting now, so I will just take data until I can get to the experiment.
CTN:2565: North path's buggy nature NOT solved
Today at 8:30 am sharp, the perfectly fine running experiment went bad again. The North path became buggy again, with strong low-frequency oscillations in almost all of the loops except the temperature control of the vacuum can. The temperature control of the beatnote frequency saw a step change and hence went into oscillations of about 65 kHz.
Not sure what went wrong, but 8:30 am might be the clue here. I can't change or test anything until I can go to the lab.
On Wednesday around noon, the North Path returned to stability. I captured this process by going back to the FB data. The return to stability is not as instantaneous as the other way round. Also, in this process, the path becomes stable, then unstable, then stable again, and so on, with the duration of instability decreasing until it vanishes. Attached are plots of about 14 hours of crucial channels. If anyone has any insights on what might be happening, let me know.
I last did this analysis with a bare-bones method in CTN:2439. Now I've improved it considerably. Following are some salient features:
Final results are calculated for data taken at 3 am on March 11th, 2020, as it was found to be the lowest-noise measurement so far:
Bulk Loss Angle: (8.8 ± 0.5) × 10^-4
Shear Loss Angle: (2.6 ± 2.85) × 10^-7
Figures of the analysis are attached. I would like to know if I am doing something wrong in this analysis or if people have any suggestions to improve it.
The measurement instance used was taken with the HEPA filter on, but at low. I expect to measure even lower noise with the filters completely off and the ISS optimized, as soon as I can go back to the lab.
Other methods tried:
Mentioning these for the sake of completeness.
Wow, very suggestive ASD. A couple questions/thoughts/concerns:
It's typically much easier to overestimate than underestimate the loss angle with a ringdown measurement (e.g., you underestimated clamping loss and thus are not dominated by material dissipation). So, it's a little surprising that you would find a higher loss angle than Penn et al. That said, I don't see a model uncertainty for their dilution factors, which can be tricky to model for thin films.
Yeah but this is the noise that we are seeing. I would have liked to see lower noise than this.
If you're assuming a flat prior for bulk loss, you might do the same for shear loss. Since you're measuring shear losses consistent with zero, I'd be interested to see how much if at all this changes your estimate.
Since I have only one number (the noise ASD) and two parameters (bulk and shear loss angle), I can't faithfully estimate both. The frequency dependence of the noise due to the two loss angles is also too similar to show any distinguishing change. I tried giving a uniform prior to the shear loss angle, and the most likely outcome always hit the upper bound (decreasing the estimate of the bulk loss angle). For example, when the uniform prior on shear extended up to 1 × 10^-5, the most likely result became φ_bulk = 8.8 × 10^-4, φ_shear = 1 × 10^-5. So it doesn't make sense to accept orders-of-magnitude disagreement with the Penn et al. results on the shear loss angle just to get slightly more agreement on the bulk loss angle. Hence I took their result for the shear loss angle as a prior distribution. I'd be interested to know if there are alternate ways to do this.
I'm also surprised that you aren't using the measurements just below 100 Hz. These seem to have a spectrum consistent with Brownian noise in the bucket between two broad peaks. Were these rejected in your cleaning procedure?
Yeah, they got rejected in the cleaning procedure because of too much fluctuation between neighboring points. But I wonder if that's because my empirically found threshold is good only for the 100 Hz to 1 kHz range, since the number of averages is smaller in lower frequency bins. I'm using a modified version of Welch to calculate the PSD (see the code here), which runs the welch function with a different nperseg for different ranges of frequencies to get the maximum averaging possible with the given data for each frequency bin.
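The per-band idea can be sketched as follows (the band edges and nperseg values are illustrative only; see the linked code for the actual implementation):

```python
import numpy as np
from scipy.signal import welch

def banded_welch(x, fs, bands):
    """PSD estimate using a different nperseg for different frequency bands.

    bands: list of (f_lo, f_hi, nperseg) tuples.  Longer segments give finer
    frequency resolution at low frequency; shorter segments give more
    averages (lower estimator variance) at high frequency.
    """
    f_out, p_out = [], []
    for f_lo, f_hi, nperseg in bands:
        f, p = welch(x, fs=fs, nperseg=nperseg)
        keep = (f >= f_lo) & (f < f_hi)   # keep only this band's bins
        f_out.append(f[keep])
        p_out.append(p[keep])
    return np.concatenate(f_out), np.concatenate(p_out)
```

For example, with 60 s of data one can use 4x longer segments below 100 Hz for resolution and shorter segments above it for more averages.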
Is your procedure for deriving a measured noise Gaussian well justified? Why assume Gaussian measurement noise at all, rather than a probability distribution given by the measured distribution of ASD?
The time-series data of 60 s for each measurement is about 1 GB in size. Hence, we delete it after running the PSD estimation, which outputs the median and the 15.865 and 84.135 percentile points. I can try preserving the time-series data for a few measurements to see what the distribution looks like, but I assumed it to be Gaussian since there are 600 samples in the range 100 Hz to 1 kHz, so I expected the central limit theorem to have kicked in by that point. Taking the median is important, as the median is agnostic to outliers and gives a better estimate of the true mean in the presence of glitches.
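A minimal sketch of the summary statistics described above, using a toy ensemble in place of the real PSD estimates:

```python
import numpy as np

def robust_asd_stats(asd_samples):
    """Summarize an ensemble of ASD estimates per frequency bin.

    asd_samples: array of shape (n_estimates, n_freq_bins).
    Returns the median and the 15.865 / 84.135 percentiles, i.e. the points
    that would correspond to mean -/+ 1 sigma for a Gaussian distribution.
    The median is preferred over the mean because it is insensitive to
    occasional glitches in individual estimates.
    """
    med = np.median(asd_samples, axis=0)
    lo = np.percentile(asd_samples, 15.865, axis=0)
    hi = np.percentile(asd_samples, 84.135, axis=0)
    return med, lo, hi
```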
It's not clear to me where your estimated Gaussian is coming from. Are you making a statement like "given a choice of model parameters \phi_bulk and \phi_shear, the model predicts a measured ASD at frequency f_m will have mean \mu_m and standard deviation \sigma_m"?
The estimated Gaussian comes out of a complex noise budget calculation code that uses the uncertainties package to propagate uncertainties in the known variables of the experiment, and measurement uncertainties of some of the estimated curves, to the final total noise estimate. As I explained in the "other methods tried" section of the original post, the technically correct method of estimating the observed sample mean and sample standard deviation would be to use Gaussian and χ² distributions for them, respectively. I tried doing this, but my data is too noisy for the different frequency bins to agree with each other on an estimate, resulting in zero likelihood over all of the parameter space I'm spanning. This suggests that the data is not well shaped according to the frequency dependence required for this method to work. So I'm not making that statement. The statement I'm making is: "given a choice of model parameters φ_bulk and φ_shear, the model predicts a Gaussian distribution of total noise, and the likelihood function calculates the overlap of this estimated probability distribution with the observed probability distribution."
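For two Gaussians the overlap integral has a closed form, ∫ N(x; μ1, σ1²) N(x; μ2, σ2²) dx = N(μ1 − μ2; 0, σ1² + σ2²), so a per-bin likelihood of this kind can be sketched as below (this is my reading of the description above, not the actual analysis code):

```python
import numpy as np

def gaussian_overlap(mu1, sig1, mu2, sig2):
    """Overlap integral of two Gaussian PDFs (closed form):
    integral N(x; mu1, sig1^2) * N(x; mu2, sig2^2) dx
      = N(mu1 - mu2; 0, sig1^2 + sig2^2)
    """
    var = sig1 ** 2 + sig2 ** 2
    return np.exp(-0.5 * (mu1 - mu2) ** 2 / var) / np.sqrt(2 * np.pi * var)

def log_likelihood(model_mu, model_sig, meas_mu, meas_sig):
    """Sum of log overlaps over frequency bins (arrays of equal length):
    the model's predicted Gaussian vs the measured Gaussian, bin by bin."""
    return np.sum(np.log(gaussian_overlap(model_mu, model_sig, meas_mu, meas_sig)))
```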
I found taking a deep dive into Feldman Cousins method for constructing frequentist confidence intervals highly instructive for constructing an unbiased likelihood function when you want to exclude a nonphysical region of parameter space. I'll admit both a historical and philosophical bias here though :)
Thanks for the suggestion. I'll look into it.
Can this method ever reject the hypothesis that you're seeing Brownian noise? I don't see how you could get any distribution other than a half-gaussian peaked at the bulk loss required to explain your noise floor. I think you instead want to construct a likelihood function that tells you whether your noise floor has the frequency dependence of Brownian noise.
Yes, you are right. I don't think this method can ever reject the hypothesis that I'm seeing Brownian noise. I do not see any alternative, though, as far as I could think of. The technically correct method, as I mentioned above, would favor the same frequency dependence, which we are not seeing in the data :(. Hence, that likelihood estimation method rejected the hypothesis that we are seeing Brownian noise and gave zero likelihood for all of the parameter space. Follow-up questions:
I talked to Kevin and he suggested a simpler, straightforward Bayesian analysis for the result. Following is the gist:
This gives us:
with the shear loss angle taken from Penn et al., which is 5.2 × 10^-7. The limits are the 90% confidence interval.
Now this isn't as good a result as we would want, but it is the best we can report properly without garbage assumptions or tricks. I'm trying to see if we can get a lower-noise readout in the next few weeks; otherwise, this is it, and the CTN lab will rest afterward.
I realized that using only the cleaned frequencies together with a condition that the estimated power never goes above them at those places is double conditioning. In fact, we can just look at a wide frequency band, between 50 Hz and 600 Hz, and use all data points with a hard ceiling condition that the estimated noise never goes above the measured noise in any frequency bin in this region. Surprisingly, this method estimates a lower loss angle with more certainty. This happened because 1) more data points are being used, and 2) as Aaron pointed out, there were many useful data bins between 50 Hz and 100 Hz. I'm putting this result up separately to understand the contrast in the results. Note that we are still using a uniform prior for the bulk loss angle and the shear loss angle value from Penn et al.
The estimate of the bulk loss angle with this method is:
with the shear loss angle taken from Penn et al., which is 5.2 × 10^-7. The limits are the 90% confidence interval. This result contains the entire uncertainty region from Penn et al. within it.
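The hard-ceiling estimation described in this and the previous post can be sketched as follows (the Brownian-noise model here is a toy stand-in for the full noise budget, and the weighting inside the allowed region is an assumption of this sketch):

```python
import numpy as np

def ceiling_posterior(phi_grid, model_asd, measured_asd):
    """Posterior over the bulk loss angle with a uniform prior and a hard
    ceiling: any phi whose predicted ASD exceeds the measured ASD in ANY
    bin of the selected band gets zero probability.  How the allowed
    region is weighted (here: closeness to the data) is an assumption;
    the real notebooks define their own likelihood.

    model_asd: callable phi -> predicted ASD array on the same bins.
    """
    post = np.zeros_like(phi_grid, dtype=float)
    for i, phi in enumerate(phi_grid):
        pred = model_asd(phi)
        if np.all(pred <= measured_asd):
            post[i] = np.exp(-0.5 * np.sum((measured_asd - pred) ** 2))
    if post.sum() > 0:
        post /= post.sum()  # normalize to a discrete posterior
    return post
```

With a Brownian-noise-like model (ASD ∝ √φ), the most probable allowed φ is the largest one whose prediction still sits below the data in every bin.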
Which is the fairer technique: this post or CTN:2574?
Today we measured even lower beatnote frequency noise. I reran the two notebooks and am attaching the results here:
This method only selects a few frequency bins where the spectrum is relatively flat and estimates the loss angle based on these bins only. It rejects any loss angle value that results in estimated noise greater than the measured noise in the selected frequency bins.
This method uses all frequency bins between 50 Hz and 600 Hz to estimate the loss angle value. It rejects any loss angle value that results in estimated noise greater than the measured noise in the selected frequency bins.
I'm listing the first few comments from Jon that I implemented:
Note that this allows estimated noise to be more than measured noise in some frequency bins.
There are more things that Jon suggested which I'm listing here:
I've implemented all the proper analysis norms that Jon suggested, as mentioned in the previous post. Following is the gist of the analysis:
The analysis is attached. This result will be displayed at the upcoming DAMOP conference and will be updated in the paper if any lower measurement is made.
Thu Jun 4 09:17:12 2020 Result updated. Check CTN:2580.
What is the effective phi_coating? I think usually people present bulk/shear + phi_coating.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq. 1), we would have an effective coating loss angle of:
This is worse than both Tantala (3.6e-4) and Silica (0.4e-4) currently in use at AdvLIGO.
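For reference, the weighted-average form of such an effective loss angle can be sketched as below (whether Yam et al. Eq. 1 weights by thickness alone or also by Young's modulus should be checked against gwinc; both options are included here as an assumption):

```python
import numpy as np

def effective_coating_loss(d, phi, Y=None):
    """Weighted effective loss angle of a multilayer coating.

    d:   layer thicknesses
    phi: per-layer loss angles
    Y:   optional per-layer Young's moduli; if given, layers are
         additionally weighted by Y_j (elastic-energy weighting).
         This weighting choice is an assumption of this sketch; check
         Yam et al. Eq. 1 / gwinc for the exact form used there.
    """
    d, phi = np.asarray(d, float), np.asarray(phi, float)
    w = d if Y is None else d * np.asarray(Y, float)
    return np.sum(w * phi) / np.sum(w)
```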
Also, I'm unsure now whether our definition of bulk and shear loss angle is truly the same as the definitions of Penn et al., because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
I realized that in my noise budget I was using a higher incident power on the cavities, which was the case earlier. I have changed the code so that it now updates the photothermal noise and PDH shot noise according to the DC power measured during the experiment. The updated result for the best measurement yet brings down our estimate of the bulk loss angle a little bit.
The analysis is attached.
If all layers have an effective coating loss angle, then using gwinc's calculation (Yam et al. Eq.1), we would have an effective coating loss angle of:
Also, I'm unsure now if our definition of Bulk and Shear loss angle is truly the same as the definitions of Penn et al. because they seem to get an order of magnitude lower coating loss angle from their bulk loss angle.
I figured out why folks before me had to use a different definition of the effective coating coefficient of thermal expansion (CTE): a simple weighted average of the individual CTE of each layer, instead of a weighted average of CTEs modified for the presence of the substrate. The reason is that the modification factor is incorporated in other parameters, gamma_1 and gamma_2, in Farsi et al. Eq. A43. So they had to use a different definition of effective coating CTE since Farsi et al. treat it differently. That's my guess anyway, since thermo-optic cancellation was demonstrated experimentally.
The following points relate to the previously used noisebudget.ipynb file.
The following points relate to the new code at https://git.ligo.org/cit-ctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented.
The new noise budget code is ready. However, there are a few discrepancies that still need to be sorted out.
The code can be found at https://git.ligo.org/cit-ctnlab/ctn_noisebudget/tree/master/noisebudget/ObjectOriented
Please look at How_to_use_noiseBudget_module.ipynb for a detailed description of all calculations, the code structure, and how to use the code.
In the previous code, for the thermoelastic contribution to photothermal noise, the code used an average of the coefficients of thermal expansion (CTE) of each layer, weighted by their thicknesses. However, in the same code, for the thermoelastic contribution to coating thermo-optic noise, the effective CTE of the coating is calculated using Evans et al. Eq. (A1) and Eq. (A2). These two values actually differ by about a factor of 4.
Currently, I have used the same effective CTE for the coating everywhere (the one from Evans et al.), and hence in the new code the predicted photothermal noise is higher. Every other parameter in the calculations matches between the old and new code. But there is a problem with this too: the coating thermoelastic and coating thermorefractive contributions to photothermal noise no longer cancel each other out as they did before.
So either there is an explanation for the previous code's choice of a different effective CTE for the coating, or something else is wrong in my code. I need more time to look into this. Suggestions are welcome.
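The two definitions being compared can be made explicit with a small sketch (the substrate-modification factors themselves, Evans et al. Eq. (A2), are not reproduced here and enter as an input):

```python
import numpy as np

def effective_coating_cte(d, alpha, mod=None):
    """Thickness-weighted coating CTE.

    d:     layer thicknesses
    alpha: per-layer CTEs
    mod:   optional per-layer substrate-modification factors (Evans et al.
           Eq. (A2); not reproduced here).  With mod=None this is the simple
           weighted average used in the old photothermal calculation; with
           mod given it is the Evans-style effective CTE.  For our coating
           the two differ by roughly a factor of 4.
    """
    d = np.asarray(d, float)
    a = np.asarray(alpha, float)
    if mod is not None:
        a = a * np.asarray(mod, float)
    return np.sum(d * a) / np.sum(d)
```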
The effective coating CTR in the previous code was 7.9e-5 1/K, and in the new code it is 8.2e-5 1/K. Since this value is calculated after a lot of steps, the difference might be round-off error, as the initial values are slightly off. I need to check this calculation as well to make sure everything is right. The problem is that it is hard to understand how it is done in the previous code, which used matrices for the complex-valued calculations. In the new code, I just used the ucomplex class and followed the paper's calculations. I need more time to look into this too. Suggestions are welcome.
I've just finished a preliminary draft of the CTN paper. This is of course far from final, and most figures are placeholders. This is my first time writing a paper alone, so expect a lot of naive mistakes. As of now, I have tried to put in as much information as I could think of about the experiment, calculations, and analysis.
I would like organized feedback through the issue tracker in this repo:
Please feel free to contribute to the writing as well. Some contribution guidelines are mentioned in the repo README.
Automatically updating results from now on:
I added the possibility of a power-law dependence of the bulk loss angle on frequency. This model of course matches our experimental results better, but I am honestly not sure whether this much slope makes any sense.
Auto-updating best measurement, analyzed allowing a power-law slope on the bulk loss angle:
RXA: I deleted this inline image since it seemed to be slowing down ELOG (2020-July-02)
I added the script SRIMD.py in 40m/labutils/netgpibdata, which allows one to measure the second-order intermodulation product while sweeping the modulation strength, modulation frequency, or the intermodulation frequency. I used this to measure the non-linearity of an SR560 in DC coupling mode with a gain of 1 (so just a buffer).
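The effect SRIMD.py measures can be demonstrated numerically on a toy quadratic nonlinearity: two unit tones at f1 and f2 passed through y = x + a2·x² produce a component of amplitude a2 at f1 + f2 (this is a numerical illustration, not the script itself):

```python
import numpy as np

def imd2_amplitude(f1, f2, a2, fs=100_000, duration=1.0):
    """Second-order intermodulation product of y = x + a2*x**2 driven by
    two unit-amplitude tones at f1 and f2 (Hz).  Since
    2*cos(w1 t)*cos(w2 t) = cos((w1+w2)t) + cos((w1-w2)t),
    the quadratic term puts an amplitude-a2 component at f1 + f2.
    Returns the FFT amplitude at f1 + f2 (f1, f2 should be integer
    multiples of 1/duration so they fall on exact bins).
    """
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    y = x + a2 * x ** 2                       # weak quadratic nonlinearity
    spec = np.abs(np.fft.rfft(y)) * 2 / n     # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return spec[np.argmin(np.abs(freqs - (f1 + f2)))]
```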
Edit Wed Feb 17 15:34:40 2021:
Adding a self-measurement of the SR785 for self-induced intermodulation in Attachment 3 and Attachment 4. From these measurements at least, it doesn't seem like the SR785's own intermodulation dominated that presented by the SR560 anywhere.
Tuning range of the 80 MHz VCO used for the frequency stabilization:
Measured frequency tuning vs. wideband input of the VCO for calibration of measured spectra. Graph coming soon...
Rana: measured spectra? Has there actually been a beat frequency measured after all these years???
No, I mean I re-measured the slope of the frequency modulation input of the VCO with a lot more points. The coefficient (MHz/V) changes a lot over the input range from -5 V to 5 V (internal gain of 2). We need this to calibrate the spectrum of the feedback signal (into the VCO) for the 2nd cavity.
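Since the coefficient changes a lot over the range, the calibration has to be applied locally. A sketch of turning the re-measured points into a voltage-dependent MHz/V coefficient (function and variable names here are hypothetical):

```python
import numpy as np

def vco_calibration(v, f_mhz, order=3):
    """Fit measured VCO output frequency vs tuning voltage and return
    (poly, slope), where slope(v) gives the local tuning coefficient in
    MHz/V -- the number needed to calibrate the feedback-signal spectrum
    into frequency noise at each operating point.
    """
    poly = np.polynomial.Polynomial.fit(v, f_mhz, order)
    return poly, poly.deriv()
```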
Pictures taken of the existing power supply.
3123 card (16-bit input), 25-pin D-sub connector
4116 card (16-bit output), 50-pin connector
QPD channels for RefCav beam pointing measurements:
C3:PSL-RCAV_QPDX : X
C3:PSL-RCAV_QPDY : Y
C3:PSL-RCAV_QPDSUM : SUM
I measured the TF of the PDH box, D0901351 (the one we have was modified). This box sends the signal to the VCO.
The SR785 measures at low frequency (1 Hz to 100 kHz).
The 4395A measures at high frequency (10 Hz to 1 MHz).
The integrator switch of the PDH box is turned off; this will be calculated later. The gain is set at 10.
The magnitude as measured by the 4395A is corrected for the impedance mismatch by x1.2. (The 4395A and the PDH box have 50 ohm impedance for both inputs and outputs.)
This data will be used for control loop model later.
Blue: data from the SR785.
Green: data from the 4395A; I did not use the power splitter to split the signal from the source.
Red: data from the 4395A, with a power splitter to divide the power from the source. (The power had to be increased to -30 dB.)
The first plot is the magnitude of the TF, and the second plot is the phase shift: the usual Bode plot.
In order to gain more S/N ratio, I modified the existing AD590 readout box a little bit. I assumed that we want to operate the cavity at 35 C (which is not too high, but well above room temperature or the temperature of an additional temperature-stabilized box around both cavities). The required range for shifting the cavities is ~1 FSR; a little more for each cavity would be better, as we can shift both independently.
df~156MHz /K and 1 FSR~740MHz
This corresponds to ~4.75 K/FSR that we have to shift.
For testing purposes it might be helpful to have more than that: e.g., if we limit the total range to, say, 6 K, we might end up at the end of the range and run into trouble as soon as some disturbance from outside (e.g., removing part of the insulation, say an end cap) shifts the whole thing off the end of the range. As soon as that happens, the servo would go crazy.
So I think we should go for a 10 K range, centered around 35 C, i.e., from 30 C to 40 C. I modified the box for that, so the transimpedance resistors now have a value of 29.4 kOhm, which gives us ~9.21 V for 40 C at the output of this stage.
In order to supply it from an independent power supply to reduce our current ground loops, I've chosen a WM071, the same as we use for the PDH boxes. As they come only in +/-15 V, I had to change the voltage regulators in the box to +/-12 V instead of +/-15 V.
This results in a maximum output voltage of the LT1125 of a couple of hundred mV more than 10 V, depending on the current they have to source/sink. So 9.2 V is still well below the maximum.
I added a filtered 5 V reference (AD586, 4.7 uF filter cap) for the DC offset at 35 C. The corresponding resistor for the summing amp is 1379.76 Ohm, which can be implemented almost exactly using 2k05 and 4k22 in parallel (1379.7) or 1k54 and 13k3 in parallel (1380.2). The feedback resistor of the last stage can then be calculated to be 170k45 in order to map 30 C to 40 C onto -10 V to 10 V. Paralleling can be used here as well to get an almost exact value.
The matching is not that critical, as we don't want to measure absolute temperature, but if we can do it that easily, why not.
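As a sanity check of the numbers above (the 2.5 kOhm summing-input resistor for the temperature signal is inferred from the quoted values, not stated in the post; the sign convention of the summing amp is also ignored):

```python
def ad590_box(temp_c, r_trans=29.4e3, r_in=2.5e3,
              r_ref=1379.76, r_fb=170.45e3, v_ref=5.0):
    """Output voltage of the readout chain at a given temperature.

    AD590 (1 uA/K) -> transimpedance stage (r_trans) -> summing amp that
    subtracts the 5 V reference current (through r_ref) from the
    temperature-signal current (through r_in, an inferred value) and
    amplifies the difference with feedback resistor r_fb.
    """
    i_sensor = (temp_c + 273.15) * 1e-6        # AD590: 1 uA per kelvin
    v_stage1 = i_sensor * r_trans              # ~9.21 V at 40 C, as quoted
    i_net = v_stage1 / r_in - v_ref / r_ref    # nulled near 35 C
    return i_net * r_fb

print(round(ad590_box(40.0), 2))   # ~ +10 V
print(round(ad590_box(30.0), 2))   # ~ -10 V
```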
--- new schematic following soon ---
This is the control loop for the current PSL setup.
There are still components to be added.
1) TF of the PDH box: the one we have is a modified D0901351, so I measured the TF of this box with the integrator off (April 13, 2010 entry).
This will be added in the model later. It is set to 1 for now.
2) TF of the photodiodes: I assume they are New Focus 1811 and chose the same value as used in linfss6.m.
3) I will verify the TF values of the RefCav path (both the Fast and PC paths are calculated from D980536) to see if they agree.
4) The TF of actuators will be added later.
% UPDATED APRIL 15, 2010, Tara Chalermsongsak
% Linearize the Simulink block diagram of the
% FREQUENCY STABILIZATION SERVO
% This gets the linearized model from the Simulink model
deg = 180/pi;
I calculated the TF of the modified PDH box and fit it to the measurement. The comparison does not match perfectly. I'll take a look and check whether all the Rs and Cs in the circuit are actually the same as those in the box.
The circuit can be found at:
I checked only U7 and U4
R28 is 360 ohms
C18 is 3300 pF
C6 is 0.66 uF (2x 0.33 uF)
R30, R31, R23, R16, R24 have the same resistance as specified.
C20, C28, C29, C14, C15, R25, C11 are removed.
The calculation assumes that the integrator switch is off (R16 connected in parallel with R24 and C6).
If this works, the TF of the PDH box will be used in the Simulink model.
Use LISO for circuit simulation.
The values of some Rs and Cs in the circuit have been corrected; I used wrong values last time (details will be added later).
The measured TF and the TF calculated using LISO are plotted below. The measurement and the calculation agree well from 1 Hz to 10^5 Hz using the SR785.
The correction factor due to the impedance mismatch when using the 4395A will be checked again.
As per the request from LLO, the PSL VCO was sent to the site.
Their circuit had malfunctioned and they had no spare.
We have to figure out how we continue the work.
The weird TF result from the PMC seems to be the result of insufficient input voltage. When I increased the swept-sine voltage from 2 mV to 500 mV, the TF becomes as expected. See fig. 1.
Before the signal is fed back to the PMC, there is a PMC notch box. It is a low-pass filter. Its TF looks fine (fig. 2).
However, when the TF of the servo and the notch is measured together, the result looks shaky.
I just read Frank's comment. I'll check the schematic of the PMC servo again.
That's not the TF of the PMC servo; it's something else. Look at the gain level: -100 dB. That's nothing more than some cross-coupling. Never trust a flat response; always think about whether the measured shape and values make sense at all!
Take a minute and think about the shape and values of the TF you expect from a servo like this. Have a look at the schematic and draw the TF shape of the individual gain stages, then add them into an overall TF, or use LISO to simulate it. Then measure parts of the servo step by step to verify that the individual parts are working as expected.
Before the signal is fed back to the PMC, there is a PZT notch box. It is a low-pass filter. Its TF looks fine (I'll update it).
However, when the TF of the servo and the notch is measured together, something in the Bode plot is not quite right. I'll debug the PMC servo. My plan is:
1) Measure the TF from FP1 test to FP4 (output mon), change gain setting and see if the TF change as expected.
*Note: the real TF is 20 log10(Vpzt/Vin), but Vpzt ~ 50 Vmon (Vmon is connected to Vpzt through a divider circuit). To get the real TF, the magnitude of the measured TF between FP1 and the output monitor must be increased by 20 log10(50) ≈ 34 dB.
2) Compare it with the calculated TF from PMC schematic
see this entry
I was going to check the TF of each stage of the PMC servo.
Unfortunately, I couldn't find the floppy disc drive, so I moved the sliders (gain, RF power) around instead. When I add more RF power (from 1 V to 7 V) to the 21.5 MHz EOM, the oscillation subsides*.
How sad. Stop using the floppies and get one of the GPIB-Ethernet converters from Dmass. You can download the python scripts from the 40m wiki.
We already have one, but I was waiting for one of the wireless bridge devices someone wanted to buy, to make it wireless.
But why do you need a floppy to measure a TF?
The reason why you had this flickering problem was that you had too much power on the RFPD in reflection of the PMC. You were already saturating it.
I also reduced the RF power, as the error signals were not signals anymore, just spikes.
my new settings are:
RF power : 3.0
Phase : 2.5
PMC Gain: 14dB
Reduced laser power to 40 mW; transmitted power is 32 mW. You have to exchange the output coupler mirror in front of the RFPD in order to increase the power. I think 32 mW is enough; it's something like 13 mW per cavity.