The first diagram shows the setup without the Bias Tee. Here we will confirm that the monitoring output channel is functional and that the dummy EOM acts as expected. Afterwards, we will add the Bias Tee as shown in the second diagram. Using the monitoring channel again, we can then see the effect of adding the Bias Tee at the output of the amplifier circuit, before the dummy EOM.
I thought, why not just measure Moku's own synthesized frequency output with itself? The attached plot shows the frequency noise measured this way. The measured noise is divided by the square root of 2 to get the individual input-referred noise of the phasemeter.
Then I thought that the phase noise in the synthesized frequency and the phase noise of the phasemeter could potentially be canceling each other, as they come from the same source. So I inserted a very long cable between the output of the Moku and its phasemeter input, so that the two signals are time-separated by at least one cycle of 27.34 MHz. I didn't want to open up and measure the length of these neatly packed SMA cables, but together they are surely longer than 6.58 m (the wavelength of 27.34 MHz in these cables).
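A quick sanity check of the quoted in-cable wavelength. The velocity factor of 0.60 below is an assumption (typical for solid-dielectric coax) chosen because it reproduces the 6.58 m figure; the actual cables' velocity factor was not measured.

```python
# Sanity check of the in-cable wavelength quoted above.
# vf = 0.60 is an assumed velocity factor; the real cable spec is unknown.
c = 299792458.0          # speed of light in vacuum, m/s
f = 27.34e6              # Moku synthesized frequency, Hz
vf = 0.60                # assumed cable velocity factor
wavelength = vf * c / f  # wavelength inside the cable, m
print(f"{wavelength:.2f} m")  # ~6.58 m, so the cables must exceed this total length
```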
A wise man told me to use the three-cornered hat method to extract the individual frequency noise of the Marconi, the Moku, and the Wenzel crystal. I updated my mokuReadFreqNoise.py to support frequency noise calculation for two channels and their difference.
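For reference, the three-cornered hat algebra is simple: if the three sources are uncorrelated, each pairwise-difference PSD is the sum of the two individual PSDs, and the system can be inverted. A minimal sketch (this is not the actual mokuReadFreqNoise.py code; the function and array names are illustrative):

```python
import numpy as np

def three_cornered_hat(S_ab, S_bc, S_ca):
    """Recover individual frequency-noise PSDs from pairwise-difference PSDs.

    For uncorrelated sources a, b, c:
        S_ab = S_a + S_b,  S_bc = S_b + S_c,  S_ca = S_c + S_a
    so each individual PSD follows by linear combination.
    Inputs are arrays of PSD values on a common frequency axis.
    """
    S_a = 0.5 * (S_ab + S_ca - S_bc)
    S_b = 0.5 * (S_ab + S_bc - S_ca)
    S_c = 0.5 * (S_bc + S_ca - S_ab)
    return S_a, S_b, S_c

# Example with made-up flat PSD values (units of Hz^2/Hz):
S_a, S_b, S_c = three_cornered_hat(np.array([3.0]), np.array([5.0]), np.array([4.0]))
```

Negative results at some frequencies are the usual symptom of correlated noise or poor statistics, which may be related to the perplexing result below.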
I'm actually perplexed by the result, and I'm not sure if this is what was expected.
I did two identical runs (I wasn't sure if I was seeing the truth from my first run) with the following settings:
I spent some time understanding the Moku, and even though it has some flaws (like no scriptable channel for recording data), it seems like using the Phasemeter instrument in MokuLab will get rid of all of our PLL problems.
Since it doesn't hurt:
Code and Data
Today I took a beat note noise measurement to see where we are. Attached is the updated noise budget.
Edit Thu Jun 6 14:17:34 2019 Anchal : Above in red.
Edit Tue Aug 13 17:08:46 2019 anchal:
The PLL readout noise added to this plot was erroneous, and I can't find where it came from either. So the attached noise budget is wrong! I was a dumbo then.
I took the transfer function of the North FSS side to see how much suppression we are achieving. I took the transfer function by sending a source signal to the EXC port in the common path and measuring the ratio Source/OUT2 on the AG4395A. The open-loop transfer function is related to this by G_OL = 1 - G2*(Source/OUT2). Here G2 is the gain for the source signal at the summing stage in the common amplifier board, which is -392/1.2e3 = -0.3267.
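The conversion from the measured ratio to the open-loop transfer function can be sketched as below, using the relation and G2 value stated above (the example input value is made up, just to show the arithmetic):

```python
import numpy as np

# Convert the measured AG4395A ratio T = Source/OUT2 into the open-loop
# transfer function via G_OL = 1 - G2*T (relation from the entry above).
G2 = -392 / 1.2e3   # summing-stage gain for the source signal, = -0.3267

def open_loop_tf(T_measured):
    """T_measured: (complex) array of Source/OUT2 from the network analyzer."""
    return 1.0 - G2 * np.asarray(T_measured)

# Hypothetical measured point where Source/OUT2 = -100 (deep in suppression):
G_ol = open_loop_tf([-100.0])
```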
I have included an expected suppressed frequency noise plot assuming 10^4/f Hz/rtHz frequency noise for the free-running NPRO. We need to suppress much more in the 100 Hz-1 kHz band to be able to see Brownian noise.
I have also measured the crossover frequency between PZT and EOM actuation. It came to around 26 kHz, which is not bad. We just need to increase the FAST GAIN a lot more and the COM gain a little bit, so that the crossover stays the same while we get a lot more suppression in the 100 Hz-1 kHz range. I'll look into the source of the glitches causing our EOM to become unstable when increasing the feedback gains.
Edit Mon Jul 22 18:55:22 2019 by anchal
The gain values given in the plots are wrong. The correct values are unknown.
~7 kHz would be close to the crossover frequency, wouldn't it? Maybe also see if you can capture the PZT and EOM actuation signals in the same sweep (at their full bandwidth, i.e. be careful of the monitors filtering things off at higher frequencies).
Also, an aside: you might want to tar and attach the data + plotting code to the elog, as URL links rot over time.
After the vacuum can is heated, I aligned the north cavity again today to about 60% mode matching. However, I'm seeing a weird oscillation in the mixer channel of the FSS board which is very big (5 Vpp) and is accompanied by a large power fluctuation in the transmitted light. See the attached oscilloscope plot of the reflected DC level and the mixer channel from the FSS. The repetition rate of the oscillations makes this a 7.35 kHz noise. I have never seen this before. Further, these oscillations are completely unaffected by changing the COM gain or the FAST gain in the FSS path. Comments required.
I installed the new board LIGO-D1800304 with an external voltage regulator and a breakout box for handling the Acromag channels. See attached photos. The transimpedance amplifiers required a parallel capacitor in the feedback path to avoid oscillations. Since these are temperature sensors and we do not care much about high-frequency noise, I added 1 nF capacitors at the COMIT3, 9 and 11 positions. This should give us a bandwidth of 10 kHz. The channels are as follows:
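As a cross-check of the quoted bandwidth: for a single-pole transimpedance stage, f_3dB = 1/(2*pi*R_f*C_f). The feedback resistor value below is inferred (not stated in this entry) so that C_f = 1 nF gives roughly the quoted 10 kHz:

```python
import math

# Single-pole bandwidth of a transimpedance stage with feedback R_f || C_f.
# C_f = 1 nF is from the entry; R_f = 15.9 kOhm is an inferred value chosen
# so the bandwidth lands near the quoted 10 kHz (actual board value not stated).
C_f = 1e-9      # F
R_f = 15.9e3    # Ohm (assumed)
f_3db = 1.0 / (2.0 * math.pi * R_f * C_f)
print(f"{f_3db/1e3:.1f} kHz")  # ~10 kHz
```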
With the new sensors, I have set a relay autotune test to run overnight for 16 hrs with a relay amplitude of 0.75 V and a 0.75 V offset. This should tell us the optimal PID parameters. Hopefully, these measures will make the can temperature stable enough; I'll try to set it around 33 degrees (my guess for a setpoint that keeps a good enough cooling rate). With Andrew's new script the cavity temperature control was already pretty good, so hopefully we'll lower the overall drift of the beatnote.
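For reference, the standard way to turn a relay test into PID parameters: the limit-cycle amplitude and period give the ultimate gain and period, and a Ziegler-Nichols-style table gives the PID values. A sketch, assuming the classic ZN rule (the actual rule used by the autotune script may differ, and the example numbers are hypothetical):

```python
import math

def pid_from_relay(d, a, T_u):
    """Astrom-Hagglund relay test -> Ziegler-Nichols PID parameters.

    d   : relay amplitude (V), here 0.75 V
    a   : measured limit-cycle amplitude of the process output (V)
    T_u : measured limit-cycle period (s)
    """
    K_u = 4.0 * d / (math.pi * a)   # ultimate gain from describing function
    K_p = 0.6 * K_u                 # classic ZN PID tuning rule
    T_i = 0.5 * T_u
    T_d = 0.125 * T_u
    return K_p, T_i, T_d

# Hypothetical numbers: a 0.2 V limit cycle with a 30-minute period.
K_p, T_i, T_d = pid_from_relay(0.75, 0.2, 1800.0)
```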
Similar to the South Path, I have inserted a beam splitter in the north PMC reflection path. This is a BS1-1064-90-1037-45UNP beamsplitter, set such that about 10% of the light is transmitted. See the optical layout for the path changes. I found that the modulation depth for the North PMC PDH was already low enough (0.246), so I did not reduce it. I have changed the DC transimpedance of North Cavity Reflection RFPD SN009 to 929.09 Ohms, similar to the south side. This has given us leeway to increase the light intensity in the path with the autolockers still working.
As suspected in CTN:2346, the mode matching of the cavities is deteriorating and eventually the alignment gets screwed up, due to possible lab temperature fluctuations. I left yesterday with the south cavity mode matched to about ~70%, but this morning I found that the resonance is completely lost and a higher-order mode with vertical fringes is resonant. The same is the case with the north cavity, which had shifted to a much higher-order mode with vertical fringes. So clearly, I need to switch the vacuum can temperature stabilization back on.
To get about 3.45 mW of light after reflection from the cavity on resonance when the mode matching is about 60%, 8.63 mW of light needs to fall on the cavity. There were three major obstacles to doing this, which were resolved as follows:
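The power numbers above can be checked quickly: on resonance, roughly the unmatched fraction of the incident light is reflected (ignoring any cavity impedance mismatch, which is an assumption here):

```python
# Check of the incident-power requirement quoted above: with ~60% mode
# matching, the reflected power on resonance is roughly the unmatched 40%
# of the incident power (impedance mismatch neglected).
P_refl_target = 3.45e-3    # W, desired reflected power on resonance
mode_matching = 0.60
P_incident = P_refl_target / (1.0 - mode_matching)
print(f"{P_incident*1e3:.2f} mW")  # ~8.63 mW
```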
Also, I have finally started an ATF wiki page to keep documentation of all RFPDs and changes to them. I'll make sure this page stays updated; it should be the reference point in the future.
On another note, I'm seeing a steady decline of mode matching on both paths even when I stay far away from the alignment. Something is going on which needs to be fixed. After a discussion with awade, the following are the suspects:
I'll fix the mode matching again before leaving today and will see if any big changes happen overnight.
I found myself changing thresholds every time I changed power levels. Also, the fastmon RMS monitors were not working with docker-compose. I'm listing all script and MEDM changes since the sad departure of awade:
I'm attaching screenshots of modified medm screens.
I tuned the intensity control and changed the autolock parameters to get 2.6 mW of light at the output of the cavities on both paths. At present, the South Path has 74% mode matching and the North Path has 63% mode matching. These numbers were higher yesterday, so I suspect something shifted over the day.
I also maximized gains on PMC loops and FSS loops:
In the FSS loop, I kept a gain margin of 0.64 dB (i.e. oscillations are seen at 0.2 V above the set value of Common Gain), and in the PMC loop I kept it at 2 dB. I have yet to characterize the noise in the FSS error signal and see what I can improve. On another note, I did a rough calculation today, and I think the shot-noise intercept current for our RFPDs is ~2.6 mA, which means at least 3.45 mW of light should fall on them for them to be shot-noise limited. I still have to check the validity of this rough estimate, but if it is true, then our FSS RFPDs are not shot-noise limited.
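The power figure above follows directly from the intercept current and an assumed responsivity. The 0.75 A/W responsivity is the value used elsewhere in this log for these photodiodes; the 2.6 mA intercept is the rough estimate above, not a measurement:

```python
# Power needed for the photocurrent to reach the estimated shot-noise
# intercept current. Responsivity 0.75 A/W at 1064 nm is an assumption
# (the value used elsewhere in this log); 2.6 mA is the rough estimate above.
I_intercept = 2.6e-3   # A, estimated shot-noise intercept current
responsivity = 0.75    # A/W
P_needed = I_intercept / responsivity
print(f"{P_needed*1e3:.2f} mW")  # ~3.47 mW, i.e. the ~3.45 mW quoted above
```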
Awesome. Can you do the same for crosslinks to the old ATF_Lab logbook name (ATF_Lab -> QIL)?
We have changed the logbook name from PSL_Lab to CTN. To keep all the links working, I ran the attached script on a local copy of the log files and then copied them back to the server. The script essentially just changes PSL_Lab to CTN in all hyperlinks. Now, all such links are working. However, the display text of the links was not changed, to avoid unnecessary tweaking. So in most of them the front text still says PSL:XXXX, but the links are correct.
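A minimal sketch of the kind of rewrite described above (this is not the attached script; the example HTML is made up). The point is that only the href attribute is touched, so the visible PSL:XXXX text survives:

```python
import re

def fix_links(html):
    """Replace PSL_Lab with CTN inside href="..." attributes only,
    leaving the displayed link text untouched."""
    return re.sub(r'(href="[^"]*?)PSL_Lab', r'\1CTN', html)

# Hypothetical example line from a log file:
html = '<a href="https://example.org/PSL_Lab/2316">PSL:2316</a>'
print(fix_links(html))  # href now points at .../CTN/2316, text still PSL:2316
```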
Today I shifted our modbus setup (db files for EPICS channels, IOC command files and the docker-compose file to run the services) over to this git repo.
Through this, we now have version control, so we can revert to a working state if something breaks the hosting of the channels. All db files are updated every 5 minutes by dbFilesUpdate.py in ctn_scripts, so we should commit the changes to the repo manually once in a while. This way the files locally track changes in the settings and parameter values, while we also have a version-controlled history in git for long reverts.
On another note, it is important to start the docker processes in a particular order. To ease this, I have added a restartAll command to the .bashrc of ioc3server which looks like this:
# Restarts all the EPICS channels and python scripts in the correct manner.
sudo docker-compose start dbFilesUpdateOneTime
echo 'Updating db files before restarting...'
while true; do
    if [ -z "$(sudo docker ps -q --no-trunc | grep "$(sudo docker-compose ps -q dbFilesUpdateOneTime)")" ]; then
        echo 'db files updated. Shutting down python scripts...'
        sudo docker-compose down --remove-orphans
        echo 'Now restarting the EPICS channels and python scripts...'
        sudo docker-compose up -d
        break
    fi
    sleep 1
done
So every time a new channel is added, after doing git pull, one should use this command (or the commands listed above) to make sure the channels and python scripts boot up in the right order.
Edit Mon May 13 12:17:23 2019: Updated the restartAll() function to do one last db files update before restarting.
I have made an SMA cable of length 1.952 m after optimizing the phase delay of the LO. This gives the maximum PDH error signal.
For future reference, I'm saving present setup details:
From the ratio of the off-resonance value and the peak-to-peak error signal at OUT1, we can check in the future whether everything is the same as right now. From the ratio of the off-resonance value and the dip value, we can check the cavity mode matching in the future.
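The mode-matching check from the reflection dip can be sketched as below. This assumes the coupled fraction is simply 1 minus the on-resonance/off-resonance reflection ratio, i.e. it neglects cavity impedance mismatch (the example numbers are made up):

```python
# Mode matching from the reflection dip, as noted above:
# the fraction of light coupling into the cavity is roughly
# 1 - (on-resonance reflection)/(off-resonance reflection),
# neglecting impedance mismatch of the cavity.
def mode_matching(v_off_resonance, v_dip):
    return 1.0 - v_dip / v_off_resonance

# e.g. a dip to 40% of the off-resonance level -> ~60% mode matching
mm = mode_matching(1.0, 0.40)
```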
Either by the mighty presence of Craig, who came down to the lab for a few minutes, or because of my recent updates in path alignment (see PSL:2335, PSL:2334), the PDH error signal in the south path looks almost as healthy as it ever was.
Attached is the data taken from the oscilloscope while scanning the laser PZT at 3 Hz with a 2 V peak-to-peak sine wave.
I measured the modulation depth to be 0.373 by scanning the slow control voltage and reading the powers in the carrier and sidebands. Then I used the ratios of Bessel functions to estimate the modulation depth, which is higher than expected.
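The Bessel-ratio estimate works as follows: for phase modulation depth m, the single-sideband-to-carrier power ratio is (J1(m)/J0(m))^2, which can be inverted numerically. A sketch using a series expansion for the Bessel functions (the 0.036 ratio below is simply the value that reproduces m ~ 0.373, not a measured number):

```python
import math

def j0(x):
    # Bessel J0 via its power series (adequate for small arguments)
    return sum((-1)**k * (x / 2)**(2 * k) / math.factorial(k)**2 for k in range(12))

def j1(x):
    return sum((-1)**k * (x / 2)**(2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(12))

def mod_depth_from_ratio(power_ratio):
    """Invert (J1(m)/J0(m))^2 = P_sideband/P_carrier by bisection.
    (J1/J0)^2 is monotonically increasing on (0, 1.5), so bisection is safe."""
    lo, hi = 1e-6, 1.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (j1(mid) / j0(mid))**2 < power_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A sideband/carrier power ratio of ~0.036 corresponds to m ~ 0.373:
m = mod_depth_from_ratio(0.036)
```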
The differences between the expected and measured values could be due to many reasons: a wrong modulation depth measurement, wrong responsivity (I used 0.75), wrong mixer loss and voltage-divider estimation based on MAX333A datasheet values, etc. Overall, this looks good enough that I can just move on. I am not sure how this actually happened.
Edit Mon May 13 16:40:21 2019 :
Corrected the transimpedance value used for SN010. I was referring to a wrong value before. I have made the calculation by backtracking from the data taken in CTN:2247: dividing by the old MAX4107 gain (560/(0.5*113) + 1), multiplying by the new gain (680/75 + 1), dividing by the old voltage division of (50/70) and multiplying by the new voltage division of (50/100). The details are in this notebook.
Log of changes made to SN010, Schematic D980454-00;
1) Changed R1 to 680 Ohm and R2 to 75 Ohm. Replaced Max4107 with a new one.
2) Changed R6 to 50 Ohm. This changes the voltage division at output.
Log of changes made to SN009, Schematic D980454-00;
1) Changed R6 to 50 Ohm. This changes the voltage division at output.
Edit Wed May 15 19:47:42 2019:
See https://nodus.ligo.caltech.edu:30889/ATFWiki/doku.php?id=main:experiments:psl:rfpd for latest changes in RFPD.
I have finally completed the documentation for mathematical analysis of using PLL for frequency noise measurement. This document also contains noise analysis for various sources.
Please read and let me know if you have any comments.
I borrowed the following components from the PSL lab to the QIL lab:
1. Mixer (Minicircuit, ZFM-3-S+)
2. RF amplifier (Minicircuit, ZFL-500LN)
3. IFR/Marconi 2023 A (# BD9020)
I'm not sure how, but the alignment into the south cavity became terrible over time. Maybe I accidentally changed the periscope alignment, but I am pretty sure I didn't go near this area since I last aligned it (PSL:2316).
Today, I was able to get a mode matching of only 55%.
In this week's group meeting, Andrew mentioned that the polarization axes of the AlGaAs mirrors are not properly aligned and that there are two close-by resonances at different polarizations. I indeed was seeing a small blip near resonance during a laser PZT scan. I fixed the input polarization with the half-wave plate in front of the south cavity at (56, 26). I've attached two scan measurements from the oscilloscope, before and after the optimization. The transmission increased by 10.1%.
I have completed making an optical layout for CTN lab. From now onwards, I'll update this layout if I make any major changes in the path.
Please comment if you think I should represent something better.
But I used an AD620, which is an instrumentation amplifier, not an opamp. I thought comparators are made with differential amplifiers (from Horowitz & Hill Sec. 4.23), and since the AD620 was a nice available instrumentation amplifier, I thought it would work (and it does work).
But this particular circuit that I made seems to be less general than I thought. The 100 kOhm potentiometer makes it hard to fine-tune the threshold. Also, I think I should have buffered the threshold voltage-divider circuit, because I see the threshold level changing with the incoming signal when the signal is above the threshold. It doesn't affect my particular application, but I think that makes this a crappy TTL generator. Since my purpose has been served, I'll push optimizing and generalizing this box to some other day. Maybe my new SMD prototype boards would be handy.
not all amps are good for use as a comparator
Today, I finally used the trigger generator to trigger near the start of the maximum of PDH error signal to get an FFT when the PDH error signal is near maximum.
It is not a beat frequency; it is rather some other oscillation mode (probably a mechanical oscillation through the PMC) at 280 kHz mixing at the EOM, as I mentioned above. The modulation frequency is 37 MHz for this path; 14.75 MHz is the modulation frequency of the PDH lock for the South PMC.
but if all systems are linear, where does this beat physically get generated? What are the actual modulation frequencies for the cavities?
I think I figured out the source of this 281 kHz peak. 281 kHz ≈ 37 MHz − 36.72 MHz, so it is the beat signal between the 37 MHz and 36.72 MHz signals. I think I should tune the RFPD more next time I open the cage, to bring its resonance closer to 37 MHz.
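The arithmetic behind this identification is trivial but worth writing down:

```python
# Beat-frequency check for the ~281 kHz peak discussed above.
f_mod = 37.00e6    # Hz, modulation frequency for this path
f_other = 36.72e6  # Hz, the other RF signal present
beat = abs(f_mod - f_other)
print(f"{beat/1e3:.0f} kHz")  # 280 kHz, matching the observed ~281 kHz peak
```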
Today I made a standalone Adjustable TTL Trigger generator box. Following are some features:
I took time series and spectrum of RF output of South Cavity Reflection RFPD through a 20 dB coupler.
Clearly, I needed to see what other frequencies are there in this RFout signal. So I decided to take a spectrum of the signal.
I borrowed a small isopropanol glass bottle from CTN to OMC (Apr 17, 2019)
I borrowed a small acetone glass bottle, which was in the yellow solvent cabinet, from CTN to OMC (Apr 19, 2019)
Today I reset the reflection path from faraday isolator to the RFPD.
This certainly improved the amount of light reaching the RFPD, but the PDH error signal still looks the same. Tomorrow I'll hook up the RFout of the RFPD directly to the oscilloscope and take some time-series data to look for any discrepancy in the waveform shape.
Yeah, I realized I understood the mentioned bandwidth wrongly. It is specified for a direct 50 Ohm load, while we load our photodiode with a different resonant circuit.
I haven't made actual measurements of the shot-noise intercept current similar to the link, but I made an LTspice simulation of the circuit and I believe the shot-noise intercept current is about 0.15 mA. Yes, we are good with that. This topic should close here. False alarm.
There is no problem. It is just a matter of noise requirement.
How much shot-noise intercept current do you require?
At the 40m, the 33 MHz PD has a shot-noise intercept current of 0.52 mA. https://wiki-40m.ligo.caltech.edu/Electronics/RFPD/REFL33
Is that enough? How much is yours?
You should be able to realize a similar value, because the technology used for your resonant PD is the same as the 40m one, I suppose.
If you require super low noise, like uA (= sub-pA/rtHz current noise), then we will need a high gain and a low junction capacitance.
[ED by KA, catalogs should not be put on ELOG. This is public.]
Did we just miss this all along?
The C30642 has a bandwidth of only 20 MHz. That means at 36 MHz and 37 MHz, the RF output would be -8.1 dB (0.3933) and -8.3 dB (0.3827) less than expected, or maybe worse. However, this bandwidth is specified for a load resistance of 50 Ohms, so I need to go into more detail. But we definitely have to look into this.
Maybe we need to replace these with faster photodiodes. Do we have any of C30619 or C30641 in stock?
(NO - this is an incorrect interpretation of bandwidth in this case)
ALL YOUR LEGENDS ARE BELONG TO 24V !
Edit Thu Apr 11 15:58:47 2019: Corrected legend and a plot title.
Edit Thu Apr 11 16:16:03 2019: Added same measurements for North EOM Driver
Edit Thu Apr 11 16:31:30 2019: References:
This apparent problem is clear now. The test path has a 100 Ohm resistor to ground (RF board R10), which with the rest of the resistances creates a voltage divider at DC. The numbers make sense and the FSS board is fine. Given this, the only option to get a better PDH error signal and boost our discriminant is to increase the modulation depth. Maybe the modulation depth is indeed not good enough. I'll make the "apparent modulation depth" (modulation depth calculated from the PDH error signal and incident light power) equal to 0.3.
We have been experiencing an extraordinarily low PDH error signal on the South FSS. I think I have found the source of the problem here but not the cause.
Today I aligned the south path completely, with about 60% mode matching with the cavity. I guess I got lucky, or I really know how to do this correctly now (see PSL:2253).
I have just one last weird thing remaining in the South Path. The South Cavity Reflection RFPD doesn't behave well with its RF cage lid on. More specifically, the +5V voltage regulator LM309H stops working for some reason and the +5V rail becomes -0.6 V. But with the lid off, everything works. I was even able to lock the FSS nicely. So something is going on which I have been trying to understand for a long time now. I have replaced the MAX4107ES, the LM309H and the capacitor at the input of the LM309H, but nothing works. The RF cage has a layer of electrical insulation tape on the inside, so there is no chance of it shorting any tall component (the only tall component is an inductor, see PSL:2241). I'll look a little bit more into this on Monday; otherwise, I'll just go ahead and install the RFPD without this RF cage lid. The box itself should anyway provide pretty good shielding from RF interference. If anyone has any clues about this -0.6 V issue, please help me.
The PMC mode overlap was found to be 68.77%. I realigned it to 91.99%. Here, the percentage is the fraction of power that goes through the PMC.
Then I found that the alignment into the Faraday isolator in the south path was also very poor, with only about 75% of the light going through. I improved this to 87.11%, but unfortunately, due to this exercise, the south cavity is misaligned now.
However, this will help in keeping the required power low at the PMC stages. For some reason, the PDH error signals of the PMC locks on both paths saturate at some value and do not increase further. This is right after the mixer, so it is not the fault of the servo cards. As I also tried a different, pristine RFPD, it is not the fault of the RFPDs either. I checked the modulation depth, and it is good as well. So only two things can be happening here:
This problem seems unsolvable, so I basically have to keep the power low enough at the PMC stage. My goal is to send 3 mW of power to the cavities. Then the FSS RFPDs will be shot-noise limited.
P.S. I know this seems like too elaborate an exercise at this point, when I know there is a much larger issue, the 50 Hz noise source. But to investigate that, I need the rest of the table working, and since I'm sort of rebooting everything, I want to optimize it as well as I can and keep a record of all problems and efficiencies as well.
Today, I tried to replace the reflection photodiode on the South PMC with another 14.75 MHz resonant one (which from test-port analysis has a transimpedance of 57623 Ohm at 14.75 MHz). Since this other photodiode had been sitting in a box for months, I expected it to behave better if the power supply tripping incident had caused any harm to the existing photodiode. To my surprise, the error signal turned out to be nearly the same. With 6.765 mW incident power, the PDH error signal is 900 mVp-p. So I am not sure what is causing this limitation on the output signal strength. From the comparison of the sideband strength to the carrier strength, we are definitely near 0.88 rad modulation depth. I'll reduce this to 0.3 to avoid power in higher harmonics. But the main concern is that there is some limiting factor because of which the error signal is not going high enough.
The reason I'm worried about this is that the PMC lock looks robust enough that it remains locked all night, but whenever I try to scan the laser PZT for FSS analysis, the PMCs get unlocked just by connecting a switched-off function generator to the laser PZT.
Since the last power supply tripping incident, I have tested all photodiodes and electronics in the lab one by one. During the process, I realized that we are using way too little power in our experiment. According to my calculations, the current RFPDs will be shot-noise limited only with more than 2.8 mW of light falling on them.
This week, on the evening of 19th March, I was working on replacing the GND connections of the power supply with thicker wires and checking for any AC RMS voltage between different ground points, to look for ground loops. During this, I found that the high-voltage power supply for the PMCs wasn't directly grounded with the rest of the power supplies. These are Kepco PCX 200-0.1 MAT power supplies. From here onwards, I'll describe this incident chronologically:
Before the incident:
My understanding of what happened here:
☐ Reroute AC power cables separate from signal cables.
☐ Add chokes or loop cables around ferrite cores to increase impedance to ground loop current.
☐ Shield the FSS 37 wires cable with aluminium foil.
☐ Add a thick grounding strap from the table to the rack.
☐ Ground all DC supplies to a single point in a star configuration with thick metal/wire.
☐ Check all DC power supplies if they are leaking these 60 Hz spikes.
☐ Remove the AC power strips from the table sides. Connect the camera power cables far away from the table top.
Wandered into the PSL just now. Slow controls were going wild. I traced it back to the fact that acromag1 and its auto-restarting services were still live. The new dockerized python script services on C3IOCServer were fighting the processes on the acromag1 machine. I've copied the service scripts into the ~/Downloads/ folder on acromag1 and deleted them from /etc/init/. I then rebooted acromag1 and the problems went away.
We should archive whatever is on acromag1 and probably rebuild that machine with an operating system that is still in long-term support.
We have been thinking for a while about migrating all EPICS channels from acromag1 (10.0.1.33) to c3iocserver (10.0.1.36), which is a rack mount running the latest supported Debian.
Unfortunately, my first attempt failed, and although I tried to put everything back to the status quo, the docker instance on iocserver which was running the PMC interface is not working. Here are the steps I took:
I couldn't debug the cause of this problem remotely any further. So the status is worse than when I started: the PMC channels are not running, and hence everything must be unlocked in the lab right now.
Edit Mon Mar 11 18:38:03 2019 (awade): crossed out Ubuntu added Debian
# Use the following commands for TCP/IP
# drvAsynIPPortConfigure(const char *portName,
# const char *hostInfo,
# unsigned int priority,
# int noAutoConnect,
# int noProcessEos);
# Example: drvAsynIPPortConfigure("c3test1","10.0.0.42:502",0,0,1)