We realized that the PD amp circuit only requires a 5V DC supply, so we tried that. One of the PDs had the right response, although only after cycling the input impedance from 50 ohm to 1 Mohm, which is weird. The other one (which produces the negative signal) was completely bonkers.
We removed the home-built PDs and put in 2 Thorlabs PDs (forgot the model) with a bad dark current but a decent response and high saturation current. With these PDs we are limited by the PD noise to about 1.25 dB of squeezing when 30mW of LO is detected on each PD without using electronic amplifiers. Attachment 1 shows the different noise spectra we measured.
We maximized the coupling efficiency before boosting the LO power. For some reason, the coupling between the LO fiber and the fiber BS had deteriorated, but there was no apparent dirt on them upon inspection. We cranked up the power and measured the PD outputs using the Moku oscilloscope. The PD signals were subtracted digitally, but now we were not able to get to the shot noise even after fine-tuning the gains. What went wrong? Maybe it's because the PDs have separate power supplies?
Some analysis in this notebook
I've used the following model of heat transfer between a suspended Si sample (1) and the inner shield (2) in Megastat:
m*Cp(T1)*dT1/dt = sigma*A1*(T2^4 - T1^4) / [1/e1 + (A1/A2)*(1-e2)/e2], where I have not assumed that A1/A2 << 1.
For this analysis, I simulated temperature data of a sample using models of e1 and e2. I simulated e1 and e2 as linear in T:
e1(T) = a1 + b1*T, e2(T) = a2 + b2*T, and generated test mass temperature data using these emissivity models and inner shield temperature data from a previous cooldown. My goal was to determine how uncertainty in the emissivity of the inner shield and heat capacity of silicon would propagate to the calculated emissivity of the sample.
I back-calculated the emissivity of the sample using a procedure similar to this paper: https://www.sciencedirect.com/science/article/pii/S0017931019361289?via=ihub. To summarize, I used a Savitzky-Golay (SG) filter in scipy to calculate dTdt from the temperature of the sample, and rearranged the model above to solve for e1.
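For concreteness, a minimal sketch of that back-calculation (the areas, mass, and filter settings below are placeholders, not the real Megastat values):

import numpy as np
from scipy.signal import savgol_filter

SIGMA = 5.670e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
A1, A2 = 8.1e-3, 0.12   # sample / inner shield areas [m^2] (placeholders)
m = 0.1                 # sample mass [kg] (placeholder)

def back_calc_e1(t, T1, T2, Cp, e2):
    # Smoothed dT1/dt via the SG filter (assumes uniform time sampling)
    dT1dt = savgol_filter(T1, window_length=51, polyorder=3,
                          deriv=1, delta=t[1] - t[0])
    Q = m * Cp(T1) * dT1dt   # net radiative power on the sample
    # Invert Q = SIGMA*A1*(T2^4 - T1^4) / [1/e1 + (A1/A2)*(1-e2)/e2] for e1:
    return 1.0 / (SIGMA * A1 * (T2**4 - T1**4) / Q - (A1 / A2) * (1 - e2) / e2)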
The uncertainty of e1 can be found by first-order propagation over the uncertain parameters: sigma_e1^2 = (de1/dCp_Si)^2 * sigma_Cp^2 + (de1/de2)^2 * sigma_e2^2.
I considered Cp_Si and e2 as uncertain parameters of interest, and assumed we do not have significant uncertainty on the geometric parameters such as the areas of the inner shield or the sample, or mass of sample.
The results from this analysis are:
Plots of these uncertainties can be found in Attachments 1 and 2.
Next steps are to add an additional radiative heating term to the model (more realistic given what we see in MS) and repeat this analysis, adding uncertain parameters such as the size of the heat leak. Transferring this analysis to MCMC is also in progress.
First we turned on the relevant instruments for this experiment after the power shutdown:
- Main laser drivers and doubling cavity controller. We set the current to 2 A as we had it before.
- The waveguide TEC. We tried setting it to 60.99 C (for maximum efficiency), but the temperature ramps up much too fast and overshoots the setpoint. So we had to do what we did earlier: adiabatically change the setpoint from room temperature, finally setting it to something like 63 C so the actual measured temperature stabilizes at ~60.9 C. How do we change the PID parameters on this controller? The settings don't seem to allow for it.
- PD power supply, oscilloscopes, function generator, SR 560s lying nearby
Then we tried to probe further what was going on with the PDs (TL;DR not much made sense or was reproducible):
1. Grabbed the 30Hz-3GHz HP spectrum analyzer from the Cryolab and installed it in the WOPO lab under the optical table. We figured out how to do a zero-span measurement around 10MHz. The SA has only one input, so we tried to combine the signals with an RF splitter. We tested this by sourcing the RF splitter with 10MHz 4Vpp sine waves from a function generator and measuring the outputs with a scope: 1.44Vpp on each individual channel and 2.73Vpp on the combined channel. We then realized that we still don't have a way to adjust the gains electronically, so we moved on to trying the RF amplifiers (ZFL-500LN).
We assembled two amps on the two sides of a metal heatsink and soldered their DC inputs so that they are powered from the same wire (Attachment 1). We attached the heatsink to the optical table with an L bracket (Attachment 2).
We powered the amps using a 15V DC power supply and tested them by feeding in 10MHz 10mVpp sine waves from a function generator. On a scope we observed amplification by a factor of ~22, which corresponds to a power gain of ~26 dB (20*log10(22) ≈ 26.8 dB), consistent with the amplifiers' datasheet.
We couldn't find highpass filters with a cutoff around 1MHz, so we went back to using the DC blocks. We tested them by feeding in white noise from a function generator and observing the resulting spectrum. First we tried the DC blocks with a 50 Ohm resistor in parallel, which just cut the power in half. We ditched the resistor and got almost unity transmission above 20kHz.
Moving on to observing LO shot noise, we opened the laser shutter. We found only 0.7mW coming out of each port of the fiber BHD BS, while 4mW was going into the BS, meaning the coupling between the LO fiber and the BS fiber was bad. On inspection we found a big piece of junk on the BS fiber core and a small particle on the LO fiber side. We cleaned both fibers, and after butt-coupling them we measured 1.6mW at each port. We then raised this power to 2mW per port.
We connected the outputs of the PDs to the amps through the DC blocks, and the outputs of the amps to the Moku's inputs. The PDs were responding very badly and their noise was also bad. To debug, we bypassed the amps and connected the PDs directly to a scope. They show 300mV of dark noise (Attachment 3), which is super bad, and they hardly respond to the light impinging on them (Attachment 4). We shall investigate tomorrow.
We shut down the workstations and the FBs by doing sudo shutdown and unplugged them from the wall.
Electronic equipment on the FB rack was shut down and unplugged from the wall.
Diablo's current was ramped down and the control unit was shut down. Optical table electronic equipment was shut down and the table's power strip was switched off.
Equipment under the optical table was switched off and unplugged.
Below are the outlined steps towards building an MCMC model for estimating the emissivity of a surface inside Megastat, and determining achievable uncertainty bounds:
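As a rough illustration of where this is headed, the sampler setup might look something like the following (a sketch only: emcee is assumed, run_cooldown_model is a hypothetical forward model, and t, T_meas, sigma_T, p0 are assumed defined):

import numpy as np
import emcee

def log_prior(theta):
    a1, b1 = theta   # e.g. linear emissivity model e1(T) = a1 + b1*T
    return 0.0 if (0 < a1 < 1 and abs(b1) < 1e-2) else -np.inf

def log_prob(theta, t, T_meas, sigma_T):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    T_model = run_cooldown_model(theta, t)   # hypothetical forward model
    return lp - 0.5 * np.sum(((T_meas - T_model) / sigma_T) ** 2)

sampler = emcee.EnsembleSampler(32, 2, log_prob, args=(t, T_meas, sigma_T))
sampler.run_mcmc(p0, 5000)   # p0: initial walker positions, shape (32, 2)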
[Radhika, Chris, Ian, Paco]
There are two CTC100s in QIL, so we will use the following nicknames:
CTCMS: CTC100 recording channels from Megastat
CTC: CTC100 recording channels from IR labs dewar (for PD testing)
After observing that the CTCMS channels were stale / not updating, I tried restarting CTC100.service on qil-nfs. After this, the channel values were blank. Chris then looked into the logs and saw lines such as:
CTCMS C4:CTC-MS_WORKPIECE_TEMP_VAL: No reply from device within 1000 ms
CTCMS was not responding to queries, so I killed CTC100.service so I could manually connect to the device via telnet. The connection was made successfully, but CTCMS was not responding to any commands. I then tried connecting to CTC (the other CTC100 for PD testing) and was able to get responses from it. The commands I sent were:
popup "hello" #creates a pop-up on the CTC100 front panel with text "hello".
*IDN? #returns a string with format: Manufacturer, Model number, serial number, version.
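For reference, the same query can be scripted (Python's telnetlib; the IP and port here are placeholders):

from telnetlib import Telnet

with Telnet("192.168.x.x", 23, timeout=5) as tn:      # CTC100 IP / telnet port (placeholders)
    tn.write(b"*IDN?\n")
    print(tn.read_until(b"\n", timeout=5).decode())   # Manufacturer, Model, serial, version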
After verifying I was connecting to the right IP and port, Ian and I swapped the ethernet cables plugged into both CTC100 devices and resent the commands. Sure enough, CTC continued responding to queries and CTCMS did not. Since the behavior of the devices didn't change when every connection downstream was swapped, it seems the issue lies somewhere within CTCMS. The configurations might have changed after the device was powered off during flooding, or it was somehow damaged. However, CTCMS is recording data properly and logging to USB, so the issue seems to be elusive.
Paco suggested I update the firmware on CTCMS to ensure it is up-to-date. I emailed Stanford Research Systems with a description of our issue, and requested the updated firmware. I can install the firmware via USB once they get back to me, and/or they might have some ideas of why the device is not responsive. In the meantime, someone more experienced with the CTC100 might want to take a look for signs of damage.
Since the heater seems to not be functional once again (displaying N/A and no power output), I proceeded to turn off the cryocooler so the chamber can warm up over the weekend. Cryocooler was turned off at 11:20am 4/15.
The USB logging from cooldown was extremely buggy, with loads of special characters embedded in the temperature data. After filtering out the corrupted values and plotting, the data still looked jumpy and not reflective of the true temperatures. I can work to further filter/treat the data to be able to use for the model, but we might be better off rerunning this cooldown.
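For reference, the kind of cleaning pass I have in mind (pandas; the filename, column layout, and 5 K jump threshold are all assumptions):

import pandas as pd

df = pd.read_csv("ctc100_log.csv", encoding_errors="replace")   # tolerate junk bytes
temp_cols = df.columns[1:]                           # assume the first column is time
df[temp_cols] = df[temp_cols].apply(pd.to_numeric, errors="coerce")
df = df.dropna()                                     # drop rows with corrupted entries
jumps = df[temp_cols].diff().abs()                   # sample-to-sample changes
df = df[~(jumps > 5.0).any(axis=1)]                  # reject unphysical jumps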
I reformatted the USB (MS-DOS FAT16) and reinserted it into the CTC100 to collect warmup data, but a popup appeared saying the USB was not able to record data. I swapped it out for another USB that I similarly reformatted, and the device accepted it. I remember having issues with the original USB in the past, so I'm hoping the swap fixes the corrupted data issue.
*Update: I extracted a bit of data with the new USB, and it looks pristine! I reinserted it to continue logging warmup data. I'm disposing of the old one.
Restart, without vacuum incursion, of cooldown from QIL/2749. Hopefully we pull data this time, via manual logging while Radhika and Chris figure out how to get the channels uploading again.
- Cryocooler on at ~2:30 pm with Workpiece Temp ~ 250 K.
- Datalogging didn't start till ~5:40 pm because I forgot that manual logging was necessary!
That's good. Can you estimate from these models what the uncertainty on the emissivity will be? i.e. by MCMC or otherwise, rather than eyeballing.
I've modeled the cooldown of a 2" diameter and 4" diameter Si wafer in Attachments 1 and 2, using the current Megastat model and previous cold head temperature data. The model includes heat leaking into the inner shield enclosure from an aperture, which we currently observe in Megastat cooldowns. (Note how the wafer cools down much faster than the current test mass, due to the very tiny volume.)
Quick log describing effort to recover leaky IR Labs dewar.
POC Steve Zoltowski - Stevez@irlabs.com
No fix yet, so I reached out to the vendor for more ideas.
I'll post another log with a summary of what leak(s) we suspect and what the current behavior of the leak is.
Fix Effort 1 - Valve Housing Seal (attachment 1)
The fix IR Labs recommended was to look at the seal of the valve housing.
- There was no sign of any issue with oxidation at any visible location.
- The fluoroelastomer valve seat looks like it has crept (plastically deformed in the shape of the sealing surface underneath) but not dramatically.
- The o-ring looked fine, but I wiped all surfaces and added a bit of Krytox to o-ring and valve seat.
- Photos - https://photos.app.goo.gl/oa4bCxm7xaWRJZDj7
Conclusion: No change to the behavior of the leak.
Fix Effort 2 - Feedthrough Seal
I had not yet explored the seal of the feedthrough to the chamber, except to note that the screws are tight.
- The feedthrough wire leads are plugged in within the chamber, and there is not enough slack on the leads to examine the o-ring. I removed the screws, found I had inadequate access, and replaced the screws.
Conclusion: Cleaning / reseating is deferred.
Fix Effort 3 - Window Seal
I had not yet explored the seal of the window to the chamber, except to note that the screws are tight.
- The window looked ok during removal, and I had no reason to be concerned.
- Removed the o-ring and wiped down o-ring and groove thoroughly with IPA.
- Applied Krytox to chamber-side sealing surface.
- Wiped down chamber sealing surface.
Conclusion: No change to the behavior of the leak.
The analytic equation for radiative heat transfer in a 2-surface enclosure (formed by the inner shield and Si wafer) is: Q = sigma*A_Si*(T_is^4 - T_Si^4) / [1/e_Si + (A_Si/A_is)*(1-e_is)/e_is].
This is dependent on properties of inner shield / cold plate, and as such the accuracy of wafer emissivity measurements will be limited by our uncertainty on the inner shield and cold plate emissivities.
As the ratio of areas A_Si/A_is approaches 0, the above equation simplifies to Q = sigma*e_Si*A_Si*(T_is^4 - T_Si^4). The terms related to the surrounding surface (inner shield) drop out of the equation, and so the smaller the ratio of areas, the less of an impact the inner shield / cold plate emissivities will have on the cooldown. Thus we should seek to minimize the ratio of areas to minimize the uncertainty on e_Si.
On the other hand, in this low area ratio limit, the thermal power transfer between the wafer and surrounding inner shield is proportional to the area of the wafer. As the attachments show, the 4" diameter wafer gets colder than the 2". This should be taken into account when determining in what temperature range we would like to fit the wafer emissivity. Larger wafer ---> colder. Do we care about emissivity measurements < 123K? If not, the 2" wafer gets us there.
I think this is a nice debugging find. It's not very robust to use the workstations as 24/7 script machines (as we have found out over the years).
Best is to install a conda env on the main framebuilder machine, and run the perpetual scripts there in a tmux session.
Once it's all set up, update the ATF Wiki with a description of how it's done. Workstations crash when users do stuff, so it's better if the data-getting script can run as a system service (e.g. via systemctl; see the sketch below).
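A minimal unit file for this would look something like the below (paths and names made up); install it as /etc/systemd/system/qil-data.service and run sudo systemctl enable --now qil-data.service:

[Unit]
Description=QIL data-getting script
After=network.target

[Service]
ExecStart=/opt/miniconda3/envs/qil/bin/python /opt/qil/get_data.py
Restart=on-failure
RestartSec=10
User=controls

[Install]
WantedBy=multi-user.target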
On Thursday 3/31 we opened up with a goal to diagnose and fix the heater connection (previously reporting an error). Upon opening, we realized the steel wires suspending the test mass had snapped, and the test mass was sitting on the cold plate [Attachments 1, 2]. The mirror had fallen off one face of the test mass, and the heater had also debonded from the other face [Attachments 3, 4]. Our suspicion was that the wires somehow got sliced by the metal zip tie whose function was to mechanically secure the heater in place. Since it did not serve this function anyway and caused more harm, we decided to ditch the zip tie moving forward.
The vacuum pump was started ~4:15pm on 4/1, followed by the cryocooler at 5pm.
On 4/5, I realized the CTC100 log did not contain any data from the weekend cooldown. I expected it would record the data locally even if the workstations were powered off, but this turned out not to be the case. I turned off the cryocooler at 11:45am, with the heater set to 295K. We will redo the cooldown once the chamber gets close to RT.
During the above investigation, I realized the CTC100 channel values were stale - the values are not being updated and all the channels are showing ~255K. None of the RTDs on the CTC100 front panel was reporting this temperature, so something is getting in the way of a proper telnet connection. The warmup waiting period will give me time to diagnose and debug the issue.
The 13.5" rigid legs would cost $1,891.
we don't really need Pneumatic legs. How much for rigid legs?
The current QIL optical tables are Thorlabs PTH503 (discontinued, replaced by PTH603 - also discontinued), which are 700mm tall and offer a closed pneumatic isolation system (passive isolation). This model is only available in 700mm and 600mm heights; the 600mm option would only be ~4 inches lower than our current legs.
Newport's equivalent model (SL Series Closed Pneumatic Vibration Isolators without Re-Leveling) comes in a 13.5-inch (343mm) height, which would drop our table height by 14 inches. This would be our ideal table height. The cost for 4 legs would come out to $3,047.
Pictures attached. WS1 and WS2 have been turned back on, since the replacement for the ceiling panels will not arrive for another few weeks according to Facilities.
Facilities will be returning on Monday 4/4 between 8-9 AM to remove all ceiling panels above the workstations in B265B (QIL). Replacement of the panels is not yet scheduled, but in the meantime the open ceiling will be covered and the workstations will still be accessible.
Muddy Waters is not new, but if the facility can fix it we'd take it.
This morning, facilities removed all the porous ceiling panels that had been soaked/damaged by water (in B265B: above WS1 and WS2, see Attachments 1+2; In B265A, see Attachment 3). Specifically in B265A, an enclosure was created (Attachment 4) and a dehumidifier was placed inside. All monitors/equipment underneath the panels were thoroughly covered, and the floors were swept up afterward.
No work was done above the North table in the QIL. I asked about it and facilities said they would look into it, but it wasn't on the schedule for today. A member of facilities also pointed out that the sink in the QIL was running black liquid (Attachment 5). It looks like soil/dirt entered the water pipes? This seemed to also be outside of their scope for today.
Facilities placed a blower and dehumidifier in B265B. I checked the airflow, and the air around the tables is comparatively still. The North table is covered and the South table is kept at positive pressure by its HEPA filters, so there should be little risk of dust being stirred up.
Flood photo album: https://photos.app.goo.gl/BZAG8DyQzFVTfMNz6 (This link is read-only for anyone without access to the account.)
Some photos of affected areas in B265A and B265B (elog shows some preview photos - click on PDF for full set).
Stephen did a great job cleaning up and drying up. Most equipment is powered off and we're leaving it off for a couple of days to dry completely. We'll want to check the stuff on the red lab cart thoroughly.
When I went into QIL today there was a lot of flooding from water dripping from the ceiling at several places in the lab. Images attached.
25 March 2022 (Friday) at 21:00, went to QIL to start warmup.
- Cryocooler was turned off at 21:21
- Heater output was disabled - it seemed there was an issue, and therefore I opted for passive warmup only.
Heater Issue Troubleshooting
Symptom: Heater output was enabled but reporting only 0.35 W and an "Err" indicator.
Symptom: When the output was disabled, a fan noise stopped; when the output was re-enabled, the same fan kicked back on. The fan was driving much harder than I had ever heard it before.
Symptom: The output indicated 0.35 W, but the test mass temperature was 66 K. Past heater power for steady state at 120 K was on the order of 1 W.
Per CTC 100 manual:
- pg 9 (100W heater outputs) indicates: "If the temperature of either PCB exceeds 60°C, the CTC100 automatically shuts off the corresponding output" which was not the case.
--> Apparently not an overtemperature situation.
- pg 9 (Hardware faults) indicates a list of error conditions which are accompanied by pop up windows.
--> This error had no pop-up window; not quite sure what to make of that, except that the controller doesn't think our issue is something it can identify.
- pg 29 (The system fan) notes that "The main system processor reads the desired fan speed from each I/O card and sets the fan to the fastest requested speed".
--> Suggests that the louder fan noise may have indicated a higher-temperature condition, even if not an over-temperature condition.
- pg 41 (Numeric) describes that in the typical numerical view of the data channels, the message "Err" that I saw on the heater channel indicates "an internal error has occurred".
--> No explanation of what an "internal error" is, but in this case I suspect it could reflect that the heater output is not coupled to the input Workpiece temperature.
Best Guess: the symptoms and the lack of any apparent controller-identified fault suggest that the heater may have debonded. I didn't look at the temperature history, so I'm not sure if there was a point where the heater was bonded to the test mass during this run.
Next Steps: We should open up and investigate.
The data from this cooldown is attached (labeled 03/19 - UTC time), compared to the run started on 03/10. In between these 2 cooldowns, the greased joints were replaced with indium joints on both sides of the copper bars (cold head to copper bar, copper bar to flexible strap).
Efforts to update the model (indium links) and analyze these runs are ongoing. Accurate analysis rests on understanding points 2 and 3 above, since the current model predicts a much larger steady-state offset between the cold head and inner shield.
I plan to devote some time to this analysis before planning another Megastat cooldown.
Yesterday we went back to fiddling with the green path. Soon after opening the green shutter and then switching the doubling cavity to 'AUTO' we were able to see 150 mW of green light. We were able to replicate this a couple of times yesterday.
Since we had earlier removed the green fiber from the fiber launch to clean its tip, the coupling into the fiber turned out to be quite poor. As can be seen in Attachment 1, Yehonathan pointed out that a lot of green light was being lost to the cladding due to poor coupling. He then played around with the alignment and finally was able to reach 65% coupling efficiency. This process seemed to involve a great amount of trial and error through several local maxima of the coupled power.
Attachment 2 shows that the coupling between the two fibers at the 532 nm input of the waveguide is quite poor (there is visible light being lost in the cladding). Furthermore, this light intensity decreases as we get closer to the waveguide, meaning this light is being dissipated in the fiber. Even at the 1064 nm output, where we expect to see squeezing, there is some remnant green light.
We wanted to test whether the green leakage reaching the PDs was causing additional noise. For this we just looked at the spectrum analyzer on the Moku (after amplifying 100x with the SR560) and saw no difference in the noise spectrum with the green shutter open or closed. However, we're not convinced by this measurement, since we were not able to find good-quality SMA cables for the entire path - moving the BNCs around seemed to change the noise. Also, near the end, we noticed some coupling between the two Moku channels while measuring the noise, which seemed to cause additional noise in one of the channels. We did not have sufficient time yesterday to probe this further.
Today I opened up Megastat to add indium between the copper bar joints, with the hopes of speeding up cooldown and informing the thermal model.
Outline of procedure:
The roughing pump was turned on at 7:20pm, followed by the cryocooler at 7:50pm.
Yesterday, we measured a bunch of noises.
We wanted to have as references the Moku noise and the PD noise, and then to measure the shot noise of the LO again.
Attachment 1 shows the Moku noise measured by just taking data with no signal coming in. We tried both the spectrum analyzer (SA) and the oscilloscope tools, with and without averaging, and the difference between the channels.
For some reason, the SA has a worse noise figure than the oscilloscope, and the difference channel doesn't give us any special common-mode rejection. Also, more averaging doesn't help much, because we are already taking 1.2ms of data, which is much longer than the 1/RBW = 0.2ms used here.
From now on we use the oscilloscope as the spectrum analyzer, and we refer to its noise as the Moku noise floor.
Moving on, we tried to measure the PD dark noise. Given that the PD dark noise floor is ~6nV, we don't expect to resolve it with the Moku without amplification. Attachment 2 shows that indeed we couldn't.
We then opened the LO shutter. We measured with a power meter 1mW and 1.15mW impinging on the PDs. The voltage readings after the preamps were 1.66V for the white fiber and 1.93V for the red fiber. These values suggest responsivities of 0.830 and 0.834 A/W, respectively.
The PDs were measured using the Moku scope and subtracted digitally with a small gain adjustment (0.93*ch1 - 1.07*ch2) between the channels. The result is shown in Attachment 3 together with the expected shot noise level.
1. There is not enough clearance for detecting squeezing.
2. Expected shot noise level is still too high. Does the 2kohm preamp gain go all the way above 1MHz??
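For a sanity check on point 2, here's the expected level from the numbers above (1 + 1.15 mW detected, 0.83 A/W, and assuming the 2kohm transimpedance holds at these frequencies):

import numpy as np

e = 1.602e-19                    # electron charge [C]
G = 2e3                          # preamp transimpedance [ohm]
I_dc = 0.83 * (1e-3 + 1.15e-3)   # total DC photocurrent [A]

shot_asd = np.sqrt(2 * e * I_dc)       # shot-noise current ASD [A/rtHz]
print(G * shot_asd * 1e9, "nV/rtHz")   # -> ~48 nV/rtHz at the Moku inputs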
The heater was turned on at 2:05PM 3/14, with a setpoint of 123K.
The cryocooler was turned off at 10:50AM 3/15, and the heater setpoint was raised to 275K to aid in warmup.
Good to see this experiment being revived.
1. The design of this laser had a number of flaws, and one of them is this sensitivity to back-reflections at 532 nm. I mostly just disabled the doubler's lock and closed the shutter for good measure; probably best not to leave it flickering around in an unstable state when you're away.
2. I built in the inversion in the second channel to give myself the option to electronically subtract: something that didn't end up being very practical compared to just digitally recording channels and subtracting in post.
3. Subtracted noise spectra
We should chat some time on Zoom about more details (Rana can forward my contact info). Hope this is enough to go on for at least the homodyne part of the experiment.
Our goal for this week's cooldown was to tape PEEK sheets fully around the outer shield lip, to leave no bare aluminum contact area with the cold plate. Secondly, we wanted to diagnose and mend the issue preventing the heater from outputting any power. The full procedure was:
1. We allowed the chamber to vent, unbolted the chamber lid and outer/inner shield lids.
2. We noticed that the solder joints between the heater body and its leads had debonded [Attachment 1].
a. The suspension frame was taken out of the chamber and the test mass was removed from the frame.
b. In doing so, we noticed that the varnish joining 1. the heater to cigarette paper and 2. cigarette paper to Si was debonding in certain areas, likely due to Aquadag not being fully removed from the test mass in the area of contact [Attachment 2].
3. We wrapped the copper leads a few times around the heater "wings" and re-applied solder [Attachments 3, 4].
4. We cleaned off Aquadag from a greater area on the test mass and applied varnish to re-bond the heater [Attachments 5, 6].
a. We let the varnish cure for ~2 days with a small weight on top.
5. The outer shield was removed from the chamber (without unbolting/removing the inner shield), and a single layer of PEEK sheet was taped the whole way around the bottom lip [Attachment 7].
6. We re-inserted the outer shield and passed the RTDs back through.
a. We reattached a few RTD lead pins/sockets that had broken off in handling.
7. Lastly, we placed the test mass back into the suspension and into the chamber.
8. Close out [Attachments 8, 9]
The vacuum pump was engaged and the cryocooler was turned on at ~3:30PM.
Analysis of 02/24 cooldown data
Attachment 1 shows the cooldown data for this run. Attachment 2 compares this run to the previous 02/11 run; in between the two runs, insulating PEEK sheets were taped to 2 locations along the bottom rim of the outer shield.
1. The inner shield, outer shield, and test mass all cool slightly faster initially in this run (02/24) compared to 02/11. This effect is seen until ~35 hrs, after which:
2. The outer shield starts to warm up and re-equilibrate. It seems the radiative heating from the chamber strongly kicked in once the outer shield was sufficiently cold.
The best fit for the data can be seen in Attachment 3. Note the addition of the copper bar model, which considers radiative heating from the chamber at RT.
1. The outer shield is still getting quite cold, so we have to consider increasing the insulation from the PEEK sheets (either adding more layers or additional points of contact), or another approach altogether.
2. There are still obscure effects at play in early cooldown that the model is not considering. I have gone back to the drawing board and am trying to fit the raw inner shield data to a sum of exponential terms (see the sketch below), in hopes of narrowing down the cooling mechanisms that could be affecting the data.
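A minimal version of that fit (scipy; the time/temperature arrays, initial guesses, and the choice of 2 terms are placeholders):

import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, A1, tau1, A2, tau2, T_inf):
    return T_inf + A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

p0 = [100.0, 5.0, 50.0, 30.0, 100.0]   # amplitudes [K], time constants [hr], floor [K]
popt, pcov = curve_fit(two_exp, t_hr, T_inner_shield, p0=p0, maxfev=10000)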
- Check on the heater leads during next opening and perform tests to ensure test mass is warming up
- Devise insulation solutions for outer shield to decrease system heat load
- Consider using indium foil to increase thermal conductance between joints along cooling pathway
Since we had left the lasers ON with the shutters closed, we wanted to see if the powers measured after opening the shutter would be similar to what they were when we left. We realized that opening and closing the green shutter destabilizes the doubling cavity (the FI is after the shutter, and the shutter does not seem to be a good dump), which in turn changes the SHG crystal temperature (possibly because of the power fluctuation within the crystal). Re-opening the shutter requires some tuning of the temperature and offset to recover similar output power. Finally, after some tuning, we were able to see 156 mW of green light.
We made a list of random questions and plans for the future. We then went down and found answers to some of them:
1. Why is there no Faraday isolator in the 1064nm beam path? (edit: turns out there is, but inside the laser, see pictures in this elog).
2. Do the fibers joined by butt-coupling have similar mode field diameters? If not, it could explain many loss issues.
a. In the green path we find that, according to the SPDC datasheet, the 532nm fiber (coastalcon PM480) has an MFD of 4um, while the input Thorlabs fiber (P3-488PM-FC2) coupled to it has an MFD of 3.3um. This mismatch gives a maximum coupling efficiency of 96% (see the check after this list). OK, not a big issue.
b. At the 1064nm output, the SPDC fiber is PM980 with an MFD of 6.6um, while the BS fiber's MFD is 6.2um, which is good.
3. What is the green fiber's laser damage threshold? According to Thorlabs, it is theoretically 1MW/cm^2 and practically 250kW/cm^2 for a glass-air interface. With a 3.3um MFD, the theoretical damage threshold is ~80mW and the practical one ~20mW. That doesn't sound like a lot, especially given that we could only get 50% coupling efficiency. How much is needed for observable squeezing? There is the possibility of splicing the fiber to an end cap to increase its power handling capability if needed.
4. Is stimulated Brillouin back scattering relevant in our experiment? According to this rp photonics article not really.
5. How much green light is left after the dichroic mirrors? Is it below the shot noise level? Should check later.
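Quick checks of the numbers in 2a and 3 (standard Gaussian mode-overlap formula, and taking MFD/2 as the beam radius for the damage estimate):

import numpy as np

w1, w2 = 4.0 / 2, 3.3 / 2                    # mode-field radii [um]
eta = (2 * w1 * w2 / (w1**2 + w2**2))**2     # max overlap of mismatched Gaussian modes
print(f"max coupling: {eta:.1%}")            # -> 96.4%

area = np.pi * (3.3e-4 / 2)**2               # mode area [cm^2] of the 532nm fiber
print(f"P_dmg ~ {1e6 * area * 1e3:.0f} mW")  # 1 MW/cm^2 -> ~86 mW (the ~80mW quoted above)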
In addition, we found that the green fiber input and the 1064nm fiber output from the SPDC were very dirty! We cleaned them with a Thorlabs universal fiber connector cleaner.
The heater was turned on on Tue, 3/1 at 4pm, with control setpoint 123K.
*UPDATE: After checking a few hours later, I noticed the test mass temperature hadn't risen, and the heater power was reading nan. When I initially turned the heater on, I watched the power ramp up to 22W (max power limit) and the test mass temperature start to rise. I wonder if somehow the lead pins shorted after it was turned on. For now I have turned the heater output off and will check on this after warmup.
The cryocooler was turned off at 5:45pm.
Yehonathan brought over 532nm/1064nm laser goggles from the 40m.
Our next step would be to measure the LO shot noise.
The goals for this cooldown are:
On Tuesday 2/22, we opened up the bottom conflat of the T to check on the RTD spring-clamped to the cold head. I re-inserted the RTD and tightened the nut further than last run, and it seemed much more secure [Attachment 1]. I re-inserted the mylar "cap" covering the cold head [Attachment 2].
In the chamber body, we carefully passed the RTD leads through the inner shield and outer shield apertures to remove the outer shield. We did this without having to unclamp/remove the inner shield or any components inside, to preserve consistency with the last cooldown. A few pins were damaged in this process (from inner shield).
Once the outer shield was removed, we used kapton tape to secure strips of PEEK sheets to its bottom rim [Attachments 3, 4]. The strips were taped at the 2 points along the rim associated with the most wobble, with hopes of stabilizing the shield as much as possible.
On 2/23 I repaired the pins previously damaged. I also added kapton tape labels to the socket leads, corresponding to the shapes found on the RTD leads (semi-circle example in Attachment 5). This way it will be much easier to match the right pins and sockets in the future.
I then bolted up the chamber (close-out pictures can be found on the QIL Google photo dump). The vacuum pump was turned on at 5:45pm, and the cryocooler was turned on at 7:08pm.
On Friday, we came down to QIL to poke around the WOPO setup. The first thing we noticed is that the setup on the wiki page is obsolete and in reality, the 532nm light is coming directly from the Diablo module.
There were no laser goggles for 532nm so we turned on the 1064nm (Mephisto) only. The pump diode current was ramped to 1A. We put a power meter in front of Mephisto and opened the shutter. Rotating the HWP we got 39mW. We dialed it back so that 5mW is coming out of the polarizer.
The beam block was removed. We disconnected the LO fiber end from the fiber BS - there is light coming out! We connected a power meter to the fiber end using an FC/PC fiber adapter plate; it read 0.7mW. By aligning the beam into the LO fiber we got up to 3.3mW.
We connected the BHD PDs to the scope on the table to observe the subtraction signal. Channel 2 was negative so we looked at the sum channel.
Time ran out. We ramped down the diode current and turned off Mephisto.
Next time we should figure out the dark current of the BHD and work toward observing the shot noise of the LO.
The heater was turned on on Wed, 2/16 at 11:30am, with control setpoint 123K. The lower power limit was verified to be 0W.
The cryocooler was turned off on Thu, 2/17 at 12pm. The heater control setpoint was changed to 295K for warmup. The plan is to address the wacky cold head RTD on Monday.
I've been assuming that the inner shield can be treated as a point mass, but perhaps the thinness makes for a significant delay between the temperature of the cold plate and the inner shield during the initial cooldown.
Could you model the cold shield to estimate what the temperature gradient would look like during the rapid cooldown? Not full 3D, but something approximate that takes into account the conductivity, thinness, and heat capacity.
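As a zeroth-order check before any real modeling (room-temperature aluminum properties assumed - both k and c change a lot below ~100 K - and the length is a guess):

k, rho, c = 200.0, 2700.0, 900.0   # Al: conductivity [W/m/K], density [kg/m^3], specific heat [J/kg/K]
L = 0.3                            # cold plate to top of shield [m] (guess)

alpha = k / (rho * c)              # thermal diffusivity [m^2/s]
tau = L**2 / alpha                 # 1D diffusion timescale along the wall
print(f"alpha = {alpha:.1e} m^2/s, tau ~ {tau / 60:.0f} min")   # -> ~18 min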
*Note: The RTD spring-clamped to the cold head gave spazzy readings for this cooldown, so the last cooldown's cold head temperature data was used instead for reference.
Looking at the data, there are some initial noteworthy observations:
It could be that the resistances of the re-bolted joints somehow increased significantly, compensating for the lowered resistance of the bar, but this doesn't seem too likely. The more likely answer is that the model overestimated the original resistance of the bulk of the copper bar relative to other components/joints in the chain. This means more work needs to be done, and hopefully a more realistic model will also resolve the discrepancy in the early cooldown of the inner shield data.
Attachment 2 shows the best fit for the new cooldown.
In the plots comparing data and models, can you use the legend to indicate which is which? e.g. use dots for data and solid lines for models, and then label them as such in the legend. It would also be nice to include error bars on the temperature measurements; I think there's a python way to plot this as a shaded region as well.
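e.g. something like this (variable names made up):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(t, T_data, ".", ms=3, label="inner shield (data)")
ax.plot(t, T_model, "-", label="inner shield (model)")
ax.fill_between(t, T_data - T_err, T_data + T_err,
                alpha=0.3, label="measurement uncertainty")   # shaded error band
ax.set_xlabel("time [hr]")
ax.set_ylabel("temperature [K]")
ax.legend()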
The heater was turned on at 3:13pm on Friday 2/4.
We specified a set temperature of 123K. However, the CTC100 PI control included a 1 W lower limit on the input to the heater, so there was a steady load of 1 W applied to the Silicon Workpiece over the weekend.
At 16:01 the cryocooler was turned off to start the warmup.
The CTC100 PI control was configured with a setpoint of 250 K on the Workpiece RTD, to aid in the warmup, and an allowable power range from 0 W to 22 W.
The double copper bar configuration pictured in Attachment 1 has been implemented. We completed the updates within work sessions on Thursday and Friday. Here's a pseudo-log:
All images are (or soon to be) posted to the QIL Photo Dump.
Attached are best fits for the cooldown runs on 01/14 and 01/31. The setup for both cooldowns can be found in the previous ELOGs. We noticed that the outer shield did not cool as significantly on 01/31 as on 01/14, hinting that there might have been more thermal contact between the outer shield and the cold plate / copper bar.
The model considers the resistances of the following conductive elements (and uses these resistances as fit parameters):
These additions helped the model more closely resemble our recorded data, with a few exceptions:
- At early cooldown times, the model seems to be underestimating the heat load on the inner shield and outer shield.
- The best fit was performed on the inner shield and outer shield data (to fine tune elements of the cold linkage), so the test mass fit is not optimized. (This will be performed next to refine emissivity predictions.)
To identify bottlenecks in the cold linkage, I used the 01/31 model and tweaked the resistances to see which would provide the largest gains in cooldown. The results from such tweaks are below:
From these observations, it seems like the greased joints are thermally efficient, and the bulk area of the copper bar appears to be the largest bottleneck.
As discussed during the 21 Jan 2022 meeting, the next cryostat run will seek the fastest radiative cooling (again, see QIL/2706) through the following configuration choices:
Actions completed 27, 28, and 31 Jan 2022
Model updates required to reflect new configuration:
There's also a convention to write "50 kgf" to designate "kilograms of force" (implying the same conversion Rana describes, multiplying by g). I see kgf enough in the mechanical engineering world that I wouldn't have been confounded, so I wanted to pass that along.
I think it's least confusing to just replace 50 kg g with 500 N. Writing 50 g can be misleading; it seems like 50 grams.