Attachment 1 is a geometric diagram that reflects the current state of the ITM cooldown model, introduced in . The inner shield is assumed to be held at 77K for simplicity, and 2 heat sources are considered: laser heating, and radiative heating from the room-temperature snout opening. The view factor Fij between the snout opening and test mass (modeled as 2 coaxial parallel discs separated by length L; equation from Cengel, Heat and Mass Transfer) is calculated to be 0.022. The parameters used in the model are noted in the figure.
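For reference, the coaxial parallel disc view factor relation from Cengel takes only a few lines. The radii and separation below are illustrative stand-ins, not the actual model parameters (those are in the figure):

```python
import math

def viewfactor_coaxial_discs(r1, r2, L):
    """View factor F12 from disc 1 to a coaxial parallel disc 2 at
    separation L (standard relation, e.g. Cengel, Heat and Mass Transfer)."""
    R1, R2 = r1 / L, r2 / L
    S = 1.0 + (1.0 + R2**2) / R1**2
    return 0.5 * (S - math.sqrt(S**2 - 4.0 * (R2 / R1)**2))

# Illustrative numbers only: a 2" radius snout opening facing a
# 150 mm diameter test mass face, 0.67 m away.
F = viewfactor_coaxial_discs(0.0508, 0.075, 0.67)
```

A useful sanity check on any view factor implementation is reciprocity, A1*F12 = A2*F21.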
Attachment 2 is a simplified diagram that includes the heating/cooling links to the test mass. At 123K, the radiative cooling power from the inner shield (at 77K) is 161 mW. The radiative heating from the snout opening is 35 mW, and the laser heating (constant) is 101.5 mW. Due to the tiny view factor between the snout opening and the test mass, most of the heat emitted by the opening does not get absorbed.
The magnitudes of heating and cooling power can be seen in Attachment 3. Lastly, Attachment 4 plots the final cooldown curve given this model.
My next step is to add the outer shield and fix its temperature, and then determine the optimal size/location of the inner shield to maximize cooling of the test mass. This question was posed by Koji in order to inform inner shield/outer shield geometric specs. Then, I will add a cold finger and cryo cooler (conductive cooling). Diagrams will be updated/posted accordingly.
Building on , I added a copper cold finger to conductively cool the inner shield, instead of holding the inner shield fixed at 77K. The cold finger draws cooling power from a cryo cooler or "cold bath" held at 60K, for simplicity. I added an outer shield and set its temperature to 100K. The outer shield supplies some radiative heating to the inner shield, but blocks out 295K heating, which is what we want. The expanded diagram can be seen in Attachment 1.
I wanted to find the optimal choice of inner shield area (AIS) to maximize the radiative cooling to the test mass. I chose 5 values for AIS (from ATM to AOS) and plotted the test mass cooldown for each in Attachment 2. The radiative coupling between the inner shield and test mass is maximized when the ratio of the areas, ATM/AIS, is minimized. Therefore, the larger AIS, the colder the test mass can be cooled. Even though choosing AIS close to AOS increases the coupling between the 2 shields, the resulting heating from the outer shield is negligible compared to the enhancement in cooling.
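The area-ratio argument comes from the two-gray-surface enclosure exchange formula, where the A_TM/A_IS term in the denominator suppresses the coupling. A minimal sketch (emissivities and areas here are placeholders, not the model's values):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_cooling(T_tm, T_is, A_tm, A_is, eps_tm=0.9, eps_is=0.3):
    """Net radiative power (W) drawn from the test mass by the inner
    shield, two-gray-surface enclosure formula. Emissivities are
    illustrative placeholders."""
    denom = 1.0 / eps_tm + (A_tm / A_is) * (1.0 / eps_is - 1.0)
    return SIGMA * A_tm * (T_tm**4 - T_is**4) / denom

# As A_is grows, the A_tm/A_is term shrinks and the cooling power
# saturates at the eps_tm-limited value.
for A_is in [0.08, 0.12, 0.22, 0.5]:
    print(A_is, radiative_cooling(123.0, 77.0, 0.1, A_is))
```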
I chose AIS = 0.22 m2 to model the inner shield and test mass cooldown in Attachment 3. The test mass reaches 123 K at ~ 125 hours, or a little over 5 days. I have pushed the updated script which can be found under mariner40/CryoEngineering/MarinerCooldownEstimation.ipynb.
I used the same model in  to consider how test mass length affects the cooldown. Attachment 1 plots the curves for TM length=100mm and 150mm. The coupling between the test mass and inner shield is proportional to the area of the test mass, and therefore increases with increasing length. Choosing l=100mm (compared to 150mm) thus reduces the radiative cooling of the test mass. The cooldown time to 123K is ~125 hrs or over 5 days for TM length=150mm (unchanged from ), but choosing TM length=100mm increases this time to ~170 hrs or ~7 days. (Note that these times/curves are derived from choosing an arbitrary inner shield area of 0.22 m2, but the relative times should stay roughly consistent with different IS area choices.)
I reran the cooldown model, setting the emissivity of the inner surface of the inner shield to 0.7 (coating), and the emissivity of the outer surface to 0.03 (polished Al). Previously, the value for both surfaces was set to 0.3 (rough aluminum).
Attachment 1: TM cooldown, varying area of the inner shield. Now, the marginal improvement in cooldown once the IS area reaches 0.22 m2 is negligible. Cooldown time to 123K is ~100 hrs, just over 4 days. I've kept IS area set to 0.22 m2 moving forward.
Attachment 2: TM/IS cooldown, considering 2 lengths for the test mass. Choosing l=100mm instead of 150mm increases cooldown time from ~100 hrs to ~145 hrs, or 6 days.
Here I describe the current radiative cooldown model for a Mariner test mass, using parameters from the most recent CAD model. A diagram of all conductive and radiative links can be seen in Attachment 1. Below are some distilled key points:
All parameters have been taken from CAD, with the exception of:
Attachment 2 contains the cooldown curves for the system components. With the above assumptions, the test mass takes ~59hrs to reach 123K, and the final steady-state temperature is 96K. (*This was edited - found a bug in previous iteration of code that underestimated the TM cooldown time constant and incorrectly concluded ~36hrs to reach 123K. The figures have been updated accordingly.)
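The cooldown curves come from integrating a lumped-capacitance energy balance, dT/dt = -P_net/(m*cp). A toy sketch of that integration is below; all parameters are placeholders rather than the CAD values, and it uses a constant heat capacity where the real model must account for silicon's cp falling steeply with temperature:

```python
import numpy as np

SIGMA = 5.670e-8  # W m^-2 K^-4

def cooldown_curve(T0=295.0, T_shield=77.0, A=0.1, eps=0.5,
                   m=4.0, cp=700.0, P_heat=0.14, hours=100, dt=60.0):
    """Forward-Euler integration of dT/dt = -(P_cool - P_heat)/(m*cp).
    Every number here is a stand-in, not a Mariner parameter."""
    n = int(hours * 3600 / dt)
    T = np.empty(n)
    T[0] = T0
    for i in range(1, n):
        P_cool = SIGMA * eps * A * (T[i-1]**4 - T_shield**4)
        T[i] = T[i-1] - dt * (P_cool - P_heat) / (m * cp)
    return T

T = cooldown_curve()  # settles toward the steady state where P_cool = P_heat
```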
Attachment 3-6 are power budgets for major components: TM, IS, Cage, OS (can produce for UM if there's interest). For each, the top plot shows the total heating and cooling power delivered to the component, and the bottom plot separates the heating into individual heat loads. I'll discuss these below:
The next post will describe optimization of the snout length/radius for cooldown.
Here is a more detailed analysis of varying the length and radius of the snout.
Attachment 1 plots the heat load (W) from the snout opening as a function of temperature, for different combinations of snout length and radius. The model using the CAD snout parameters (length=0.67m end-to-end; radius=5.08cm) results in ~0.3W of heat load at steady state. The plot shows that the largest marginal reduction in heat load is achieved by doubling the length of the snout (green curve), which cuts the heat load by more than two-thirds. This validates the choice of snout length used in the previous ELOG entry analysis. The bottom line is that the end-to-end snout length should be on the order of 1 meter, if physically possible.
The next marginal improvement comes from reducing the radius of the snout. Attachment 1 considers halving the radius in addition to doubling the length (red curve). A snout radius of an inch is quite small and might not be feasible within system constraints, but it would reduce the snout heat load to only 25mW at steady state (along with the length doubling).
The cooldown model resulting from optimizing the parameters of the snout (length=1.33m, radius=2.54cm) is shown in Attachment 2. The test mass reaches 123K in ~57hrs - only 2 hours faster than the case where only the snout length is doubled (see previous ELOG entry) - and the test mass reaches steady state at 92K - only 6K colder than in the previous case. This could discourage efforts to reduce the radius of the snout at all, since increasing the length provides most of the marginal gains.
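The length/radius scaling can be illustrated with just the direct line-of-sight exchange between the opening and the test mass face (coaxal disc view factor times the opening area). Note this counts only the direct term, so it sits well below the full model, which also sees the snout walls; the fixed parameters are illustrative:

```python
import math

SIGMA = 5.670e-8  # W m^-2 K^-4

def snout_direct_load(length, radius, r_tm=0.075, T_room=295.0, T_tm=123.0):
    """Direct line-of-sight heat load (W) from a room-temperature snout
    opening onto the test mass face, both treated as coaxial parallel
    discs. Underestimates the full model (no wall reflections)."""
    R1, R2 = radius / length, r_tm / length
    S = 1.0 + (1.0 + R2**2) / R1**2
    F = 0.5 * (S - math.sqrt(S**2 - 4.0 * (R2 / R1)**2))
    A_open = math.pi * radius**2
    return A_open * F * SIGMA * (T_room**4 - T_tm**4)

base    = snout_direct_load(0.67, 0.0508)   # CAD length and radius
longer  = snout_direct_load(1.33, 0.0508)   # doubled length
thinner = snout_direct_load(1.33, 0.0254)   # doubled length, halved radius
```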
The attached plot (upper) compares the heat load delivered to the test mass from various snout lengths (end to end), as a function of test mass temperature. (At steady state, our point of interest is 123K.) Note that these curves use the original CAD snout radius of 5.08cm (2").
The greatest marginal reduction in heat load comes from increasing the end-to-end snout length to 1m, as concluded in the previous ELOG. This drops the heat load from just under 0.5W (from snout length 0.5m) to 0.15W. Further increase in snout length to 1.5m drops the heat load to well under 0.1W. After this point, we get diminishing marginal benefit for increase in snout length.
The effect on the TM cooldown curve can be seen in the lower plot. A snout length of 1m drops the steady-state TM temperature to under 100K. Then, like above, increasing the length to 1.5m makes the next non-negligible impact.
Does this work? Is this insane?
All parameters are temporary:
Test mass size: D150mm x L140mm
Intermediate mass size W152.4mm x D152.4mm x H101.6mm
TM Magnets: 70mm from the center
Height from the bottom of the base plate
- Test mass: 5.0" (127mm) ==> 0.5" margin for the thermal insulation etc (for optical height of 5.5")
- Suspension Top: 488.95mm
- Top suspension block bottom: 17.75" (450.85mm)
- Intermediate Mass: 287.0mm (Upper pendulum length 163.85mm / Lower pendulum length 160mm)
- IM OSEMs: Top x2 (V/P)<-This is a mistake (Nov 3 fixed), Face x3 (L/Y/P), Side x 1 (S)
- TM OSEMs: Face x4
- OSEM insertion can be adjusted with 4-40 screws
- EQ Stops / Cradle (Nov 3 50% done)
- Space Consideration: Is it too tight?
- Top Clamp: We are supposed to have just two wires (Nov 3 50% done)
- Lower / Middle / Upper Clamps & Consider installation procedure
- Fine alignment adjustment
- Pendulum resonant frequencies & tuning of the parameters
- Utility holes: other sensors / RTDs / Cabling / etc
- Top clamp options: rigid mount vs blade springs
- Top plate utility holes
- IM EQ stops
Discussion with Rana
- How do we decide the clear aperture size for the TM faces?
- OSEM cable stays
- Thread holes for baffles
- Light Machinery can do Si machining
- Thermal conductivity/expansion
- The bottom base should be SUS... maybe others Al except for the clamps
- Suspension eigenmodes separation and temperature dependence
# Deleted the images because they are obsolete.
Some more progress:
- Shaved the height of the top clamp blocks. We can extend the suspension height a bit more, but this has not been done.
- The IM OSEM arrangement was fixed.
- Some EQ stops were implemented. Not complete yet.
The first entry of the Mariner elog post
Continuing with my casual exploration of the Thorlabs slides, I heated them from off --> low --> med --> high, with 10 minutes on each setting. The only pressure I applied was 3 larger glass slides, and that was only to flatten out the copper that the smaller, bonded slides sat on top of (so the contact with the heating plate was even).
The heat made the bonded area smaller, but it did not break. As the slides cooled, the bond area increased slightly but not back to the original size. Next I will try this with slower heating and additional pressure.
While finalizing my work plan for the quarter, I decided to look at the Thorlabs slides. This was instructive because they highlighted the troubles I will have working with silicon. They are fragile, and their small, thin size makes cleaning and manipulating them (without contamination) much more difficult compared to the glass slides from before.
I tried cleaning and bonding them the same way as the larger slides. Rubbing them together did not work like with the larger slides, but that may also be a function of being more careful, so as not to break them. Once I cleaned them, it only took a tap from my finger to get the center to bond, but the bonded surface area still did not spread out like it did in the YouTube videos (http://youtu.be/se3K_MWR488?t=80). By pressing down around the bonded area, I could expand it slightly. Note that I did crack one slide in the process of doing this, as shown in the pictures.
Because the slides are so thin, I think they will benefit greatly from being left under a heavy object, although it may be difficult to put the weight on the slides without them breaking.
- Following what seemed like a good, intuitive suggestion from Anchal, I implemented a parameter called Ncopies, which takes a stack of m-bilayers and copies it a few times. The idea here was to have stacks where m is the least common multiple of the wavelength fractional relation e.g. m(2/3) = 6 so as to regain some of the coherent scattering in a stack. Unfortunately, this didn't work as planned for m=6, 3, and 2.
- While the target transmissivities are reached with comparably fewer layers using this method, the sensitivity and the surface E field are affected and become suboptimal. The good thing is we can do the old way just by setting Ncopies = 0 in the optimization parameters yaml file.
- An example of such a coating is in Attachment 1.
- I decided to just add the 'varL' scalar cost to the optimizer. Now we minimize for the variance in the coating stack thicknesses. As a target I started with 40% but will play with this now.
I've been testing out the extended body lagrangian models and I'm trying to understand the ground motion and force coupling to the test mass displacement. I've compared the two point-mass model to the extended model and, as expected, I get very similar results for the ground coupling. Attachment 1 shows the comparison and, aside from more aggressive damping of the point-mass model making a small difference at high frequency, the two models look the same. If I look at the force coupling, I get a significantly different result (see attachment 2). I think this makes sense because in the point-mass model I am driving purely horizontal displacement as there is no moment of inertia. However, for the extended body I drive the horizontal position of the centre of mass, which then results in an induced rotation as the change propagates through the dynamics of the system. To obtain a consistent result with the point-mass model, I would need to apply a force through the CoM as well as a counteracting torque to maintain a purely horizontal displacement of the mass. What I am wondering now is, what's the correct/more convenient way to consider the system? Do I want my lagrangian model to (a) couple in pure forces through the CoM and torques around the CoM and then find the correct actuation matrix for driving each degree of freedom in isolation, or (b) incorporate the actuation matrix into the lagrangian model so that the inputs to the plant model are a pure drive of the test mass position or tilt?
I've been having a look at the transfer functions for the translation and pitch of both masses. I'm attaching the plot of all input-to-output transfer functions of interest so far. Here I've identified the pitch resonances of the two masses (one each) as well as the two pendulum modes. I need to now investigate if they occur in the correct places. I have confirmed the DC response by directly solving the statics problem on paper.
I've checked the validity of my state space model in a couple of ways so that we have confidence in the results that it gives. I've checked the DC gain of the transfer functions where it is non-zero. I did this by solving the static balance of forces problem in the extended body model by hand to get the DC CoM position as well as the pitch angle of both masses. In the previous ELOG entry I didn't quite do this for all transfer functions so here I completed the check. My values agree with the model's values to within 10% at the worst end and to within 0.1% at the best end. I performed a second check to see if the frequencies occur in the correct places by considering the case of very low coupling between the different resonant modes. It's difficult to check this in the case where the modes are strongly coupled (for example length-pitch is strong or the two pitch modes are close together) but if I sufficiently separate them, I get very good agreement between my analytic approximation and the state space model.
The model can easily be converted from one that gives motion in X and RY into one that gives motion in Y and RX. Running the model for both directions gives the following list of resonances (note pendulum modes in X and Y direction are identical):
Given that I think the model seems to give sensible values, I've pushed the updated model to the GitLab repository. It is now possible to quickly change the parameters of the suspension and very quickly see the corresponding shift in the resonances. To change the parameters, open the plain text file called 'params' and change the values to the new ones. Afterwards, run the file 'ss_extended.py', which will solve the state space model, save the resulting ABCD matrices to a folder and print out the values of the resonances to terminal.
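As a flavor of the kind of output ss_extended.py prints, the two pendulum modes of a minimal linearized point-mass double pendulum fall out of a small eigenvalue problem. The masses and lengths below are placeholders, not the values in the 'params' file:

```python
import numpy as np

# Linearized planar double pendulum (point masses), a toy stand-in for
# the extended-body model. All values are placeholders.
m1, m2 = 3.0, 4.0        # intermediate mass / test mass [kg]
l1, l2 = 0.164, 0.160    # upper / lower pendulum lengths [m]
g = 9.81

T1, T2 = (m1 + m2) * g, m2 * g                 # static wire tensions
K = np.array([[T1/l1 + T2/l2, -T2/l2],         # linearized stiffness
              [-T2/l2,         T2/l2]])
M = np.diag([m1, m2])

# Undamped modes: eigenvalues of M^-1 K are omega^2
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
freqs = np.sort(np.sqrt(w2.real)) / (2 * np.pi)
print(freqs)  # the two pendulum mode frequencies [Hz]
```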
The goal of "v2.X test #3" is to heat the hot plate to 200°C over the course of 20 minutes, and with v2.6, I have effectively succeeded. There will likely be more issues once I try, for example, to heat the hot plate to 300°C over the course of 60 minutes, but for now, I want to stick with lower temps and shorter times while I work out the kinks. Now that I understand the difficulties of PWMing a hot plate, adapting the code to combat future issues should be straightforward.
To summarize my code, I control the heating rate by cycling the hot plate's power on and off for some % of 1000ms. In other words, the hot plate is on 300ms then off 700ms then on 300ms etc., where the relation between target heating rate and hot plate on time is based on previously gathered data. This produces a nice, linear(ish) temperature increase up until a certain temperature, at which point it plateaus. In the previous versions, the way I compensated for this was by increasing the on time by 5ms for every cycle after 150°C. This did not work for slower heating rates, so the newer versions changed this by making the 5ms and 150°C vary depending on the target heating rate. The exact value is a linear extrapolation from previous data. This is imperfect, but I do not think perfection will ever be possible with the current equipment, and I think I have reached something good enough that now I can finally apply it to my optically contacted samples.
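The scheme above boils down to an on-time schedule per 1000 ms PWM period. Here is a Python sketch of that mapping; every constant (slope, knee temperature, boost) is invented for illustration, since the real values come from the calibration data:

```python
def on_time_ms(target_rate, temp_c,
               base_slope=1250.0, knee_c=150.0, boost_ms_per_degc=5.0):
    """Hypothetical on-time (ms) per 1000 ms PWM period.

    base_slope maps target heating rate (degC/s) to on-time, e.g.
    0.08 degC/s -> 100 ms; above the knee temperature the on-time is
    nudged upward to fight the plateau. All constants are placeholders."""
    on = base_slope * target_rate
    if temp_c > knee_c:
        on += boost_ms_per_degc * (temp_c - knee_c)
    return min(max(on, 0.0), 1000.0)   # clamp to the 1000 ms period
```

The actual Arduino code increments the on-time per cycle rather than per degree; this is just the shape of the compensation.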
Since I have finished this "stage" of work, for completeness, I am including all of the code, data*, and graphs involved so far.
*the .txt data files are in the cycle_vX_graphs folders; these folders also have the Jupyter notebooks I used for graphing the data
I realized that, after changing so much from v2.3 to 6, I should check that my first two tests produce correct results with the latest version. This was good because all three tests turned out to be inaccurate, as they all fell roughly 10°C short. However, they were very precise. For all three, the final temperature was 193.15±1.5°C.
My two corrections ended up being huge overshoots. The drop off time (100°C) is correct, but the default rate increase that worked in the other cases is not working at all here.
I tried increasing the temperature by 180°C over 20 minutes. As suspected, it did not quite reach the target temperature because the temperature started to drop off around 100°C instead of 150°C, as the program expected. This should be an easy adjustment, since it is just a matter of increasing the duration of the cycle at an earlier time.
Here are the graphed results from yesterday's tests, both by themselves and overlaid with the previous tests. I am satisfied with my code; it has given me the (roughly) linear heat increase that I desired. The only last thing I would like to test is heating over a significantly slower time.
Before trying the PWM on actual samples, I wanted to make one final attempt at improving my code (labeled as v2.1). This change appears to have 1) broken the code regulating the basic heat cycling process and 2) caused the hot plate to heat up far, far too quickly. Since the thermometer strangely turned off halfway through, I only have two pictures as evidence that this test existed: a screenshot of the Arduino program telling me that the max cycle rate had been reached (which should not have happened) and a frame from the video filming the thermometer showing the peak temperature (which is 100°C higher than expected). Somehow the hot plate reached over 300°C, which I thought was impossible because the hot plate's built-in heat cycle should have kicked in around 260°C. Unrelated, but I am performing this test in my dorm room because I was quarantined due to COVID exposure, and I like using my personal fan and the house's freezer to cool down the hot plate quicker.
I made some adjustments (labeled as v2.2), and I had the same failure as v2.1, except I managed to capture it on camera.
Finally, with v2.3, I managed to fix all the issues. I ran out of time today to transcribe the temperatures for graphing, but this iteration of the code managed to reach 200°C in 10 and 7 minutes for test #1 and #2, respectively. I also managed to fix the problem of the hot plate not turning off after the desired heating time. The real test will be trying a slower heating time, like 20 minutes, but I am glad I postponed using actual samples because this fix has given me code that appears to work exactly as I hoped.
For the following two graphs, I ran four tests: two using v1 of the PWM code and two using v2 of the PWM code. The graphs show the heating rate I was aiming for and the actual results. It turns out, my v2 does not work better than my v1. Before 150°C (which is where I believed that, assuming the rate is kept constant, the heating rate shifted from linear to logarithmic), v1 is an overshoot and v2 is slightly less of an overshoot. The goal of v2 was to increase the rate after 150°C to compensate for this drop off, but it does not appear to have worked.
While I would still like to refine my code, I think it will be good enough to try using it to actually heat the samples.
I had some trouble with the code not working as intended (partially because it has been a while since I coded in C++). However, I was able to run two tests with the new code, although I ran out of time to type up the data for the 2nd. Graphing the 1st test's data, it appears that my improved code is an improvement, but the heating is still slowing down as it approaches 200°C. I need to re-run this test, but with v1 of the code, for better comparison.
The hot plate was supposed to increase 180°C in 10 minutes (so that I would reach 200°C), but due to an inscrutable bug, it did not exit the while loop, so it continued past 10 minutes.
I had a little set back regarding the non-linear portion of the heating. After about 150°C, if the heating rate is kept constant, the heating graph transitions from linear to logarithmic. I was able to show graphically that, yes, it is indeed logarithmic, but I could not think of an algorithmic way to translate this logarithmic curve into the increase in heating rate to maintain a linear heating rate. I do have some ideas which I will test tomorrow.
The previous test was cycled with 0.3s on followed by 0.7s off*. This test was 0.7s on followed by 0.3s off. I intended to let it run longer, but I accidentally knocked the thermocouple over while trying to move the cable farther from the hot plate so the plastic would not risk melting.
Like before, we see that it starts out relatively linear. I noticed the heating light kind of fluttering around 200°C, which appeared in the data as a small decrease around 450s on the graph. I do not know the source of this issue, but I fear it may be the hot plate overriding my cycling with its own built-in cycle; something left for future testing. This is the last data I will gather using v1 of my Arduino code, as I am now working on implementing what I have learned in a smarter v2 of the code. I included v1 of the code, and the txt files for the first three tests.
*I think. Could have been 0.1 on, 0.9 off. Note to self: double check this.
I repeated the first test, but let the hot plate run longer. It revealed that the linearity for the lower temperatures completely falls apart at the higher temperatures. I think it should be fairly straightforward to modify the code to accommodate this.
I wrote a program to control the heating rate of the hot plate using Pulse Width Modulation (PWM), and it was a great success!
For roughly 6 minutes, the hot plate was power cycled with a rate of 100 ms on followed by 900 ms off. Based on my calculations, this should correspond to a 0.08°C/sec temperature increase. In other terms, we expect a 24°C increase in the span of 5 minutes. For comparison, without PWM, the hot plate heats up roughly 100°C in that same timespan. I recorded the temperature by filming a thermometer and transcribing that video into a text file, which could be analyzed and graphed. I only transcribed the first 5 minutes of the 17 minute video (I also filmed part of the cool down) because 5 minutes was enough to show clear results.
At t=0, the hot plate was 21.4°C, and at t=300, the hot plate was 49.7°C. That is a 28.3°C increase in the span of 5 minutes, only 4.3°C higher than the predicted value. The rate, 0.094°C/sec, is only slightly faster than the desired 0.08°C/sec. Further, as shown in the graph, the temperature increase was almost perfectly linear, which is ideal. Overall, using an Arduino to PWM the hot plate is looking very promising.
I've been running the HR coating optimization for mariner TMs. Relative to the specifications found here, we are now aiming for
Both the PSL and AUX cavity finesses are in the few-thousand range, and the goal is not to optimize the coating stack for noise, but more importantly for the transmission values and tolerances. This way we ensure the average finesse and differential finesse requirements are met. Anyways, Attachment #1-2 shows the transmission plots for the optimized coating stacks (so far). Attachments #3-4 show the dielectric stacks. The code still lives in this repository.
I'm in the process of assessing the tolerance of these design stacks against perturbations in the layer thicknesses; to be posted in a follow-up elog.
Here are some corner plots to analyze the sensitivity of the designs in the previous elog to a 1% gaussian distributed perturbation using MCMC.
Attachment #1 shows the ETM corner plot
Attachment #2 shows the ITM corner plot.
I let the indices of both high and low index materials vary, as well as the physical thicknesses and project their covariances to the transmission for PSL and AUX wavelengths.
The result shows that for our designs it is better to undershoot in the optimization stage rather than meet the exact number. Nevertheless, 1% level perturbations in the optical thickness of the stack result in 30% deviations in our target transmission specifications. It would be nice to have a better constraint on how much each parameter is actually varying by; e.g. I don't believe we can fix the index of refraction to better than 1%, but exactly what its value is I don't know, and what are the layer deposition tolerances? These numbers will make our perturbation analysis more precise.
A couple of coating stacks with better tolerance (transmission +- 10%). Attachments #1-2 show the spectral reflectivities for ETM/ITM respectively, while Attachments #3-4 show the corner plots. I think the tolerances are inflated by the fact that all the stack indices and thicknesses are varying, while in reality these two effects are degenerate because what matters is the optical thickness. I will try to reflect this in the MCMC code next. Finally, attachments # 5-6 are the hdf5 files with the optimization results.
The HR coating specifications are:
Just took the finesse of a single arm:
and propagated transmissivities as uncorrelated variables to estimate the maximum relative finesse. Different tolerance combinations give the same finesse tolerance, so multiple solutions are possible. I simply chose to distribute the relative tolerance in T for the test masses homogeneously to simultaneously maximize the individual tolerances and minimize the joint tolerance.
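Schematically, the propagation looks like the following sketch, using the low-loss approximation F ≈ 2π/(T_ITM + T_ETM + L_rt). The transmissivity, loss, and tolerance numbers are placeholders for illustration, not the actual specs:

```python
import math

# Placeholder numbers, not the Mariner specifications.
T_itm, dT_itm = 2000e-6, 200e-6   # ITM power transmissivity +- tolerance
T_etm, dT_etm = 50e-6, 5e-6       # ETM, placeholder values
L_rt = 40e-6                      # assumed round-trip loss

total = T_itm + T_etm + L_rt
F = 2 * math.pi / total           # arm cavity finesse, low-loss approximation

# Treat the T tolerances as uncorrelated and propagate to the finesse:
dF_over_F = math.sqrt(dT_itm**2 + dT_etm**2) / total
print(F, dF_over_F)
```

With these placeholder values the finesse is dominated by T_ITM, consistent with the point above that its tolerance sets the finesse tolerance.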
A code snippet with the numerical analysis may be found here.
Tue Jun 8 11:52:44 2021 Update
The arm cavity finesse at 2128 nm will be mostly limited by the T = 2000 ppm of the ITM, so the finesse changes mostly due to this specification. Assuming that the vendor will be able to do the two ETM optics in one run (x and y), we really don't care so much about the mean value achieved in this run as much as the relative one. Therefore, the 200 ppm tolerance (10% level) is allowed at the absolute level, but a 20 ppm tolerance (1% level) is still preferred at the relative level; is this achievable? Furthermore, for the AUX wavelength, we mostly care about achieving critical coupling but there is no requirement between the arms. Here a 20 ppm tolerance at the absolute level should be ok, but a 2 ppm tolerance between runs is highly desirable (although it seems crazier); is this achievable?
We have been working on an estimate of the wavelength dependent emissivity for the mariner test mass HR coatings. Here is a brief summary.
We first tried extending the thin film optimization code to include the extinction coefficient (so using the complex index of refraction rather than the real part only). We used cubic interpolations of the silica and tantala thin film dispersions found here for wavelengths in the 1 to 100 um range. This allowed us to recompute the field amplitude reflectivity and transmissivity over a broader range. Then, we used the imaginary part of the index of refraction and the thin film thicknesses to estimate the absorbed fraction of power from the interface. The power loss for a given layer is exponential in the product of the thickness and the extinction coefficient (see eq 2.6.16 here). Then, the total absorption is the product of all the individual layer losses times the transmitted field at the interface. This is true when energy conservation distributes power among absorption (= emission), reflection, and transmission: A = 1 - R - T.
The resulting emissivity estimate using this reasoning is plotted as an example in Attachment #1 for the ETM design from April. Two things to note from this; (1) the emissivity is vanishingly small around 1419 and 2128 nm, as most of the power is reflected, which kind of makes sense, and (2) the emissivity doesn't quite follow the major absorption features in the thin film interpolated data at lower wavelengths (see Attachment #2), which is dominated by Tantala... which is not naively expected?
Maybe not the best proxy for emissivity? Code used to generate these estimates is hosted here.
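The per-layer bookkeeping described above can be sketched as a chain of Beer-Lambert factors exp(-4*pi*k*d/lambda). The stack and extinction coefficients below are invented for illustration, not the ETM design:

```python
import numpy as np

def stack_absorption(thicknesses_m, k_values, lam_m):
    """Crude absorbed (= emitted) fraction: chain the single-pass
    Beer-Lambert loss of each film. Ignores the standing-wave field
    distribution, so it is only a rough proxy."""
    alpha = 4 * np.pi * np.asarray(k_values) / lam_m
    per_layer_transmission = np.exp(-alpha * np.asarray(thicknesses_m))
    return 1.0 - np.prod(per_layer_transmission)

lam = 2.128e-6
# Made-up quarter-wave stack: 10 alternating tantala/silica bilayers.
d = [lam / 4 / 2.05, lam / 4 / 1.44] * 10
k = [1e-6, 1e-7] * 10   # illustrative extinction coefficients
A = stack_absorption(d, k, lam)
```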
[Paco, Nina, Aidan]
Updated the stack emissivity code to use the Kitamura paper fused silica dispersion, which has a prominent 20 um absorption peak which wasn't there before... (data was up to 15 um, and extrapolated smoothly beyond). The updated HR stack emissivities are in Attachments #1 - #2. A weird feature I don't quite understand is the discontinuous jump at ~ 59 um ...
I've managed to cut and crimp wires for the coil driver power board. I will begin adding components to the coil driver board.
- Add Components to Coil Driver board
- Replace some Sat Amp Components
- Still working on moving optical table to CAML
- Unsure if cryochamber has been cleaned and moved
Note that the slides have "GLOBE" printed on one side. I always bond using the opposite side, without the text.
On Monday (7/11), I began experimenting with bonding, starting with "air-bonding," which is trying to make dry, gently cleaned slides stick. I achieved my first successful optical contact with what I call "accidental water-assisted direct bonding" or "water-bonding," where I accidentally clasped two wet slides together while washing my dirty fingerprints off them. After the accidental discovery, I repeated it by running water over the slides while they were clasped together and achieved the same result. After a few hours, I attempted partially sliding apart the second water-bonded sample. I could slowly push them apart by pressing my thumbs against the long edge, but it took quite a bit of force. I decided to let 4 samples sit overnight: 1 air-bonded, 1 air-bonded with the brass hunk on top of it, and 2 water-bonded. Neither time nor pressure improved the air-bonded samples, as they still slid apart very easily. The first water-bonded sample slid apart easier, but one part remained stubbornly attached until I began shaking it violently. The second water-bonded sample was much harder to slide apart than the last time I tested it. With all the force of my fingers, I could barely make it budge.
I have finished all coil driver and sat amp chassis; they all seem to be functioning properly.
6" vs 4" optic size comparison using CAD - worth hopping into the 3D geometry using the link below, but also posting a couple of images below.
1) We can adjust all parameters relating to the suspension frame except the beam height. Is there enough clearance under the optic for the internal shield?
--> Using the representation of the MOS structure as-is, there is about 1" of clearance between the optic and the bottom panel of the first/internal shield in the 6" case, compared with 2" of clearance in the 4" case. This is not very scary, and suggests that we could use a 6" optic size.
2) Any other concerns at this point?
--> Not really; there are degrees of freedom to absorb other issues that arise from the simple 4" --> 6" parameter shift.
EASM posted at https://caltech.app.box.com/folder/132918404089
I used the HITRAN database to download the set of ro-vibrational absorption lines of CO2 (carbon dioxide) near 2.05 um. The lines are plotted for reference vs wavenumber in inverse cm in Attachment #1.
Then, in Attachment #2, I estimate the broadened spectrum around 2.05 um and compare it against one produced by an online tool using the 2004 HITRAN catalog.
For the broadened spectrum, I assumed 1 atm pressure, 296 K temperature (standard conditions) and a nominal CO2 density of 1.96 kg/m^3 under these conditions. The line profile was Lorentzian, with a HWHM determined by self- and air-broadening coefficients also from HITRAN. The difference between 2050 nm and 2040 nm absorption is approximately 2 orders of magnitude, so 2040 nm would be better suited to avoid in-air absorption. Nevertheless, the estimate implies an absorption coefficient at 2050 nm of ~ 20 ppm / m, with a nearby absorption line peaking at ~ 100 ppm / m.
For the PMC (length = 50 cm), the roundtrip loss contribution from in-air absorption at 2050 nm would amount to ~ 40 ppm. BUT, this is never going to happen unless we pump out everything and pump in 1 atm of pure CO2. So ignore this part.
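For reference, the pressure-broadened estimate for a single line can be sketched as below. All numbers here are illustrative placeholders (line center, intensity, and broadening HWHM are NOT the actual HITRAN values used above); the point is just the Lorentzian profile and the units.

```python
# Illustrative sketch: pressure-broadened Lorentzian absorption line.
# phi(nu) = (gamma/pi) / ((nu - nu0)^2 + gamma^2), normalized to unit area.
# All parameter values below are placeholders, not the HITRAN values used in the log.
import numpy as np

nu0 = 4878.0            # line center [cm^-1] (~2050 nm), illustrative
gamma = 0.07            # Lorentzian HWHM [cm^-1] at 1 atm, typical air-broadening scale
S = 1e-22               # line intensity [cm^-1 / (molecule cm^-2)], illustrative
n_CO2 = 2.7e19 * 5e-4   # CO2 number density [cm^-3] at ~500 ppm, 1 atm, 296 K

nu = np.linspace(nu0 - 2, nu0 + 2, 2001)             # wavenumber grid [cm^-1]
phi = (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)   # line profile [cm]
alpha = S * n_CO2 * phi                              # absorption coefficient [cm^-1]
print(f"peak alpha ~ {alpha.max():.2e} cm^-1")
```

The real calculation sums this over all lines in the band, with each line's S and gamma taken from the HITRAN catalog.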
Tue Nov 9 08:23:56 2021 UPDATE
Taking a partial pressure of 0.05 % (~ 500 ppm concentration in air), the broadening and total absorption decrease linearly with respect to the estimate above. Attachment #3 shows the new estimate.
For the PMC (length = 50 cm), the roundtrip loss contribution from in-air absorption at 2050 nm would amount to ~ 1 ppm.
There was an error in the last plot of the previous log, which Rana correctly pointed out: the broadening from air should be independent of the CO2 concentration, so nominally both curves should coincide with each other. Nevertheless, this doesn't affect the earlier conclusions -->
The PMC loss by background, pressure broadened absorption lines at 2049.9 nm by CO2 is < 1 ppm.
The results posted here are reflected in the latest notebook commit here.
Since I have been running the ETM/ITM coatings optimization many times, I decided to "benchmark" (really just visualize) the optimizer trajectories under different strategies offered by the scipy.optimize implementation of differential evolution. This was done by adding a callback function to keep track of the convergence value (val) at every iteration. From the scipy.optimize.differential_evolution docs, this "val represents the fractional value of the population convergence".
Attachment 1 shows a modest collection of ~16 convergence trajectories for ETM and ITM as a function of the iteration number (limited by maxiter=2000) with the same targets, weights, number of walkers (=25), and other optimization parameters. The vertical axis plots the inverse val (so curves tending to small numbers represent convergence).
tl;dr: the strategies using "binary" crossover schemes work better (i.e. faster) than "exponential" ones. Will keep choosing "best1bin" for this problem.
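The callback scheme can be sketched as below. The toy objective stands in for the actual coating merit function (which isn't reproduced here); the key part is that scipy passes the fractional population convergence to the callback, which we accumulate per iteration.

```python
# Sketch: recording differential_evolution convergence via a callback.
# The objective here is a toy stand-in for the coating merit function.
import numpy as np
from scipy.optimize import differential_evolution

history = []

def callback(xk, convergence):
    # scipy passes the fractional population convergence as `convergence`
    history.append(convergence)

result = differential_evolution(
    lambda x: np.sum(x**2),   # toy objective, NOT the coating cost function
    bounds=[(-5, 5)] * 4,
    strategy="best1bin",      # the strategy found to work best above
    popsize=25,               # same number of walkers as in the log
    maxiter=2000,
    callback=callback,
    seed=0,
)
# Plotting 1/convergence vs. iteration then reproduces the style of Attachment 1.
```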
Now that I have correct phase and amplitude behaviour for my MIMO state space model of the suspension and the system is being correctly evaluated as stable, I'm uploading the useful plots from my analysis. File names should be fairly self-explanatory. The noise plots are for a total height of 550 mm, or wire lengths of 100 mm per stage. I've also attached a model showing the ground motion for different lengths of the suspension.
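The stability evaluation mentioned above amounts to checking that all eigenvalues of the state matrix have negative real parts. A minimal illustration with a toy 2-DOF damped-oscillator A-matrix (NOT the actual suspension matrices, which aren't posted here):

```python
# Minimal sketch: stability check of a state-space model via eigenvalues of A.
# The 2-DOF damped oscillator below is a placeholder for the suspension dynamics.
import numpy as np

w1, w2, q = 2 * np.pi * 0.7, 2 * np.pi * 1.3, 5.0  # illustrative mode freqs and Q
A = np.block([
    [np.zeros((2, 2)), np.eye(2)],                   # x' = v
    [-np.diag([w1**2, w2**2]), -np.diag([w1 / q, w2 / q])],  # v' = -K x - C v
])
eigs = np.linalg.eigvals(A)
stable = bool(np.all(eigs.real < 0))  # all poles in the left half-plane
print(stable)  # True
```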
Here are the DAC and residual displacement spectra for different suspension heights ranging from 450 mm to 600 mm. I aimed to get the Q of the lower resonance close to 5 and the DAC output RMS close to 0.5 V but as this was just tweaking values by hand I didn't get to exactly these values so I'm adding the actual values for reference. The parameters are as follows:
Here is a set of curves describing the single-pass downconversion efficiency in the 20 mm long PPKTP crystals for the DOPO. I used the "non-depleted pump approximation" and assumed a plane-wave (although the intensity matches the peak intensity from a gaussian beam). Note that these assumptions will in general tend to overestimate the conversion efficiency.
The parameters use an effective nonlinear coefficient "d_eff" of 4.5 pm/V, and assume we have reached the perfect (quasi) phase matching condition where delta_k = 0 (e.g. we are at the correct crystal operating temperature). The wavelengths are 1064.1 nm for the pump, and 2128.2 nm for degenerate signal and idler. The conversion efficiency here is for the signal photon (which is indistinguishable from the idler, so am I off by a factor of 2?)...
Attachment 1 shows the single-pass conversion efficiency "eta" as a function of the pump power. This is done for a set of 5 minimum waists, but the current DOPO waist is ~ 35 um, right in the middle of the explored range. What we see from these overestimates is an almost linear-in-pump-power increase of order a few %. I have included vertical lines denoting the damage threshold points, assuming 500 kW/cm^2 for 1064.1 nm (similar to our free-space EOMs). As the waist increases, the conversion efficiency tends to increase more slowly with power, but enables a higher damage threshold, as expected.
At any rate, the single-pass downconversion efficiency is (over)estimated to be < 5 % for our current DOPO waist right before the damage threshold of ~ 10 Watts, so I don't think we will be able to use the amplified pump (~ 20-40 W) unless we modify the cavity design to allow for larger waist modes.
The important figure (after today's group meeting) would be a single pass downconversion efficiency of ~ 0.5 % / Watt of pump power at our current waist of 35 um (i.e. the slope of the curves below)
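A hedged sketch of how the ~0.5 %/W figure comes out under the stated assumptions. I'm assuming the low-gain, plane-wave, non-depleted-pump form at degeneracy with Delta_k = 0, eta ~ 2 w_s^2 d_eff^2 L^2 / (n^3 eps0 c^3) * I_peak, with I_peak the peak intensity of a Gaussian beam and n ~ 1.8 for PPKTP (both assumptions; the actual notebook may use a slightly different expression):

```python
# Sketch of the plane-wave, non-depleted-pump downconversion estimate.
# Assumed form (low gain, degenerate, Delta_k = 0):
#   eta ~ 2 * omega_s^2 * d_eff^2 * L^2 / (n^3 * eps0 * c^3) * I_peak
# with I_peak = 2 P / (pi w0^2). The index n ~ 1.8 is an assumption for PPKTP.
import numpy as np

c, eps0 = 2.998e8, 8.854e-12
lam_s = 2128.2e-9      # degenerate signal wavelength [m]
d_eff = 4.5e-12        # effective nonlinearity [m/V], from the log
L = 20e-3              # crystal length [m]
n = 1.8                # approximate PPKTP refractive index (assumption)
w0 = 35e-6             # current DOPO waist [m]

omega_s = 2 * np.pi * c / lam_s
P = 1.0                                # pump power [W]
I_peak = 2 * P / (np.pi * w0**2)       # Gaussian peak intensity [W/m^2]
eta = 2 * omega_s**2 * d_eff**2 * L**2 / (n**3 * eps0 * c**3) * I_peak
print(f"eta per watt ~ {eta * 100:.2f} %/W")
```

With these numbers the slope comes out near 0.5 %/W, consistent with the figure quoted above.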
As a kickoff of the mariner sus cryostat design, I made a tentative crackle chamber model in SW.
Stephen pointed out that the mass for each part is ~100 kg and will likely be ~150 kg with the flanges. We believe this is within the capacity of the yellow Skyhook crane as long as we can find its wheeled base.
Finished all 3 Coil Driver chassis and power lines. I still need to install the rear cables; I will do that after I finish the Sat Amp chassis tomorrow.
All three coil driver boards are complete and have been tested. Modifications for all 4 sat amps have been completed. Ideally, I would like to finish all the chassis on Monday; I have one just about done.
I was unable to check the samples because I could not get access to Bridge, so they will be checked tomorrow and the results will be added as an edit to this log.
Given that I was unable to do work in the lab, I instead began a second attempt at writing code for the Arduino to use PWM to control the hot plate temperature.
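The control logic being written for the Arduino can be sketched in Python (conceptual only; the real code is Arduino C++ and the gain value is a placeholder): map the temperature error to a PWM duty cycle with a simple proportional law, clipped to [0, 255] as on an 8-bit Arduino timer.

```python
# Conceptual sketch (Python, not Arduino C++) of PWM hot-plate control:
# proportional temperature error -> 8-bit PWM duty cycle, clipped to [0, 255].
# The gain kp is a placeholder, not a tuned value.
def duty_cycle(t_measured, t_setpoint, kp=10.0):
    """Return an 8-bit PWM duty cycle from a proportional temperature error."""
    error = t_setpoint - t_measured
    return int(min(255, max(0, kp * error)))

print(duty_cycle(80.0, 100.0))   # 200: well below setpoint -> strong heating
print(duty_cycle(100.0, 100.0))  # 0: at setpoint -> heater off
```

On the Arduino side, the equivalent call would feed this value to `analogWrite()` on the pin driving the hot plate's relay or SSR.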
As expected, the surface area of the bond only increased for the samples under the weights. I did notice something worrying: one of the non-weighted samples actually had its surface area decrease. It is unclear if this is a one-time thing or if all of the bonds deteriorate with time. Unrelated, but I also noticed that the bonded areas always have small dots that refuse to bond. It's unclear if that is due to imperfections or contamination (I suspect the latter).
I left all 4 samples under both weights out of curiosity to see if the bonded surface area would increase further (or possibly decrease further).