My two corrections ended up being huge overshoots. The drop-off time (at 100°C) is correct, but the default rate increase that worked in the other cases is not working at all here.
I realized that, after changing so much from v2.3 to v2.6, I should check that my first two tests produce correct results with the latest version. This was good because all three tests turned out to be inaccurate: they all fell short by roughly 10°C. However, they were very precise. For all three, the final temperature was 193.15±1.5°C.
The goal of "v2.X test #3" is to heat the hot plate to 200°C over the course of 20 minutes, and with v2.6, I have effectively succeeded. There will likely be more issues once I try, for example, to heat the hot plate to 300°C over the course of 60 minutes, but for now, I want to stick with lower temps and shorter times while I work out the kinks. Now that I understand the difficulties of PWMing a hot plate, adapting the code to combat future issues should be straightforward.
To summarize my code, I control the heating rate by cycling the hot plate's power on and off for some % of 1000ms. In other words, the hot plate is on 300ms then off 700ms then on 300ms etc., where the relation between target heating rate and hot plate on time is based on previously gathered data. This produces a nice, linear(ish) temperature increase up until a certain temperature, at which point it plateaus. In the previous versions, the way I compensated for this was by increasing the on time by 5ms for every cycle after 150°C. This did not work for slower heating rates, so the newer versions changed this by making the 5ms and 150°C vary depending on the target heating rate. The exact value is a linear extrapolation from previous data. This is imperfect, but I do not think perfection will ever be possible with the current equipment, and I think I have reached something good enough that now I can finally apply it to my optically contacted samples.
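As a sketch, the control scheme boils down to the following (the calibration constants here are illustrative placeholders, not my actual fitted values):

```python
# Sketch of the duty-cycle scheme described above. All calibration constants
# (slope/offset, threshold, increment) are made-up placeholders.
PERIOD_MS = 1000  # one PWM cycle

def on_time_ms(target_rate):
    # linear map from target heating rate [degC/min] to hot-plate on-time [ms],
    # fitted from previously gathered data (slope/offset made up here)
    return 30.0 * target_rate + 50.0

def compensation(target_rate):
    # plateau-compensation threshold [degC] and per-cycle on-time increment [ms],
    # linearly extrapolated from the target rate (constants are placeholders)
    threshold_C = 150.0 + 10.0 * (target_rate - 9.0)
    increment_ms = 5.0 * (9.0 / target_rate)
    return threshold_C, increment_ms

def plan_on_times(target_rate, temps):
    """On-time [ms] used on each cycle, given one temperature reading per cycle:
    constant below the threshold, ramping up above it to fight the plateau."""
    on_ms = on_time_ms(target_rate)
    threshold_C, inc_ms = compensation(target_rate)
    plan = []
    for T in temps:
        if T > threshold_C:
            on_ms += inc_ms
        plan.append(min(on_ms, PERIOD_MS))  # never exceed the full cycle
    return plan
```

In the real loop, each planned on-time is applied by switching the relay on for `on_ms` and off for `PERIOD_MS - on_ms`.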
Since I have finished this "stage" of work, for completeness, I am including all of the code, data*, and graphs involved so far.
*the .txt data files are in the cycle_vX_graphs folders; these folders also have the Jupyter notebooks I used for graphing the data
See the attachments.
I've been having a look at the transfer functions for the translation and pitch of both masses. I'm attaching the plot of all input-to-output transfer functions of interest so far. Here I've identified the pitch resonances of the two masses (one each) as well as the two pendulum modes. I need to now investigate if they occur in the correct places. I have confirmed the DC response by directly solving the statics problem on paper.
I've checked the validity of my state space model in a couple of ways so that we have confidence in the results that it gives. I've checked the DC gain of the transfer functions where it is non-zero. I did this by solving the static balance of forces problem in the extended body model by hand to get the DC CoM position as well as the pitch angle of both masses. In the previous ELOG entry I didn't quite do this for all transfer functions so here I completed the check. My values agree with the model's values to within 10% at the worst end and to within 0.1% at the best end. I performed a second check to see if the frequencies occur in the correct places by considering the case of very low coupling between the different resonant modes. It's difficult to check this in the case where the modes are strongly coupled (for example length-pitch is strong or the two pitch modes are close together) but if I sufficiently separate them, I get very good agreement between my analytic approximation and the state space model.
The model can easily be converted from one that gives motion in X and RY into one that gives motion in Y and RX. Running the model for both directions gives the following list of resonances (note pendulum modes in X and Y direction are identical):
Given that I think the model seems to give sensible values, I've pushed the updated model to the GitLab repository. It is now possible to quickly change the parameters of the suspension and very quickly see the corresponding shift in the resonances. To change the parameters, open the plain text file called 'params' and change the values to the new ones. Afterwards, run the file 'ss_extended.py', which will solve the state space model, save the resulting ABCD matrices to a folder and print out the values of the resonances to terminal.
I've been testing out the extended body Lagrangian models and I'm trying to understand the ground motion and force coupling to the test mass displacement. I've compared the two-point-mass model to the extended model and, as expected, I get very similar results for the ground coupling. Attachment 1 shows the comparison and, aside from more aggressive damping of the point-mass model making a small difference at high frequency, the two models look the same. If I look at the force coupling, I get a significantly different result (see attachment 2). I think this makes sense because in the point-mass model I am driving purely horizontal displacement as there is no moment of inertia. However, for the extended body I drive the horizontal position of the centre of mass, which then results in an induced rotation as the change propagates through the dynamics of the system. To obtain a consistent result with the point-mass model, I would need to apply a force through the CoM as well as a counteracting torque to maintain a purely horizontal displacement of the mass. What I am wondering now is, what's the correct/more convenient way to consider the system? Do I want my Lagrangian model to (a) couple in pure forces through the CoM and torques around the CoM and then find the correct actuation matrix for driving each degree of freedom in isolation or (b) incorporate the actuation matrix into the Lagrangian model so that the inputs to the plant model are a pure drive of the test mass position or tilt?
- Following what seemed like a good, intuitive suggestion from Anchal, I implemented a parameter called Ncopies, which takes a stack of m-bilayers and copies it a few times. The idea here was to have stacks where m is the least common multiple of the wavelength fractional relation e.g. m(2/3) = 6 so as to regain some of the coherent scattering in a stack. Unfortunately, this didn't work as planned for m=6, 3, and 2.
- While the target transmissivities are reached with comparably fewer layers using this method, the sensitivity and the surface E field are affected and become suboptimal. The good thing is we can do the old way just by setting Ncopies = 0 in the optimization parameters yaml file.
- An example of such a coating is in Attachment 1.
- I decided to just add the 'varL' scalar cost to the optimizer. Now we minimize the variance in the coating stack thicknesses. As a target I started with 40% but will play with this now.
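For reference, a minimal sketch of what such a scalar variance cost could look like (the exact definition used by the optimizer may differ; the 40% target enters only as a normalization here):

```python
import numpy as np

def varL_cost(thicknesses, target=0.4):
    """Hypothetical scalar cost penalizing spread in layer thicknesses:
    ~1 when the relative spread (std/mean) equals the target, 0 for a
    perfectly uniform stack."""
    rel_spread = np.std(thicknesses) / np.mean(thicknesses)
    return (rel_spread / target) ** 2
```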
While finalizing my work plan for the quarter, I decided to look at the Thorlabs slides. This was instructive because they highlighted the troubles I will have working with silicon. They are fragile, and their small, thin size makes cleaning and manipulating them (without contamination) much more difficult compared to the glass slides from before.
I tried cleaning and bonding them the same way as the larger slides. Rubbing them together did not work like it did with the larger slides, but that may also be a function of being more careful so as not to break them. Once I cleaned them, it only took a tap from my finger to get the center to bond, but the bonded surface area still did not spread out like it did in the YouTube videos (http://youtu.be/se3K_MWR488?t=80). By pressing down around the bonded area, I could expand it slightly. Note that I did crack one slide in the process of doing this, as shown in the pictures.
Because the slides are so thin, I think they will benefit greatly from being left under a heavy object, although it may be difficult to put the weight on the slides without them breaking.
Continuing with my casual exploration of the Thorlabs slides, I heated them from off --> low --> med --> high, with 10 minutes on each setting. The only pressure I applied was 3 larger glass slides, and that was only to flatten out the copper that the smaller, bonded slides sat on top of (so the contact with the heating plate was even).
The heat made the bonded area smaller, but it did not break. As the slides cooled, the bond area increased slightly but not back to the original size. Next I will try this with slower heating and additional pressure.
The first entry of the Mariner elog post
All parameters are temporary:
Test mass size: D150mm x L140mm
Intermediate mass size: W152.4mm x D152.4mm x H101.6mm
TM Magnets: 70mm from the center
Height from the bottom of the base plate
- Test mass: 5.0" (127mm) ==> 0.5" margin for the thermal insulation etc (for optical height of 5.5")
- Suspension Top: 488.95mm
- Top suspension block bottom: 17.75" (450.85mm)
- Intermediate Mass: 287.0mm (Upper pendulum length 163.85mm / Lower pendulum length 160mm)
- IM OSEMs: Top x2 (V/P)<-This is a mistake (Nov 3 fixed), Face x3 (L/Y/P), Side x 1 (S)
- TM OSEMs: Face x4
- OSEM insertion can be adjusted with 4-40 screws
- EQ Stops / Cradle (Nov 3 50% done)
- Space Consideration: Is it too tight?
- Top Clamp: We are supposed to have just two wires (Nov 3 50% done)
- Lower / Middle / Upper Clamps & Consider installation procedure
- Fine alignment adjustment
- Pendulum resonant frequencies & tuning of the parameters
- Utility holes: other sensors / RTDs / Cabling / etc
- Top clamp options: rigid mount vs blade springs
- Top plate utility holes
- IM EQ stops
Discussion with Rana
- How do we decide the clear aperture size for the TM faces?
- OSEM cable stays
- Thread holes for baffles
- Light Machinery can do Si machining
- Thermal conductivity/expansion
- The bottom base should be SUS... maybe others Al except for the clamps
- Suspension eigenmodes separation and temperature dependence
# Deleted the images because they are obsolete.
Some more progress:
- Shaved the height of the top clamp blocks. We can extend the suspension height a bit more, but this has not been done.
- The IM OSEM arrangement was fixed.
- Some EQ stops were implemented. Not complete yet.
Does this work? Is this insane?
Here I describe the current radiative cooldown model for a Mariner test mass, using parameters from the most recent CAD model. A diagram of all conductive and radiative links can be seen in Attachment 1. Below are some distilled key points:
All parameters have been taken from CAD, with the exception of:
Attachment 2 contains the cooldown curves for the system components. With the above assumptions, the test mass takes ~59hrs to reach 123K, and the final steady-state temperature is 96K. (*This was edited - found a bug in previous iteration of code that underestimated the TM cooldown time constant and incorrectly concluded ~36hrs to reach 123K. The figures have been updated accordingly.)
Attachment 3-6 are power budgets for major components: TM, IS, Cage, OS (can produce for UM if there's interest). For each, the top plot shows the total heating and cooling power delivered to the component, and the bottom plot separates the heating into individual heat loads. I'll discuss these below:
The next post will describe optimization of the snout length/radius for cooldown.
Here is a more detailed analysis of varying the length and radius of the snout.
Attachment 1 plots the heat load (W) from the snout opening as a function of temperature, for different combinations of snout length and radius. The model using the CAD snout parameters (length=0.67m end-to-end; radius=5.08cm) results in ~0.3W of heat load at steady state. The plot shows that the largest marginal reduction in heat load is achieved by doubling the length of the snout (green curve), which cuts the heat load by more than two-thirds. This validates the choice of snout length used in the previous ELOG entry's analysis. The bottom line is that the end-to-end snout length should be on the order of 1 meter, if physically possible.
The next marginal improvement comes from reducing the radius of the snout. Attachment 1 considers reducing the radius by half in addition to doubling the length (red curve). A snout radius of an inch is quite small and might not be feasible within system constraints, but it would reduce the snout heat load to only 25mW at steady state (along with the length doubling).
The cooldown model resulting from optimizing parameters of the snout (length=1.33m, radius=2.54cm) is shown in Attachment 2. The test mass reaches 123K in ~57hrs - only 2 hours faster than the case where only the snout length is doubled (see previous ELOG entry) - and the test mass reaches steady state at 92K - only 6K colder than in the previous case. This could discourage efforts to reduce the radius of the snout at all, since increasing the length provides the most marginal gains.
The attached plot (upper) compares the heat load delivered to the test mass from various snout lengths (end to end), as a function of test mass temperature. (At steady state, our point of interest is 123K.) Note that these curves use the original CAD snout radius of 5.08cm (2").
The greatest marginal reduction in heat load comes from increasing the end-to-end snout length to 1m, as concluded in the previous ELOG. This drops the heat load from just under 0.5W (from snout length 0.5m) to 0.15W. Further increase in snout length to 1.5m drops the heat load to well under 0.1W. After this point, we get diminishing marginal benefit for increase in snout length.
The effect on the TM cooldown curve can be seen in the lower plot. A snout length of 1m drops the steady-state TM temperature to under 100K. Then, like above, increasing the length to 1.5m makes the next non-negligible impact.
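For intuition, the steep falloff with length can be reproduced with a toy estimate treating the opening as a ~295K disc radiating to the TM through the coaxial-disc view factor (geometry as quoted above, TM face radius assumed 75mm). This captures the trend, not the model's absolute numbers, since the full model also tracks loads routed via the shields:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def snout_heat_load(length, r_open=0.0508, r_tm=0.075, T_tm=123.0):
    """Toy heat load [W] on the TM from a room-temperature snout opening,
    modeled as two coaxial parallel discs (Cengel view factor formula)."""
    R1, R2 = r_open / length, r_tm / length
    S = 1.0 + (1.0 + R2**2) / R1**2
    F = 0.5 * (S - math.sqrt(S**2 - 4.0 * (R2 / R1)**2))
    return SIGMA * math.pi * r_open**2 * F * (295.0**4 - T_tm**4)

# load vs end-to-end snout length [m]
loads = [snout_heat_load(L) for L in (0.5, 1.0, 1.5, 2.0)]
```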
Here we lay out the Mariner cryocooler requirements and discuss the most recent cooldown model, which includes a cryocooler that cools down the inner shield and a separate LN2 dewar that cools the outer shield.
The chosen cryocooler must supply at least twice as much cooling power to the TM as the heat load on the TM at 123 K. Implicit in this requirement is that, in the absence of temperature control, the cooling power must be enough to cool the TM to well below 123 K.
Attachment 1 is the latest Mariner ITM cooldown model. This updated model is pushed to mariner40/CryoEngineering/MarinerCooldownEstimation.ipynb. Before running the notebook you can toggle between IS cooling sources: LN2, DS30, CH-104, or in the future any cryocoolers we are considering. All attachments are generated using the cooling curve of the DS30.
Since the OS is no longer a heat load on the cryocooler, the IS gets cooled more efficiently and reaches within 5 K of the coldhead. The heat loads on the TM (snout, apertures, laser heating) make its temperature plateau just under 100 K. It reaches 123K in ~50 hours.
Attachment 2 is a power budget for the TM. We see that at 123K, the heat loads sum to ~0.4 W. The cooling power at this temperature is around 1 W. The DS30 satisfies our cryocooler cooling requirement; however, vibration requirements and vacuum interface compatibility still need to be determined.
Lastly, Attachment 3 is an updated block diagram of the heat transfer couplings considered by the model. (The model also considers radiative links between the inner shield and cage, and inner shield and upper mass; these are omitted from the diagram for simplicity.)
Summarizing the current Mariner ITM cooldown model assumptions:
- Inner shield and outer shield have snouts of equal length (1 m end-to-end)
- Laser off during cooldown
- Inner shield cooled by DS30; outer shield cooled by LN2 tank
- ITM barrel emissivity = 0.9
A simplified block diagram can be found in Attachment 4.
I simulated the Mariner cooldown with an additional LN2 tank connected to the main cold strap shared by the cryocooler. LN2 can aid in the initial cooldown from room temperature, and once the inner shield is sufficiently cold the cryocooler can take full control. (The LN2 should not be on the whole time - once the inner shield crosses 77K the LN2 would be contributing heat.) In the model I chose an inner shield temperature of 90K to signal when to turn off the LN2 (any lower and the IS temperature starts to flatten out as it approaches 77K).
The closer the LN2 tank sits towards the chamber/IS (and away from the cold head), the better. This is because the cold head of the cryocooler drops rapidly to ~60K, and the LN2 joint would contribute to heating the cold head. Plus, the cooling of the IS is more efficient if the LN2 source is closer. The model assumes the LN2 tank sits halfway between the coldhead of the cryocooler and the inner shield.
The last assumption made is that the LN2 tank volume is large enough that the tip in contact with the LN2 remains at 77K.
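The switching logic can be illustrated with a toy single-node model (conductances and heat capacity are made-up numbers): the LN2 link is simply dropped from the heat balance once the inner shield crosses 90K.

```python
# Toy single-node inner-shield cooldown: a cryocooler link to a 60 K cold head
# plus an LN2 link (77 K) that is switched off below 90 K. The conductances
# G [W/K] and heat capacity C [J/K] are illustrative, not the model's values.
def cooldown(T0=295.0, G_cc=0.5, G_ln2=2.0, C=5e4, dt=60.0, hours=48):
    T, ln2_on, hist = T0, True, []
    for _ in range(int(hours * 3600 / dt)):
        if T <= 90.0:
            ln2_on = False       # below 90 K the LN2 would soon start heating the IS
        P = G_cc * (T - 60.0) + (G_ln2 * (T - 77.0) if ln2_on else 0.0)
        T -= P / C * dt          # explicit Euler step of C dT/dt = -P
        hist.append((T, ln2_on))
    return hist
```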
In Attachment 1, the dashed traces show the cooldown of the cold head, inner shield, and test mass without the additional LN2 cooling. The solid traces (green) include LN2 cooling and use the assumptions above. We see that the inner shield is cooled significantly faster with LN2 (on par with the cold head until 150K). As a result, the heat load the inner shield puts on the cold head is reduced, and that reduction more than compensates for the additional heating on the cold head from the LN2 at 77K. Thus the cold head cools much faster in the first 10 hours. The kinks in the cold head/inner shield traces are presumably from the system re-equilibrating after the LN2 source is shut off - it's not clear why the cryocooler doesn't immediately continue the downward trend.
The effect on the test mass is more subtle, but we see the test mass cools to 123K ~2 hours faster (in 28 h). I was then curious if we could get the same gains by simply moving the cryocooler/cold head halfway closer to the inner shield. This simulation is in Attachment 2 - it takes ~1 h longer for the test mass to reach 123K, since we don't get the added cooling power from the LN2.
While there's merit to the addition of LN2, maybe an improvement of a few hours isn't enough to justify the increase in complexity.
Here is the model including an additional LN2 tank aiding in inner shield cooldown, applied to Voyager [Attachment 1]. The same assumptions have been made as in the previous ELOG. The LN2 is switched off once the inner shield reaches 90K.
Using LN2 in such a way cools down the test mass to 123K 5 hours faster. This is a ~6% improvement from the original 85 hours of cooldown [Attachment 2]. Note that the fundamental radiative cooling limit for a Voyager-like test mass is ~68 hours.
*Note: the current modeling script can be found at: CryoEngineering/MarinerCooldownEstimation.ipynb
Nina pointed me to the current mariner cooldown estimation script (path above) and we have since met a few times to discuss upgrades/changes. Nina's hand calculations were mostly consistent with the existing model, so minimal changes were necessary. The material properties and geometric parameters of the TM and snout were updated to the values recently verified by Nina. To summarize, the model considers the following heat sources onto the testmass (Pin):
- laser absorption by ITM bulk (function of incident laser power, PR gain, and bulk absorption)
- laser absorption by ITM HR coating (function of incident laser power and HR coating absorption)
- radiative heating from room-temp tube snout (function of snout radius and length, and TM radius)
The heat transfer out of the testmass (Pout) is simply the sum of the radiative heat emitted by the HR and AR faces and the barrel. Note that the script currently assumes an inner shield T of 77K, and the inner/outer shield geometric parameters need to be obtained/verified.
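Schematically, the balance looks like the following (the emissivities and areas below are placeholders, not the verified CAD values):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def p_rad(eps, area, T_hot, T_cold):
    """Net radiative power [W] for a simple gray link (unit view factor)."""
    return eps * SIGMA * area * (T_hot**4 - T_cold**4)

def net_power(T_tm, T_shield=77.0, P_laser=0.1):
    """Pin - Pout for the test mass; positive = net heating [W]."""
    # Pout: HR + AR faces plus the barrel radiating to the inner shield
    # (illustrative emissivities/areas, not the verified parameters)
    A_face, A_barrel = 0.018, 0.066
    Pout = 2 * p_rad(0.9, A_face, T_tm, T_shield) + p_rad(0.9, A_barrel, T_tm, T_shield)
    # Pin: laser absorption (taken constant here); snout heating omitted for brevity
    return P_laser - Pout
```

The cooldown curve then comes from integrating C(T) dT/dt = net_power(T).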
Nina and Paco have been working towards obtaining tabulated emissivity data as a function of temperature and wavelength. In the meantime, I created the framework to import this tabulated data, use cubic spline interpolation, and return temperature-dependent emissivities. It should be straightforward to incorporate the emissivity data once it is available. Currently, the script uses room-temperature values for the emissivities of various materials.
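The interpolation framework is essentially the following (with made-up sample points standing in for the awaited tabulated data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder (T [K], emissivity) pairs; to be replaced by the tabulated data
T_tab   = np.array([ 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
eps_tab = np.array([0.015, 0.025, 0.040, 0.055, 0.065, 0.070])

eps_of_T = CubicSpline(T_tab, eps_tab)

def emissivity(T):
    """Temperature-dependent emissivity, clipped to the tabulated range to
    avoid cubic-spline extrapolation artifacts."""
    return float(eps_of_T(np.clip(T, T_tab[0], T_tab[-1])))
```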
- Incorporate tabulated emissivity data
- Verify and update inner/outer shield dimensions
How about a diagram so that we can understand what this model includes?
Attachment 1 is a geometric diagram that reflects the current state of the ITM cooldown model, introduced in . The inner shield is assumed to be held at 77K for simplicity, and 2 heat sources are considered: laser heating, and radiative heating from the room-temperature snout opening. The view factor Fij between the snout opening and test mass (modeled as 2 coaxial parallel discs separated by length L - equation found in Cengel Heat Transfer) is calculated to be 0.022. The parameters used in the model are noted in the figure.
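A sketch of the view factor calculation (Cengel's coaxial parallel disc formula, with the TM face radius taken as 75mm from the D150mm test mass; the exact value depends on the separation used, so this only reproduces the order of the quoted 0.022):

```python
import math

def viewfactor_coaxial_discs(r1, r2, L):
    """View factor F12 from disc 1 (radius r1) to a coaxial parallel disc 2
    (radius r2) a distance L away (Cengel, Heat Transfer)."""
    R1, R2 = r1 / L, r2 / L
    S = 1.0 + (1.0 + R2**2) / R1**2
    return 0.5 * (S - math.sqrt(S**2 - 4.0 * (R2 / R1)**2))

# snout opening (radius 5.08 cm) to TM face (assumed radius 7.5 cm),
# separated by the 0.67 m end-to-end snout length
F = viewfactor_coaxial_discs(0.0508, 0.075, 0.67)
```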
Attachment 2 is a simplified diagram that includes the heating/cooling links to the test mass. At 123K, the radiative cooling power from the inner shield (at 77K) is 161 mW. The radiative heating from the snout opening is 35 mW, and the laser heating (constant) is 101.5 mW. Due to the tiny view factor between the snout opening and the test mass, most of the heat emitted by the opening does not get absorbed.
The magnitudes of heating and cooling power can be seen in Attachment 3. Lastly, Attachment 4 plots the final cooldown curve given this model.
My next step is to add the outer shield and fix its temperature, and then determine the optimal size/location of the inner shield to maximize cooling of the test mass. This question was posed by Koji in order to inform inner shield/outer shield geometric specs. Then, I will add a cold finger and cryocooler (conductive cooling). Diagrams will be updated/posted accordingly.
Building on , I added a copper cold finger to conductively cool the inner shield, instead of holding the inner shield fixed at 77K. The cold finger draws cooling power from a cryocooler or "cold bath" held at 60K, for simplicity. I added an outer shield and set its temperature to 100K. The outer shield supplies some radiative heating to the inner shield, but blocks out 295K heating, which is what we want. The expanded diagram can be seen in Attachment 1.
I wanted to find the optimal choice of inner shield area (AIS) to maximize the radiative cooling to the test mass. I chose 5 values for AIS (from ATM to AOS) and plotted the test mass cooldown for each in Attachment 2. The radiative coupling between the inner shield and test mass is maximized when the ratio of the areas, ATM/AIS, is minimized. Therefore, the larger AIS, the colder the test mass can be cooled. Even though choosing AIS close to AOS increases the coupling between the 2 shields, the resulting heating from the outer shield is negligible compared to the enhancement in cooling.
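The area-ratio trend follows from the standard two-surface gray enclosure expression; a sketch with illustrative numbers:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def q_two_surface(A1, eps1, T1, A2, eps2, T2):
    """Net radiative exchange [W] from surface 1 to the surface 2 enclosing it
    (standard two-surface gray enclosure result)."""
    return SIGMA * A1 * (T1**4 - T2**4) / (1.0/eps1 + (A1/A2) * (1.0/eps2 - 1.0))

# TM (fixed area; emissivities/temperatures illustrative) radiating to inner
# shields of increasing area: the smaller A_TM/A_IS, the stronger the coupling
A_tm = 0.1
cooling = [q_two_surface(A_tm, 0.9, 150.0, A_is, 0.3, 77.0)
           for A_is in (0.12, 0.22, 0.5)]
```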
I chose AIS = 0.22 m2 to model the inner shield and test mass cooldown in Attachment 3. The test mass reaches 123 K at ~ 125 hours, or a little over 5 days. I have pushed the updated script which can be found under mariner40/CryoEngineering/MarinerCooldownEstimation.ipynb.
I used the same model in  to consider how test mass length affects the cooldown. Attachment 1 plots the curves for TM length=100mm and 150mm. The coupling between the test mass and inner shield is proportional to the area of the test mass, and therefore increases with increasing length. Choosing l=100mm (compared to 150mm) thus reduces the radiative cooling of the test mass. The cooldown time to 123K is ~125 hrs or over 5 days for TM length=150mm (unchanged from ), but choosing TM length=100mm increases this time to ~170 hrs or ~7 days. (Note that these times/curves are derived from choosing an arbitrary inner shield area of 0.22 m2, but the relative times should stay roughly consistent with different IS area choices.)
I reran the cooldown model, setting the emissivity of the inner surface of the inner shield to 0.7 (coating), and the emissivity of the outer surface to 0.03 (polished Al). Previously, the value for both surfaces was set to 0.3 (rough aluminum).
Attachment 1: TM cooldown, varying area of the inner shield. Now, the marginal improvement in cooldown once the IS area reaches 0.22 m2 is negligible. Cooldown time to 123K is ~100 hrs, just over 4 days. I've kept IS area set to 0.22 m2 moving forward.
Attachment 2: TM/IS cooldown, considering 2 lengths for the test mass. Choosing l=100mm instead of 150mm increases cooldown time from ~100 hrs to ~145 hrs, or 6 days.
Here is the code to generate a random list of parameters and evaluate the energy ratio of each.
L = rand(100, 1) * 200 + 10;   % length [mm]
d = rand(100, 1) * 10 + 0.3;   % base width [mm]
h = rand(100, 1) * 5 + 0.1;    % base height [mm]
y = zeros(100, 1);
for i = 1:length(L)
    model.param.set('base_height', append(num2str(h(i)), '[mm]'));
    model.param.set('length', append(num2str(L(i)), '[mm]'));
    model.param.set('base_width', append(num2str(d(i)), '[mm]'));
    model.study('std1').run;   % re-solve with the new parameters (study tag assumed)
    data = model.result.numerical('int1').getReal;
    bondenergy = model.result.numerical('int2').getReal;
    y(i) = data(3) / bondenergy(1);   % energy ratio for this parameter set
end
I am trying to use fminsearch to find the best cantilever dimensions to maximize the bond/cantilever energy ratio. Fminsearch takes in a function and a set of initial parameters. The function that is passed in should be a function of the parameters, but my getEnergy function does not work unless the COMSOL model is passed in as an argument. I tried to make a helper function, but I run into the same problem.
After running getRatio (attempted helper function):
The COMSOL model is now accessible using the variable 'model'
Unrecognized function or variable 'model'.
Error in getRatio (line 3)
ratio = getEnergy(model, L, h, d)
function ratio = getEnergy(model, L, h, d)
    model.param.set('base_width', append(num2str(d), '[mm]'));
    model.param.set('base_height', append(num2str(h), '[mm]'));
    model.param.set('length', append(num2str(L), '[cm]'));
    model.study('std1').run;   % re-solve before reading results (study tag assumed)
    data = model.result.numerical('int1').getReal;
    bondenergy = model.result.numerical('int2').getReal;
    % keep the smallest bond/cantilever energy ratio over the 6 modes
    % (renamed from 'min' to avoid shadowing the built-in and to set the return value)
    ratio = data(3) / bondenergy(1);
    for i = 1:6
        if data(i*3) / bondenergy(i) < ratio
            ratio = data(i*3) / bondenergy(i);
        end
    end
end
function ratio = getRatio(L, h, d)
    ratio = getEnergy(model, L, h, d);   % fails: 'model' is not in scope inside this function
end

x0 = [2, 0.3, 0.55];   % initial guess: [L (cm), h (mm), d (mm)]
% an anonymous function captures 'model' from the workspace and packs the
% parameters into the single vector that fminsearch expects
x = fminsearch(@(p) getEnergy(model, p(1), p(2), p(3)), x0)
The code works now. If the function is specified by a file, there should be an @ symbol in front of it when it is passed into fminsearch.
Restricting the search to nothing less than the initial parameters (L = 2 cm, h = 0.3 mm, d = 0.55 mm), fminsearch outputs L = 2.0088 cm, h = 0.3000 mm, d = 0.5776 mm.
With the search restricted to L >= 1 cm, h >= 0.1 mm, and d >= 0.5 mm, fminsearch outputs L = 1.0313 cm, h = 0.1000 mm, and d = 0.5033 mm.
I wanted to plot the fminsearch data but was not sure how to extract it. However, fminsearch has built-in plot functions that I think capture the data pretty well.
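For what it's worth, fminsearch itself is unconstrained, so the lower-bound restriction has to be imposed on top of it; a Python/scipy analogue of a bound-constrained simplex search looks like the following (with a stand-in objective, since the real one needs the COMSOL model):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Stand-in for -getEnergy(model, L, h, d): a smooth bowl whose minimum
    # lies below the bounds, so the optimizer should settle near the bounds
    L, h, d = x
    return (L - 0.5)**2 + (h - 0.05)**2 + (d - 0.3)**2

x0 = np.array([2.0, 0.3, 0.55])           # [L (cm), h (mm), d (mm)]
res = minimize(objective, x0, method='Nelder-Mead',
               bounds=[(1.0, None), (0.1, None), (0.5, None)])
```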
Following our discussion at the Friday JC meeting, I gathered several resources and made a small simulation to show how frequency combs might be generated on platforms other than microcombs or mode-locked lasers.
Indeed, frequency combs generated directly from a mode-locked laser are expensive as they require ultra-broadband operation (emitting few fs pulses) to allow for f-2f interferometry.
Microcombs are a fancy way of generating combs. They are low-power-consuming, chip-scale, have a high repetition rate, and are highly compatible with Silicon technology. While these are huge advantages for industry, they might be disadvantageous for our purpose. Low-power means that the output comb will be weak (on the order of uW of average power). Microscopic/chip-scale means that they suffer from thermal fluctuations. High rep-rate means we will have to worry about tuning our lasers/comb to get beat notes with frequencies smaller than 1GHz.
Alternatively, and this is what companies like Menlo are selling as full-solution frequency combs, we could use much less fancy mode-locked lasers emitting 50fs - 1ps pulses and broaden their spectrum in a highly nonlinear waveguide, either on a chip or in a fiber, in either a cavity or a linear topology. This has all the advantages:
1. High-power (typically 100mW)
2. Low rep-rate (typically 100MHz)
3. Relatively cheap
4. "Narrowband" mode-locked lasers are diverse and can come as a fiber laser which offers high stability.
As a proof of concept, I used this generalized Schrodinger equation solver python package to simulate 1d light propagation in a nonlinear waveguide. I simulated pulses coming out of this "pocket" laser (specs in attachment 1) using 50mW average power out of the available 180mW propagating in a 20cm long piece of this highly nonlinear fiber (specs in attachment 2).
The results are shown in attachments 3-4:
Attachment 3 shows the spectrum of the pulse as a function of propagation distance.
Attachment 4 shows the spectrum and the temporal shape of the pulse at the input and output of the fiber.
It can be seen that the spectrum is octave-spanning and reaches 2um at moderate powers.
One important thing to consider in choosing the parameters of the laser and fiber is the coherence of the generated supercontinuum. According to this paper and others, >100fs pulses and/or too much power (100mW average is roughly the limit for 50fs pulses) result in incoherent spectra, which are useless for laser locking or f-2f interferometry. These limitations apply only when pumping in the anomalous dispersion regime, as has traditionally been done. Pumping in an all-normal (but low) dispersion regime (like in this fiber) can generate coherent spectra even for 1ps pulses, according to this paper and others. So even cheaper lasers can be used, though ps pulses will require few-meter-long fibers.
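For reference, the package is doing split-step Fourier propagation of a generalized NLSE; a minimal version for the basic NLSE (no Raman or self-steepening terms, with illustrative pulse/fiber numbers) looks like:

```python
import numpy as np

def split_step_nlse(A0, dt, length, dz, beta2, gamma):
    """Propagate envelope A0 (time step dt [s]) over `length` [m] of fiber
    with GVD beta2 [s^2/m] and nonlinearity gamma [1/(W m)] using the
    symmetric split-step Fourier method for the basic NLSE."""
    w = 2 * np.pi * np.fft.fftfreq(len(A0), d=dt)
    half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))  # half-step dispersion
    A = A0.astype(complex)
    for _ in range(int(round(length / dz))):
        A = np.fft.ifft(half_disp * np.fft.fft(A))
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))
    return A

# 50 fs sech pulse in 0.2 m of anomalous-dispersion fiber; all numbers are
# illustrative placeholders, not the specs of the laser/fiber linked above
t = np.linspace(-2e-12, 2e-12, 2**12)
A0 = np.sqrt(1e3) / np.cosh(t / 50e-15)
A_out = split_step_nlse(A0, t[1] - t[0], 0.2, 1e-3, -1e-26, 0.01)
```

Both sub-steps are unitary, so pulse energy is conserved in this lossless sketch.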
I've ironed out the issues with my MATLAB model so that it now shows correct phase behaviour. The problem seems to arise from infinite Q poles where there is an ambiguity in choosing a shift of +/- 180 deg in phase. I've changed my state space model to include finite but very high Q poles to aid with the phase behaviour. The model has been uploaded to the GitLab project under mariner40 -> mariner_sus -> models -> lagrangian.
Instead of varying individual layer thicknesses using the MC sampler, I made sure both the thicknesses and indices of refraction are varied as a global systematic error to estimate the design sensitivity. The results for ITM/ETM respectively, with 1e5 samples this time, are in Attachments 1-2 below.
We have 23 OSEMs. They all look fully built, and I will try to test them this week and/or next week.
Ongoing points of updates/content (list to be maintained and added)
Mariner Chat Channel
Mariner Git Repository
Mariner 40m Timeline [2020-2021] Google Spreadsheet
Putting together Koji's design work with Stephen's CAD, we consider the size of a test chamber for the Mariner suspension.
Koji's design uses a 6" x 6" Si optic, with an overall height of about 21.5".
Stephen's offsets suggest a true shield footprint of 14" x 14" with an overall height of 24".
With generous clearances on all sides, a test chamber with a rectangular footprint internally of about 38" x 32" with an internal height of 34" would be suitable. This scale seems similar to the Thomas Vacuum Chamber in Downs, and suggests feasibility. It will be interesting to kick off conversations with a fabricator to get a sense for this.
This exercise generated a few questions worth considering; feel welcome to add to this list!
WIP - Stephen to check on new suspension dimensions and fit into 40m chamber
I decided to test how fast the plates would heat up if the heat was just constantly on for 5 minutes. In general, these tests are raising a lot of questions with regard to controlling the temperature given the hysteresis in the system. It is also apparent that the bottom plate heats up significantly faster than the top one, which means I need to heat the samples much longer than, say, 10 minutes if I want to avoid unevenly heating the two parts of the optically contacted piece.
I also have to be mindful that I am already halfway through the quarter and ideally should be devoting time to bond strength testing rather than continuing to fiddle with the hot plate.
[I'm (once again) behind on data processing, but I'm creating an entry on the day I actually run the tests]
To combat the bottom plate heating up much faster than the top plate, I decided to try increasing the cycle period from 1000ms (1s) to 10000ms (10s). In other words, taking the test I ran today as an example, the hot plate will now be on for 1000ms then off for 9000ms then repeat. Hopefully this gives more time for the heat to transfer to the top plate, but even in this short test, uneven heating still appears to be a problem.
Due to the slower heating times, this will be a bit more challenging to test, as each test could take hours to complete, but this is more in line with the final intended use anyway. Perhaps my cycle of 1000ms on is too much (e.g. I should do 100ms on then 9900ms off, although I think that might be so slow that the plates will never heat up; this also raises the question of how I will maintain this slow heat-up at the higher temperatures).
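The slow duty-cycling described above can be sketched as follows. This is a minimal simulation-only sketch: the linear slope/offset calibration constants and the `set_heater` callback are placeholders standing in for the relation fit from previously gathered data, not the real control code.

```python
import time

CYCLE_MS = 10_000  # new cycle period: 10 s instead of 1 s

def on_time_ms(target_rate_c_per_min, slope_ms=50.0, offset_ms=200.0):
    """Map a target heating rate to an on-time per cycle.
    slope_ms/offset_ms are placeholder calibration constants."""
    ms = offset_ms + slope_ms * target_rate_c_per_min
    return max(0.0, min(float(CYCLE_MS), ms))  # clamp to one full cycle

def run_cycles(n_cycles, target_rate, set_heater, sleep=time.sleep):
    """Duty-cycle the hot plate: on for on_ms, then off for the rest of
    the 10 s cycle, repeated n_cycles times."""
    on_ms = on_time_ms(target_rate)
    for _ in range(n_cycles):
        set_heater(True)
        sleep(on_ms / 1000.0)
        set_heater(False)
        sleep((CYCLE_MS - on_ms) / 1000.0)
```

Passing `sleep` as a parameter makes the cycle logic testable without waiting out real 10 s periods.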
[I'm behind on data processing, but I'm creating an entry on the day I actually run the tests]
I performed the same tests as before (+180°C in 10 minutes) but now with the (correctly wired) thermocouples attached to the metal plates. The top plate is thermocouple #1, attached to the Fluke, and the bottom plate is thermocouple #2, attached to the TPI (the lime green one).
The base heating rate for the new setup will require some tweaking to the code because the plates heat up much more slowly, but as I have mentioned previously, I do not think this will require a lot of extra work since I now know the tips and tricks to PWMing the hot plate. The only difficulty might come from the increase in hysteresis (i.e. the plates continue to increase in temperature long after the hot plate turns off). For future tests, I need to remember to continue recording the temperature after the program finishes its 10 min cycle.
On the positive side, I think this test shows that taking the average of the two thermocouples to estimate the temperature in the center (where the optically contacted samples are) is a worthwhile endeavor, considering how much the top plate lags behind the bottom plate in heating speed.
With v3.0, I took a couple of steps backwards by getting rid of the feature that increases the heating rate, so that I can isolate the base heating rate for the two plates. In my experience, the best way to figure out how to modify the program is to try a bunch of different target temperatures and heating times and look for correlations. I started by (attempting) to raise the plates by 280°C in 10 minutes.
For a future release, I am thinking of radically (relatively speaking) changing the function parameters: the user inputs only the target heating rate and how long the plates should be heated at this rate. This is to address the hysteresis in this new setup, which I will elaborate on if I make the change.
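A minimal sketch of the proposed interface (hypothetical names; the early-cutoff fraction is an illustrative guess at one way the controller might budget for hysteresis, not a decided design):

```python
def heat_profile(rate_c_per_min, duration_min, hysteresis_cutoff_frac=0.9):
    """Proposed interface: the user supplies only the target heating rate and
    how long to hold it. As one possible way to absorb hysteresis, the
    controller stops actively heating slightly early (here, at 90% of the
    requested duration) so that thermal lag carries the plates the rest of
    the way."""
    active_min = duration_min * hysteresis_cutoff_frac
    coast_min = duration_min - active_min
    total_rise_c = rate_c_per_min * duration_min
    return {"active_min": active_min,
            "coast_min": coast_min,
            "total_rise_c": total_rise_c}
```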
Upper limits on the mechanical loss of silicate bonds in a silicon tuning fork oscillator, and
Temperature Dependence of Losses in Mechanical Resonator Fabricated via the Direct Bonding of Silicon Strips
https://link.springer.com/article/10.1134/S1063782620010200 (I don't have access, but I was given a PDF of this paper over the summer)
I've completed one coil driver board.
Hopefully next week I can finish the other 2 boards and make the modifications to the sat amp boards.
Given that these glass slides are much thinner than the ones I worked with previously, I suspected they would be more receptive to pressure. I decided to replicate the tests I performed with the larger slides: I prepared 8 samples, 4 by smushing the slides together with methanol in the middle and another 4 by cleaning the slides with methanol before pressing them together with my fingers. I put 2 of each type under the cylindrical weight, and 2 of each type under the rectangular weight with the addition of heating. The heating consisted of switching the temperature from off --> low --> med --> high with 15 minutes on each setting.
I will check the results in the morning. I need to wait until the rectangular weight is completely cooled; otherwise I cannot remove it from the hot plate in a manner that does not risk cracking the glass.
The first sample picture shows the pressed slides on the top and the smushed slides on the bottom. For the second picture, this is reversed. Correction: the order is the same for both samples.
We succeeded in setting up an apparatus for quantifying the razor blade test. After mounting the glass slides such that the razor edge rested against the gap, we slowly turned the knob to push the blade into the gap. We started with the knob at 0.111, and at 0.757, the bond between the glass slides failed. As we approached 0.757, the interference pattern in the glass shifted, foreshadowing the break.
(Edit by Koji. This 0.757 is 0.0757 I suppose...? And the unit is in inch)
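If the goal is to turn the quantified razor blade test into a bond-energy number, one standard route is the Maszara crack-opening relation for two identical bonded beams, gamma = 3 E t_w^3 t_b^2 / (32 L^4) (surface energy per face). Note this needs the crack length L ahead of the blade (e.g. read off the interference pattern) rather than the knob reading. The numbers in the example call are placeholders for typical glass slides, not our measurements:

```python
def maszara_surface_energy(E, t_w, t_b, L):
    """Surface energy (J/m^2, per face) from the crack-opening (razor blade)
    test on two identical bonded beams:
        gamma = 3 * E * t_w**3 * t_b**2 / (32 * L**4)
    E   : Young's modulus of the beam material (Pa)
    t_w : thickness of each beam (m)
    t_b : blade thickness (m)
    L   : crack length ahead of the blade edge (m)
    """
    return 3.0 * E * t_w**3 * t_b**2 / (32.0 * L**4)

# Placeholder numbers (NOT measured values): ~70 GPa glass, 1 mm slides,
# 0.1 mm blade, 10 mm crack
gamma = maszara_surface_energy(70e9, 1e-3, 0.1e-3, 10e-3)
```

The strong 1/L^4 dependence is why a small uncertainty in reading the crack length dominates the error budget of this test.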