To run COMSOL on sandbox1 with no graphical interface, here are the steps that worked for me (Tue Aug 11 16:35:51 2020) from a Mac on the Caltech VPN.
3. If there is still a splash screen from the COMSOL server, you will have to specify nosplash by adding the following line to your .bash_profile (in your home directory)
export COMSOL_MATLAB_INIT='matlab -nosplash'
When running COMSOL with the MATLAB interface on sandbox1, it is usually most convenient to ssh with X11 forwarding (e.g. 'ssh -CY') and launch COMSOL with MATLAB by following the instructions in the LiveLink manual.
Sometimes it is necessary to run COMSOL without any display available. In that case, the instructions in the manual are a little unclear. Here are the detailed steps that let me run my script '/home/amarkowi/metamaterials/run_spiral.m' with no X11 forwarding.
cf. Forwarded email from Stephen
1) Tuesday Demo - Basics of FEA Meshing G2000696
2) CIT SYS User Guides, How to Use the FEA User Group T2000295
3) CIT SYS User Guides, How to Use the ANSYS Learning Hub T2000236
Fabrice's SAMS piezo actuator second prototype E1900383
There are no issues with the thermal side of the modeling; the issue seems to be with the structural mechanics side. I'm not sure what I'm doing wrong, but it just isn't converging. In any case, since this is my last day here, I'll just point out that the version without the lens is saved in cvs/cds/caltech/users/cp/current working model.mph, while the model with the lens is saved in the same folder under the file name testing with lens.mph, using optimus. There is also a small-file edition of current working model, with a file name that is self-evident. I'll leave it to Aaron to upload that to git.
In any case, let me just put down some documentation and thoughts on this model. The physical parameters of the model are generally what we know of silicon at these temperatures, with the exception of emissivity, which was arbitrarily set to 0.5. The model currently absorbs 22 mW from the heater and 0.5 mW from the laser, which implies that the heater needs 45 mW incident on the disc, which in turn suggests that you would want it to dissipate at least 100 mW to account for the lack of directionality of its radiation. Because COMSOL's deposited beam power function ignores emissivity, the input power must be adjusted in tandem with the emissivity setting.
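The power budget above can be checked with a few lines of Python. This is only a sketch of the arithmetic in the entry: the factor of 2 for non-directed heater radiation is my reading of the "at least 100 mW" remark, not something stated explicitly.

```python
# Back-of-the-envelope heater power budget, assuming a gray-body absorber
# (absorbed power = incident power * emissivity).
emissivity = 0.5
P_heater_absorbed = 22e-3   # W, absorbed from the heater
P_laser_absorbed = 0.5e-3   # W, absorbed from the laser

# Incident power needed on the disc to absorb the 22.5 mW total:
P_incident = (P_heater_absorbed + P_laser_absorbed) / emissivity
print(P_incident)  # 0.045 W = 45 mW

# The heater radiates in all directions, so (assuming roughly half the
# radiation misses the disc) it should dissipate at least ~2x this, ~100 mW.
P_heater_min = 2 * P_incident
```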
So I tried adding the sapphire lens to the comsol model, and I am having teething issues. I can't seem to get the solver to converge, but I'm working on it.
I've updated the material properties to vary with temperature, mainly in the range of 90-140 K. Using the parametric sweep function to vary the input power of the heater, we get the eigenfrequencies' dependence on temperature to show up. The fractional dependence of 1.3e-5 /K around 123 K matches what Aaron calculated in this elog entry, which is always a good sign that nothing is going horribly wrong. I've also added the flats to the silicon disc, for better accuracy. See the screenshots below showing the frequency shift with temperature.
I've changed the heating to come from two heat sources, to better model the situation with a heater and a laser. The heater deposits 22 mW, with the laser depositing 0.5 mW. The overall temperature variation is smaller than before, as expected, but doesn't really change much. The heater is simulated as a deposited beam with a Gaussian profile with a standard deviation (s.d.) of 8 mm; the laser, by contrast, has a 0.3 mm s.d. I learned that while the deposited beam power doesn't care about emissivity, it does care about the area the beam is incident on: for example, if you increase the s.d. too much, you get less power deposited than what you enter.
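The "less power deposited than what you enter" effect is just the Gaussian tail spilling off the disc. A quick Python check of the captured fraction (the 2-inch disc radius and the 1 - exp(-R^2/2σ^2) enclosed-power formula for a circular Gaussian are my assumptions, not from the entry):

```python
import math

# Fraction of a circular-Gaussian beam's power that lands within radius R,
# for a 2D Gaussian with per-axis standard deviation sigma.
def captured_fraction(R, sigma):
    return 1.0 - math.exp(-R**2 / (2.0 * sigma**2))

R_disc = 25.4e-3  # m, radius of the 2-inch disc

print(captured_fraction(R_disc, 8e-3))   # ~0.99: the 8 mm s.d. heater beam mostly lands on the disc
print(captured_fraction(R_disc, 20e-3))  # ~0.55: a too-fat beam deposits much less than entered
```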
I've also gotten modes to appear using solid mechanics, and I'm trying to see if there is a good way to get the frequency dependence on temperature nicely simulated. Changing the parameters of the simulation does give me the different frequencies, but I'm trying to find a way to do that over the time evolution of the model. I also still have to check whether the frequency shift is in line with what we are measuring.
The time step response to heating via laser (22.5 mW) is given in the attached picture, for two starting temperatures, 122.5 K and 122.8 K. We see that it takes fairly long to equilibrate, with a time constant of about 500 s, consistent across both temperatures. The y axis is average temperature across the surface of the disc, and the x axis is time. I believe that the heat distribution profile would be very similar over time, simply because of how much faster conduction is compared to radiation.
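The ~500 s time constant can be sanity-checked with a lumped radiative model, tau = C/G, where C is the disc heat capacity and G the linearized radiative conductance. This is only a sketch: the silicon density, the C_p = 300 J/(kg K), and the 0.5 emissivity are carried over from other entries, and the barrel is neglected.

```python
import math

# Lumped radiative time constant of the disc: tau = C / G,
# with G = dP_rad/dT = 4*eps*sigma*A*T^3 (radiation linearized about T).
sigma_SB = 5.670e-8          # W/(m^2 K^4), Stefan-Boltzmann constant
rho, c_p, eps = 2330.0, 300.0, 0.5   # silicon density, heat capacity, emissivity (assumed)
R, t = 25.4e-3, 0.28e-3      # 2-inch disc, 0.028 cm thick
T = 123.0                    # K, operating temperature

A = 2 * math.pi * R**2               # both faces radiate; barrel neglected
C = rho * math.pi * R**2 * t * c_p   # heat capacity, J/K
G = 4 * eps * sigma_SB * A * T**3    # radiative conductance, W/K
tau = C / G
print(tau)  # a few hundred seconds, consistent with the ~500 s from the model
```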
So I got the time dependence to work, but I'm not sure what went wrong the first time anyway. I'm trying to get a sense of how long it takes for the temperature to semi-equilibrate, and coming to grips with COMSOL as a whole. There seem to be some inaccuracies when the run time increases, so I'm having to figure out how to increase accuracy without drastically increasing compute time. On the bright side, I switched the model to heating via a deposited beam, for a more accurate model.
Attached is COMSOL's handling of a deposited beam modulated sinusoidally at 0.1 Hz, which goes badly wrong. The y axis is the average surface temperature across the whole disc.
So I made a simple COMSOL model of laser heating of a silicon disk, with only radiation, to see the temperature variation at steady state, which could be the limiting factor for high Q at 123 K due to the thermoelastic effect.
The model just uses a simple 2 inch disc, 0.028 cm thick, with the flats not incorporated yet.
I had to search for silicon thermal conductivity and heat capacity at low temperatures, settling on k = 800 W/(m K) and C_p = 300 J/(kg K) from referring to papers. I will check LIGO documents for more accurate values.
I put an arbitrary boundary condition of constant temperature of 123 K on a spot 0.2 mm in radius, to simulate a beam.
Other arbitrary values include 77 K for ambient and a surface emissivity of 0.5.
The laser is off center, because that is where the laser will enter the current setup.
We can see that the power required is 0.02 W, which seems reasonable.
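The 0.02 W figure can be checked against a simple energy balance: at steady state the power injected at the spot must equal the net radiation from the disc faces. A quick Python sketch using the parameters listed above (radiation from both faces, barrel neglected):

```python
import math

# Steady-state power balance: spot input = net radiated power.
sigma_SB = 5.670e-8   # W/(m^2 K^4)
eps = 0.5             # surface emissivity (arbitrary value from the entry)
R = 25.4e-3           # m, radius of the 2-inch disc
T_disc, T_amb = 123.0, 77.0   # K

A = 2 * math.pi * R**2        # both faces; barrel neglected
P_rad = eps * sigma_SB * A * (T_disc**4 - T_amb**4)
print(P_rad)  # ~0.022 W, consistent with the 0.02 W found by the model
```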
The model is consistent with the analytic model I made with the laser beam at the centre of the disc. See last two figures.
I'm still trying to get the time dependence to work, as it is just giving me nonsense right now.
Some thoughts: beam radius affects the temperature variation quite significantly, with a fat beam (1 mm radius) having half the temperature variation of a beam of 0.2 mm radius.
I think the halo is just a trick of the eye, but I could be wrong.
Things to do:
Find the time scale of the system, as we want to modulate the laser to adjust the temperature, which will then be run through the mode ringer to measure Q and find the zero crossing
Change the heat source to be an actual laser
Add in the solid mechanics part
Add in the sapphire lens underneath
This is a time-dependent version of the previous steady-state model.
So the cooldown time w/o a heat switch is ~4 days. Since this is less than the usual pumpdown time required to open the gate valves on the beamtubes, perhaps no heatswitch or in-vac cryogens are required.
For the Voyager test masses, we have been considering a barrel coating to increase the IR emissivity to increase the radiative cooling power. We also seek to estimate the added Brownian thermal noise that arises from this.
Dmitry Kopstov (from MSU) made a baseline model for this which we have been modifying. The latest is in the CryogenicLIGO git repo in the FEA directory as ETM_NASAbarrel.mph.
Comparison with Analytic Estimates:
In this way the file sizes will be ~100 kB instead of 10's of MB.
When saving your COMSOL files do these two things to make the files much smaller (good for saving in version control and sharing):
I started implementing some of these changes:
These changes were fine with a 10 um mesh, but ended up with much too fine a mesh (~80000 boundary mesh elements in the initial mesh between the taper and main fiber) due to meshing the surface and boundary in a single stage in the first step. I separated them out to get a coarser mesh in the bulk, and also made the resolution a bit coarser.
I talked with Shoaib about some changes I could make to the FEA model to improve convergence and reduce memory usage. Summary:
I'll get these implemented this week and see if the computation goes through.
I made a copy of Gautam's 40m model to add the unstable filter cavity for the ponderomotive squeezing project. I wanted to make a more explanatory record of the changes I've made because I think some of them might be necessary for other scripts using Gautam's original model, but I have not implemented them in that file (also just for my own paper trail).
I wanted to automate this loop over mass parameters in a matlab script, so I set up the LiveLink handshake so MATLAB would send the model to sandbox1 for solving, then MATLAB on my machine would work with the solved model to extract results. I realized later during the run that this might not be optimal, since it requires me to keep the connection to the remote COMSOL server open for the entire run, which is A. annoying and B. risky, because these might take many hours to run and the connection can easily be severed, wrecking the job. I don't really have a workaround for this, so my current plan is to continue logging on to the remote server and running individual COMSOL files. As I'll discuss, this is probably necessary for now anyway, since I ran into some problems running these models and might want to tweak the models in my local GUI before running them over a large (frequency) parameter space on sandbox1.
Based on the plots generated from the first run, I estimated that the TM displacement was comparable to the TM thickness; just as a first pass I figured 'small displacement' could be easily defined as 'displacements smaller than the thickness of the fiber' or maybe 'smaller than the thickness of the surface layer'. If the displacement is directly proportional to the force, this meant I wanted to scale the previous F_load by 1e-5, which I did. I wanted to just see what kinds of displacements I would get, so I asked to run the model only at 300Hz. After more than 45 min of running, the server threw an error that it couldn't find my STEP file for the horns, which were in a local directory but not in the remote (server's) directory.
I thought this was an odd error for two reasons. First, I had originally not even had the STEP in my local directory (it only needed to load once and then could be used many times for a given model in the GUI), and I was getting an error within a couple minutes of starting the job that it could not find the file. Adding the file to my local directory seemed to solve this problem, as the model was running for much longer. The error I got after it finally crashed was not the same error as before, but was still an error in loading the file, which makes me a bit confused about where it is actually looking for this file or when exactly the loading process happens. To solve this, I have copied the STEP to the remote directory and will run again with the import pointing to that remote file, which I suspect will solve the problem.
Since the first run seemed to mostly work up to the outstanding questions mentioned above, I decided to also run the model in batch mode directly from the sandbox1 command line, just as Gautam had. That is, I ssh on to sandbox1 and run the command
$ /usr/local/comsol53/multiphysics/bin/glnxa64/comsol batch -inputfile [inputname].mph -outputfile [outputname].mph
For this run, I scaled the mass of the model by 0.1, so we now have a 4 kg TM. Most lengths of the model then scale by 0.1^(1/3), but those related to the radius of the fibers (and the bend point of the fibers) scale by 0.1^(1/2), because we want to maintain a constant stress in the fiber. The main length of the fiber remained the same so the modes would be in about the same place. I document these more thoroughly in the matlab script, which I should upload to the git. I scale the force by 1e-5 as above (same back-of-the-envelope calculation), with an additional factor of 0.1 to account for the lower mass.
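The constant-stress scaling argument can be verified in a couple of lines: the static stress in the fibers goes as m/r^2, so scaling m by s and r by sqrt(s) leaves it unchanged. The four-fiber suspension and the illustrative numbers below are my assumptions, not the actual aLIGO values.

```python
import math

# Static stress in a four-fiber suspension: sigma = m*g / (4*pi*r^2).
# (Illustrative numbers; not the real aLIGO fiber parameters.)
g = 9.81
def fiber_stress(m, r):
    return m * g / (4 * math.pi * r**2)

m0, r0 = 40.0, 400e-6     # hypothetical 40 kg TM on four 400 um radius fibers
s = 0.1                   # mass scale factor used in this run

s1 = fiber_stress(m0, r0)
s2 = fiber_stress(s * m0, math.sqrt(s) * r0)
print(s1, s2)  # equal: stress is preserved when r scales as sqrt(mass)
```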
The model ran into some convergence issues after a few frequencies. I could solve this by changing the relative tolerance, but I think I will most likely instead pursue Gabriele's suggestion above and try to refine the mesh to improve the convergence. The solver nodes of the COMSOL models are still a bit mysterious to me--I don't really know much about when different methods or measures of convergence are appropriate. Probably playing with these could both improve the accuracy and efficiency of the model... but it seems like a hefty undertaking. Might be worth the long-term payoff though.
The first few frequencies did converge, but as I was extracting the results, ancha (the COMSOL license server) went down, so I'll have to extract them later.
NOTE TO SELF: WHAT'S WITH THE INLINE IMAGE QUALITY? DON'T USE JPG !!!!
Gautam advised me against trying to install version 5.3, lest it break version 5.1--I had already gone through the install, but looking at the install manual it says it shouldn't affect previous installs except that the default behavior when double clicking .mph files will always choose the latest version of COMSOL. Since we mostly run on the command line we should be fine. That said I haven't tested files with COMSOL 5.1.
Zach pointed me to sandbox1.ligo.caltech.edu at the lunch meeting, as well as these notes from Rana about running COMSOL on a remote server. I couldn't run the file myself on sandbox1 because I don't yet have a home directory there, but I asked Larry if he could set one up for me so I can use the clusters. Gautam helped me run them on sandbox1 by having me scp the file onto the shared 40m drive; he then moved it into his home directory on sandbox1 and ran it. A version of the model asking for an analysis at 60 Hz and 600 Hz ran in 10-20 min and looked good (I was able to scp the output to my laptop and open it in the GUI; I have a script that should eventually let MATLAB do this for me, but it's not ready yet, and anyway I wanted to see directly that the output was normal). I modified the study to take 100 steps between 60 Hz and 5 kHz, and Gautam has now started that run, which if it scales linearly (hopefully better than linearly, since it won't have to remesh, etc.) will take about 8-16 hours.
Thank you Gautam for the extended mattermost session helping me run these!
I had some trouble running this on optimus.
Optimus has COMSOL 5.1 installed, but I made these files in 5.3. I downloaded the comsol 5.3 dvd.iso file last night, but on install I'm now getting the error "No locks available." I wasn't sure if this is a file permissions issue (sometimes the file has been 'locked' when it is saved and I open it in the GUI), an svn issue (most of the information I find online is from people running into some problem with lockd or their NFS), or a licensing issue (I don't think this is likely, COMSOL on my laptop shows more licenses available).
Has anyone seen this? Do I need to do something else with /cvs after install?
It's a good idea, I'll check out her elogs and get it started tonight.
I found that I had the relative tolerance set too low (0.001, while the iterations were not converging further than 0.02 or so); I changed it to 0.1, which let me run over a few modes relatively quickly once I reduced the number of sections on the main part of the fiber to 10. This is not sufficient--at 600Hz the fiber looks completely jagged because the mesh is not fine enough, so using optimus will be necessary.
We could run the simulations on the 32 core machine in the 40m lab (optimus)? I think Mariia was running some of her studies on optimus, and even though we had some problems with the licensing initially, I think she resolved these and has detailed the procedure in her elogs...
I'm running the frequency study on a small number (2) of frequencies in each decade from 60Hz to 10kHz (so there should be 5-8 total frequencies in the study). I started it at 4:25pm on my local machine, and it hasn't gotten very far in 30 minutes, so I may abort the study and try to make the mesh coarser--especially the distribution settings, which are easy to change.
I don't know why I wasn't seeing this problem with previous models (perhaps because I wasn't importing any CAD or STEP files), but my latest attempts at meshing and selecting specific domains of my model were being thwarted by inconsistent domain definitions. I was previously always manually selecting domains, which is confusing because all domains just get assigned a number when they are created. Worse, sometimes the numbers assigned to domains change when the model rebuilds, especially if there has been a significant change in the model geometry. This results in later steps selecting the wrong set of domains (or boundaries, etc).
To fix this problem, I created new selections (sets of domains, boundaries, or etc that receive their own label and can be selected as a group later in the model). The new selections include:
This is probably also a necessary step for getting reliable results when interfacing with MATLAB, and might explain some weird problems I was running into a while ago and just made haphazard fixes for.
I use the following mesh steps to get what seems like a pretty reliable meshing:
Note on the skin depth: I defined the thickness of the surface layer by the parameter skin_depth. We are interested in how the energy in the surface layer changes with the radius of the fiber, and expect that the surface depth doesn't change much with fiber radius (a very thick fiber will have a few-micron surface layer; so will a fiber half that diameter). I managed to get the mesh working with a 15 um surface layer, but was having trouble getting a 10 um layer. I wasn't completely sure how small a surface layer would be necessary, but for a test mass 1/100 the size of the aLIGO masses, the main part of the fiber would have a radius of 20 um, so if we want to study the surface layer for masses down to that size I figure the skin depth should be at most around 10-15 um. It would be better to be motivated by the actual scale of the physics going on at the surface--how deep do micro-cracks and other lossy imperfections go?
For the material of the entire test mass and suspension, I used the fused silica that is specified as [solid,NIST SRM 739 - Type I]. I wasn't sure of the difference between the types of silica, but this one said SRM, so I thought it might have been defined in my distribution of COMSOL by a LIGO person. A quick google search showed me that person may have been Rana?? https://labcit.ligo.caltech.edu/~rana/research/etm.html
I have not yet messed with the meshing for these models. Obviously the points with more flexure and smaller parts (like the horns of the ears, the tapering parts of the fibers, etc.) will require a finer mesh.
I need to incorporate the advice on how to build this study described in the link above. The following might also be useful, though I haven't looked through them yet:
I found the dimension of the test mass flat in the drawings of the mock test mass design here: LIGO-D080687.
I modelled the fibers with the profile described in LIGO-D080751, fig 3.7.
I grabbed the d-values from LIGO-T1000545, but since the d-value is defined as the distance from the center of mass (of the penultimate mass (PUM) or the E/ITM) to the bend point (BP) of the fiber (I believe the point on the fiber with maximal flexure in the fundamental mode), I did not go through the effort of figuring out where the bend point is but rather grabbed the horn-CM distance from LIGO-T1100407
I wanted to get the real aLIGO parameters for the first version of this model, and have parametrized the model in such a way that I can define all of the parameters that need to change (surface area of the ear-TM bond, length of the fibers, thickness and profile of fibers, d-value, etc) and scale them with mass in some way for future iterations on this design.
I need to pare down the number of parameters, because I started by fully defining the ears and now am importing a 3D model of the ears and planning to scale these with mass.
I'm using a solid mechanics model.
I fixed the position of the bonding surfaces for the PUM ears, so it is as if they are contacted to a completely fixed PUM (the PUM is not included in the model, but the upper ears are included, so the constraint is on the ears not the fiber. See drawing).
I added gravity to all parts of the model. Apparently, it is not trivial to use gravity in a frequency domain study in COMSOL, as described in this presentation here. Fortunately, the presentation in the link is interested in the transfer function for a mass on a string also, so I follow the simulation steps they describe below.
I add a boundary load that will vary sinusoidally for the frequency domain study.
On Wednesday I started making my own model of the aLIGO suspensions, with the top of the silica fibers attached to ears that are fixed rather than attached to an additional suspension stage (so this will be a one-stage suspension).
I grabbed the aLIGO ear design from the DCC: LIGO-D080751-v4
I am almost done with the model, should have it working tomorrow and will add it to the experimental gravity github in an appropriate place.
Since 2014, the limit of 12 workers using the matlab parallel computing toolbox has been lifted. Today, I was able to get this to work. There's a trick.
Usually, when you start up matlab and run a parallel thing like 'parfor', it just uses a default profile 'local' which limits you to 12 workers. You can try to ask for more by doing 'parpool(40)' for 40 workers, but it will tell you that NumWorkers = 12 and you're out of luck. So instead:
myCluster = parcluster('local');
myCluster.NumWorkers = 40;
parpool(myCluster, 40)   % open the pool with all 40 workers
It seems that it needs the max # of workers and the requested number of workers to be 40 to use 40, otherwise you'll just get 12 (as of matlab 2016a).
Using the path written in the elog by ericq (Computer Scripts/Programs), COMSOL can be run from the following directory on the remote computer: /cvs/cds/caltech/apps/linux64/comsol51/bin/glnxa64/comsol batch -inputfile Model1.mph -outputfile Model_solv.mph. To transfer files between Linux and Windows, use the command pscp.
1. Download PuTTY and make a connection with the remote computer.
2. Linux terminal: ssh optimus % log in to the remote system
3. Windows terminal: pscp "the path of the file" email@example.com:/users/.../ % transfer the mph-file from Windows to Linux
4. Linux terminal: /cvs/cds/caltech/apps/linux64/comsol51/bin/glnxa64/comsol batch -inputfile Model1.mph -outputfile Model_solv.mph % run the calculation on the remote computer; without -outputfile the solution is stored in the input file
5. Windows terminal: pscp firstname.lastname@example.org:/users/.../Model_solv.mph Documents\ % copy the file from Linux to Windows; Documents is the name of the folder in Windows
I've just tried this out on my desktop machine using COMSOL 5.1 and it's still working. Which COMSOL is installed on optimus at the 40m?
Which licenses are in use (and by whom), and how many remain, can be checked by running the following commands on a Linux machine with COMSOL installed:
$ cd /usr/local/comsol51/multiphysics/license/glnxa64
$ ./lmstat -c ../license.dat -a
With Gautam's help, I created a user directory in the 40 meter Lab and copied Rana's documents (MATLAB coating files) from a flash card into it. After that, following this elog by Rana (COMSOL: remote server w/ matlab, Fri Dec 4 18:32:02 2015), I ran the matlab document BarrelCoating, which resulted in the following error
I took Aidan's COMSOL model for the ITM from a couple years ago and updated it with some more details:
In the attached image, I have made one quadrant of the tubular cryo shield transparent just for clarity - the actual modelled tube is 3 cm thick, made of aluminum, has an emissivity of 0.95 on the inside and 0.03 on the outside (to simulate what we would get from polished aluminum or gold coating).
This file is in our GitLab: https://git.ligo.org/rana-adhikari/CryogenicLIGO/blob/master/FEA/ITM-ColdShield-CP.mph
*I am suspicious of just using a single emissivity number for the AR and HR coatings. Since we are concerned with wavelengths which are long w.r.t. the coating thickness, it may be that the HR and AR coatings have a complicated wavelength dependence in the 5-50 micron band.
Here is a tutorial in importing SolidWorks into ANSYS and the steps needed to apply a Gaussian Force to the LIGO test mass that’s imported.
Importing SolidWorks into ANSYS:
Applying Gaussian Force:
Here are a series of tutorials for basic meshing principles from ANSYS Meshing Basics:
This tutorial will go through how to show frequency convergence for a cylindrical cantilever using Ansys 14.5 and Mathematica.
Here is a tutorial to implement a simple Modal Analysis of a Cantilever Cylinder
1. Open the ANSYS workbench
2. Drop and drag the "Modal" analysis system into the project schematic
3. Right click on "Engineering Data" and edit the material. There are predetermined properties found in the "Engineering Data Sources" (Right click on the description or press the books in the top right hand corner). For this tutorial, fused silica was created with these parameters:
In order to put in the parameters, just drag and drop from the toolbox. Also make sure the filter is turned off (that's the filter symbol next to the engineering data sources/books in the top right hand corner); this will allow the user to view all the parameters.
4. Return to the project, and open/edit the geometry. The cantilever that was used for this tutorial is a cylinder with a diameter of 2 mm with a length of 200 mm, shown in image 1.
5. Return to the project and open the Model tab. With that open, go to the subtab of "Geometry", named "Solid". Under Solid go to "Assignment" under "Material" to assign Fused Silica or whatever material you decide to choose.
6. Go to the "Mesh" tab. In this case the "Element Size" under the "Sizing" tab is 0.04 and the "Relevance Center" as "Fine". Right click on the mesh and press "Generate mesh".
7. Under the Modal tab, go to the Analysis Settings and change the "Max Modes to Find" for ANSYS to calculate. In this tutorial, the amount of modes that were used was 17.
8. Right-click the Modal tab and select "Fixed Support". This will make the bar a cantilever once the geometry is set to fix one face of the bar. Click on one of the faces and press "Apply" on the Geometry.
9. Right click on the solution and press "Solve". Let ANSYS run the modes through.
10. On the Solution tab, the "Tabular Data" is listed but the Total Deformation has not been listed. In order to do so, select all the frequency of the modes in the column and right click and press "Create Mode Shape Results".
11. Once loaded, the total deformation will have a lightning bolt next to each entry, right click and press "Solve" or "Evaluate All Results". This will make all the entries have a green check mark.
12. This will give the user the ability to animate each entry. This is done by clicking the play button (sideways triangle). ANSYS will run through the simulation for the selected mode, shown in image 2.
13. Using Mathematica (or another computational system) input the analytical solution for a Cantilever bar fixed to one end.
14. Compare only to the bar actually bending, not twisting or contracting. Some modes appear twice due to the symmetry of the bar. In image 3 the underlined frequencies are the ones compared between the analytical calculations (Mathematica) and the computational calculations (ANSYS).
This summarizes how to get the remote COMSOL server to run. COMSOL 18.104.22.168 is now on tegmeni thanks to Larry.
On the server:
rana@tegmeni|~> /usr/local/comsol51/multiphysics/bin/comsol server -login force
This starts up a comsol server instance, listening on port 2036. '-login force' will ask you to supply a username and password which you make up. You will have to enter these later from the client side.
On my laptop, from the MATLAB prompt:
>> mphstart('tegmeni.ligo.caltech.edu', 2036,'uname','pword')
That's it! Now you just run the matlab script which runs the COMSOL file.
I'm attaching a tarball of the .mph file (written by Dmitry Kopstov from MSU) and the associated matlab scripts which change the parameters, as well as looping over test mass thickness to produce a plot of Brownian noise PSD vs. thickness.
I moved the only entry from the 'ENG_FEA' log into the COMSOL log and then renamed that logbook as 'FEA' since we don't need two FEA logs.
Also renamed 'AdhikariLab' log as ATF.
At the 2014 commissioning workshop, I presented a summary of my efforts in converting finite element modal models into state space models:
I also provided the attached written summary for a report on the workshop that Aidan Brooks and Gabriele Vajente are preparing.
Here is a set of slides by Yoichi Aso on how to handle gravity in Comsol.
In this document, I try to identify a good mesh by comparing the numerical solution from that mesh with my analytical model. Since there are problems with carrying out the analytical calculation, it is still not entirely clear which mesh should be used.
I have refined the analytical calculation procedure, as outlined in this new document. The procedure indicates that the discrepancy between the analytical and numerical solutions is more likely attributable to meshing inaccuracies.
(See Plots in attached document)
My plan has been to replicate Duan's numerical thermoconductive (TE + TR) phase noise plot presented in his paper (section V). I am trying to match Duan's analytical expression with Heinert's analytical expression. This requires some rescaling of Heinert's TR displacement noise. (I also needed to divide Heinert's expression by 4 pi to match the Fourier transform convention.) Duan's analytical expression for the phase noise is obtained by evaluating the triple integral given in equation 13 of the Duan paper "General Treatment of Thermal Noise in Optical Fibers".
I have now plotted the Duan-Heinert comparison for the case of an infinite upper bound. It turns out that the curves differ by a maximum of 10 percent for low frequencies. Such a discrepancy has been attributed to lack of experimental investigation into this regime (according to Duan). For our purposes, such a discrepancy is acceptable. We will therefore use the Heinert curve for subsequent calculations due to its faster computation time.
I have plotted Heinert's analytical solution for TR noise using Duan's parameters. Since TO and TE noise can be found by simply rescaling TR noise, these have been included in the plot as well. The solid curve represents the analytical solution, while the tick marks represent COMSOL's solution. I have used COMSOL for both a 1D axisymmetric and a 3D model. Since Duan's cylinder has a radius of 125 microns, but a length of 1 m, the meshing was difficult for the 3D model. I ended up shortening the length of the cylinder, converting to the actual length when finally calculating the thermal noise.
How close are these FEA calculations with the analytical values?
Can you plot residual too? (Put analytical values, 1D, abs(1D - analytical), 3D, and abs(3D - analytical) all together.)
Here are the plots with their fractional residuals: abs(S_COMSOL - S_analytical)/S_analytical
How close are these FEA calculations with the analytical values?
Can you plot residual too? (Put analytical values, 1D, abs(1D - analytical), 3D, and abs(3D - analytical) all together.)
Agreement with Heinert's paper for cylindrical TR noise has now been achieved. Using the stationary state assumption to calculate the temperature profile, the computation time was reduced compared to the previous time-dependent approach. Here are the plots showing the agreement. I have shown the plots for a 1D axisymmetric model, in addition to a full 3D model in COMSOL. Both give the same result.
What about this example? The result is easier to understand intuitively.
Consider a bar with the length of L.
Let's say there is no body heat applied, but the temperature of the bar at x=L is kept at T=0
and at x=0 is kept at T=T0 Exp[I w t].
The equation for the bar is
Consider the solution with the form of T(x, t) = T(x) T0 Exp(I w t), where T(x) is the position dependent transfer function.
T(x) is a complex function.
Eq.1 is modified with T(x) as
With the boundary condition of
This can be analytically solved in the following form
where alpha is defined by
So kappa/Cp is the characteristic (angular) frequency of the system.
Here is the example plot for L=1 and alpha = 1 (red), 10 (yellow green), 100 (turquoise), 1000 (blue)
If the oscillation is slow enough, the temperature decay length is longer than the bar length and thus the temperature is linear to the position.
If the oscillation is fast, the decay length is significantly shorter than the bar length and the temperature dependence on the position is exponential.
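The behavior described above can be checked numerically. Assuming the analytical solution has the form T(x) = sinh(k(L-x))/sinh(kL) with k = sqrt(i w Cp / kappa) (my reconstruction from the boundary conditions; the entry's alpha then corresponds to w Cp/kappa for L = 1), a short Python sketch verifies that this form satisfies the boundary conditions and the heat equation:

```python
import cmath

# Assumed analytical form for the transfer function T(x), normalized to T0 = 1:
# T(x) = sinh(k (L - x)) / sinh(k L), with k = sqrt(i * w * Cp / kappa).
def T_of_x(x, L, w, kappa, Cp):
    k = cmath.sqrt(1j * w * Cp / kappa)
    return cmath.sinh(k * (L - x)) / cmath.sinh(k * L)

L, w, kappa, Cp = 1.0, 10.0, 1.0, 1.0   # illustrative values

# Boundary conditions: T(0) = 1 (driven end), T(L) = 0 (clamped end).
bc0 = T_of_x(0.0, L, w, kappa, Cp)
bcL = T_of_x(L, L, w, kappa, Cp)

# Check the ODE  i*w*Cp*T = kappa*T''  with a finite-difference second derivative.
h, x = 1e-4, 0.3
Txx = (T_of_x(x + h, L, w, kappa, Cp) - 2 * T_of_x(x, L, w, kappa, Cp)
       + T_of_x(x - h, L, w, kappa, Cp)) / h**2
resid = abs(1j * w * Cp * T_of_x(x, L, w, kappa, Cp) - kappa * Txx)
print(abs(bc0 - 1), abs(bcL), resid)  # all ~0
```

For small w the solution is nearly linear in x, and for large w it decays exponentially from the driven end, as described above.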
Now what we need is to solve this in COMSOL
Here, I describe some test cases to see if COMSOL's solutions agree with some simple analytical solutions. Right now, I have two plots showing COMSOL's solution and my analytical solution on separate plots. I will be plotting their difference to see if they really match up.
The following document shows the relative difference between these two plots.