COMSOL elog
57   Fri Jul 12 19:43:32 2013   Deep Chatterjee   General   General   TE noise problem - Matlab script

 Quote: The present aim is to calculate the TE noise as in Liu and Thorne. Although Liu and Thorne's results are for the case of an infinite mirror, we model a test mass with large dimensions in an attempt to approach their result. Following my previous post: the results given by the COMSOL simulations had the same profile as Liu and Thorne's result, with a 1/omega^2 frequency dependence, but were displaced from the analytic result by a constant factor that depended on the pressure applied to the face of the test mass, which should not be the case. Following the steps of Liu and Thorne, we constructed the test mass as a cylinder, large compared to the beam spot size. An oscillating pressure was applied to one of the faces, and from the temperature gradient generated by the resulting strains one can calculate the work dissipated. The process involves integrating the squared magnitude of the temperature gradient over the entire geometry and taking the time average. However, this did not give the correct results, so it was decided to extract the Fourier coefficient of the signal and perform the integration on that, as done for TR noise by Koji Arai. The steps were:
> The extracted data were stored in a 3D MATLAB array using mphinterp, with one dimension each for r, z, and t.
> The number of radial and z slices is defined by the user beforehand.
> For each (r, z) value there is a time signal.
> The Fourier coefficient corresponding to the pressure oscillation frequency was extracted from the last three cycles (in theory one cycle should suffice).
> This was done to avoid the small-magnitude numerical noise generated by COMSOL.
> The Fourier coefficients are stored in a 2D array indexed by r and z.
> The integration was performed using trapz twice, once on each dimension, to get the total volume integral.
> The rest of the calculation is the same as in the previous script: plugging the prefactors into the formulae and plotting.
**The primary problem with this script is that the data extraction takes a significant amount of time: for a mirror with 200 times the beam-spot radius and 200 radial slices, it takes close to 6 minutes to evaluate the work dissipated at each frequency, and the script solves for 16 frequency values. The results for a small number of radial slices do not follow the straight-line profile, while a larger number of slices makes the program take longer. The results have not been checked yet.
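The extraction and integration steps quoted above can be sketched as follows. This is a hedged Python stand-in for the MATLAB logic (the original uses mphinterp and trapz); the function names, array layout, and the cylindrical Jacobian shown here are my assumptions, not the actual script.

```python
import numpy as np

try:                        # np.trapz was renamed np.trapezoid in NumPy 2.0
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

def fourier_coeff(sig, t, f0):
    """Fourier coefficient at the drive frequency f0 for each (r, z) time signal.

    sig has shape (n_r, n_z, n_t); t covers the last few drive cycles.
    Normalized so a pure cosine of amplitude A returns |coeff| = A.
    """
    T = t[-1] - t[0]
    return 2.0 / T * trapz(sig * np.exp(-2j * np.pi * f0 * t), t, axis=-1)

def volume_integral(F, r, z):
    """Cylindrical volume integral 2*pi * int F(r, z) r dr dz via trapz twice."""
    inner = trapz(F * r[:, None], r, axis=0)     # radial integral with Jacobian r
    return 2.0 * np.pi * trapz(inner, z)
```

Projecting onto exp(-2*pi*i*f0*t) over an integer number of cycles isolates the drive-frequency component and rejects the small numerical noise mentioned above.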

I am replying to report the modifications made to the script since the last post. The problem of requiring too many radial and z slices has been avoided by spacing the slices according to a Gaussian
distribution along both the r and z directions - i.e. the slicing is dense in the region near the surface where the pressure is applied and much coarser towards the edges of the cylinder. The slicing
is handled separately by a function named giveSlices(), which returns the slice positions as two arrays corresponding to r and z.
The fineness of the slicing is controlled by the parameter SD in the code, the standard deviation of the Gaussian, which determines whether the slicing is fine throughout or fine at the
centre and coarse with increasing r.
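A minimal sketch of the Gaussian slicing idea, assuming inverse-CDF spacing is an acceptable stand-in for what giveSlices() actually does (Python here; the original is a MATLAB function, and the signature below is invented for illustration):

```python
import numpy as np

def give_slices(n_r, n_z, R, H, sd):
    """Sketch of the giveSlices() idea (the real function is in the attached MATLAB).

    Slice positions follow a Gaussian of standard deviation sd: dense near r = 0
    and near the loaded face at z = 0, coarse towards the outer edges, where the
    temperature gradient has almost fallen to zero.
    """
    def gaussian_grid(n, L):
        x = np.linspace(0.0, L, 20001)                  # fine reference grid
        cdf = np.cumsum(np.exp(-x**2 / (2.0 * sd**2)))  # half-Gaussian CDF (unnormalized)
        cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
        # Uniform quantiles through the inverse CDF -> spacing follows the Gaussian density.
        return np.interp(np.linspace(0.0, 1.0, n), cdf, x)
    return gaussian_grid(n_r, R), gaussian_grid(n_z, H)
```

A small SD concentrates slices near the loaded region; a large SD recovers nearly uniform slicing.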

This method reduces the number of slices by almost two orders of magnitude for significantly large test-mass dimensions, while it is expected to give similar results, since fine slicing of the
test mass at the edge is not required: the temperature gradient almost falls to zero there.

It is suspected that the long simulation time is due to calling the mphinterp() command a large number of times. In my next modification I will try to call the command just once
and then extract the data properly from its output.

Attachment 1: codes_TE_calc.zip
58   Sun Jul 14 15:13:46 2013   Deep Chatterjee   General   General   TE noise problem - Matlab script

Quote: (see entry 57 above)

Following the previous reply regarding the use of mphinterp(), the appropriate changes to the code were made. The overhead of repeated calls to COMSOL was indeed responsible for the long
simulation time. Now, even for a relatively large total number of slices (about 27000), the data extraction itself completes within a few seconds, although for much larger numbers of slices
the codes will probably still take some time to complete. The codes are attached.
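The speed-up can be illustrated with any interpolator. Below is a Python sketch using scipy in place of LiveLink: the per-call overhead is paid once per point in the loop pattern and once in total in the vectorized pattern. In the actual MATLAB code the analogous change is to build the full coordinate matrix and pass it to a single mphinterp() call; the toy field below is of course not the COMSOL solution.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy stand-in for the solved temperature field on an (r, z) grid.
r = np.linspace(0.0, 1.0, 50)
z = np.linspace(0.0, 1.0, 50)
field = np.exp(-r[:, None] ** 2) * np.cos(z[None, :])
interp = RegularGridInterpolator((r, z), field)

pts = np.random.default_rng(0).uniform(0.0, 1.0, size=(27000, 2))

# Slow pattern: one call per point, paying the call overhead every time
# (analogous to calling mphinterp() inside the slice loops).
vals_loop = np.array([interp(p[None, :])[0] for p in pts[:100]])

# Fast pattern: one call on the full coordinate list, overhead paid once
# (analogous to a single mphinterp() call with the whole coordinate matrix).
vals_once = interp(pts)
```

Both patterns return identical values; only the number of round trips differs, which is what dominated the 6-minute-per-frequency runtime.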

Attachment 1: codes_TE_calc_(2).zip
60   Mon Jul 15 16:05:29 2013   Deep Chatterjee   General   General   TE noise problem - Matlab script

Quote: (see entries 57 and 58 above)

The dependence of the power spectral density on the parameter F0, the pressure amplitude, has been largely corrected in the attached codes. The mistake was that the
Fourier coefficients were being extracted from ht.gradTmag, which is not the correct time signal to use here, so its Fourier coefficients are not the right ones.
In the present version the Fourier coefficients are extracted from the r and z components of the gradient separately, then squared and added, to carry out the calculation. However, the results are still off from the L&T results by a factor of order 5.
The codes and the plots are attached. The red curve is the L&T result, while the blue curve is the simulation.
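The corrected combination can be sketched as follows (Python stand-in for the MATLAB; gTr and gTz denote the exported time signals of the r and z gradient components on the (r, z) grid, and the names are illustrative):

```python
import numpy as np

try:                        # np.trapz was renamed np.trapezoid in NumPy 2.0
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

def coeff(sig, t, f0):
    """Fourier coefficient at the drive frequency f0 of each (r, z) time signal."""
    T = t[-1] - t[0]
    return 2.0 / T * trapz(sig * np.exp(-2j * np.pi * f0 * t), t, axis=-1)

def grad_sq(gTr, gTz, t, f0):
    """|coeff of dT/dr|^2 + |coeff of dT/dz|^2, the quantity to volume-integrate.

    Taking the coefficient of ht.gradTmag instead is wrong because the magnitude
    of an oscillating vector is not itself a sinusoid at f0.
    """
    return np.abs(coeff(gTr, t, f0)) ** 2 + np.abs(coeff(gTz, t, f0)) ** 2
```

Extracting the components first keeps the phase information that the magnitude signal destroys.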

Attachment 1: codes_TE_calc.zip
69   Wed Jul 24 21:08:24 2013   Deep Chatterjee   Optics   General   TR results for different dimensions

In this post I simulate the procedure for calculating the TR noise for finite cavities as proposed by Heinert and check for a match.

The technique of performing all necessary calculations in COMSOL and exporting the results was applied to the TR codes.
The code gives output similar to the technique of extracting Fourier coefficients in place of time averaging,
as done in the codes of Koji Arai; as the present code runs, its output is similar to the previous codes
found in the SVN.

However, the results in the present case were off by a constant factor close to 100. This may be due to an 'm' vs 'cm' (or similar) unit mismatch between the
analytic calculations and the COMSOL parameter values. Although it has not been found yet, I hope the correction will be
found soon.

The codes give results similar to the analytic result for other values of the mirror radius and beam radius (apart from the constant
factor mentioned above). One can see the trend of the analytic versus simulated values in the plots
attached. These plots are for a mirror radius of 25 m and a beam radius of 9 cm; the original radii were 0.25 m
and 9 cm respectively, i.e. the mirror-to-beam ratio has been changed by two orders of magnitude.

As mentioned above, the reason for the constant-factor difference will be looked into.

Attachment 1: Jul24.pdf
Attachment 2: Jul24_acalc.pdf
70   Thu Jul 25 11:36:17 2013   rana   Optics   General   TR results for different dimensions

PDF please instead of EPS or BMP or JFIF or TARGA or GIF or ascii art.

71   Thu Jul 25 13:21:46 2013   Deep Chatterjee   Optics   General   TR results for different dimensions

 Quote: (see entry 69 above)

The discrepancy between the analytic and COMSOL results has been partially addressed. Attached is another
plot showing the comparison. The ratio between the COMSOL and analytic results is now between 0.7 and 0.8; this difference
will be looked into. It is, however, observed that the difference is not a constant factor - it has to do with the model file.

Attachment 1: Jul_25.pdf
72   Thu Jul 25 15:54:58 2013   Deep Chatterjee   Optics   General   TR results for different dimensions

Quote: (see entries 69 and 71 above)

The issue of the difference between the analytic and simulated values has been resolved. The code now gives a reasonable match
between the analytic and simulated cases. There is, however, a difference in the formulas used compared to the previous cases. Note that
the 1/2 in front of Eq. (15) of Heinert is there because the time average has already been taken. In the present codes, however, the volume
integral of grad_T is evaluated in COMSOL, exported as a function of time, and then averaged in MATLAB. Thus the factor of 1/2 must be
omitted in this case (see Liu and Thorne Eq. (5)). The presence of this extra factor of 1/2 was the error in the last uploaded plots. From the
relative-difference plot, one can see that the difference between the COMSOL and analytic results reaches 7% at most, but over most of the graph
it is close to 1%, which is a fair result.
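The bookkeeping behind the omitted 1/2 can be made explicit. For a field oscillating at the drive frequency, the cycle average of the squared gradient is half the squared Fourier amplitude (my notation, not a transcription of either paper):

```latex
\[
  \delta T(\vec r, t) = \Re\!\left[\delta\hat T(\vec r)\, e^{i\omega t}\right]
  \quad\Longrightarrow\quad
  \left\langle \left(\nabla\,\delta T\right)^2 \right\rangle_t
  = \tfrac{1}{2}\left|\nabla\,\delta\hat T\right|^2 .
\]
```

So a formula written in terms of Fourier amplitudes (Heinert Eq. (15)) carries an explicit 1/2, while a numerically time-averaged volume integral already contains it and must not be halved again.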

Attachment 1: Jul_25_1.pdf
Attachment 2: Jul_25_2.pdf
Attachment 3: relative_plot.pdf
129   Sun Aug 26 16:42:54 2018   rana   Mechanics   Analysis   Test Mass Thermal Noise: Consistency Checks

For the Voyager test masses, we have been considering a barrel coating to increase the IR emissivity and thus the radiative cooling power. We also seek to estimate the added Brownian thermal noise that this coating introduces.

Dmitry Kopstov (from MSU) made a baseline model for this which we have been modifying. The latest is in the CryogenicLIGO git repo in the FEA directory as ETM_NASAbarrel.mph.

Comparison with Analytic Estimates:

Convergence Tests:

23   Wed Jun 19 14:59:13 2013   Emory Brown   General   General   The Relative Residual Convergence Error

For a while we have been encountering a COMSOL error of the form "Failed to find a solution.; The relative residual (###) is greater than the relative tolerance.; Returned solution is not converged.; -Feature: Stationary Solver 1 (sol1/s1); -Error: Failed to find a solution."  The error occurs when attempting to find a stationary solution in models with boundary loads and no fixed constraints (nothing preventing an edge from moving).  We wanted to determine the cause of the error, either so that we could use the stationary solver or at least to confirm that the error was not indicative of a problem in our model.  To this end, I designed a few very simple COMSOL models that reproduce the behaviour and attempted to determine the root of the issue.

I first constructed a model somewhat similar to ours: a cylinder of fused silica with all of the default values and a normal mesh.  I applied a boundary load of 1 N on one of the faces and ran a stationary solver.  As expected, the solver failed to converge, since no boundary condition prevented the body from accelerating continuously.  Applying a balancing 1 N force on the opposing face resulted in the same error as above, which replicates the original problem, since the system now fails to converge in a case where it should.  I also made an even simpler 2-D model of a square; applying 1 N forces to opposing sides of the square again returned the error above.
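This failure mode is consistent with a singular stiffness matrix: with no fixed constraint, the static problem K u = f admits rigid-body null modes and has no unique solution even when the loads balance. A minimal 1-D two-mass illustration (Python; a toy analogue, not COMSOL itself):

```python
import numpy as np

k = 1.0e3                                   # spring stiffness, N/m
K = np.array([[ k, -k],
              [-k,  k]])                    # free-free chain: no ground spring

f = np.array([1.0, -1.0])                   # balanced end loads, like the two 1 N faces

# K is singular: the rigid-body translation [1, 1] costs no strain energy,
# so the stationary problem has no unique solution even though the loads balance.
assert abs(np.linalg.det(K)) < 1e-6

# A least-squares solve picks one representative of the family u + c*[1, 1];
# only the relative displacement (the compression) is physically determined.
u = np.linalg.lstsq(K, f, rcond=None)[0]

# Pinning one node (a fixed constraint) removes the null mode and restores
# a unique solution, which is why the solver converges once an edge is fixed.
K_pinned = np.array([[1.0, 0.0],            # enforce u[0] = 0
                     [ -k,   k]])
f_pinned = np.array([0.0, -1.0])
u_pinned = np.linalg.solve(K_pinned, f_pinned)
```

The relative displacement u[0] - u[1] = 1/k is well defined in both cases; only the absolute position is indeterminate without a constraint.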

Both of these models could still be evaluated with at least an eigenfrequency solver, as noted for the primary model in the previous eLog.  I looked on the COMSOL forums, read through more of their documentation, and saw the suggestion, in response to this error, to use a time-dependent solver and simply view times after the system has settled to a stationary state (#2 https://community.cmc.ca/docs/DOC-1453).  I tried this on the test models and both time-dependent solutions converged without error to their expected solutions (compression between the faces on which the forces were applied).  This may be a sub-optimal computational method, though: even in the simple cylinder case, with 6133 elements and a simple force profile, the solution took several minutes to run.  For the cylindrical model, I evaluated the strain energy using both an eigenfrequency and a time-dependent solver and obtained the same result from both.  The eigenfrequency solver evaluates much more quickly than the time-dependent solver, and in the primary model, as noted in my previous eLog, the strain energies obtained from the eigenfrequency and frequency-domain solvers agreed, so the best way to proceed seems to be to use the eigenfrequency solver to compute the strain energy.

The source of the error is still unknown, but given these tests it seems very unlikely to indicate a problem in our model.  We still have a very significant disagreement between the simulated results and the calculated values.  I am going to spend the next day or so looking through both the COMSOL model and the analytic calculation, checking for errors that could cause this discrepancy.  I will then start reading the documentation for LiveLink for MATLAB and try to implement it.

24   Thu Jun 20 10:26:38 2013   Matt A.   General   General   The Relative Residual Convergence Error

Emory:

I don't understand how you're getting the strain energy from the eigenvalue solver. It is my understanding that the eigenvalue solver will only give you the strain energy at a particular eigenfrequency. We're interested in the strain energy from the beam deformation at frequencies below the first eigenmode.

 Quote: (see entry 23 above)

26   Thu Jun 20 12:51:44 2013   Emory Brown   General   General   The Relative Residual Convergence Error

I used the same integrator that we had set up for use with the stationary solver.  I had expected it either to fail or to return a very different result, but since it returned the same result as when it was used after the previous solvers, I assumed it was able to use that solver to find the value.  After seeing this comment I went back and checked: the integrator was set up to use the last stationary solution regardless of which solver ran last.  I will rerun the previous tests that relied on using this integrator in conjunction with non-stationary solvers and see whether they actually agree and how their results compare to the analytic solution.  I will spend some time today trying this.  Unfortunately, both the time-dependent and frequency-domain solvers take a long time to converge, so running these tests may take most of the day, but I can start reading through some of the LiveLink for MATLAB documentation while they run.

edit:

I ran the time-dependent solver for the rigid-back boundary condition and computed the strain energy for the final two time steps.  Both gave a value of 1.58425*10^-10 J, which corresponds to Sx(100 Hz) = 2.08871*10^-40, about 3.6% greater than the value obtained from the stationary solver.  I don't understand why these values should differ at all.  When I tried to run the time-dependent solver with the spring-back and gravitationally-opposing-force boundary conditions, an error message was returned: "Failed to find consistent initial values., Last step is not converged.; -Feature: Time-Dependent Solver 1 (sol4/t1).; -Error: Failed to find consistent initial values."

I'm going to spend some more time looking at the COMSOL model attempting to find anything which could be causing this error and reading any relevant documentation I can find.

edit 2:

I spent most of the day attempting to find the source of either of these errors.  Possible solutions I found and tried included increasing or decreasing how fine the mesh is, increasing the acceptable tolerances, and increasing the time interval in the time dependent solver.  I attempted all of these and none of them worked.  Surprisingly, when I increased the acceptable tolerance level for the stationary solver, it returned a greater relative residual which does not make sense to me.

I also took the simple 3-D cylinder constructed yesterday and was able to replicate the errors with it.  When I increased the time interval for the time-dependent solver in that case, it converged and gave a result.  I was able to get a Umax value for the final time step, which should be equivalent to what a stationary solver would return.  Further increasing the number of time steps in the primary model did not make it converge; I will run it again with many more time steps and see if it converges.  Even if it does, this doesn't seem like a good way to do our computations, as it takes a long time to complete a solution.  But seeing whether it converges may tell us something about the problem with the stationary solution: in particular, if increasing the number of time steps does make it converge, that would indicate that whatever makes the simple case converge would probably work for our model as well.

Quote:

Emory:

I don't understand how you're getting the strain energy from the eigenvalue solver. It is my understanding that the eigenvalue solver will only give you the strain energy at a particular eigenfrequency. We're interested in the strain energy from the beam deformation at frequencies below the first eigenmode.

 Quote: We have been encountering an error in COMSOL for a while of the form "Failed to find a solution.; The relative residual (###) is greater than the relative tolerance.; Returned solution is not converged.; -Feature: Stationary Solver 1 (sol1/s1); -Error: Failed to find a solution."  The error has occurred when attempting to find a stationary solution in models with boundary loads and no fixed constraints (preventing an edge from moving).  We wanted to determine what the error was caused by to allow us to use the stationary solver, or at least to confirm that the error was not indicative of a problem in our model.  To this end, I designed a few very simple COMSOL models in which I was able to reproduce the behaviour and attempted to determine the root of the issue. I first constructed a somewhat similar model to ours using a cylinder of fused silica with all of the default values and a normal meshing.  I applied a boundary load of 1N on one of the faces and ran a stationary solver.  As expected, the solver failed to converge since it had no boundary condition preventing it from accelerating continuously.  Applying a force of 1N on the opposing face resulted in the same error as above, which replicates the previous error since the system is failing to converge in a case where it should.  I decided also to make an even simpler 2-D model of a square.  Applying 1N forces to opposing sides on the square again returned the error above.   Both of these models were able to be evaluated using at least an eigenfrequency solver as noted on the primary model in the previous eLog.  I looked on the COMSOL forums and read through some more of their documentation and saw the suggestion in response to this error to use a time-dependent solver and simply view times after the system will have settled to a stationary state (#2 https://community.cmc.ca/docs/DOC-1453).  
I attempted this on the test models, and both time-dependent solutions converged without error to their expected solutions (compression between the faces on which forces were applied). This may be a sub-optimal computational method, though, as even in the simple cylinder case with 6133 elements and a simple force profile the solution took several minutes to run. For the cylindrical model, I evaluated the strain energy using both an eigenfrequency and a time-dependent solver and obtained the same result from both. The eigenfrequency solver evaluates much more quickly than the time-dependent solver, and in the primary model, as I noted in my previous eLog, the strain energies obtained with the eigenfrequency and frequency-domain solvers agreed, so the best way to proceed seems to be to use the eigenfrequency solver to compute the strain energy. The source of the error is still unknown, but given these tests it seems very unlikely to indicate a problem in our model. We still have a very significant disagreement between the simulated results and the calculated values. I am going to spend the next day or so looking through both the COMSOL model and the analytic calculation, checking for errors which could cause this discrepancy. I will then start reading the documentation on LiveLink for Matlab and try to implement it.

25   Thu Jun 20 12:41:01 2013 Deep ChatterjeeGeneralGeneralThe expression for the "work dissipated" in the TE and TR noise calculations

While discussing TE noise with Yanbei Chen, it was realized that the expression for the work dissipated, W_diss, is derivable from the inhomogeneous heat equation with a source term.
The exercise was tried out with some success and can be found in the attachment, which roughly describes the steps.
The important point is that while Liu and Thorne considered stresses developed in the material by means of the heat balance equation, they ultimately resorted to the
expression W_diss = 1/T * integral{ kappa * grad(T)^2 r dr } to calculate the dissipation. They used an expression which relates the expansion, Theta, to the gradient of temperature.
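Written out in full, the quoted dissipation integral is (a sketch in cylindrical coordinates; the angle brackets denote the time average over a cycle, following Liu and Thorne):

```latex
W_{\mathrm{diss}} = \left\langle \frac{1}{T} \int \kappa
  \left| \vec{\nabla}\, \delta T \right|^{2} \, 2\pi r \, dr \, dz \right\rangle
```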

It is suspected that the fundamental approach is to consider a source of heat and to evaluate the dissipation from it. However, Liu and Thorne applied a pressure instead, probably because of the
physical scenario of fluctuations at the mirror face.

Discussion over the same would be helpful.

Attachment 1: dissipated_work.pdf  31   Mon Jun 24 18:00:26 2013 Deep ChatterjeeOpticsGeneralThe problem of TE Noise calculation for a beam passing through a cylindrical substrate

In this post, I mention of my immediate plan for the week and also describe the problem I am looking at in brief.

In the calculation of TR noise for a beam passing through a cylindrical substrate, Heinert considered thermal fluctuations
by means of a sinusoidally oscillating heat source, scaled according to the intensity profile of the beam, present along the length of the cylinder.
The heat equation is then solved to evaluate the temperature field. The gradient of temperature is used to calculate the work dissipated,
which in turn gives the spectral density according to the Fluctuation-Dissipation Theorem (FDT).
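The last step of that chain (W_diss to spectral density) follows Levin's form of the FDT. A minimal sketch, assuming the normalization S = 8 k_B T W_diss / omega^2 for a unit-amplitude drive (the prefactor convention is an assumption here, not taken from the entry):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def levin_psd(w_diss, temperature, omega):
    """Displacement-equivalent PSD from dissipated work via the FDT,
    S = 8 k_B T W_diss / omega^2, assuming a unit-amplitude drive."""
    return 8.0 * K_B * temperature * w_diss / omega**2

# A frequency-independent W_diss reproduces the 1/omega^2 PSD profile
# seen in these calculations (W_diss value here is purely illustrative)
s_100 = levin_psd(1e-12, 300.0, 2 * math.pi * 100.0)
s_200 = levin_psd(1e-12, 300.0, 2 * math.pi * 200.0)
```

Doubling the frequency reduces the PSD by a factor of four, i.e. the 1/omega^2 slope quoted in the entry above.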

On the other hand, the case of TE noise is considered in the work by Liu and Thorne, where the beam reflects from one of the faces.
In their work, they applied an oscillating pressure, scaled according to the beam profile, at that face. The equation of stress balance
is solved to get the strain field; the strain results in heating and, as a result, a temperature gradient. The work dissipated is calculated from
the gradient of temperature, and the spectral density follows from the FDT.

Now, in Heinert's work, it is to be noted that the heating term is the one that contains the TR coefficient, beta = dn/dT. This is where
the TR noise that we are looking into enters the picture. Liu and Thorne's work, on the other hand, never has an explicit heat source. In their case,
it is the strain that generates the heat implicitly. They related the expansion to the temperature perturbation using an expression
from Landau and Lifshitz's Theory of Elasticity. The important point is that the physical parameters that characterize TE noise, like the coefficient of
linear expansion, alpha, and the Poisson ratio, sigma, come into the picture through this relation. Another point to note is that the expression
for the work dissipated (W_diss) uses the gradient of temperature (it is the same formula that Heinert used). This expression is derivable from the heat
equation. Thus, one could also have done the exercise by injecting the right heat source and solving the heat equation instead, since it is ultimately
fluctuations in temperature that cause both the TR and TE noises.

Our problem is to evaluate TE noise for a beam that passes through the cylindrical substrate instead of reflecting off the face. It is suspected that applying an
oscillating pressure on the surface will not be the correct approach, since the beam goes through the material rather than just reflecting from the surface. We
want to solve it by means of a heat injection, as done by Heinert: calculating the gradient of temperature, the work dissipated, and then the spectral density. It is clear that the
heat source should be oscillating, but the correct coefficient is still undetermined, i.e., we know the heat source has the form q_dot = [- - -] * cos(omega * t). In
Heinert's case it is at this point that beta comes in; for the TE case this coefficient is not yet determined. The literature does not deal with the case of a beam going through
the material. The equations in Heinert must be examined more deeply to see how beta enters; by drawing an analogy, we may then be able to figure out the
right heat source for the TE noise case.

Any comments on references, the approach that should be taken, or any other thoughts on the problem are most welcome.

86   Tue Jun 24 14:35:42 2014 Sam MooreOpticsGeneralTrying to Verify the Heinert Model
Attachment 1: 06_23_14.pdf 87   Tue Jun 24 17:05:24 2014 Sam MooreOpticsGeneralTrying to Verify the Heinert Model

It does appear that the simplified model is only relevant for the simulations.  To quote Heinert: "An efficient computation is only possible for the simple model, as the advanced model would require a number of elements of more than 10^6."  I have run Koji's code that replicates Heinert's figure 3.  I have attached the resulting temperature distribution and noise-amplitude curve.  In the noise-amplitude curve, the red line is the analytical result, while the dots are from COMSOL.

The next step is to convert this code to an efficient complex time-independent solution.  As stated before, my main concern here is whether COMSOL actually solves the right equation in the stationary case.

Attachment 1: temperature.png Attachment 2: noiseAmplitude_agreement.png 59   Mon Jul 15 13:14:30 2013 Emory BrownOpticsGeneralUmax curves for varying height test masses

I ran sweeps over the ratioR range from 0.5 to 0.9 for test masses with modified height, and thus mass, which were otherwise equivalent to our previous test mass.  The attached plot indicates that the optimal ratioR value was shifted significantly: toward lower ratioR values for the half-mass test mass, and toward higher ratioR values for the 10x-mass test mass.  I took the data from the half-mass run and fit a polynomial of degree 2 to it in Matlab using the command p=polyfit(x,y,2), obtaining p=1.089*10^-10*x^2 - 1.024*10^-10*x + 1.774*10^-10, which has its minimum at ratioR=0.47.  This fits the generated plot, which appears to be near its minimum there, but I should run the simulation more finely around this value to check the expected behaviour.  I also did this fitting for the 10x-mass curve, but given the greater variance of that curve over this region of ratioR values I don't expect the predicted minimum to be as accurate, so if we want to find it we will have to search that region of ratioR values. For the 10x-mass test mass, p=4.459*10^-10*x^2 - 8.859*10^-10*x + 6.289*10^-10, which has its minimum at about ratioR = 1.
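The minimum of a degree-2 fit p(x) = a*x^2 + b*x + c sits at the vertex x = -b/(2a); checking the coefficients quoted above:

```python
# Half-mass curve coefficients quoted above: p(x) = a*x^2 + b*x + c
a, b, c = 1.089e-10, -1.024e-10, 1.774e-10

# Vertex of a parabola: x_min = -b / (2a)
x_min = -b / (2 * a)         # ~0.47, matching the quoted minimum

# 10x-mass coefficients, same procedure
a10, b10 = 4.459e-10, -8.859e-10
x_min_10 = -b10 / (2 * a10)  # ~0.99, i.e. a minimum near ratioR = 1
```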

I will run simulations in these regions overnight, and edit this post with the plots/results.

Attachment 1: MassComparison.png 95   Mon Jul 14 19:09:14 2014 Sam MooreOpticsGeneralUsing Heinert's Solution for Duan's Parameters

I have plotted Heinert's analytical solution for TR noise using Duan's parameters.  Since TO and TE noise can be found by simply rescaling TR noise, these have been included in the plot as well.  The solid curve represents the analytical solution, while the tick marks represent COMSOL's solution.  I have used COMSOL for both a 1D axisymmetric and a 3D model.  Since Duan's cylinder has a radius of 125 microns, but a length of 1 m, the meshing was difficult for the 3D model.  I ended up shortening the length of the cylinder, converting to the actual length when finally calculating the thermal noise.

Attachment 1: oneD_duanParams_residual-eps-converted-to.pdf Attachment 2: threeD_duanParams_residual-eps-converted-to.pdf 44   Tue Jul 2 13:32:30 2013 Emory BrownOpticsGeneralUsing Matlab to run our COMSOL simulations

We want to be able to test a number of values for the ratio of the radii of the frustum which will be much easier once we can get Matlab to automate running our COMSOL model.  I have been able to output the COMSOL model to a .m file and was able to figure out a way to export the Umax value we want to Matlab (see the attached .m file).

The model ran in Matlab after a short period of debugging and returned the same result as the COMSOL model had by itself.  I set up a script to loop over some values of RatioRadii and feed them to the model.  The first value fed to the model worked and returned 1.2099e-10 J for RatioRadii=0.1, but for RatioRadii=0.2 the model had a meshing error.  I will have to see if there is some way we can modify the mesh to fix this error.  When I briefly looked into it today, I found that for several RatioRadii values which I tried, the object could be meshed at certain finenesses, but not others.  I am going to spend tomorrow working on the first progress report, but after that is done I will work on fixing this meshing error.

Attachment 1: model_7_1_13.m
```function [Umax] = model(ratioRadii)
% ratioRadii must be a scalar
% model_7_1_13.m
%
% Model exported on Jul 1 2013, 17:08 by COMSOL 4.3.2.189.

import com.comsol.model.*
import com.comsol.model.util.*

model = ModelUtil.create('Model');
```
... 220 more lines ...
Attachment 2: looper.m
```function [RatioRValues, UmaxValues] = looper
% Iterates over a range of RatioR values

RatioRValues = 0.1:0.1:10;
UmaxValues = [];
for r = RatioRValues
    UmaxValues = [UmaxValues, model_7_1_13(r)];
    disp(UmaxValues)   % show progress after each run
end
```
... 4 more lines ...
45   Wed Jul 3 00:04:46 2013 KojiOpticsGeneralUsing Matlab to run our COMSOL simulations

And appropriate word wrapping so that we can read the entries. Please.

34   Wed Jun 26 13:52:06 2013 Arnaldo RodriguezOpticsGeneralVerifying Relative Error in Defocus for Regular and Manual-Loop Simulations

To verify the validity of the solutions produced by the manual simulation, I've calculated the relative error between the results from the manual code and the results produced by COMSOL normally.

The plot of the relative error in the defocus at r = 0 and r = 54 mm is shown below, for the case where only the ring heater is turned on at a total power of 5 W. It indicates that the maximum error is less than 0.01% (in percent-error format).

109   Sun May 7 18:22:35 2017 ranaGeneralVoyagerVoyager ITM: Radiative cooling with cold shield and cold CP

I took Aidan's COMSOL model for the ITM from a couple years ago and updated it with some more details:

1. Through radiative cooling only, the ITM is cooled to 103 K. Taking it to 123 K will be accomplished by adding a ring heater to the ITM.
2. Assume 3 W of heating from main laser beam onto ITM HR face.
3. Emissivity of ITM barrel is 0.95. Emissivity of HR* and AR faces is 0.5.
4. The CP and the Inner Shield are kept fixed at 80 K. This is to simulate the effect of having conductive cooling with cold straps. This needs to be checked in more detail by actually modeling the thermal straps.
5. Emissivity of the CP is 0.9.
6. The total length of the inner shield is 5 m. The CP is at z = 0 m and the ITM is at z = 2.25 m. We should check what the result would be if the shield is ~1m shorter or longer.

In the attached image, I have made one quadrant of the tubular cryo shield transparent just for clarity - the actual modelled tube is 3 cm thick, made of aluminum, has an emissivity of 0.95 on the inside and 0.03 on the outside (to simulate what we would get from polished aluminum or gold coating).

This file is in our GitLab: https://git.ligo.org/rana-adhikari/CryogenicLIGO/blob/master/FEA/ITM-ColdShield-CP.mph

*I am suspicious of just using a single emissivity number for the AR and HR coatings. Since we are concerned with wavelengths which are long w.r.t. the coating thickness, it may be that the HR and AR coatings have a complicated wavelength dependence in the 5-50 micron band.

Attachment 1: ITM-ColdShield-CP.png 130   Sun Aug 26 19:21:27 2018 ranaGeneralVoyagerVoyager ITM: Radiative cooling with cold shield and cold CP

This is a time-dependent version of the previous steady-state model:

• Cold Shield and CP held at a constant 60 K
• 3 W heat input to the ITM from the main laser beam
• radiative cooling to the shield
• ITM barrel emissivity = 0.9
• ITM HR/AR emissivity = 0.5/0.5

So the cooldown time without a heat switch is ~4 days. Since this is less than the usual pumpdown time required before opening the gate valves on the beamtubes, perhaps no heat switch or in-vacuum cryogens are required.
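A few-day cooldown is consistent with a simple lumped radiative estimate. A sketch (the mass, heat capacity, area, and emissivity below are illustrative placeholders, not the values from the COMSOL model):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_cooldown(t_start, t_end, t_shield, mass, c_p, area, emissivity,
                       dt=10.0):
    """Euler-integrate m*c_p*dT/dt = -eps*sigma*A*(T^4 - T_shield^4)
    and return the time [s] to cool from t_start to t_end."""
    t, temp = 0.0, t_start
    while temp > t_end:
        power = emissivity * SIGMA * area * (temp**4 - t_shield**4)
        temp -= power * dt / (mass * c_p)
        t += dt
    return t

# Placeholder numbers: a ~40 kg optic radiating ~0.5 m^2 to a 60 K shield
seconds = radiative_cooldown(295.0, 123.0, 60.0, mass=40.0, c_p=700.0,
                             area=0.5, emissivity=0.9)
days = seconds / 86400.0
```

With these rough numbers the cooldown comes out at a couple of days, the same order as the simulated ~4 days.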

Attachment 1: ITM-Cooldown.pdf Attachment 2: CoolDown.webm
79   Fri Nov 22 21:04:53 2013 Chris CousteOpticsAnalysisanalysis

The analysis is producing nice eigenmode and stress-mode models, but the displacement experiment needs work. It should be fixed by Monday.

131   Thu Feb 14 12:38:51 2019 Ching PinMechanics comsol modelling

So I made a simple COMSOL model of laser heating of a silicon disc, with radiation as the only loss, to see the temperature variation at steady state; this variation could be the limiting factor for high Q at 123 K due to the thermoelastic effect.

The model just uses a simple 2-inch disc, 0.028 cm thick, with the flats not yet incorporated.

I had to search for silicon thermal conductivity and heat capacity at low temperatures, settling on k = 800 W/(m K) and C_p = 300 J/(kg K) after referring to papers. I will check LIGO documents for more accurate values.

I put an arbitrary boundary condition of constant 123 K temperature on a spot 0.2 mm in radius, to simulate a beam.

Other arbitrary values include 77 K for ambient and a surface emissivity of 0.5.

The laser is off center because that is where the laser enters the current setup.

We can see that the power required is .02 W, which seems reasonable.

The model is consistent with the analytic model I made with the laser beam at the centre of the disc. See last two figures.
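For the centered-beam case, a conduction-only estimate of the radial temperature drop in a thin disc is dT ≈ P ln(R/r0)/(2*pi*k*h). A sketch using the numbers quoted above (treating conduction as dominant over radiation is an assumption of this estimate):

```python
import math

P = 0.02        # absorbed power [W], from the entry above
k = 800.0       # silicon thermal conductivity near 123 K [W/(m K)]
h = 0.028e-2    # disc thickness [m] (0.028 cm)
R = 25.4e-3     # disc radius [m] (2-inch diameter)
r0 = 0.2e-3     # heated-spot radius [m]

# Steady radial conduction in a thin disc: dT = P/(2*pi*k*h) * ln(R/r0)
dT = P / (2 * math.pi * k * h) * math.log(R / r0)
```

The logarithmic dependence on the spot radius is also consistent with the observation below that a fatter beam gives a markedly smaller temperature variation.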

I'm still trying to get the time dependence to work, as it is just giving me nonsense right now.

Some thoughts: the beam radius affects the temperature variation quite significantly, with a fat beam (1 mm radius) having half the temperature variation of a 0.2 mm radius beam.

I think the halo is just a trick of the eye, but I could be wrong.

Things to do:

Find the time scale of the system, since we want to modulate the laser to adjust the temperature, which will then be run through the mode ringer to measure Q and find the zero crossing

Change the heat source to be an actual laser

Add in the solid mechanics part

Add in the sapphire lens underneath

Attachment 1: Screenshot_from_2019-02-14_12-47-11.png Attachment 2: Screenshot_from_2019-02-14_15-07-40.png Attachment 3: Screenshot_from_2019-02-14_15-41-41.png Attachment 4: graph.pdf 132   Fri Feb 15 21:05:31 2019 Ching PinMechanics comsol modelling

So I got the time dependence to work, though I'm not sure what went wrong the first time. I'm trying to get a sense of how long the temperature takes to semi-equilibrate, and coming to grips with COMSOL as a whole. There seem to be some inaccuracies as the run time increases, so I'm figuring out how to increase accuracy without drastically increasing compute time. On the bright side, I switched the model to heating via a deposited beam, for a more accurate model.

Attached is COMSOL's handling of a deposited beam modulated sinusoidally at 0.1 Hz, and screwing up badly.  The y-axis is the average surface temperature across the whole disc.

Attachment 1: Screenshot_from_2019-02-15_21-40-01.png 133   Tue Feb 19 19:52:53 2019 Ching PinMechanics comsol modelling

The step response to heating via the laser (22.5 mW) is given in the attached picture, for two starting temperatures, 122.5 K and 122.8 K. We see that it takes fairly long to equilibrate, with a time constant of about 500 s, consistent across both temperatures. The y-axis is the average temperature across the surface of the disc, and the x-axis is time. I believe the heat-distribution profile stays very similar over time, simply because conduction is so much faster than radiation.
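The ~500 s time constant is consistent with a lumped radiative estimate, tau ≈ m c_p / (4 ε σ A T^3). A sketch using the disc parameters from the earlier entries (thickness, k, C_p, and emissivity 0.5 as quoted there; treating the disc as isothermal is an assumption):

```python
import math

SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]
rho, c_p = 2330.0, 300.0  # silicon density [kg/m^3], heat capacity [J/(kg K)]
R, h = 25.4e-3, 0.028e-2  # disc radius and thickness [m]
eps, T = 0.5, 123.0       # emissivity and operating temperature [K]

mass = rho * math.pi * R**2 * h
area = 2 * math.pi * R**2        # both faces; rim neglected
# Linearized radiative time constant about temperature T
tau = mass * c_p / (4 * eps * SIGMA * area * T**3)
```

With these numbers tau comes out in the mid hundreds of seconds, the same order as the fit above.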

Attachment 1: Screenshot_from_2019-02-19_19-52-10.png 134   Fri Mar 1 19:33:40 2019 Ching PinMechanics comsol modelling

I've changed the heating to come from two heat sources, to better model the situation with a heater and a laser. The heater deposits 22 mW, with the laser depositing 0.5 mW. The overall temperature distribution is smaller than before, as expected, but doesn't really change much. The heater is simulated with a deposited beam with a Gaussian profile of 8 mm standard deviation (s.d.); the laser, for contrast, has a 0.3 mm s.d. I learned that while the deposited-beam power doesn't care about emissivity, it does care about the area the beam is incident on: for example, if you increase the s.d. too much, you get less power deposited than what you enter.

I've also got modes to appear using solid mechanics, and I'm trying to see if there is a good way to simulate the frequency dependence on temperature nicely. Changing the parameters of the simulation does give me different frequencies, but I'm trying to find a way to do that over the time evolution of the model. I also need to check whether the frequency shift is in line with what we are measuring.

Attachment 1: Screenshot_from_2019-03-01_19-35-32.png Attachment 2: Screenshot_from_2019-03-01_19-36-02.png Attachment 3: Screenshot_from_2019-03-01_19-46-25.png 135   Mon Mar 4 17:22:07 2019 Ching PinMechanics comsol modelling

I've updated the material properties to vary with temperature, mainly in the range of 90-140 K. Using the parametric sweep function to vary the input power of the heater, we get the eigenfrequencies' dependence on temperature to show up. The fractional dependence of 1.3e-5 /K around 123 K matches what Aaron calculated in this elog entry, which is always a good sign that nothing is going horribly wrong. I've also added the flats to the silicon disc, for better accuracy. See the screenshots below showing the frequency shift with temperature.

Attachment 1: Screenshot_from_2019-03-04_19-13-28.png Attachment 2: Screenshot_from_2019-03-04_19-14-41.png Attachment 3: Screenshot_from_2019-03-04_19-15-37.png Attachment 4: Screenshot_from_2019-03-04_19-15-56.png 136   Wed Mar 6 09:51:18 2019 Ching PinMechanics comsol modelling

So I tried adding the sapphire lens to the comsol model, and I am having teething issues. I can't seem to get the solver to converge, but I'm working on it.

Attachment 1: Screenshot_from_2019-03-06_09-31-24.png 137   Thu Mar 7 10:10:37 2019 Ching PinMechanics comsol modelling

There are no issues with the thermal side of the modeling; the issue seems to be on the structural-mechanics side. I'm not sure what I'm doing wrong, but it just isn't converging. In any case, seeing that this is my last day here, I'll just point out that the version without the lens is saved in cvs/cds/caltech/users/cp/current working model.mph, while the model with the lens is saved in the same folder under the file name testing with lens.mph, on optimus. There is also a small-file edition of current working model, with a file name that is self-evident. I'll leave it to Aaron to upload these to git.

In any case, let me put down some documentation and thoughts on this model. The physical parameters are generally what we know of silicon at these temperatures, with the exception of the emissivity, which was arbitrarily set to 0.5. The model currently absorbs 22 mW from the heater and 0.5 mW from the laser, which implies that the heater should be able to put 45 mW onto the disc; that in turn suggests it should dissipate at least 100 mW to account for the lack of directionality of the radiation. Because COMSOL's deposited-beam power function does not account for emissivity, it must be modified in tandem with it.

Attachment 1: Screenshot_from_2019-03-07_10-10-47.png 127   Sat Mar 17 15:27:48 2018 ranaGeneralGeneralfile size >> small

When saving your COMSOL files do these two things to make the files much smaller (good for saving in version control and sharing):

1. File -> Compact History
2. Preferences -> Files -> Optimize for File Size (not speed)
128   Mon Aug 20 15:44:56 2018 ranaGeneralGeneralfile size >> small

Also,

1. click 'Clear Mesh' under the mesh menu
2. 'Clear Solutions' under the Study menu

In this way the file sizes will be ~100 kB instead of 10's of MB.

 Quote: When saving your COMSOL files do these two things to make the files much smaller (good for saving in version control and sharing): File -> Compact History Preferences -> Files -> Optimize for File Size (not speed)

124   Thu Jan 18 21:13:59 2018 aaronOpticsPonderSqueezemodifications to Gautam's 40m finesse model

I made a copy of Gautam's 40m model to add the unstable filter cavity for the ponderomotive squeezing project. I wanted to make a more explanatory record of the changes I've made because I think some of them might be necessary for other scripts using gautam's original model, but I have not implemented them in that file (also just for my own paper trail).

Changes:

• swapped the role of nXBS and nYBS; before I think it was sending the reflected beam at the main BS to the X arm, and the transmitted beam to the Y arm
• I kept nPOY, but I am a bit confused by it: this is the beam returning from the x-arm that is transmitted into the BS, but it isn't followed out of the BS; rather, this pick-off beam seems to be detected inside the BS substrate, which is odd. Anyway, we aren't using it for now.
• Changed some labels to distinguish between the beamsplitter already in the IFO (IBS) and the BS used in the quantum phase compensator (QBS)
• Added a quantum phase compensator cavity, consisting of QP1 and QP2, as well as a mirror (QP0) to direct light transmitted from the QPC through QBS back to QBS
100   Sat Sep 5 15:04:43 2015 Dennis CoyneMechanicsConfigurationsummary of FEA modal model to State Space model

At the 2014 commissioning workshop, I presented a summary of my efforts in converting finite element modal models into state space models:

https://dcc.ligo.org/LIGO-G1400099

I also provided the attached written summary for a report on the workshop that Aidan Brooks and Gabriele Vajente are preparing.

Attachment 1: FEM2SS_summary.docx
4   Thu May 2 14:00:36 2013 KojiGeneralConfigurationtest mass TR with Levin's approach

Thermo-refractive noise in a finite cylindrical/infinite test mass with Levin's approach

Location of the codes: 40m SVN repository
comsol/thermo-refractive/

This code realizes Levin's calculation on thermo-refractive noise
doi:10.1016/j.physleta.2007.11.007
and duplicates the result of D. Heinert's paper
DOI: 10.1103/PhysRevD.84.062001
Also the result is compared with Braginsky's result in 2004.
doi:10.1016/S0375-9601(03)00473-0

- The code applies gaussian-shaped heat into a cylindrical mirror.

- The heating/cooling is sinusoidal and the dissipation (heat flow) is calculated in COMSOL.

- The time-series result was analyzed in MATLAB to extract the single coefficient that corresponds to the transfer function.
This way the effect of the initial transient was avoided.
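The coefficient extraction from the tail of the time series can be sketched as follows (a minimal Python stand-in for the MATLAB post-processing; the trapezoidal projection over the last few cycles is an assumption about the exact method used):

```python
import math

def fourier_coeff(ts, xs, f0, n_cycles=3):
    """Complex Fourier coefficient of x(t) at drive frequency f0,
    projected over the last n_cycles cycles (trapezoidal rule).
    For x = A*cos(2*pi*f0*t + phi) this returns ~A*exp(1j*phi),
    so the initial transient at the start of the record is ignored."""
    span = n_cycles / f0
    t_end = ts[-1]
    sel = [(t, x) for t, x in zip(ts, xs) if t >= t_end - span]
    acc = 0.0 + 0.0j
    for (t1, x1), (t2, x2) in zip(sel[:-1], sel[1:]):
        f1 = x1 * complex(math.cos(2*math.pi*f0*t1), -math.sin(2*math.pi*f0*t1))
        f2 = x2 * complex(math.cos(2*math.pi*f0*t2), -math.sin(2*math.pi*f0*t2))
        acc += 0.5 * (f1 + f2) * (t2 - t1)
    return 2.0 * acc / span

# Synthetic check: a 3-unit cosine at 5 Hz sampled for 2 s at 1 kHz
f0 = 5.0
ts = [i / 1000.0 for i in range(2001)]
xs = [3.0 * math.cos(2 * math.pi * f0 * t + 0.5) for t in ts]
c = fourier_coeff(ts, xs, f0)   # |c| ~ 3, arg(c) ~ 0.5
```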

- Unfortunately, direct measurement of the frequency response in COMSOL was not available, as the heat flow is not modal.
If we make a Fourier analysis of the partial differential equation and solve it in COMSOL using the arbitrary-PDE solver,
we may turn this time-dependent analysis into a static analysis.

All of the calculation was driven from MATLAB. So you have to launch "COMSOL with MATLAB".

Attachment 1: thermo_refractive_1D_axisym_result.pdf 125   Mon Jan 22 21:13:25 2018 aaronMechanicsPonderSqueezetips from Shoaib

I talked with Shoaib about some changes I could make to the FEA model to improve convergence and reduce memory usage. Summary:

• use a hex mesh rather than tetrahedral
• Use more structured meshes. In particular, I can use an angled swept mesh in the tapered portions rather than a free mesh in these regions, defining the mesh only on one boundary
• Use a nonconformal mesh, so adjacent domains do not need to have matching meshes. This could allow transitions to coarser meshes as the fiber becomes thicker or contacts the horn
• try using curved elements so the curvature isn't just approximated by the number of regular mesh elements
• Might try scaling the model uniformly by some factor (100, 1000), which could avoid machine precision issues

I'll get these implemented this week and see if the computation goes through.

126   Mon Jan 29 23:02:13 2018 aaronMechanicsPonderSqueezetips from Shoaib

I started implementing some of these changes:

• Started the mesh with a boundary free quad mesh on the interface between the upper tapers and the main part of the fiber. I used the following size settings:
• Maximum element size is fiber_taper_length, which I felt was a good characteristic maximum because it wouldn't make sense to have radial elements larger than the length of the taper itself. This setting does not limit the mesh (elements are much smaller than the maximum)
• Minimum element size: skin_depth/10, where I use the thickness of the surface layer as the characteristic length, and allow a mesh with elements substantially smaller than this characteristic length.
• Maximum element growth rate is set to 8, which from some trial and error doesn't strongly affect the mesh as long as it isn't brought much lower than this (so the mesh is allowed to get coarse relatively quickly, though other settings limit this)
• Curvature factor is 0.7, which does affect the mesh strongly; for now this value seems to give a 'pretty' (symmetric to the eye) mesh.
• The resolution of narrow regions is set to 5, where this value again strongly affects the mesh and this value was chosen because it looks 'pretty'
• Note that in contrast to the previous way I had defined the mesh, I now am defining the surface and bulk of the mesh in the same step, which seems to help these two regions match up. Shoaib had mentioned using a nonconformal mesh, which I may end up implementing in a later update.
• Instead of doing a free mesh in each non-cylindrical region, I continued with a swept mesh along the length of the fiber (still sweeping in sections, though I'm not sure this is necessary)
• I also slightly increased the number of mesh elements along the length of the fibers (ie, used more elements in each section of the mesh sweeping).
• In this model, I still have a free tet mesh on the horn and test mass, since those seemed a bit more involved to move over to a hex mesh (there is no free hex mesh in COMSOL that I could see right away, and I'm not sure that the horn can be meshed with a swept mesh due to its irregularity. I'll look in to this further, but I had no immediate solution for making a hex mesh in the horn) and anyway they are much coarser than the fibers so I don't expect them to cause much trouble.

These changes were fine with a 10 um mesh, but ended up with much too fine a mesh (~80,000 boundary mesh elements in the initial mesh between the taper and the main fiber) due to meshing the surface and bulk in a single stage in the first step. I separated them out to get a coarser mesh in the bulk, and also made the resolution a bit coarser.

114   Mon Jul 31 22:18:57 2017 ranaGeneralGeneralusing more than 12 cores in matlab

Since 2014, the limit of 12 workers using the matlab parallel computing toolbox has been lifted. Today, I was able to get this to work. There's a trick.

Usually, when you start up matlab and run a parallel thing like 'parfor', it just uses a default profile 'local' which limits you to 12 workers. You can try to ask for more by doing 'parpool(40)' for 40 workers, but it will tell you that NumWorkers = 12 and you're out of luck. So instead:

```
myCluster = parcluster('local');
myCluster.NumWorkers = 40;
saveProfile(myCluster);
parpool(myCluster, 40)
```

It seems that both the profile's maximum number of workers and the requested number must be set to 40 to actually get 40; otherwise you'll just get 12 (as of Matlab 2016a).

Attachment 1: Screen_Shot_2017-07-31_at_10.11.35_PM.png ELOG V3.1.3-