
Error increases with mesh density for trivial 1D model


Having had some problems, I reduced them down to a surprisingly minimal example:

• 1D, single interval, Laplace equation.
• Dirichlet BC on left end: fix it at "1.0".
• (There is a zero-flux BC on the right end by default.)
• Mesh has 500,000 equally-sized elements.
• Everything else is at default values.

The result comes back with a massive error (see the plot, attached) and a relative error indicator (LinErr) of 0.0031 (MUMPS), 0.0047 (PARDISO), 0.0032 (SPOOLES).

The error is solver-dependent for the three direct solvers, which should, in theory, all produce the same answer. But they do not, as can be seen from the residual plot. All answers are off by at least 0.0003 at the free end, so we only get three significant figures of accuracy out of this trivial simulation.

Of course it's clear that this has something to do with numerical errors accumulating due to the large number of elements. However, 1 million DOFs doesn't sound like a big job for Comsol.

Typically we expect the error to decrease as we refine the mesh further and further. But with this example, the error keeps increasing:

Elements    DOFs        LinErr
500         1,001       4e-8
5,000       10,001      2e-6
50,000      100,001     4e-4
500,000     1,000,001   5e-3

So instead of converging to the right answer with mesh refinement, it diverges!

I was under the impression that these sorts of effects (when the mesh becomes too fine) wouldn't come into play until some very large number of elements that we'd never need to worry about. This appears to be wrong, and it is a nuisance, because it works against error estimation by mesh refinement. I don't really understand the nature of these errors, actually, beyond the imprecise "accumulation of round-off and discretisation errors". It would be good to get a feel for how fine is too fine, for different types of problems and geometries. Any ideas?


3 Replies Last Post 12.04.2016, 08:00 GMT-4
Henrik Sönnerlind COMSOL Employee


Posted: 9 years ago 30.03.2016, 16:22 GMT-4
Hi,

The condition number for the stiffness matrix for this problem can be shown to be [math]Cn^2[/math], where C is a constant of order 1 and n is the number of degrees of freedom. So with a million DOFs, you can expect a rather ill-conditioned problem. The calculations are done with approximately 16 significant digits.
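As a quick check that does not need COMSOL, the same growth can be reproduced by assembling the 1D stiffness matrix directly in MATLAB. This is only a sketch with linear elements (the DOF counts above suggest quadratic shape functions in the actual model, so the constant differs, but the n^2 growth is the same):

% 1D Laplace stiffness matrix for n linear elements on [0,1]:
% left node eliminated (Dirichlet), right end left free (zero flux).
for n = [50 500 5000 50000 500000]
    h = 1/n;
    e = ones(n,1);
    K = spdiags([-e 2*e -e], -1:1, n, n) / h;
    K(n,n) = 1/h;                                % zero-flux end
    fprintf('n = %7d   condest(K) = %9.2e\n', n, condest(K));
end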

You are right in that a million DOFs is not extremely large per se, but it is uncommon in 1D!

A 3D model with the same number of DOFs would be a 100x100x100 cube, and then the condition number is of the order [math]100^2[/math] instead. To reach the same level of ill-conditioning on a cube, you would need [math]10^{18}[/math] DOFs or so.

This is not to say that ill-conditioning is never a problem in practice: most real structures are not cubes, and mesh size or material data can vary widely. Attaching rubber to steel in a structural analysis can kill five of the available significant digits.

Your example is, however, not usable for checking mesh convergence, since there is no discretization error: the solution can be represented exactly by a single element, so the only thing the test captures is numerical round-off. You need a source term that gives a more-than-parabolic variation of the true solution to do a mesh convergence analysis.
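To see what such a convergence test looks like, here is a sketch along those lines in plain MATLAB (not a COMSOL model): linear elements, a manufactured exact solution u(x) = 1 + sin(pi*x/2) that satisfies u(0) = 1 and zero flux at x = 1, and a simple trapezoidal-rule load. The max nodal error should drop as O(h^2) until round-off and conditioning take over, which is exactly the effect discussed in this thread.

uex = @(x) 1 + sin(pi*x/2);              % manufactured exact solution
f   = @(x) (pi/2)^2 * sin(pi*x/2);       % source term: -u'' = f
for n = [50 500 5000 50000 500000]
    h = 1/n;
    x = (1:n)'/n;                        % free nodes; the node at x = 0 is Dirichlet
    e = ones(n,1);
    K = spdiags([-e 2*e -e], -1:1, n, n) / h;
    K(n,n) = 1/h;                        % zero-flux end at x = 1
    F = h * f(x);  F(n) = F(n)/2;        % trapezoidal-rule load vector
    F(1) = F(1) + 1/h;                   % Dirichlet value u(0) = 1 moved to the RHS
    u = K \ F;
    fprintf('n = %7d   max nodal error = %8.1e\n', n, max(abs(u - uex(x))));
end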

Regards,
Henrik


Posted: 9 years ago 30.03.2016, 18:10 GMT-4
Let me take a stab at clarifying....

Laplace's equation is a 2nd-order equation, so if the resolution is sufficiently finer than the characteristic length over which the derivatives change, then the error will be dominated by numerical precision. So for a sufficiently fine mesh the precision would need to be increased.

I'm sure Henrik will correct me if this is off-base.



Posted: 9 years ago 12.04.2016, 08:00 GMT-4
Hi, thank you Henrik and Daniel,

The condition number being a function of the level of refinement was indeed the key to my problem.

However, the cube example is at the opposite end of the spectrum, i.e. one of the safest possible models as far as the condition number is concerned. Yes, 1D problems are not typical, but slender 2D problems are common enough that one needs to be mindful of these issues (well, I'm working on one right now).

After a bit of experimentation in 2D with NxM grids, it seems that the longest side (in terms of the number of elements, i.e. max(N, M)) determines the condition number in exactly the same way as running a 1D simulation with max(N, M) elements.
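As a stand-in check that does not need COMSOL, one can assemble a 5-point Laplacian for a thin N-by-a-few grid (Dirichlet on one short edge, zero flux elsewhere) and watch condest grow with the long direction just as in the 1D case. The constant differs from the FE matrix, and this sketch assumes the Dirichlet side is a short edge, as in a slender strip:

% 1D building blocks: Dirichlet at the left end / zero flux at both ends
lap_dir = @(n) spdiags(ones(n,1)*[-1 2 -1], -1:1, n, n) + sparse(n, n, -1, n, n);
lap_neu = @(n) spdiags(ones(n,1)*[-1 2 -1], -1:1, n, n) + sparse([1 n], [1 n], [-1 -1], n, n);
nAcross = 5;                                   % a handful of nodes across the strip
for N = [500 5000 50000]                       % unknowns along the strip
    K = kron(lap_dir(N), speye(nAcross)) + kron(speye(N), lap_neu(nAcross));
    fprintf('N = %6d   condest = %8.1e\n', N, condest(K));
end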

The next question that arises is: what is the effect of having some coarse areas and some densely meshed areas? To explore this, I built a simulation with a few horizontal layers of thicknesses 1, 2, 4, 8, ..., 2^n, ..., 2048, meshed the edges between them with 40960/thickness vertices, meshed the domains with the Delaunay triangular mesher, and applied a Dirichlet BC (= 1) on the left boundary of the model. (The model is attached, but note it does take a long time to mesh.) The results are:

                    Error   LinErr   DOFs     Solve time   Condest
Whole model         1e-8    3e-8     1.9e6    1172 s       2e7
Only the last row   1e-5    4e-5     1.0e6    56 s         3e10

(The "Error" above can be found because we know the exact solution. LinErr is a prediction of this, which is available when we don't know the solution, and mostly seems to be in the right ballpark.)

So, solving just the last row separately, which may seem like a "smaller" model in some sense, has a worse fundamental bound on the accuracy achievable.

The idea is that a condest exponent of 7 tells us we have lost about 7 significant figures of precision, and indeed the error is in the 15 - 7 = 8th decimal place. Similarly with 15 - 10 = 5.

What seems to matter, therefore, is not how many elements you have along some dimension, but the maximum, over all pairs of elements, of the length of the shortest path between them (where elements sharing an edge are considered adjacent).

I estimate that the maximum shortest path in the full model is about 130, while in the last row alone it is about 20480. Now, to check this guess, I ran the original 1D model and estimated its condition number for various mesh densities:

m = mphload('1D_Laplace1MDOF');                   % load the model (LiveLink for MATLAB)
ms = mphmatrix(m, 'sol1', 'Out', {'Kc', 'Lc'});   % eliminated stiffness matrix and load vector
condest(ms.Kc)                                    % 1-norm condition number estimate

Elements    Condest
500,000     3e12
50,000      3e10
5,000       3e8
500         3e6
50          3e4

So exactly the Cn^2 pattern as predicted.

So what does the "maximum shortest path" heuristic predict? For 20480 it predicts ~2e10, which is close to the 3e10 above, and for 130 it predicts ~1e5 rather than 2e7 (probably the element size growth factor plays a role here). The heuristic is still somewhat useful, because it allows a visual assessment, unlike the opaque number that is "condest". So: eventually it is a good idea to run "condest", but while designing the mesh we can make quick estimates using this heuristic.
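For reference, one way to estimate such a path length without eyeballing it is a double BFS sweep over the element adjacency graph. This is only a sketch: it assumes a connected mesh whose triangle connectivity has already been exported into an n-by-3 array tri (the export step is not shown), and the double sweep gives a lower bound on the true maximum rather than the exact value:

% Build the element adjacency graph: two triangles are adjacent if they share an edge.
edges = sort([tri(:,[1 2]); tri(:,[2 3]); tri(:,[3 1])], 2);   % all element edges
owner = repmat((1:size(tri,1))', 3, 1);                        % owning element of each edge
[edges, ord] = sortrows(edges);  owner = owner(ord);
shared = all(edges(1:end-1,:) == edges(2:end,:), 2);           % interior edges appear twice
A = sparse(owner([shared; false]), owner([false; shared]), 1, size(tri,1), size(tri,1));
G = graph(spones(A + A.'));
% Double sweep: farthest element from an arbitrary start, then farthest from that one.
d1 = distances(G, 1);   [~, far] = max(d1);
pathEst = max(distances(G, far));
fprintf('estimated max shortest path between elements: %d\n', pathEst);
% The heuristic then predicts condest ~ C*pathEst^2, with C calibrated on a small model.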

(By the way, ms.Kc \ ms.Lc solves quickly in Matlab, but with a similar kind of error to what we get from the solvers used by Comsol. Not surprising, as there is no trick against a bad condition number except improving precision or posing the problem differently.)

So, items to add to the strategy might be:

• Over all pairs of elements, what is the maximum length of the shortest path between them? For every decimal digit in this number we lose roughly two of the 15 available digits of precision (all else kept perfect; if there is a mismatch of materials, for example, we will lose more). This rule of thumb is for use while building the model.

• Eventually, to get a more accurate estimate, assemble the matrix for the coarsest mesh and estimate its condition number using "condest" in Matlab.

• For every 3 regular refinements in 2D, we'll lose about 2 decimal digits of precision on top of the above. (To see this, think about what happens to the shortest path.)

• We shouldn't expect better than sqrt(machine epsilon) precision from any simulation anyway, so use 1e-8 as the base "precision floor".

• The above estimates might raise the precision floor for a particular problem. For example, for my 1D simulation with 50,000 elements, we'll be left with 15-10 = 5 digits of precision. (The arithmetic is sketched in a few lines at the end of this post.)

• When doing mesh refinement, if we observe relative error of about the precision floor (possibly raised by the condition number effects), then it may be considered converged to the accuracy achievable for this problem with double precision arithmetic. And in this case it doesn't make sense to use the observed quantities to tell whether they are in the asymptotic region, or to use Richardson extrapolation in order to get a higher-order-accurate value. (E.g. in my 1D example above, if we tried to do that, we would get an even worse answer than all of those that we already have!)

Why so? Because with regular refinement the discretisation error decreases, in the asymptotic region, cubically for quadratic shape functions (if I remember correctly), while the condition number grows by a factor of 4 per refinement (in 2D). So we lose log10(4) ~= 0.6 digits of precision per refinement. (Imagine we simply erase these digits and have to draw conclusions from the shorter numbers.) In the asymptotic region the quantity we wish to estimate has the form True_solution + C*h^3, and depending on how large C is for the particular quantity being observed, the effective number of available digits may or may not be enough to extrapolate meaningfully.

There is another caveat: suppose we are interested in a quantity near where the solution crosses zero (while not asymptotically converging to zero). Then the condition number still eats decimal places based on the characteristic magnitude of the solution in the vicinity; that is to say, we cannot assume that for any output quantity the error due to the condition number affects, say, all significant figures after the 7th, because it could be after the 3rd if the quantity is near a zero-crossing. This is hard to take into account in general, but should be quite straightforward when looking at concrete examples.
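Finally, to make the "precision floor" arithmetic above concrete, here it is as a throwaway sketch; the eps-times-condition-number bound is just the usual rough rule of thumb, nothing COMSOL-specific:

% Roughly: relative error floor ~ max(eps*cond, sqrt(eps)), i.e. we keep about
% 16 - log10(cond) digits and never expect better than ~1e-8 in practice.
floorFromCond = @(c) max(eps * c, sqrt(eps));
for c = [2e7 3e10 3e12]                        % the condest values quoted above
    fprintf('condest = %7.1e  ->  error floor ~ %7.1e  (~%d usable digits)\n', ...
            c, floorFromCond(c), floor(-log10(floorFromCond(c))));
end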
