First Rule of Computation: Don't Trust Your Calculator

Somewhere along the line, we all come to trust the numbers that come out of computers. After all, an infallible device told us they were true. So, let us start with a simple example. On a ten-digit calculator, enter the following:

1E12 + 1 - 1E12

Unless the + 1 is the last operation you enter, your calculator spits out 0, despite the fact that the answer is very clearly 1. Using this example, and others, I try to get my students to resist the urge to enter numbers into their calculators until the very end of a problem, which usually lets them avoid such errors. Despite my own advice, this bit me twice in two days, leading to a corollary to the first rule:

Just because you have a bigger, more sophisticated calculator, does not mean you can trust it!
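
The same absorption happens on a much bigger calculator. Here is a minimal Mathematica sketch (not from the original article): with exact integers every digit is kept, but at machine precision the roughly 16 significant digits of a 64-bit double play the role of the calculator's ten.

1*^12 + 1 - 1*^12      (* exact integers: Mathematica keeps every digit and returns 1 *)
1.*^16 + 1 - 1.*^16    (* machine-precision reals: the added 1 is absorbed and the result is 0. *)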

I use Mathematica daily, and, in some sense, it is my extra-large calculator. The first problem I had was the result of this innocuous-looking bit of code:

n[x] f[x]

The problem comes in as x goes to 0: n goes to infinity. However, f does not, and neither does n times f. In other words, f goes to zero faster than n goes to infinity, so that when they are multiplied together the resulting limit is finite. But I had defined both n and f as functions, so they are evaluated separately. Therefore, when I set x to 0, the whole thing blew up in my face, as Mathematica only saw that I had infinity times something finite. I fixed the problem by replacing the function calls with the functions themselves, at which point Mathematica got the hint. Thinking I was done, I ran the code again, and it blew up again.
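
For illustration, here is a stripped-down sketch with hypothetical stand-ins for n and f (the actual definitions are not given in the article): n diverges at 0, while f vanishes fast enough that the product stays tame.

n[x_] := 1/x;    (* hypothetical stand-in: diverges as x -> 0 *)
f[x_] := x^2;    (* hypothetical stand-in: vanishes faster than n diverges *)

n[0] f[0]                    (* evaluated separately: ComplexInfinity times 0, so Indeterminate *)
Limit[n[x] f[x], x -> 0]     (* the product itself has a perfectly finite limit: 0 *)

Letting the product simplify symbolically before x is set to 0 is the "replace the calls with the functions themselves" fix.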

This time the trouble came about because of the following term in the denominator:

e[k - q]^2 - 2 e[k - q] e[k - q - qp] + e[k - q - qp]^2 - w[q]^2

Mathematica was saying that at k = q = qp this term was going to zero. It should not be zero: the first three terms cancel each other out, since together they equal

(e[k - q] - e[k - q - qp])^2

and w[q]^2 is not zero. However, each of the first three terms is individually about 20 orders of magnitude larger than w[q]^2, so Mathematica was just blithely ignoring it. The solution: make sure Mathematica knows that the first three terms cancel by using the following instead:

(e[k - q] - e[k - q - qp])^2 - w[q]^2

It should be noted that increasing the precision of the calculation, either by using N[<expression>, 100] or by increasing $MinPrecision to over 100, does not help. Only the modified form actually gives a non-zero answer.
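
A toy version of the same cancellation, with made-up magnitudes standing in for the e's and w (the real dispersion relations are not in the article): two huge, nearly equal numbers and a small term that should survive their cancellation.

a = 1.*^10; b = a + 1.; w2 = 1.*^-2;   (* hypothetical values, chosen only to show the effect *)

a^2 - 2 a b + b^2 - w2    (* expanded form: swamped by roundoff from the 10^20-sized terms *)
(a - b)^2 - w2            (* factored form: 1. - 0.01 = 0.99, as it should be *)

In the factored form the cancellation happens exactly, so the small term is never at the mercy of roundoff in the big ones.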

So, even with an extra-large calculator, you can really foul things up if you are not careful. There are two morals to this story: when performing floating-point calculations, order matters, and a careful up-front analysis will often help you avoid these problems.


Do you think this limitation in very small or very large number manipulation comes about from hardware/software limitations? A double-precision floating point number is represented with 64 bits. Are your numbers larger (or smaller) than what can be represented this way?
I’m noticing in MS Visual Studio that the ‘double’ type variable allows for a maximum value of 1.79769313486232E+308. But I think this falls in the same category as the ten-digit calculator example, where significant digits begin to be compromised beyond 2^53, as the IEEE spec shows.
(Of the 64 bits, one is the sign and eleven are the exponent; the 52 stored mantissa bits plus an implied leading bit give 53 bits of precision.)

But then I wonder how Mathematica can calculate, say, 2^330 and get:
2187250724783011924372502227117621365353169430893212436425770606409952999199375923223513177023053824 — All significant digits.
Maybe it’s a software trick for these straightforward cases?
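
For what it's worth, that is indeed software rather than the hardware double: Mathematica keeps exact integers in arbitrary-precision integer arithmetic, and only falls back to 64-bit machine doubles when the input is a machine-precision real. A quick sketch:

2^330                  (* exact integer input: the full 100-digit result, every digit significant *)
2.^330                 (* machine-precision real input: only about 16 significant digits survive *)
IntegerLength[2^330]   (* 100 decimal digits *)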

I just spent a week searching for a non-existent bug in some code I wrote for a class. According to the work we had done looking at the solution analytically (the “we” includes the professor for the class), we expected the output of our code to go to one as a parameter t in the equations was increased, for all values of another parameter, n. However, my code showed a distinct peak near n = 0 that I could not get rid of. Since we all knew what the answer should be, I stripped down my code looking for numerical errors. The stripping down involved rearranging terms to eliminate numerical errors, extending the range of n to get rid of truncation errors, completely rewriting it in a different form, and having another student write their own version in a different language entirely. But the result remained consistent. Despite this, the professor insisted that the code was wrong, as “if the system must go to 1 as t is increased, then the code cannot be correct.”

So, after losing my temper, I went back and looked closely at the original assumption. After very carefully examining the high-t limit, I concluded that my code was correct all along and we had all been wrong. My fellow students concurred. The moral of the story: sometimes you have to trust your calculator.

Speaking of which, in science we often bend over backwards to expose the flaws in our methodologies just so that we can be absolutely sure of the correctness of our results. If I were an experimentalist, this would mean that I would have to carefully re-examine anomalous data (re-doing the experiment if necessary) just to prove or disprove the correctness of my original result. Computation is very similar.

The first step is always to re-examine the code, ensuring its correctness in every way possible. Once the code is found to be correct, then, and only then, can we say our original view may be wrong. The second step is to show that the numerics had to be correct by analyzing the system analytically, if possible. (Usually this step is done first, but sometimes the numerics reveal something different.) This step is often very difficult and usually can only be accomplished in some limit.

That said, a thought occurred to me this morning: is it possible that my professor knew our original assumption was wrong and wanted to see if any of us would challenge it? If so, he is more devious than I gave him credit for, and I would feel like a schmuck for losing my temper. In other words, the statement “I’m right, but I can’t prove it” just doesn’t fly if you want to be a theoretician. (And, no, those weren’t my exact words, just what I was thinking.)
