Accuracy is a measure of how close a value is to the true value. It helps me to remember this if I associate it with the word correctness.
Precision is a measure of the degree to which a set of many values agree with each other. I often think of this as scatter.
The two are not the same. Just because a bunch of values are tightly clumped together does NOT mean that they must be clumped around the correct value. It is usually very hard to detect systematic bias in a data set, since one rarely knows the true value.
If you know the true value, then you can use
     true error    =  (truth) - (your value)

                       (truth) - (your value)
     true % error  =  ------------------------  x 100
                              truth
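As a quick sketch of these two formulas in code (the numbers here are made-up illustrations, not real measurements):

```python
# True error and true percent error, when the true value is known.
truth = 9.81       # hypothetical "true" value
estimate = 9.78    # hypothetical measured value

true_error = truth - estimate
true_percent_error = (truth - estimate) / truth * 100

print(true_error)           # about 0.03
print(true_percent_error)   # about 0.31 percent
```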
Of course, in real life, you almost never know the true value. Sometimes, though, you may be able to create a sequence of better and better approximations -- either by using an iterative method, or by using better information. In these situations, you may define
     approx error    =  (current estimate) - (previous estimate)

                         (current estimate) - (previous estimate)
     approx % error  =  ------------------------------------------  x 100
                                   current estimate
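A minimal sketch of the approximate error in action, using Newton's method to estimate sqrt(2) as an example of an iterative method (the starting guess is arbitrary):

```python
# Successive estimates of sqrt(2) from Newton's method,
# with the approximate percent error at each step.
x = 1.0  # arbitrary initial guess
for step in range(5):
    x_new = 0.5 * (x + 2.0 / x)   # Newton update for f(x) = x^2 - 2
    approx_percent_error = abs((x_new - x) / x_new) * 100
    print(step, x_new, approx_percent_error)
    x = x_new
```

Watching the approximate percent error shrink at each step tells you the estimates are converging -- but, as noted above, not necessarily that they are converging to the truth.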
But, just because your approximate error decreases (that is, the estimates converge) does not mean that you are getting any closer to the truth.
A common rule of thumb relates the approximate percent error to the number of significant figures you can trust in the current estimate:

     # sig figs  =  2 - log10( 2 * percent error )
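This rule of thumb is easy to evaluate in code (a sketch, assuming "log" here means the base-10 logarithm, as the subscript indicates):

```python
import math

def sig_figs(percent_error):
    """Rule-of-thumb number of significant figures for a given percent error."""
    return 2 - math.log10(2 * abs(percent_error))

# A percent error of 0.05% corresponds to roughly 3 significant figures.
print(sig_figs(0.05))
```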
Even if you do succeed in calculating a value accurately, and with great precision, it will still do no good unless it is expressed in the proper units. It's surprising how frequently people forget to include the proper units -- or, in fact, any units at all. One sad example of the importance of units is given by the story of the Mars Climate Orbiter.
I found this book to be full of entertaining stories of accidents and disasters:
Copyright © Michael Richmond. This work is licensed under a Creative Commons License.