Floating point inaccuracy examples


How do you explain floating point inaccuracy to fresh programmers and laymen who still think computers are infinitely wise and accurate?
Do you have a favourite example or anecdote that seems to get the idea across better than a precise but dry explanation?
How is it taught in computer science classes?

There are two major pitfalls people stumble into with floating-point numbers.

  1. The problem of scale. Each FP number has an exponent which determines the overall "scale" of the number, so it can represent either very small values or very large ones, though the number of digits it can devote to that is limited. Adding two numbers of very different scale will often result in the smaller one being "eaten," since there is no way to fit it into the larger scale.

    PS> $a = 1; $b = 0.0000000000000000000000001
    PS> Write-Host a=$a b=$b
    a=1 b=1E-25
    PS> $a + $b
    1

    As an analogy for this case you could picture a large swimming pool and a teaspoon of water. Both are of very different sizes, but individually you can easily grasp roughly how much they hold. Pouring the teaspoon into the swimming pool, however, will still leave you with roughly a swimming pool full of water.

    (If the people learning this have trouble with exponential notation, one can also use the values 1 and 100000000000000000000 or so.)
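    The same absorption effect can be shown in a few lines of Python, whose floats are the same IEEE 754 doubles (a sketch of the idea above, not part of the original PowerShell session):

    ```python
    # A double carries roughly 15-17 significant decimal digits, so a value
    # 25 orders of magnitude smaller than 1 cannot be represented at 1's scale.
    a = 1.0
    b = 1e-25
    print(a + b == a)  # True: the tiny addend is "eaten" entirely
    ```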

  2. Then there is the problem of binary vs. decimal representation. A number like 0.1 can't be represented exactly with a limited amount of binary digits. Some languages mask this, though:

    PS> "{0:N50}" -f 0.1
    0.10000000000000000000000000000000000000000000000000

    But you can "amplify" the representation error by repeatedly adding the numbers together:

    PS> $sum = 0; for ($i = 0; $i -lt 100; $i++) { $sum += 0.1 }; $sum
    9,99999999999998

    I can't think of a nice analogy to explain this, though. It's basically the same reason why you can represent 1/3 only approximately in decimal: to get the exact value you would need to repeat the 3 indefinitely at the end of the decimal fraction.

    Similarly, binary fractions are good for representing halves, quarters, eighths, etc., but things like a tenth yield an infinitely repeating stream of binary digits.
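    To make the repeating-binary-fraction point concrete, Python's decimal module can print the exact value a double actually stores for the literal 0.1, and a short loop reproduces the accumulated error from above (a sketch, not part of the original answer):

    ```python
    from decimal import Decimal

    # The exact binary value stored for the literal 0.1:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # Adding it 100 times amplifies the tiny representation error:
    total = 0.0
    for _ in range(100):
        total += 0.1
    print(total == 10.0)  # False: the sum drifted slightly below 10
    ```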

  3. Then there is another problem, though most people don't stumble into it unless they're doing huge amounts of numerical work; and in that case they already know about it. Since many floating-point numbers are merely approximations of the exact value, this means that for a given approximation f of a real number r there can be infinitely many more real numbers r1, r2, ... that map to exactly the same approximation. Those numbers lie in a certain interval: let rmin be the minimum possible value of r that results in f and rmax the maximum possible value of r for which this holds; then you have an interval [rmin, rmax], and any number in that interval can be your actual number r.

    Now, if you perform calculations on that number (adding, subtracting, multiplying, etc.) you lose precision. Since every number is just an approximation, you're effectively performing calculations with intervals. The result is an interval too, and the approximation error only ever gets larger, thereby widening the interval. You may get back a single number from the calculation, but that's merely one number from the interval of possible results, taking into account the precision of your original operands and the precision loss due to the calculation.

    That sort of thing is called interval arithmetic and, at least for me, it was part of our math course at university.
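    A minimal sketch of the interval idea in Python (the Interval class here is hypothetical, not a real library; proper interval arithmetic would also round endpoints outward with directed rounding, which is omitted for brevity):

    ```python
    class Interval:
        """Track a value as [lo, hi]; the true real number lies somewhere inside."""

        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            # Adding two intervals adds their endpoints; their widths add up too.
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def width(self):
            return self.hi - self.lo

    # The double 0.1 really stands for a small interval around 0.1.
    x = Interval(0.0999999, 0.1000001)
    y = x + x + x  # every operation widens the interval
    print(y.width() > x.width())  # True: the set of possible true results grows
    ```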

