
Multiplication and division of floats create rounding errors of less than 1 ulp each time. In most contexts you never need to worry about them.
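
You can sanity-check this claim in Python (my own sketch; it assumes IEEE 754 doubles and Python 3.9+ for math.ulp):

    import math
    from fractions import Fraction

    a, b = 0.1, 0.3
    p = a * b  # IEEE multiply: correctly rounded

    exact = Fraction(a) * Fraction(b)   # exact product of the two doubles
    err = abs(Fraction(p) - exact)      # rounding error of the multiply
    print(err <= Fraction(math.ulp(p)) / 2)  # True: within half an ulp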

The operations you need to watch out for are addition and subtraction, in cases where your result has much smaller magnitude than your inputs, causing loss of significance (catastrophic cancellation). Sometimes great care must be taken in implementing numerical algorithms to avoid this. But this is an inherent problem in numerical computing, not the fault of the floating point format per se.
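
A minimal sketch of the classic example, the quadratic formula (the numbers are just illustrative):

    import math

    # x^2 + 1e8*x + 1 = 0 has roots near -1e8 and -1e-8.
    a, b, c = 1.0, 1e8, 1.0

    # Naive formula: -b + sqrt(b^2 - 4ac) subtracts two nearly equal
    # numbers, wiping out most of the significant digits.
    naive = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

    # Multiplying through by the conjugate turns the subtraction into
    # an addition, which loses nothing.
    stable = (2 * c) / (-b - math.sqrt(b * b - 4 * a * c))

    print(naive)   # about -7.45e-09: barely one correct digit
    print(stable)  # about -1e-08: accurate to nearly full precision

Note that every individual operation in the naive version is correctly rounded; the damage comes purely from the cancellation in the final subtraction.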



When does this problem crop up when only dealing with pure ints?


Doing integer/rational arithmetic gives you a choice. Either you never round, and require exponentially growing precision that makes even the simplest algorithms impractically expensive (not to mention giving up entirely on the many common computations, like square roots, that cannot be represented at all in an exact rational system); or you allow rounding/approximation of some kind, and end up with roughly the same problems floats have.
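
To see the blow-up concretely, here's a sketch using Python's fractions.Fraction (the logistic map is just a stand-in for any nonlinear iteration):

    from fractions import Fraction

    # Iterate the logistic map x -> r*x*(1-x) with exact rationals.
    # Each step roughly squares the denominator, so its size in bits
    # roughly doubles per iteration.
    x = Fraction(1, 3)
    r = Fraction(7, 2)
    for _ in range(20):
        x = r * x * (1 - x)

    # After 20 steps the denominator is millions of bits long; the
    # same loop with floats stays at 64 bits forever.
    print(x.denominator.bit_length())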


While division is a problem, you could represent numbers such as 1/3 exactly by storing a numerator and a denominator as a pair of integers.
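
Python's standard library already does exactly this; a quick illustration:

    from fractions import Fraction

    one_third = Fraction(1, 3)   # stored as numerator=1, denominator=3
    print(one_third * 3 == 1)    # True: no rounding anywhere

    # The catch, per the parent comment: irrational results (sqrt(2),
    # pi, sin(x), ...) have no finite numerator/denominator pair, and
    # the pair itself can grow without bound under repeated arithmetic.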



