From the Cobra language comes the idea of using Decimal numbers as the default. It's 2008, so let's use decimals by default!

This is an error that Python has: add 0.1 together ten times, and you do not get 1.0.

>>> 1.0 == 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1

False
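A minimal sketch of what's going on, assuming standard IEEE 754 doubles (what CPython uses for float): 0.1 has no exact binary representation, so rounding errors accumulate and the sum falls just short of 1.0.

```python
# Each float 0.1 is really a binary approximation of 0.1, so the
# additions round in a way that leaves the sum just below 1.0.
total = sum([0.1] * 10)
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```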

Maybe py3k should use Decimals by default instead? Or is this planned already?

Python float performance is really bad anyway, so we might as well make it a little more accurate, right?

Floats are such a common cause of errors in code, it'd be nice if math were more accurate by default :) It takes a fair bit of knowledge to understand all the quirks of floats, but less to understand the quirks of decimals.
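As a sketch of the difference, the same sum done with the standard library's decimal module comes out exact:

```python
from decimal import Decimal

# Decimal("0.1") stores the decimal digits exactly, so repeated
# addition accumulates no rounding error.
total = sum([Decimal("0.1")] * 10)
print(total)       # 1.0
print(total == 1)  # True
```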

cu.

## Monday, March 03, 2008


## 5 comments:

I wonder if this has been debated on comp.lang.python? There have been some people reading c.l.p who know a lot more than I do about floating-point representation, and I can't help but suspect that using decimals as the default numeric type has its own drawbacks.

How is the precision of decimals set if one types in

1.000000000000000000001 + 1.000000000000000000001

or 1.<60 zeroes>1 + 1.<60 zeroes>1?

Defaulting to a decimal representation might not only affect computation speed.

- Paddy.

Here's another weird one for you...

>>> from decimal import Decimal

>>> 1.0 == Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1")

False

>>> 1 == Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1") + Decimal("0.1")

True

There are lots more issues with floats, like comparing one float with another. Using equality with floats is a problem because of the lack of precision, so using == with floats is pretty much a bug in 99% of cases. Maybe Python should even raise an error in this case. Mostly you should compare floats to be within, say, 0.0000000001 of each other (commonly called a small epsilon).
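A sketch of that epsilon-style comparison (math.isclose, added in Python 3.5, does essentially the same thing with a relative tolerance):

```python
import math

a = sum([0.1] * 10)          # 0.9999999999999999, not exactly 1.0
print(a == 1.0)              # False: exact equality is too strict
print(abs(a - 1.0) < 1e-9)   # True: manual epsilon comparison
print(math.isclose(a, 1.0))  # True: relative tolerance (default 1e-9)
```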

"Python float performance is really bad anyway, so we might as well make it a little more accurate right?"

Well, yeah, but Decimal performance is more than 100 times worse. So... no thanks.

Maybe you can reimplement it in C first? If we could get something that is only, say, 3-5 times as slow, that would be acceptable as a default for me, as I rarely use floats.

The problems with floats are, by the way, common to all languages. It's often impossible to exactly represent a decimal number as a binary number, just as it's impossible to represent some rational fractions, such as 1/3, as decimal numbers. (Hence, Python 3 will have a module for rational fractions.)
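That module became fractions: a Fraction stores a numerator and denominator exactly, so 1/3 round-trips without error. A quick sketch:

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third)           # 1/3, stored exactly as a ratio of integers
print(third * 3 == 1)  # True: no rounding error to accumulate
```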

You appear to make the common mistake of confusing precision and accuracy. All number systems will introduce inaccuracies, and this has been discussed ad nauseam on c.l.py. One advantage the decimal module has is the ability to use a precision determined by the context, but if someone is thrown by your example they aren't going to have the sophistication to use Decimal contexts.
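A sketch of such a context, for the curious: localcontext scopes a precision change to a single block, so the rest of the program keeps the default 28 digits.

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 4                    # only applies inside this block
    print(Decimal(1) / Decimal(3))  # 0.3333

print(Decimal(1) / Decimal(3))      # back to the default 28 digits
```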

Another example:

>>> from decimal import Decimal

>>> print 1/Decimal(3)*3 == 1.0

False
