Difference between float and decimal

Let's illustrate the difference between float and decimal.Decimal with an example that involves decimal arithmetic and rounding.

from decimal import Decimal

# Example 1: Using float (binary floating-point)
x = 0.1 + 0.1 + 0.1
y = 0.3

print(x == y)  # Output: False
print(x)       # Output: 0.30000000000000004
print(y)       # Output: 0.3

# Example 2: Using Decimal (decimal floating-point)
a = Decimal('0.1') + Decimal('0.1') + Decimal('0.1')
b = Decimal('0.3')

print(a == b)  # Output: True
print(a)       # Output: 0.3
print(b)       # Output: 0.3

In the examples above, we perform the same calculation on the decimal value 0.1 using both the float and decimal.Decimal data types.

In Example 1, where float (binary floating-point) is used, we add 0.1 three times and compare the result with 0.3. Because 0.1 has no exact binary representation, each float stores only a close approximation, and the accumulated error makes the equality comparison fail: the printed value of x shows a small rounding error, while y displays as 0.3.
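
A quick way to see the approximation is to print these values with more digits than Python normally shows. The snippet below is a minimal illustration using f-string formatting; the digits are what double-precision floats actually store.

# Printing with extra digits exposes the binary approximation behind each float.
print(f"{0.1:.20f}")              # Output: 0.10000000000000000555
print(f"{0.1 + 0.1 + 0.1:.20f}")  # Output: 0.30000000000000004441
print(f"{0.3:.20f}")              # Output: 0.29999999999999998890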

In Example 2, we use decimal.Decimal (decimal floating-point) for the calculations. By initializing the decimal values with strings, we ensure the exact decimal representation of 0.1. As a result, the addition and comparison operations produce the expected results, and both a and b are equal to 0.3 without any rounding errors.
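
The string constructor is what keeps the value exact. As a small sketch of the contrast (not part of the example above), constructing a Decimal directly from the float 0.1 carries the binary approximation along:

from decimal import Decimal

# Built from a string: exactly one tenth.
print(Decimal('0.1'))  # Output: 0.1

# Built from a float: inherits the binary approximation stored in the float.
print(Decimal(0.1))    # Output: 0.1000000000000000055511151231257827021181583404541015625

print(Decimal('0.1') == Decimal(0.1))  # Output: False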

The main difference between float and decimal.Decimal lies in their internal representation. float uses binary (base-2) floating-point, which cannot represent most decimal fractions exactly and therefore introduces rounding errors in decimal arithmetic. decimal.Decimal uses decimal (base-10) floating-point with configurable precision (28 significant digits by default) and explicit control over rounding, making it suitable for calculations that must be exact in decimal terms, such as currency arithmetic.
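
As a small sketch of that control (using the standard library's context precision and quantize; the values here are only illustrative):

from decimal import Decimal, getcontext, ROUND_HALF_UP

# Precision (significant digits) is configurable; the default is 28.
getcontext().prec = 6
print(Decimal(1) / Decimal(7))  # Output: 0.142857

# quantize rounds to a fixed number of decimal places with an explicit rule.
price = Decimal('2.675')
print(price.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # Output: 2.68

# The equivalent float rounds the other way, because 2.675 is stored slightly below 2.675.
print(round(2.675, 2))  # Output: 2.67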