What I found about int and float in Python
Python's int data type can store whole numbers of arbitrary size. int values are automatically promoted to a longer internal representation when they grow too large for the underlying hardware integer, so they never overflow.
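A quick sketch to see this in action (2 ** 100 is just an illustrative value):
# ints never overflow, no matter how many digits they need
big = 2 ** 100
print(big)        # 1267650600228229401496703205376 (31 digits, all exact)
print(big * big)  # squaring it still gives an exact int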
floats are different. They have a finite amount of precision and cannot represent most decimal numbers exactly. Unlike int, the float data type cannot represent numbers of arbitrary length; it is based on the underlying floating-point representation used by the system.
The exact precision depends on the system, but in practice Python's float is a 64-bit IEEE 754 double, which gives about 15 to 17 significant decimal digits.
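The classic demonstration (0.1 and 0.2 are just the usual example values; any decimal fraction that isn't an exact sum of powers of two behaves the same way):
# 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False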
For example, a literal with more digits than a float can hold gets rounded to the nearest representable value, and printing it shows at most 17 significant digits:
x = 123456789012345.6789
print(x)
# output: 123456789012345.67
If you need to work with more precision than float offers, you can use the decimal module, which provides the Decimal data type. Decimal lets you work with decimal numbers at a user-chosen precision (28 significant digits by default, adjustable through the context), but it comes with a trade-off in terms of performance compared to the built-in float data type.
Here's an example of using the Decimal data type:
from decimal import Decimal

# build the values from strings so no float rounding sneaks in
x = Decimal('1.23456789123456789')
y = Decimal('2.34567891234567890')
result = x * y  # computed at the context precision (28 digits by default)
print(result)
In this example, x and y keep every digit written in the strings, and the multiplication x * y is rounded to the context precision, so the printed result carries 28 significant digits:
# output: 2.895899868327999615622620750
When you do the same multiplication with plain floats, you get 2.8958998683279997 instead; everything past the 17th significant digit is already gone.
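If 28 digits isn't enough, the context precision can be raised (a minimal sketch; 50 is an arbitrary choice here):
from decimal import Decimal, getcontext

getcontext().prec = 50  # carry 50 significant digits from now on
x = Decimal('1.23456789123456789')
y = Decimal('2.34567891234567890')
print(x * y)  # two 18-digit operands need at most 36 digits, so this product is now exact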
Sometimes a float result comes out in scientific notation, like 2.6525285981219103e+32 for the factorial of 30. The e+32 at the end means the result equals 2.6525285981219103 × 10^32. You can think of the +32 as a marker that shows where the decimal point belongs: it must move 32 places to the right to get the actual value. However, there are only 16 digits to the right of the decimal point, so the last 16 digits of the true value have been "lost" (they come back as zeros).
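To see the loss side by side, you can compute 30! exactly as an int and then convert it to a float (a small sketch; the float's trailing digits may differ slightly from the value quoted above depending on how it was computed):
import math

exact = math.factorial(30)  # an int, so every digit is kept
print(exact)         # 265252859812191058636308480000000
print(float(exact))  # roughly 2.65252859812191e+32 -- only ~17 digits survive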