How can precision errors be controlled in Python? (Python precision error, Python 2.7.9)


    Python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)] on win32
    >>> print '%.100f' % (9999.0/10000)
    0.9999000000000000110134124042815528810024261474609375000000000000000000000000000000000000000000000000

Can someone explain, from first principles, why this error arises?

See the IEEE 754 floating-point standard.

Computers generally store floating-point numbers in this format. It can exactly represent only binary fractions (sums of powers of 2, i.e. multiples of some 2^x); every other value can only be approximated as closely as possible. Floating-point arithmetic therefore carries inherent error.
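As an illustration (my example, not part of the original answer), the standard `fractions.Fraction` type can recover the exact binary fraction a float actually stores, assuming CPython's IEEE 754 double floats:

```python
from fractions import Fraction

x = 9999.0 / 10000
exact = Fraction(x)  # the exact rational value the float stores

# The stored denominator is a power of two, so 9999/10000 (whose
# denominator is not a power of two) cannot be represented exactly.
print(exact)
print(exact == Fraction(9999, 10000))  # False: only an approximation
```

The printed denominator is a power of two, which is exactly the "multiples of 2^x" restriction described above.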

The Python documentation says this about floating-point error:

On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter the decimal number 0.1 is the binary fraction

See https://docs.python.org/2/tutorial/floatingpoint.html
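To see that binary fraction concretely, the standard `float.as_integer_ratio()` method (my addition, not mentioned in the original) exposes the exact numerator and denominator the float holds:

```python
# 0.1 is stored as the nearest representable 53-bit binary fraction;
# as_integer_ratio() returns that exact stored ratio.
num, den = (0.1).as_integer_ratio()
print(num, den)                  # denominator is a power of two
print(num / float(den) == 0.1)   # True: this ratio IS the stored value
```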
You can try the Decimal module. I suspect the float type does not implement a complete arbitrary-precision algorithm underneath. In fact, I don't think this behavior is fully part of the Python language specification; it depends on the platform and implementation. You can check the size of a float on your current platform.
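A minimal sketch of the `Decimal` suggestion (my example, not the answerer's): construct the values from integers or strings, not from the float, so the decimal value does not inherit the binary error:

```python
from decimal import Decimal

# Exact decimal arithmetic: prints 0.9999 exactly.
print(Decimal(9999) / Decimal(10000))

# Constructing from the float merely captures the binary approximation.
print(Decimal(9999.0 / 10000))
```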

The Python manual says:

Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info.
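For instance (assuming a typical IEEE 754 platform):

```python
import sys

# Mantissa width, machine epsilon, and exponent range of the platform float.
print(sys.float_info.mant_dig)  # 53 on IEEE 754 double platforms
print(sys.float_info.epsilon)   # smallest x such that 1.0 + x != 1.0
print(sys.float_info.max_exp)
```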

Everything above describes the CPython implementation; in Jython the same expression behaves differently:

    Jython 2.5.3 (2.5:c56500f08d34+, Aug 13 2012, 14:48:36)
    [Java HotSpot(TM) 64-Bit Server VM (Oracle Corporation)] on java1.8.0_40
    Type "help", "copyright", "credits" or "license" for more information.
    >>> '%.100f' % (9999.0/10000)
    '0.9999000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'

Conclusion: this depends on the platform and implementation.
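As for actually controlling the error (the question in the title), the usual practice is tolerance-based comparison rather than exact `==`; a sketch using Python 3's `math.isclose` (my addition; on Python 2 you would write the `abs(...)` form by hand):

```python
import math

a = 9999.0 / 10000

# Compare with a tolerance instead of exact equality.
print(abs(a - 0.9999) < 1e-9)
print(math.isclose(a, 0.9999))  # Python 3.5+
```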
