Sunday, 18 October 2015

How to perform unit tests for floating point outputs? - python

Let's say I am writing a unit test for a function that returns a floating point number. I can assert against the full precision value as computed on my machine:

>>> import unittest
>>> def div(x,y): return x/float(y)
... 
>>>
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...             assert div(1,9) == 0.1111111111111111
... 
>>> unittest.main()
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

My question is: will that full floating point precision be the same across OSes, distros, and machines?
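As an aside (not from the original post): CPython floats are IEEE 754 double precision values on effectively every platform, and IEEE 754 requires basic arithmetic such as division to be correctly rounded, so a conforming machine should produce bit-identical results for div(1, 9). You can inspect the exact bits with float.hex():

>>> div(1, 9)
0.1111111111111111
>>> div(1, 9).hex()    # exact bits of the underlying IEEE 754 double
'0x1.c71c71c71c71cp-4'

Library functions like math.log carry no such correct-rounding guarantee, so tolerance-based comparison is safer there.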

I could round the result off and unit test against fewer decimal places, like so:

>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...             assert round(div(1,9),4) == 0.1111
... 
>>>

I could also assert on log(output), but to keep a fixed decimal precision I would still need to round or truncate.

But how else should one Pythonically deal with unit testing floating point output?
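One Pythonic option (a suggestion, not part of the original question): the standard library already has tolerance-based comparisons. unittest.TestCase.assertAlmostEqual rounds the difference between the two values to a given number of decimal places (7 by default), and Python 3.5+ adds math.isclose for relative-tolerance checks. A minimal sketch:

import math
import unittest

def div(x, y):
    return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv_places(self):
        # passes if round(div(1, 9) - 0.1111111, 7) == 0
        self.assertAlmostEqual(div(1, 9), 0.1111111, places=7)

    def testdiv_isclose(self):
        # math.isclose (Python 3.5+) uses a relative tolerance,
        # which scales sensibly with the magnitude of the values
        self.assertTrue(math.isclose(div(1, 9), 1.0 / 9.0, rel_tol=1e-9))

if __name__ == '__main__':
    unittest.main()

If numpy is available, numpy.testing.assert_allclose performs the same kind of tolerance-based check over whole arrays.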
