When I run the following test on its own, it passes. But when I run all the tests in its test class together, it fails because data_backend.read_month_data returns the wrong value: a dict that was set as the return_value in several earlier tests.
def test_get_statistics(self):
    mock_month_data = {
        1: {
            "PyCharm": 4000,
            "IntelliJ": 6000
        }
    }
    data_backend = MagicMock()
    data_backend.read_month_data.return_value = mock_month_data
    data_repository = CodeTimeDataRepository(data_backend)

    result = data_repository.get_statistics(datetime.date(2020, 1, 1))

    expected_result = {
        "date": "Jan 01 2020",
        "total_time": 10000,
        "activities": [
            {"name": "IntelliJ", "time": 6000, "progress": 0.6},
            {"name": "PyCharm", "time": 4000, "progress": 0.4}
        ]
    }
    self.maxDiff = None
    self.assertDictEqual(expected_result, result)
The mock object is set up in the earlier tests with the following code. self.get_default_month_data() returns a dict with the same shape as mock_month_data, but with different values.
data_backend = MagicMock()
data_backend.read_month_data.return_value = self.get_default_month_data()
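For what it's worth, this kind of cross-test leakage only happens when the same MagicMock instance is reused between tests (for example, when it is created once at class or module level instead of inside each test or in setUp). A minimal standalone sketch of the effect, using hypothetical names rather than the real test class:

```python
from unittest.mock import MagicMock

# Assumption for illustration: the backend mock is shared, e.g. a
# class-level or module-level attribute reused by several tests.
shared_backend = MagicMock()

def earlier_test():
    # An earlier test configures the shared mock's return value.
    shared_backend.read_month_data.return_value = {"default": 1}
    return shared_backend.read_month_data()

def later_test():
    # This test never sets return_value, but because the mock instance
    # is shared, the value configured by earlier_test leaks through.
    return shared_backend.read_month_data()

earlier_test()
leaked = later_test()
print(leaked)  # {'default': 1} -- the stale return value

# A fresh MagicMock per test has no such leftover configuration:
fresh_backend = MagicMock()
print(fresh_backend.read_month_data() == leaked)  # False
```

If each test really does build its own MagicMock as shown above, the leak must come from somewhere else that persists between tests (e.g. caching inside CodeTimeDataRepository).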