While increasing the code coverage and testability of an in-house developed framework, I ran into a specific challenge with imported modules.
Let's say a module looks like this
(for legacy and other reasons, the general setup of the framework is not up for discussion :) ):
from Toolkits import some_toolkit

class X(object):
    def __init__(self):
        # this toolkit interfaces with a legacy system
        self.toolkit_instance = some_toolkit()

    def run(self):
        # do some stuff with info from the toolkit instance
        ...
This class is instantiated by the framework when certain conditions are met, and its run function is then called.
Now I want to test the logic in this run function, and I thought about doing this:
from Toolkits import some_toolkit

class X(object):
    def __init__(self):
        # store the toolkit class itself, not an instance;
        # this toolkit interfaces with a legacy system
        self.toolkit = some_toolkit

    def run(self):
        self.toolkit_instance = self.toolkit()
        # do some stuff with info from the toolkit instance
This would allow me to instantiate the class in my test, patch self.toolkit with a mock that provides the test data I need, and validate the logic in the run function.
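As a minimal sketch of what such a test could look like, assuming unittest.mock is available (the import path framework.module_under_test and the get_state method are hypothetical placeholders):

from unittest import mock

from framework.module_under_test import X  # hypothetical import path


def test_run_uses_toolkit_state():
    instance = X()

    # Replace the stored toolkit class with a mock; its instance returns the
    # canned data this scenario needs (get_state is a hypothetical method).
    fake_toolkit = mock.Mock()
    fake_toolkit.return_value.get_state.return_value = "scenario-specific state"
    instance.toolkit = fake_toolkit

    instance.run()

    # run() should have instantiated the toolkit exactly once ...
    fake_toolkit.assert_called_once_with()
    # ... and its logic can now be asserted against the canned state.
    assert instance.toolkit_instance is fake_toolkit.return_value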
The reason I am not patching the import itself is Python's caching of imports. In real life, instantiating the imported toolkit reflects the state of an external system, and for different test scenarios I want the toolkit to reflect different states. Parametrizing the patched import leads to unwanted results (the first instance is always reused by the object under test), which is what led me to this solution. Is there anything against such a solution, or am I missing another obvious (and hopefully elegant) approach?
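For reference, this is roughly what patching the imported name itself would look like with unittest.mock, patching it where it is looked up rather than in Toolkits (again, the module path and get_state are hypothetical); in my setup the framework, not the test, creates the instance, which is where the issues described above come from:

from unittest import mock

# some_toolkit has to be patched in the module that imports it;
# "framework.module_under_test" is a hypothetical path.
with mock.patch("framework.module_under_test.some_toolkit") as fake_toolkit:
    fake_toolkit.return_value.get_state.return_value = "scenario-specific state"
    # ... trigger the framework here so that it instantiates X and calls run() ...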