Imagine a feature with this design (I believe this is technically a procedural design):
def run_feature(input_tsv):
    """
    Performs a series of data transformations and finally outputs a particular calculation.

    :param str input_tsv: Path to the input TSV file
    :return int: Final calculated value
    """
    post_op_1 = _operation_1(input_tsv)
    post_op_2 = _operation_2(post_op_1)
    post_op_3 = _operation_3(post_op_2)
    # ... imagine several more of these sub-operations
    return _operation_N(post_op_N_minus_1)

# Code for each _operation_#
I have tests for each of the _operation_# functions, but I don't have any real tests for run_feature itself. So I'm wondering: how do I ensure that all of the parts are integrating correctly? Further complicating this, the input_tsv contains many columns and many potential combinations of situations.
Some thoughts:
- I could mock each sub-operation and verify that each is called with the previous one's output. This would be easy to write, but it seems to lock the future of this code into its current design.
- I could try to generate as many input_tsv situations as I can and hope that I'm not missing any potential integration problems. This seems impractical, and it would also be cumbersome to keep up to date.
- Some combination of the first two options: ensure all parts are being called, but also that the correct final return value is produced for a few common cases (e.g. an input_tsv for which I can manually predict the correct answer).
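The first option can be sketched with `unittest.mock.patch.object`. Since the real module isn't shown, the three-stage pipeline below is a hypothetical stand-in for run_feature; the interesting part is the wiring assertions, which check that each stage received the previous stage's output and that the last value is returned unchanged:

```python
# Sketch of the mock-based wiring test. The "pipeline" namespace and its
# three _operation_* lambdas are hypothetical stand-ins for the real module.
from types import SimpleNamespace
from unittest import mock

pipeline = SimpleNamespace()
pipeline._operation_1 = lambda tsv: ["row1", "row2"]  # pretend parse step
pipeline._operation_2 = lambda rows: len(rows)        # pretend transform step
pipeline._operation_3 = lambda n: n * 10              # pretend final calculation

def run_feature(input_tsv):
    post_op_1 = pipeline._operation_1(input_tsv)
    post_op_2 = pipeline._operation_2(post_op_1)
    return pipeline._operation_3(post_op_2)

def test_run_feature_wiring():
    with mock.patch.object(pipeline, "_operation_1", return_value="a") as op1, \
         mock.patch.object(pipeline, "_operation_2", return_value="b") as op2, \
         mock.patch.object(pipeline, "_operation_3", return_value=7) as op3:
        result = run_feature("input.tsv")
    # Each stage received the previous stage's output, in order...
    op1.assert_called_once_with("input.tsv")
    op2.assert_called_once_with("a")
    op3.assert_called_once_with("b")
    # ...and the final stage's value came back unchanged.
    assert result == 7

test_run_feature_wiring()
```

This confirms the plumbing without reading any real data, but, as noted above, it asserts the current call sequence and will break if the internals are restructured.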
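The hand-predictable part of the third option can be sketched as an end-to-end "golden case" test: write a tiny TSV whose answer you can compute on paper, run the whole feature, and assert the final value. The two-stage pipeline here (sum a "value" column, then double it) is a hypothetical stand-in so the example is runnable:

```python
# End-to-end "golden case" sketch. The _operation_* bodies and the "value"
# column are hypothetical; substitute the real run_feature in practice.
import csv
import os
import tempfile

def _operation_1(input_tsv):
    # Parse the TSV into a list of dicts keyed by the header row.
    with open(input_tsv, newline="") as fh:
        return list(csv.DictReader(fh, delimiter="\t"))

def _operation_2(rows):
    # Sum the "value" column.
    return sum(int(row["value"]) for row in rows)

def run_feature(input_tsv):
    post_op_1 = _operation_1(input_tsv)
    total = _operation_2(post_op_1)
    return total * 2  # hypothetical final calculation

def test_run_feature_golden_case():
    # Input small enough to predict on paper: 1 + 2 + 3 = 6, doubled -> 12.
    fd, path = tempfile.mkstemp(suffix=".tsv")
    try:
        with os.fdopen(fd, "w", newline="") as fh:
            fh.write("id\tvalue\n")
            fh.write("a\t1\nb\t2\nc\t3\n")
        assert run_feature(path) == 12
    finally:
        os.remove(path)

test_run_feature_golden_case()
```

A handful of golden cases like this exercises the real integration without asserting anything about the internal call structure, so the two kinds of test fail for different reasons: the wiring test when a stage is disconnected, the golden test when the end-to-end math goes wrong.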