I am trying to learn TDD, and I have some questions about how much testing should be done at the acceptance/integration/system level compared to only at the unit level (and about testing the same thing at more than one level).
For example, let's say I am building something that:
- Loads current state
- Calls an external API for updates of the state
- Saves the state again
- Based on the new state decides if you should be allowed to login (returns a boolean)
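To make the discussion concrete, here is a minimal sketch of the flow described above (all names here are my own assumptions, not from any real system), with the state store and the external API passed in as dependencies so they can be faked in tests:

```python
# Hypothetical sketch of the described flow; all names are assumptions.

def login_allowed(state_store, status_api, user_id):
    """Load state, refresh it from the external API, save it,
    then decide whether login is allowed."""
    state = state_store.load(user_id)           # load current state
    update = status_api.fetch_update(user_id)   # call the external API
    new_state = apply_update(state, update)     # merge the update in
    state_store.save(user_id, new_state)        # save the state again
    return is_login_allowed(new_state)          # decide based on new state

def apply_update(state, update):
    # Trivial placeholder merge: the API update wins on conflicts.
    return {**(state or {}), **update}

def is_login_allowed(state):
    # Placeholder business rule: state must be marked OK.
    return state.get("status") == "OK"
```

Because `state_store` and `status_api` are injected, each piece (`apply_update`, `is_login_allowed`, the store, the API client) can get its own unit tests, while `login_allowed` is the natural seam for the acceptance-level tests.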
Let's say that I have unit tests for the API client itself, for reading and updating the state, and for deciding whether login should be allowed based on the state.
I don't know how much testing should then be done at the acceptance level. Off the top of my head, all of these could be required:
- Verify that login is allowed when starting with no state and getting an OK state from the API
- Verify that login is NOT allowed when starting with no state and getting a NOT OK state from the API
- Verify that login is allowed when starting with an OK state and the updated state is still allowing it
- Verify that login is NOT allowed when starting with an OK state but ending up in a NOT OK state based on the new info from the API (and this could happen in different ways depending on the response from the external API)
- Verify that login is allowed when starting in a NOT OK state that is changed based on the information from the API
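One way to keep the acceptance level from exploding into one test per input is to express the scenarios above as a single table-driven test. A sketch (the `run_login` entry point and the state/response shapes are my assumptions, standing in for the real system under test with the external API stubbed out):

```python
# Table-driven acceptance test covering the scenarios listed above.
# (starting_state, stubbed_api_response, expected_login_decision)
CASES = [
    (None,              {"status": "OK"},  True),
    (None,              {"status": "BAD"}, False),
    ({"status": "OK"},  {"status": "OK"},  True),
    ({"status": "OK"},  {"status": "BAD"}, False),
    ({"status": "BAD"}, {"status": "OK"},  True),
]

def run_login(starting_state, api_response):
    # Stand-in for the real system under test: merge the stubbed
    # API response into the starting state and decide on the result.
    new_state = {**(starting_state or {}), **api_response}
    return new_state.get("status") == "OK"

def test_login_decisions():
    for start, response, expected in CASES:
        assert run_login(start, response) is expected, (start, response)
```

Each row documents one business-level scenario, and adding a newly discovered case is one line in the table rather than a whole new test.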
Basically, I don't know how to start from the business spec "If you have a valid state based on the information in our system and the external system, you should be allowed to log in."
How do I write acceptance tests (or system/integration tests) and unit tests to drive this? It feels like I need a separate acceptance test for every possible input.