For background, our team keeps a separate E2E test suite repository in which we use Puppeteer to test a product we're building. These E2E tests click around a frontend UI, but because multiple backend microservices are involved in verifying the correct behaviour, we decided to keep all of the tests in one separate repo.
We're debating an internal, somewhat philosophical question: should the test suite be required to pass in the pipelines of pull requests INTO the E2E repo itself, or not?
The main points for not requiring tests to pass in the PR are:
- Tests may fail because there are bugs in the system under test. In that case the test did its job by finding a bug, so it should still be merged into the suite.
- Blocked pull requests into the E2E repo cannot be merged until the underlying bugs are fixed, so PRs may pile up or be forgotten about.
- E2E tests stuck in blocked PRs aren't run as part of the periodic scheduled runs, so we're less likely to notice a failing test turning green without manually retriggering the PR pipelines.
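One possible middle ground for the first concern (this is an illustration, not something the team has proposed): merge a bug-reproducing test with an explicit "known failure" marker, so the suite stays green while the bug exists and alerts you once it's fixed. The `knownFailure` helper below is hypothetical; real runners offer equivalents (e.g. Jest's `test.failing`), and the body here is a placeholder standing in for a real Puppeteer assertion.

```javascript
// Hypothetical helper: runs a test body and inverts the result.
// While the bug exists, the body throws, and the suite stays green.
// Once the bug is fixed, the body passes and this helper fails loudly,
// reminding us to remove the marker and promote it to a normal test.
async function knownFailure(name, testFn) {
  try {
    await testFn();
  } catch (err) {
    // Expected path: the bug was reproduced, pipeline stays green.
    console.log(`known failure (bug still present): ${name}`);
    return;
  }
  throw new Error(`"${name}" now passes - remove the knownFailure marker`);
}

// Usage sketch with a placeholder check instead of a real UI assertion.
(async () => {
  await knownFailure('checkout total ignores discount (BUG-123)', async () => {
    const total = 100; // imagine this value was scraped from the page
    if (total !== 90) throw new Error(`expected 90, got ${total}`);
  });
})();
```

The scheduled runs would then still exercise these tests, and a "known failure" flipping to red is itself the signal that the bug was fixed.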
The main points for requiring tests to pass in the PR are:
- Tests should pass most of the time. If an E2E PR pipeline fails, developers are more likely to fix the bug in order to get the pipeline green and the test merged.
- If E2E tests aren't run in the PR pipeline, developers can't be sure the test would ever pass. We may end up with a suite of tests on master that aren't valid.
- The traditional TDD cycle of "test fails" -> "test passes" -> "refactor" implies that when a new test fails, we fix the code until it's green and only then merge the test. This keeps our pipelines green at all times.
What are your thoughts? Which is the correct approach? Are there any relevant resources for creating test suites across multiple services?