Sunday, September 27, 2020

What is a "good practice" approach to including long-running tests as part of CI?

Our current CI setup is a typical one: for each PR made to our main Git repo, a Jenkins job does a build + test, and after those pass and code review is complete, we merge. This takes about an hour per PR.

I'm trying to find a reasonable way to automate a stage of our testing process that runs after CI (performance testing), which we currently run on demand, manually. These are long-running tests that take around 20 hours, so running them for each PR would slow us down considerably.
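For concreteness, any automated run would need the pass/fail decision to be scriptable rather than a manual comparison of numbers. Below is a minimal sketch of what such a driver could look like, call it perf_gate.py; everything in it is a placeholder (the run_perf_suite.sh command, the results.json / baseline.json files, the metric format, and the 10% tolerance), not our actual setup.

    import json
    import subprocess
    import sys

    # Placeholder: however the performance suite is actually launched.
    PERF_COMMAND = ["./run_perf_suite.sh"]
    RESULTS_FILE = "results.json"      # assumed output of the suite
    BASELINE_FILE = "baseline.json"    # metrics from the last known-good run
    TOLERANCE = 0.10                   # allow up to 10% regression per metric

    def main() -> int:
        # Run the (long) performance suite; a crash fails the job outright.
        subprocess.run(PERF_COMMAND, check=True)

        with open(RESULTS_FILE) as f:
            results = json.load(f)
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)

        regressions = []
        for metric, base_value in baseline.items():
            current = results.get(metric)
            # Assumes higher numbers mean slower (e.g. latency in ms).
            if current is not None and current > base_value * (1 + TOLERANCE):
                regressions.append((metric, base_value, current))

        for metric, base_value, current in regressions:
            print(f"REGRESSION {metric}: {base_value} -> {current}")

        # Non-zero exit marks the CI run as failed.
        return 1 if regressions else 0

    if __name__ == "__main__":
        sys.exit(main())

A scheduled Jenkins job could then call a script like this and turn a regression into a red build instead of relying on someone eyeballing the numbers.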

So that suggests running them separately from our current CI, perhaps once per day. However, this doesn't seem like a good solution either, since it will be hard to maintain. Say PRs A, B, and C get merged after passing CI, but B contains a change that causes a performance slowdown. We find out about this 24 hours later and then have to figure out which of the three commits caused the issue. Let's say it takes us at least another day to track it down and land a fix, B*. But by then, two more PRs, D and E, have been merged. So the history is A - B - C - D - E - B*, and the next run should ideally confirm that B* fixes the original issue caused by B.

But what if D introduces another performance slowdown? How would we tell whether B* worked? Now there is an even bigger group of PRs to dig through for the issue! It seems like merging PRs before they pass these tests isn't a good idea, but that brings us back to the first idea, which is too slow.
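When a regression does slip through a batch like A - B - C, one way to narrow it down mechanically would be to reuse the same kind of driver with git bisect, e.g. git bisect start, git bisect bad HEAD, git bisect good <last-known-good>, then git bisect run python perf_gate.py (perf_gate.py being the hypothetical sketch above). Each bisect step still costs a full 20-hour run, though, which is really the heart of the problem.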

How do people usually automate long-running tests as part of their workflow?
