
> To make matters worse if your project grows so large that your hypothetical tests take over an hour to run, it sounds like the project would already be broken down into modules.

Story time so that you can revisit your assumptions.

Imagine your product is a graphics driver. Graphics APIs have extensive conformance test suites with millions of individual tests. Running them serially typically takes many hours, depending on the target hardware.
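Back-of-the-envelope, with assumed numbers purely to show the scale:

    # 5 million tests at an optimistic 20 ms each, run serially:
    print(5_000_000 * 0.020 / 3600)   # ~27.8 hours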

But over the years you invariably run across bugs exposed by real applications that the conformance suites don't catch. So you also accumulate additional tests: some are distilled reproductions of those bug triggers, others are captured frames compared against known-good "golden" output images. Those add further to the test runtime.
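For illustration, a minimal sketch of what one of those golden-image checks might look like. Everything here is hypothetical: the render_frame entry point and file paths are stand-ins, and real drivers use more elaborate comparisons. The essential part is the per-channel tolerance rather than exact equality, since GPUs are not bit-exact across hardware revisions.

    import numpy as np
    from PIL import Image

    # Hypothetical golden-image check: replay a captured frame and
    # compare against a known-good reference within a tolerance.
    def check_golden(render_frame, capture_path, golden_path, tol=2):
        actual = np.asarray(render_frame(capture_path), dtype=np.int16)
        golden = np.asarray(Image.open(golden_path), dtype=np.int16)
        # Small per-channel deviations are allowed; exact equality
        # would fail spuriously across driver and hardware versions.
        return np.abs(actual - golden).max() <= tol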

Then, mostly due to performance pressure, there are many options that affect the precise details of how the driver runs. Many of them are auto-tuned heuristics, but the conformance suite is unlikely to exercise every heuristic path, so really you should also run all your tests with overrides for each heuristic. Now you have a combinatorial explosion: the space of tests you really ought to run is at least in the quadrillions.
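To make the explosion concrete, a quick sketch with made-up knob counts (real drivers have many more than eight such options, which is how you get from billions into the quadrillions):

    from math import prod

    # Hypothetical heuristic knobs and how many settings each has.
    options_per_knob = [3, 4, 2, 5, 3, 2, 4, 3]

    configs = prod(options_per_knob)   # 8,640 from just 8 knobs
    base_tests = 5_000_000             # order of a conformance suite
    print(f"{configs * base_tests:,}") # 43,200,000,000 runs already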

It's simply infeasible to run all of this on every PR. So what tends to happen in practice is that a manually curated subset of tests runs on every commit, and more thorough testing happens on various asynchronous schedules (e.g. on release branches, or daily on the development branch).
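One common shape for that tiering, sketched with made-up suite and trigger names (illustrative, not any particular CI system's API):

    # Hypothetical mapping from CI trigger to test tier.
    TIERS = {
        "pull_request": "smoke",     # curated subset, minutes
        "post_merge":   "extended",  # broader slice, ~an hour
        "nightly":      "full",      # whole conformance suite
        "release":      "full-plus-heuristic-overrides",
    }

    def select_suite(trigger: str) -> str:
        # Default to the cheapest tier so a misconfigured job
        # never silently runs for days.
        return TIERS.get(trigger, "smoke")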

I'm not convinced the article's approach is the solution, but it could be part of one. New ideas in this space are certainly welcome.


