Currently, Gremlins uses the time taken by the coverage run as the basis for the test timeout, multiplied by a configurable coefficient. This approach is naïve and creates problems in projects whose full test suite takes a long time to run.
This happens because, if you run Gremlins from the root of the project, it calculates coverage over the whole project and then runs single-package tests with a timeout derived from the whole test suite multiplied by the coefficient. For example, if the whole coverage run takes 2 minutes, a single package's tests that complete in a few milliseconds still get a timeout of 2 minutes multiplied by the coefficient (3 by default).
This slows down the execution because it keeps workers busy on tests that should already have timed out. Even worse, if the tests that are bound to time out keep allocating resources (e.g. a loop that the mutation turns into an infinite loop), the memory usage of some of the workers skyrockets.
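A minimal sketch of the current behaviour described above (the names here are illustrative, not Gremlins' actual API): the whole-project coverage run time is multiplied by a single coefficient and applied to every package, regardless of how fast that package's tests actually are.

```go
package main

import (
	"fmt"
	"time"
)

// timeoutCoefficient mirrors the default multiplier mentioned above.
const timeoutCoefficient = 3

// naiveTimeout is the current scheme: one global timeout derived from the
// whole-suite coverage run, applied to every package.
func naiveTimeout(coverageRunTime time.Duration) time.Duration {
	return coverageRunTime * timeoutCoefficient
}

func main() {
	coverage := 2 * time.Minute // whole-suite coverage run
	// Even a package whose tests finish in milliseconds gets a 6-minute timeout.
	fmt.Println(naiveTimeout(coverage)) // 6m0s
}
```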
We need a better approach to how Gremlins calculates timeouts (maybe something adaptive), and also a way to control memory usage in tests.
Gremlins runs the tests for a mutant's package. We could create a cache map of per-package test timeouts.
Before running tests against a mutant, we take the timeout from the cache.
If there is no known timeout yet, we run the tests once to measure one (execution time × coefficient); see the sketch below.
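A rough sketch of that cache idea (hypothetical types, not from the Gremlins codebase): remember the first measured run time per package, scale it by the coefficient, and reuse it for later mutants of the same package.

```go
package timeout

import (
	"sync"
	"time"
)

// timeoutCache keeps one timeout per package, derived from a measured run.
type timeoutCache struct {
	mu          sync.Mutex
	coefficient time.Duration
	perPackage  map[string]time.Duration
}

func newTimeoutCache(coefficient int) *timeoutCache {
	return &timeoutCache{
		coefficient: time.Duration(coefficient),
		perPackage:  map[string]time.Duration{},
	}
}

// timeoutFor returns the cached timeout for pkg, or false if none is known yet.
func (c *timeoutCache) timeoutFor(pkg string) (time.Duration, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	d, ok := c.perPackage[pkg]
	return d, ok
}

// record stores the measured test run time for pkg, scaled by the coefficient.
func (c *timeoutCache) record(pkg string, elapsed time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.perPackage[pkg] = elapsed * c.coefficient
}
```

With something like this, a worker would call timeoutFor before running a mutant's tests and record after the first unmutated run of that package.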
It sounds interesting. If you are interested, you can take a look at the work I started on the branch 144-find-a-better-way-to-establish-timeouts. I'm not working much on Gremlins recently because I'm quite busy, but I plan to tackle this as soon as possible.
I don't have much time these days. If you are willing to work on it, you're welcome to.
It has been a while since I worked on it, so I don't remember exactly the approach I was taking, but it was something adaptive. It recalculated the time of each test run and kept a running average, if I remember well. I'm still not sure it is the best approach, though.
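A very rough sketch of an adaptive variant along those lines (illustrative only, not the code on the linked branch): keep a running average of each package's measured test time and derive the next timeout from it, falling back to the coverage-based value until a measurement exists.

```go
package timeout

import "time"

// adaptiveTimeout tracks a running average of a package's test run times.
type adaptiveTimeout struct {
	coefficient time.Duration
	runs        int
	average     time.Duration
}

func newAdaptiveTimeout(coefficient int) *adaptiveTimeout {
	return &adaptiveTimeout{coefficient: time.Duration(coefficient)}
}

// observe folds a new measured run time into the running average.
func (a *adaptiveTimeout) observe(elapsed time.Duration) {
	a.runs++
	// incremental mean: avg += (x - avg) / n
	a.average += (elapsed - a.average) / time.Duration(a.runs)
}

// next returns the timeout for the next run of this package: the average so
// far times the coefficient, or the fallback (e.g. coverage run time) times
// the coefficient when nothing has been observed yet.
func (a *adaptiveTimeout) next(fallback time.Duration) time.Duration {
	if a.runs == 0 {
		return fallback * a.coefficient
	}
	return a.average * a.coefficient
}
```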