Description
The following error was seen on the `feature/incremental-training` branch for a PR build of #2032:
```
[Exception] - critical check errorDecreaseOnNew > 2.0 * errorIncreaseOnOld has failed [0 <= 0]
== [File] - CBoostedTreeTest.cc
== [Line] - 918
```
The test could be trivially fixed by changing the assertion on line 918 from `>` to `>=`.
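For reference, here is a minimal sketch of the relaxed check. The `BOOST_TEST_REQUIRE` macro and the surrounding test case are assumptions based on the Boost.Test-style failure output; the variable names are taken from the message above:

```cpp
#define BOOST_TEST_MODULE CBoostedTreeTestSketch
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(testRelaxedAssertionSketch) {
    // Stand-in values reproducing the degenerate case from the CI failure.
    double errorDecreaseOnNew = 0.0;
    double errorIncreaseOnOld = 0.0;

    // Relaxed check: >= instead of >, so the 0 <= 0 case no longer fails.
    BOOST_TEST_REQUIRE(errorDecreaseOnNew >= 2.0 * errorIncreaseOnOld);
}
```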
However, this comment makes me think the failure might be a sign of a deeper problem with the test:
```cpp
// By construction, the prediction error for the old training data must
// increase because the old and new data distributions overlap and their
// target values disagree. However, we should see proportionally a much
// larger reduction in the new training data prediction error.
```
If the test is supposed to be constructed such that the prediction errors for the old and new data distributions cannot both be 0, then the `[0 <= 0]` failure suggests something is wrong in the test setup code.
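If that invariant is intended, one option is to assert it explicitly so a broken setup fails with a clearer message. A minimal sketch, again assuming Boost.Test macros and the variable names from the failure output:

```cpp
// Hypothetical guards just before the existing ratio check at line 918.
// If the setup guarantees the old-data error increases and the new-data
// error decreases, neither quantity can be exactly zero, so a broken
// setup would fail here with a more useful message.
BOOST_TEST_REQUIRE(errorIncreaseOnOld > 0.0);
BOOST_TEST_REQUIRE(errorDecreaseOnNew > 0.0);
BOOST_TEST_REQUIRE(errorDecreaseOnNew > 2.0 * errorIncreaseOnOld);
```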
Another thing to note is that this failure occurred only once across several CI runs, so the test is not deterministic for a given platform/architecture. At one time we always used the same random number generator seed for ml-cpp unit tests so that behaviour was consistent between runs. If we are no longer doing that, we need the test code to print the random number generator seed for each run so that problems like this can be debugged.
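A generic sketch of the kind of seed logging meant here, using only the standard library rather than ml-cpp's actual test harness or RNG utilities:

```cpp
#include <cstdint>
#include <iostream>
#include <random>

int main() {
    // Draw a fresh seed per run, but always log it so a failing CI run
    // can be reproduced locally by pinning the same seed.
    std::uint32_t seed = std::random_device{}();
    std::cerr << "Random number generator seed: " << seed << '\n';
    std::mt19937 rng{seed};

    // ... run the randomised test logic with rng, e.g.:
    std::uniform_real_distribution<double> u{0.0, 1.0};
    std::cout << "First draw: " << u(rng) << '\n';
    return 0;
}
```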