This repository has been archived by the owner on Apr 11, 2024. It is now read-only.
When an unexpected error happens in a deferred test case, the case never calls deferred.resolve() and the whole benchmarking process hangs forever. There is nothing I can do about it without monkey patching. This is in contrast to non-defer mode, and I find the difference illogical.
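To illustrate the hang, here is a minimal self-contained simulation (my assumption about the mechanism, not Benchmark.js's actual internals): the runner waits on a promise that only deferred.resolve() can settle, so a case that throws before resolving leaves the promise pending forever.

```javascript
// Simulated deferred runner: the returned promise is settled ONLY by
// deferred.resolve(). An error thrown inside the case is swallowed,
// mimicking the reported behaviour, so the promise never settles.
function runDeferredCase(fn) {
  return new Promise((resolve) => {
    const deferred = { resolve };
    try {
      fn(deferred);
    } catch (e) {
      // No resolve/reject here -> the benchmark run hangs.
    }
  });
}

// A case that errors before it can call deferred.resolve():
const hung = runDeferredCase((deferred) => {
  JSON.parse('not json'); // unexpected error
  deferred.resolve();     // never reached
});

// Race against a short timer to observe that `hung` stays pending.
const TIMEOUT = Symbol('timeout');
Promise.race([
  hung,
  new Promise((r) => setTimeout(() => r(TIMEOUT), 100)),
]).then((winner) => {
  console.log(winner === TIMEOUT ? 'case hung' : 'case finished'); // → "case hung"
});
```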
Benchmark.js is used in jsbench.me. It’s a public platform for making browser JS benchmarks. Since browsers are different, and benchmarks can be run by random people, unexpected errors will happen, and therefore benchmark.js should help handle them. I see a couple of ways for benchmark.js to help:
Add a timeout for test cases. When a case times out, the benchmark continues running the other cases, so the user still gets results for those.
Add a function like deferred.reject(error). Then I can catch unexpected errors myself and tell benchmark.js that the test case has failed; it stops that case and continues running the others.
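The two suggestions above can be sketched together in a toy runner (an assumption about how the requested behaviour could work, not Benchmark.js internals): each deferred exposes resolve() and the proposed reject(), and every case is raced against a timeout, so one broken or hanging case no longer stalls the whole suite.

```javascript
// Reject a promise if it does not settle within `ms` milliseconds.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms)
    ),
  ]);
}

// Run cases sequentially; record failures instead of hanging on them.
function runSuite(cases, timeoutMs) {
  const results = [];
  return cases
    .reduce(
      (chain, { name, fn }) =>
        chain.then(() =>
          withTimeout(
            new Promise((resolve, reject) => fn({ resolve, reject })),
            timeoutMs
          )
            .then(() => results.push(`${name}: ok`))
            .catch((err) => results.push(`${name}: failed (${err.message})`))
        ),
      Promise.resolve()
    )
    .then(() => results);
}

runSuite(
  [
    { name: 'fast case', fn: (d) => d.resolve() },
    { name: 'missing API', fn: (d) => d.reject(new Error('not implemented')) },
    { name: 'hangs forever', fn: () => {} }, // never settles -> hits timeout
    { name: 'still runs', fn: (d) => d.resolve() },
  ],
  200
).then((results) => results.forEach((r) => console.log(r)));
```

With either mechanism, a failing or hanging case becomes a recorded error and the remaining cases still produce results, which is exactly what a public platform like jsbench.me needs.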
You may object that a test case should never fail, therefore my issue isn’t an issue. But the reality is that some test cases will never succeed in some cases (sorry for the pun), for example when a test case uses an API that isn’t implemented in the browser. What should I do in this situation?
Similar to #123
I’ve made a similar issue in the jsbench.me repository: psiho/jsbench-me#43