The t-test requires an alpha value to create a confidence interval (e.g., 5%; see pycsep/csep/core/poisson_evaluations.py, lines 14 to 15 at 5f84ea9), from which the information-gain bounds and the type-1 error are returned inside an EvaluationResult. However, this alpha value is then forgotten, which causes the EvaluationResult plotting (pycsep/csep/utils/plots.py, line 1718 at 5f84ea9) to require recalling the original value of alpha with which the t-test was carried out.
I am not sure whether to create a new attribute alpha on the resulting EvaluationResult (pycsep/csep/core/poisson_evaluations.py, lines 46 to 54 at 5f84ea9), or to redefine the attributes of the t-test. For instance, shouldn't result.quantile, instead of result.test_distribution, actually contain the information-gain lower and upper bounds?
Also, the W-test confidence interval is calculated inside the plotting functions instead of in the evaluation function itself.
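To make the dependence on alpha concrete, here is a minimal sketch (an illustration of the usual paired-t construction, not pycsep's actual code; the function name and arguments are hypothetical):

```python
import math
from scipy.stats import t

def t_test_bounds(information_gain, ig_std, n_obs, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval for the information gain.

    Illustrative only: the critical value depends on both alpha and the
    degrees of freedom, so neither can be reconstructed from the bounds alone.
    """
    dof = n_obs - 1
    t_crit = t.ppf(1.0 - alpha / 2.0, dof)
    half_width = t_crit * ig_std / math.sqrt(n_obs)
    return information_gain - half_width, information_gain + half_width

# Once only (lower, upper) are stored in the result, the plotting code
# has no way of recovering which alpha produced them.
lower, upper = t_test_bounds(information_gain=0.12, ig_std=0.4, n_obs=100)
```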
pabloitu changed the title from "EvaluationResult of T-test do not contain the original percentile/alpha value" to "EvaluationResult of T-test (and W-test) do not contain the original percentile/alpha value" on Aug 19, 2024
Addressed in #263, commit 7306329, which adds an extra value to EvaluationResult().quantile that stores the type-1-error alpha value. Now, the alpha value can be written in the plot legend to explain what the symbols/colors in the t-test plot mean.
Currently, the t-test EvaluationResult() is defined as:
but the values don't feel quite in place. The dof value is also lost, which would involve some crazy acrobatics if a different confidence interval is desired, or re-running the entire test. This differs from the consistency tests, where the confidence interval is defined at the Plot level.
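For instance, deriving a different confidence interval from an already-stored one requires exactly this kind of acrobatics: undoing the old critical value and applying a new one, which only works if the dof and the original alpha were kept. A hypothetical sketch:

```python
from scipy.stats import t

def rescale_interval(ig_lower, ig_upper, dof, alpha_old, alpha_new):
    """Convert a stored (1 - alpha_old) CI into a (1 - alpha_new) CI.

    Hypothetical helper: works because the interval is symmetric around
    the mean information gain and its half-width scales with the t
    critical value, which is recoverable only if dof and alpha_old
    were stored alongside the bounds.
    """
    mean = 0.5 * (ig_lower + ig_upper)
    half_old = 0.5 * (ig_upper - ig_lower)
    scale = t.ppf(1.0 - alpha_new / 2.0, dof) / t.ppf(1.0 - alpha_old / 2.0, dof)
    return mean - half_old * scale, mean + half_old * scale
```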
I wonder if the attributes of the resulting EvaluationResult should be re-defined for the t-test as:
- test_distribution: the actual t-distribution, i.e., the 3 parameters of the location-scale distribution: (meanIG, stdIG, dof).
- observation_statistic: 0, since we are testing whether the LogScores are substantially different, i.e., IG = 0.
- quantile: the % of the mass of test_distribution below 0.
In this way, the comparison-test results become analogous to the consistency tests: a test_distribution, similar to the Poisson/NegBinom distributions, and a quantile value that can be immediately checked against a confidence level (whether it falls below or above it).
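A sketch of what such a result could look like, using the attribute values proposed above (the class and names are illustrative, not pycsep's API):

```python
from scipy.stats import t

class TTestResult:
    """Illustrative container mirroring the proposed attributes."""

    def __init__(self, mean_ig, std_ig, dof):
        # test_distribution: the 3 parameters of the location-scale t-distribution
        self.test_distribution = (mean_ig, std_ig, dof)
        # observation_statistic: 0, i.e., the IG = 0 reference point
        self.observation_statistic = 0.0
        # quantile: mass of the test distribution below 0
        self.quantile = t.cdf(0.0, dof, loc=mean_ig, scale=std_ig)

result = TTestResult(mean_ig=0.12, std_ig=0.04, dof=99)
# Any confidence level can now be applied at plot time, e.g. a one-sided
# check at alpha = 0.05 without re-running the test:
significant = result.quantile < 0.05
```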