Implement check-result: test check failure affects test result by default #3239
base: main
@@ -0,0 +1,59 @@ (new beakerlib test script)

```shell
#!/bin/bash
. /usr/share/beakerlib/beakerlib.sh || exit 1

rlJournalStart
    rlPhaseStartSetup
        rlRun "tmp=\$(mktemp -d)" 0 "Creating tmp directory"
        rlRun "pushd $tmp"
        rlRun "set -o pipefail"
    rlPhaseEnd

    rlPhaseStartTest "Test with passing checks"
        rlRun -s "tmt run -vvv test --name /test/check-pass"
        rlAssertGrep "pass /test/check-pass" $rlRun_LOG
        rlAssertGrep "pass dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with failing dmesg check (respect)"
        rlRun -s "tmt run -vvv test --name /test/check-fail-respect" 2
        rlAssertGrep "fail /test/check-fail-respect" $rlRun_LOG
        rlAssertGrep "fail dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with failing dmesg check (info)"
        rlRun -s "tmt run -vvv test --name /test/check-fail-info"
        rlAssertGrep "pass /test/check-fail-info" $rlRun_LOG
        rlAssertGrep "info dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with passing dmesg check (xfail)"
        rlRun -s "tmt run -vvv test --name /test/check-xfail-pass" 2
        rlAssertGrep "fail /test/check-xfail-pass" $rlRun_LOG
        rlAssertGrep "pass dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with failing dmesg check (xfail)"
        rlRun -s "tmt run -vvv test --name /test/check-xfail-fail"
        rlAssertGrep "pass /test/check-xfail-fail" $rlRun_LOG
        rlAssertGrep "fail dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with multiple checks with different result interpretations"
        rlRun -s "tmt run -vvv test --name /test/check-multiple" 2
        rlAssertGrep "fail /test/check-multiple" $rlRun_LOG
        rlAssertGrep "fail dmesg" $rlRun_LOG
        rlAssertGrep "pass dmesg" $rlRun_LOG
        rlAssertGrep "info dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartTest "Test with failing dmesg check but overridden by test result"
        rlRun -s "tmt run -vvv test --name /test/check-override"
        rlAssertGrep "pass /test/check-override" $rlRun_LOG
        rlAssertGrep "fail dmesg" $rlRun_LOG
    rlPhaseEnd

    rlPhaseStartCleanup
        rlRun "popd"
        rlRun "rm -r $tmp" 0 "Removing tmp directory"
    rlPhaseEnd
rlJournalEnd
```

Note: the original assertion for `/test/check-fail-info` grepped for "skip dmesg", but `skip` is not a valid `CheckResultInterpret` value and the `info` interpretation maps the check outcome to `info`, so the assertion was corrected to "info dmesg".
@@ -0,0 +1,83 @@ (new fmf test metadata)

```yaml
summary: Tests for check results behaviour
description: Verify that check results, including after-test checks, are correctly handled

/test/check-pass:
    summary: Test with passing checks
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        result: respect
    # Expected outcome: PASS (test passes, check passes)

/test/check-fail-respect:
    summary: Test with failing dmesg check (respect)
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        failure-pattern: '.*'
        result: respect
    # Expected outcome: FAIL (test passes, but check fails and is respected)

/test/check-fail-info:
    summary: Test with failing dmesg check (info)
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        failure-pattern: '.*'
        result: info
    # Expected outcome: PASS (test passes, check fails, but should be just info)

/test/check-xfail-pass:
    summary: Test with passing dmesg check (xfail)
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        result: xfail
    # Expected outcome: FAIL (test passes, check passes but xfail expects it to fail)

/test/check-xfail-fail:
    summary: Test with failing dmesg check (xfail)
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        failure-pattern: '.*'
        result: xfail
    # Expected outcome: PASS (test passes, check fails but xfail expects it to fail)

/test/check-multiple:
    summary: Test with multiple checks with different result interpretations
    test: echo "Test passed"
    framework: shell
    duration: 1m
    check:
      - how: dmesg
        failure-pattern: '.*'
        result: respect
      - how: dmesg
        result: xfail
      - how: dmesg
        failure-pattern: '.*'
        result: info
    # Expected outcome: FAIL (the first dmesg check fails and is respected,
    # the second passes but xfail expects it to fail, and the third failing
    # dmesg check is just info)

/test/check-override:
    summary: Test with failing dmesg check but overridden by test result
    test: echo "Test passed"
    framework: shell
    duration: 1m
    result: pass
    check:
      - how: dmesg
        failure-pattern: '.*'
        result: respect
    # Expected outcome: PASS (test passes, check fails but is overridden by 'result: pass')
```

Note: `/test/check-fail-info` originally declared `result: skip`, which is not a member of `CheckResultInterpret` (only `info`, `respect`, `xfail`) and would make `from_spec` raise; it was corrected to `result: info` to match the test name and the expected-outcome comment.
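The interpretation matrix exercised by the metadata above can be sketched in a few lines of Python. This is a standalone illustration (the `Outcome`, `Interpret`, and `interpret` names are invented here), not tmt's actual implementation:

```python
import enum


class Outcome(enum.Enum):
    PASS = 'pass'
    FAIL = 'fail'
    INFO = 'info'


class Interpret(enum.Enum):
    RESPECT = 'respect'
    INFO = 'info'
    XFAIL = 'xfail'


def interpret(outcome: Outcome, how: Interpret) -> Outcome:
    """Map a raw check outcome through its interpretation."""
    if how is Interpret.INFO:
        # Demoted to informational, never fails the test.
        return Outcome.INFO
    if how is Interpret.XFAIL:
        # Expected failure: swap pass and fail, keep anything else.
        return {Outcome.FAIL: Outcome.PASS,
                Outcome.PASS: Outcome.FAIL}.get(outcome, outcome)
    # RESPECT: taken at face value.
    return outcome


# The expected outcomes annotated in the test metadata:
assert interpret(Outcome.PASS, Interpret.RESPECT) is Outcome.PASS  # /test/check-pass
assert interpret(Outcome.FAIL, Interpret.RESPECT) is Outcome.FAIL  # /test/check-fail-respect
assert interpret(Outcome.FAIL, Interpret.INFO) is Outcome.INFO     # /test/check-fail-info
assert interpret(Outcome.PASS, Interpret.XFAIL) is Outcome.FAIL    # /test/check-xfail-pass
assert interpret(Outcome.FAIL, Interpret.XFAIL) is Outcome.PASS    # /test/check-xfail-fail
```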
@@ -0,0 +1,28 @@ (new file, the enums module imported below as `tmt.checks.enums`)

```python
import enum


class CheckEvent(enum.Enum):
    """ Events in test runtime when a check can be executed """

    BEFORE_TEST = 'before-test'
    AFTER_TEST = 'after-test'

    @classmethod
    def from_spec(cls, spec: str) -> 'CheckEvent':
        try:
            return CheckEvent(spec)
        except ValueError:
            raise ValueError(f"Invalid test check event '{spec}'.")


class CheckResultInterpret(enum.Enum):
    INFO = 'info'
    RESPECT = 'respect'
    XFAIL = 'xfail'

    @classmethod
    def from_spec(cls, spec: str) -> 'CheckResultInterpret':
        try:
            return CheckResultInterpret(spec)
        except ValueError:
            raise ValueError(f"Invalid check result interpretation '{spec}'.")
```
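The `from_spec` classmethods turn user-supplied spec strings into enum members while replacing the terse built-in `ValueError` message with one that names the offending value. A self-contained sketch of the same pattern (mirroring `CheckResultInterpret` here rather than importing tmt):

```python
import enum


class CheckResultInterpret(enum.Enum):
    INFO = 'info'
    RESPECT = 'respect'
    XFAIL = 'xfail'

    @classmethod
    def from_spec(cls, spec: str) -> 'CheckResultInterpret':
        try:
            return cls(spec)
        except ValueError:
            # Re-raise with a message that quotes the invalid spec string.
            raise ValueError(
                f"Invalid check result interpretation '{spec}'.") from None


print(CheckResultInterpret.from_spec('xfail'))  # CheckResultInterpret.XFAIL
try:
    CheckResultInterpret.from_spec('skip')
except ValueError as error:
    print(error)  # Invalid check result interpretation 'skip'.
```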
Changes to the result module:

@@ -10,7 +10,7 @@

```diff
 import tmt.identifier
 import tmt.log
 import tmt.utils
-from tmt.checks import CheckEvent
+from tmt.checks.enums import CheckEvent, CheckResultInterpret
 from tmt.utils import GeneralError, Path, SerializableContainer, field

 if TYPE_CHECKING:
```

@@ -189,6 +189,27 @@ class CheckResult(BaseResult):

```diff
         default=CheckEvent.BEFORE_TEST,
         serialize=lambda event: event.value,
         unserialize=CheckEvent.from_spec)
+    check_result: CheckResultInterpret = field(
+        default=CheckResultInterpret.RESPECT,
+        serialize=lambda result: result.value,
+        unserialize=CheckResultInterpret.from_spec
+        )
+
+    def interpret_check_result(self) -> 'CheckResult':
+        """Interpret check result according to the check_result interpretation."""
+
+        if self.check_result == CheckResultInterpret.RESPECT:
+            return self
+
+        if self.check_result == CheckResultInterpret.INFO:
+            self.result = ResultOutcome.INFO
+        elif self.check_result == CheckResultInterpret.XFAIL:
+            self.result = {
+                ResultOutcome.FAIL: ResultOutcome.PASS,
+                ResultOutcome.PASS: ResultOutcome.FAIL
+            }.get(self.result, self.result)
+
+        return self


 @dataclasses.dataclass
```

Review comment (on the new `check_result` field): The result class cannot contain a field that governs its own interpretation. That is owned by a different entity:

```python
def interpret_check_result(self, interpret: CheckResultInterpret) -> 'CheckResult':
    """Interpret check result according to the check_result interpretation."""
    if interpret == CheckResultInterpret.RESPECT:
        return self
    if interpret == CheckResultInterpret.INFO:
        self.result = ResultOutcome.INFO
    elif interpret == CheckResultInterpret.XFAIL:
        self.result = {
            ResultOutcome.FAIL: ResultOutcome.PASS,
            ResultOutcome.PASS: ResultOutcome.FAIL
        }.get(self.result, self.result)
    return self
```

Take a

```python
{
    check.how: check.result
    for check in invocation.test.check
}
```

It's the same solution, the idea is there: the interpretation does not lie in the result itself, it's owned by the test (for test results) or the test check (for test check results), and must be extracted from them (which is exactly what …

@@ -347,6 +368,21 @@ def interpret_result(self, interpret: ResultInterpret) -> 'Result':

```diff
         raise tmt.utils.SpecificationError(
             f"Test result note '{self.note}' must be a string.")

+        # Interpret check results
+        failed_checks: list[CheckResult] = []
+        for check in self.check:
+            interpreted_check = check.interpret_check_result()
+            if (
+                    interpreted_check.result == ResultOutcome.FAIL and
+                    interpreted_check.check_result != CheckResultInterpret.INFO
+            ):
+                failed_checks.append(interpreted_check)
+
+        if failed_checks:
+            self.result = ResultOutcome.FAIL
+            check_note = ', '.join([f"Check '{check.name}' failed" for check in failed_checks])
+            self.note = f"{self.note}, {check_note}" if self.note else check_note
+
         if interpret == ResultInterpret.XFAIL:
             # Swap just fail<-->pass, keep the rest as is (info, warn,
             # error)
```

Review comment (on the `for check in self.check` loop): Please, …

Review comment: This will need a bit more work, because:

```python
# This is constructed in `from_test_invocation()` and fed to
# `interpret_result()` - just like the result interpretation:
{
    check.how: check.result
    for check in invocation.test.check
}

def interpret_result(
        self,
        interpret: ResultInterpret,
        interpret_checks: dict[str, CheckResultInterpret]) -> 'Result':
    ...
    self.check = [
        check_result.interpret_check_result(interpret_checks[check_result.name])
        for check_result in self.check
    ]
    failed_checks = [
        check_result
        for check_result in self.check
        if check_result.outcome == ResultOutcome.FAIL]
```

Author: Thanks for the review. Needs work, yes. Though I might need someone to go through the code with me, because most of the time I'm just guessing what is supposed to do what and how, and then trying blind trial and error. Not enjoyable, and I'm back to feeling like a useless idiot.

Reviewer: Well, I did offer to have an online session, and the offer still stands. Add something to my calendar and we can try it out.

Author: Awesome, much appreciated.

Review comment (on the new enums module): Why keep them in a standalone file? `__init__.py` has about 250+ lines, isn't that small enough?

Author: I just needed to deal with that circular import somehow. Would gladly do a less bad solution.
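The circular-import problem mentioned in the thread can be reproduced in miniature: two modules that import from each other break, while both can safely import a third, dependency-free module. A throwaway sketch (the `demo` package and its file contents are invented for illustration, not tmt's actual modules):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a throwaway package on disk: demo/checks/enums.py holds the shared
# enums, demo/checks/__init__.py and demo/result.py both import them.
pkg = Path(tempfile.mkdtemp()) / "demo"
(pkg / "checks").mkdir(parents=True)
(pkg / "__init__.py").write_text("")

# enums.py imports nothing from the package, so anyone may import it safely,
# even while another package module is still mid-initialization.
(pkg / "checks" / "enums.py").write_text(textwrap.dedent("""\
    import enum

    class CheckEvent(enum.Enum):
        BEFORE_TEST = 'before-test'
        AFTER_TEST = 'after-test'
    """))

# If CheckEvent lived directly in checks/__init__.py, the import of
# demo.result below would loop back into a half-initialized demo.checks.
(pkg / "checks" / "__init__.py").write_text(
    "from demo.checks.enums import CheckEvent\n"
    "import demo.result\n")
(pkg / "result.py").write_text(
    "from demo.checks.enums import CheckEvent\n")

sys.path.insert(0, str(pkg.parent))
import demo.checks

print(demo.checks.CheckEvent.AFTER_TEST.value)  # prints 'after-test'
```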