Embrace and promote the use of descriptive messages #1332
Thanks for the feature request. Can you please take a look at Attaching a reason and explain the difference to your feature request? |
I can imagine adding functionality which allows attaching arbitrary context at any place. However, looking at Spring for instance, they just print to the console, e.g. when testing with MockMvc, the requests which were carried out before the failure. Do we really want to put context into the expectation error message?
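(For context, a minimal sketch of the Spring behavior referred to; the endpoint and test body are hypothetical, print() is MockMvc's standard result handler:)

import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get
import org.springframework.test.web.servlet.result.MockMvcResultHandlers.print
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

fun pizzaEndpointResponds(mockMvc: MockMvc) {
    mockMvc.perform(get("/pizza"))  // hypothetical endpoint
        .andDo(print())             // dumps the request/response that was carried out to the console
        .andExpect(status().isOk())
}
|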
Even though Spring Boot prints the requests, they do not answer why a certain value was expected. Here's an example: atrium/atrium-core/src/jsTest/kotlin/ch/tutteli/atrium/reporting/erroradjusters/AdjustStackTest.kt (lines 45 to 53 in b00b0cb).
It has two assertions: …

The only explanation clarifying why certain things were expected is given in the method name. For instance, the following might produce better failure messages: … To put it another way, the example … Here's one of my tests. WDYT?

@Test
fun `overall_info timings`() {
val mapping = createMapping<Mapping>()
val res = mapping.applyModifyAndApply {
nameColumn.dataType = "VARCHAR2(99)"
}
assertSoftly {
for ((instance, result) in res.instanceResults) {
assertManager.allDatabases().first { it.toString() == instance }.also { db ->
withClue("Database $db, session_id=${result.sessionId} and explain_plan_uid=hextoraw('${result.explainPlanUid}')") {
db.query(
"""
select a.started_when overall_info_started
, t.created_when session_created
, a.finished_when overall_info_finished
, t.last_updated session_last_updated
, a.pse_units_total
, a.pse_units_started
, a.pse_units_completed
, a.action_uid
from acme_actions a
, acme_session_traces t
where a.explain_plan_uid = :explain_plan_uid
and a.action_type = 'OVERALL_INFO'
and t.session_id = :session_id
""".trimIndent(),
{ rs ->
if (rs.nextMissing { "OVERALL_INFO / acme_session_trace is not found" }) {
return@query
}
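// (nextMissing is a project-local helper whose definition isn't shown in this thread;
// presumably it advances the ResultSet cursor and reports the given message as a failure when no row is found)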
val overallUid = rs.getUid("action_uid")
withClue("overall info action_uid=hextoraw('$overallUid')") {
withClue("overall_info action's started_when should be the same as session's created_when") {
rs.getObject<OffsetDateTime>("overall_info_started")
.shouldBeWithin(
Duration.ofSeconds(1),
rs.getObject<OffsetDateTime>("session_created")
)
}
withClue("overall_info action's started_when should be the same as session's created_when") {
rs.getObject<OffsetDateTime>("overall_info_started")
.shouldBeWithin(
Duration.ofSeconds(1),
rs.getObject<OffsetDateTime>("session_created")
)
}
withClue("overall_info action's finished_when should be the same as session's last_updated") {
rs.getObject<OffsetDateTime>("overall_info_finished")
.shouldBeWithin(
Duration.ofSeconds(1),
rs.getObject<OffsetDateTime>("session_last_updated")
)
}
withClue("pse_units_completed should match pse_units_total in action") {
rs.getInt("pse_units_completed") shouldBe rs.getInt("pse_units_total")
}
withClue("pse_units_started should match pse_units_total in action") {
rs.getInt("pse_units_started") shouldBe rs.getInt("pse_units_total")
}
}
},
result.explainPlanUid,
result.sessionId
)
}
}
result.sessionId
}
}
approveExplainPlan(res)
} |
I guess you missed this comment #1332 (comment)
That being said, I see your need, especially if you need to use a test runner which does not allow nesting test cases. I usually describe things via the test runner. But since Gradle switched reporting in 7.x (or maybe already in 6.x) and now only shows the last leaf of a nested test when it fails on the terminal, this might even be beneficial for test runners which allow nesting, as it is quite cumbersome to see in CI which test case failed (in contrast to IntelliJ, which shows the hierarchy). I intend to switch from Spek to kotest so that we can run tests also for JS and maybe even more targets in the future. As kotest unfortunately does not support nesting when using the JS target, I will have the same problems as shown in your example, and I will definitely add something like this.

This issue overlaps with the other issue you opened. I suggest we focus on grouping and attaching context to the error report in this issue, as they are closely related, and on the functionality to collect expectations in arbitrary places in the other issue. To sum up, for me it's not a question of whether we should add new functionality to Atrium which supports users like you; the only question left is how we want to add it. Taking your example, I would use …
and I would use … Note that Atrium provides ways to modify how types are represented in reporting. It might be worth improving this functionality (exposing it via the API, guiding the user etc.) together with adding functionality to attach arbitrary context. I could imagine that you write your asClue the same way over and over again (in different tests); it might be worth defining it in a single place instead. I'll take more time to think about this new feature. I hope this already gives some input to inspire you to think about it as well. Last but not least:
If you have the time, feel free to improve it. I tried to keep it as simple as possible, but I agree: a somewhat more realistic example would also make it easier to understand. 👍 |
I put it in the wrong words. Once again: you previously said that …
What I am saying is that adding extra information is helpful even when the framework logs requests and responses, and it is still helpful when the test code uses feature extractors.
Well, I find that expect("filename?")
.because("? is not allowed in file names on Windows") {
notToContain("?")
} What do you think of expect("filename?")
.withDescription("? is not allowed in file names on Windows") {
notToContain("?")
} expect("filename?")
.withReason("? is not allowed in file names on Windows") {
notToContain("?")
} Inspired by https://github.com/melix/jdoctor expect(pizza) {
"Italian cuisine must be respected".so {
notToContain("pineapple")
}
} |
Not sure what you mean by that (edit: I think I get it now, you refer to the next section about Spring and not the statement above it), but we both agree that it is valuable to have something in the given test that describes why you expect certain things 👍
I totally agree, we have … already. We are on the same page here, aren't we? Please take a look at robstoll/atrium-roadmap#91 (comment); in the long run I plan to make the distinctions mentioned there.
"I expect the driver, because you have to be an adult to drive, is at least 18 years old." IMO a perfect sentence, which transforms in Atrium to:
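(The code isn't preserved in this extract; a minimal sketch of what it could look like, assuming Atrium's because and feature APIs; driver and its age property are hypothetical:)

expect(driver)
    .because("you have to be an adult to drive") {
        feature { f(it::age) }.toBeGreaterThanOrEqualTo(18)
    }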
which later on I would read as: "I expect the driver's age, because you have to be an adult to drive, to be greater than or equal to 18." A bit more technical than the initial thought, but … |
Btw. thank you very much for your example; it shows that Atrium needs functionality which allows comparing features. I am going to create a separate issue later on. My goal is that you don't need to use … because you just repeat what you expect. |
Even though I repeat myself, it is way easier to write than figuring out the APIs for "feature extraction". The mental task is "pse_units_completed should match pse_units_total". I would not invest time in designing an API that compares features.
|
Would you still use …

and in the report you would see:

…

The only thing missing is that 19 corresponded to pse_units_total. I know, it's due to a technical reason. If you use … |
Check … Those values come from different entities, so a mere column name is not enough.
I assume something like this might work:

{ "Here's a common message: $country" }.group {
expect(pizza).notToContain(pineapple) // the failure will contain "common info"
expect(pizza) {
{ "pineapple type: ${pineapple.type}" }.group {
notToContain(pineapple) // the failure should include both top-level group and the nested one
}
{ "pineapple type: ${pineapple.type}" }.group {
notToContain(pineapple)
}
}
}
Adding the message afterward would probably hurt readability in case the comparison spans several lines. |
For reference, here's another case for adding the clarification messages to the failures:
|
Exactly, that's the reason why we did the same with … Thanks for those sources. Is your example here (#1332 (comment)) also available in a public repository? I am interested to see how you currently use kotest. |
Not really :-/ Here's how I migrated it to Atrium. I don't quite like the reporting yet; however, it somehow works.

fun <T> Expect<T>.toBeWithin(
duration: TemporalAmount,
other: Temporal
) where T : Temporal, T : Comparable<T> {
@Suppress("UNCHECKED_CAST")
val maxExpectedTime = other.plus(duration) as T
@Suppress("UNCHECKED_CAST")
val minExpectedTime = other.minus(duration) as T
_logic.manualFeature(LazyUntranslatable { "Should be within $duration of $other" }) { this }
.collectAndAppend {
toBeGreaterThanOrEqualTo(minExpectedTime)
toBeLessThanOrEqualTo(maxExpectedTime)
}
}
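// (expectSoftly and group used below are presumably the author's own helpers, built on Atrium's
// logic level like toBeWithin above; their definitions are not shown in this thread)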
expectSoftly {
for ((instance, result) in res.instanceResults) {
assertManager.allDatabases().first { it.toString() == instance }.also { db ->
group({ "Database $db, session_id=${result.sessionId} and explain_plan_uid=hextoraw('${result.explainPlanUid}')" }) {
db.query(
"""
select a.started_when overall_info_started
, t.created_when session_created
, a.finished_when overall_info_finished
, t.last_updated + interval '120' minute session_last_updated
, a.pse_units_total
, a.pse_units_started+3 pse_units_started
, a.pse_units_completed
, a.action_uid
from acme_actions a
, acme_session_traces t
where a.explain_plan_uid = :explain_plan_uid
and a.action_type = 'OVERALL_INFO'
and t.session_id = :session_id
""".trimIndent(),
{ rs ->
if (rs.nextMissing { "OVERALL_INFO / acme_session_trace is not found" }) {
return@query
}
val overallUid = rs.getUid("action_uid")
group({ "overall info action_uid=hextoraw('$overallUid')" }) {
group("overall_info action's started_when should be the same as session's created_when") {
expect(rs.getObject<OffsetDateTime>("overall_info_started"))
.toBeWithin(
Duration.ofSeconds(1),
rs.getObject<OffsetDateTime>("session_created")
)
}
group("overall_info action's finished_when should be the same as session's last_updated") {
expect(rs.getObject<OffsetDateTime>("overall_info_finished"))
.toBeWithin(
Duration.ofSeconds(1),
rs.getObject<OffsetDateTime>("session_last_updated")
)
}
group("pse_units_completed should match pse_units_total") {
expect(rs.getInt("pse_units_completed"))
.toEqual(rs.getInt("pse_units_total"))
}
group("pse_units_started should match pse_units_total") {
expect(rs.getInt("pse_units_started"), "test")
.toEqual(rs.getInt("pse_units_total"))
}
}
},
result.explainPlanUid,
result.sessionId
)
}
}
result.sessionId
}
}
approveExplainPlan(res)
}

Here's a sample failure:
…
It would probably be better to have something like this (let's ignore the "expected should align with actual" issue for now): …
|
What's the reason that you don't use …? I think we could get the report to something like the following -- and in this case we skip a few …
But we would need to introduce functionality to use features on the right-hand side of expectation functions. Once we have it, and feature extractors for …
I renamed your …
WDYT? Especially regarding: …
|
The current …
expect(actual)
.toEqual(expected)
// <=>
expect(actual)
.because(message) {
toEqual(expected)
}
// Adding "group" does not require adjusting the expectation:
group(message) {
expect(actual) // this is the same with and without the group
.toEqual(expected)
} I can't "just add When I use As I said earlier,
|
Thanks for the clarification, I see your point about …

Note, in your example you don't have to use …

The difference between … |
That is fair. I probably need to rename … However, having because inside:

expect(actual)
.toEqual(expected) {
because(message) // reported as an annotation below "to equal..."
}
// vs
group(message) { // reported as group, with expectations inside
expect(actual)
.toEqual(expected)
}
// or even
group(message) { // reported as group, with expectations inside
expect(actual)
.toEqual(expected) {
because(extraReason) // reported as an annotation below "to equal..."
}
} |
Thanks. They should be removed. |
It looks good.
It looks more like a workaround of Kotlin limitations than a natural way of writing tests to me 🤷

Adding lots of … For instance, I think regular …:

// Assume the compiler plugin will add <<rs.getInt("pse_units_completed")>> and <<rs.getInt("pse_units_total")>> messages
expect(rs.getInt("pse_units_completed"))
.toEqual(rs.getInt("pse_units_total"))
// getInt looks ok. It requires creating extensions though
getInt("pse_units_started")
// this looks odd, and it is cumbersome to write
.using({ getInt("pse_units_total") }, ::toEqual) It might be interesting to automatically generate |
More related to #1330; I'll continue this aspect there (glad you mentioned it).
I guess I am a marked man when it comes to giving reasons why something failed. I have seen so much repetition in the past, and in so many cases those repetitions were not updated when the corresponding expectations changed (the same problem as with code comments). That was the reason why I did not include something like a …
and then you would use …

which in reporting looks like …

and we could think about adding a …
WDYT?
As ResultSet is very stable, I wouldn't mind adding the feature extractors manually (rather than generating them). The advantages of feature extractors compared to a compiler plugin would be: …
Nevertheless, I think a compiler plugin would be a nice addition, especially for folks who primarily want a clean way to write expectation functions and don't mind if the report contains code, does not provide failure hints, etc.
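(For illustration, a minimal sketch of what such a handwritten extractor could look like, assuming Atrium's string-based feature overload; getIntOf is a hypothetical name:)

import ch.tutteli.atrium.api.fluent.en_GB.feature
import ch.tutteli.atrium.creating.Expect
import java.sql.ResultSet

// shifts the subject from the ResultSet to the column value,
// keeping the column name in the report
fun Expect<ResultSet>.getIntOf(column: String): Expect<Int> =
    feature("getInt($column)") { getInt(column) }

// usage sketch:
// expect(rs).getIntOf("pse_units_completed").toEqual(rs.getInt("pse_units_total"))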
I agree, it looks better. In the past I was considering generating overloads for expectation functions via KSP where we expect …

We could take the same approach and add overloads for feature extractors. Then it would look as follows: …

This would actually be way cleaner than the … |
Initially I created:

expect(x).because("bla bla bla") {
toBeGreaterThan(5)
toBeLessThan(10)
}

However, it produced bad reports: it did not print the expectations that hold, and it was hard to understand what the actual expectation was. So I moved to …
Currently, …
Those are my thoughts exactly. |
True, my mistake. Your experience using Atrium is very valuable for understanding what features we should enable in the API. Atrium shows only failing expectations by default for most expectation functions, but there is already a SummaryGroup functionality on the logic level which allows showing also expectations which hold. I am going to create an issue later on => #1342
For reference, I've added the compiler plugin:

group({ "overall info action_uid=hextoraw('$overallUid')" }) {
expect(rs.getObject<OffsetDateTime>("overall_info_started"))
.toEqualWithErrorTolerance(
rs.getObject<OffsetDateTime>("session_created")
)
expect(rs.getObject<OffsetDateTime>("overall_info_finished"))
.toEqualWithErrorTolerance(
rs.getObject<OffsetDateTime>("session_last_updated"),
Duration.ofSeconds(1)
)
expect(rs.getInt("pse_units_completed"))
.toEqual(rs.getInt("pse_units_total"))
expect(rs.getInt("pse_units_started"))
.toEqual(rs.getInt("pse_units_total"))
}
Well, here are some extensions:

fun <T> expect(
subject: T,
@CallerArgumentExpression("subject") description: String = ""
): RootExpect<T> =
RootExpectBuilder.forSubject(subject)
.withVerb(LazyUntranslatable { "I expected $description" })
.withOptions {
withComponent(ObjectFormatter::class) { c -> SimplifiedObjectFormatter(c.build()) }
}
.build()
fun <T> expect(
subject: T,
@CallerArgumentExpression("subject") description: String = "",
assertionCreator: Expect<T>.() -> Unit
): Expect<T> =
expect(subject, description).and(assertionCreator)
fun <T, R> Expect<T>.expect(
newSubject: R,
@CallerArgumentExpression("newSubject") description: String = ""
): FeatureExpect<T, R> =
_logic.manualFeature(LazyUntranslatable { "I expected $description" }) { newSubject }
.transform()
fun <T> Expect<T>.toEqual(
expected: T,
@CallerArgumentExpression("expected") description: String = ""
): Expect<T> =
if (description.isNotEmpty()) {
because(description) {
toEqual(expected)
}
} else {
toEqual(expected)
}
fun <T> Expect<T>.toEqualWithErrorTolerance(
other: Temporal,
tolerance: TemporalAmount = Duration.ofSeconds(1),
@CallerArgumentExpression("other") otherDescription: String = "",
@CallerArgumentExpression("tolerance") toleranceDescription: String = "",
) where T : Temporal, T : Comparable<T> {
@Suppress("UNCHECKED_CAST")
val maxExpectedTime = other.plus(tolerance) as T
@Suppress("UNCHECKED_CAST")
val minExpectedTime = other.minus(tolerance) as T
_logic.manualFeature(LazyUntranslatable { "to equal <<$otherDescription>> with $tolerance tolerance <<$toleranceDescription>>" }) { this }
.collectAndAppend {
toBeGreaterThanOrEqualTo(minExpectedTime)
toBeLessThanOrEqualTo(maxExpectedTime)
}
}
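(Aside: @CallerArgumentExpression mirrors C#'s attribute of the same name and is presumably filled in by the author's compiler plugin; it defaults the annotated parameter to the source text of the referenced argument. A sketch of the conceptual expansion, using the extensions above:)

// what the author writes:
expect(rs.getInt("pse_units_completed"))
    .toEqual(rs.getInt("pse_units_total"))

// what the plugin conceptually expands it to, filling the description defaults:
expect(rs.getInt("pse_units_completed"), "rs.getInt(\"pse_units_completed\")")
    .toEqual(rs.getInt("pse_units_total"), "rs.getInt(\"pse_units_total\")")
|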
Looks promising 👍
I agree |
Here's a relevant review by the Truth team: google/truth#849
@vlsi Thanks for the link, interesting read.
And I still think it would be helpful if Atrium had a compiler plugin which turned … in reporting into …
But that's more the task of robstoll/atrium-roadmap#74 (I guess I should turn those issues into discussions in Atrium; now that I have enabled this feature, it would be a better fit to have everything in one place). Btw. Atrium 1.1.0 shipped with … |
Platform (all, jvm, js): all
Extension (none, kotlin 1.3): none
Code related feature
Currently, it is hard to add descriptive messages to the assertions.
I believe the assertion description should clarify the intention of the test author, so the failure message should include more than "expected: true got: false".
Unfortunately, the Atrium examples do not use messages at all.
https://github.com/robstoll/atrium#your-first-expectation
=>
Why was 9 expected?
When the test fails like that, the developer has to reverse-engineer the meaning of the test.
It would be way better if the test author supplied a clarification message that describes why something was expected.
For instance:
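(The snippet is not preserved in this extract; a hypothetical sketch of the kind of call meant here, reusing the because style discussed above; the reason text is invented:)

expect(x)
    .because("the parser always produces 9 tokens for this input") {
        toEqual(9)
    }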
Then the failing test would be way easier to understand.
See more samples in https://kotest.io/docs/assertions/clues.html that I contributed in kotest/kotest#3348
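(For reference, a minimal sketch of kotest's clue mechanism from those docs; user and its fields are hypothetical:)

import io.kotest.assertions.withClue
import io.kotest.matchers.shouldBe

withClue("user ${user.id} should be active after onboarding") {
    user.active shouldBe true
}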