Health support #130
base: main
Conversation
Signed-off-by: Daniel Kec <[email protected]>
Can one of the admins verify this patch?
Signed-off-by: Daniel Kec <[email protected]>
=== Liveness

Liveness check `messaging` gets DOWN only when `cancel` or `onError` signals are detected on any of the messaging channels.
Similar to Readiness: define whether it starts in the UP or DOWN state, then define the states that trigger a change.
What should happen if a message gets `nack`ed?
I don't believe acknowledgement should be related to the ability of the channel to send messages, as messaging itself doesn't leverage acking/nacking in any way. It should be the responsibility of the publishers and subscribers (connectors), as that requirement can change case by case. In other words, such checks should be present in the connectors or in business code if or when needed.
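To make that concrete: an application (or connector) that does care about nacks could expose its own standard MP Health check. A minimal sketch, using only the MP Health API; the `NackTrackingCheck` class, the check name, and the counter wiring are illustrative, not part of this proposal:

```java
import java.util.concurrent.atomic.AtomicLong;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Hypothetical check owned by the application, not by the messaging implementation.
@Liveness
@ApplicationScoped
public class NackTrackingCheck implements HealthCheck {

    // Incremented by the subscriber's own nack handling code.
    private final AtomicLong nacked = new AtomicLong();

    public void recordNack() {
        nacked.incrementAndGet();
    }

    @Override
    public HealthCheckResponse call() {
        long count = nacked.get();
        return count == 0
                ? HealthCheckResponse.named("orders-channel-nacks").withData("nacked", count).up().build()
                : HealthCheckResponse.named("orders-channel-nacks").withData("nacked", count).down().build();
    }
}
```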
...clipse/microprofile/reactive/messaging/tck/health/HealthCancelledConnectorInChannelTest.java
* Rewording definition of state transition description of liveness and readiness
* Move config key to messaging context
* Document test cases
* Utilize ArchiveExtender

Signed-off-by: Daniel Kec <[email protected]>

* Revert trailing whitespaces

Signed-off-by: Daniel Kec <[email protected]>
I did a quick first pass.
In addition to the comments I made in the code, the TCK does not seem to cover Emitter, unmanaged channels, and nacked messages.
The TCK relies on HTTP, which is a bit annoying, as it makes running the TCK much more framework-specific. Typically, I would need to use Quarkus (and so create a cycle between our implementation and Quarkus), instead of just Weld.
@@ -1331,6 +1331,44 @@ The connector is responsible for the acknowledgment (positive or negative) of th
* An outgoing connector must acknowledge the incoming `org.eclipse.microprofile.reactive.messaging.Message` once it has successfully dispatched the message.
* An outgoing connector must acknowledge negatively the incoming `org.eclipse.microprofile.reactive.messaging.Message` if it cannot be dispatched.

== Health

When MicroProfile Reactive Messaging is used in an environment where MicroProfile Health is enabled, implicit readiness
readiness might be misleading. Do you mean "ready to get traffic" or "started successfully"? Kubernetes has readiness and startup checks, and readiness focuses on traffic, which most of the time is ignored by messaging protocols (as the probes are used for HTTP request routing).
I've been thinking about use-cases of producing/consuming RS channels with websocket or SSE.
== Health

When MicroProfile Reactive Messaging is used in an environment where MicroProfile Health is enabled, implicit readiness
and liveness checks `messaging` are produced.
`messaging` might not be explicit enough, or may conflict with user checks. What about `MicroProfile Reactive Messaging`?
You are right, I just can't find any definitive convention for naming health checks. Maybe `mp-reactive-messaging` would make it more machine readable?
]
}
----
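(The spec's example is clipped in the quote above. For orientation only, a response in the MP Health 3.x wire format would look roughly like this; the exact payload in the PR may differ:)

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "messaging",
      "status": "UP"
    }
  ]
}
```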
In the case we don't pass the checks, what extra data should we provide? None won't be very useful (even if these checks tend to be consumed by machines that ignore the extra data). Should we list the unsatisfied channels? Is it implementation-specific?
I was playing with that idea but couldn't come up with any sufficient reason for requiring channel listing in the checks.
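If channel listing were required, the per-channel states could ride along in the check's `data` section, which MP Health already supports. A purely hypothetical sketch; the channel names and the idea of putting them in `data` are not part of this proposal:

```json
{
  "status": "DOWN",
  "checks": [
    {
      "name": "messaging",
      "status": "DOWN",
      "data": {
        "my-channel": "DOWN",
        "other-channel": "UP"
      }
    }
  ]
}
```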
=== Liveness

State of the liveness check `messaging` is UP until `cancel` or `onError` signals are detected on any of the messaging
channels.
Not sure about cancellations. We have seen applications using cancellations as part of their regular lifecycle.
Failure definitely.
We have also seen use-cases with failures as part of the regular lifecycle (onErrorResumeWith, ...), but as I see it, a channel is a simple processor which does not mutate the stream (except for un/wrapping), so any `cancel` or `onError` signal passing through the channel renders it DOWN for good.
The problem with channels where retries/re-subscriptions are expected should be solvable with exclusions (inclusions?).
State of the liveness check `messaging` is UP until `cancel` or `onError` signals are detected on any of the messaging
channels.
Wondering about "all channels", while it seems to only require checking the final subscribers (subscriber methods, outgoing channels).
Also, about `@Channel` injection (unmanaged stream): it might be hard to track, as in this case the subscription is delegated to the user. So, all the used channels may not receive their subscription, because the user has not subscribed yet (and it does not mean it's not ready).
Maybe it should focus on (a sketch of the subscription-based readiness part follows this list):
- subscriber methods:
  - for readiness, must have received a subscription,
  - for liveness, a received failure would trigger the liveness check to be DOWN
- outgoing connector:
  - for readiness, must have received a subscription,
  - for liveness, a received failure would trigger the liveness check to be DOWN; an unrecoverable failure (like serialization) in the connector logic would also trigger the liveness check to be DOWN
- incoming connector:
  - for liveness, any unrecoverable failure would set the liveness check to DOWN
  - for readiness, it can be tricky, as we may not have a downstream subscription yet (because of unmanaged streams); we have actually seen reports about that in Quarkus

Intermediate channels can recover from failures and implement retry logic, so they should not be reported as DOWN.
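To illustrate the subscription-based readiness part of this: a connector endpoint could flip a readiness flag when `onSubscribe` arrives and a liveness flag on failure. A minimal sketch, assuming plain Reactive Streams types; the `ConnectorEndpoint` class and its wiring into checks are made up for illustration:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Hypothetical subscriber inside an outgoing connector: readiness flips
// to true once a subscription arrives; liveness flips to false on failure.
public class ConnectorEndpoint<T> implements Subscriber<T> {

    private final AtomicBoolean ready = new AtomicBoolean(false);
    private final AtomicBoolean alive = new AtomicBoolean(true);

    @Override
    public void onSubscribe(Subscription subscription) {
        ready.set(true);                      // readiness: subscription received
        subscription.request(Long.MAX_VALUE); // unbounded demand, for brevity
    }

    @Override
    public void onNext(T message) {
        // dispatch the message to the broker here
    }

    @Override
    public void onError(Throwable failure) {
        alive.set(false);                     // liveness: failure received
    }

    @Override
    public void onComplete() {
        // a completed stream is not a failure
    }

    public boolean isReady() {
        return ready.get();
    }

    public boolean isAlive() {
        return alive.get();
    }
}
```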
This would be quite a different angle than the one I was trying to take.
There are not many parts of reactive pipelines that we are actually managing in the MP Messaging implementations, and those are the channels. Channels are basically processors which do not mutate the stream (except for un/wrapping), so if a `cancel` or `onError` signal passes through one, it can be considered failed. Any recovery made by unmanaged streams is just re-routing the stream through a different chain of operators or a retry/resubscribe.
I am not sure that the health of connectors, methods, and unmanaged streams should be in the scope of the MP Messaging implicit health check. I would count those as business code where health and readiness can be monitored with the standard MP Health API.
Also, implicit checks should be consistent; if we differentiate their behavior by pub/sub kind, we risk user confusion. Again, a custom check with exclusion/inclusion can help in such cases.
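A minimal sketch of that channel model, assuming plain Reactive Streams types: a pass-through processor that forwards every signal unchanged and only records terminal signals, so the implicit liveness check can report DOWN. The `ObservedChannel` class is illustrative, and it is simplified (it assumes a single downstream subscriber attached before the upstream starts signalling):

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.reactivestreams.Processor;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Hypothetical channel: a pass-through processor that never mutates the
// stream and only observes terminal signals.
public class ObservedChannel<T> implements Processor<T, T> {

    private final AtomicBoolean alive = new AtomicBoolean(true);
    private Subscriber<? super T> downstream;

    @Override
    public void subscribe(Subscriber<? super T> subscriber) {
        this.downstream = subscriber;
    }

    @Override
    public void onSubscribe(Subscription upstream) {
        downstream.onSubscribe(new Subscription() {
            @Override
            public void request(long n) {
                upstream.request(n);
            }

            @Override
            public void cancel() {
                alive.set(false);  // a cancel signal renders the channel DOWN for good
                upstream.cancel();
            }
        });
    }

    @Override
    public void onNext(T item) {
        downstream.onNext(item);   // pass through, no mutation
    }

    @Override
    public void onError(Throwable failure) {
        alive.set(false);          // an onError signal renders the channel DOWN for good
        downstream.onError(failure);
    }

    @Override
    public void onComplete() {
        downstream.onComplete();
    }

    public boolean isAlive() {
        return alive.get();
    }
}
```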
[source, properties]
----
mp.messaging.health.live.exclude=<channel-name>,<channel-name>
mp.messaging.health.ready.exclude=<channel-name>,<channel-name>
----
If we follow the logic I described above, this would only make sense for subscriber methods. For connectors, it would be better to have a specific attribute and keep the connector configuration co-located.
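For connectors, that co-located configuration could look something like this; both the `acme-kafka` connector name and the `health-readiness-enabled` attribute are purely hypothetical:

```properties
# Hypothetical per-channel attribute, kept next to the rest of the connector
# configuration instead of a global exclude list.
mp.messaging.incoming.my-channel.connector=acme-kafka
mp.messaging.incoming.my-channel.health-readiness-enabled=false
```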
*/
static HealthAssertions create(String url) {
    try {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
Do we have to use HTTP to verify the checks? Doesn't MicroProfile Health expose an API? (Asking the question as I never checked.)
That would be great, but I didn't find another way. The MP Health TCKs use the same approach.
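For the record, MP Health checks are plain CDI beans qualified with `@Liveness`/`@Readiness`, so a test running in-container could in principle resolve and invoke them directly instead of going through HTTP. A sketch of that idea; whether the TCK harness can run in-container, and which MP Health version is on the classpath, are open assumptions here:

```java
import javax.enterprise.inject.spi.CDI;
import javax.enterprise.util.AnnotationLiteral;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

public final class ProgrammaticHealth {

    // Resolve every @Liveness check bean from the running CDI container
    // and invoke it directly, without the HTTP endpoint.
    public static boolean allLive() {
        Iterable<HealthCheck> checks = CDI.current()
                .select(HealthCheck.class, new AnnotationLiteral<Liveness>() { });
        for (HealthCheck check : checks) {
            // getStatus()/Status is the MP Health 3.x accessor; older
            // releases named it getState()/State.
            if (check.call().getStatus() != HealthCheckResponse.Status.UP) {
                return false;
            }
        }
        return true;
    }
}
```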
* </ul>
*/
@RunWith(Arquillian.class)
public class HealthCancelledConnectorInChannelTest extends HealthBase {
The test should only run if MP Health is present. One of the solutions is to load the Readiness API class to check whether MP Health is enabled or not. This comment applies to all of the tests.
IMHO if the feature is required when MP Health is present, MP Health needs to be required by the TCKs. The other way around, the feature would be effectively optional.
The MP RM feature does not need MP Health to be enabled; this integration only comes into action if MP Health is enabled. My point is that the TCK will fail if a vendor does not support MP Health. However, MP Health is not mandatory, hence the TCKs should only be activated if the MP Health API can be loaded in the runtime.
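One way to express that condition with the JUnit 4 setup the TCK already uses (`@RunWith(Arquillian.class)`) is an assumption on the presence of the MP Health API. A sketch, placed in a hypothetical shared base class rather than the existing `HealthBase`:

```java
import org.junit.Assume;
import org.junit.BeforeClass;

public abstract class HealthTckConditions {

    // MP Health is optional for MP Reactive Messaging runtimes, so skip the
    // whole health test class when the MP Health API is absent.
    @BeforeClass
    public static void assumeMpHealthIsPresent() {
        boolean present;
        try {
            Class.forName("org.eclipse.microprofile.health.Readiness");
            present = true;
        } catch (ClassNotFoundException e) {
            present = false;
        }
        Assume.assumeTrue("MicroProfile Health API not found, skipping health TCK", present);
    }
}
```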
Fixes #47
Signed-off-by: Daniel Kec <[email protected]>