DAOS-17111 cart: Use only swim ctx for outage #15924
base: master
Conversation
Ticket title is '[SWIM] Zombie Node Messes Up SWIM'
The "network outage" detection for swim (see the existing crt_swim_update_last_unpack_hlc) uses all crt contexts. In an engine, if the swim context can't receive or send any message, while at least one other context can and does receive messages constantly, then swim will not detect any "network outage", leading to more false positive DEAD events. The purpose of that detection is to find out swim-specific "network outages", where swim may be unable to receive any swim message. This patch changes the detection algorithm to use only the swim crt context: - Remove crt_context.cc_last_unpack_hlc. - Update crt_swim_membs.csm_last_unpack_hlc when receiving swim requests and replies. Signed-off-by: Li Wei <[email protected]>
{
	crt_swim_csm_lock(csm);
	if (csm->csm_last_unpack_hlc < hlc)
		csm->csm_last_unpack_hlc = hlc;
csm->csm_last_unpack_hlc = MAX(csm->csm_last_unpack_hlc, hlc);
If the current style does not have any real problem, I'd like to keep it as is. Also, the current style seems more consistent with similar code in the rest of the file.
It's just better code. After https://github.com/daos-stack/daos/pull/15929/files is merged, I believe most of these places will be fixed.
@@ -1064,7 +1063,7 @@ static int64_t crt_swim_progress_cb(crt_context_t crt_ctx, int64_t timeout_us, v
 	uint64_t max_delay = swim_suspect_timeout_get() * 2 / 3;

 	if (delay > max_delay) {
-		D_ERROR("Network outage detected (idle during "
+		D_ERROR("SWIM network outage detected (idle during "
I tend to think we should never update csm->csm_last_unpack_hlc in this code block, since that variable is the ground truth for measuring how long the current node has been isolated. If we need to calculate a progressive delay, let's introduce another variable for that (see the sketch below).
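A sketch of that suggestion, with csm_last_outage_hlc as a hypothetical new field used only to rate-limit the report (same hypothetical HLC helpers as above):

	if (delay > max_delay) {
		/* Rate-limit the report with a separate, hypothetical field;
		 * csm_last_unpack_hlc stays untouched as the ground truth. */
		if (hlc_to_msec(now_hlc - csm->csm_last_outage_hlc) > max_delay) {
			D_ERROR("SWIM network outage detected (idle during %lu msec)\n", delay);
			csm->csm_last_outage_hlc = now_hlc;
		}
	}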
if (csm->csm_alive_count > 2) {
Do you know what the purpose of this check is?
I don't know about csm_alive_count.

Agreed on "should never update csm_last_unpack_hlc". But I wonder: if we just remove the update below, would this error be printed too frequently? Perhaps this should be addressed in the next PR that changes the handling of outages?
It seems that csm_alive_count is intended to track the number of alive members. (This tracking might have been broken by an old change of mine; actually, I think it was problematic even before that change. I'll work on a fix.) The number must be > 2 here perhaps because if there is only one alive member, that is, myself, then there is no use in detecting an outage, whereas if there are two alive members, then we still need to detect the death of the peer?
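If that reading is right, the bookkeeping would look roughly like this hedged sketch; csm_note_transition() is a hypothetical helper, not the upstream code, and the SWIM_MEMBER_* states come from swim's membership model:

	/* Hypothetical helper: keep csm_alive_count in step with
	 * membership state transitions. */
	static void
	csm_note_transition(struct crt_swim_membs *csm,
			    enum swim_member_status old_st,
			    enum swim_member_status new_st)
	{
		if (old_st != SWIM_MEMBER_ALIVE && new_st == SWIM_MEMBER_ALIVE)
			csm->csm_alive_count++;
		else if (old_st == SWIM_MEMBER_ALIVE && new_st != SWIM_MEMBER_ALIVE)
			csm->csm_alive_count--;
	}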
Yeah, I understand what it tracks, but I don't know why it is required, i.e., why it is critical to have at least two alive members.

In my local fix, I just removed this check, and that fixed all the issues for me in my 3-node cluster. I tend to just remove it.
The "network outage" detection for swim (see the existing crt_swim_update_last_unpack_hlc) uses all crt contexts. In an engine, if the swim context can't receive or send any message, while at least one other context can and does receive messages constantly, then swim will not detect any "network outage", leading to more false positive DEAD events. The purpose of that detection is to find out swim-specific "network outages", where swim may be unable to receive any swim message.
This patch changes the detection algorithm to use only the swim crt context:
Remove crt_context.cc_last_unpack_hlc.
Update crt_swim_membs.csm_last_unpack_hlc when receiving swim requests and replies.
Before requesting gatekeeper:

- Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.

Gatekeeper: