
DAOS-17111 cart: Use only swim ctx for outage #15924

Draft: wants to merge 1 commit into base: master
Conversation

@liw (Contributor) commented Feb 18, 2025

The "network outage" detection for swim (see the existing crt_swim_update_last_unpack_hlc) uses all crt contexts. In an engine, if the swim context can't receive or send any messages while at least one other context receives messages constantly, swim will not detect any "network outage", leading to more false positive DEAD events. The purpose of that detection is to identify swim-specific "network outages", where swim may be unable to receive any swim message.

This patch changes the detection algorithm to use only the swim crt context:

  • Remove crt_context.cc_last_unpack_hlc.

  • Update crt_swim_membs.csm_last_unpack_hlc when receiving swim requests and replies.

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed or there is a reason documented in the PR why it should be force landed and forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


Ticket title is '[SWIM] Zombie Node Messes Up SWIM'
Status is 'Open'
https://daosio.atlassian.net/browse/DAOS-17111

Signed-off-by: Li Wei <[email protected]>
{
	crt_swim_csm_lock(csm);
	if (csm->csm_last_unpack_hlc < hlc)
		csm->csm_last_unpack_hlc = hlc;
Contributor:
csm->csm_last_unpack_hlc = MAX(csm->csm_last_unpack_hlc, hlc);

Contributor Author:
If the current style does not have any real problem, I'd like to keep it as it is. Also, seems like the current style is more consistent with similar code in the rest of the file.

Contributor:
It's just better code. After this PR https://github.com/daos-stack/daos/pull/15929/files is merged, I believe most of the places will be fixed.

@@ -1064,7 +1063,7 @@ static int64_t crt_swim_progress_cb(crt_context_t crt_ctx, int64_t timeout_us, v
 	uint64_t max_delay = swim_suspect_timeout_get() * 2 / 3;

 	if (delay > max_delay) {
-		D_ERROR("Network outage detected (idle during "
+		D_ERROR("SWIM network outage detected (idle during "
Contributor:
I tend to think we should never update csm->csm_last_unpack_hlc in this code block, since that variable is the ground truth to measure how long the current node has been isolated. If we need to calculate progressive delay, let's introduce another variable for that.

	if (csm->csm_alive_count > 2) {

Do you know what the purpose of this check is?

Contributor Author:

I don't know about csm_alive_count.

Agreed on "should never update csm_last_unpack_hlc". But I wonder: if we just remove the update below, would this error be printed too frequently? Perhaps this should be addressed in the next PR that changes the handling of outages?

@liw (Contributor Author) commented Feb 19, 2025:

It seems that csm_alive_count intends to track the number of alive members. (This tracking might have been broken by an old change of mine. Actually, I think it was problematic even before my old change. I'll work on a fix.) The number must be > 2 here perhaps because if there's only one alive member, that is, myself, then there's no use in detecting outage, whereas if there are two alive members, then we need to detect the death of the peer?

@jxiong (Contributor) commented Feb 19, 2025:

Yeah, I understand what it tracks, but I don't know why this is required, i.e. why it is critical to have at least 2 alive members.

In my local fix, I just removed this check and it fixes all the issues for me in my 3-node cluster. I tend to just remove it.
