
DAOS-16982 csum: recalculate checksum on retrying #15786

Open
wants to merge 4 commits into base: jvolivie/disable_target from jxiong/fixes_csum

Conversation

@jxiong (Contributor) commented Jan 24, 2025

This PR fixes the retry logic by actually recalculating the checksum; it also removes the code that incorrectly records an NVMe error.

This is a quick fix ahead of the more complete fix discussed here: https://daos-stack.slack.com/archives/C4SM0RZ54/p1738030213108609

Change-Id: Ib0287851fea4d125eecda48c5ccb3c73ed85b8f8
Signed-off-by: Jinshan Xiong [email protected]

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed or there is a reason documented in the PR why it should be force landed and forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that the user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


Ticket title is 'We should not report checksum errors against the nvme device for key verification'
Status is 'Open'
Labels: 'google-cloud-daos'
https://daosio.atlassian.net/browse/DAOS-16982

@jxiong (Contributor, Author) commented Jan 24, 2025

I have already tested it by manually injecting a failure, and I'm working on turning that into a unit test.

diff --git a/src/object/cli_obj.c b/src/object/cli_obj.c
index fdd9528a0..700abe77e 100644
--- a/src/object/cli_obj.c
+++ b/src/object/cli_obj.c
@@ -5141,6 +5141,7 @@ obj_csum_update(struct dc_object *obj, daos_obj_update_t *args, struct obj_auxi_
 		return 0;

 	if (obj_auxi->csum_retry) {
+	  	D_ERROR("recalculate csum error\n");
 		/* release old checksum result and prepare for new calculation */
 		daos_csummer_free_ic(obj->cob_co->dc_csummer, &obj_auxi->rw_args.iod_csums);
 	}
@@ -5156,6 +5157,7 @@ obj_csum_fetch(const struct dc_object *obj, daos_obj_fetch_t *args,
 	       struct obj_auxi_args *obj_auxi)
 {
 	if (obj_auxi->csum_retry) {
+	  	D_ERROR("recalculate csum error\n");
 		/* release old checksum result and prepare for new calculation */
 		daos_csummer_free_ic(obj->cob_co->dc_csummer, &obj_auxi->rw_args.iod_csums);
 	}
diff --git a/src/object/srv_obj.c b/src/object/srv_obj.c
index 26240540b..aae06ef61 100644
--- a/src/object/srv_obj.c
+++ b/src/object/srv_obj.c
@@ -1366,10 +1366,16 @@ obj_local_rw_internal(crt_rpc_t *rpc, struct obj_io_context *ioc, daos_iod_t *io
 		D_GOTO(out, rc = 0);
 	}

+	static int fail_count = 5;
+
 	rc = csum_verify_keys(ioc->ioc_coc->sc_csummer, &orw->orw_dkey,
 			      orw->orw_dkey_csum, &orw->orw_iod_array,
 			      &orw->orw_oid);
-	if (rc != 0) {
+	if (fail_count > 0) {
+	  	fail_count--;
+		rc = -DER_CSUM;
+	}
+	if (rc != 0) {
 		D_ERROR(DF_C_UOID_DKEY"verify_keys error: "DF_RC"\n",
 			DP_C_UOID_DKEY(orw->orw_oid, &orw->orw_dkey),
 			DP_RC(rc));
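
For the planned unit test, the hardcoded fail_count above could presumably be replaced by DAOS's fault-injection hooks, so the failure only fires when a test arms it via daos_fail_loc_set(). A rough sketch; DAOS_FAIL_CSUM_VERIFY_KEYS is a hypothetical fail-loc constant, not one that exists in the tree:

    rc = csum_verify_keys(ioc->ioc_coc->sc_csummer, &orw->orw_dkey,
                          orw->orw_dkey_csum, &orw->orw_iod_array,
                          &orw->orw_oid);
    /* hypothetical fail loc, armed from the test harness */
    if (DAOS_FAIL_CHECK(DAOS_FAIL_CSUM_VERIFY_KEYS))
            rc = -DER_CSUM;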

@jxiong jxiong requested review from jolivier23 and wangdi1 January 24, 2025 19:49
@jxiong jxiong force-pushed the jxiong/fixes_csum branch from 7f74db4 to bb23b17 on January 24, 2025 20:06
@@ -5140,6 +5141,11 @@ obj_csum_update(struct dc_object *obj, daos_obj_update_t *args, struct obj_auxi_
    if (!obj_csum_dedup_candidate(&obj->cob_co->dc_props, args->iods, args->nr))
            return 0;

    if (obj_auxi->csum_retry) {
            /* Release old checksum result and prepare for new calculation */
            daos_csummer_free_ic(obj->cob_co->dc_csummer, &obj_auxi->rw_args.iod_csums);
A reviewer (Contributor) commented:

I think we probably want to do this after a couple of retries

@jxiong (Author) replied:

It's really easy to add, but I wonder whether that is actually necessary, since a cksum error is a rare event by itself.

How about revising it to:

if (obj_auxi->csum_retry && obj_auxi->csum_retry_cnt > 2) { ... }

Would that work for you?
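
For reference, a minimal sketch of that variant inside obj_csum_update(), reusing the fields from this PR; the threshold of 2 is illustrative:

    if (obj_auxi->csum_retry && obj_auxi->csum_retry_cnt > 2) {
            /* Only recompute after a couple of retries: release the old
             * result so dc_obj_csum_update() calculates a fresh one. */
            daos_csummer_free_ic(obj->cob_co->dc_csummer,
                                 &obj_auxi->rw_args.iod_csums);
    }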

            /* Release old checksum result and prepare for new calculation */
            daos_csummer_free_ic(obj->cob_co->dc_csummer, &obj_auxi->rw_args.iod_csums);
    }

    return dc_obj_csum_update(obj->cob_co->dc_csummer, obj->cob_co->dc_props,
                              obj->cob_md.omd_id, args->dkey, args->iods, args->sgls, args->nr,
                              obj_auxi->reasb_req.orr_singv_los, &obj_auxi->rw_args.dkey_csum,
A reviewer (Contributor) commented:

In the case of the actual issue we saw, it was the dkey_csum that needed to be recalculated; is that happening here?

@jxiong (Author) replied:

Yes, if I read the code correctly, because we release the previous calculation above.

@daosbuild1 (Collaborator): Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/3/execution/node/344/log
@daosbuild1 (Collaborator): Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/3/execution/node/334/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/3/execution/node/345/log
@daosbuild1 (Collaborator): Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/3/execution/node/480/log
@daosbuild1 (Collaborator): Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/3/execution/node/339/log
@daosbuild1 (Collaborator): Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/4/execution/node/373/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/4/execution/node/345/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/4/execution/node/338/log
@daosbuild1 (Collaborator): Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/4/execution/node/335/log
@daosbuild1 (Collaborator): Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/4/execution/node/342/log
@daosbuild1 (Collaborator): Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/5/execution/node/373/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/5/execution/node/319/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/5/execution/node/345/log
@daosbuild1 (Collaborator): Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/5/execution/node/342/log
@daosbuild1 (Collaborator): Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/6/execution/node/374/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/6/execution/node/371/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/6/execution/node/356/log
@daosbuild1 (Collaborator): Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/6/execution/node/355/log
@daosbuild1 (Collaborator): Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/6/execution/node/359/log
@daosbuild1 (Collaborator): Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/7/execution/node/373/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/7/execution/node/348/log
@daosbuild1 (Collaborator): Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15786/7/execution/node/347/log

@jxiong jxiong requested review from a team as code owners February 4, 2025 19:25
@jxiong jxiong force-pushed the jxiong/fixes_csum branch from c63ecc9 to e119758 on February 4, 2025 19:26
@daltonbohning daltonbohning removed request for a team February 4, 2025 19:29
@jxiong jxiong force-pushed the jxiong/fixes_csum branch from e119758 to f0a07e6 on February 5, 2025 02:02
This PR fixes the retry logic by actually recalculating the checksum; it
also removes the code that incorrectly records an NVMe error.

Run-GHA: true

Change-Id: Ib0287851fea4d125eecda48c5ccb3c73ed85b8f8
Signed-off-by: Jinshan Xiong <[email protected]>
@jxiong jxiong force-pushed the jxiong/fixes_csum branch from f0a07e6 to eb6a7d1 on February 5, 2025 02:05
github-actions bot commented Feb 5, 2025

Functional on EL 8.8 Test Results

131 tests: 127 ✅ passed, 4 💤 skipped, 0 ❌ failed
41 suites in 41 files, 1h 30m 53s ⏱️

Results for commit eb6a7d1.

jolivier23 previously approved these changes Feb 5, 2025
@jxiong (Contributor, Author) commented Feb 6, 2025

@wangdi1 @liuxuezhao can you please take a look?

                    DP_C_UOID_DKEY(orw->orw_oid, &orw->orw_dkey),
                    DP_RC(rc));
            if (rc == -DER_CSUM)
                    obj_log_csum_err();
A reviewer (Contributor) commented:

Perhaps we should fix this in a separate patch?

wangdi1 previously approved these changes Feb 13, 2025
@jolivier23 jolivier23 dismissed stale reviews from wangdi1 and themself via 3d5f0a8 February 13, 2025 22:49
@jolivier23 jolivier23 changed the base branch from master to jvolivie/disable_target February 13, 2025 22:50
Signed-off-by: Jeff Olivier <[email protected]>
/* Retry fetch on alternative shard */
if (obj_auxi->opc == DAOS_OBJ_RPC_FETCH) {
if (task->dt_result == -DER_CSUM)
if ((obj_auxi->opc == DAOS_OBJ_RPC_FETCH ||
A reviewer (Contributor) commented:

Now, if a FETCH gets DER_CSUM:

  1. For a replicated object it will try another replica, and once all replicas have been retried it will fail; see obj_retry_next_shard(). So the "csum_retry_cnt < MAX_CSUM_RETRY" comparison looks questionable for fetch: if #replicas < MAX_CSUM_RETRY the check is useless (always true), and if #replicas > MAX_CSUM_RETRY the fetch actually still has a chance to succeed, but the code will fail it.
  2. For an EC object it will mark the shard as failed (obj_auxi_add_failed_tgt()) and do an EC degraded fetch, so the "csum_retry_cnt < MAX_CSUM_RETRY" comparison is also not really useful, because once the retry count exceeds the number of parity shards it will fail anyway.

The retry for UPDATE is simpler, it just retries up to that count, so the check is valid there.

So it looks to me that the change here makes the code somewhat inconsistent with the real behavior (for FETCH). Do you think that is fine? I'll leave a -1 for now, thanks.

@jxiong (Author) replied Feb 14, 2025:

csum_retry_cnt < MAX_CSUM_RETRY will apply to both read and write. For read, even though an object may have more than 10 replicas (MAX_CSUM_RETRY is set to 10 at this point), if it returns the same error 10 times in a row, something may have gone terribly wrong. Either way, I think it is reasonable to limit the rw RPC to a bounded number of retries. If there were bugs in our code, this would prevent unlimited retrying, which we have seen in production.

I actually have a comment below explaining why I did this. Otherwise, please suggest a fix.
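
For context, this is the shape of the check being discussed; an assumed reconstruction of the truncated hunk above rather than a verbatim quote of the patch:

    if ((obj_auxi->opc == DAOS_OBJ_RPC_FETCH ||
         obj_auxi->opc == DAOS_OBJ_RPC_UPDATE) &&
        task->dt_result == -DER_CSUM &&
        obj_auxi->csum_retry_cnt < MAX_CSUM_RETRY) {
            /* bound the number of csum retries for both fetch and update */
            obj_auxi->csum_retry_cnt++;
            obj_auxi->csum_retry = 1;
    }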

The reviewer (Contributor) replied:

I had thought that FETCH need not consider csum_retry_cnt, since it has its own separate control.
Thinking about it again, as long as MAX_CSUM_RETRY will not be made smaller, it is probably fine.

@@ -5140,6 +5140,12 @@ obj_csum_update(struct dc_object *obj, daos_obj_update_t *args, struct obj_auxi_
    if (!obj_csum_dedup_candidate(&obj->cob_co->dc_props, args->iods, args->nr))
            return 0;

    if (obj_auxi->csum_retry) {
A reviewer (Contributor) commented:

Just to confirm: would the original code cause a memory leak (since it did not call daos_csummer_free_ci() before)?
And why is daos_csummer_free_ci() only done for csum_retry; is it not needed for other retries like nvme_io_err/tx_uncertain?

@jxiong (Author) replied Feb 14, 2025:

This is not about a memory leak. The checksum computation code dc_obj_csum_update() uses obj_auxi->rw_args.dkey_csum and obj_auxi->rw_args.iod_csums to check whether the checksum has already been computed.

By freeing the previous result, we cause the checksum to be recomputed, which is what we want for the sake of this fix. The reason we only do this for the checksum retry is that the buffer may get updated after the checksum has been computed, and retrying without recomputation won't really help because it will certainly hit the same error.
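
A simplified illustration of the guard described above (a paraphrase of the behavior, not the verbatim tree code):

    /* dc_obj_csum_update() reuses a previously computed result */
    if (obj_auxi->rw_args.dkey_csum != NULL &&
        obj_auxi->rw_args.iod_csums != NULL)
            return 0;       /* skip recalculation */
    /* otherwise dkey_csum and iod_csums are computed afresh, which is
     * why freeing them on csum_retry forces the recompute */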

We're actually considering changing the NVMe checksum error to nvme_io_err, so that csum_err is dedicated to network checksum errors. The discussion was initiated here: https://daos-stack.slack.com/archives/C4SM0RZ54/p1738030213108609

@liuxuezhao (Contributor) replied Feb 15, 2025:

OK, thanks for the explanation.

On "changing the NVMe checksum error to nvme_io_err":
"For a DER_NVME_IO error, it should try the next replica if it has one; otherwise, it should just send the RPC to the same replica again and hopefully the network error would disappear."
That would require changing quite a lot of code and would make the logic more complicated.
It depends on the probability of a CSUM error caused by the network transfer: is it high or very rare, and when it happens, can it be corrected by retrying the transfer to the same target?
It looks to me that this case (a purely network-caused csum error that is fixed by re-transferring) should be very rare. On the other hand, the RPC layer (such as Mercury) can verify the checksum internally during the transfer and will fail the underlying RPC.
So I wonder whether it is worth introducing much complexity for an assumption that does not look true.
(I also replied in the Slack thread; we may try to simplify the retry logic and avoid complexity where possible.)
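
A hypothetical sketch of the quoted proposal, for illustration only; the enum and helper name are invented and not in the tree:

    enum retry_target { RETRY_NEXT_REPLICA, RETRY_SAME_TARGET };

    static enum retry_target
    der_nvme_io_retry(unsigned int nr_replicas)
    {
            /* media error: try another shard when one exists, otherwise
             * resend to the same target and hope it was transient */
            return nr_replicas > 1 ? RETRY_NEXT_REPLICA : RETRY_SAME_TARGET;
    }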

