OSD-26415: Allow pull-secret to be updated w/o transferring ownership. #705

Open
wants to merge 7 commits into master

Conversation

@nephomaniac (Contributor) commented Mar 26, 2025

This PR allows a user to update a cluster's pull secret without transferring ownership, for both Classic and HCP clusters.

  • ~~This adds a new CLI arg '--pull-secret-only' (bool) which is mutually exclusive with '--new-owner'.~~
  • This adds a new command, 'osdctl cluster update-pullsecret'. This command is a wrapper that reuses the transfer-owner pull-secret update functions and general flow.
  • When only the pull secret is being updated, the utility now exits after the pull secret has been updated with the account's OCM accessToken values.
  • The pull-secret-only operation prompts the user to choose whether to send an internal service log before the operation begins, and prompts to send a customer service log after the operation completes.
  • This adds programmatic checks/comparisons of the resulting on-cluster pull-secret auths for the end user to review (a sketch of this check follows the usage example below).
  • The new programmatic checks/comparisons may remove the need to print the secret to the terminal for visual comparison. Previously this utility printed the secret data to the terminal; this PR instead warns and prompts the user to choose whether to print the raw data for optional visual inspection.
  • Additional information and improved formatting of errors.

Example usage:

osdctl cluster update-pullsecret -h
Update cluster pullsecret with current OCM accessToken data(to be done by Region Lead)

Usage:
  osdctl cluster update-pullsecret [flags]

Examples:

  # Update Pull Secret's OCM access token data
  osdctl cluster update-pullsecret --cluster-id 1kfmyclusteristhebesteverp8m --reason "Update PullSecret per pd or jira-id"
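
For illustration, here is a minimal Go sketch of the kind of per-auth comparison described above. The type and function names (AuthEntry, compareAuths) are hypothetical and not the actual osdctl implementation; the real checks live in the transfer-owner code path. The cluster pull secret is treated as a superset, since it can carry extra auths (for example an ECR registry) that are not in the OCM access token.

package main

import "fmt"

// AuthEntry mirrors one entry under "auths" in a pull secret (illustrative type).
type AuthEntry struct {
	Auth  string `json:"auth"`
	Email string `json:"email"`
}

// compareAuths checks that every auth section expected from the OCM access token
// is present in the cluster pull secret with a matching token and email.
// Extra auths in the cluster secret are ignored (the secret may be a superset).
func compareAuths(expected, actual map[string]AuthEntry) bool {
	pass := true
	for registry, want := range expected {
		got, ok := actual[registry]
		if !ok {
			fmt.Printf("Auth '%s' - missing from cluster pull-secret\n", registry)
			pass = false
			continue
		}
		if got.Auth == want.Auth {
			fmt.Printf("Auth '%s' - tokens match\n", registry)
		} else {
			fmt.Printf("Auth '%s' - token mismatch\n", registry)
			pass = false
		}
		if got.Email == want.Email {
			fmt.Printf("Auth '%s' - emails match\n", registry)
		} else {
			fmt.Printf("Auth '%s' - email mismatch\n", registry)
			pass = false
		}
	}
	return pass
}

func main() {
	// Toy data for the sketch; osdctl would build these maps from OCM and the cluster.
	expected := map[string]AuthEntry{
		"quay.io": {Auth: "token", Email: "owner@example.com"},
	}
	actual := map[string]AuthEntry{
		"quay.io":        {Auth: "token", Email: "owner@example.com"},
		"extra.registry": {Auth: "other", Email: ""}, // extra cluster-only auths are ignored
	}
	fmt.Println("PASS:", compareAuths(expected, actual))
}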

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 26, 2025
@openshift-ci-robot commented Mar 26, 2025

@nephomaniac: This pull request references OSD-26415 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

This PR attempts to allow a user to update a cluster's pull secret without transferring ownership for both Classic and HCP clusters.

  • This adds a new CLI arg '--pull-secret-only' (bool) which is mutually exclusive with '--new-owner'.
  • When '--pull-secret-only' is used, the utility will now exit after the pull secret is updated with the account's OCM accessToken values.
  • The pull-secret-only operation prompts the user to choose whether to send an internal service log before the operation begins, and prompts to send a customer service log after the operation completes.
  • This adds additional programmatic checks/comparisons of the resulting on-cluster pull-secret auths for the end user to review, in addition to the previously printed raw data, and adds indented/pretty-printed data for visual comparison and user confirmation(s).
  • Additional information and formatting of errors.

Example usage:

osdctl -S cluster transfer-owner -C 2ho9npdt3oeq3t604ria3lq3vcABC123  --reason "testing [OSD-26415](https://issues.redhat.com//browse/OSD-26415)" --pull-secret-only

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from Tafhim and typeid March 26, 2025 01:57
openshift-ci bot commented Mar 26, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nephomaniac
Once this PR has been reviewed and has the lgtm label, please assign joshbranham for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@joshbranham (Contributor)

@nephomaniac would you mind posting the output of the command running to the PR, just as an added validation?

@nephomaniac (Contributor, Author)

Example: pull-secret rotation without ownership transfer...
(note: the stack trace is addressed in PR #704)

(⎈|api-maclarkstatest-4...:default)➜  osdctl/ git:(OSD-26415) ./osdctl -S cluster transfer-owner -C 2hpl2shfhj34boahjiqr8j85i9ilr6k7  --reason "testing OSD-26415" --pull-secret-only
Old username:'maclark.openshift'
Given cluster is HCP, start to proceed the HCP owner transfer
Gathering all required information for the cluster transfer...
Using old account values. OwnerAccount:'maclark.openshift'
old orgID:'1HELaFOf2YHWwwvt3XMbT5Mja7M', new orgID:'1HELaFOf2YHWwwvt3XMbT5Mja7M'
Internal SL Being Sent
INFO[0006] The following clusters match the given parameters:
Name                ID                                 State               Version             Cloud Provider      Region
maclarkstatest      2hpl2shfhj34boahjiqr8j85i9ilr6k7   ready               4.18.5              aws                 us-west-2

INFO[0007] The following template will be sent:
{
  "severity":"Info",
  "service_name":"SREManualAction",
  "summary":"INTERNAL ONLY, DO NOT SHARE WITH CUSTOMER",
  "description":"Pull-secret update initiated. UserName:'1455657', OwnerID:'maclark.openshift'",
  "internal_only":true,
  "event_stream_id":"",
  "doc_references":null
}
Continue? (y/N): y
INFO[0013] Success: 1, Failed: 0

INFO[0013] Successful clusters:
ID                                     Status
055a7a55-d0cc-4348-b2d5-74ef5bbab593   Message has been successfully sent to 055a7a55-d0cc-4348-b2d5-74ef5bbab593

INFO[0013] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0013] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
INFO[0015] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0015] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
Pull Secret data(Indented)...

{
 "auths": {
  "cloud.openshift.com": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "quay.io": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "registry.connect.redhat.com": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "registry.redhat.io": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  }
 }
}

Please review Pull Secret data to be used for update(after formatting):
{"auths":{"cloud.openshift.com":{"auth":"**REDACTED","email":"[email protected]"},"quay.io":{"auth":"**REDACTED","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED","email":"[email protected]"}}}

Do you want to continue? (yes/no): yes
updateManifestwork begin...
get() Manifestwork...
update() Manifestwork...
Manifest work updated.
Sleeping 60 seconds here to allow secret to be synced on guest cluster
Create cluster kubecli...
INFO[0083] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0083] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
	>  goroutine 1 [running]:
	>  runtime/debug.Stack()
	>  	/opt/homebrew/Cellar/go/1.23.2/libexec/src/runtime/debug/stack.go:26 +0x64
	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xf4
	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0x1400065b800, {0x104e8394a, 0x14})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:147 +0x34
	>  github.com/go-logr/logr.Logger.WithName({{0x106c1a208, 0x1400065b800}, 0x0}, {0x104e8394a?, 0x14000186d00?})
	>  	/Users/maclark/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:345 +0x40
	>  sigs.k8s.io/controller-runtime/pkg/client.newClient(0x0?, {0x0, 0x0, {0x0, 0x0}, 0x0, 0x0})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:118 +0xac
	>  sigs.k8s.io/controller-runtime/pkg/client.New(0x14000e861b0?, {0x0, 0x0, {0x0, 0x0}, 0x0, 0x0})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:98 +0x44
	>  github.com/openshift/osdctl/cmd/common.GetKubeConfigAndClient({0x14000e9af60, 0x20}, {0x1400100aa00, 0x2, 0x2})
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/common/helpers.go:56 +0x188
	>  github.com/openshift/osdctl/cmd/cluster.(*transferOwnerOptions).run(0x1400016fb00)
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/cluster/transferowner.go:897 +0x287c
	>  github.com/openshift/osdctl/cmd/cluster.newCmdTransferOwner.func2(0x14000048908?, {0x14000e9de00?, 0x4?, 0x104e36931?})
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/cluster/transferowner.go:84 +0x20
	>  github.com/spf13/cobra.(*Command).execute(0x14000048908, {0x14000e9dda0, 0x6, 0x6})
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1019 +0x814
	>  github.com/spf13/cobra.(*Command).ExecuteC(0x14000856308)
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1148 +0x350
	>  github.com/spf13/cobra.(*Command).Execute(0x106b98298?)
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1071 +0x1c
	>  main.main()
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/main.go:23 +0xd0
Cluster kubecli created

Comparing pull-secret to expected auth sections...
Auth 'cloud.openshift.com' - tokens match
Auth 'cloud.openshift.com' - emails match
Auth 'quay.io' - tokens match
Auth 'quay.io' - emails match
Auth 'registry.connect.redhat.com' - tokens match
Auth 'registry.connect.redhat.com' - emails match
Auth 'registry.redhat.io' - tokens match
Auth 'registry.redhat.io' - emails match

Comparison shows subset of Auths from OCM AuthToken have matching tokens + emails in cluster pull-secret. PASS
Actual Cluster Pull Secret:
{"auths":{"950916221866.dkr.ecr.us-east-1.amazonaws.com":{"auth":"**REDACTED==","email":""},"cloud.openshift.com":{"auth":"**REDACTED==","email":"[email protected]"},"quay.io":{"auth":"**REDACTED==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED==","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED==","email":"[email protected]"}}}

Expected Auths from OCM AccessToken expected to be present in Pull Secret (note this can be a subset):
{"auths":{"cloud.openshift.com":{"auth":"**REDACTED==","email":"[email protected]"},"quay.io":{"auth":"**REDACTED==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED==","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED==","email":"[email protected]"}}}

Does the actual pull secret match your expectation? (yes/no): yes
Pull secret verification (by user) successful.
Notify the customer the pull-secret update is completed. Sending service log.
INFO[0094] The following clusters match the given parameters:
Name                ID                                 State               Version             Cloud Provider      Region
maclarkstatest      2hpl2shfhj34boahjiqr8j85i9ilr6k7   ready               4.18.5              aws                 us-west-2

WARN[0094] A service log has been submitted in last hour
Description: Pull-secret update initiated. UserName:'1455657', OwnerID:'maclark.openshift'
Continue? (y/N): y
INFO[0096] The following template will be sent:
{
  "severity":"Info",
  "service_name":"SREManualAction",
  "summary":"Cluster pull secret updated",
  "description":"The pull secret associated with account '2g9OLHPkwDDcXvq2mt7kjfIQ0gf' has been rotated by Red Hat SRE in order to ensure that your cluster has successful connectivity to the Red Hat Registry and OpenShift Cluster Manager. Should you wish, you may download the updated copy of your pull secret from https://console.redhat.com/openshift/downloads#tool-pull-secret#. This is an informational notice and no further action is required by you.",
  "internal_only":false,
  "event_stream_id":"",
  "doc_references":null
}
Continue? (y/N): y
INFO[0097] Success: 1, Failed: 0

INFO[0097] Successful clusters:
ID                                     Status
055a7a55-d0cc-4348-b2d5-74ef5bbab593   Message has been successfully sent to 055a7a55-d0cc-4348-b2d5-74ef5bbab593

Pull secret update complete, exiting successfully

@nephomaniac (Contributor, Author)

Example output from an ownership transfer.
Note: I do not have permissions to complete this, but this example shows that the error is now presented to the user.
Note: the stack trace is addressed in PR #704.

(⎈|api-maclarkstatest-4...:default)➜  osdctl/ git:(OSD-26415) ✗ ./osdctl -S cluster transfer-owner -C 2hpl2shfhj34boahjiqr8j85i9ilr6k7  --reason "testing OSD-26415"  --new-owner 2g9OLHPkwDDcXvq2mt7kjfIQ0gf
Given cluster is HCP, start to proceed the HCP owner transfer
Gathering all required information for the cluster transfer...
old orgID:'1HELaFOf2YHWwwvt3XMbT5Mja7M', new orgID:'1HELaFOf2YHWwwvt3XMbT5Mja7M'
Notify the customer before ownership transfer commences. Sending service log.
INFO[0007] The following clusters match the given parameters:
Name                ID                                 State               Version             Cloud Provider      Region
maclarkstatest      2hpl2shfhj34boahjiqr8j85i9ilr6k7   ready               4.18.5              aws                 us-west-2

WARN[0007] A service log has been submitted in last hour
Description: The pull secret associated with account '2g9OLHPkwDDcXvq2mt7kjfIQ0gf' has been rotated by Red Hat SRE in order to ensure that your cluster has successful connectivity to the Red Hat Registry and OpenShift Cluster Manager. Should you wish, you may download the updated copy of your pull secret from https://console.redhat.com/openshift/downloads#tool-pull-secret#. This is an informational notice and no further action is required by you.
WARN[0007] A service log has been submitted in last hour
Description: Pull-secret update initiated. UserName:'1455657', OwnerID:'maclark.openshift'
Continue? (y/N): y
INFO[0010] The following template will be sent:
{
  "severity":"Info",
  "service_name":"SREManualAction",
  "summary":"Cluster Ownership Transfer Initiated",
  "description":"Your requested cluster ownership transfer has been initiated. We expect the cluster to be available during this time.",
  "internal_only":false,
  "event_stream_id":"",
  "doc_references":null
}
Continue? (y/N): y
INFO[0011] Success: 1, Failed: 0

INFO[0011] Successful clusters:
ID                                     Status
055a7a55-d0cc-4348-b2d5-74ef5bbab593   Message has been successfully sent to 055a7a55-d0cc-4348-b2d5-74ef5bbab593

Internal SL Being Sent
INFO[0012] The following clusters match the given parameters:
Name                ID                                 State               Version             Cloud Provider      Region
maclarkstatest      2hpl2shfhj34boahjiqr8j85i9ilr6k7   ready               4.18.5              aws                 us-west-2

WARN[0013] A service log has been submitted in last hour
Description: Your requested cluster ownership transfer has been initiated. We expect the cluster to be available during this time.
WARN[0013] A service log has been submitted in last hour
Description: The pull secret associated with account '2g9OLHPkwDDcXvq2mt7kjfIQ0gf' has been rotated by Red Hat SRE in order to ensure that your cluster has successful connectivity to the Red Hat Registry and OpenShift Cluster Manager. Should you wish, you may download the updated copy of your pull secret from https://console.redhat.com/openshift/downloads#tool-pull-secret#. This is an informational notice and no further action is required by you.
WARN[0013] A service log has been submitted in last hour
Description: Pull-secret update initiated. UserName:'1455657', OwnerID:'maclark.openshift'
Continue? (y/N): y
INFO[0014] The following template will be sent:
{
  "severity":"Info",
  "service_name":"SREManualAction",
  "summary":"INTERNAL ONLY, DO NOT SHARE WITH CUSTOMER",
  "description":"From user 'maclark.openshift' in Red Hat account 1455657 => user 'maclark.openshift' in Red Hat account 1455657.",
  "internal_only":true,
  "event_stream_id":"",
  "doc_references":null
}
Continue? (y/N): y
INFO[0015] Success: 1, Failed: 0

INFO[0015] Successful clusters:
ID                                     Status
055a7a55-d0cc-4348-b2d5-74ef5bbab593   Message has been successfully sent to 055a7a55-d0cc-4348-b2d5-74ef5bbab593

INFO[0016] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0016] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
INFO[0018] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0018] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
Pull Secret data(Indented)...

{
 "auths": {
  "cloud.openshift.com": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "quay.io": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "registry.connect.redhat.com": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  },
  "registry.redhat.io": {
   "auth": "**REDACTED",
   "email": "[email protected]"
  }
 }
}

Please review Pull Secret data to be used for update(after formatting):
{"auths":{"cloud.openshift.com":{"auth":"**REDACTED","email":"[email protected]"},"quay.io":{"auth":"**REDACTED","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED","email":"[email protected]"}}}

Do you want to continue? (yes/no): yes

updateManifestwork begin...
get() Manifestwork...
update() Manifestwork...
Manifest work updated.
Sleeping 60 seconds here to allow secret to be synced on guest cluster
Create cluster kubecli...
INFO[0085] Backplane URL retrieved via OCM environment: https://api.stage.backplane.openshift.com
INFO[0085] No PagerDuty API Key configuration available. This will result in failure of `ocm-backplane login --pd <incident-id>` command.
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
	>  goroutine 1 [running]:
	>  runtime/debug.Stack()
	>  	/opt/homebrew/Cellar/go/1.23.2/libexec/src/runtime/debug/stack.go:26 +0x64
	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xf4
	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0x1400065b640, {0x106d3794a, 0x14})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:147 +0x34
	>  github.com/go-logr/logr.Logger.WithName({{0x108ace208, 0x1400065b640}, 0x0}, {0x106d3794a?, 0x14001202900?})
	>  	/Users/maclark/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:345 +0x40
	>  sigs.k8s.io/controller-runtime/pkg/client.newClient(0x0?, {0x0, 0x0, {0x0, 0x0}, 0x0, 0x0})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:118 +0xac
	>  sigs.k8s.io/controller-runtime/pkg/client.New(0x14000b9a240?, {0x0, 0x0, {0x0, 0x0}, 0x0, 0x0})
	>  	/Users/maclark/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:98 +0x44
	>  github.com/openshift/osdctl/cmd/common.GetKubeConfigAndClient({0x14000e5c100, 0x20}, {0x14000a60900, 0x2, 0x2})
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/common/helpers.go:56 +0x188
	>  github.com/openshift/osdctl/cmd/cluster.(*transferOwnerOptions).run(0x140005babd0)
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/cluster/transferowner.go:897 +0x287c
	>  github.com/openshift/osdctl/cmd/cluster.newCmdTransferOwner.func2(0x14000d8c008?, {0x140005255e0?, 0x4?, 0x106cea931?})
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/cmd/cluster/transferowner.go:84 +0x20
	>  github.com/spf13/cobra.(*Command).execute(0x14000d8c008, {0x14000525500, 0x7, 0x7})
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1019 +0x814
	>  github.com/spf13/cobra.(*Command).ExecuteC(0x14000325b08)
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1148 +0x350
	>  github.com/spf13/cobra.(*Command).Execute(0x108a4c298?)
	>  	/Users/maclark/go/pkg/mod/github.com/spf13/[email protected]/command.go:1071 +0x1c
	>  main.main()
	>  	/Users/maclark/sandbox/osdctl_new/osdctl/main.go:23 +0xd0
Cluster kubecli created

Comparing pull-secret to expected auth sections...
Auth 'cloud.openshift.com' - tokens match
Auth 'cloud.openshift.com' - emails match
Auth 'quay.io' - tokens match
Auth 'quay.io' - emails match
Auth 'registry.connect.redhat.com' - tokens match
Auth 'registry.connect.redhat.com' - emails match
Auth 'registry.redhat.io' - tokens match
Auth 'registry.redhat.io' - emails match

Comparison shows subset of Auths from OCM AuthToken have matching tokens + emails in cluster pull-secret. PASS
Actual Cluster Pull Secret:
{"auths":{"950916221866.dkr.ecr.us-east-1.amazonaws.com":{"auth":"**REDACTED==","email":""},"cloud.openshift.com":{"auth":"**REDACTED==","email":"[email protected]"},"quay.io":{"auth":"**REDACTED==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED==","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED==","email":"[email protected]"}}}

Expected Auths from OCM AccessToken expected to be present in Pull Secret (note this can be a subset):
{"auths":{"cloud.openshift.com":{"auth":"**REDACTED==","email":"[email protected]"},"quay.io":{"auth":"**REDACTED==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"**REDACTED==","email":"[email protected]"},"registry.redhat.io":{"auth":"**REDACTED==","email":"[email protected]"}}}

Does the actual pull secret match your expectation? (yes/no): yes
Pull secret verification (by user) successful.

Transfer cluster: 		'055a7a55-d0cc-4348-b2d5-74ef5bbab593' (maclarkstatest)
from user 			'2g9OLHPkwDDcXvq2mt7kjfIQ0gf' to '2g9OLHPkwDDcXvq2mt7kjfIQ0gf'
Continue? (y/N): y
Error, Patch Request Response: '{"code":"ACCT-MGMT-4","href":"/api/accounts_mgmt/v1/errors/4","id":"4","kind":"Error","operation_id":"1e9e85d3-8fdf-423c-928b-a456892891d3","reason":"Permission Denied"}'
error: Subscription request to patch creator failed with status: 403, err: '{"code":"ACCT-MGMT-4","href":"/api/accounts_mgmt/v1/errors/4","id":"4","kind":"Error","operation_id":"1e9e85d3-8fdf-423c-928b-a456892891d3","reason":"Permission Denied"}'

@nephomaniac (Contributor, Author)

/label tide/merge-method-squash

@openshift-ci openshift-ci bot added the tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. label Apr 23, 2025

fmt.Println("Actual Cluster Pull Secret:")
blue.Println("Actual Cluster Pull Secret:")
fmt.Println(string(pullSecretData))
Member

I know this wasn't introduced in this PR, but I'm a bit surprised that we just print the decoded pull-secret to stdout like this

Member

@nephomaniac - you may have the most comprehension around this subcommand at the moment, do you know if this is necessary for the user to complete an owner transfer/pull-secret update task?

Member

Mostly wondering if this is worth hiding behind a -v flag or something in a follow-up PR. That way we can still use the output if/when its needed, but it won't appear by default on screenshares or copy/pasted output

@nephomaniac (Contributor, Author)

This caught my attention too. I'm not sure why this was originally printed to the terminal, but I had assumed it was to allow the user one last chance to validate the marshaled data(?).
I can put this behind a -v flag, or maybe prompt the user to display/hide it.
I've added some programmatic comparisons, and the tool now displays the output of those in addition to the full secret dump. I think the results of this comparison can replace printing to the screen in most cases.

@nephomaniac (Contributor, Author) commented May 21, 2025

I've left the option to dump the secret data to the terminal, prompting the user (with a warning) to optionally do so. The newly added per-auth-section checks might provide enough information for the user executing this to make an informed decision about the status without the raw secret being printed to the terminal.
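
For illustration, a minimal Go sketch of the warn-and-prompt gating described here. The helper names (confirm, maybeDumpPullSecret) are assumptions for this sketch, not the actual osdctl functions:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirm asks a yes/no question on stdin and returns true only for "y" or "yes".
func confirm(question string) bool {
	fmt.Printf("%s (y/N): ", question)
	reader := bufio.NewReader(os.Stdin)
	answer, _ := reader.ReadString('\n')
	answer = strings.ToLower(strings.TrimSpace(answer))
	return answer == "y" || answer == "yes"
}

// maybeDumpPullSecret warns the user and only prints the raw decoded pull
// secret if they explicitly opt in.
func maybeDumpPullSecret(pullSecretData []byte) {
	fmt.Println("WARNING: the decoded pull secret contains registry credentials.")
	if confirm("Print the raw pull-secret data for visual inspection?") {
		fmt.Println(string(pullSecretData))
		return
	}
	fmt.Println("Skipping raw pull-secret output; relying on the per-auth checks instead.")
}

func main() {
	// Toy payload for the sketch; in osdctl this would come from the cluster / OCM.
	maybeDumpPullSecret([]byte(`{"auths":{"quay.io":{"auth":"**REDACTED","email":""}}}`))
}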

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 15, 2025
@devppratik (Contributor)

Hello @nephomaniac, any update on this PR?

@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 21, 2025
@openshift-ci-robot commented May 21, 2025

@nephomaniac: This pull request references OSD-26415 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.20.0" version, but no target version was set.

In response to this:

This PR attempts to allow a user to update a cluster's pull secret without transferring ownership for both Classic and HCP clusters.

  • ~~This adds a new CLI arg '--pull-secret-only' (bool) which is mutually exclusive with '--new-owner'.~~
  • This adds a new command, 'osdctl cluster update-pullsecret'. This command is a wrapper that reuses the transfer-owner pull-secret update functions and general flow.
  • When only the pull secret is being updated, the utility now exits after the pull secret has been updated with the account's OCM accessToken values.
  • The pull-secret-only operation prompts the user to choose whether to send an internal service log before the operation begins, and prompts to send a customer service log after the operation completes.
  • This adds additional programmatic checks/comparisons of the resulting on-cluster pull-secret auths for the end user to review.
  • The new programmatic checks/comparisons may remove the need to print the secret to the terminal for visual comparison. Previously this utility printed the secret data to the terminal; this PR instead warns and prompts the user to choose whether to print the raw data for optional visual inspection.
  • Additional information and improved formatting of errors.

Example usage:

osdctl cluster update-pullsecret -h
Update cluster pullsecret with current OCM accessToken data(to be done by Region Lead)

Usage:
 osdctl cluster update-pullsecret [flags]

Examples:

 # Update Pull Secret's OCM access token data
 osdctl cluster update-pullsecret --cluster-id 1kfmyclusteristhebesteverp8m --reason "Update PullSecret per pd or jira-id"

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@nephomaniac (Contributor, Author)

/test verify-docs

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 22, 2025
@openshift-merge-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

openshift-ci bot commented May 22, 2025

@nephomaniac: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name             Commit    Details   Required   Rerun command
ci/prow/verify-docs   be702b3   link      true       /test verify-docs
ci/prow/format        be702b3   link      true       /test format
ci/prow/images        be702b3   link      true       /test images
ci/prow/lint          be702b3   link      true       /test lint
ci/prow/build         be702b3   link      true       /test build
ci/prow/test          be702b3   link      true       /test test

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
needs-rebase: Indicates a PR cannot be merged because it has merge conflicts with HEAD.
tide/merge-method-squash: Denotes a PR that should be squashed by tide when it merges.
6 participants