Add support to orphaned driver pod #8

Merged: 2 commits into NVIDIA:main on Jan 11, 2024

Conversation

@rollandf (Contributor) commented on Dec 31, 2023

  • Add orphaned driver pods to the built state
  • Ignore unscheduled pods when building the state
  • Support an upgrade-requested annotation to force a move to the
    upgrade-required state

With this functionality, upgrading from one DS to a new one
is possible under the following assumptions:

  • The new DS should have node anti-affinity to prevent scheduling
    new driver pods where the old ones still run.
  • The old DS should be deleted by the Operator with the
    DeletePropagationOrphan option to keep the old driver pods running
    until the upgrade flow replaces them (see the sketch below).

In addition, it will also be possible to detach a pod from its
DaemonSet only, in order to migrate to the new DaemonSet.
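
To make these assumptions concrete, here is a minimal sketch, not code from this PR, of how a consuming operator might express the anti-affinity, orphan the old DaemonSet, and request an upgrade on a node using controller-runtime. The package name, the "old-driver" label, and the annotation key are placeholders; the actual annotation key is the constant defined in pkg/upgrade/consts.go.

package upgradehelpers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newDriverAntiAffinity returns an anti-affinity for the new DS pod template.
// The PR describes this as node anti-affinity; a pod anti-affinity on the
// hostname topology key is one placeholder way to express "do not schedule
// where an old driver pod still runs". The "app: old-driver" label is assumed.
func newDriverAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "old-driver"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

// orphanOldDriverDS deletes the old DaemonSet with DeletePropagationOrphan,
// leaving its pods running as orphaned driver pods until the upgrade flow
// replaces them.
func orphanOldDriverDS(ctx context.Context, c client.Client, oldDS *appsv1.DaemonSet) error {
	return c.Delete(ctx, oldDS, client.PropagationPolicy(metav1.DeletePropagationOrphan))
}

// requestNodeUpgrade sets an upgrade-requested annotation on a node to force
// it into the upgrade-required state. The key below is a placeholder, not
// the constant from pkg/upgrade/consts.go.
func requestNodeUpgrade(ctx context.Context, c client.Client, node *corev1.Node) error {
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	node.Annotations["example.com/driver-upgrade-requested"] = "true"
	return c.Update(ctx, node)
}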

@rollandf (Contributor, Author): Migrated from GitLab to continue the discussion.

@shivamerla (Collaborator): Thanks @rollandf. I did a quick review and this looks good to me.

Resolved review threads (outdated): pkg/upgrade/consts.go, pkg/upgrade/upgrade_state.go
@@ -506,12 +541,49 @@ func (m *ClusterUpgradeStateManagerImpl) ProcessDoneOrUnknownNodes(
	return nil
}

// podInSyncWithDS returns true if the Pod and DS have the same Controller Revision Hash, or if Pod is orphaned
func (m *ClusterUpgradeStateManagerImpl) podInSyncWithDS(ctx context.Context, nodeState *NodeUpgradeState) (bool, error) {
Collaborator: Should we split the conditions for clarity?

e.g. return (inSync, orphaned, error)

@rollandf (Author): Done
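
For reference, the suggested split return could look roughly like this sketch; it is not the code that was merged, and it assumes both objects carry the standard controller-revision-hash label (appsv1.DefaultDaemonSetUniqueLabelKey from k8s.io/api/apps/v1).

// Sketch only. Assumes the file's existing imports plus "fmt" and
// appsv1 "k8s.io/api/apps/v1". The merged implementation may resolve the
// DaemonSet's current revision hash differently (e.g. via its
// ControllerRevisions rather than a label on the DaemonSet object).
func (m *ClusterUpgradeStateManagerImpl) podInSyncWithDS(
	ctx context.Context, nodeState *NodeUpgradeState) (inSync, orphaned bool, err error) {
	if nodeState.IsOrphanedPod() {
		// An orphaned pod has no DaemonSet to be in sync with.
		return false, true, nil
	}
	podHash, ok := nodeState.DriverPod.Labels[appsv1.DefaultDaemonSetUniqueLabelKey]
	if !ok {
		return false, false, fmt.Errorf("pod %s is missing the %s label",
			nodeState.DriverPod.Name, appsv1.DefaultDaemonSetUniqueLabelKey)
	}
	dsHash := nodeState.DriverDaemonSet.Labels[appsv1.DefaultDaemonSetUniqueLabelKey]
	return podHash == dsHash, false, nil
}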

Resolved review threads (outdated): pkg/upgrade/upgrade_state.go (two threads)
@adrianchiris (Collaborator) left a comment: Small question/comments, otherwise LGTM.

@@ -43,6 +43,11 @@ type NodeUpgradeState struct {
	DriverDaemonSet *appsv1.DaemonSet
}

// IsOrphanedPod returns true if Pod is not associated to a DaemonSet or is not managed by a DaemonSet
func (nus *NodeUpgradeState) IsOrphanedPod() bool {
	return nus.DriverDaemonSet == nil && nus.DriverPod != nil
Collaborator: Can DriverPod ever be nil?

@rollandf (Author): It cannot, actually.

On the same hunk, regarding the line:
// IsOrphanedPod returns true if Pod is not associated to a DaemonSet or is not managed by a DaemonSet
Collaborator: What's the difference between "associated to" and "managed by" a DaemonSet? Is it the same?

@rollandf (Author): Done

Collaborator: Realized I asked to change the wording originally :D

Resolved review thread (outdated): pkg/upgrade/upgrade_state.go
@adrianchiris (Collaborator) left a comment: LGTM!

@cdesiniotis (Collaborator) left a comment: Just a few nits from me. Overall this LGTM, thanks @rollandf.

Resolved review threads (outdated): pkg/upgrade/upgrade_state_test.go (two threads), pkg/upgrade/consts.go, pkg/upgrade/upgrade_state.go
Commit message:

- Add orphaned driver pods to the built state
- Ignore unscheduled pods when building the state
- Support an upgrade-requested annotation to force a move to the
  upgrade-required state

With this functionality, upgrading from one DS to a new one
is possible under the following assumptions:

- The new DS should have node anti-affinity to prevent scheduling
  new driver pods where the old ones still run.
- The old DS should be deleted by the Operator with the
  DeletePropagationOrphan option to keep the old driver pods running
  until the upgrade flow replaces them.

In addition, it will also be possible to detach a pod from its
DaemonSet only, in order to migrate to the new DaemonSet.

Signed-off-by: Fred Rolland <[email protected]>
@rollandf (Author): Thanks @cdesiniotis for the review. I fixed it according to your comments.

@cdesiniotis (Collaborator) left a comment: LGTM, I have triggered the CI.

@adrianchiris (Collaborator): Merging per 2 LGTMs! Thanks for working on it, @rollandf!

@adrianchiris merged commit f1a3aeb into NVIDIA:main on Jan 11, 2024. 4 checks passed.