
Automatic switch to emergency mode when metrics unavailable #424

Open: wants to merge 37 commits into base: main
Changes from 23 commits
Commits (37)
438f653
upgrade kubebuilder to plugin/v4
randytqwjp Oct 10, 2024
df7b7b8
add test utils
randytqwjp Oct 10, 2024
52780ef
fix controller test
randytqwjp Oct 22, 2024
9c153b5
fix gha test
randytqwjp Oct 22, 2024
f4454f4
chmod tortoisectl test
randytqwjp Oct 22, 2024
bba0747
edit tortoisectl
randytqwjp Oct 23, 2024
a968c7a
fix lint
randytqwjp Oct 23, 2024
691c4f4
fix lint
randytqwjp Oct 23, 2024
e28d881
add lint-fix to ci
randytqwjp Oct 23, 2024
192fecf
go mod tidy
randytqwjp Oct 23, 2024
4343ea9
add make dependencies
randytqwjp Oct 23, 2024
93556df
remove lint-fix
randytqwjp Oct 23, 2024
5cda06c
upgrade tools
randytqwjp Oct 23, 2024
0fbfff5
lint-fix
randytqwjp Oct 23, 2024
ae2334a
add tool chain version
randytqwjp Oct 23, 2024
28080e3
change toolchain to 1.22
randytqwjp Oct 23, 2024
ca36611
add timeout
randytqwjp Oct 23, 2024
5677abc
remove lint-fix
randytqwjp Oct 23, 2024
a6b0318
edit licenses
randytqwjp Oct 29, 2024
e8454e9
remove chmod
randytqwjp Nov 1, 2024
3725788
Merge branch 'main' of github.com:mercari/tortoise into kubebuilder-i…
randytqwjp Nov 26, 2024
ba4351e
automatic emergency mode trigger when kube metrics unavailable for hpa
randytqwjp Nov 26, 2024
11d8cb8
add return statement
randytqwjp Nov 26, 2024
c056d82
clean up code
randytqwjp Nov 28, 2024
b47578c
clean up code
randytqwjp Nov 28, 2024
7d07efd
add hpa test and try to fix controller test
randytqwjp Dec 3, 2024
12c7b05
fix old controller tests
randytqwjp Dec 4, 2024
ed22342
add controller test and fix checkHPAStatus function
randytqwjp Dec 6, 2024
bddb139
clean up code
randytqwjp Dec 6, 2024
0dc2749
remove autoemergency phase and use emergency instead
randytqwjp Dec 6, 2024
dd7c10a
fix lint
randytqwjp Dec 6, 2024
2e261c4
refactor tortoisephase change into tortoise service and write unit tests
randytqwjp Dec 12, 2024
7c0e997
fix lint
randytqwjp Dec 13, 2024
5be0c1a
fix lint
randytqwjp Dec 13, 2024
d8ae58d
fix review comments
randytqwjp Dec 19, 2024
46d71a9
fix nits
randytqwjp Jan 9, 2025
e41aab4
fix nits
randytqwjp Jan 9, 2025
11 changes: 11 additions & 0 deletions internal/controller/tortoise_controller.go
@@ -185,6 +185,17 @@
return ctrl.Result{}, err
}
tortoise = tortoiseService.UpdateTortoiseAutoscalingPolicyInStatus(tortoise, hpa, now)
	scalingActive := r.HpaService.CheckHpaMetricStatus(ctx, hpa, now)

Check failure on line 188 in internal/controller/tortoise_controller.go (GitHub Actions / Test):
r.HpaService.checkHpaMetricStatus undefined (type *hpa.Service has no field or method checkHpaMetricStatus)
	if !scalingActive && tortoise.Spec.UpdateMode == autoscalingv1beta3.UpdateModeAuto {
		// Metrics are unavailable: switch to emergency mode.
		tortoise.Spec.UpdateMode = autoscalingv1beta3.UpdateModeEmergency
	}
	if scalingActive && tortoise.Spec.UpdateMode == autoscalingv1beta3.UpdateModeEmergency {
		// Metrics are back: switch back to auto mode.
		tortoise.Spec.UpdateMode = autoscalingv1beta3.UpdateModeAuto
	}

tortoise = r.TortoiseService.UpdateTortoisePhase(tortoise, now)
if tortoise.Status.TortoisePhase == autoscalingv1beta3.TortoisePhaseInitializing {
logger.Info("initializing tortoise", "tortoise", req.NamespacedName)
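The switching rule in the diff above is easy to check in isolation. Below is a minimal, self-contained sketch of the same decision logic as a pure function; it is not the PR's actual code, and the UpdateMode type and constants are stand-ins for the autoscalingv1beta3 ones:

```go
package main

import "fmt"

// UpdateMode stands in for autoscalingv1beta3.UpdateMode in the real code.
type UpdateMode string

const (
	UpdateModeOff       UpdateMode = "Off"
	UpdateModeAuto      UpdateMode = "Auto"
	UpdateModeEmergency UpdateMode = "Emergency"
)

// nextUpdateMode captures the switching rule from the diff as a pure function:
// fall back to Emergency when scaling is inactive, return to Auto once metrics
// recover, and never override Off.
func nextUpdateMode(current UpdateMode, scalingActive bool) UpdateMode {
	switch {
	case !scalingActive && current == UpdateModeAuto:
		return UpdateModeEmergency
	case scalingActive && current == UpdateModeEmergency:
		return UpdateModeAuto
	default:
		return current
	}
}

func main() {
	fmt.Println(nextUpdateMode(UpdateModeAuto, false))     // Emergency
	fmt.Println(nextUpdateMode(UpdateModeEmergency, true)) // Auto
	fmt.Println(nextUpdateMode(UpdateModeOff, false))      // Off
}
```

Keeping the rule as a pure function like this would also make it straightforward to table-test, in the spirit of the unit-test commits in this PR.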
16 changes: 16 additions & 0 deletions pkg/hpa/service.go
@@ -765,3 +765,19 @@ func (c *Service) excludeExternalMetric(ctx context.Context, hpa *v2.HorizontalP

return newHPA
}

func (c *Service) CheckHpaMetricStatus(ctx context.Context, currenthpa *v2.HorizontalPodAutoscaler, now time.Time) bool {
	currenthpa = currenthpa.DeepCopy()
	conditions := currenthpa.Status.Conditions
	if len(conditions) < 2 {
		// No ScalingActive condition recorded yet; assume scaling is active.
		return true
	}

	if conditions[1].Type == "ScalingActive" && conditions[1].Status == "True" {
		// Metrics are back: the caller can switch back to Auto mode.
		return true
	}

	if conditions[1].Type == "ScalingActive" && conditions[1].Status == "False" && conditions[1].Reason == "FailedGetResourceMetric" {
Collaborator:

> FailedGetResourceMetric

What about other failures? e.g., the container resource metrics?

Collaborator (Author):

@sanposhiho should we only switch to emergency mode when the main container's resource metrics are missing? e.g., if istio-proxy metrics are unavailable for some reason but the main container metrics are still available, do we stay in auto mode since scaling is still active?

Collaborator:

I think we should fall back to emergency only if all metrics are dead? Maybe it's too aggressive to do a fallback if some, but not all, metrics are dead.

Collaborator:

e.g., let's say a user deploys a new version of the app container which unfortunately contains a bug, and all the main containers crash because of it. In this case, switching to emergency mode doesn't help; it just becomes an unnecessary cost increase.

So, ideally we have to detect which issues are solvable by increasing the replicas and which aren't. I know it's super difficult, though. But if two containers' metrics are missing (the metrics server is dead, the service is overwhelmed and entirely down, etc.), we can say with more confidence that increasing the replicas might help solve the issue.

		// Metrics are unavailable: the caller should switch to Emergency mode.
		return false
	}
	return true
}
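Following the review thread, a more defensive variant would look up the ScalingActive condition by type instead of assuming its position at conditions[1], and trigger the fallback only for metric-fetch failures. This is an illustrative sketch, not the PR's code; isScalingActive is a hypothetical helper, and the FailedGetContainerResourceMetric reason is an assumption based on the reasons emitted by the upstream HPA controller:

```go
package hpa

import (
	v2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
)

// isScalingActive finds the ScalingActive condition by type rather than by
// index, so it cannot panic or misread an unrelated condition. It returns
// false (i.e., suggests the Emergency fallback) only when the condition is
// explicitly False because metrics could not be fetched.
func isScalingActive(hpa *v2.HorizontalPodAutoscaler) bool {
	for _, c := range hpa.Status.Conditions {
		if c.Type != v2.ScalingActive {
			continue
		}
		if c.Status == corev1.ConditionTrue {
			return true
		}
		// Assumed reasons for metric-fetch failures, covering both pod-level
		// and container-level resource metrics per the review discussion.
		if c.Status == corev1.ConditionFalse &&
			(c.Reason == "FailedGetResourceMetric" || c.Reason == "FailedGetContainerResourceMetric") {
			return false
		}
	}
	// No ScalingActive condition, or a failure unrelated to metrics:
	// don't trigger the fallback.
	return true
}
```

A stricter "fall back only if all metrics are dead" policy, as suggested above, could additionally compare hpa.Status.CurrentMetrics against the metrics declared in hpa.Spec.Metrics before returning false.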