Add builtin:logmonitoring.log-storage-settings to CR and reconciler #4053
base: main
Conversation
Codecov Report
Attention: Patch coverage is
@@            Coverage Diff             @@
## main #4053 +/- ##
==========================================
- Coverage 64.66% 64.42% -0.25%
==========================================
Files 397 399 +2
Lines 26466 26697 +231
==========================================
+ Hits 17115 17200 +85
- Misses 8023 8141 +118
- Partials 1328 1356 +28
The methods are pretty similar, that's true, but they still differ in their parameters, hence I decided to exclude the dupl linter for both methods.
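For illustration only, a minimal sketch of what such an exclusion could look like; the function and type names here are hypothetical, not the ones in this PR:

```go
package logmonitoring

import "context"

// Matcher is a stand-in for the ingest-rule matcher type (hypothetical).
type Matcher struct {
	Attribute string
	Values    []string
}

// The two creation helpers are intentionally similar, so golangci-lint's
// dupl checker is silenced for both via a nolint directive.
//
//nolint:dupl
func createMatcherSettings(ctx context.Context, matchers []Matcher) error {
	// ... build and send the settings object for the given matchers ...
	return nil
}

//nolint:dupl
func createDefaultSettings(ctx context.Context) error {
	// ... build and send the settings object with the default (empty) matcher list ...
	return nil
}
```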
Settings are created, and only created once, so this works.
But we constantly (i.e. on every reconcile) check whether the settings exist, which is a bit of an overkill.
func SetLogMonitoringSettingExists(conditions *[]metav1.Condition, conditionType string) {
	condition := metav1.Condition{
		Type:    conditionType,
		Status:  metav1.ConditionTrue,
		Reason:  SettingsExistReason,
		Message: "LogMonitoring settings already exist, will not create new ones.",
	}
	_ = meta.SetStatusCondition(conditions, condition)
}
I wouldn't put this here.
It doesn't have to be "global"; you only care about this in the reconciler that creates/manages it.
@@ -65,5 +68,40 @@ func (r *Reconciler) Reconcile(ctx context.Context) error {
		return err
	}

	err = r.checkLogMonitoringSettings(ctx)
Shouldn't this be a separate reconciler?
- As it's rather simple, this is prooobably fine as is. Still, you should try to limit how often we query for it, as right now the logs are a bit spammy (we check it every time).
- Also, we should remove the condition when logmonitoring is turned off (the setting itself does not need to be deleted, as that is not a requirement as of right now).
Try to do something similar to what we already do for connection-info (just an example):
- Clean up the condition if necessary
- Only check the setting if the condition is outdated (a rough sketch of this pattern follows below)
And if you add all this fun stuff, then having it in a separate package/reconciler makes more sense 😉
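To illustrate the suggestion only; this is not the operator's actual helpers, and the condition type and timeout are assumptions:

```go
package logmonitoring

import (
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	settingsConditionType = "LogMonitoringSettings" // hypothetical condition type
	settingsCheckTimeout  = 15 * time.Minute        // assumed re-check interval
)

// shouldCheckSettings returns true only if the condition is missing or stale,
// so the settings API is not queried on every single reconcile.
func shouldCheckSettings(conditions []metav1.Condition) bool {
	condition := meta.FindStatusCondition(conditions, settingsConditionType)
	if condition == nil {
		return true
	}

	return time.Since(condition.LastTransitionTime.Time) > settingsCheckTimeout
}

// cleanupSettingsCondition removes the condition when log monitoring is turned
// off; the settings object itself is intentionally left untouched.
func cleanupSettingsCondition(conditions *[]metav1.Condition, logMonitoringEnabled bool) {
	if !logMonitoringEnabled {
		meta.RemoveStatusCondition(conditions, settingsConditionType)
	}
}
```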
Thanks for the input, I totally agree. Will do that. 👍
Description
Ticket
With this it is possible to define ingest rule matchers in the dynakube. This configuration needs to be stored in settings (schema builtin:logmonitoring.log-storage-settings) in the scope of the current cluster (= KubernetesClusterMEID).
Inside the logmonitoring reconciler, this logic queries all settings with this schema ID; if there are none, we create new ones based on the ingest rule matchers defined in the dynakube (the default is empty).
If we query the settings and some are already defined, I decided to add a condition to the dynakube that says exactly that.
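Roughly, the flow looks like the following sketch; the client interface and helper names are placeholders, not the exact code in this PR:

```go
package logmonitoring

import (
	"context"
	"fmt"
)

// SettingsClient is a stand-in for the Dynatrace settings API client used here.
type SettingsClient interface {
	// ListSettings returns the number of settings objects for a schema in a given scope.
	ListSettings(ctx context.Context, schemaID, scope string) (int, error)
	// CreateSetting creates a settings object for a schema in a given scope.
	CreateSetting(ctx context.Context, schemaID, scope string, value any) error
}

const schemaID = "builtin:logmonitoring.log-storage-settings"

// ensureLogStorageSettings creates the log-storage settings object in the
// scope of the current cluster if (and only if) none exists yet.
func ensureLogStorageSettings(ctx context.Context, client SettingsClient, clusterMEID string, matchers []any) error {
	count, err := client.ListSettings(ctx, schemaID, clusterMEID)
	if err != nil {
		return fmt.Errorf("querying %s settings: %w", schemaID, err)
	}

	if count > 0 {
		// Settings already exist; the reconciler only records a condition
		// on the dynakube and does not create or modify anything.
		return nil
	}

	// No settings yet: create them from the matchers defined in the dynakube
	// (an empty matcher list results in the default settings object).
	return client.CreateSetting(ctx, schemaID, clusterMEID, matchers)
}
```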
How can this be tested?
Go to the Environment API v2 page in your tenant and try it out; it is in the Settings section. First you'll need to enable
com.compuware.apm.webuiff.config.core.hierarchy.resolution.pg.k8workload.pgwlhr.feature
Deploy a dynakube with a valid logmonitoring section, for example:
Wait a little bit, then run the GET query for the settings with this schema on the Environment API page and see whether the setting was created properly.
It should look like this:
You can then grab the objectID, delete the setting, reapply a dynakube with an empty logmonitoring section, and see whether the default settings get created.
That looks like this: