Add builtin:logmonitoring.log-storage-settings to CR and reconciler #4053

Open · waodim wants to merge 11 commits into main

Conversation

@waodim (Contributor) commented Nov 13, 2024

Description

Ticket

With this change it is possible to define ingest rule matchers in the DynaKube. This configuration needs to be stored in a settings object (schema builtin:logmonitoring.log-storage-settings) scoped to the current cluster (= KubernetesClusterMEID).

Inside the logmonitoring reconciler, this logic queries all settings objects with this schema ID. If none exist, we create a new one based on the ingest rule matchers defined in the DynaKube (empty by default).

If the query shows that settings are already defined, a condition is added to the DynaKube that says exactly that. A sketch of this flow follows below.
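A rough sketch of that flow, assuming hypothetical client and helper names (GetSettingsForLogModule, CreateLogMonitoringSetting, setLogMonitoringSettingExists) and fields on the reconciler; these are illustrations, not necessarily the exact names used in this PR:

func (r *Reconciler) checkLogMonitoringSettings(ctx context.Context) error {
	// Query all settings objects for the log-storage schema in the scope of the cluster ME.
	settings, err := r.dtc.GetSettingsForLogModule(ctx, r.dk.Status.KubernetesClusterMEID)
	if err != nil {
		return err
	}

	if settings.TotalCount > 0 {
		// Settings already exist: record that as a condition and create nothing.
		setLogMonitoringSettingExists(r.dk.Conditions(), settingsConditionType)
		return nil
	}

	// No settings yet: create one from the matchers defined in the DynaKube (empty by default).
	_, err = r.dtc.CreateLogMonitoringSetting(ctx,
		r.dk.Status.KubernetesClusterMEID,
		r.dk.Status.KubernetesClusterName,
		r.dk.LogMonitoring().IngestRuleMatchers)

	return err
}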

How can this be tested?

Go to the Environment API v2 page in your tenant and try it out; the relevant calls are in the Settings section. First you'll need to enable com.compuware.apm.webuiff.config.core.hierarchy.resolution.pg.k8workload.pgwlhr.feature

Deploy a DynaKube with a valid logMonitoring section, for example:

  logMonitoring:
    ingestRuleMatchers:
    - attribute: "k8s.deployment.name"
      values:
        - "dynatrace"
        - "anothertest"
    - attribute: "k8s.container.name"
      values:
        - "testContainerName"
  templates:
    logMonitoring:
      imageRef:
        repository: us-central1-docker.pkg.dev/cloud-platform-207208/chmu/logmodule
        tag: latest

Wait a little bit, then run the GET query for settings objects with this schema on the Environment API page and check that the object was created properly.
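The same query can also be run with curl (a sketch; the tenant URL and API token are placeholders, and the token is assumed to have the settings.read scope):

curl -s -H "Authorization: Api-Token ${API_TOKEN}" \
  "https://<tenant>.live.dynatrace.com/api/v2/settings/objects?schemaIds=builtin:logmonitoring.log-storage-settings&fields=objectId,value"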

It should look like this:

{
  "items": [
    {
      "objectId": "vu9U3hXa3q0AAAABACpidWlsdGluOmxvZ21vbml0b3JpbmcubG9nLXN0b3JhZ2Utc2V0dGluZ3MAEktVQkVSTkVURVNfQ0xVU1RFUgAQMDFGNjZGMjkxRjBENjU0MAAkMzcxM2RjMDktMDYzNC0zMjM0LTlhNTAtYTg4MTA4OGY2YmI0vu9U3hXa3q0",
      "value": {
        "enabled": true,
        "config-item-title": "dynakube-test",
        "send-to-storage": true,
        "matchers": [
          {
            "attribute": "k8s.deployment.name",
            "operator": "MATCHES",
            "values": [
              "dynatrace",
              "anothertest"
            ]
          },
          {
            "attribute": "k8s.container.name",
            "operator": "MATCHES",
            "values": [
              "testContainerName"
            ]
          }
        ]
      }
    }
  ],
  "totalCount": 1,
  "pageSize": 100
}

You can then grab the objectId, delete the setting, reapply a DynaKube with an empty logMonitoring section, and check that default settings get created.
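Deleting the setting can again be done via the API explorer, or with curl (a sketch with placeholder tenant, token, and objectId; the token is assumed to have the settings.write scope):

curl -s -X DELETE -H "Authorization: Api-Token ${API_TOKEN}" \
  "https://<tenant>.live.dynatrace.com/api/v2/settings/objects/<objectId>"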

The default settings object that then gets created looks like this:

[
  {
    "schemaId": "builtin:logmonitoring.log-storage-settings",
    "scope": "KUBERNETES_CLUSTER-D3A3C5A146830A79",
    "value": {
      "enabled": true,
      "config-item-title": "my-cluster-name",
      "send-to-storage": true,
      "matchers": []
    }
  }
]

@codecov-commenter commented Nov 13, 2024

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 33.18386% with 149 lines in your changes missing coverage. Please review.

Project coverage is 64.42%. Comparing base (d389598) to head (6a09f0c).
Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
test/mocks/pkg/clients/dynatrace/client.go 29.41% 36 Missing and 12 partials ⚠️
pkg/clients/dynatrace/settings.go 0.00% 28 Missing ⚠️
pkg/clients/dynatrace/settings_logmonitoring.go 64.70% 16 Missing and 8 partials ⚠️
...a3/dynakube/logmonitoring/zz_generated.deepcopy.go 0.00% 19 Missing and 1 partial ⚠️
...g/controllers/dynakube/logmonitoring/reconciler.go 29.62% 13 Missing and 6 partials ⚠️
pkg/util/conditions/settings_logmonitoring.go 0.00% 8 Missing ⚠️
test/mocks/cmd/remote_command/executor.go 0.00% 2 Missing ⚠️

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4053      +/-   ##
==========================================
- Coverage   64.66%   64.42%   -0.25%     
==========================================
  Files         397      399       +2     
  Lines       26466    26697     +231     
==========================================
+ Hits        17115    17200      +85     
- Misses       8023     8141     +118     
- Partials     1328     1356      +28     
Flag Coverage Δ
unittests 64.42% <33.18%> (-0.25%) ⬇️

Flags with carried forward coverage won't be shown.


The methods are pretty similar, that's true, but they still differ in their parameters, hence I decided to exclude the dupl linter for both methods (see the illustration below).
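For context, excluding the dupl linter for a single function is done with a //nolint:dupl directive directly above it; a minimal illustration (the function here is purely hypothetical):

//nolint:dupl // intentionally similar to the sibling settings-creation method; only the parameters differ
func createExampleSetting(value string) string {
	return value
}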
@0sewa0 (Contributor) left a comment

Settings are created, and only created once, so it works.

But we check on every reconcile whether the settings exist; that's a bit of an overkill.

Comment on lines +12 to +20
func SetLogMonitoringSettingExists(conditions *[]metav1.Condition, conditionType string) {
	condition := metav1.Condition{
		Type:    conditionType,
		Status:  metav1.ConditionTrue,
		Reason:  SettingsExistReason,
		Message: "LogMonitoring settings already exist, will not create new ones.",
	}
	_ = meta.SetStatusCondition(conditions, condition)
}
I wouldn't put this here.

It doesn't have to be "global"; you only care about this in the reconciler that creates/manages this.

@@ -65,5 +68,40 @@ func (r *Reconciler) Reconcile(ctx context.Context) error {
return err
}

err = r.checkLogMonitoringSettings(ctx)
Shouldn't this be a separate reconciler?

  • As it's rather simple, this is probably fine as is.

You should try to limit how often we query for it, as the logs are a bit spammy right now (we check it every time).

  • Also, we should remove the condition when logmonitoring is turned off (the setting does not need to be deleted, as that is not a requirement as of right now).

Try to do something similar to what we already do for connection-info (just an example):

  • Clean up the condition if necessary
  • Only check the setting if the condition is outdated (see the sketch after this comment)

And if you add all this fun stuff, then having it in a separate package/reconciler makes more sense 😉
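A minimal sketch of the "only check when the condition is outdated" pattern described above (meta is k8s.io/apimachinery/pkg/api/meta; the condition type, receiver fields, and the 15-minute threshold are assumptions for illustration):

// shouldCheckSettings returns true when the settings existence has never been
// recorded, or when the recorded condition is older than the refresh threshold.
func (r *Reconciler) shouldCheckSettings() bool {
	condition := meta.FindStatusCondition(*r.dk.Conditions(), logMonitoringSettingsConditionType)
	if condition == nil {
		return true // never checked before
	}

	// Skip the settings API call while the previous result is still fresh.
	return time.Since(condition.LastTransitionTime.Time) > 15*time.Minute
}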

@waodim (Contributor Author)
Thanks for the input, I totally agree. Will do that. 👍
