
logger.WithValues bypasses KubeAwareEncoder #1290

Open · timebertt opened this issue Dec 4, 2020 · 26 comments · May be fixed by #1883
Labels: help wanted · kind/bug · lifecycle/stale · needs-triage

@timebertt (Contributor)

When calling WithValues on a zap-backed logger, the value bypasses the KubeAwareEncoder, meaning it will be fully marshalled using the underlying encoder (e.g. the JSON encoder).

For example:
If I add a key and value to the example controller here:

// Print the ReplicaSet
log.Info("Reconciling ReplicaSet", "replicaSet", rs, "container name", rs.Spec.Template.Spec.Containers[0].Name)

it will use the KubeAwareEncoder (which is the desired behaviour)

{"level":"info", [...] "replicaSet":{"apiVersion":"apps/v1","kind":"ReplicaSet","namespace":"kube-system","name":"coredns-66bff467f8"},"container name":"coredns"}

But if I use WithValues, which helps avoid adding the same values to every log statement in a controller, like this:

// Print the ReplicaSet
log = log.WithValues("replicaSet", rs)
log.Info("Reconciling ReplicaSet", "container name", rs.Spec.Template.Spec.Containers[0].Name)

it bypasses the KubeAwareEncoder and dumps the full object into the log line, which makes the logs very hard to search:

{"level":"info", [...] "replicaSet":"&ReplicaSet{ObjectMeta:{coredns-66bff467f8  kube-system /apis/apps/v1/namespaces/kube-system/replicasets/coredns-66bff467f8 627fa1f9-ff8f-40fc-9f16-5df4dd71d567 13218 1 2020-12-04 13:49:52 +0100 CET <nil> <nil> map[hello:world k8s-app:kube-dns pod-template-hash:66bff467f8] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment coredns 9675a95d-33c0-42bc-82ae-b785d55280b1 0xc0004de338 0xc0004de339}] []  [{kube-controller-manager Update apps/v1 2020-12-04 13:50:21 +0100 CET FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:deployment.kubernetes.io/desired-replicas\":{},\"f:deployment.kubernetes.io/max-replicas\":{},\"f:deployment.kubernetes.io/revision\":{}},\"f:labels\":{\".\":{},\"f:k8s-app\":{},\"f:pod-template-hash\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"9675a95d-33c0-42bc-82ae-b785d55280b1\\\"}\":{\".\":{},\"f:apiVersion\":{},\"f:blockOwnerDeletion\":{},\"f:controller\":{},\"f:kind\":{},\"f:name\":{},\"f:uid\":{}}}},\"f:spec\":{\"f:replicas\":{},\"f:selector\":{\"f:matchLabels\":{\".\":{},\"f:k8s-app\":{},\"f:pod-template-hash\":{}}},\"f:template\":{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:k8s-app\":{},\"f:pod-template-hash\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"coredns\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:livenessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:initialDelaySeconds\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}},\"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}},\"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{\".\":{},\"f:limits\":{\".\":{},\"f:memory\":{}},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:securityContext\":{\".\":{},\"f:allowPrivilegeEscalation\":{},\"f:capabilities\":{\".\":{},\"f:add\":{},\"f:drop\":{}},\"f:readOnlyRootFilesystem\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{},\"f:readOnly\":{}}}}},\"f:dnsPolicy\":{},\"f:nodeSelector\":{\".\":{},\"f:kubernetes.io/os\":{}},\"f:priorityClassName\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:serviceAccount\":{},\"f:serviceAccountName\":{},\"f:terminationGracePeriodSeconds\":{},\"f:tolerations\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"config-volume\\\"}\":{\".\":{},\"f:configMap\":{\".\":{},\"f:defaultMode\":{},\"f:items\":{},\"f:name\":{}},\"f:name\":{}}}}}},\"f:status\":{\"f:availableReplicas\":{},\"f:fullyLabeledReplicas\":{},\"f:observedGeneration\":{},\"f:readyReplicas\":{},\"f:replicas\":{}}}} {builtins Update apps/v1 2020-12-04 15:08:51 +0100 CET FieldsV1 
{\"f:metadata\":{\"f:labels\":{\"f:hello\":{}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{k8s-app: kube-dns,pod-template-hash: 66bff467f8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:66bff467f8] map[] [] []  []} {[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[]KeyToPath{KeyToPath{Key:Corefile,Path:Corefile,Mode:nil,},},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil}}] [] [{coredns k8s.gcr.io/coredns:1.6.7 [] [-conf /etc/coredns/Corefile]  [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {<nil>} 170Mi BinarySI}] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{73400320 0} {<nil>} 70Mi BinarySI}]} [{config-volume true /etc/coredns  <nil> }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0004de398 <nil> Default map[kubernetes.io/os:linux] coredns coredns <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [{CriticalAddonsOnly Exists   <nil>} {node-role.kubernetes.io/master   NoSchedule <nil>}] [] system-cluster-critical <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:1,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},}","container name":"coredns"}

I didn't find any issue/discussion related to a fix for this, other than this note from @DirectXMan12:

// NB(directxman12): can't just override AddReflected, since the encoder calls AddReflected on itself directly

So I'm opening this issue to see what can be done here to fix this behaviour.
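For context, here is a minimal sketch of how such a zap-backed logger is typically wired up with controller-runtime (assuming the pkg/log/zap package; at the time of this issue, zap.New wraps the configured encoder in KubeAwareEncoder):

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// zap.New builds a zap-backed logr.Logger whose encoder is wrapped in
	// KubeAwareEncoder, so objects passed directly to Info/Error are
	// rendered compactly as apiVersion/kind/namespace/name.
	ctrl.SetLogger(zap.New(zap.UseDevMode(false)))

	log := ctrl.Log.WithName("example")
	log.Info("started")
}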

@DirectXMan12 (Contributor)

huh, yeah, looks like the autogenerated .String method is being called instead.

/triage accepted
/help

@k8s-ci-robot (Contributor)

@DirectXMan12:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Dec 8, 2020
@DirectXMan12 (Contributor)

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 8, 2020
@jay-rob commented Dec 15, 2020

Does this only occur when you have the logger set to verbose?

@timebertt (Contributor, Author)

Nope, also for the info log level. The log level should be unrelated to this issue.

@Szymongib commented Jan 8, 2021

I have investigated the issue a little bit, and it seems there is not much that can be done.

When calling zap's WithValues(), the fields are mapped immediately and added to the encoder as strings (if they implement the Stringer interface, which is the case for K8s objects). Therefore, when Write is called, they do not go through encoder.EncodeEntry as []Field; their rendered form is already stored in the logger's encoder.
https://github.com/uber-go/zap/blob/master/zapcore/core.go#L72-L76
https://github.com/uber-go/zap/blob/master/zapcore/field.go#L202-L206
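To make that concrete, here is a minimal, self-contained sketch (stringerValue is a stand-in for a typed K8s object, whose generated String method satisfies fmt.Stringer):

package main

import "go.uber.org/zap"

// stringerValue stands in for a typed Kubernetes object: because it
// implements fmt.Stringer, zap.Any resolves it to a plain string field.
type stringerValue struct{ name string }

func (s stringerValue) String() string { return "full-dump-of-" + s.name }

func main() {
	logger := zap.NewExample()

	// The field is resolved and encoded right here, inside With: zapcore
	// clones the core's encoder and immediately calls AddString on it.
	// A wrapping encoder like KubeAwareEncoder only rewrites the []Field
	// slice it receives in EncodeEntry, and With-fields never appear there;
	// by write time they are already part of the pre-rendered buffer.
	child := logger.With(zap.Any("replicaSet", stringerValue{name: "coredns"}))

	child.Info("Reconciling ReplicaSet")
	// {"level":"info","msg":"Reconciling ReplicaSet","replicaSet":"full-dump-of-coredns"}
}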

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 8, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 8, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.


@timebertt (Contributor, Author)

/reopen
/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot reopened this Dec 16, 2021
@k8s-ci-robot (Contributor)

@timebertt: Reopened this issue.


@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 16, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2022
@timebertt (Contributor, Author)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2022
@STRRL (Contributor) commented Apr 25, 2022

Hi! I am interested in this issue. There have been some related changes upstream in go-logr/logr: logr introduced logr.Marshaler, which seems to be aimed at resolving exactly this situation, as the commit description says:

The intention is to use this when the output is structured (like JSON) when the
original type would be logged as string. It also has some other use cases.
This approach was chosen instead of a full marshaler API as in
zapcore.ObjectMarshaler because the implementation is simpler. The overhead for
large types is expected to be higher, but it is not certain yet whether this is
relevant in practice. If it is, then a marshaler API can still be added later.

And zapr already supports that:

https://github.com/go-logr/zapr/blob/master/zapr.go#L171-L178

Maybe we could move the implementation from the zapcore encoder level to the LogSink level: within the invocation of WithValues(), it would replace the original runtime.Object with a wrapper (much like the existing kubeObjectWrapper, but implementing logr.Marshaler); a rough sketch follows.
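A minimal sketch of that idea, assuming zapr's logr.Marshaler support linked above (all names here, kubeObjectLogValue and wrapKubeObjects, are hypothetical and not part of controller-runtime):

package log

import (
	"github.com/go-logr/logr"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// kubeObjectLogValue stands in for a Kubernetes object in the key/value
// list. Because it implements logr.Marshaler, zapr logs the result of
// MarshalLog instead of stringifying the whole object.
type kubeObjectLogValue struct {
	obj client.Object
}

var _ logr.Marshaler = kubeObjectLogValue{}

// MarshalLog returns only the identifying fields, mirroring what
// KubeAwareEncoder emits for values passed directly to Info/Error.
func (w kubeObjectLogValue) MarshalLog() interface{} {
	gvk := w.obj.GetObjectKind().GroupVersionKind()
	return map[string]string{
		"apiVersion": gvk.GroupVersion().String(),
		"kind":       gvk.Kind,
		"namespace":  w.obj.GetNamespace(),
		"name":       w.obj.GetName(),
	}
}

// wrapKubeObjects sketches what a LogSink-level WithValues could do before
// delegating to the underlying zapr sink: swap every Kubernetes object for
// the marshaler-aware wrapper and leave everything else untouched.
func wrapKubeObjects(keysAndValues []interface{}) []interface{} {
	out := make([]interface{}, len(keysAndValues))
	for i, kv := range keysAndValues {
		if obj, ok := kv.(client.Object); ok {
			out[i] = kubeObjectLogValue{obj: obj}
		} else {
			out[i] = kv
		}
	}
	return out
}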

What do you think?

@STRRL (Contributor) commented Apr 28, 2022

Hi @timebertt, @DirectXMan12, @Szymongib, I think we can finally resolve this issue with the latest logr. PTAL ❤️

@STRRL (Contributor) commented May 2, 2022

I will try to resolve this issue.

@STRRL (Contributor) commented May 2, 2022

/assign

@timebertt (Contributor, Author)

I think we could finally resolve this issue with the latest logr.

Yeah, that would be awesome. Your plan sounds good to me :)

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 1, 2022
@timebertt (Contributor, Author)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 31, 2022
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jan 19, 2024
@caozhuozi

any updates?

@buraksekili (Contributor)

any updates on this?

@STRRL (Contributor) commented Nov 26, 2024

Hi team! I am happy to be back working on #1883; please let me know if there is anything I should do. 🤔
