
KUBERNETES_COLLECT_EVENTS send event but do not flag them with configured TAGS #229

Open
julienbachmann opened this issue Aug 18, 2017 · 4 comments


@julienbachmann

**Output of the info page**

2017-08-18 09:42:51,589 | WARNING | dd.collector | utils.service_discovery.config(config.py:31) | No configuration backend provided for service discovery. Only auto config templates will be used.
====================
Collector (v 5.16.0)
====================

  Status date: 2017-08-18 09:42:46 (5s ago)
  Pid: 25
  Platform: Linux-4.4.41-k8s-x86_64-with-debian-8.9
  Python Version: 2.7.13, 64bit
  Logs: <stderr>, /var/log/datadog/collector.log

  Clocks
  ======
  
    NTP offset: 0.0018 s
    System UTC time: 2017-08-18 09:42:51.662215
  
  Paths
  =====
  
    conf.d: /etc/dd-agent/conf.d
    checks.d: /opt/datadog-agent/agent/checks.d
  
  Hostnames
  =========
  
    ec2-hostname: ip-172-20-85-135.eu-west-1.compute.internal
    local-ipv4: 172.20.85.135
    local-hostname: ip-172-20-85-135.eu-west-1.compute.internal
    socket-hostname: kube-state-metrics-2169312636-jbxkl
    public-hostname: ec2-34-253-216-94.eu-west-1.compute.amazonaws.com
    hostname: i-02d0c299bf692272f
    instance-id: i-02d0c299bf692272f
    public-ipv4: 34.253.216.94
    socket-fqdn: kube-state-metrics-2169312636-jbxkl
  
  Checks
  ======
  
    kube_dns (5.16.0)
    -----------------
      - instance #0 [OK]
      - Collected 81 metrics, 0 events & 0 service checks
  
    kubernetes_state (5.16.0)
    -------------------------
      - instance #0 [OK]
      - Collected 19091 metrics, 0 events & 1294 service checks
  
    kubernetes (5.16.0)
    -------------------
      - instance #0 [OK]
      - Collected 375 metrics, 0 events & 3 service checks
  
    ntp (5.16.0)
    ------------
      - Collected 0 metrics, 0 events & 0 service checks
  
    disk (5.16.0)
    -------------
      - instance #0 [OK]
      - Collected 32 metrics, 0 events & 0 service checks
  
    docker_daemon (5.16.0)
    ----------------------
      - instance #0 [OK]
      - Collected 368 metrics, 1 event & 1 service check
  
  
  Emitters
  ========
  
    - http_emitter [OK]

====================
Dogstatsd (v 5.16.0)
====================

  Status date: 2017-08-18 09:42:46 (5s ago)
  Pid: 19
  Platform: Linux-4.4.41-k8s-x86_64-with-debian-8.9
  Python Version: 2.7.13, 64bit
  Logs: <stderr>, /var/log/datadog/dogstatsd.log

  Flush count: 69
  Packet Count: 1173
  Packets per second: 1.7
  Metric count: 9
  Event count: 0
  Service check count: 0

====================
Forwarder (v 5.16.0)
====================

  Status date: 2017-08-18 09:42:50 (1s ago)
  Pid: 18
  Platform: Linux-4.4.41-k8s-x86_64-with-debian-8.9
  Python Version: 2.7.13, 64bit
  Logs: <stderr>, /var/log/datadog/forwarder.log

  Queue Size: 377933 bytes
  Queue Length: 2
  Flush Count: 216
  Transactions received: 137
  Transactions flushed: 135
  Transactions rejected: 0
  API Key Status: API Key is valid
  

======================
Trace Agent (v 5.16.0)
======================

  Pid: 17
  Uptime: 698 seconds
  Mem alloc: 2488032 bytes

  Hostname: kube-state-metrics-2169312636-jbxkl
  Receiver: 0.0.0.0:8126
  API Endpoint: https://trace.agent.datadoghq.com

  Bytes received (1 min): 0
  Traces received (1 min): 0
  Spans received (1 min): 0

  Bytes sent (1 min): 0
  Traces sent (1 min): 0
  Stats sent (1 min): 0

Additional environment details (Operating System, Cloud provider, etc):
running in a kubernetes cluster on AWS.

Steps to reproduce the issue:

  1. Set the environment variables KUBERNETES=true and KUBERNETES_COLLECT_EVENTS=true
  2. Set the environment variable TAGS=env:staging
  3. Run your dd-agent container on one of your k8s nodes
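For reference, the steps above can be sketched as a single docker run invocation (a sketch only, assuming the stock docker-dd-agent Agent 5 image, whose entrypoint reads these variables; substitute your own API key):

```shell
# Sketch: run the Agent 5 container with Kubernetes event collection
# enabled and a host tag set via TAGS.
docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY=<your_api_key> \
  -e KUBERNETES=true \
  -e KUBERNETES_COLLECT_EVENTS=true \
  -e TAGS=env:staging \
  datadog/docker-dd-agent
```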

Describe the results you received:
Kubernetes events are sent to Datadog, but they do not have the tag env:staging.

Describe the results you expected:
Kubernetes events should be flagged with the tag env:staging.

Additional information you deem important (e.g. issue happens only occasionally):

@xvello xvello self-assigned this Aug 28, 2017
@xvello
Contributor

xvello commented Aug 28, 2017

Hello @julienbachmann,

Unfortunately I cannot reproduce this bug. Please note that host tags are resolved in our backend and will not appear in the output of /opt/datadog-agent/agent/agent.py check kubernetes.

The most common cause of missing host tags is running several agents per host. This is currently not supported as two agents will send conflicting host metadata payloads and the backend-side resolution will have issues. Did you make sure you are not running another "event-less" agent on this node?

FYI, 5.17.0 (to be out in a couple of days) introduces a new mechanism that allows for easier event collection; you might want to give it a try: DataDog/integrations-core#687

@username1366

It also doesn't work for me. I have a datadog agent with the following configuration:

k8s-daemonset.yaml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: datadog-agent
spec:
  template:
    metadata:
      labels:
        app: datadog-agent
      name: datadog-agent
    spec:
      hostNetwork: true
      hostIPC: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: datadog/docker-dd-agent:latest-dogstatsd-alpine
        imagePullPolicy: Always
        name: datadog-agent
        ports:
        - containerPort: 8125
          name: dogstatsdport
          protocol: UDP
        env:
        - name: API_KEY
          value: secret
        - name: DD_LOGS_STDOUT
          value: 'yes'
        - name: DD_TAGS
          value: "DC:tier1-europe-west1"
        - name: DD_URL
          value: "http://my-host-to-sniff-dd-requests:8081"

datadog.conf

dd_url = http://my-host-to-sniff-dd-requests:8081
api_key = secret
gce_updated_hostname = yes
collector_log_file = /opt/datadog-agent/logs/collector.log
forwarder_log_file = /opt/datadog-agent/logs/forwarder.log
dogstatsd_log_file = /opt/datadog-agent/logs/dogstatsd.log
jmxfetch_log_file = /opt/datadog-agent/logs/jmxfetch.log
tags = DC:tier1-europe-west1
non_local_traffic = yes
log_to_syslog = no

And I have a host that sniffs all datadog-agent HTTP requests:

{  
   "series":[  
      {  
         "tags":null,
         "metric":"datadog.dogstatsd.packet.count",
         "interval":10.0,
         "device_name":null,
         "host":"gke-preemptible2-1be1ade4-ms1r.c.project.internal",
         "points":[  
            [  
               1507283690.0,
               1809
            ]
         ],
         "type":"gauge"
      },

As you can see, the metric datadog.dogstatsd.packet.count is sent with "tags":null, but I guess it should be "tags": "DC:tier1-europe-west1".

@hkaj
Member

hkaj commented Oct 6, 2017

Hi @username1366
As @xvello said, tags in datadog.conf are resolved in our backend; they are sent separately as host tags and are not applied to every metric client-side. So this behavior is expected, and the tag should be set in the app if gke-preemptible2-1be1ade4-ms1r.c.project.internal is also the host name (or a host alias) for this node and you have a full agent (as opposed to just dogstatsd) running on it.
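To tag individual metrics (rather than relying on backend-resolved host tags), the tags have to travel with each dogstatsd packet, encoded as a `|#tag1,tag2` suffix in the datagram. A minimal sketch of that wire format, assuming a hypothetical metric name and the standard dogstatsd port 8125:

```python
import socket

def format_dogstatsd(metric, value, mtype="g", tags=None):
    # dogstatsd wire format: metric:value|type|#tag1,tag2
    packet = f"{metric}:{value}|{mtype}"
    if tags:
        packet += "|#" + ",".join(tags)
    return packet

def send_packet(packet, host="127.0.0.1", port=8125):
    # Fire-and-forget UDP send to the local dogstatsd listener.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet.encode("utf-8"), (host, port))
    sock.close()

# Hypothetical metric, tagged per-packet on the client side:
print(format_dogstatsd("my.app.requests", 1, "c", ["DC:tier1-europe-west1"]))
# my.app.requests:1|c|#DC:tier1-europe-west1
```

Metrics sent this way carry the tag in the payload itself, so it shows up regardless of backend host-tag resolution.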

@buddyledungarees

buddyledungarees commented Dec 14, 2017

Nvm, looks like it's addressed in #266

@xvello xvello removed their assignment Jul 8, 2022