# Custom Logs Configuration

## How do I use a custom logSource (oci_la_log_source_name) or other custom configuration for pod/container logs collected through the "Kubernetes Container Generic Logs" logSource?

Out of the box, a generic logSource with a time-only parser is configured to collect all application pod logs from /var/log/containers/. This ensures that the logs generated by all pods are collected and pushed to Logging Analytics. Often you may need to configure a custom logSource for a particular pod's logs, either by using one of the existing out-of-the-box logSources in Logging Analytics or by defining a custom logSource that matches your requirements. Once you have defined or identified a logSource for a particular pod's logs, the following are a couple of ways to associate those pod logs with that logSource.

### Use Pod Annotations

In this approach, all you need to do is add the annotation oracle.com/oci_la_log_source_name (with the logSource name as its value) to the pods of your choice. This approach works for all use cases except multi-line plain-text formatted logs. An illustrative example follows the note below.

- Refer to this doc to learn how to add the annotation through the Pod's metadata section. This is the recommended approach, as it provides persistent behavior.
- Refer to this doc to learn how to add the annotation using the 'kubectl annotate' command. You may use this approach for quick testing.

Note: In addition to logSource, the following configuration parameters are supported for customization through pod annotations:

- oracle.com/oci_la_log_group_id => to use a custom logGroupId (oci_la_log_group_id)
- oracle.com/oci_la_entity_id => to use a custom entityId (oci_la_entity_id)
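
As an illustration, here is a minimal sketch of a Deployment pod template carrying these annotations. The annotation keys are the ones documented above; the workload name, labels, container image, logSource name, and log group OCID are hypothetical placeholders to be replaced with your own values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                      # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        # Associate this pod's container logs with a custom logSource
        oracle.com/oci_la_log_source_name: "Guestbook Frontend Logs"
        # Optional override (see the note above)
        oracle.com/oci_la_log_group_id: "<custom-log-group-ocid>"
    spec:
      containers:
        - name: frontend
          image: example.io/frontend:latest   # placeholder image
```

For quick testing, the same annotation can be applied to a running pod with, for example, `kubectl annotate pod <pod-name> oracle.com/oci_la_log_source_name="Guestbook Frontend Logs"`; annotations applied this way do not survive pod recreation, which is why the metadata-based approach is recommended.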

### customLogs section in Helm chart values.yaml

In this approach, all you need to do is provide the necessary configuration information, such as the log file path, logSource, and multi-line start regular expression (in the case of multi-line logs), in the customLogs section of override_values.yaml. The corresponding Fluentd configuration is generated automatically from this information.

Note: This approach is valid only when using the Helm chart based installation.

The following example demonstrates a container customLogs configuration:

```yaml
...
...
oci-onm-logan:
  ...
  ...
  fluentd:
    ...
    ...
    customLogs:
      custom-log1:
        path: /var/log/containers/custom-1.log
        ociLALogSourceName: "Custom1 Logs"
        multilineStartRegExp: <Multi-line start expression for multi-line logs>
        isContainerLog: true
```

The following example demonstrates a non-container customLogs configuration:

```yaml
...
...
oci-onm-logan:
  ...
  ...
  fluentd:
    ...
    ...
    customLogs:
      custom-log2:
        path: /var/log/custom/custom-2.log
        ociLALogSourceName: "Custom2 Logs"
        multilineStartRegExp: <Multi-line start expression for multi-line logs>
        isContainerLog: false
```
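
In both examples, multilineStartRegExp is a placeholder for a regular expression that matches the first line of each log record; lines that do not match are treated as continuations of the previous record. As a purely illustrative sketch, assuming the hypothetical custom-1.log records each begin with a timestamp such as 2023-01-01 10:00:00, the container example could be filled in as follows; adjust the expression to your own log format.

```yaml
oci-onm-logan:
  fluentd:
    customLogs:
      custom-log1:
        path: /var/log/containers/custom-1.log
        ociLALogSourceName: "Custom1 Logs"
        # Illustrative only: a new record starts with "YYYY-MM-DD hh:mm:ss"
        multilineStartRegExp: /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
        isContainerLog: true
```

For single-line logs, the multilineStartRegExp field can be omitted entirely.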

### Use Fluentd conf

In this approach, a new set of source and filter sections has to be created in the customFluentdConf section of values.yaml. The following example demonstrates a custom Fluentd configuration that tags /var/log/containers/frontend-*.log with the logSource "Guestbook Frontend Logs" (to be added to the Helm chart override_values.yaml, under the customFluentdConf section).

```yaml
...
...
oci-onm-logan:
  ...
  ...
  fluentd:
    ...
    ...
    customFluentdConf: |
         <source>
            @type tail
            @id in_tail_frontend
            path_key tailed_path
            path /var/log/containers/frontend-*.log
            pos_file /var/log/oci_la_fluentd_outplugin/pos/frontend.logs.pos
            tag oci.oke.frontend.*
            read_from_head "#{ENV['FLUENT_OCI_READ_FROM_HEAD'] || true}"
            <parse>
            {{- if eq $runtime "docker" }}
            @type json
            {{- else}}
            @type cri
            {{- end }}
            </parse>
         </source>

         # Record transformer filter to apply Logging Analytics configuration to each record.
         <filter oci.oke.frontend.**>
            @type record_transformer
            enable_ruby true
            <record>
            oci_la_metadata ${{"{{"}}"Kubernetes Cluster Name": "#{ENV['FLUENT_OCI_KUBERNETES_CLUSTER_NAME'] || 'UNDEFINED'}", "Kubernetes Cluster ID": "#{ENV['FLUENT_OCI_KUBERNETES_CLUSTER_ID'] || 'UNDEFINED'}"{{"}}"}}
            oci_la_log_group_id "#{ENV['FLUENT_OCI_KUBERNETES_LOGGROUP_ID'] || ENV['FLUENT_OCI_DEFAULT_LOGGROUP_ID']}"
            oci_la_log_path "${record['tailed_path']}"
            oci_la_log_source_name "Guestbook Frontend Logs"
            {{- if eq $runtime "docker" }}
            message "${record['log']}"
            {{- end }}
            tag ${tag}
            </record>
         </filter>
```

Note: The log path /var/log/containers/frontend-*.log has to be excluded from the generic container logs to avoid duplicate log collection. Add the log path to the exclude_path value under the in_tail_containerlogs source section (via the genericContainerLogs section of override_values.yaml, as shown below).

```yaml
...
...
oci-onm-logan:
  ...
  ...
  fluentd:
    ...
    ...
    genericContainerLogs:
      exclude_path:
        - '"/var/log/containers/kube-proxy-*.log"'
        ...
        ...
        - '"/var/log/containers/frontend-*.log"'

In addition to the above, you may need to modify the source section to add a multiline parser if the logs are in plain-text multi-line format, or add a concat plugin filter if the logs are multi-line but wrapped in JSON. Refer to the oci-onm-logan chart logs-configmap template for examples; an illustrative sketch follows.
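
For instance, here is a hedged sketch of the concat-filter variant for the hypothetical frontend logs from the example above, to be added to the same customFluentdConf section. It assumes the fluent-plugin-concat plugin is available in the Fluentd image and that the log line ends up in the message field (it may be log for the docker runtime); the regular expression is illustrative only.

```yaml
oci-onm-logan:
  fluentd:
    customFluentdConf: |
         # ... existing <source>/<filter> sections from the example above ...

         # Sketch: stitch multi-line records (e.g. stack traces) that arrive as
         # separate JSON/CRI-wrapped lines back into a single record.
         # Assumes fluent-plugin-concat is available in the image.
         <filter oci.oke.frontend.**>
            @type concat
            key message
            # Illustrative only: a new record starts with "YYYY-MM-DD hh:mm:ss"
            multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
         </filter>
```

For plain-text multi-line logs, the alternative is to change the <parse> section of the tail source to Fluentd's built-in multiline parser (@type multiline with a matching format_firstline expression) instead of adding a concat filter.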