This deployment aggregates the following logs to Elasticsearch, where logging events include OpenTelemetry trace and span IDs.
- System logs
- Request logs
- Audit logs
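For illustration, a system logging event shipped as JSON might look similar to the following. The exact fields depend on the log4j2 configuration, and the values here are placeholders.
{
  "timeMillis": 1741132800000,
  "level": "INFO",
  "loggerName": "...",
  "message": "...",
  "contextMap": {
    "TraceId": "ce41b85c6f00f167baa53fd814d23c30",
    "SpanId": "baa53fd814d23c30"
  },
  "hostname": "curity-idsvr-runtime-0"
}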
Start with base deployments such as the following examples from the Kubernetes Training repository.
- The Curity Identity Server example deployment.
- The Curity Token Handler example deployment.
Before deploying the Curity product, edit its log4j2.xml file.
Replace default layouts with JSON layouts for the system, request and audit logs.
<Appenders>
  <Console name="stdout" target="SYSTEM_OUT">
    <JSONLayout compact="true" eventEol="true" properties="true" includeTimeMillis="true">
      <KeyValuePair key="hostname" value="${env:HOSTNAME}" />
    </JSONLayout>
    ...
  </Console>
</Appenders>
Use sidecar containers to tail request and audit log files, so that their content is written to log files on Kubernetes nodes, ready for log shipping.
Do so by updating the Helm chart values.yaml file.
curity:
  runtime:
    logging:
      level: INFO
      image: 'busybox:latest'
      stdout: true
      logs:
        - request
        - audit
      ...
- An index template helps to ensure type-safe storage of fields in logging events.
- An ingest pipeline enables Elasticsearch to transform received log data to the final JSON format (a sketch of both follows this list).
- A Kubernetes job runs a script to create the index template and the ingest pipeline.
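As a rough sketch, the Kubernetes job could run Elasticsearch commands similar to the following. The field mappings and processors shown here are illustrative assumptions rather than the exact content of the example deployment's script.
PUT _index_template/curity
{
  "index_patterns": ["curity-*"],
  "template": {
    "mappings": {
      "properties": {
        "timestamp": { "type": "date" },
        "level": { "type": "keyword" },
        "contextMap": {
          "properties": {
            "TraceId": { "type": "keyword" },
            "SpanId": { "type": "keyword" }
          }
        }
      }
    }
  }
}

PUT _ingest/pipeline/curity
{
  "description": "Transforms received log data to the final JSON format",
  "processors": [
    { "rename": { "field": "timeMillis", "target_field": "timestamp", "ignore_missing": true } }
  ]
}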
Elasticsearch creates indexes when Filebeat first sends a particular type of log data for a new day.
Each document in the results has an Elasticsearch index such as curity-request-2025.03.05.
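To confirm which daily indexes exist, list them with a command like the following.
GET _cat/indices/curity-*?v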
Use Elasticsearch commands to view the index template and ensure that it gets matched to indexes.
GET /_index_template/curity
POST /_index_template/_simulate_index/curity-request-2025.03.05
The Filebeat log shipper reads log files from the /var/log/containers folder on Kubernetes nodes.
The log shipper uploads logging events to an Elasticsearch index calculated from the file path and date.
The following partial configuration shows the approach.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/curity-idsvr-runtime*audit*.log
      - /var/log/containers/tokenhandler-runtime*-audit*.log
    json:
      keys_under_root: true
      add_error_key: false
    fields:
      logtype: 'audit'
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  index: "curity-%{[fields.logtype]}-%{+yyyy.MM.dd}"
  pipelines:
    - pipeline: curity
If you use the example deployment, run the following script to deploy log aggregation components.
Alternatively, adapt the scripting to match your own deployments.
./deploy-elastic-stack.sh
The script runs a demo deployment of Elasticsearch, Kibana and Filebeat.
The Kibana frontend uses an external URL of https://logs.testcluster.example.
To make the URL resolvable, get the API gateway's external IP address.
kubectl get svc -n apigateway
Then add the Kibana hostname, alongside any other entries for that IP address, to the local computer's /etc/hosts file.
172.20.0.5 logs.testcluster.example
Sign in to Kibana with the following details and access log data from Dev Tools.
- URL: https://logs.testcluster.example/app/dev_tools#/console
- User: elastic
- Password: Password1
For example, run Lucene or SQL queries on these indexes to operate on JSON log data.
You can quickly filter logging events using index fields like an OpenTelemetry trace ID.
GET curity-system*/_search
{
  "query": {
    "match": {
      "contextMap.TraceId": "ce41b85c6f00f167baa53fd814d23c30"
    }
  }
}
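Elasticsearch SQL provides an alternative syntax for the same kind of filtering. The following illustrative query mirrors the previous example; the index pattern and field name are assumptions about your log data.
POST _sql?format=txt
{
  "query": "SELECT * FROM \"curity-system*\" WHERE contextMap.TraceId = 'ce41b85c6f00f167baa53fd814d23c30'"
}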
- See the Logging Best Practices article to learn more about Curity Identity Server logs.
- See the Elasticsearch Tutorial for a summary of the Elasticsearch integration.
Please visit curity.io for more information about the Curity Identity Server.