add custom log label prefix and update document
quanzhao.cqz committed Jan 29, 2018
1 parent a7d5622 commit 07bda8c
Showing 18 changed files with 386 additions and 134 deletions.
28 changes: 14 additions & 14 deletions README.md
@@ -1,9 +1,9 @@
fluentd-pilot
=============
log-pilot
=========

[![CircleCI](https://circleci.com/gh/AliyunContainerService/fluentd-pilot.svg?style=svg)](https://circleci.com/gh/AliyunContainerService/fluentd-pilot)
[![CircleCI](https://circleci.com/gh/AliyunContainerService/log-pilot.svg?style=svg)](https://circleci.com/gh/AliyunContainerService/log-pilot)

`fluentd-pilot` is an awesome docker log tool. With `fluentd-pilot` you can collect logs from docker hosts and send them to your centralize log system such as elasticsearch, graylog2, awsog and etc. `fluentd-pilot` can collect not only docker stdout but also log file that inside docker containers.
`log-pilot` is an awesome docker log tool. With `log-pilot` you can collect logs from docker hosts and send them to your centralized log system, such as elasticsearch, graylog2 or awslogs. `log-pilot` can collect not only docker stdout but also log files inside docker containers.

Try it
======
@@ -14,8 +14,8 @@ Prerequisites:
- Docker Engine >= 1.10

```
git clone git@github.com:AliyunContainerService/fluentd-pilot.git
cd fluentd-pilot/quickstart
git clone git@github.com:AliyunContainerService/log-pilot.git
cd log-pilot/quickstart
./run
```

@@ -36,7 +36,7 @@ Quickstart
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /:/host \
registry.cn-hangzhou.aliyuncs.com/acs-sample/fluentd-pilot:latest
registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```

### Run applications whose logs need to be collected
@@ -51,29 +51,29 @@ docker run -it --rm -p 10080:8080 \
tomcat
```

Now watch the output of fluentd-pilot. You will find that fluentd-pilot get all tomcat's startup logs. If you access tomcat with your broswer, access logs in `/usr/local/tomcat/logs/localhost_access_log.\*.txt` will also be displayed in fluentd-pilot's output.
Now watch the output of log-pilot. You will find that log-pilot gets all of tomcat's startup logs. If you access tomcat with your browser, the access logs in `/usr/local/tomcat/logs/localhost_access_log.*.txt` will also be displayed in log-pilot's output.

More Info: [Documents](docs/docs.md)
More Info: [Fluentd Plugin](docs/fluentd/docs.md) and [Filebeat Plugin](docs/filebeat/docs.md)

Feature
========

- Single fluentd process per docker host. You don't need to create new fluentd process for every docker container.
- Support both stdout and file. Either Docker log driver or logspout can only collect stdout.
- Support both [fluentd plugin](docs/fluentd/docs.md) and [filebeat plugin](docs/filebeat/docs.md). You don't need to create new fluentd or filebeat process for every docker container.
- Support both stdout and log files. The docker log driver and logspout can only collect stdout.
- Declarative configuration. You need to do nothing but declare the logs you want to collect.
- Support many log management systems: elasticsearch, graylog2, awslogs and more.
- Tags. You can add tags to the collected logs and filter by them later in your log management system.

Build fluentd-pilot
Build log-pilot
===================

Prerequisites:

- Go >= 1.6

```
go get github.com/AliyunContainerService/fluentd-pilot
cd $GOPATH/github.com/AliyunContainerService/fluentd-pilot/docker-images
go get github.com/AliyunContainerService/log-pilot
cd $GOPATH/github.com/AliyunContainerService/log-pilot/docker-images
./build.sh # This will create a new docker image named pilot:latest
```

1 change: 0 additions & 1 deletion docker-images/config.fluentd
@@ -33,7 +33,6 @@ ${FLUENTD_BUFFER_CHUNK_FULL_THRESHOLD:+chunk_full_threshold ${FLUENTD_BUFFER_CHU
${FLUENTD_BUFFER_COMPRESS:+compress ${FLUENTD_BUFFER_COMPRESS}}
${FLUENTD_FLUSH_INTERVAL:+flush_interval $FLUENTD_FLUSH_INTERVAL}
${FLUENTD_FLUSH_MODE:+flush_mode ${FLUENTD_FLUSH_MODE}}
${FLUENTD_FLUSH_INTERVAL:+flush_interval ${FLUENTD_FLUSH_INTERVAL}}
${FLUENTD_FLUSH_THREAD_COUNT:+flush_thread_count ${FLUENTD_FLUSH_THREAD_COUNT}}
${FLUENTD_FLUSH_AT_SHUTDOWN:+flush_at_shutdown $FLUENTD_FLUSH_AT_SHUTDOWN}
${FLUENTD_DISABLE_RETRY_LIMIT:+disable_retry_limit $FLUENTD_DISABLE_RETRY_LIMIT}
5 changes: 3 additions & 2 deletions docker-images/entrypoint
@@ -8,6 +8,7 @@ import subprocess
base = '/host'
pilot_fluentd = "fluentd"
pilot_filebeat = "filebeat"
ENV_PILOT_TYPE = "PILOT_TYPE"


def umount(volume):
@@ -33,7 +34,7 @@ def cleanup():


def run():
pilot_type = os.environ.get("PILOT_TYPE")
pilot_type = os.environ.get(ENV_PILOT_TYPE)
if pilot_filebeat == pilot_type:
tpl_config = "/pilot/filebeat.tpl"
else:
@@ -44,7 +45,7 @@ def run():


def config():
pilot_type = os.environ.get("PILOT_TYPE")
pilot_type = os.environ.get(ENV_PILOT_TYPE)
if pilot_filebeat == pilot_type:
print "enable pilot:", pilot_filebeat
subprocess.check_call(['/pilot/config.filebeat'])
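The entrypoint change above replaces the repeated `"PILOT_TYPE"` string literal with the shared `ENV_PILOT_TYPE` constant. A minimal sketch of the selection logic it guards (simplified; the fluentd template path is an assumption, since only the filebeat path appears in the diff):

```python
ENV_PILOT_TYPE = "PILOT_TYPE"   # same constant the entrypoint defines
PILOT_FILEBEAT = "filebeat"

def template_for(env):
    """Pick the config template based on the PILOT_TYPE variable."""
    # filebeat only when explicitly requested; fluentd is the default
    if env.get(ENV_PILOT_TYPE) == PILOT_FILEBEAT:
        return "/pilot/filebeat.tpl"
    return "/pilot/fluentd.tpl"  # assumed default template path

print(template_for({"PILOT_TYPE": "filebeat"}))  # /pilot/filebeat.tpl
print(template_for({}))                          # /pilot/fluentd.tpl
```

Keeping the key in one constant means `run()` and `config()` cannot drift apart on the variable name.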
Binary file removed docs/architecture.png
69 changes: 0 additions & 69 deletions docs/docs.md

This file was deleted.

Binary file added docs/filebeat/architecture.png
133 changes: 133 additions & 0 deletions docs/filebeat/docs.md
@@ -0,0 +1,133 @@
Architecture
============

On every docker host, run one log-pilot instance. Log-pilot monitors docker events, parses the log labels of each new docker container, generates the appropriate log configuration, and notifies the fluentd or filebeat process to reload it.

![Architecture](architecture.png)

Run Log-pilot With Filebeat Plugin
=================================

You must set the environment variable ```PILOT_TYPE=filebeat``` to enable the filebeat plugin within log-pilot.

### Start log-pilot in docker container

```
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /:/host \
-e PILOT_TYPE=filebeat \
registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```

By default, all the logs that log-pilot collects are written to log-pilot's stdout.

### Work with elasticsearch

The command below runs log-pilot with the elasticsearch output, which makes log-pilot send all logs to elasticsearch.

```
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /:/host \
-e PILOT_TYPE=filebeat \
-e FILEBEAT_OUTPUT=elasticsearch \
-e ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST} \
-e ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT} \
registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```

Log output plugin configuration
===============================

You can set the environment variable ```FILEBEAT_OUTPUT``` to determine which log management system the logs will be sent to.

### Supported log management systems

- elasticsearch

```
ELASTICSEARCH_HOST "(required) elasticsearch host"
ELASTICSEARCH_PORT "(required) elasticsearch port"
ELASTICSEARCH_USER "(optional) elasticsearch authentication username"
ELASTICSEARCH_PASSWORD "(optional) elasticsearch authentication password"
ELASTICSEARCH_PATH "(optional) elasticsearch http path prefix"
ELASTICSEARCH_SCHEME "(optional) elasticsearch scheme, default is http"
```

- logstash

```
LOGSTASH_HOST "(required) logstash host"
LOGSTASH_PORT "(required) logstash port"
```

- file

```
FILE_PATH "(required) output log file directory"
FILE_NAME "(optional) the name of the generated files, default is filebeat"
FILE_ROTATE_SIZE "(optional) the maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB"
FILE_NUMBER_OF_FILES "(optional) the maximum number of files to save under path. When this number of files is reached, the oldest file is deleted, and the rest of the files are shifted from last to first. The default is 7 files"
FILE_PERMISSIONS "(optional) permissions to use for file creation, default is 0600"
```

- redis

```
REDIS_HOST "(required) redis host"
REDIS_PORT "(required) redis port"
REDIS_PASSWORD "(optional) redis authentication password"
REDIS_DATATYPE "(optional) redis data type to use for publishing events"
REDIS_TIMEOUT "(optional) redis connection timeout in seconds, default is 5"
```

- kafka

```
KAFKA_BROKERS "(required) kafka brokers"
KAFKA_VERSION "(optional) kafka version"
KAFKA_USERNAME "(optional) kafka username"
KAFKA_PASSWORD "(optional) kafka password"
KAFKA_PARTITION_KEY "(optional) kafka partition key"
KAFKA_PARTITION "(optional) kafka partition strategy"
KAFKA_CLIENT_ID "(optional) the configurable ClientID used for logging, debugging, and auditing purposes. The default is beats"
KAFKA_BROKER_TIMEOUT "(optional) the number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds)"
KAFKA_KEEP_ALIVE "(optional) keep-alive period for an active network connection. If 0s, keep-alives are disabled. The default is 0 seconds"
KAFKA_REQUIRE_ACKS "(optional) ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1"
```
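The environment variables above translate into the corresponding output section of the generated filebeat configuration. A rough sketch of how the elasticsearch settings could be assembled (the mapping to filebeat option names is an assumption for illustration; log-pilot's actual config scripts do this in shell):

```python
def elasticsearch_output(env):
    """Build an elasticsearch output dict from ELASTICSEARCH_* variables."""
    # required settings: fail loudly if they are missing
    hosts = ["%s:%s" % (env["ELASTICSEARCH_HOST"], env["ELASTICSEARCH_PORT"])]
    out = {"hosts": hosts}
    # optional settings: include them only when set
    optional = [
        ("ELASTICSEARCH_USER", "username"),
        ("ELASTICSEARCH_PASSWORD", "password"),
        ("ELASTICSEARCH_PATH", "path"),
        ("ELASTICSEARCH_SCHEME", "protocol"),
    ]
    for var, key in optional:
        if env.get(var):
            out[key] = env[var]
    return out

print(elasticsearch_output({"ELASTICSEARCH_HOST": "es.example.com",
                            "ELASTICSEARCH_PORT": "9200"}))
```

The required/optional split mirrors the tables above: missing required variables raise an error, while optional ones are simply omitted from the output.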

### Other log management systems

Support for other log management systems is in progress. You are welcome to create a pull request.

Declare log configuration of docker container
=============================================

### Basic usage

```
docker run -it --rm -p 10080:8080 \
-v /usr/local/tomcat/logs \
--label aliyun.logs.catalina=stdout \
--label aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt \
tomcat
```

The command above runs a tomcat container and expects log-pilot to collect both tomcat's stdout and the logs in `/usr/local/tomcat/logs/localhost_access_log.*.txt`. `-v /usr/local/tomcat/logs` is needed here so that log-pilot can access files in the tomcat container.

### More

There are many labels you can use to describe the log info.

- `aliyun.logs.$name=$path`
    - `name` is an identifier and can be any string you want. The valid characters in `name` are `0-9a-zA-Z_-`.
    - `path` is the path of the log file and can contain wildcards. `stdout` is a special value which means the stdout of the container.
- `aliyun.logs.$name.format=none|json|csv|nginx|apache2|regexp`: the format of the log.
    - none: pure text.
    - json: a json object per line.
    - regexp: parse the log with a regular expression. The pattern is specified by `aliyun.logs.$name.format.pattern = $regex`.
- `aliyun.logs.$name.tags="k1=v1,k2=v2"`: tags that will be appended to the log.
- `aliyun.logs.$name.target=target-for-log-storage`: target is used by the output plugins to decide where to store the logs. For the elasticsearch output, target is the index in elasticsearch; for the aliyun_sls output, target is the logstore in aliyun sls. The default value of target is the log name.
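To make the label scheme concrete, here is a small sketch (a hypothetical helper, not part of log-pilot) that groups `aliyun.logs.*` labels from a container's label map by log name, assuming the default `aliyun.logs.` prefix:

```python
import re

LABEL_PREFIX = "aliyun.logs."              # default prefix; this commit makes it customizable
NAME_RE = re.compile(r"^[0-9a-zA-Z_-]+$")  # valid characters for $name

def parse_log_labels(labels):
    """Group aliyun.logs.* labels by log name."""
    logs = {}
    for key, value in labels.items():
        if not key.startswith(LABEL_PREFIX):
            continue
        name, _, attr = key[len(LABEL_PREFIX):].partition(".")
        if not NAME_RE.match(name):
            raise ValueError("invalid log name: %r" % name)
        # a bare aliyun.logs.$name label carries the path (or "stdout")
        logs.setdefault(name, {})[attr or "path"] = value
    return logs

labels = {
    "aliyun.logs.catalina": "stdout",
    "aliyun.logs.access": "/usr/local/tomcat/logs/localhost_access_log.*.txt",
    "aliyun.logs.access.format": "json",
    "aliyun.logs.access.tags": "k1=v1,k2=v2",
}
print(parse_log_labels(labels))
```

For the tomcat example above this yields one entry for `catalina` (stdout) and one for `access` with its path, format, and tags grouped together.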
Binary file added docs/fluentd/architecture.png
