add custom log label prefix and update document
quanzhao.cqz committed on Jan 29, 2018 (commit 07bda8c, 1 parent: a7d5622)
Showing 18 changed files with 386 additions and 134 deletions.
@@ -1,9 +1,9 @@
-fluentd-pilot
-=============
+log-pilot
+=========

-[](https://circleci.com/gh/AliyunContainerService/fluentd-pilot)
+[](https://circleci.com/gh/AliyunContainerService/log-pilot)

-`fluentd-pilot` is an awesome docker log tool. With `fluentd-pilot` you can collect logs from docker hosts and send them to your centralize log system such as elasticsearch, graylog2, awsog and etc. `fluentd-pilot` can collect not only docker stdout but also log file that inside docker containers.
+`log-pilot` is an awesome docker log tool. With `log-pilot` you can collect logs from docker hosts and send them to your centralized log system such as elasticsearch, graylog2, awslogs and so on. `log-pilot` can collect not only docker stdout but also log files inside docker containers.

 Try it
 ======
@@ -14,8 +14,8 @@ Prerequisites:
 - Docker Engine >= 1.10

 ```
-git clone git@github.com:AliyunContainerService/fluentd-pilot.git
-cd fluentd-pilot/quickstart
+git clone git@github.com:AliyunContainerService/log-pilot.git
+cd log-pilot/quickstart
 ./run
 ```
@@ -36,7 +36,7 @@ Quickstart
 docker run --rm -it \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v /:/host \
-    registry.cn-hangzhou.aliyuncs.com/acs-sample/fluentd-pilot:latest
+    registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
 ```

 ### Run applications whose logs need to be collected
@@ -51,29 +51,29 @@ docker run -it --rm -p 10080:8080 \
 tomcat
 ```

-Now watch the output of fluentd-pilot. You will find that fluentd-pilot get all tomcat's startup logs. If you access tomcat with your broswer, access logs in `/usr/local/tomcat/logs/localhost_access_log.\*.txt` will also be displayed in fluentd-pilot's output.
+Now watch the output of log-pilot. You will find that log-pilot gets all of tomcat's startup logs. If you access tomcat with your browser, access logs in `/usr/local/tomcat/logs/localhost_access_log.*.txt` will also be displayed in log-pilot's output.

-More Info: [Documents](docs/docs.md)
+More Info: [Fluentd Plugin](docs/fluentd/docs.md) and [Filebeat Plugin](docs/filebeat/docs.md)

 Feature
 ========

-- Single fluentd process per docker host. You don't need to create new fluentd process for every docker container.
-- Support both stdout and file. Either Docker log driver or logspout can only collect stdout.
+- Support both the [fluentd plugin](docs/fluentd/docs.md) and the [filebeat plugin](docs/filebeat/docs.md). You don't need to create a new fluentd or filebeat process for every docker container.
+- Support both stdout and log files. Either docker log driver or logspout can only collect stdout.
 - Declarative configuration. You need do nothing but declare the logs you want to collect.
 - Support many log management systems: elasticsearch, graylog2, awslogs and more.
 - Tags. You can add tags to the collected logs and later filter by them in your log management system.

-Build fluentd-pilot
+Build log-pilot
 ===================

 Prerequisites:

 - Go >= 1.6

 ```
-go get github.com/AliyunContainerService/fluentd-pilot
-cd $GOPATH/github.com/AliyunContainerService/fluentd-pilot/docker-images
+go get github.com/AliyunContainerService/log-pilot
+cd $GOPATH/src/github.com/AliyunContainerService/log-pilot/docker-images
 ./build.sh # This will create a new docker image named pilot:latest
 ```
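For reference, the image produced by `build.sh` can be run the same way as the published quickstart image. A minimal sketch, assuming the local tag `pilot:latest` mentioned in the build comment above:

```
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /:/host \
    pilot:latest
```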
@@ -0,0 +1,133 @@ (new file)
Architecture
============

On every docker host, run one log-pilot instance. Log-pilot monitors docker events, parses the log labels of newly started docker containers, generates the appropriate log configuration, and notifies the fluentd or filebeat process to reload the new configuration.
Run Log-pilot With Filebeat Plugin
==================================

You must set the environment variable `PILOT_TYPE=filebeat` to enable the filebeat plugin within log-pilot.

### Start log-pilot in a docker container

```
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /:/host \
    -e PILOT_TYPE=filebeat \
    registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```

By default, all the logs that log-pilot collects are written to log-pilot's stdout.
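To follow that output, you can tail the log-pilot container itself. A small sketch, assuming you start the container detached with a `--name` of your choice (the name is not part of the example above):

```
# start log-pilot in the background with an explicit name
docker run -d --name log-pilot \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /:/host \
    -e PILOT_TYPE=filebeat \
    registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest

# follow everything log-pilot writes to its stdout
docker logs -f log-pilot
```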
### Work with elasticsearch

The command below runs log-pilot with the elasticsearch output, which makes log-pilot send all logs to elasticsearch.

```
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /:/host \
    -e PILOT_TYPE=filebeat \
    -e FILEBEAT_OUTPUT=elasticsearch \
    -e ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST} \
    -e ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT} \
    registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```
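The `${ELASTICSEARCH_HOST}` and `${ELASTICSEARCH_PORT}` placeholders are expanded by your shell, so set them before running the command above. A sketch with illustrative values (9200 is elasticsearch's default HTTP port):

```
# example values only; point these at your own elasticsearch instance
export ELASTICSEARCH_HOST=192.168.1.10
export ELASTICSEARCH_PORT=9200
```

Then run the `docker run` command above in the same shell.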
Log output plugin configuration
===============================

You can set the environment variable `FILEBEAT_OUTPUT` to choose which log management backend the logs are sent to. The variables each backend understands are listed below; a combined usage sketch follows the list.

### Supported log management backends
- elasticsearch

```
ELASTICSEARCH_HOST "(required) elasticsearch host"
ELASTICSEARCH_PORT "(required) elasticsearch port"
ELASTICSEARCH_USER "(optional) elasticsearch authentication username"
ELASTICSEARCH_PASSWORD "(optional) elasticsearch authentication password"
ELASTICSEARCH_PATH "(optional) elasticsearch http path prefix"
ELASTICSEARCH_SCHEME "(optional) elasticsearch scheme, default is http"
```

- logstash

```
LOGSTASH_HOST "(required) logstash host"
LOGSTASH_PORT "(required) logstash port"
```

- file

```
FILE_PATH "(required) output log file directory"
FILE_NAME "(optional) the name of the generated files, default is filebeat"
FILE_ROTATE_SIZE "(optional) the maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB"
FILE_NUMBER_OF_FILES "(optional) the maximum number of files to keep under path. When this number of files is reached, the oldest file is deleted and the remaining files are shifted from last to first. The default is 7 files"
FILE_PERMISSIONS "(optional) permissions to use for file creation, default is 0600"
```

- redis

```
REDIS_HOST "(required) redis host"
REDIS_PORT "(required) redis port"
REDIS_PASSWORD "(optional) redis authentication password"
REDIS_DATATYPE "(optional) redis data type to use for publishing events"
REDIS_TIMEOUT "(optional) redis connection timeout in seconds, default is 5"
```

- kafka

```
KAFKA_BROKERS "(required) kafka brokers"
KAFKA_VERSION "(optional) kafka version"
KAFKA_USERNAME "(optional) kafka username"
KAFKA_PASSWORD "(optional) kafka password"
KAFKA_PARTITION_KEY "(optional) kafka partition key"
KAFKA_PARTITION "(optional) kafka partition strategy"
KAFKA_CLIENT_ID "(optional) the configurable ClientID used for logging, debugging, and auditing purposes. The default is beats"
KAFKA_BROKER_TIMEOUT "(optional) the number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 seconds"
KAFKA_KEEP_ALIVE "(optional) keep-alive period for an active network connection. If 0s, keep-alives are disabled. The default is 0 seconds"
KAFKA_REQUIRE_ACKS "(optional) ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1"
```
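As the combined usage sketch mentioned above, here is what the logstash output could look like. The address is illustrative, and 5044 is the conventional beats input port:

```
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /:/host \
    -e PILOT_TYPE=filebeat \
    -e FILEBEAT_OUTPUT=logstash \
    -e LOGSTASH_HOST=192.168.1.20 \
    -e LOGSTASH_PORT=5044 \
    registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
```

The other backends follow the same pattern: set `FILEBEAT_OUTPUT` to the backend name and pass its variables with `-e`.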
### Other log management backends

Support for other log management backends is in progress. You are welcome to create a pull request.
Declare the log configuration of a docker container
====================================================

### Basic usage

```
docker run -it --rm -p 10080:8080 \
    -v /usr/local/tomcat/logs \
    --label aliyun.logs.catalina=stdout \
    --label aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt \
    tomcat
```

The command above runs a tomcat container and tells log-pilot to collect tomcat's stdout as well as the logs in `/usr/local/tomcat/logs/localhost_access_log.*.txt`. `-v /usr/local/tomcat/logs` is needed here so that log-pilot can access the files inside the tomcat container.
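To see the access-log collection in action, you can generate a few requests against the mapped port (10080 comes from `-p 10080:8080` above) and watch log-pilot's output:

```
# generate a few entries in tomcat's access log
for i in 1 2 3; do curl -s http://127.0.0.1:10080/ > /dev/null; done
```

The resulting lines from `localhost_access_log.*.txt` should then show up in log-pilot's stdout or in your configured backend.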
### More

There are many labels you can use to describe the log configuration; a combined example follows the list.

- `aliyun.logs.$name=$path`
  - `name` is an identifier and can be any string you want. The valid characters in a name are `0-9a-zA-Z_-`.
  - `path` is the log file path and may contain wildcards. `stdout` is a special value that means the stdout of the container.
- `aliyun.logs.$name.format=none|json|csv|nginx|apache2|regexp`: the format of the log.
  - none: plain text.
  - json: one json object per line.
  - regexp: parse the log with a regular expression. The pattern is specified by `aliyun.logs.$name.format.pattern=$regex`.
- `aliyun.logs.$name.tags="k1=v1,k2=v2"`: tags that will be appended to the log.
- `aliyun.logs.$name.target=target-for-log-storage`: target is used by the output plugins to decide where to store the logs. For the elasticsearch output, target is the log index in elasticsearch; for the aliyun_sls output, target is the logstore in aliyun sls. The default value of target is the log name.
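As mentioned above, here is a combined sketch that uses several of these labels at once. The nginx image, names, and tag values are illustrative; `/var/log/nginx/access.log` is nginx's default access log location:

```
docker run -d --name nginx-demo \
    -v /var/log/nginx \
    --label aliyun.logs.access=/var/log/nginx/access.log \
    --label aliyun.logs.access.format=nginx \
    --label aliyun.logs.access.tags="env=test,app=nginx-demo" \
    --label aliyun.logs.access.target=nginx-access \
    nginx
```

With these labels, log-pilot picks up the access log, parses it with the nginx format, appends the two tags, and, for the elasticsearch output, stores the entries in the `nginx-access` index.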