Commit 01a3250

added fluentd

1 parent 7be4129 commit 01a3250


logging/README.md

Lines changed: 127 additions & 5 deletions
@@ -53,9 +53,7 @@ Each sink can be configured with **parameters**
 | `exit-on-error` | When true, stops the Cockroach node if an error is encountered while writing to the sink. We recommend enabling this option on file sinks in order to avoid losing any log entries. When set to false, this can be used to mark certain sinks (such as stderr) as non-critical. |
 | `auditable` | If true, enables exit-on-error on the sink. Also disables buffered-writes if the sink is under file-groups. This guarantees non-repudiability for any logs in the sink, but can incur a performance overhead and higher disk IOPS consumption. This setting is typically enabled for security-related logs. |
 
-## Labs
-
-### Setup
+## Setup
 
 Create file `logs.yaml` to store your logging configuration.
 Read through it:
@@ -79,12 +77,17 @@ file-defaults:
   auditable: false
 fluent-defaults:
   filter: INFO
-  format: json
+  # format: json-fluent # default
   redact: false
   redactable: true
   exit-on-error: false
   auditable: false
 sinks:
+  # fluent-servers:
+  #   myhost:
+  #     channels: [DEV, OPS, HEALTH]
+  #     address: 127.0.0.1:5170
+  #     net: tcp
   file-groups:
     default:
       channels: [DEV, OPS, HEALTH, SQL_SCHEMA, USER_ADMIN, PRIVILEGES]
@@ -103,7 +106,7 @@ sinks:
   stderr:
     channels: all
     filter: NONE
-    format: json-fluent-compact
+    format: json
     redact: false
     redactable: true
     exit-on-error: true
@@ -128,6 +131,8 @@ cockroach start-single-node --certs-dir=certs --background --log-config-file=log
 cockroach sql --certs-dir=certs
 ```
 
+### File output
+
 Good, now open a new Terminal window to inspect the files that were created
 
 ```bash
@@ -201,6 +206,122 @@ I211027 20:34:11.553235 10838 9@util/log/event_log.go:32 ⋮ [intExec=‹cancel/
 
 Very good! Now you have a simple setup to test how logging works.
 
+### Fluentd output
+
+You can also hook up [Fluentd](https://www.fluentd.org/) to test how logging is handled over a network.
+
+Create a file `fluent.conf` in your home directory
+
+```text
+##########
+# Inputs #
+##########
+
+# this source reads files in a directory
+# <source>
+#   @type tail
+#   path "/var/log/*.log"
+#   read_from_head true
+#   tag gino
+#   <parse>
+#     @type json
+#     time_type string
+#     time_format "%Y-%m-%d %H:%M:%S.%N"
+#     # time_type float
+#     time_key timestamp
+#   </parse>
+# </source>
+
+# this source listens on a network port
+<source>
+  @type tcp
+  tag CRDB
+  <parse>
+    @type json
+    time_type float
+    time_key timestamp
+  </parse>
+  port 5170
+  bind 0.0.0.0
+  delimiter "\n"
+</source>
+
+###########
+# Outputs #
+###########
+
+# output to file buffer
+<match **>
+  @type file
+  path /output/
+  append true
+  <buffer>
+    timekey 1d
+    timekey_use_utc true
+    timekey_wait 1m
+  </buffer>
+  <format>
+    @type out_file
+    delimiter ","
+    time_format "%Y-%m-%d %H:%M:%S.%N"
+  </format>
+</match>
+```
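The `tcp` source above treats each newline-delimited line as one JSON object carrying a float `timestamp` key. As a rough sketch (the `severity` and `message` field names here are made up for illustration, not CockroachDB's actual schema), you can hand-craft such an event and, once the container is up, push it through the open port with `nc`:

```bash
# build a single-line JSON event in the shape the tcp source expects:
# newline-delimited, with a float "timestamp" key
event=$(printf '{"timestamp":%s.0,"severity":"I","message":"hand-crafted test event"}' "$(date +%s)")
echo "$event"

# once fluentd is listening on 5170, deliver it (run manually):
# echo "$event" | nc 127.0.0.1 5170
```

If the delivery works, the event shows up in the `/output` buffer files tagged `CRDB`, just like real CockroachDB traffic.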
+
+Then, start the server using Docker
+
+```bash
+cd ~
+mkdir output
+
+# make sure you map the path accordingly
+docker run -ti --rm -v /Users/fabio/fluent.conf:/fluentd/etc/fluent.conf -v /Users/fabio/output:/output -p=5170:5170 fluent/fluentd
+```
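Before pointing CockroachDB at the sink, it can help to confirm the port mapping worked. A minimal probe, assuming the container publishes 5170 on localhost as above (a sketch using the shell's `/dev/tcp` redirection in bash; in shells without it, the redirection simply fails and the fallback message prints):

```bash
# probe 127.0.0.1:5170; prints one line either way
if (exec 3<>/dev/tcp/127.0.0.1/5170) 2>/dev/null; then
  echo "fluentd is listening on 5170"
else
  echo "port 5170 is not open yet"
fi
```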
+
+If Fluentd started successfully, you should see this output on stdout:
+
+```text
+2021-11-01 16:56:24 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
+2021-11-01 16:56:24 +0000 [info]: using configuration file: <ROOT>
+[...]
+2021-11-01 16:56:24 +0000 [info]: adding source type="tcp"
+2021-11-01 16:56:24 +0000 [info]: #0 starting fluentd worker pid=17 ppid=7 worker=0
+2021-11-01 16:56:24 +0000 [info]: #0 fluentd worker is now running worker=0
+```
+
+OK, Fluentd started successfully and is listening for incoming messages.
+
+Now, stop CockroachDB
+
+```bash
+cockroach quit --certs-dir=certs
+```
+
+Uncomment this section in the `logs.yaml` file
+
+```yaml
+fluent-servers:
+  myhost:
+    channels: [DEV, OPS, HEALTH]
+    address: 127.0.0.1:5170
+    net: tcp
+```
+
+Restart CockroachDB
+
+```bash
+cockroach start-single-node --certs-dir=certs --background --log-config-file=logs.yaml
+```
+
+Check directory `output` and tail the file
+
+```bash
+$ tail -n1 output/buffer.*.log
+2021-11-01 17:03:09.353577800 CRDB {"tag":"cockroach.health","c":2,"t":"1635786189.332476000","x":"92c25c70-6e91-41f6-9480-6b2de3cb729a","N":1,"s":1,"sev":"I","g":325,"f":"server/status/runtime.go","l":569,"n":73,"r":1,"tags":{"n":"1"},"message":"runtime stats: 101 MiB RSS, 271 goroutines (stacks: 3.5 MiB), 27 MiB/58 MiB Go alloc/total (heap fragmentation: 7.4 MiB, heap reserved: 8.9 MiB, heap released: 17 MiB), 3.8 MiB/7.0 MiB CGO alloc/total (0.0 CGO/sec), 0.0/0.0 %(u/s)time, 0.0 %gc (0x), 79 KiB/180 KiB (r/w)net"}
+```
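Each buffered line carries a timestamp, the tag (`CRDB`), and then the JSON payload. A small sketch that pulls out the severity and message fields (`sev` and `message`, as seen in the sample line above), using a shortened copy of that line and `python3` for JSON decoding:

```bash
# strip everything up to and including the "CRDB " tag, leaving the JSON payload
line='2021-11-01 17:03:09.353577800 CRDB {"sev":"I","message":"runtime stats: 101 MiB RSS"}'
payload=${line#*CRDB }
echo "$payload" | python3 -c 'import json,sys; e=json.load(sys.stdin); print(e["sev"], e["message"])'
# prints: I runtime stats: 101 MiB RSS
```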
+
+Good stuff, you successfully integrated Fluentd with the CockroachDB logging framework!
+
 ## Reference
 
 - [Logging Overview](https://www.cockroachlabs.com/docs/stable/logging-overview.html)
@@ -209,3 +330,4 @@ Very good! Now you have a simple setup to test how logging works.
 - [Log Formats](https://www.cockroachlabs.com/docs/v21.1/log-formats.html)
 - [Notable Event Types](https://www.cockroachlabs.com/docs/v21.1/eventlog.html)
 - [Cluster Settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings)
+- [Fluentd Docs](https://docs.fluentd.org/)
