
Fluent Bit Agent

Overview

Fluent Bit is a lightweight, high-performance log aggregation and forwarding agent. It is fast and efficient with a small memory footprint (a few MB of RAM), and is popular in cloud and container environments where memory may be constrained.

Over 40 log sources are supported, including files, kernel logs, systemd / journald, syslog, Kubernetes, and Docker. Fluent Bit can ship logs to SparkLogs using its HTTP output over TLS.

Although AutoExtract will automatically extract structured field data from your raw logs and is recommended for most use cases, you can also manually parse log data into structured fields within the Fluent Bit agent using Fluent Bit parsers.
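For example, here is a minimal sketch of a custom parser and the parser filter that applies it; the parser name, regex, and field names are assumptions for illustration only:

# In parsers.conf (load it via parsers_file in the [SERVICE] section);
# the regex and captured field names below are illustrative assumptions
[PARSER]
    name   my_app_parser
    format regex
    regex  ^(?<severity>\w+) (?<component>[\w.]+): (?<message>.*)$

# In the main config: apply the parser to the "log" field of matching records
[FILTER]
    name     parser
    match    *
    key_name log
    parser   my_app_parser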

If relevant for your logs, we recommend using the Fluent Bit Multiline Parsing engine, which can automatically join multi-line log messages for Docker, Go, Python, and Java, and can be extended to support other languages and formats.

How to Use

Follow these steps for each logical agent that will receive data from Fluent Bit:

1. Consider buffering & storage requirements

By default, Fluent Bit buffers log events in a fixed amount of memory. This can be adjusted as desired, and buffering to disk is also available (details).
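For example, the following sketch shows how memory limits and filesystem buffering might be configured; the storage path and limits are illustrative assumptions, not required values:

# Illustrative buffering settings; adjust paths and limits to your environment
[SERVICE]
    # Directory used for filesystem buffering (assumed path)
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    # Memory cap for backlog chunks loaded back from disk
    storage.backlog.mem_limit 5M

[INPUT]
    name          tail
    path          /var/log/*.log
    # Limit in-memory buffering for this input and spill to the filesystem
    mem_buf_limit 10M
    storage.type  filesystem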

2. Create agent and get config template

In the app, click the Configure sidebar button, and then click the Agents tab.

As appropriate, create a new agent, or highlight an existing agent and click View API Key. In the dialog that shows the agent configuration template, click the Fluent Bit tab and copy the configuration template.

3. Customize configuration

Customize the copied configuration template based on your needs. At a minimum, add additional [INPUT] sections for the log sources you want to collect (e.g., files, kernel logs), as sketched below.
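For instance, here is a rough sketch of two common inputs; the paths and tags are assumptions and are not part of the template you copy from the app:

# Illustrative [INPUT] sections; add the sources you actually need
[INPUT]
    # Follow plain-text log files (assumed path and tag)
    name tail
    path /var/log/myapp/*.log
    tag  myapp.*

[INPUT]
    # Collect Linux kernel log messages
    name kmsg
    tag  kernel.*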

Example Fluent Bit configuration template

Make sure to get your configuration template from the app, as your ingestion endpoint can vary based on your provisioned region. This is an example of what it will look like:

# [SERVICE] section and [INPUT] sections go here

[OUTPUT]
    name                http
    match               *
    host                ingest-<REGION>.engine.sparklogs.app
    port                443
    uri                 /ingest/v1
    net.connect_timeout 60
    tls                 On
    tls.verify          On
    tls.verify_hostname On
    format              json
    json_date_format    iso8601
    json_date_key       observedtimestamp
    compress            gzip
    workers             4
    http_user           <AGENT-ID>
    http_passwd         <AGENT-ACCESS-TOKEN>
    # Customize headers as desired, e.g., set to "true" to disable AutoExtract
    #header X-No-AutoExtract false

4. Deploy Fluent Bit agents

On each system that will ship data to SparkLogs for this agent, install the Fluent Bit agent with the appropriate configuration, and make sure it starts on system boot.
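On systemd-based Linux systems, for example, the packaged service can usually be enabled at boot like this (the unit name assumes the official fluent-bit packages):

# Enable and start Fluent Bit at boot (assumes the package installs a "fluent-bit" unit)
sudo systemctl enable --now fluent-bit
sudo systemctl status fluent-bit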

If you're using Kubernetes, consider deploying using Helm.
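For example, a minimal sketch using the upstream fluent/fluent-bit Helm chart; the namespace and values file are assumptions, and your customized configuration would be supplied through the chart's values:

# Add the Fluent Bit Helm repository and deploy it as a DaemonSet
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
# values.yaml is an assumed file carrying your customized Fluent Bit configuration
helm upgrade --install fluent-bit fluent/fluent-bit \
    --namespace logging --create-namespace -f values.yaml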

Advanced Use Cases

Multiline aggregation

You may wish to have multiple lines of log text joined together into one log event, for example when your application logs a long stack trace after an error message. While this isn't required, you may find it easier to explore and analyze your log data when multi-line log messages are properly merged into a single log event.

Fluent Bit has advanced built-in support for multiline parsing, with ready-made parsers for Docker, Go, Python, and Java, and the ability to define custom parsers for other languages and formats.
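As a rough sketch (the key name and parser list are assumptions to adapt to your logs), the built-in parsers can be applied with the multiline filter:

# Illustrative multiline filter using built-in parsers
[FILTER]
    name                  multiline
    match                 *
    # Record field that holds the raw log line
    multiline.key_content log
    # Try the built-in Java and Python stack-trace parsers
    multiline.parser      java, python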