AWS ECS
There are several options for shipping your AWS ECS logs to SparkLogs. Here are some of the most popular:
Using AWS FireLens with Fluent Bit
You can use the AWS FireLens ECS log driver to ship logs to SparkLogs. You will need to customize the task definition so that the "logConfiguration"."options" section ships the data to SparkLogs (see the Fluent Bit config for more details):
Example logConfiguration section of the FireLens task definition. Note that you may want to use secretOptions to set the http_passwd option rather than embedding the access token in plain text; a sketch follows the example.
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "http",
        "Host": "ingest-<REGION>.engine.sparklogs.app",
        "Port": "443",
        "Uri": "/ingest/v1",
        "net.connect_timeout": "30",
        "tls": "On",
        "tls.verify": "On",
        "tls.verify_hostname": "On",
        "format": "json",
        "json_date_format": "iso8601",
        "json_date_key": "observedtimestamp",
        "compress": "gzip",
        "http_user": "<AGENT-ID>",
        "http_passwd": "<AGENT-ACCESS-TOKEN>"
    }
},
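As a minimal sketch of the secretOptions approach, assuming the agent access token is stored in AWS Secrets Manager (the region, account ID, and secret name in the ARN are placeholders, and the remaining options are the same as in the example above):
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "http",
        "Host": "ingest-<REGION>.engine.sparklogs.app",
        "Port": "443",
        "Uri": "/ingest/v1",
        "tls": "On",
        "format": "json",
        "compress": "gzip",
        "http_user": "<AGENT-ID>"
    },
    "secretOptions": [
        {
            "name": "http_passwd",
            "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT-ID>:secret:<SECRET-NAME>"
        }
    ]
},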
Using a daemon service with Fluent Bit
You can deploy Fluent Bit as a daemon service on your AWS ECS cluster to ship logs to SparkLogs.
First, follow the directions in the AWS example.
Make sure to customize your Fluent Bit config so that the [OUTPUT] section ships logs to SparkLogs (see the config template); a sketch follows below. You may also need to customize or eliminate the [FILTER] and parser settings, as the example parses NGINX logs (AutoExtract will do this automatically for different log types).
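As a sketch, an [OUTPUT] section mirroring the FireLens options above might look like the following; the host, agent ID, and access token are placeholders, so consult the SparkLogs config template for the authoritative settings:
[OUTPUT]
    # Ship all records to SparkLogs over HTTPS
    Name                 http
    Match                *
    Host                 ingest-<REGION>.engine.sparklogs.app
    Port                 443
    URI                  /ingest/v1
    tls                  On
    tls.verify           On
    tls.verify_hostname  On
    format               json
    json_date_format     iso8601
    json_date_key        observedtimestamp
    compress             gzip
    http_user            <AGENT-ID>
    http_passwd          <AGENT-ACCESS-TOKEN>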
Using the Splunk HEC log driver with Vector
Another strategy known to work well is to use the splunk (HEC) AWS log driver along with a vector.dev sidecar container that has an HEC source, any VRL transforms as desired (e.g., to store ECS metadata in fields), and an HTTP sink that forwards logs to SparkLogs (see the vector.dev configuration). Sketches of the driver options and the full sidecar configuration appear below.
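On the task side, a minimal sketch of the splunk log driver options pointing at the sidecar might look like this. The port and token are placeholders you choose and must match what the Vector splunk_hec source expects; localhost reachability assumes awsvpc networking, where the sidecar shares the task's network namespace.
"logConfiguration": {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "http://localhost:8088",
        "splunk-token": "<ANY-SHARED-TOKEN>",
        "splunk-format": "json"
    }
},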
Example VRL transform to add ECS metadata (this assumes the ECS_CLUSTER, ECS_SERVICE_NAME, and ECS_TASK_ARN environment variables are set on the sidecar container):
# Tag each event with ECS metadata from the container environment
.ecs_cluster = get_env_var!("ECS_CLUSTER")
.ecs_service_name = get_env_var!("ECS_SERVICE_NAME")
.ecs_task_arn = get_env_var!("ECS_TASK_ARN")
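Putting it together, here is a minimal Vector sidecar configuration sketch (YAML). The SparkLogs endpoint and basic-auth credentials mirror the FireLens example above; the listen port is an assumption that must match the splunk-url in the log driver options:
sources:
  ecs_logs:
    # Receive events from the task's splunk (HEC) log driver
    type: splunk_hec
    address: 0.0.0.0:8088

transforms:
  ecs_metadata:
    # Attach ECS metadata fields via VRL
    type: remap
    inputs: ["ecs_logs"]
    source: |
      .ecs_cluster = get_env_var!("ECS_CLUSTER")
      .ecs_service_name = get_env_var!("ECS_SERVICE_NAME")
      .ecs_task_arn = get_env_var!("ECS_TASK_ARN")

sinks:
  sparklogs:
    # Forward to SparkLogs over HTTPS with gzip compression
    type: http
    inputs: ["ecs_metadata"]
    uri: "https://ingest-<REGION>.engine.sparklogs.app/ingest/v1"
    compression: gzip
    encoding:
      codec: json
    auth:
      strategy: basic
      user: "<AGENT-ID>"
      password: "<AGENT-ACCESS-TOKEN>"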