Rust

Overview

The Rust OpenTelemetry SDK exports OTLP/HTTP logs natively. The recommended path is the tracing crate bridged via opentelemetry-appender-tracing; the log crate is supported via opentelemetry-appender-log. Rust also supports zstd compression out of the box if you want a better compression ratio and CPU profile than gzip.

For OTLP/HTTP with the default batch log processor, enable reqwest-blocking-client on opentelemetry-otlp (not reqwest-client): the batch processor runs on a background thread that must not call into Tokio without a reactor. If you embed OpenTelemetry in a Tokio-first async app and need async HTTP or gRPC wiring, follow the current OpenTelemetry Rust migration notes and the opentelemetry-otlp / opentelemetry_sdk::logs docs for your crate version — feature flags and shutdown patterns change between releases.

Topology — when to use it:

  • OTel SDK direct (this page) — the default. Up to a few hundred events/second/process. Simplest setup: no extra hop, no second process.
  • OTel SDK → local collector — many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide.
  • Log to file + agent — languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page.

Prerequisites

You need the following before you start:

  • Region — us or eu. Pick the region your SparkLogs account lives in.
  • Agent ID — short identifier for the agent that will send these logs.
  • Agent access token — bearer credential for the agent.

View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.

Install

Add to Cargo.toml:

[dependencies]
opentelemetry = { version = "0.32", features = ["logs"] }
opentelemetry_sdk = { version = "0.32", features = ["logs"] }
opentelemetry-otlp = { version = "0.32", features = ["logs", "http-proto", "reqwest-blocking-client", "gzip-http"] }
opentelemetry-appender-tracing = "0.32"
opentelemetry-semantic-conventions = "0.32"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }

Add gzip-http for SparkLogs cloud (OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip). Add zstd-http when you want zstd; set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=zstd to use it. For production, pin TLS/crypto the way your platform requires (the reqwest-rustls-webpki-roots feature on opentelemetry-otlp tracks reqwest’s feature names; if resolution fails on your toolchain, use default TLS or follow opentelemetry-otlp features for the current release).
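For example, a sketch of the zstd variant of the dependency line above (feature names track the opentelemetry-otlp release you pin, so verify against its docs):

[dependencies]
opentelemetry-otlp = { version = "0.32", features = ["logs", "http-proto", "reqwest-blocking-client", "zstd-http"] }

Then set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=zstd in the environment.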

Configure the OTel exporter

Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
Tip: Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.

Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.
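For example, a sketch of the same variables pointed at a local collector on the default OTLP/HTTP port (no auth header assumed; add whatever your receiver requires):

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="http://localhost:4318/v1/logs"
unset OTEL_EXPORTER_OTLP_LOGS_HEADERS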

Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, and on rare occasions our cloud may delay a request up to 12 seconds (p99.99 latency). 25s leaves headroom for that tail latency plus network delays.

Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.
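For example, to make that trade on a CPU-constrained host:

export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none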

Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
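If you do need to tune them in code, a minimal sketch assuming the 0.3x opentelemetry_sdk builder API (these names have moved between releases, so check the opentelemetry_sdk::logs docs for your pinned version; the numbers here are illustrative):

use opentelemetry_otlp::LogExporter;
use opentelemetry_sdk::logs::{BatchConfigBuilder, BatchLogProcessor, SdkLoggerProvider};
use std::time::Duration;

let exporter = LogExporter::builder()
    .with_http()
    .build()
    .expect("failed to build OTLP log exporter");

// Build the batch processor explicitly instead of using with_batch_exporter.
let processor = BatchLogProcessor::builder(exporter)
    .with_batch_config(
        BatchConfigBuilder::default()
            .with_max_queue_size(4096)                    // default 2048
            .with_max_export_batch_size(1024)             // default 512
            .with_scheduled_delay(Duration::from_secs(1)) // default 1s
            .build(),
    )
    .build();

let provider = SdkLoggerProvider::builder()
    .with_log_processor(processor)
    .build();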

Set up the OTel SDK

use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_otlp::LogExporter;
use opentelemetry_sdk::logs::SdkLoggerProvider;
use opentelemetry_sdk::Resource;
use opentelemetry_semantic_conventions::attribute::{DEPLOYMENT_ENVIRONMENT_NAME, SERVICE_VERSION};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

fn init_logger() -> SdkLoggerProvider {
    // Reads OTEL_EXPORTER_OTLP_LOGS_* from the environment
    // (endpoint, headers, compression, timeout).
    let exporter = LogExporter::builder()
        .with_http()
        .build()
        .expect("failed to build OTLP log exporter");

    // Resource attributes become the service/app pivot fields in SparkLogs.
    let resource = Resource::builder()
        .with_service_name("my-service")
        .with_attribute(opentelemetry::KeyValue::new(SERVICE_VERSION, "1.0.0"))
        .with_attribute(opentelemetry::KeyValue::new(
            DEPLOYMENT_ENVIRONMENT_NAME,
            "production",
        ))
        .build();

    // Batch processor: queues records and exports them on a background thread.
    let provider = SdkLoggerProvider::builder()
        .with_resource(resource)
        .with_batch_exporter(exporter)
        .build();

    // Bridge `tracing` events into the OTel logs pipeline.
    let layer = OpenTelemetryTracingBridge::new(&provider);
    tracing_subscriber::registry().with(layer).init();

    provider
}

This snippet wires tracing only to the OpenTelemetry log exporter: events are sent over OTLP, not printed to the process console. During development, register tracing_subscriber::fmt::layer() on the same registry as OpenTelemetryTracingBridge if you also want human-readable stderr output (see the sparklogs-otel-tracing example in sparklogs-ingest-examples).
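A minimal sketch of that dual-output registry (the stderr writer chosen here is illustrative):

let layer = OpenTelemetryTracingBridge::new(&provider);
tracing_subscriber::registry()
    .with(tracing_subscriber::fmt::layer().with_writer(std::io::stderr))
    .with(layer)
    .init();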

with_batch_exporter installs a batch log processor, the production-correct choice. Don't use with_simple_exporter outside tests. On 0.28+ the batch processor no longer takes a separate runtime argument; use reqwest-blocking-client (as in Cargo.toml above) unless you deliberately adopt the upstream async-runtime processor path from the docs.
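For completeness, a sketch of the test-only simple path (exports each record synchronously, so nothing is left queued when the test ends):

let exporter = LogExporter::builder()
    .with_http()
    .build()
    .expect("failed to build OTLP log exporter");

let provider = SdkLoggerProvider::builder()
    .with_simple_exporter(exporter)
    .build();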

For SparkLogs cloud, set OTEL_EXPORTER_OTLP_LOGS_HEADERS (and the other OTEL_EXPORTER_OTLP_LOGS_* variables) as shown in Configure the OTel exporter above; the sparklogs-ingest-examples Makefile fills Authorization=Bearer … from SPARKLOGS_AGENT_ID and SPARKLOGS_AGENT_ACCESS_TOKEN when you run make test. The runnable sparklogs-otel-tracing example also demonstrates how to bind only to IPv4 (often needed in WSL2 environments).

Set resource attributes

SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:

  • service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
  • service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
  • deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.

Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:

export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"

The init_logger snippet above shows the in-code equivalent.

For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.

Integrate the OTel SDK with your logging crate

Option 1: tracing (recommended)
fn main() {
    let provider = init_logger();
    let my_error = std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "refused");

    tracing::info!("hello, SparkLogs");
    tracing::warn!(disk = "/dev/sda1", pct = 92, "disk usage at 92%");
    tracing::error!(err = %my_error, "connection refused");

    provider
        .shutdown_with_timeout(std::time::Duration::from_secs(10))
        .expect("shutdown failed");
}

tracing field syntax (disk = …) is preserved as structured fields on the OTel log record.

Option 2: log

Use opentelemetry-appender-log:

[dependencies]
opentelemetry-appender-log = "0.32"
log = "0.4"

Then wire the bridge in code:

use opentelemetry_appender_log::OpenTelemetryLogBridge;

let provider = init_logger();
let bridge = OpenTelemetryLogBridge::new(&provider);
log::set_boxed_logger(Box::new(bridge)).unwrap();
log::set_max_level(log::LevelFilter::Info);

log::info!("hello, SparkLogs");
log::warn!("disk usage at 92%");

Flush on shutdown

Call shutdown or shutdown_with_timeout on the SdkLoggerProvider before your process exits so batched OTLP payloads flush (especially important for short-lived CLIs and tests):

use std::time::Duration;

provider
.shutdown_with_timeout(Duration::from_secs(10))
.expect("OTel logger shutdown failed");

Dropping the last clone of the provider also triggers shutdown, but explicit shutdown gives a bounded wait and a clear error path. For long-running services, combine this with signal handling (tokio::signal::ctrl_c(), signal-hook, etc.). See graceful shutdown.
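A sketch of that pattern in a Tokio service (assumes the tokio crate with its macros and signal features enabled; run_app is a placeholder for your service loop):

async fn run_app() {
    // Placeholder for your actual service: servers, consumers, etc.
    std::future::pending::<()>().await;
}

#[tokio::main]
async fn main() {
    let provider = init_logger();

    // Run until the app finishes or we receive Ctrl-C.
    tokio::select! {
        _ = tokio::signal::ctrl_c() => tracing::info!("ctrl-c received, shutting down"),
        _ = run_app() => {}
    }

    // Bounded flush of queued batches before the process exits.
    provider
        .shutdown_with_timeout(std::time::Duration::from_secs(10))
        .expect("OTel logger shutdown failed");
}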

When you pick versions for opentelemetry, opentelemetry_sdk, opentelemetry-otlp, and opentelemetry-appender-tracing, keep them on the same release line — mixing minors across these crates often produces confusing compile errors because provider and exporter types will not line up.

Runnable examples

Tested examples (no cloud credentials)

The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.

Where to next

  • Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
  • OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
  • Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
  • Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.