Python

Overview

The Python OpenTelemetry SDK exports OTLP/HTTP logs natively to SparkLogs. This page shows how to wire it up alongside the logging library you already use — Python's stdlib logging, structlog, or loguru — in three ways that match how Python apps actually look in production.

Choose a topology:

  • OTel SDK direct (this page) — the default. Up to a few hundred events / second / process. Simplest setup: no extra hop, no second process.
  • OTel SDK → local collector — many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide.
  • Log to file + agent — languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page.

Prerequisites

You need the following before you start:

  • Region — us or eu. Pick the region your SparkLogs account lives in.
  • Agent ID — short identifier for the agent that will send these logs.
  • Agent access token — bearer credential for the agent.

View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.

Install

pip install 'opentelemetry-api>=1.27,<2.0' 'opentelemetry-sdk>=1.27,<2.0' 'opentelemetry-exporter-otlp-proto-http>=1.27,<2.0'

These ranges match the tested sparklogs-ingest-examples Python projects; each example’s requirements.txt is the source of truth if you need structlog or loguru pins too.

Configure the OTel exporter

Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
Tip: Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.

Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.

Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, and on rare occasions our cloud may delay a request for up to 12 seconds (p99.99 latency). 25s leaves headroom for that tail latency plus network delays.

Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.

Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
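If you do need to tune the batch processor, the OpenTelemetry SDK reads the standard OTEL_BLRP_* environment variables, so no code changes are required. A sketch, shown with the default values:

```shell
# Batch log record processor tuning (standard OTel SDK variables, shown at defaults)
export OTEL_BLRP_MAX_QUEUE_SIZE=2048         # records buffered before new ones are dropped
export OTEL_BLRP_MAX_EXPORT_BATCH_SIZE=512   # records per export request
export OTEL_BLRP_SCHEDULE_DELAY=1000         # milliseconds between scheduled exports
export OTEL_BLRP_EXPORT_TIMEOUT=30000        # milliseconds before an export attempt is cancelled
```

The same knobs are available in code as keyword arguments to BatchLogRecordProcessor if you prefer configuration to live next to the provider setup.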

Set up the OTel logger provider

A minimal Python setup creates a LoggerProvider, attaches a BatchLogRecordProcessor with the OTLP/HTTP exporter, and registers it globally. The processor reads its endpoint, compression, and timeout from the environment variables you set above.

import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

resource = Resource.create({
    "service.name": "my-service",
    "service.version": "1.0.0",
    "deployment.environment": "production",
})

logger_provider = LoggerProvider(resource=resource)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))
set_logger_provider(logger_provider)

BatchLogRecordProcessor is the production-correct choice — it buffers records and exports them in batches. Do not use SimpleLogRecordProcessor in production; it sends one HTTP request per log record.

Set resource attributes

SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:

  • service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
  • service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
  • deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.

Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:

export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"

The logger provider snippet above shows the in-code equivalent.

For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.

Integrate the OTel SDK with your logging library

Pick the section that matches what your application already uses. Each option is self-contained — install the extra package (if any), wire the bridge, and keep using your normal logging API.

Option 1: stdlib logging (most common)

The OTel SDK ships a LoggingHandler that forwards records from Python's stdlib logging module to the OTel pipeline. No extra package needed beyond what you installed above.

import logging
from opentelemetry.sdk._logs import LoggingHandler

# Attach the OTel handler to the root logger so every logger in your app forwards to OTel.
otel_handler = LoggingHandler(logger_provider=logger_provider)
logging.getLogger().addHandler(otel_handler)
logging.getLogger().setLevel(logging.INFO)

# Use logging exactly as you would normally.
log = logging.getLogger(__name__)
log.info("hello, SparkLogs")
log.warning("disk usage at 92%", extra={"disk": "/dev/sda1", "pct": 92})
log.error("connection refused", exc_info=True)

extra={...} keys are preserved as structured fields on the OTel log record and arrive at SparkLogs as searchable custom fields. Exception info is captured automatically when exc_info=True.
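The mechanism is plain stdlib behavior: keys passed via extra={...} become attributes on the LogRecord, which is what any handler on the logger (including OTel's LoggingHandler) reads. A stdlib-only sketch, where CaptureHandler is an illustrative stand-in for the OTel handler:

```python
import logging

class CaptureHandler(logging.Handler):
    """Illustrative stand-in: records what a handler such as OTel's would see."""
    def __init__(self):
        super().__init__()
        self.last = None

    def emit(self, record):
        # extra={...} keys are set as plain attributes on the LogRecord
        self.last = {
            "msg": record.getMessage(),
            "disk": getattr(record, "disk", None),
            "pct": getattr(record, "pct", None),
        }

log = logging.getLogger("extra-demo")
log.setLevel(logging.INFO)
capture = CaptureHandler()
log.addHandler(capture)

log.warning("disk usage high", extra={"disk": "/dev/sda1", "pct": 92})
# capture.last now holds the message plus the two custom fields
```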

Option 2: structlog

structlog produces structured log records natively. Route its output through stdlib logging so the OTel LoggingHandler picks it up. Install structlog on the same range as our example: pip install 'structlog>=24.1,<26' (or the equivalent in your package manager).

import logging
import structlog
from opentelemetry.sdk._logs import LoggingHandler

# Wire OTel handler into stdlib logging (same as Option 1).
otel_handler = LoggingHandler(logger_provider=logger_provider)
logging.getLogger().addHandler(otel_handler)
logging.getLogger().setLevel(logging.INFO)

# Configure structlog to emit JSON via stdlib logging (matches sparklogs-otel-structlog).
structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.dict_tracebacks,
        structlog.processors.JSONRenderer(),
    ],
    wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

log = structlog.get_logger()
log.info("user logged in", user_id=42, ip="203.0.113.7")

The structured key/value pairs flow through to SparkLogs as searchable custom fields.

Option 3: loguru

loguru doesn't go through stdlib logging by default. Add a sink that forwards to the OTel handler. Install loguru on the same range as our example: pip install 'loguru>=0.7,<0.8' (or the equivalent in your package manager).

import logging
from loguru import logger
from opentelemetry.sdk._logs import LoggingHandler

otel_handler = LoggingHandler(logger_provider=logger_provider)

# loguru sink that forwards every record into stdlib logging (which the OTel handler is on).
class InterceptHandler(logging.Handler):
    def emit(self, record):
        logging.getLogger(record.name).handle(record)

logging.getLogger().addHandler(otel_handler)
logging.getLogger().setLevel(logging.INFO)
logger.add(InterceptHandler(), level="INFO", format="{message}", serialize=True)

logger.info("hello, SparkLogs")
logger.bind(user_id=42).warning("disk usage at 92%")

Flush on shutdown

Short-lived processes (CLIs, scripts, AWS Lambda, Cloud Run jobs) must flush before exit, or the in-flight batch is dropped:

logger_provider.shutdown()

For long-running services, also wire shutdown to your termination signal handler:

import signal
signal.signal(signal.SIGTERM, lambda *_: logger_provider.shutdown())

The shutdown call drains the queue and waits for in-flight exports up to the configured timeout. See graceful shutdown for why this matters.
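For processes that should also exit after flushing, the pieces above can be combined into one helper; a sketch, where install_shutdown_hooks is an illustrative name rather than part of the SDK:

```python
import atexit
import signal
import sys

def install_shutdown_hooks(logger_provider):
    """Flush buffered OTel log batches on normal exit and on SIGTERM/SIGINT."""
    # Normal interpreter exit (end of script, sys.exit from app code).
    atexit.register(logger_provider.shutdown)

    def _handle(signum, frame):
        logger_provider.shutdown()  # drain the queue, wait for in-flight exports
        sys.exit(0)                 # then exit so the process actually stops

    signal.signal(signal.SIGTERM, _handle)
    signal.signal(signal.SIGINT, _handle)
```

After a signal, the atexit hook may invoke shutdown a second time; the SDK tolerates repeated shutdown calls.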

Runnable examples

Tested examples (no cloud credentials)

The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.

Where to next

  • Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
  • OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
  • Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
  • Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.