Go
Overview
The Go OpenTelemetry SDK exports OTLP/HTTP logs natively. The recommended path is slog (the standard library's structured logger added in Go 1.21) bridged via the otelslog handler. Bridges also exist for zap, logrus, and zerolog under opentelemetry-go-contrib/bridges.
| Topology | When to use it |
|---|---|
| OTel SDK direct (this page) | The default. Up to a few hundred events / second / process. Simplest setup — no extra hop, no second process. |
| OTel SDK → local collector | Many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide. |
| Log to file + agent | Languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page. |
Prerequisites
You need the following before you start:
- Region — us or eu. Pick the region your SparkLogs account lives in.
- Agent ID — short identifier for the agent that will send these logs.
- Agent access token — bearer credential for the agent.
View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.
Install
Pin modules to the same versions we test in sparklogs-otel-slog (Go 1.25+ in that example):
go get go.opentelemetry.io/otel@v1.43.0 \
go.opentelemetry.io/otel/log@v0.19.0 \
go.opentelemetry.io/otel/sdk@v1.43.0 \
go.opentelemetry.io/otel/sdk/log@v0.19.0 \
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp@v0.19.0 \
go.opentelemetry.io/contrib/bridges/otelslog@v0.18.0
Bridge packages for zap, logrus, and zerolog live under go.opentelemetry.io/contrib/bridges/...; use the same otel / sdk/log versions as above when you add a bridge.
Configure the OTel exporter
Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:
export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.
Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.
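For example, to target a local Collector instead of SparkLogs directly (a sketch assuming the Collector's default OTLP/HTTP port 4318 with no authentication configured):

```shell
# Point the logs exporter at a local OpenTelemetry Collector instead of SparkLogs.
export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="http://localhost:4318/v1/logs"
# An unauthenticated local Collector needs no Authorization header.
unset OTEL_EXPORTER_OTLP_LOGS_HEADERS
```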
Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, but on rare occasions our cloud may delay a request for up to 12 seconds (p99.99 latency). 25s leaves headroom for that tail latency plus network delays.
Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.
Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
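In the Go SDK these knobs are options on log.NewBatchProcessor. A sketch of tuning them, assuming the option names in go.opentelemetry.io/otel/sdk/log; the values shown are illustrative, not recommendations:

```go
// Illustrative batch-processor tuning; the defaults are fine for most workloads.
processor := log.NewBatchProcessor(exporter,
	log.WithMaxQueueSize(4096),            // default 2048
	log.WithExportMaxBatchSize(1024),      // default 512
	log.WithExportInterval(2*time.Second), // default 1s
	log.WithExportTimeout(30*time.Second), // default 30s
)
```

Pass the resulting processor to log.WithProcessor in place of the bare log.NewBatchProcessor(exporter) call shown below.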
Set up the OTel SDK
package main
import (
"context"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
"go.opentelemetry.io/otel/sdk/log"
"go.opentelemetry.io/otel/sdk/resource"
)
func newLoggerProvider(ctx context.Context) (*log.LoggerProvider, error) {
exporter, err := otlploghttp.New(ctx)
if err != nil {
return nil, err
}
res, err := resource.New(ctx,
resource.WithAttributes(
attribute.String("service.name", "my-service"),
attribute.String("service.version", "1.0.0"),
attribute.String("deployment.environment", "production"),
),
)
if err != nil {
return nil, err
}
return log.NewLoggerProvider(
log.WithResource(res),
log.WithProcessor(log.NewBatchProcessor(exporter)),
), nil
}
log.NewBatchProcessor is the production-correct choice. Don't use log.NewSimpleProcessor outside tests.
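If other OTel-instrumented libraries in the same process should discover this provider, it can also be registered globally. A sketch, assuming the go.opentelemetry.io/otel/log/global package:

```go
import "go.opentelemetry.io/otel/log/global"

// Register the provider so code that calls global.GetLoggerProvider() uses it.
global.SetLoggerProvider(provider)
```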
Set resource attributes
SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:
- service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
- service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
- deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.
Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"
The newLoggerProvider snippet above shows the in-code equivalent via resource.WithAttributes.
For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.
Integrate the OTel SDK with your logging library
Option 1: slog (recommended)
import (
"context"
"errors"
"log/slog"
"go.opentelemetry.io/contrib/bridges/otelslog"
)
func main() {
ctx := context.Background()
provider, err := newLoggerProvider(ctx)
if err != nil {
panic(err)
}
defer provider.Shutdown(ctx)
// Set up slog with the otelslog bridge.
// The first argument is the OpenTelemetry *instrumentation scope* name (not service.name).
logger := otelslog.NewLogger("github.com/example/myapp/loghandler", otelslog.WithLoggerProvider(provider))
slog.SetDefault(logger)
slog.Info("hello, SparkLogs")
slog.Warn("disk usage at 92%", slog.String("mount", "/dev/sda1"), slog.Int("pct", 92))
slog.Error("connection refused", slog.Any("err", errors.New("connection refused")))
}
The structured key/value pairs (slog.String, slog.Int, etc.) flow through to SparkLogs as structured, searchable fields.
Option 2: zap
Use otelzap:
import (
	"go.uber.org/zap"
	"go.opentelemetry.io/contrib/bridges/otelzap"
)
// The first argument is the OpenTelemetry *instrumentation scope* name (not service.name).
logger := zap.New(otelzap.NewCore("github.com/example/myapp/loghandler", otelzap.WithLoggerProvider(provider)))
defer logger.Sync()
logger.Info("hello, SparkLogs", zap.Int("user_id", 42))
Option 3: logrus
Use otellogrus:
import (
	"github.com/sirupsen/logrus"
	"go.opentelemetry.io/contrib/bridges/otellogrus"
)
// The first argument is the OpenTelemetry *instrumentation scope* name (not service.name).
hook := otellogrus.NewHook("github.com/example/myapp/loghandler", otellogrus.WithLoggerProvider(provider))
logrus.AddHook(hook)
logrus.WithField("user_id", 42).Info("hello, SparkLogs")
Option 4: zerolog
Use otelzerolog:
import (
	"os"
	"github.com/rs/zerolog"
	"go.opentelemetry.io/contrib/bridges/otelzerolog"
)
// The first argument to NewHook is the OpenTelemetry *instrumentation scope* name (not service.name).
hook := otelzerolog.NewHook("github.com/example/myapp/loghandler", otelzerolog.WithLoggerProvider(provider))
logger := zerolog.New(os.Stdout).Hook(hook)
logger.Info().Int("user_id", 42).Msg("hello, SparkLogs")
Flush on shutdown
Use the *log.LoggerProvider from your setup (here provider) and a context.Context with a timeout:
defer func() {
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := provider.Shutdown(shutdownCtx); err != nil {
slog.Error("otel shutdown failed", "err", err)
}
}()
Add context, time, and log/slog imports to the same package as provider (as in the slog example above).
For long-running services, also wire shutdown to os.Interrupt / syscall.SIGTERM via signal.NotifyContext. See graceful shutdown.
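A minimal sketch of that wiring, reusing the newLoggerProvider function from the setup above; the service's actual workload is elided:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// ctx is canceled on Ctrl-C (os.Interrupt) or SIGTERM.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	provider, err := newLoggerProvider(ctx)
	if err != nil {
		panic(err)
	}

	// ... start servers / workers here ...

	<-ctx.Done() // block until a shutdown signal arrives

	// Flush buffered log records before the process exits.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := provider.Shutdown(shutdownCtx); err != nil {
		slog.Error("otel shutdown failed", "err", err)
	}
}
```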
Runnable examples
The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.
Where to next
- Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
- OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
- Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
- Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.