
Ruby

warning

The Ruby OpenTelemetry logs SDK is currently in beta. The traces SDK is stable. Pin your gem versions (as in our runnable example), or use the agent-based path (file + log-forwarder) until logs GA. See the OTel Ruby status page for current API stability.

Overview

Ruby applications can ship logs to SparkLogs using opentelemetry-logs-sdk, a BatchLogRecordProcessor, and the OTLP logs exporter. The pattern below matches our tested sparklogs-otel-logger project: build a LoggerProvider, obtain an OTel logger, and emit with on_emit.

| Topology | When to use it |
| --- | --- |
| OTel SDK direct (this page) | The default. Up to a few hundred events / second / process. Simplest setup — no extra hop, no second process. |
| OTel SDK → local collector | Many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide. |
| Log to file + agent | Languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page. |

Prerequisites

You need the following before you start:

  • Region — us or eu. Pick the region your SparkLogs account lives in.
  • Agent ID — short identifier for the agent that will send these logs.
  • Agent access token — bearer credential for the agent.

View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.

Install

Add to your Gemfile (constraints match the tested example; newer gem lines may require Ruby 3.3+):

ruby '>= 3.1.0'

gem 'opentelemetry-sdk', '>= 1.4', '< 1.11'
gem 'opentelemetry-logs-api', '~> 0.2.0'
gem 'opentelemetry-logs-sdk', '~> 0.4.0'
gem 'opentelemetry-exporter-otlp-logs', '~> 0.3.0'

Then:

bundle install

Configure the OTel exporter

Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
tip

Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.

Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.

Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, and on rare occasions our cloud may delay a request by up to 12 seconds (p99.99 latency). 25s leaves headroom for that tail latency plus network delays.

Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.

Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
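If you do need to tune the batch processor, the OpenTelemetry specification defines OTEL_BLRP_* environment variables for exactly these knobs (time values in milliseconds). A sketch with the defaults shown — verify that your installed opentelemetry-logs-sdk version honors these variables before relying on them:

```shell
# Batch log record processor tuning (OTel spec env vars; defaults shown).
export OTEL_BLRP_MAX_QUEUE_SIZE=2048         # records buffered before drops begin
export OTEL_BLRP_MAX_EXPORT_BATCH_SIZE=512   # records per OTLP request
export OTEL_BLRP_SCHEDULE_DELAY=1000         # ms between export passes
export OTEL_BLRP_EXPORT_TIMEOUT=30000        # ms before an export attempt is abandoned
```

Raise OTEL_BLRP_MAX_QUEUE_SIZE first if you see dropped records under bursty load; raising the batch size mainly trades request count for request size.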

Set up the OTel SDK and emit

require 'opentelemetry/sdk'
require 'opentelemetry-logs-sdk'
require 'opentelemetry/exporter/otlp_logs'

resource = OpenTelemetry::SDK::Resources::Resource.create({
  'service.name' => 'my-service',
  'service.version' => '1.0.0',
  'deployment.environment' => 'production',
})

exporter = OpenTelemetry::Exporter::OTLP::Logs::LogsExporter.new
processor = OpenTelemetry::SDK::Logs::Export::BatchLogRecordProcessor.new(exporter)
provider = OpenTelemetry::SDK::Logs::LoggerProvider.new(resource: resource)
provider.add_log_record_processor(processor)

otel_logger = provider.logger(name: 'my-app')

otel_logger.on_emit(
  severity_text: 'INFO',
  severity_number: 9, # INFO in the OTel log data model
  body: 'hello, SparkLogs',
  attributes: { 'user_id' => 42 }
)

BatchLogRecordProcessor is the production-correct choice. Don't use SimpleLogRecordProcessor outside tests.

To bridge stdlib Logger, Rails ActiveSupport::Logger, or another library, reuse the same LoggerProvider / exporter setup, then forward log lines into on_emit with the severity and attributes your app already has. We do not ship a separate tested Rails sample yet; start from the runnable example above and adapt your app’s logger wiring.

Set resource attributes

SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:

  • service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
  • service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
  • deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.

Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:

export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"

The SDK setup snippet above shows the in-code equivalent: the same keys passed to Resource.create.

For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.

Flush on shutdown

Call shutdown on the provider before your process exits so queued records are exported rather than dropped:

provider.shutdown

Wire to at_exit for short-lived processes:

at_exit { provider.shutdown }

See graceful shutdown.

Runnable examples

Tested examples (no cloud credentials)

The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.


Where to next

  • Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
  • OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
  • Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
  • Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.