Java

Overview

The Java OpenTelemetry SDK exports OTLP/HTTP logs natively. The recommended path for Java apps is to keep using your existing logging framework (Logback or Log4j2) and route its output through an OpenTelemetry log appender that bridges to the OTel SDK. The java.util.logging route is best handled by the OpenTelemetry Java agent for zero-code instrumentation.

| Topology | When to use it |
| --- | --- |
| OTel SDK direct (this page) | The default. Up to a few hundred events / second / process. Simplest setup — no extra hop, no second process. |
| OTel SDK → local collector | Many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide. |
| Log to file + agent | Languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page. |

Prerequisites

You need the following before you start:

  • Region — us or eu. Pick the region your SparkLogs account lives in.
  • Agent ID — short identifier for the agent that will send these logs.
  • Agent access token — bearer credential for the agent.

View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.

Install

Our runnable examples target Java 17 and pin the OpenTelemetry SDK, alpha, and instrumentation artifact versions through BOM imports (versions match sparklogs-otel-logback).

<properties>
  <opentelemetry.version>1.49.0</opentelemetry.version>
  <opentelemetry.instrumentation.version>2.16.0-alpha</opentelemetry.instrumentation.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-bom</artifactId>
      <version>${opentelemetry.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-bom-alpha</artifactId>
      <version>${opentelemetry.version}-alpha</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry.instrumentation</groupId>
      <artifactId>opentelemetry-instrumentation-bom-alpha</artifactId>
      <version>${opentelemetry.instrumentation.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-logback-appender-1.0</artifactId>
  </dependency>
</dependencies>

For Log4j2, swap opentelemetry-logback-appender-1.0 for opentelemetry-log4j-appender-2.17 (see the Log4j2 example).

Configure the OTel exporter

Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
Tip: Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.

Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.

Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, and on rare occasions our cloud may delay a request by up to 12 seconds (p99.99 latency). 25s leaves headroom for that worst-case request latency plus network delays.

Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.

Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
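If you build the SDK manually rather than through autoconfigure, these batching and exporter settings map onto the builder API roughly as follows — a sketch with the defaults spelled out; the endpoint and credential placeholders mirror the env vars above:

```java
import java.time.Duration;

import io.opentelemetry.exporter.otlp.http.logs.OtlpHttpLogRecordExporter;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;

// Exporter mirroring the OTEL_EXPORTER_OTLP_LOGS_* env vars above.
OtlpHttpLogRecordExporter exporter = OtlpHttpLogRecordExporter.builder()
    .setEndpoint("https://ingest-<REGION>.engine.sparklogs.app/v1/logs")
    .addHeader("Authorization", "Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>")
    .setCompression("gzip")
    .setTimeout(Duration.ofSeconds(25))
    .build();

// Batch processor with the default values written out explicitly;
// higher-throughput pipelines may raise the queue and batch sizes.
BatchLogRecordProcessor processor = BatchLogRecordProcessor.builder(exporter)
    .setMaxQueueSize(2048)
    .setMaxExportBatchSize(512)
    .setScheduleDelay(Duration.ofSeconds(1))
    .setExporterTimeout(Duration.ofSeconds(30))
    .build();

SdkLoggerProvider loggerProvider = SdkLoggerProvider.builder()
    .addLogRecordProcessor(processor)
    .build();
```

Most applications should prefer the env-var route; reach for the builders only when you need per-processor tuning that the environment variables cannot express.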

Set up the OTel SDK

Configure the SDK once at application startup, then install the appender on your existing logging framework. The autoconfigure module reads the env vars above:

<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId>
</dependency>

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

OpenTelemetry openTelemetry = AutoConfiguredOpenTelemetrySdk.initialize().getOpenTelemetrySdk();

The SDK now reads endpoint, headers, compression, and timeout from the environment. Use BatchLogRecordProcessor (the default the autoconfigure module installs) — never SimpleLogRecordProcessor in production.

Set resource attributes

SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:

  • service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
  • service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
  • deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.

Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:

export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"

For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.

In Java, set resource attributes via the OTEL_RESOURCE_ATTRIBUTES environment variable, or programmatically when building the SDK manually.
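For the manual route, a minimal sketch — the attribute values here are placeholders to substitute with your own service identity:

```java
import io.opentelemetry.sdk.resources.Resource;

// Merge the pivot-field attributes onto the default Resource
// (which already carries SDK and telemetry metadata).
Resource resource = Resource.getDefault().merge(
    Resource.builder()
        .put("service.name", "my-service")
        .put("service.version", "1.0.0")
        .put("deployment.environment", "production")
        .build());
```

Attach it when building the provider, e.g. `SdkLoggerProvider.builder().setResource(resource)`.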

Integrate the OTel SDK with your logging framework

Option 1: Logback

Add the OpenTelemetry appender to your logback.xml:

<configuration>
  <appender name="OpenTelemetry" class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"/>

  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="OpenTelemetry" />
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>

Then connect the appender to the SDK once at startup:

import io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender;

OpenTelemetryAppender.install(openTelemetry);

Use SLF4J as you would normally — every event flows to both the console and SparkLogs:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

Logger log = LoggerFactory.getLogger(MyClass.class);
log.info("hello, SparkLogs");
log.warn("disk usage at 92%, mount={}", "/dev/sda1");
log.error("connection refused", new java.io.IOException("connection refused"));

MDC values flow through to SparkLogs as structured fields automatically.
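For example, a sketch using SLF4J's MDC — the field name request_id is illustrative:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

Logger log = LoggerFactory.getLogger(MyClass.class);

// Every MDC entry set here becomes a structured field on events
// logged while it is in scope.
MDC.put("request_id", "abc-123");
try {
    log.info("payment authorized"); // carries request_id=abc-123
} finally {
    MDC.remove("request_id"); // clear it so pooled threads don't leak the value
}
```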

Option 2: Log4j2

Add the appender to your log4j2.xml:

<Configuration>
  <Appenders>
    <OpenTelemetry name="OpenTelemetryAppender" />
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="OpenTelemetryAppender" />
      <AppenderRef ref="Console" />
    </Root>
  </Loggers>
</Configuration>

Then connect the appender to the SDK once at startup:

import io.opentelemetry.instrumentation.log4j.appender.v2_17.OpenTelemetryAppender;

OpenTelemetryAppender.install(openTelemetry);

Option 3: Java agent (zero-code)

For applications you'd rather not modify (legacy services, java.util.logging, mixed logging frameworks), download the OpenTelemetry Java agent and run your app with:

java -javaagent:opentelemetry-javaagent.jar -jar my-app.jar

The agent auto-installs appenders into Logback, Log4j2, and java.util.logging, and configures the OTLP exporter from the same OTEL_* environment variables documented above. No code changes needed.

Flush on shutdown

Closing the SDK flushes any queued log records before the JVM exits:

((io.opentelemetry.sdk.OpenTelemetrySdk) openTelemetry).close();

For long-running services, register this close in a JVM shutdown hook via Runtime.getRuntime().addShutdownHook. The autoconfigure module registers such a hook automatically when you initialize via AutoConfiguredOpenTelemetrySdk.initialize(). See graceful shutdown.
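If you build the SDK manually instead of using autoconfigure, a minimal hook along these lines works (assuming the openTelemetry instance from the setup above):

```java
// Flush and close the SDK when the JVM shuts down. Skip this when using
// AutoConfiguredOpenTelemetrySdk.initialize(), which registers an
// equivalent hook for you.
io.opentelemetry.sdk.OpenTelemetrySdk sdk =
    (io.opentelemetry.sdk.OpenTelemetrySdk) openTelemetry;
Runtime.getRuntime().addShutdownHook(new Thread(sdk::close));
```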

Runnable examples

Tested examples (no cloud credentials)

The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.

Tested projects in sparklogs-ingest-examples:

Where to next

  • Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
  • OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
  • Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
  • Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.