OTLP/HTTP API

Overview

SparkLogs is OpenTelemetry-aligned by design: schemaless storage, infinite cardinality, native support for wide events, and automatic detection of OpenTelemetry semantic conventions. Native OTLP/HTTP ingestion means standard OpenTelemetry SDKs, the OpenTelemetry Collector, and any OTLP-compliant exporter ship logs directly to SparkLogs — no vendor adapter, no custom transport.

Your instrumentation stays portable. Ship the same OTLP stream to SparkLogs today and to anything else tomorrow — your observability code never has to change.

tip

If your data is not in OTLP format and you just want to ingest events as JSON, use the HTTPS+JSON API instead.

Signal status

Signal    Status               Endpoint
Logs      Generally available  POST /v1/logs
Traces    Roadmap
Metrics   Roadmap
Profiles  Roadmap

Today we only support receiving logs. Requests to /v1/traces, /v1/metrics, or /v1/profiles return HTTP 404. Point your collector or SDK at logs only until trace and metric support land.

Runnable SDK matrix

Language-by-language runnable projects (with make mock-test) live in the public sparklogs-ingest-examples repo — see the OpenTelemetry SDKs hub for direct links.

Curl smoke test

Verify the endpoint with a minimal OTLP/JSON payload (uncompressed for clarity):

export AGENT_ID="<AGENT-ID>"
export AGENT_ACCESS_TOKEN="<AGENT-ACCESS-TOKEN>"
curl -X POST "https://ingest-<REGION>.engine.sparklogs.app/v1/logs" \
  -H "Authorization: Bearer ${AGENT_ID}:${AGENT_ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  --data-binary @- <<'EOF'
{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"checkout"}},{"key":"deployment.environment","value":{"stringValue":"development"}}]},"scopeLogs":[{"scope":{"name":"my.app"},"logRecords":[{"timeUnixNano":"1714000000000000000","severityNumber":9,"severityText":"INFO","body":{"stringValue":"hello from OTLP"}}]}]}]}
EOF

Expected response: HTTP 200, Content-Type: application/json, body {}.

OpenTelemetry Collector

Use the contrib distribution's otlphttp exporter:

exporters:
  otlphttp/sparklogs:
    logs_endpoint: "https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
    compression: zstd
    encoding: proto
    auth:
      authenticator: basicauth/sparklogs
extensions:
  basicauth/sparklogs:
    client_auth:
      username: "<AGENT-ID>"
      password: "<AGENT-ACCESS-TOKEN>"

Prefer logs_endpoint (the full URL including /v1/logs) over endpoint so the collector does not inadvertently try to ship unsupported signals. See the full OpenTelemetry Collector guide for receivers, processors, and deployment topology.

OpenTelemetry SDKs

OTel SDKs read exporter configuration from environment variables. The same variables work across Python, Node.js, Go, Java, .NET, Rust, and others:

export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer ${AGENT_ID}:${AGENT_ACCESS_TOKEN}"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip

Common supported values for compression are none and gzip. Python also supports deflate (faster but less compression). The Rust SDK has support for zstd (ideal algorithm for logs) via the zstd-tonic and zstd-http feature flags.

Python:

Install the appropriate packages:

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

Then setup logging in your code:

import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

provider = LoggerProvider()
provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))
# use the provider to configure the global logger
set_logger_provider(provider)
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
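Once the handler is attached, records from the standard logging module flow through the OTLP exporter. A short illustrative usage (the logger name and extra fields are placeholders, not part of the SDK API):

```python
import logging

# With the OTLP LoggingHandler attached as above, any stdlib logger
# ships its records to SparkLogs. Logger name and extra fields here
# are illustrative.
log = logging.getLogger("checkout")
log.info("order placed", extra={"order_id": "A-1001"})
```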

Node.js:

Install the appropriate packages:

npm install @opentelemetry/api @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-proto

Then setup logging in your code:

import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';

const provider = new LoggerProvider({
  processors: [new BatchLogRecordProcessor(new OTLPLogExporter())],
});

Go:

Install the appropriate packages:

go get go.opentelemetry.io/otel \
  go.opentelemetry.io/otel/log \
  go.opentelemetry.io/otel/sdk/log \
  go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp

Then setup logging in your code:

import (
    "go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
    sdklog "go.opentelemetry.io/otel/sdk/log"
)

exporter, err := otlploghttp.New(ctx)
// handle error as appropriate
provider := sdklog.NewLoggerProvider(sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)))

Endpoint and authentication

  • Method: POST
  • Path: /v1/logs on your region's ingest base URL — TLS required.
  • Authentication: Same agent credentials as HTTPS+JSON: set Authorization to Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>, or use HTTP Basic (agent ID as username, agent access token as password). View your agent's credentials in the app under Configure → Agents.
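Both header forms are simple to construct. A minimal Python sketch with placeholder credentials (substitute your real agent ID and access token):

```python
import base64

# Placeholder credentials; substitute your real agent values.
agent_id = "my-agent-id"
agent_token = "my-agent-token"

# Option 1: Bearer scheme, where the token is "<AGENT-ID>:<AGENT-ACCESS-TOKEN>".
bearer_header = f"Bearer {agent_id}:{agent_token}"

# Option 2: HTTP Basic, with the agent ID as username and the access token as password.
basic_value = base64.b64encode(f"{agent_id}:{agent_token}".encode()).decode()
basic_header = f"Basic {basic_value}"
```

Either value goes in the Authorization header of your POST to /v1/logs.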

Encoding and compression

Content-Type selects the OTLP payload encoder:

Content-Type                   Body
application/json               OTLP logs JSON
application/x-protobuf         OTLP protobuf ExportLogsServiceRequest
application/x-google-protobuf  Same protobuf encoding (alias)
application/protobuf           Same protobuf encoding (alias)

If Content-Type is omitted, text/plain, or application/octet-stream, SparkLogs still accepts a valid OTLP JSON or protobuf payload (encoding is inferred from the body).

Any other explicit media type returns HTTP 415 with a google.rpc.Status body.

Content-Encoding (compression) — SparkLogs decompresses transparently:

Algorithm       Notes
zstd            Recommended: best ratio and speed
gzip            Default for most OpenTelemetry SDKs
deflate, zlib   Standard zlib variants
snappy          Listed by the OTLP specification
lz4, lz4-block  High-throughput shippers
identity        No compression

Body limits: 50 MiB compressed, 200 MiB after decompression (decompression-bomb defense).
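Putting encoding and compression together, a minimal Python sketch that builds a gzip-compressed OTLP/JSON body and the matching headers (the final POST is left as a comment so the snippet stays self-contained; the endpoint URL follows the pattern above):

```python
import gzip
import json

# Minimal OTLP/JSON logs payload, same shape as the curl example above.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "checkout"}},
        ]},
        "scopeLogs": [{
            "scope": {"name": "my.app"},
            "logRecords": [{
                "timeUnixNano": "1714000000000000000",
                "severityNumber": 9,
                "severityText": "INFO",
                "body": {"stringValue": "hello from OTLP"},
            }],
        }],
    }]
}

# Compress the JSON body; SparkLogs decompresses transparently.
body = gzip.compress(json.dumps(payload).encode("utf-8"))
headers = {
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",
}
# POST `body` with `headers` (plus Authorization) to
# https://ingest-<REGION>.engine.sparklogs.app/v1/logs
```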

Resource attributes → source / service / app

SparkLogs derives three pivot fields from your OpenTelemetry Resource attributes using the documented semantic conventions. These fields are searchable, filterable, and indexed for query acceleration, so you get fast, scoped queries automatically as long as your instrumentation populates the standard OTel resource attributes.

source (tightest physical producer — the "same machine" pivot):

Priority  Attribute
1         k8s.namespace.name + / + k8s.pod.name (composite)
2         k8s.pod.name
3         faas.instance
4         service.instance.id
5         container.name
6         container.id
7         host.name
8         host.id
9         k8s.node.name

service (logical service identity):

Priority  Attribute
1         service.name
2         k8s.deployment.name
3         k8s.statefulset.name
4         k8s.daemonset.name
5         k8s.cronjob.name
6         k8s.job.name
7         faas.name

app (broader application grouping):

Priority  Attribute
1         service.namespace
2         k8s.namespace.name
3         deployment.environment
4         deployment.environment.name

All resource attributes are also retained verbatim under a nested resource object with dotted keys unflattened (resource.service.instance.id, etc.); nothing is dropped, and you can still query the original attribute values.
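The priority logic amounts to "first match wins" down each table. An illustrative Python sketch of the service pivot (this mirrors the documented priority order; it is not SparkLogs source code):

```python
# Priority order from the service table above, highest priority first.
SERVICE_PRIORITY = [
    "service.name",
    "k8s.deployment.name",
    "k8s.statefulset.name",
    "k8s.daemonset.name",
    "k8s.cronjob.name",
    "k8s.job.name",
    "faas.name",
]

def derive_service(resource_attrs):
    """Return the highest-priority matching attribute value, or None."""
    for key in SERVICE_PRIORITY:
        if key in resource_attrs:
            return resource_attrs[key]
    return None

# service.name (priority 1) wins over k8s.deployment.name (priority 2).
print(derive_service({"k8s.deployment.name": "checkout", "service.name": "checkout-api"}))
```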

Your data is further grouped by the hierarchical organization tree that you configure. Role-based access controls are enforced at the organization level, so you can give different users access to different data as appropriate.

Field mapping

SparkLogs builds one event per OTLP log record:

  • Timestamps: timeUnixNano populates the standard event timestamp; observedTimeUnixNano is preserved as observed_time when present.
  • Severity: severityNumber (1–24, per the OpenTelemetry log data model) and severityText map to the standard severity columns.
  • Trace correlation: trace_id, span_id, numeric flags, and a derived boolean flag_sampled when the W3C sampled bit is set. Correlation IDs are stored as lowercase hex strings without dashes or padding.
  • Body shape: a string body populates message; a non-string scalar body is stringified into message; a map body merges at the event root with dotted keys unflattened; an array body is stored as the body field.
  • Scope: instrumentation scope is preserved as a nested scope object, and root subsource mirrors the trimmed scope.name when present.
  • Service identity: Logical service from your resource attributes (see the table above) is available for queries as service. See Standard field mapping.

After OTLP decode, events flow through the same pipeline as every other ingest path, including standard-field detection and AutoExtract. Even if your OTel pipeline only emits a single-string body, structured key/value pairs, embedded JSON, IP addresses, and timestamps inside that body are extracted into searchable custom fields automatically.
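The body-shape rules above can be sketched as a small mapping function (illustrative only; the field names follow the bullets, but this is not SparkLogs source code):

```python
def map_body(body):
    """Sketch of the documented OTLP body-shape rules."""
    if isinstance(body, str):
        return {"message": body}       # string body populates message
    if isinstance(body, dict):
        return dict(body)              # map body merges at the event root
    if isinstance(body, list):
        return {"body": body}          # array body is stored as the body field
    return {"message": str(body)}      # non-string scalar is stringified


print(map_body("hello"))
print(map_body({"user": "alice", "latency_ms": 12}))
```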

Reliability

OTLP/HTTP ingestion ships with operational guarantees to keep your at-scale data ingestion reliable and efficient:

  • Request deduplication: Server-side dedup on a content hash of (agent_id, client_ip, body) for payloads ≥ 1 KiB. Safe retries on transient network errors don't double-write events. Dedup can be disabled per request via the X-No-Dedup header when you genuinely need to send identical batches.
  • Clock-drift correction: When a request reports a client clock that differs from server time by more than 120 seconds, SparkLogs applies a per-request adjustment so events land at their actual occurrence time despite skewed source clocks (useful for IoT, embedded devices, and air-gapped systems). To activate, your client must send the X-Client-Clock-Utc-Now header.
  • Denial-of-service defense: Request size limits (50 MiB compressed, 200 MiB decompressed) keep per-batch ingestion performance predictable. Tune send_batch_size in your collector or SDK to stay well under these. The ingestor scans for and rejects malformed or malicious instrumentation.
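Opting in to clock-drift correction means sending the X-Client-Clock-Utc-Now header with each request. A hedged Python sketch (the header names come from the docs above, but the RFC 3339 timestamp format is an assumption; confirm the exact value shape for your deployment):

```python
from datetime import datetime, timezone

# Report the client's current UTC clock so the server can correct drift.
# Timestamp format (RFC 3339 / ISO 8601 UTC) is an assumption here.
headers = {
    "X-Client-Clock-Utc-Now": datetime.now(timezone.utc).isoformat(),
    # "X-No-Dedup": "1",  # only when intentionally resending identical batches
}
print(headers["X-Client-Clock-Utc-Now"])
```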

Response contract

Responses conform to the OpenTelemetry OTLP specification:

Success (HTTP 2xx)

  • Content-Type matches the request.
  • JSON requests: response body is the literal {}.
  • Protobuf requests: response body is ExportLogsServiceResponse (an empty message serialized as a 0-byte body is valid and represents success).

Errors (non-2xx)

For errors produced inside OTLP log handling (after authentication), the response body is google.rpc.Status encoded in the same format as the request (JSON or protobuf). The numeric code field is a standard gRPC status code.

HTTP status should drive retry behavior. Typical mappings from the ingest service are:

HTTP status  google.rpc.Status.code  Meaning
400          INVALID_ARGUMENT (3)    Decode error, invalid OTLP payload, bad query/header options, or adapter failure
401          PERMISSION_DENIED (7)   Agent credentials are invalid (see note below)
403          PERMISSION_DENIED (7)   Credentials valid but access denied (for example cloud resource permissions); check private cloud setup (see note below)
413          OUT_OF_RANGE (11)       Request body exceeds the compressed 50 MiB cap (MaxBytesReader)
415          INVALID_ARGUMENT (3)    Unsupported explicit Content-Type (must indicate JSON or protobuf, see above); OTLP responds with JSON google.rpc.Status for this case
429          UNAVAILABLE (14)        Transient condition (retryable)
500          INTERNAL (13)           Server-side error (including unclassified ingest failures mapped to HTTP 500)
note

401 and 403 responses are generated by the authentication layer before OTLP handling runs. Those responses may have a generic error response in JSON (or no payload at all).
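A minimal retry classifier driven by HTTP status, per the table above (treating other 5xx responses as transient, beyond the documented 429 and 500 rows, is an assumption):

```python
# Assumption: besides the documented 429 and 500, other 5xx responses are
# treated as transient. 4xx client errors (bad payload, bad credentials,
# oversized body) are never retried.
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def should_retry(status):
    """Decide retry behavior from the ingest response's HTTP status."""
    return status in RETRYABLE_STATUSES
```

Pair this with exponential backoff in your shipper; the server-side request deduplication described above makes such retries safe for payloads of 1 KiB or more.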

Limits

  • Request body: 50 MiB compressed, 200 MiB after decompression.
  • Per-event message size, per-event total JSON size, and trimming behavior follow the same rules as HTTPS+JSON limits; trimmed events get an __event_truncated_details field.
  • Tune send_batch_size in your collector or SDK to keep individual batches comfortably below the body cap.

Tested compatibility

SparkLogs speaks the OTLP/HTTP wire protocol, so any compliant client works.

Relationship to other APIs

  • HTTPS+JSON: JSON event arrays, for shippers and apps where OTLP is not their native data schema (for example, vector.dev).
  • Elasticsearch Bulk and Loki: high fidelity compatibility for shippers built around these protocols.
  • OTLP/HTTP (this page): choose this if your shipper or SDK works with the OTLP data schema natively. Ingesting OTLP data via our native OTLP/HTTP API gives you the richest semantic-attribute fidelity and the strongest forward compatibility with the OpenTelemetry ecosystem.