OTLP/HTTP API
Overview
SparkLogs is OpenTelemetry-aligned by design: schemaless storage, infinite cardinality, native support for wide events, and automatic detection of OpenTelemetry semantic conventions. Native OTLP/HTTP ingestion means standard OpenTelemetry SDKs, the OpenTelemetry Collector, and any OTLP-compliant exporter ship logs directly to SparkLogs — no vendor adapter, no custom transport.
Your instrumentation stays portable. Ship the same OTLP stream to SparkLogs today and to anything else tomorrow — your observability code never has to change.
If your data is not in OTLP format and you just want to ingest events as JSON, use the HTTPS+JSON API instead.
Signal status
| Signal | Status | Endpoint |
|---|---|---|
| Logs | Generally available | POST /v1/logs |
| Traces | Roadmap | — |
| Metrics | Roadmap | — |
| Profiles | Roadmap | — |
Today we only support receiving logs. Requests to /v1/traces, /v1/metrics, or /v1/profiles return HTTP 404. Point your collector or SDK at logs only until trace and metric support lands.
Language-by-language runnable projects (with make mock-test) live in the public sparklogs-ingest-examples repo — see the OpenTelemetry SDKs hub for direct links.
Curl smoke test
Verify the endpoint with a minimal OTLP/JSON payload (uncompressed for clarity):
export AGENT_ID="<AGENT-ID>"
export AGENT_ACCESS_TOKEN="<AGENT-ACCESS-TOKEN>"
curl -X POST "https://ingest-<REGION>.engine.sparklogs.app/v1/logs" \
-H "Authorization: Bearer ${AGENT_ID}:${AGENT_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
--data-binary @- <<'EOF'
{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"checkout"}},{"key":"deployment.environment","value":{"stringValue":"development"}}]},"scopeLogs":[{"scope":{"name":"my.app"},"logRecords":[{"timeUnixNano":"1714000000000000000","severityNumber":9,"severityText":"INFO","body":{"stringValue":"hello from OTLP"}}]}]}]}
EOF
Expected response: HTTP 200, Content-Type: application/json, body {}.
OpenTelemetry Collector
Use the contrib distribution's otlphttp exporter:
exporters:
otlphttp/sparklogs:
logs_endpoint: "https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
compression: zstd
encoding: proto
auth:
authenticator: basicauth/sparklogs
extensions:
basicauth/sparklogs:
client_auth:
username: "<AGENT-ID>"
password: "<AGENT-ACCESS-TOKEN>"
Prefer logs_endpoint (the full URL including /v1/logs) over endpoint so the collector does not inadvertently ship unsupported signals. See the full OpenTelemetry Collector guide for receivers, processors, and deployment topology.
OpenTelemetry SDKs
OTel SDKs read exporter configuration from environment variables. The same variables work across Python, Node.js, Go, Java, .NET, Rust, and others:
export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer ${AGENT_ID}:${AGENT_ACCESS_TOKEN}"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
Common supported values for compression are none and gzip. Python also supports deflate (faster but less compression). The Rust SDK has support for zstd (ideal algorithm for logs) via the zstd-tonic and zstd-http feature flags.
Python:
Install the appropriate packages:
- pip
- Poetry
- uv
- pipenv
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
poetry add opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
uv add opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
pipenv install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
Then set up logging in your code:
import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
provider = LoggerProvider()
provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))
# use the provider to configure the global logger
set_logger_provider(provider)
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
Node.js:
Install the appropriate packages:
- npm
- pnpm
- Yarn
- Bun
npm install @opentelemetry/api @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-proto
pnpm add @opentelemetry/api @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-proto
yarn add @opentelemetry/api @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-proto
bun add @opentelemetry/api @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-proto
Then set up logging in your code:
import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';
const provider = new LoggerProvider({
processors: [new BatchLogRecordProcessor(new OTLPLogExporter())],
});
Go:
Install the appropriate packages:
go get go.opentelemetry.io/otel \
go.opentelemetry.io/otel/log \
go.opentelemetry.io/otel/sdk/log \
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
Then set up logging in your code:
import (
"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
sdklog "go.opentelemetry.io/otel/sdk/log"
)
exporter, err := otlploghttp.New(ctx)
// handle error as appropriate
provider := sdklog.NewLoggerProvider(sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)))
Endpoint and authentication
- Method: POST
- Path: /v1/logs on your region's ingest base URL — TLS required.
- Authentication: Same agent credentials as HTTPS+JSON: set Authorization to Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>, or use HTTP Basic (agent ID as username, agent access token as password). View your agent's credentials in the app under Configure → Agents.
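If your shipper cannot set a custom Bearer header, the HTTP Basic form works the same way. A quick sketch with dummy credentials (the real values come from Configure → Agents):

```shell
# Dummy credentials for illustration only; substitute your real agent values.
AGENT_ID="myagent"
AGENT_ACCESS_TOKEN="secret"

# HTTP Basic is just base64("<agent-id>:<agent-access-token>") in the Authorization header.
BASIC=$(printf '%s:%s' "$AGENT_ID" "$AGENT_ACCESS_TOKEN" | base64)
echo "Authorization: Basic $BASIC"

# Equivalent curl invocation (curl performs the encoding itself via -u):
#   curl -X POST "https://ingest-<REGION>.engine.sparklogs.app/v1/logs" \
#     -u "${AGENT_ID}:${AGENT_ACCESS_TOKEN}" \
#     -H "Content-Type: application/json" \
#     --data-binary '{"resourceLogs":[]}'
```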
Encoding and compression
Content-Type selects the OTLP payload encoder:
| Content-Type | Body |
|---|---|
| application/json | OTLP logs JSON |
| application/x-protobuf | OTLP protobuf ExportLogsServiceRequest |
| application/x-google-protobuf | Same protobuf encoding (alias) |
| application/protobuf | Same protobuf encoding (alias) |
If Content-Type is omitted, text/plain, or application/octet-stream, SparkLogs still accepts a valid OTLP JSON or protobuf payload (encoding is inferred from the body).
Any other explicit media type returns HTTP 415 with a google.rpc.Status body.
Content-Encoding (compression) — SparkLogs decompresses transparently:
| Algorithm | Notes |
|---|---|
zstd | Recommended — best ratio and speed |
gzip | Default for most OpenTelemetry SDKs |
deflate, zlib | Standard zlib variants |
snappy | Listed by the OTLP specification |
lz4, lz4-block | High-throughput shippers |
identity | No compression |
Body limits: 50 MiB compressed, 200 MiB after decompression (decompression-bomb defense).
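Putting the encoding and compression rules together, here is a minimal Python sketch (standard library only) that builds a gzip-compressed OTLP/JSON body with the matching headers. The region and credentials are placeholders, so the actual send is shown only as a comment:

```python
import gzip
import json

# Minimal OTLP/JSON logs payload (same shape as the curl smoke test).
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "checkout"}}
        ]},
        "scopeLogs": [{"logRecords": [
            {"severityNumber": 9, "severityText": "INFO",
             "body": {"stringValue": "hello, compressed"}}
        ]}]
    }]
}

raw = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(raw)  # SparkLogs decompresses transparently

headers = {
    "Authorization": "Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>",
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",
}

# To send (placeholder region/credentials, so shown as a comment):
# import urllib.request
# req = urllib.request.Request(
#     "https://ingest-<REGION>.engine.sparklogs.app/v1/logs",
#     data=compressed, headers=headers, method="POST")
# urllib.request.urlopen(req)  # expect HTTP 200 with body {}
```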
Resource attributes → source / service / app
SparkLogs derives three pivot fields from your OpenTelemetry Resource attributes using documented semantic conventions. These are searchable, filterable, and indexed for query acceleration. This means you get fast, scoped queries automatically as long as your instrumentation populates the standard OTel resource attributes.
source (tightest physical producer — the "same machine" pivot):
| Priority | Attribute |
|---|---|
| 1 | k8s.namespace.name + / + k8s.pod.name (composite) |
| 2 | k8s.pod.name |
| 3 | faas.instance |
| 4 | service.instance.id |
| 5 | container.name |
| 6 | container.id |
| 7 | host.name |
| 8 | host.id |
| 9 | k8s.node.name |
service (logical service identity):
| Priority | Attribute |
|---|---|
| 1 | service.name |
| 2 | k8s.deployment.name |
| 3 | k8s.statefulset.name |
| 4 | k8s.daemonset.name |
| 5 | k8s.cronjob.name |
| 6 | k8s.job.name |
| 7 | faas.name |
app (broader application grouping):
| Priority | Attribute |
|---|---|
| 1 | service.namespace |
| 2 | k8s.namespace.name |
| 3 | deployment.environment |
| 4 | deployment.environment.name |
All resource attributes are also retained verbatim under a nested resource object with dotted keys unflattened (resource.service.instance.id, etc.); nothing is dropped, and you can still query the original attribute values.
Your data is further grouped by the hierarchical organization tree that you configure. Role-based access controls are enforced at the organization level, so you can give different users access to different data as appropriate.
Field mapping
SparkLogs builds one event per OTLP log record:
- Timestamps: timeUnixNano populates the standard event timestamp; observedTimeUnixNano is preserved as observed_time when present.
- Severity: severityNumber (1–24, per the OpenTelemetry log data model) and severityText map to the standard severity columns.
- Trace correlation: trace_id, span_id, numeric flags, and a derived boolean flag_sampled when the W3C sampled bit is set. Correlation IDs are lowercase and stored as hex-formatted strings without dashes or padding.
- Body shape: a string body populates message; a non-string scalar body is stringified into message; a map body merges at the event root with dotted keys unflattened; an array body is stored as the body field.
- Scope: instrumentation scope is preserved as a nested scope object, and root subsource mirrors the trimmed scope.name when present.
- Service identity: the logical service derived from your resource attributes (see the table above) is available for queries as service. See Standard field mapping.
After OTLP decode, events flow through the same pipeline as every other ingest path, including standard-field detection and AutoExtract. Even if your OTel pipeline only emits a single-string body, structured key/value pairs, embedded JSON, IP addresses, and timestamps inside that body are extracted into searchable custom fields automatically.
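The body-shape rules in particular can be sketched in a few lines (illustrative Python, not SparkLogs' actual code; dotted-key unflattening of map bodies is omitted for brevity):

```python
# Illustrative sketch of the documented body-shape mapping rules.
def map_body(body: object) -> dict:
    event: dict = {}
    if isinstance(body, str):
        event["message"] = body       # string body -> message
    elif isinstance(body, dict):
        event.update(body)            # map body merges at the event root
    elif isinstance(body, list):
        event["body"] = body          # array body -> stored as `body`
    else:
        event["message"] = str(body)  # non-string scalar is stringified
    return event
```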
Reliability
OTLP/HTTP ingestion ships with operational guarantees to keep your at-scale data ingestion reliable and efficient:
- Request deduplication: Server-side dedup on a content hash of (agent_id, client_ip, body) for payloads ≥ 1 KiB. Safe retries on transient network errors don't double-write events. Dedup can be disabled per request via the X-No-Dedup header when you genuinely need to send identical batches.
- Clock-drift correction: When a request reports a client clock that differs from server time by more than 120 seconds, SparkLogs applies a per-request adjustment so events land at their actual occurrence time despite skewed source clocks (useful for IoT, embedded devices, and air-gapped systems). To activate, your client must send the X-Client-Clock-Utc-Now header.
- Denial-of-service defense: Request size limits (50 MiB compressed, 200 MiB decompressed) ensure predictable ingestion performance per batch. Tune send_batch_size in your collector or SDK to stay well under these. The ingestor scans for and rejects malformed or malicious instrumentation.
Response contract
Responses conform to the OpenTelemetry OTLP specification:
Success (HTTP 2xx)
- Content-Type matches the request.
- JSON requests: response body is the literal {}.
- Protobuf requests: response body is ExportLogsServiceResponse (an empty message as a 0-byte serialized body is valid and represents success).
Errors (non-2xx)
For errors produced inside OTLP log handling (after authentication), the response body is google.rpc.Status encoded in the same format as the request (JSON or protobuf). The numeric code field is a standard gRPC status code.
HTTP status should drive retry behavior. Typical mappings from the ingest service are:
| HTTP status | google.rpc.Status.code | Meaning |
|---|---|---|
| 400 | INVALID_ARGUMENT (3) | Decode error, invalid OTLP payload, bad query/header options, or adapter failure |
| 401 | PERMISSION_DENIED (7) | Agent credentials are invalid (see note below) |
| 403 | PERMISSION_DENIED (7) | Credentials valid but access denied (for example cloud resource permissions); check private cloud setup (see note below) |
| 413 | OUT_OF_RANGE (11) | Request body exceeds the compressed 50 MiB cap (MaxBytesReader) |
| 415 | INVALID_ARGUMENT (3) | Unsupported explicit Content-Type (must indicate JSON or protobuf, see above); OTLP responds with JSON google.rpc.Status for this case |
| 429 | UNAVAILABLE (14) | (retryable) Transient condition |
| 500 | INTERNAL (13) | Server-side error (including unclassified ingest failures mapped to HTTP 500) |
401 and 403 responses are generated by the authentication layer before OTLP handling runs. Those responses may have a generic error response in JSON (or no payload at all).
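A retry policy driven by these statuses can be sketched as follows. The table marks only 429 as retryable; treating other 5xx statuses as retryable is a common client convention, an assumption here rather than a SparkLogs guarantee:

```python
import random

def should_retry(status: int) -> bool:
    """429 is explicitly retryable; retrying 5xx is a common client convention."""
    return status == 429 or status >= 500

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff delay in seconds for the given attempt."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

4xx payload errors (400, 413, 415) indicate a problem with the request itself; resending the same batch will fail again, so fix the payload instead of retrying.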
Limits
- Request body: 50 MiB compressed, 200 MiB after decompression.
- Per-event message size, per-event total JSON size, and trimming behavior follow the same rules as HTTPS+JSON limits; trimmed events get an __event_truncated_details field.
- Tune send_batch_size in your collector or SDK to keep individual batches comfortably below the body cap.
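In the OpenTelemetry Collector, batch sizing lives in the batch processor; a sketch with illustrative values (tune for your own record sizes, these are not official recommendations):

```yaml
processors:
  batch:
    send_batch_size: 8192       # log records per batch (illustrative)
    send_batch_max_size: 16384  # hard cap; larger batches are split
    timeout: 5s                 # flush a partial batch after this delay
```

Add batch to the processors list of your logs pipeline so it runs before the otlphttp exporter.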
Tested compatibility
SparkLogs speaks the OTLP/HTTP wire protocol, so any compliant client works:
- OpenTelemetry Collector: otlphttp exporter (contrib distribution).
- OpenTelemetry SDKs: Python, Node.js, Go, Java, .NET, Rust, and other languages with OTEL_EXPORTER_OTLP_LOGS_* support; see the language-specific examples above, SDK docs, and configuration reference.
- Grafana Alloy: otelcol.exporter.otlphttp exporter.
- Vector: opentelemetry sink (more details).
- Any HTTP client that can POST JSON or protobuf OTLP payloads.
Relationship to other APIs
- HTTPS+JSON: JSON event arrays, for shippers and apps where OTLP is not their native data schema (for example, vector.dev).
- Elasticsearch Bulk and Loki: high fidelity compatibility for shippers built around these protocols.
- OTLP/HTTP (this page): choose this if your shipper or SDK works with the OTLP data schema natively. Ingesting OTLP data via our native OTLP/HTTP API gives you the richest semantic-attribute fidelity and the strongest forward compatibility with the OpenTelemetry ecosystem.