Node.js
Overview
The Node.js OpenTelemetry SDK exports OTLP/HTTP logs natively. The recommended path is to keep using your existing logging library (Pino, Winston, or Bunyan) and route its output through the matching OpenTelemetry transport / appender that bridges to the OTel SDK.
| Topology | When to use it |
|---|---|
| OTel SDK direct (this page) | The default. Up to a few hundred events / second / process. Simplest setup — no extra hop, no second process. |
| OTel SDK → local collector | Many services on a node, multi-backend fan-out, central config / secret management, queue-on-outage durability, sampling or redaction. See the OTel SDKs in production guide. |
| Log to file + agent | Languages without a stable OTel logs SDK, very high throughput, air-gapped environments, or container runtimes that already capture stdout. See the operating-systems page. |
Prerequisites
You need the following before you start:
- Region — us or eu. Pick the region your SparkLogs account lives in.
- Agent ID — short identifier for the agent that will send these logs.
- Agent access token — bearer credential for the agent.
View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.
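The agent ID and access token are later combined into a single bearer credential for the exporter's Authorization header. As a sketch (buildOtlpAuthHeader is a hypothetical helper, not part of any SDK), the header value looks like this:

```javascript
// Build the OTLP logs auth header value from an agent ID and access token.
// buildOtlpAuthHeader is a hypothetical helper for illustration only.
function buildOtlpAuthHeader(agentId, accessToken) {
  return `Authorization=Bearer ${agentId}:${accessToken}`;
}

// Example: the value you would put in OTEL_EXPORTER_OTLP_LOGS_HEADERS
console.log(buildOtlpAuthHeader('agent-123', 's3cr3t'));
// → Authorization=Bearer agent-123:s3cr3t
```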
Install
- npm
- pnpm
- Yarn
- Bun
npm install '@opentelemetry/api@^1.9.0' '@opentelemetry/api-logs@^0.57.0' \
'@opentelemetry/sdk-logs@^0.57.0' \
'@opentelemetry/exporter-logs-otlp-proto@^0.57.0' \
'@opentelemetry/resources@^1.30.0' \
'@opentelemetry/semantic-conventions@^1.30.0'
pnpm add '@opentelemetry/api@^1.9.0' '@opentelemetry/api-logs@^0.57.0' \
'@opentelemetry/sdk-logs@^0.57.0' \
'@opentelemetry/exporter-logs-otlp-proto@^0.57.0' \
'@opentelemetry/resources@^1.30.0' \
'@opentelemetry/semantic-conventions@^1.30.0'
yarn add '@opentelemetry/api@^1.9.0' '@opentelemetry/api-logs@^0.57.0' \
'@opentelemetry/sdk-logs@^0.57.0' \
'@opentelemetry/exporter-logs-otlp-proto@^0.57.0' \
'@opentelemetry/resources@^1.30.0' \
'@opentelemetry/semantic-conventions@^1.30.0'
bun add '@opentelemetry/api@^1.9.0' '@opentelemetry/api-logs@^0.57.0' \
'@opentelemetry/sdk-logs@^0.57.0' \
'@opentelemetry/exporter-logs-otlp-proto@^0.57.0' \
'@opentelemetry/resources@^1.30.0' \
'@opentelemetry/semantic-conventions@^1.30.0'
These ranges match the tested sparklogs-ingest-examples Node.js OTel projects; each example’s package.json is the source of truth.
Configure the OTel exporter
Most OTel SDKs read exporter configuration from environment variables. Set these before your app starts:
export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest-<REGION>.engine.sparklogs.app/v1/logs"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>"
export OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000
Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.
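A typo in any of these variables usually shows up only as silently missing logs. A quick startup sanity check can catch the common mistakes; checkOtlpLogsEnv below is a hypothetical helper sketch, not part of the OTel SDK:

```javascript
// Validate the OTLP logs exporter environment before the SDK starts.
// checkOtlpLogsEnv is a hypothetical helper for illustration only.
function checkOtlpLogsEnv(env) {
  const problems = [];
  const endpoint = env.OTEL_EXPORTER_OTLP_LOGS_ENDPOINT;
  if (!endpoint) {
    problems.push('OTEL_EXPORTER_OTLP_LOGS_ENDPOINT is not set');
  } else {
    try {
      new URL(endpoint); // throws on malformed URLs
    } catch {
      problems.push(`endpoint is not a valid URL: ${endpoint}`);
    }
  }
  const headers = env.OTEL_EXPORTER_OTLP_LOGS_HEADERS || '';
  if (!headers.includes('Authorization=Bearer ')) {
    problems.push('OTEL_EXPORTER_OTLP_LOGS_HEADERS is missing the bearer credential');
  }
  return problems; // empty array means the basics look right
}

console.log(checkOtlpLogsEnv({
  OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: 'https://ingest-us.engine.sparklogs.app/v1/logs',
  OTEL_EXPORTER_OTLP_LOGS_HEADERS: 'Authorization=Bearer my-agent:my-token',
}));
// → []
```

Calling this once at process start (for example, from your bootstrap file) fails fast instead of dropping logs silently.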
Other OTLP receivers. The variables above are the standard OpenTelemetry logs exporter settings. Point OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS at SparkLogs as shown, at a local OpenTelemetry Collector (for example http://localhost:4318/v1/logs), or at any OTLP/HTTP-compatible receiver. Swap only the URL and auth headers your target expects; keep OTEL_EXPORTER_OTLP_LOGS_PROTOCOL aligned with what that endpoint accepts.
Why set OTEL_EXPORTER_OTLP_LOGS_TIMEOUT? The OTel default is 10s, but on rare occasions our cloud may delay a request by up to 12 seconds (p99.99 latency). 25s leaves headroom for that tail latency plus network delays.
Compression. gzip is recommended and what most users should use. CPU-constrained workloads can set OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=none to send uncompressed — SparkLogs does not bill for inbound bytes, so the trade-off is purely network-vs-CPU on your side. See the scaling guide for the full list and important SDK-vs-wire-protocol differences.
Batching. The OTel SDK's BatchLogRecordProcessor defaults (max queue 2048, max batch 512, 1s schedule delay, 30s export timeout) are production-appropriate for most workloads. Higher-throughput pipelines may want to tune them — see the scaling guide.
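If you do need to tune batching, BatchLogRecordProcessor accepts its limits as a second constructor argument. A sketch with illustrative values (these are not recommendations — start from the defaults and measure):

```javascript
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';

// Illustrative tuning values only — the defaults suit most workloads.
const processor = new BatchLogRecordProcessor(new OTLPLogExporter(), {
  maxQueueSize: 4096,          // default 2048; records beyond this are dropped
  maxExportBatchSize: 1024,    // default 512; records per OTLP request
  scheduledDelayMillis: 1000,  // default 1000; how often the queue is flushed
  exportTimeoutMillis: 30000,  // default 30000; per-export deadline
});
```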
Set up the OTel logger provider
Configure the SDK once at application startup:
import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { logs } from '@opentelemetry/api-logs';
const provider = new LoggerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'my-service',
[SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
[SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: 'production',
}),
});
// With @opentelemetry/sdk-logs 0.57.x, processors are registered via
// addLogRecordProcessor (the `processors` constructor option arrived later).
provider.addLogRecordProcessor(new BatchLogRecordProcessor(new OTLPLogExporter()));
logs.setGlobalLoggerProvider(provider);
BatchLogRecordProcessor is the production-correct choice. Don't use SimpleLogRecordProcessor outside tests.
Set resource attributes
SparkLogs derives the searchable source, service, and app pivot fields from your OpenTelemetry Resource attributes. Setting these correctly means your events arrive grouped, filterable, and indexed without further configuration:
- service.name — the logical service identity (e.g. checkout, auth-api). Maps to the service field.
- service.version — the version / build of the running service (e.g. 1.42.0, abc123def).
- deployment.environment — the environment label (e.g. production, staging, development). Maps to the app field.
Most OTel SDKs accept these via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list:
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.version=1.0.0,deployment.environment=production"
The SDK setup snippets below show the in-code equivalent for each language.
For the full mapping (including container, host, and Kubernetes attributes that derive source), see OTLP/HTTP API → Resource attributes.
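To illustrate the OTEL_RESOURCE_ATTRIBUTES format, here is a sketch of how the comma-separated list maps to key/value pairs. parseResourceAttributes is a hypothetical helper shown only for clarity — the OTel SDK does this parsing for you:

```javascript
// Parse an OTEL_RESOURCE_ATTRIBUTES-style string into an attribute object.
// Hypothetical helper for illustration; the OTel SDK handles this internally.
function parseResourceAttributes(value) {
  const attrs = {};
  for (const pair of value.split(',')) {
    const idx = pair.indexOf('=');
    if (idx > 0) {
      attrs[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
  }
  return attrs;
}

console.log(parseResourceAttributes(
  'service.name=my-service,service.version=1.0.0,deployment.environment=production'
));
// → { 'service.name': 'my-service', 'service.version': '1.0.0', 'deployment.environment': 'production' }
```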
Integrate the OTel SDK with your logging library
Option 1: Pino (recommended for new projects)
Pino has a first-party OpenTelemetry transport that emits OTLP-compatible logs:
npm install 'pino@^9.6.0' 'pino-opentelemetry-transport@^1.1.0'
import pino from 'pino';
const err = new Error('connection refused');
const logger = pino({
level: 'info',
transport: {
target: 'pino-opentelemetry-transport',
options: {
resourceAttributes: {
'service.name': 'my-service',
'service.version': '1.0.0',
'deployment.environment': 'production',
},
// Endpoint, headers, compression, and timeout come from OTEL_EXPORTER_OTLP_LOGS_* env vars.
},
},
});
logger.info('hello, SparkLogs');
logger.warn({ disk: '/dev/sda1', pct: 92 }, 'disk usage at 92%');
logger.error({ err }, 'connection refused');
The transport uses its own OTel SDK instance internally, so you do not need to install or configure @opentelemetry/sdk-logs separately when using Pino with pino-opentelemetry-transport.
Option 2: Winston
Winston v3+ has an OpenTelemetry transport in the OTel JS contrib repo:
npm install 'winston@^3.17.0' '@opentelemetry/winston-transport@^0.10.0'
import winston from 'winston';
import { OpenTelemetryTransportV3 } from '@opentelemetry/winston-transport';
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
transports: [
new OpenTelemetryTransportV3(), // ships to OTel SDK provider configured above
new winston.transports.Console({ level: 'warn' }),
],
});
const err = new Error('connection refused');
logger.info('hello, SparkLogs');
logger.warn('disk usage at 92%', { disk: '/dev/sda1', pct: 92 });
logger.error('connection refused', { err });
For trace-context injection (correlating logs with active spans), additionally install @opentelemetry/instrumentation-winston and register it with the auto-instrumentation loader.
Option 3: Bunyan
Bunyan integration uses @opentelemetry/instrumentation-bunyan (auto-instrumentation that both injects trace context and emits OTel log records):
npm install 'bunyan@^1.8.15' '@opentelemetry/instrumentation@^0.57.0' '@opentelemetry/instrumentation-bunyan@^0.46.0'
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { BunyanInstrumentation } from '@opentelemetry/instrumentation-bunyan';
import bunyan from 'bunyan';
registerInstrumentations({
instrumentations: [new BunyanInstrumentation()],
});
const err = new Error('connection refused');
const log = bunyan.createLogger({ name: 'my-app', level: 'info' });
log.info('hello, SparkLogs');
log.warn({ disk: '/dev/sda1', pct: 92 }, 'disk usage at 92%');
log.error({ err }, 'connection refused');
The instrumentation forwards every Bunyan log record to the OTel SDK's logger provider, so your existing console / file streams keep working unchanged.
Flush on shutdown
Short-lived processes (CLIs, scripts, AWS Lambda, Cloud Run jobs) must flush before exit:
await provider.shutdown();
For long-running services:
process.on('SIGTERM', async () => {
await provider.shutdown();
process.exit(0);
});
The shutdown call drains the queue and waits for in-flight exports up to the configured timeout. See graceful shutdown for why this matters.
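If you want a hard cap on shutdown time (so a hung export can never stall process exit), you can race the shutdown against a timer. flushWithDeadline below is a hypothetical helper sketch that works with any object exposing a shutdown() promise:

```javascript
// Race provider.shutdown() against a deadline so exit is never blocked forever.
// flushWithDeadline is a hypothetical helper, not part of the OTel SDK.
async function flushWithDeadline(provider, deadlineMs) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve('timed-out'), deadlineMs);
  });
  try {
    return await Promise.race([
      provider.shutdown().then(() => 'flushed'),
      timeout,
    ]);
  } finally {
    clearTimeout(timer); // don't keep the event loop alive after a fast flush
  }
}

// Usage sketch:
// process.on('SIGTERM', async () => {
//   await flushWithDeadline(provider, 5000);
//   process.exit(0);
// });
```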
Runnable examples
The public sparklogs-ingest-examples repo includes matching projects for this page. In each project directory, run make mock-test to send OTLP batches to a local mock receiver (no SparkLogs agent token required). Use make test with agent credentials when you want to verify against a real workspace.
The recommended OTLP/HTTP path shown above corresponds to the tested projects in sparklogs-ingest-examples.
The snippets above use ESM (import) for readability. The runnable projects on GitHub use CommonJS (require) so instrumentation order matches typical production Node services today — both module systems are supported by the OpenTelemetry JavaScript SDKs.
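For reference, a CommonJS sketch of the same provider setup, assuming the package versions pinned above:

```javascript
// CommonJS equivalent of the ESM provider setup shown earlier (sketch).
const { LoggerProvider, BatchLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-proto');
const { Resource } = require('@opentelemetry/resources');
const { logs } = require('@opentelemetry/api-logs');

const provider = new LoggerProvider({
  resource: new Resource({ 'service.name': 'my-service' }),
});
provider.addLogRecordProcessor(new BatchLogRecordProcessor(new OTLPLogExporter()));
logs.setGlobalLoggerProvider(provider);
```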
Where to next
- Production deployments — batching, queue tuning, backpressure, when to add a collector, graceful shutdown: see OTel SDKs in production.
- OTLP/HTTP transport details — full encoding / compression / auth / retry status code reference: see the OTLP/HTTP API page.
- Add an OTel Collector — when one of your services on a node should aggregate, sample, redact, or fan out to multiple backends: see the OpenTelemetry Collector guide.
- Log to a file + agent — for very high throughput, languages without a stable OTel logs SDK, or container runtimes that already capture stdout: see operating-system agents.
Legacy: shipping via the Elasticsearch bulk transport (without OpenTelemetry)
SparkLogs also accepts logs via the Elasticsearch bulk API, which works with the original winston-elasticsearch and bunyan-elasticsearch-bulk transports. OTLP/HTTP via the OpenTelemetry SDK (above) is now the recommended path — it's better aligned with the OTel ecosystem, supports trace correlation, and is the same setup we recommend across every other language. This section is preserved for users with existing pipelines on the older transports.
For very high throughput applications (hundreds of log lines per second per Node.js process), you can also log to a local logfile (or container stdout) and ship that with the relevant log forwarding agent for your operating system.
Legacy runnable examples
These runnable projects resolve the es8.ingest-<REGION> host and bearer auth from SPARKLOGS_REGION (or SPARKLOGS_INGEST_BASE_URI) plus SPARKLOGS_AGENT_ID and SPARKLOGS_AGENT_ACCESS_TOKEN. Optional npm overrides for transitive dependencies are pinned in each example's package.json if you need them.
Logging with Winston via Elasticsearch bulk
Winston is a popular logging library for Node.js. The Elasticsearch transport ships logs directly to the SparkLogs ES-bulk endpoint.
Use the Winston format.timestamp() and format.json() formats for maximum log fidelity.
Follow the instructions below, or start from the Winston + Elasticsearch bulk example.
Instructions for Winston via ES bulk
1. Install NPM packages
npm install 'winston@^3.17.0' 'winston-elasticsearch@^0.19.0'
2. Configure the Winston transport
import os from 'os';
import winston from 'winston';
import { ElasticsearchTransport } from 'winston-elasticsearch';
// Appends a logical counter in the microseconds position of the timestamp so
// that events logged within the same millisecond keep their relative order
let logicalTimeCounter = 0, logicalTimeLastMs = "";
function formatDateWithLogicalClock(dt) {
let s = ((dt instanceof Date) ? dt.toISOString() : dt.toString()).substring(0, 23);
let curMs = s.substring(20);
logicalTimeCounter = (curMs == logicalTimeLastMs) ? ((logicalTimeCounter + 1) % 1000) : 0;
logicalTimeLastMs = curMs;
return s + logicalTimeCounter.toString().padStart(3, '0') + 'Z';
}
const cloudLoggingTransport = new ElasticsearchTransport({
level: 'info',
indexPrefix: 'app-logs',
handleExceptions: true,
flushInterval: 2000,
buffering: true,
bufferLimit: 4000,
retryLimit: 20,
source: os.hostname(),
clientOpts: {
node: 'https://es8.ingest-<REGION>.engine.sparklogs.app/',
auth: {
bearer: "<AGENT-ID>:<AGENT-ACCESS-TOKEN>",
},
headers: {"X-Timezone": Intl.DateTimeFormat().resolvedOptions().timeZone},
maxRetries: 20,
requestTimeout: 30000,
},
transformer: (e) => {
const fieldTimestamp = 'timestamp', fieldSeverity = 'severity', fieldMessage = 'textpayload';
delete e.meta[fieldTimestamp];
delete e.meta[fieldSeverity];
delete e.meta[fieldMessage];
return {
[fieldTimestamp]: formatDateWithLogicalClock(e.timestamp || (new Date())),
[fieldSeverity]: e.level,
[fieldMessage]: e.message,
...e.meta,
};
},
});
cloudLoggingTransport.on('error', (error) => {
console.error('Error shipping logs to cloud', error);
});
Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from your agent (Configure → Agents). Prefer the SPARKLOGS_* env vars from the legacy runnable examples when wiring a real app.
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
transports: [
cloudLoggingTransport,
new winston.transports.Console({level: 'warn'}),
],
});
logger.on('error', (error) => {
console.error('Logger error', error);
});
3. Use Winston for logging as usual
logger.info('Hello, Winston shipping logs to SparkLogs!');
logger.warn('This is a warning message');
logger.error('This is an error message');
logger.info('message with more fields', {"hello": "world", "f2": 42, "f3": "v3", "f4": "v4"});
Logging with Bunyan via Elasticsearch bulk
Bunyan is another popular logging library for Node.js.
To properly map Bunyan numeric severity levels, set the X-Severity-Map HTTP header to 10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL.
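The header value is a comma-separated list of number=name pairs. To illustrate how Bunyan's numeric levels resolve under that mapping (parseSeverityMap is a hypothetical helper — SparkLogs applies the mapping server-side):

```javascript
// Parse an X-Severity-Map header value into a numeric-level lookup table.
// Hypothetical helper for illustration; SparkLogs does this server-side.
function parseSeverityMap(headerValue) {
  const map = {};
  for (const pair of headerValue.split(',')) {
    const [num, name] = pair.split('=');
    map[Number(num)] = name;
  }
  return map;
}

const severityMap = parseSeverityMap('10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL');
console.log(severityMap[30]); // → INFO
console.log(severityMap[50]); // → ERROR
```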
Follow the instructions below, or start from the Bunyan + Elasticsearch bulk example.
Instructions for Bunyan via ES bulk
1. Install NPM packages
npm install 'bunyan@^1.8.15' 'bunyan-elasticsearch-bulk@^2.0.10'
2. Configure the Bunyan stream
import bunyan from 'bunyan';
import createESStream from 'bunyan-elasticsearch-bulk';
const cloudLoggingStream = createESStream({
indexPattern: '[app-logs-]YYYY[-]MM[-]DD',
interval: 2000,
limit: 4000,
node: 'https://es8.ingest-<REGION>.engine.sparklogs.app/',
auth: {
bearer: "<AGENT-ID>:<AGENT-ACCESS-TOKEN>",
},
headers: {
"X-Timezone": Intl.DateTimeFormat().resolvedOptions().timeZone,
"X-Severity-Map": "10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL",
},
maxRetries: 20,
requestTimeout: 30000,
});
function forceFlushAndCloseCloudLogs() {
cloudLoggingStream.flush();
cloudLoggingStream.end();
}
cloudLoggingStream.on('error', (error) => {
console.error('Error shipping logs to cloud', error);
});
const logger = bunyan.createLogger({
name: "my-app-name",
serializers: bunyan.stdSerializers,
streams: [
{
level: 'info',
stream: cloudLoggingStream,
},
{
level: 'info',
stream: process.stdout,
}
],
});
process.on('uncaughtException', function (err) {
logger.error("Uncaught Exception", err);
forceFlushAndCloseCloudLogs();
setTimeout(() => process.exit(1), 5000);
});
3. Use Bunyan for logging as usual
logger.info('Hello, Bunyan shipping logs to SparkLogs!');
logger.warn('This is a warning message');
logger.error('This is an error message');
4. Manually flush before exiting the process
forceFlushAndCloseCloudLogs();