Node.js

Overview

For very high-throughput applications (hundreds of log lines per second per Node.js process), we recommend that your Node.js applications log to a local logfile (or to container stdout), which you can then ship to SparkLogs with a log forwarding agent.

For other applications, plugins are available for the Winston and Bunyan logging libraries that ship logs directly to SparkLogs.

Using a Log Forwarding Agent

Configure your Node.js applications to log to a local logfile, which you can then ship to SparkLogs using the relevant log forwarding agent for your operating system.

If your applications are running in containers, application logs should be captured by your container runtime, and you should focus on integrating the appropriate log forwarding agent with your container runtime to ship logs to SparkLogs.
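If you are unsure what structured output for an agent should look like, here is a minimal, dependency-free sketch (the `logJson` helper and its field names are our own choices for illustration, not a SparkLogs API) that writes one JSON object per line to stdout, ready for a forwarding agent or container runtime to pick up:

```javascript
// Minimal structured logger: one JSON object per line to stdout.
function logJson(severity, message, fields = {}) {
  const record = {
    timestamp: new Date().toISOString(),
    severity,
    message,
    ...fields,
  };
  process.stdout.write(JSON.stringify(record) + '\n');
  return record; // returned for convenience in tests
}

logJson('info', 'service started', { port: 8080 });
logJson('error', 'upstream timeout', { upstream: 'billing', ms: 1500 });
```

A forwarding agent tailing this output can parse each line as JSON and preserve the individual fields.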

Logging with Winston

Winston is a popular logging library for Node.js. It allows you to log messages to various transports (e.g., console, file, HTTP) with different levels of severity (e.g., info, error, debug).

tip

Use the Winston format.timestamp() and format.json() formats to ensure maximum log fidelity.

See our example application or follow the instructions below.

Instructions for Winston Plugin

1. Install NPM packages

npm install winston winston-elasticsearch

2. Configure the Winston transport

In the file where you currently set up Winston, add code to configure the cloudLoggingTransport transport.

import os from 'os';
import winston from 'winston';
import { ElasticsearchTransport } from 'winston-elasticsearch';

// Uses the microseconds part of the timestamp to ensure that events logged
// in the same millisecond are ordered properly
let logicalTimeCounter = 0, logicalTimeLastMs = "";
function formatDateWithLogicalClock(dt) {
  let s = ((dt instanceof Date) ? dt.toISOString() : dt.toString()).substring(0, 23);
  let curMs = s.substring(20);
  logicalTimeCounter = (curMs === logicalTimeLastMs) ? ((logicalTimeCounter + 1) % 1000) : 0;
  logicalTimeLastMs = curMs;
  return s + logicalTimeCounter.toString().padStart(3, '0') + 'Z';
}

// The Winston transport that will ship logs to the SparkLogs cloud
const cloudLoggingTransport = new ElasticsearchTransport({
  level: 'info', // minimum severity of log lines to send via this transport
  indexPrefix: 'app-logs', // index name for these logs (could be anything you want)
  handleExceptions: true, // include exceptions in logged data
  flushInterval: 2000, // how often to flush pending logs in milliseconds
  buffering: true, // must be true for proper performance and to avoid blocking
  bufferLimit: 4000,
  retryLimit: 20,
  source: os.hostname(),
  clientOpts: {
    node: 'https://es8.ingest-<REGION>.engine.sparklogs.app/',
    auth: {
      bearer: "<AGENT-ID>:<AGENT-PASSWORD>",
      //bearer: process.env.CLOUD_LOGGING_AUTH_TOKEN, // get it from the env
    },
    headers: {"X-Timezone": Intl.DateTimeFormat().resolvedOptions().timeZone},
    maxRetries: 20,
    requestTimeout: 30000,
  },
  transformer: (e) => {
    const fieldTimestamp = 'timestamp', fieldSeverity = 'severity', fieldMessage = 'textpayload';
    delete e.meta[fieldTimestamp];
    delete e.meta[fieldSeverity];
    delete e.meta[fieldMessage];
    return {
      [fieldTimestamp]: formatDateWithLogicalClock(e.timestamp || (new Date())),
      [fieldSeverity]: e.level,
      [fieldMessage]: e.message,
      ...e.meta,
    };
  },
});
// Make sure to log errors attempting to ship logs to the cloud in some way for diagnostics
cloudLoggingTransport.on('error', (error) => {
  console.error('Error shipping logs to cloud', error);
});
tip

Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-PASSWORD> with appropriate values.
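To see what the logical-clock timestamp format produces, the helper from the snippet above can be exercised on its own (duplicated here so this sketch runs standalone):

```javascript
// Same logical-clock helper as above: repeated events within the same
// millisecond get an increasing pseudo-microseconds suffix.
let logicalTimeCounter = 0, logicalTimeLastMs = "";
function formatDateWithLogicalClock(dt) {
  let s = ((dt instanceof Date) ? dt.toISOString() : dt.toString()).substring(0, 23);
  let curMs = s.substring(20);
  logicalTimeCounter = (curMs === logicalTimeLastMs) ? ((logicalTimeCounter + 1) % 1000) : 0;
  logicalTimeLastMs = curMs;
  return s + logicalTimeCounter.toString().padStart(3, '0') + 'Z';
}

const t = new Date('2024-01-01T00:00:00.123Z');
console.log(formatDateWithLogicalClock(t)); // 2024-01-01T00:00:00.123000Z
console.log(formatDateWithLogicalClock(t)); // 2024-01-01T00:00:00.123001Z
```

Two events logged in the same millisecond thus remain distinguishable and correctly ordered after ingestion.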

Then use the transport where you set up your Winston logger, making sure to use at least the json and timestamp formats:

const logger = winston.createLogger({
  level: 'info',
  // must use timestamp and json formats
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    cloudLoggingTransport,
    // and any other transports here as you desire, for example:
    new winston.transports.Console({level: 'warn'}),
  ],
});
logger.on('error', (error) => {
  console.error('Logger error', error);
});
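Buffered events can be lost if the process exits immediately after the last log call. One way to shut down cleanly is a sketch like the following (closeLoggerAndExit is a name we made up for this example; logger.end() and the 'finish' event are standard Winston stream behavior):

```javascript
// Sketch: end the logger so all transports flush, then exit once
// Winston emits 'finish' to signal that buffered events were written.
function closeLoggerAndExit(exitCode) {
  logger.on('finish', () => process.exit(exitCode));
  logger.end();
}
```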

3. Use Winston for logging as usual

For example:

logger.info('Hello, Winston shipping logs to SparkLogs!');
logger.warn('This is a warning message');
logger.error('This is an error message');
logger.info('message with more fields', {"hello": "world", "f2": 42, "f3": "v3", "f4": "v4"});

Logging with Bunyan

Bunyan is a popular logging library for Node.js. It allows you to log messages to various streams (e.g., console, file, HTTP) with different levels of severity (e.g., info, error, debug).

tip

To properly log severity levels of Bunyan logs, you need to set the X-Severity-Map HTTP header value to 10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL. Make sure to configure your log forwarding agent to include this header, or use the example code below.
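As an illustration of what that header expresses, here is a standalone sketch that parses the map and resolves a Bunyan numeric level to its textual severity (the helper names are ours, and the real translation is performed server-side by the ingest endpoint, not by your code):

```javascript
// Parse "10=TRACE,20=DEBUG,..." into { "10": "TRACE", "20": "DEBUG", ... }
const severityMap = Object.fromEntries(
  "10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL"
    .split(',')
    .map((pair) => pair.split('=')) // e.g. ["30", "INFO"]
);

// Resolve a Bunyan numeric level; fall back to INFO for unknown values
function severityForBunyanLevel(level) {
  return severityMap[String(level)] || 'INFO';
}

console.log(severityForBunyanLevel(30)); // INFO
console.log(severityForBunyanLevel(50)); // ERROR
```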

See our example application or follow the instructions below.

Instructions for Bunyan Plugin

1. Install NPM packages

npm install bunyan bunyan-elasticsearch-bulk

2. Configure the Bunyan stream

In the file where you currently set up Bunyan, add code to configure the cloudLoggingStream stream.

import bunyan from 'bunyan';
import createESStream from 'bunyan-elasticsearch-bulk';

// The Bunyan stream that will ship logs to the SparkLogs cloud
const cloudLoggingStream = createESStream({
  indexPattern: '[app-logs-]YYYY[-]MM[-]DD', // index name for these logs (could be anything you want)
  interval: 2000, // how often to flush pending logs in milliseconds
  limit: 4000,
  // -- Elasticsearch client options
  node: 'https://es8.ingest-<REGION>.engine.sparklogs.app/',
  auth: {
    bearer: "<AGENT-ID>:<AGENT-PASSWORD>",
    //bearer: process.env.CLOUD_LOGGING_AUTH_TOKEN, // get it from the env
  },
  headers: {
    "X-Timezone": Intl.DateTimeFormat().resolvedOptions().timeZone,
    // Map Bunyan numeric levels to standard textual severity levels
    "X-Severity-Map": "10=TRACE,20=DEBUG,30=INFO,40=WARN,50=ERROR,60=FATAL",
  },
  maxRetries: 20,
  requestTimeout: 30000,
});
// Call this before you manually exit the process.
// Warning: it returns BEFORE all logs are actually shipped!
function forceFlushAndCloseCloudLogs() {
  cloudLoggingStream.flush();
  cloudLoggingStream.end();
}
// Make sure to log errors attempting to ship logs to the cloud in some way for diagnostics
cloudLoggingStream.on('error', (error) => {
  console.error('Error shipping logs to cloud', error);
});
tip

Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-PASSWORD> with appropriate values.

Then use cloudLoggingStream where you set up your Bunyan logger:

const logger = bunyan.createLogger({
  name: "my-app-name",
  serializers: bunyan.stdSerializers,
  streams: [
    {
      level: 'info',
      stream: cloudLoggingStream,
    },
    // other streams as you desire, for example:
    {
      level: 'info',
      stream: process.stdout,
    },
  ],
});
// You probably want to log uncaught exceptions to the cloud
process.on('uncaughtException', function (err) {
  logger.error("Uncaught Exception", err);
  forceFlushAndCloseCloudLogs();
  setTimeout(() => process.exit(1), 5000);
});

3. Use Bunyan for logging as usual

For example:

logger.info('Hello, Bunyan shipping logs to SparkLogs!');
logger.warn('This is a warning message');
logger.error('This is an error message');
logger.error('Test internal severity field');

4. Be sure to manually flush logs before exiting the process

Before you exit, you must manually flush logs, or the event loop will not exit:

forceFlushAndCloseCloudLogs();
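One way to make sure this happens on normal termination is to wire the flush helper into signal handlers (a sketch; the signal list and the 5-second grace period mirror the uncaughtException handler above and are our choices):

```javascript
// Flush cloud logs on SIGTERM/SIGINT, then give the stream a
// moment to drain before exiting.
for (const signal of ['SIGTERM', 'SIGINT']) {
  process.on(signal, () => {
    forceFlushAndCloseCloudLogs();
    setTimeout(() => process.exit(0), 5000);
  });
}
```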