
SparkLogs

Petabytes of Log Data

25 GB/month free. As low as $0.10/GB-ingested.
 

Schemaless

Ingestion is "point and shoot": no fields to configure, just send data. Capture complex JSON data with each log event. No field limits.

AutoExtract

Auto-extract semi-structured and JSON data from plain text. Auto-detect field types. Auto-extract IP addresses, timestamps, and bracketed values.
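To illustrate the kind of extraction described above, here is a minimal sketch of pulling IP addresses, timestamps, and bracketed values out of a plain-text log line. This is an illustrative example of the technique, not SparkLogs' actual extraction engine.

```python
import re

# Illustrative only -- a minimal auto-extraction sketch, not SparkLogs' engine.
LOG_PATTERNS = {
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "timestamp": re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}"),
    "bracketed": re.compile(r"\[([^\]]+)\]"),
}

def auto_extract(line: str) -> dict:
    """Extract common structured fields from a plain-text log line."""
    fields = {}
    for name, pattern in LOG_PATTERNS.items():
        matches = pattern.findall(line)
        if matches:
            fields[name] = matches if len(matches) > 1 else matches[0]
    return fields

line = "2024-05-01 12:00:03 [ERROR] request from 10.0.0.5 failed"
print(auto_extract(line))
```

A real schemaless pipeline would also infer field types (number, string, timestamp) from the extracted values rather than treating everything as text.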

Visual Data Exploration

Visualize patterns across billions of events.
Instant zoom-in, filter, search, and export.
Easily sift through huge query results.

Petabyte Scale

Fully managed in our cloud.
Always on, infinitely scalable.

Ingest Anything

Open-source ingestion agents for files, Kubernetes, syslog, journald, Kafka, Docker, and more.
Compatible with fluentbit, vector.dev, filebeat, Logstash, and Alloy.
Or ingest via the REST or Elasticsearch API.

Enterprise Ready

Data encrypted at rest and in transit.
SSO in every plan. Role-based access control.
Optionally use your own Google Cloud tenant.

Petabyte-scale Querying

Analyze datasets with hundreds of billions of events in under 10 seconds.
SQL-like query language with custom fields, array unfolding, and advanced operators.
Full-text index provides fast search over any time scale for “needle-in-PB-haystack” type searches.
Adaptive-scale querying to explore any time scale in seconds, then easily zoom in to refine areas of interest.
Example of Massive Scale Adaptive Querying Engine

Interactive Data Exploration

Interactive histogram with live zooming and instant severity filtering.
Filter by data hierarchy (organization) and data sources; pivot to filter on any field.
Explore events at any point within the query time window, with infinite bi-directional scrolling.
Side-by-side context viewer (e.g., matching errors on left, surrounding full context on right).
Copy logs to the clipboard or download the first million matches with a shareable link.

Pattern Analysis

Automatically classify log events into prototypical patterns with zero configuration.
Identify top application error patterns, then pivot to see examples in context.
Analyze top 10,000 values for any (custom) field over any window of time.
Example of pattern analysis over the automatic log classification field over a 1-day time period
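The classification described above can be sketched as follows: collapse the variable tokens of each log line (numbers, IPs, hex IDs) into placeholders, so lines that differ only in those values fall into the same prototypical pattern. This is an illustrative sketch of the general technique, not SparkLogs' actual classification algorithm.

```python
import re
from collections import Counter

# Illustrative sketch of log pattern classification, not SparkLogs' algorithm.
def to_pattern(line: str) -> str:
    """Collapse variable tokens (IPs, hex ids, numbers) into placeholders."""
    line = re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", "<ip>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<hex>", line)
    line = re.sub(r"\d+", "<num>", line)
    return line

logs = [
    "timeout after 30s contacting 10.0.0.5",
    "timeout after 45s contacting 10.0.0.9",
    "disk /dev/sda1 at 91% capacity",
]
# Count events per prototypical pattern; the top counts are the top patterns.
patterns = Counter(to_pattern(l) for l in logs)
```

With this approach the first two lines collapse into one "timeout" pattern with a count of 2, which is how top error patterns surface even across billions of raw events.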

How Ingestion Works

Support for your favorite tools like fluentbit, vector.dev, filebeat, Logstash, Alloy.
Or ingest via the HTTPS/REST, Elasticsearch, or Loki Push APIs.
How Ingestion Works:
1. Organizations scope data.
2. Users are granted access to data at the level of organizations.
3. Agents are logical API endpoints that belong to a single organization.
4. Any number of data sources can ship data to an agent API endpoint using log shippers like vector.dev, fluentbit, filebeat, or Logstash, or directly via the HTTP REST, Elasticsearch, or Loki protocols.

Zero Config + Schemaless + AutoExtract

No indexes to configure, no field schemas, no parsing rules. Point and shoot ingestion.
Infinite custom fields. Infinite cardinality. Send plain text or (semi) structured data.
AutoExtract structured data from plain text. Automatic category and pattern classification.

Try it out for yourself on your own log line!

Game Changing Cost Savings

Illustrative example for a company that ingests 20 TB/month, retains logs for 30 days, and makes 200 queries/day.
We offer three pricing plans to meet the needs of organizations of any size, from GB-scale to PB-scale.
SparkLogs delivers up to 25x cost savings vs competitors. For an organization ingesting 20 TB/month, SparkLogs costs between $2,900/month and $7,800/month depending on the service plan, vs competing solutions that cost between $16,100/month and $71,905/month.
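The "up to 25x" claim follows directly from the figures quoted above, as a quick sanity check shows (all dollar amounts are the page's own; nothing else is assumed):

```python
# Sanity-check the savings ratios quoted above, using the page's own figures.
sparklogs_low, sparklogs_high = 2_900, 7_800      # SparkLogs $/month at 20 TB/month
competitor_low, competitor_high = 16_100, 71_905  # competitor $/month at 20 TB/month

max_savings = competitor_high / sparklogs_low   # priciest competitor vs cheapest plan
min_savings = competitor_low / sparklogs_high   # cheapest competitor vs priciest plan
print(f"savings range: {min_savings:.1f}x to {max_savings:.1f}x")
```

The best case works out to roughly 24.8x, which rounds to the advertised 25x; even the worst-case pairing is about a 2.1x saving.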