Azure Functions

Overview

Azure Functions has built-in OpenTelemetry support — both the Functions host (stdout/stderr app logs, system telemetry from triggers, scaling, and runtime) and the language worker (your code) can emit OTLP-compliant logs and traces to any OTLP endpoint, including SparkLogs. Azure Application Insights is not required.

This page shows the minimal configuration to point a function app at SparkLogs.

.NET apps: use the isolated worker model

Azure Functions offers two .NET execution models:

  • Isolated worker (default for .NET 8+, recommended for all .NET versions) — your function code runs in its own .NET process, separate from the Functions runtime. Microsoft's OpenTelemetry integration supports this model.
  • In-process (legacy and EOL Nov 2026) — your function code runs inside the same .NET process as the Functions runtime. Microsoft's OpenTelemetry integration does not support in-process.

If you're using the in-process model, migrate to isolated worker before following this guide. The other supported runtimes — Java, Node.js (JavaScript / TypeScript), Python, and PowerShell — do not have this limitation.

Prerequisites

You need the following before you start:

  • Region — us or eu. Pick the region your SparkLogs account lives in.
  • Agent ID — short identifier for the agent that will send these logs.
  • Agent access token — bearer credential for the agent.
  • A function app on a plan that supports custom telemetry — Consumption, Premium, Dedicated (App Service), or Flex Consumption.

View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.

Enable OpenTelemetry in the Functions host

In your function app's host.json, add "telemetryMode": "OpenTelemetry" at the root:

{
  "version": "2.0",
  "telemetryMode": "OpenTelemetry"
}

This switches the host to emit OTLP-shaped telemetry (logs and traces) for every supported language.

Point the OTLP exporter at SparkLogs

Set these application settings on the function app (Portal: Settings → Environment variables, or az functionapp config appsettings set):

| Setting | Value |
| --- | --- |
| OTEL_EXPORTER_OTLP_LOGS_ENDPOINT | https://ingest-<REGION>.engine.sparklogs.app/v1/logs |
| OTEL_EXPORTER_OTLP_LOGS_PROTOCOL | http/protobuf |
| OTEL_EXPORTER_OTLP_LOGS_HEADERS | Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN> |
| OTEL_EXPORTER_OTLP_LOGS_COMPRESSION | gzip |
| OTEL_EXPORTER_OTLP_LOGS_TIMEOUT | 25000 |

Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.
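As a sketch, the same settings can be applied in one call with the Azure CLI mentioned above. The app name and resource group here are illustrative placeholders; substitute your own, along with your region and agent credentials:

```shell
# Apply the OTLP exporter settings to the function app in a single call.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings \
    "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://ingest-us.engine.sparklogs.app/v1/logs" \
    "OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf" \
    "OTEL_EXPORTER_OTLP_LOGS_HEADERS=Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>" \
    "OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip" \
    "OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000"
```

Settings applied this way take effect after the app restarts.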

tip

If you also want Application Insights in parallel, leave APPLICATIONINSIGHTS_CONNECTION_STRING set — the host fans out to both. To send only to SparkLogs, remove that setting.
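To stop the Application Insights fan-out from the CLI, a minimal sketch (app name and resource group are placeholders):

```shell
# Remove the App Insights connection string so telemetry goes only to SparkLogs.
az functionapp config appsettings delete \
  --name my-func-app \
  --resource-group my-rg \
  --setting-names APPLICATIONINSIGHTS_CONNECTION_STRING
```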

That's enough to see basic logs flowing. If this is sufficient for your needs, you are done with setup! ✅

Additional OTel worker setup (optional)

An Azure Functions app has two processes running side-by-side:

  • The Functions host — the runtime that triggers your functions, manages scaling, captures stdout / stderr from your code, and emits startup / lifecycle messages.
  • The language worker — a separate process where your actual function code runs. (For .NET in-process this is the same process, which is partly why it isn't supported.)

With just the host.json + application-settings configuration above, the host alone ships these to SparkLogs:

  • Function-runtime events: function indexing, scale decisions, trigger invocations, errors raised by the host.
  • Anything your code writes to stdout / stderr: includes Console.WriteLine, print(), Write-Host, and unstructured logger output. The host captures these from the worker's process and forwards them.

For more complex use cases, worker-level configuration adds further useful information:

  • Structured metadata from your logger. AutoExtract already detects log levels, key/value pairs, and JSON in unstructured text, but a worker-level setup lets you attach custom fields directly.
  • Distributed-tracing context — spans, trace IDs, and parent/child relationships that are propagated from your code to the exported telemetry.
  • Structured exception details that go beyond what the stack trace string contains in stderr output.

Adding the worker-level OpenTelemetry setup below wires each runtime's native logger (MEL / SLF4J / Python logging / etc.) directly into the exported telemetry, so richer, structured information is captured at the source. Install the language's OpenTelemetry packages and wire up the OTLP exporter using the per-language snippet below; Microsoft's OpenTelemetry with Azure Functions reference is the source of truth.

Install the worker OTel packages and register the exporter:

dotnet add package Microsoft.Azure.Functions.Worker.OpenTelemetry
dotnet add package OpenTelemetry.Extensions.Hosting
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol

In Program.cs, after ConfigureFunctionsWebApplication:

using OpenTelemetry;

builder.Services.AddOpenTelemetry()
    .UseFunctionsWorkerDefaults()
    .UseOtlpExporter();

The exporter reads its endpoint, headers, and protocol from the application settings above. See the .NET OTel guide for advanced setup (Serilog, NLog, custom resource attributes).

Resource attributes (field mappings)

Azure Functions' resource detectors populate the standard OTel resource attributes that SparkLogs maps directly into searchable pivots:

| OTel attribute | Auto-populated value | SparkLogs pivot |
| --- | --- | --- |
| service.name | Function app name | service |
| cloud.provider | azure | (custom field) |
| cloud.region | Region (e.g. eastus) | (custom field) |
| cloud.resource_id | Full ARM resource ID | (custom field) |
| faas.name | The specific function name (per-invocation, on the span) | (custom field; per-span attribute) |
| faas.trigger | http / servicebus / eventhubs / etc. (per-span) | (custom field) |

service.name defaults to the function app name, which is usually correct. Override only if you need a different stable name across slots or environments:

# Application setting on the function app
OTEL_SERVICE_NAME=checkout-api

To set the app pivot (broadest grouping below organization_id — typically a product or platform name that spans several services), add service.namespace to your resource attributes. You can include other attributes in the same comma-separated list:

# Application setting on the function app
OTEL_RESOURCE_ATTRIBUTES=service.namespace=checkout-platform,deployment.environment=production,team=billing

service.namespace maps to app. deployment.environment and arbitrary custom keys (like team above) are preserved as searchable resource fields on each event. See Standard field mapping for the full list of resource attributes that drive each pivot.
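The same resource attributes can be set from the CLI rather than the portal. A hedged sketch, with placeholder app name and resource group:

```shell
# Set the app pivot plus extra resource attributes in one application setting.
# Quoting keeps the comma-separated list intact as a single value.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings "OTEL_RESOURCE_ATTRIBUTES=service.namespace=checkout-platform,deployment.environment=production,team=billing"
```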

Verify

After deploying:

  1. Invoke your function (HTTP trigger, message, scheduled, etc.).
  2. Open SparkLogs and search service:"<function-app-name>". Host-level startup logs (function indexing, scaling decisions) appear with service set to the function app name. Worker-level logs from your code appear alongside, with service matching unless you set OTEL_SERVICE_NAME differently.
  3. If you don't see telemetry, double-check the application settings are persisted (Portal often requires a Save), then restart the function app and re-invoke.
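For step 3, a quick CLI sketch; the app name, resource group, and HTTP-trigger route are placeholders for your own:

```shell
# Restart the app so newly saved settings take effect, then re-invoke a trigger.
az functionapp restart --name my-func-app --resource-group my-rg
curl -s "https://my-func-app.azurewebsites.net/api/<your-http-trigger>"
```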

Limitations and gotchas

  • The .NET in-process model is not supported — see the callout near the top of this page.
  • Azure Portal "Log streaming" is disabled whenever host.json has telemetryMode: OpenTelemetry. Microsoft hasn't built a bridge between OTel-mode telemetry and the portal's log-stream pane, so the pane stays empty. Options:
    • Use SparkLogs's live Explore view as your stream. (Recommended — your logs are already there.)
    • Keep Application Insights configured in parallel: leave APPLICATIONINSIGHTS_CONNECTION_STRING set (the host fans out telemetry to both App Insights and SparkLogs). The portal Log-streaming pane still won't work, but App Insights' Live Metrics view shows real-time logs and metrics as a close substitute.
    • For local development, run the app with func start and read the console output directly.
  • logging.applicationInsights in host.json is ignored when OTel is enabled. Use the OTel log-level filters shown in Microsoft's troubleshooting guide.
  • Distributed-tracing visualization isn't available in SparkLogs yet. SparkLogs ingests trace_id and span_id as standard fields. You can search and group logs by trace ID, and tracing data is preserved for future use, but there is no span-timeline view today. The same caveat applies to the Azure Portal's "Recent function invocation" panel: that drill-down only renders when telemetry is sent to Azure Monitor. If span visualization matters to your workflow now, keep APPLICATIONINSIGHTS_CONNECTION_STRING configured so the host fans out to both Application Insights (for the trace UI) and SparkLogs (for logs).
  • Double-counted request telemetry. If the worker also runs AspNetCoreInstrumentation (.NET), the host's request span and the worker's request span both fire. Pick one — typically remove the worker-level HTTP instrumentation.

Reference