Azure Functions
Overview
Azure Functions has built-in OpenTelemetry support — both the Functions host (stdout/stderr app logs, system telemetry from triggers, scaling, and runtime) and the language worker (your code) can emit OTLP-compliant logs and traces to any OTLP endpoint, including SparkLogs. Azure Application Insights is not required.
This page shows the minimal configuration to point a function app at SparkLogs.
Azure Functions offers two .NET execution models:
- Isolated worker (default for .NET 8+, recommended for all .NET versions) — your function code runs in its own .NET process, separate from the Functions runtime. Microsoft's OpenTelemetry integration supports this model.
- In-process (legacy and EOL Nov 2026) — your function code runs inside the same .NET process as the Functions runtime. Microsoft's OpenTelemetry integration does not support in-process.
If you're using the in-process model, migrate to isolated worker before following this guide. The other supported runtimes — Java, Node.js (JavaScript / TypeScript), Python, and PowerShell — do not have this limitation.
Prerequisites
You need the following before you start:
- Region — `us` or `eu`. Pick the region your SparkLogs account lives in.
- Agent ID — short identifier for the agent that will send these logs.
- Agent access token — bearer credential for the agent.
- A function app on a plan that supports custom telemetry — Consumption, Premium, Dedicated (App Service), or Flex Consumption.
View or create an agent in the SparkLogs app under Configure → Agents. Each agent has its own ID and access token; revoke or rotate either at any time without restarting your application.
Enable OpenTelemetry in the Functions host
In your function app's host.json, add "telemetryMode": "OpenTelemetry" at the root:
{
"version": "2.0",
"telemetryMode": "OpenTelemetry"
}
This switches the host to emit OTLP-shaped telemetry (logs and traces) for every supported language.
Point the OTLP exporter at SparkLogs
Set these application settings on the function app (Portal: Settings → Environment variables, or az functionapp config appsettings set):
| Setting | Value |
|---|---|
| `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT` | `https://ingest-<REGION>.engine.sparklogs.app/v1/logs` |
| `OTEL_EXPORTER_OTLP_LOGS_PROTOCOL` | `http/protobuf` |
| `OTEL_EXPORTER_OTLP_LOGS_HEADERS` | `Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>` |
| `OTEL_EXPORTER_OTLP_LOGS_COMPRESSION` | `gzip` |
| `OTEL_EXPORTER_OTLP_LOGS_TIMEOUT` | `25000` |
Replace <REGION> (us or eu), <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with the values from Configure → Agents.
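These settings can also be applied in one CLI call with `az functionapp config appsettings set` (the command mentioned above). A sketch using hypothetical app and resource-group names (`my-func-app`, `my-rg`); substitute your own names plus the region and agent values:

```shell
# Hypothetical app and resource-group names; substitute your own values.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings \
    "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://ingest-<REGION>.engine.sparklogs.app/v1/logs" \
    "OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf" \
    "OTEL_EXPORTER_OTLP_LOGS_HEADERS=Authorization=Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>" \
    "OTEL_EXPORTER_OTLP_LOGS_COMPRESSION=gzip" \
    "OTEL_EXPORTER_OTLP_LOGS_TIMEOUT=25000"
```

Quoting each `KEY=VALUE` pair keeps the space in the `Authorization` header value from splitting the argument.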
If you also want Application Insights in parallel, leave APPLICATIONINSIGHTS_CONNECTION_STRING set — the host fans out to both. To send only to SparkLogs, remove that setting.
That's enough to see basic logs flowing. If this is sufficient for your needs, you are done with setup! ✅
Additional OTel worker setup (optional)
An Azure Functions app has two processes running side-by-side:
- The Functions host — the runtime that triggers your functions, manages scaling, captures `stdout`/`stderr` from your code, and emits startup / lifecycle messages.
- The language worker — a separate process where your actual function code runs. (For .NET in-process this is the same process, which is partly why it isn't supported.)
With just the host.json + application-settings configuration above, the host alone ships these to SparkLogs:
- Function-runtime events: function indexing, scale decisions, trigger invocations, errors raised by the host.
- Anything your code writes to `stdout`/`stderr`: includes `Console.WriteLine`, `print()`, `Write-Host`, and unstructured logger output. The host captures these from the worker's process and forwards them.
For more complex use cases, worker-level configuration adds further useful information:
- Structured metadata from your logger. AutoExtract already detects log levels, key/value pairs, and JSON inside unstructured text, but a worker-level setup lets you attach custom fields directly.
- Distributed-tracing context — spans, trace IDs, and parent/child relationships that are propagated from your code to the exported telemetry.
- Structured exception details that go beyond what the stack trace string contains in stderr output.
The worker-level OpenTelemetry setup below turns each runtime's native logger (MEL / SLF4J / Python logging / etc.) into a first-class source of telemetry, so richer information is captured. Install the language's OpenTelemetry packages and wire up the OTLP exporter using the per-language snippet below; Microsoft's OpenTelemetry with Azure Functions reference is the source of truth.
- .NET (isolated workers)
- Java
- Node.js
- Python
- PowerShell
Install the worker OTel packages and register the exporter:
dotnet add package Microsoft.Azure.Functions.Worker.OpenTelemetry
dotnet add package OpenTelemetry.Extensions.Hosting
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
In Program.cs, after ConfigureFunctionsWebApplication:
using OpenTelemetry;
builder.Services.AddOpenTelemetry()
.UseFunctionsWorkerDefaults()
.UseOtlpExporter();
The exporter reads its endpoint, headers, and protocol from the application settings above. See the .NET OTel guide for advanced setup (Serilog, NLog, custom resource attributes).
Add azure-functions-java-opentelemetry to your project (Maven shown):
<dependency>
<groupId>com.microsoft.azure.functions</groupId>
<artifactId>azure-functions-java-opentelemetry</artifactId>
<version>1.0.0</version>
</dependency>
The Java worker autodiscovers the middleware — no Program.java change required. Resource attributes come from the resource detectors Microsoft ships in the package. See the Java OTel guide for app-level enrichment.
Install the worker OTel packages:
npm install @opentelemetry/api @opentelemetry/auto-instrumentations-node \
@opentelemetry/exporter-logs-otlp-http \
@azure/functions-opentelemetry-instrumentation
Create src/index.js and register the providers (see Microsoft's Node.js example). Update package.json main to include the new entry point. See the Node.js OTel guide for advanced setup.
Add PYTHON_ENABLE_OPENTELEMETRY=true to your application settings (this enables the Python worker's OTel stream and prevents duplicate host-level entries).
In requirements.txt:
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp
opentelemetry-instrumentation-logging
Wire the OTLP log exporter to Python's logging module in function_app.py (see Microsoft's Python example). See the Python OTel guide for structlog / loguru integration.
Add OTEL_FUNCTIONS_WORKER_ENABLED=true to your application settings.
In an app-level Modules folder at the root of your project, run:
Save-Module -Name AzureFunctions.PowerShell.OpenTelemetry.SDK -Path .
In profile.ps1:
Import-Module AzureFunctions.PowerShell.OpenTelemetry.SDK -Force -ErrorAction Stop
Initialize-FunctionsOpenTelemetry
The Flex Consumption plan does not support requirements.psd1-managed dependencies, which is why the module is committed to the app's Modules folder.
Resource attributes (field mappings)
Azure Functions' resource detectors populate the standard OTel resource attributes that SparkLogs maps directly into searchable pivots:
| OTel attribute | Auto-populated value | SparkLogs pivot |
|---|---|---|
| `service.name` | Function app name | `service` |
| `cloud.provider` | `azure` | (custom field) |
| `cloud.region` | Region (e.g. `eastus`) | (custom field) |
| `cloud.resource_id` | Full ARM resource ID | (custom field) |
| `faas.name` | The specific function name (per-invocation, on the span) | (custom field; per-span attribute) |
| `faas.trigger` | `http` / `servicebus` / `eventhubs` / etc. (per-span) | (custom field) |
service.name defaults to the function app name, which is usually correct. Override only if you need a different stable name across slots or environments:
# Application setting on the function app
OTEL_SERVICE_NAME=checkout-api
To set the app pivot (broadest grouping below organization_id — typically a product or platform name that spans several services), add service.namespace to your resource attributes. You can include other attributes in the same comma-separated list:
# Application setting on the function app
OTEL_RESOURCE_ATTRIBUTES=service.namespace=checkout-platform,deployment.environment=production,team=billing
`service.namespace` maps to the `app` pivot. `deployment.environment` and arbitrary custom keys (like `team` above) are preserved as searchable resource fields on each event. See Standard field mapping for the full list of resource attributes that drive each pivot.
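Both settings can be applied from the CLI as well. A sketch with hypothetical app and resource-group names (`my-func-app`, `my-rg`):

```shell
# Hypothetical app and resource-group names; quote the whole KEY=VALUE pair
# so the comma-separated attribute list reaches the setting intact.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings \
    "OTEL_SERVICE_NAME=checkout-api" \
    "OTEL_RESOURCE_ATTRIBUTES=service.namespace=checkout-platform,deployment.environment=production,team=billing"
```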
Verify
After deploying:
- Invoke your function (HTTP trigger, message, scheduled, etc.).
- Open SparkLogs and search `service:"<function-app-name>"`. Host-level startup logs (function indexing, scaling decisions) appear with `service` set to the function app name. Worker-level logs from your code appear alongside, with `service` matching unless you set `OTEL_SERVICE_NAME` differently.
- If you don't see telemetry, double-check that the application settings are persisted (the Portal often requires a Save), then restart the function app and re-invoke.
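You can also smoke-test the endpoint and credentials directly from a terminal, without redeploying. A sketch that posts one minimal OTLP log record; it assumes the ingest endpoint also accepts the JSON encoding of OTLP/HTTP (the exporter settings above use `http/protobuf`, so if the JSON encoding is rejected, verify through the deployed app instead):

```shell
# Substitute <REGION>, <AGENT-ID>, and <AGENT-ACCESS-TOKEN> with your values.
NOW_NS="$(($(date +%s) * 1000000000))"
curl -sS -X POST "https://ingest-<REGION>.engine.sparklogs.app/v1/logs" \
  -H "Authorization: Bearer <AGENT-ID>:<AGENT-ACCESS-TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [{
      "resource": { "attributes": [
        { "key": "service.name", "value": { "stringValue": "smoke-test" } }
      ] },
      "scopeLogs": [{ "logRecords": [{
        "timeUnixNano": "'"$NOW_NS"'",
        "severityText": "INFO",
        "body": { "stringValue": "hello from curl" }
      }] }] }]
  }'
```

A 2xx response plus a matching event under `service:"smoke-test"` confirms the region, agent ID, and token are correct.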
Limitations and gotchas
- The in-process .NET (C#) model is not supported — see the callout near the top of this page.
- Azure Portal "Log streaming" is disabled whenever `host.json` has `telemetryMode: OpenTelemetry`. Microsoft hasn't built a bridge between OTel-mode telemetry and the portal's log-stream pane, so the pane stays empty. Options:
  - Use SparkLogs's live Explore view as your stream. (Recommended — your logs are already there.)
  - Keep Application Insights configured in parallel: leave `APPLICATIONINSIGHTS_CONNECTION_STRING` set (the host fans out telemetry to both App Insights and SparkLogs). The portal Log-streaming pane still won't work, but App Insights's Live Metrics view shows real-time logs and metrics as a close substitute.
  - For local development, run the app with `func start` and read the console output directly.
- `logging.applicationInsights` in `host.json` is ignored when OTel is enabled. Use the OTel log-level filters shown in Microsoft's troubleshooting guide.
- Distributed-tracing visualization isn't available in SparkLogs yet. SparkLogs ingests `trace_id` and `span_id` as standard fields. You can search and group logs by trace ID, and tracing data is preserved for future use, but there is no span-timeline view today. The same caveat applies to the Azure Portal's "Recent function invocation" panel: that drill-down only renders when telemetry is sent to Azure Monitor. If span visualization matters to your workflow now, keep `APPLICATIONINSIGHTS_CONNECTION_STRING` configured so the host fans out to both Application Insights (for the trace UI) and SparkLogs (for logs).
- Double-counted request telemetry. If the worker also runs `AspNetCoreInstrumentation` (.NET), the host's request span and the worker's request span both fire. Pick one — typically remove the worker-level HTTP instrumentation.
Reference
- Use OpenTelemetry with Azure Functions (Microsoft)
- Functions OpenTelemetry tutorial (Microsoft, end-to-end with `azd`)
- Language-specific OTel guides for advanced in-code setup