Langfuse exporter for Mastra observability
Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference
Published: 2026-05-10 · Source: crawler_authoritative
Situation
Guide for integrating the Langfuse observability platform with Mastra to export traces, monitor LLM performance, and link prompts for LLM applications.
Insight
The @mastra/langfuse package provides a LangfuseExporter that sends OpenTelemetry traces to Langfuse for observability of LLM applications. It supports zero-config setup via environment variables (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_BASE_URL) or explicit constructor configuration with credentials. The exporter accepts options including serviceName, baseUrl (defaults to https://cloud.langfuse.com), environment, release, logLevel (debug|info|warn|error), and the batch-tuning parameters flushAt (maximum spans per OTEL export batch, default 500) and flushInterval (maximum seconds between flushes, default 20).

Two operational modes are supported: realtime mode (realtime: true) flushes traces immediately for debugging, while batch mode (realtime: false, the default) batches traces for production performance. For traces with AGENT_RUN root spans, the exporter automatically sets langfuse.trace.name to the agent name, along with langfuse.trace.metadata.agentId and langfuse.trace.metadata.agentName. Similarly, WORKFLOW_RUN root spans set workflowId and workflowName metadata. Evaluators can be scoped to specific agents in Langfuse via trace-name or agentId metadata filters. A custom traceName set via mastra.metadata.traceName takes precedence over the default agent name.

Prompt linking works with Langfuse Prompt Management through the withLangfusePrompt helper, which accepts name and version fields (both required for Langfuse v5) and sets them on MODEL_GENERATION spans to link generations to stored prompts.
Action
Install via npm install @mastra/langfuse@latest (or the pnpm add, yarn add, or bun add equivalent). Set the environment variables LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_BASE_URL. Configure the exporter in the Mastra Observability config with new LangfuseExporter() for zero-config setup, or pass explicit options: { publicKey, secretKey, baseUrl, environment, release, realtime, flushAt, flushInterval, logLevel }. To scope evaluators per agent, filter in Langfuse by trace name equal to the agent name or by metadata agentId equal to the agent id. For prompt linking, use the withLangfusePrompt helper with buildTracingOptions on the Agent's defaultGenerateOptions, passing { name: prompt.name, version: prompt.version }. High-volume spans can be excluded at the observability level using the excludeSpanTypes option with SpanType enum values such as SpanType.MODEL_CHUNK.
Result
Traces are exported to the Langfuse dashboard, showing model performance, token usage, and conversation flows. Agent and workflow traces are automatically scoped with metadata for evaluator filtering. Prompt generations are linked to versioned prompts in Langfuse Prompt Management for version tracking and metrics.
Applicability
Requires the @mastra/langfuse package. Compatible with the Mastra core and observability modules. Langfuse v5 requires both name and version fields for prompt linking.
Original content
Langfuse exporter
Langfuse is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.
Installation
npm:
npm install @mastra/langfuse@latest
pnpm:
pnpm add @mastra/langfuse@latest
Yarn:
yarn add @mastra/langfuse@latest
Bun:
bun add @mastra/langfuse@latest
Configuration
Prerequisites
- Langfuse Account: Sign up at cloud.langfuse.com or deploy self-hosted
- API Keys: Create public/secret key pair in Langfuse Settings → API Keys
- Environment Variables: Set your credentials
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com # Or your self-hosted URL
Zero-Config Setup
With environment variables set, use the exporter with no configuration:
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'
export const mastra = new Mastra({
observability: new Observability({
configs: {
langfuse: {
serviceName: 'my-service',
exporters: [new LangfuseExporter()],
},
},
}),
})
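With the exporter registered, agent and workflow runs are traced automatically. A minimal usage sketch, assuming an agent is registered on the Mastra instance above (the weatherAgent, its instructions, and the question are hypothetical; getAgent and generate are standard Mastra calls):
import { Agent } from '@mastra/core/agent'
// Hypothetical agent; pass it to the Mastra constructor above via
// `agents: { weatherAgent }` so it can be looked up by key
const weatherAgent = new Agent({
  name: 'weather-agent',
  instructions: 'Answer weather questions.',
  model: 'openai/gpt-5.4',
})
// Each call produces a trace (named 'weather-agent') in Langfuse
await mastra.getAgent('weatherAgent').generate('What is the weather in Paris?')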
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'
export const mastra = new Mastra({
observability: new Observability({
configs: {
langfuse: {
serviceName: 'my-service',
exporters: [
new LangfuseExporter({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
baseUrl: process.env.LANGFUSE_BASE_URL,
environment: process.env.NODE_ENV,
release: process.env.GIT_COMMIT,
}),
],
},
},
}),
})
Configuration options
Realtime vs Batch Mode
The Langfuse exporter supports two modes for sending traces:
Realtime Mode (Development)
Traces appear immediately in Langfuse dashboard, ideal for debugging:
new LangfuseExporter({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
realtime: true, // Flush after each event
})
Batch Mode (Production)
Better performance with automatic batching:
new LangfuseExporter({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
realtime: false, // Default - batch traces
})
Batch Tuning for High-Volume Traces
For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:
new LangfuseExporter({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
flushAt: 500, // Maximum spans per OTEL export batch
flushInterval: 20, // Maximum seconds between flushes
})
To suppress high-volume span types entirely (for example, MODEL_CHUNK spans from streamed responses), use the observability-level excludeSpanTypes option rather than configuring the exporter:
import { SpanType } from '@mastra/core/observability'
import { Observability } from '@mastra/observability'
import { LangfuseExporter } from '@mastra/langfuse'
new Observability({
configs: {
langfuse: {
serviceName: 'my-service',
exporters: [new LangfuseExporter()],
excludeSpanTypes: [SpanType.MODEL_CHUNK],
},
},
})
Complete Configuration
new LangfuseExporter({
// Required credentials
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
// Optional settings
baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
flushAt: 500, // Maximum spans per OTEL export batch
flushInterval: 20, // Maximum seconds between flushes
logLevel: 'info', // Diagnostic logging: debug | info | warn | error
// Langfuse-specific settings
environment: process.env.NODE_ENV, // Shows in Langfuse UI for filtering
release: process.env.GIT_COMMIT, // Git commit hash for version tracking
})
Scoping evaluators per agent
Langfuse evaluators (such as LLM-as-a-Judge) can be filtered to run only against specific traces. The Mastra Langfuse exporter automatically scopes each trace to the agent or workflow that started it, so trace-level filters resolve to the right runs.
For every trace whose root span is an AGENT_RUN, the exporter sets:
- langfuse.trace.name: the agent name (or id, when no name is set)
- langfuse.trace.metadata.agentId: the agent id
- langfuse.trace.metadata.agentName: the agent name
The same applies to WORKFLOW_RUN root spans, which set langfuse.trace.metadata.workflowId and langfuse.trace.metadata.workflowName.
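A workflow's id and name feed this metadata the same way. A minimal sketch, assuming Mastra's standard createWorkflow/createStep API (the workflow, step, and schemas here are hypothetical):
import { createWorkflow, createStep } from '@mastra/core/workflows'
import { z } from 'zod'
// Hypothetical step; inputData carries the validated workflow input
const fetchForecast = createStep({
  id: 'fetch-forecast',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ forecast: z.string() }),
  execute: async ({ inputData }) => ({ forecast: `Sunny in ${inputData.city}` }),
})
// Runs of this workflow would produce traces with
// langfuse.trace.metadata.workflowId = 'weather-workflow'
export const weatherWorkflow = createWorkflow({
  id: 'weather-workflow',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ forecast: z.string() }),
})
  .then(fetchForecast)
  .commit()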
To scope an evaluator to a specific agent, configure either filter in Langfuse:
- Trace name: equals the agent name (for example, weather-agent).
- Metadata: agentId equals the agent id.
The trace name dropdown in Langfuse evaluator filters lists every distinct value seen, so each agent appears as its own entry once it has produced at least one trace.
If you set a custom traceName via mastra.metadata.traceName, your value takes precedence over the default agent name.
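As a rough sketch of that override, reusing the hypothetical weatherAgent from the zero-config example and assuming per-call tracingOptions accepts a metadata object whose traceName key maps to mastra.metadata.traceName (this page does not show the exact call shape):
// Assumption: tracingOptions.metadata.traceName overrides the default
// trace name ('weather-agent' here)
await mastra.getAgent('weatherAgent').generate('What is the weather in Hanoi?', {
  tracingOptions: {
    metadata: { traceName: 'weather-smoke-test' },
  },
})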
Prompt linking
You can link LLM generations to prompts stored in Langfuse Prompt Management. This enables version tracking and metrics for your prompts.
Using the Helper (Recommended)
Use withLangfusePrompt with buildTracingOptions for the cleanest API:
import { Agent } from '@mastra/core/agent'
import { buildTracingOptions } from '@mastra/observability'
import { LangfuseExporter, withLangfusePrompt } from '@mastra/langfuse'
const exporter = new LangfuseExporter()
// Fetch the prompt from Langfuse Prompt Management via the client
const prompt = await exporter.client.prompt.get('customer-support', { type: 'text' })
export const supportAgent = new Agent({
name: 'support-agent',
instructions: prompt.compile(), // Use the prompt text from Langfuse
model: 'openai/gpt-5.4',
defaultGenerateOptions: {
tracingOptions: buildTracingOptions(
withLangfusePrompt({ name: prompt.name, version: prompt.version }),
),
},
})
The withLangfusePrompt helper accepts name and version fields for prompt linking. Langfuse v5 requires both fields.
Manual Fields
You can also pass manual fields if you’re not using the Langfuse SDK:
const tracingOptions = buildTracingOptions(withLangfusePrompt({ name: 'my-prompt', version: 1 }))
Prompt Object Fields
The prompt object requires both name and version:
| Field | Type | Description |
|---|---|---|
| name | string | The prompt name in Langfuse |
| version | number | The prompt version number |
When set on a MODEL_GENERATION span, the Langfuse exporter automatically links the generation to the corresponding prompt.
Links
- Platform: Dev Framework · Mastra
- Source: https://mastra.ai/docs/observability/tracing/exporters/langfuse