OpenTelemetry exporter

Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference

Published: 2026-05-10 · Source: crawler_authoritative

Situation

Mastra SDK guide for configuring the OpenTelemetry exporter to send traces and logs to OTEL-compatible observability platforms like Datadog, New Relic, SigNoz, MLflow, Dash0, Traceloop, and Laminar.

Insight

The @mastra/otel-exporter package sends traces and logs to any OTEL-compatible observability platform using the standardized OpenTelemetry Semantic Conventions for GenAI v1.38.0. It supports multiple provider protocols: HTTP/Protobuf (SigNoz, New Relic, Laminar, MLflow), gRPC (Dash0, Datadog), and HTTP/JSON (Traceloop).

The exporter automatically derives log endpoints from trace endpoints by replacing /v1/traces with /v1/logs. It uses BatchSpanProcessor for traces and BatchLogRecordProcessor for logs, with both signals enabled by default. Logs carrying traceId and spanId are correlated with traces via the OTEL-native trace context and the mastra.traceId/mastra.spanId attributes.

Span naming follows patterns such as 'chat {model}' for LLM operations, 'execute_tool {tool_name}' for tools, 'invoke_agent {agent_id}' for agents, and 'invoke_workflow {workflow_id}' for workflows. Key attributes include gen_ai.operation.name, gen_ai.provider.name, gen_ai.request.model, gen_ai.input.messages, gen_ai.output.messages, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, gen_ai.request.temperature, and gen_ai.response.finish_reasons.

Configuration options include provider (dash0, signoz, newrelic, traceloop, laminar, custom), a per-signal toggle, timeout (default 30000 ms), batchSize (default 100), and logLevel. The custom provider accepts an endpoint, a protocol ('http/json', 'http/protobuf', or 'grpc'), and custom headers.

Action

Install the base exporter package: npm install @mastra/otel-exporter@latest. Then install the protocol-specific package for your provider: HTTP/Protobuf requires @opentelemetry/exporter-trace-otlp-proto; gRPC requires @opentelemetry/exporter-trace-otlp-grpc plus @grpc/grpc-js; HTTP/JSON requires @opentelemetry/exporter-trace-otlp-http.

Configure either through environment variables (zero-config) or through explicit provider configuration in the OtelExporter constructor. For the environment route, set provider-specific variables such as DASH0_API_KEY/DASH0_ENDPOINT for Dash0, SIGNOZ_API_KEY for SigNoz, NEW_RELIC_LICENSE_KEY for New Relic, TRACELOOP_API_KEY for Traceloop, or LMNR_PROJECT_API_KEY for Laminar. For explicit config, instantiate OtelExporter with a provider object whose key is the provider name and whose value is the config object.

For Datadog over gRPC specifically, explicitly import @grpc/grpc-js and @opentelemetry/exporter-trace-otlp-grpc at the top of the file, and add a bundler.externals entry for '@grpc/grpc-js'. Enable log export by installing the matching @opentelemetry/exporter-logs-otlp-* package for your protocol. Disable a signal with the signals option, e.g. signals: { traces: true, logs: false }.
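Putting these steps together, a minimal explicit configuration might look like the following sketch for SigNoz (assuming SIGNOZ_API_KEY is set and @opentelemetry/exporter-trace-otlp-proto is installed); it mirrors the provider examples reproduced in the original content below:

```typescript
// Sketch: explicit SigNoz configuration for the OTEL exporter.
// Assumes SIGNOZ_API_KEY is set and the HTTP/Protobuf trace exporter
// package (@opentelemetry/exporter-trace-otlp-proto) is installed.
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { OtelExporter } from '@mastra/otel-exporter'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [
          new OtelExporter({
            provider: {
              signoz: {
                apiKey: process.env.SIGNOZ_API_KEY,
                region: 'us', // 'us' | 'eu' | 'in'
              },
            },
          }),
        ],
      },
    },
  }),
})
```

Swapping the signoz key for dash0, newrelic, traceloop, laminar, or custom (with the matching protocol package installed) follows the same shape.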

Result

Traces and logs are exported to the configured OTEL-compatible platform using standardized OpenTelemetry Semantic Conventions for GenAI v1.38.0. Spans are named following convention patterns (chat {model}, execute_tool {tool_name}, etc.) and include gen_ai.* attributes for AI observability. Logs are correlated with traces automatically.

Applicability

Requires @mastra/core and @mastra/observability. Datadog integration requires gRPC bundler configuration. MLflow requires custom provider with x-mlflow-experiment-id header for experiment routing.


Original content

OpenTelemetry exporter

The OpenTelemetry (OTEL) exporter sends your traces and logs to any OTEL-compatible observability platform using standardized OpenTelemetry Semantic Conventions for GenAI. This ensures broad compatibility with platforms like Datadog, New Relic, SigNoz, MLflow, Dash0, Traceloop, Laminar, and more.

Looking for bidirectional OTEL integration? If you have existing OpenTelemetry instrumentation and want Mastra traces to inherit context from active OTEL spans, see the OpenTelemetry Bridge instead.

Installation

Each provider requires specific protocol packages. Install the base exporter plus the protocol package for your provider:

For HTTP/Protobuf Providers (SigNoz, New Relic, Laminar, MLflow)

npm:

npm install @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-proto

pnpm:

pnpm add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-proto

Yarn:

yarn add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-proto

Bun:

bun add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-proto

For gRPC Providers (Dash0, Datadog)

npm:

npm install @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

pnpm:

pnpm add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

Yarn:

yarn add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

Bun:

bun add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

For HTTP/JSON Providers (Traceloop)

npm:

npm install @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-http

pnpm:

pnpm add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-http

Yarn:

yarn add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-http

Bun:

bun add @mastra/otel-exporter@latest @opentelemetry/exporter-trace-otlp-http

Environment variables

All providers support zero-config setup via environment variables. Set the appropriate variables and the exporter will automatically use them:

  • Dash0: DASH0_API_KEY (required), DASH0_ENDPOINT (required), DASH0_DATASET (optional)
  • SigNoz: SIGNOZ_API_KEY (required), SIGNOZ_REGION (optional), SIGNOZ_ENDPOINT (optional)
  • New Relic: NEW_RELIC_LICENSE_KEY (required), NEW_RELIC_ENDPOINT (optional)
  • Traceloop: TRACELOOP_API_KEY (required), TRACELOOP_DESTINATION_ID (optional), TRACELOOP_ENDPOINT (optional)
  • Laminar: LMNR_PROJECT_API_KEY (required), LAMINAR_ENDPOINT (optional)

Provider configurations

MLflow

MLflow supports native Mastra tracing through its OTLP endpoint at /v1/traces. Use the custom provider with HTTP/Protobuf and include the experiment header so traces land in the correct MLflow experiment:

new OtelExporter({
  provider: {
    custom: {
      endpoint: `${process.env.MLFLOW_TRACKING_URI}/v1/traces`,
      protocol: 'http/protobuf',
      headers: {
        'x-mlflow-experiment-id': process.env.MLFLOW_EXPERIMENT_ID,
      },
    },
  },
})

Dash0

Dash0 provides real-time observability with automatic insights.

Zero-Config Setup

Set environment variables and use the exporter with an empty config:

# Required
DASH0_API_KEY=your-api-key
DASH0_ENDPOINT=ingress.us-west-2.aws.dash0.com:4317
 
# Optional
DASH0_DATASET=production

import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { OtelExporter } from '@mastra/otel-exporter'
 
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [new OtelExporter({ provider: { dash0: {} } })],
      },
    },
  }),
})

Explicit Configuration

import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { OtelExporter } from '@mastra/otel-exporter'
 
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      otel: {
        serviceName: 'my-service',
        exporters: [
          new OtelExporter({
            provider: {
              dash0: {
                apiKey: process.env.DASH0_API_KEY,
                endpoint: process.env.DASH0_ENDPOINT, // e.g., 'ingress.us-west-2.aws.dash0.com:4317'
                dataset: 'production', // Optional dataset name
              },
            },
            resourceAttributes: {
              // Optional OpenTelemetry Resource Attributes for the trace
              ['deployment.environment']: 'dev',
            },
          }),
        ],
      },
    },
  }),
})

Info: Get your Dash0 endpoint from your dashboard. It should be in the format ingress.{region}.aws.dash0.com:4317.

SigNoz

SigNoz is an open-source APM alternative with built-in Tracing support.

Zero-Config Setup

# Required
SIGNOZ_API_KEY=your-api-key
 
# Optional
SIGNOZ_REGION=us  # 'us' | 'eu' | 'in'
SIGNOZ_ENDPOINT=https://my-signoz.example.com  # For self-hosted

new OtelExporter({ provider: { signoz: {} } })

Explicit Configuration

new OtelExporter({
  provider: {
    signoz: {
      apiKey: process.env.SIGNOZ_API_KEY,
      region: 'us', // 'us' | 'eu' | 'in'
      // endpoint: 'https://my-signoz.example.com', // For self-hosted
    },
  },
})

New Relic

New Relic provides comprehensive observability with AI monitoring capabilities.

Zero-Config Setup

# Required
NEW_RELIC_LICENSE_KEY=your-license-key
 
# Optional
NEW_RELIC_ENDPOINT=https://otlp.eu01.nr-data.net  # For EU region

new OtelExporter({ provider: { newrelic: {} } })

Explicit Configuration

new OtelExporter({
  provider: {
    newrelic: {
      apiKey: process.env.NEW_RELIC_LICENSE_KEY,
      // endpoint: 'https://otlp.eu01.nr-data.net', // For EU region
    },
  },
})

Traceloop

Traceloop specializes in LLM observability with automatic prompt tracking.

Zero-Config Setup

# Required
TRACELOOP_API_KEY=your-api-key
 
# Optional
TRACELOOP_DESTINATION_ID=my-destination
TRACELOOP_ENDPOINT=https://custom.traceloop.com

new OtelExporter({ provider: { traceloop: {} } })

Explicit Configuration

new OtelExporter({
  provider: {
    traceloop: {
      apiKey: process.env.TRACELOOP_API_KEY,
      destinationId: 'my-destination', // Optional
    },
  },
})

Laminar

Laminar provides specialized LLM observability and analytics.

Zero-Config Setup

# Required
LMNR_PROJECT_API_KEY=your-api-key
 
# Optional
LAMINAR_ENDPOINT=https://api.lmnr.ai/v1/traces

new OtelExporter({ provider: { laminar: {} } })

Explicit Configuration

new OtelExporter({
  provider: {
    laminar: {
      apiKey: process.env.LMNR_PROJECT_API_KEY,
    },
  },
})

Laminar-Native Exporter: For Laminar-specific features like native span paths, metadata, and tags rendering in the Laminar dashboard, consider using the dedicated @mastra/laminar exporter instead. It provides optimized integration with Laminar’s platform.

Datadog

Datadog APM provides application performance monitoring with distributed tracing. To send traces to Datadog via OTLP, you need the Datadog Agent running with OTLP ingestion enabled.

Datadog uses gRPC for OTLP ingestion, which requires explicit imports and bundler configuration to work correctly:

// Explicitly import gRPC dependencies for the bundler
import '@grpc/grpc-js'
import '@opentelemetry/exporter-trace-otlp-grpc'
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { OtelExporter, type ExportProtocol } from '@mastra/otel-exporter'
 
export const mastra = new Mastra({
  // Add grpc-js to externals so it's handled at runtime
  bundler: {
    externals: ['@grpc/grpc-js'],
  },
  observability: new Observability({
    configs: {
      default: {
        serviceName: 'my-service',
        exporters: [
          new OtelExporter({
            provider: {
              custom: {
                endpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4317',
                protocol: (process.env.OTEL_EXPORTER_OTLP_PROTOCOL || 'grpc') as ExportProtocol,
                headers: {},
              },
            },
          }),
        ],
      },
    },
  }),
})

Info: The Datadog Agent must be configured with OTLP ingestion enabled. Add the following to your datadog.yaml:

otlp_config:
  receiver:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

The default OTLP endpoint is http://localhost:4317 when running the Datadog Agent locally.

Warning: The explicit imports of @grpc/grpc-js and @opentelemetry/exporter-trace-otlp-grpc at the top of the file, along with the bundler.externals configuration, are required for the gRPC transport to work correctly. Without these, you may encounter connection issues.

Datadog-Native Exporter: For Datadog-specific features like automatic span type mapping, LLM span categorization, and simplified setup without gRPC configuration, consider using the dedicated @mastra/datadog exporter instead. It provides optimized integration with Datadog’s APM platform.

Custom/Generic OTEL Endpoints

For other OTEL-compatible platforms or custom collectors:

new OtelExporter({
  provider: {
    custom: {
      endpoint: 'https://your-collector.example.com/v1/traces',
      protocol: 'http/protobuf', // 'http/json' | 'http/protobuf' | 'grpc'
      headers: {
        'x-api-key': process.env.API_KEY,
      },
    },
  },
})

Signals

The exporter sends two OpenTelemetry signals:

  • Traces: Mastra spans, exported via BatchSpanProcessor.
  • Logs: Mastra log events, exported via BatchLogRecordProcessor. Logs that carry traceId and spanId are correlated with traces using both the OTEL log record’s native trace context and mastra.traceId / mastra.spanId attributes, so backends like Datadog, Grafana, and Honeycomb can join logs to traces automatically.

Both signals are enabled by default and share the same provider configuration. The log endpoint is derived from the trace endpoint by replacing the /v1/traces suffix with /v1/logs.
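The derivation rule can be sketched as a small helper (deriveLogsEndpoint is a hypothetical function for illustration, not an exported API of @mastra/otel-exporter):

```typescript
// Illustration of the documented rule: the log endpoint is the trace
// endpoint with its /v1/traces suffix replaced by /v1/logs.
// deriveLogsEndpoint is a hypothetical helper, not part of the package.
function deriveLogsEndpoint(traceEndpoint: string): string {
  return traceEndpoint.replace(/\/v1\/traces$/, '/v1/logs')
}

console.log(deriveLogsEndpoint('https://your-collector.example.com/v1/traces'))
// → https://your-collector.example.com/v1/logs
```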

To disable a signal, set the signals option:

new OtelExporter({
  provider: {
    /* ... */
  },
  signals: {
    traces: true, // default
    logs: false, // disable log export
  },
})

Log export requires installing the matching OTLP log exporter package for your protocol:

npm:

# HTTP/JSON
npm install @opentelemetry/exporter-logs-otlp-http
# HTTP/Protobuf
npm install @opentelemetry/exporter-logs-otlp-proto
# gRPC
npm install @opentelemetry/exporter-logs-otlp-grpc @grpc/grpc-js

pnpm:

# HTTP/JSON
pnpm add @opentelemetry/exporter-logs-otlp-http
# HTTP/Protobuf
pnpm add @opentelemetry/exporter-logs-otlp-proto
# gRPC
pnpm add @opentelemetry/exporter-logs-otlp-grpc @grpc/grpc-js

Yarn:

# HTTP/JSON
yarn add @opentelemetry/exporter-logs-otlp-http
# HTTP/Protobuf
yarn add @opentelemetry/exporter-logs-otlp-proto
# gRPC
yarn add @opentelemetry/exporter-logs-otlp-grpc @grpc/grpc-js

Bun:

# HTTP/JSON
bun add @opentelemetry/exporter-logs-otlp-http
# HTTP/Protobuf
bun add @opentelemetry/exporter-logs-otlp-proto
# gRPC
bun add @opentelemetry/exporter-logs-otlp-grpc @grpc/grpc-js

If the matching log exporter package is not installed, log export is silently disabled and traces continue to work.
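This fallback can be sketched roughly as follows; resolveLogExporter and the simulated registry are illustrations of the documented behavior, not code from @mastra/otel-exporter:

```typescript
// Hypothetical sketch of the documented fallback: if the matching OTLP log
// exporter package cannot be loaded, log export is skipped while trace
// export keeps working.
type Loader = (pkg: string) => unknown

function resolveLogExporter(load: Loader, pkg: string): unknown | null {
  try {
    return load(pkg)
  } catch {
    return null // silently disable log export; traces are unaffected
  }
}

// Simulated module registry: only the trace exporter is "installed".
const installed: Record<string, unknown> = {
  '@opentelemetry/exporter-trace-otlp-proto': { name: 'trace-exporter' },
}
const load: Loader = (pkg) => {
  if (!(pkg in installed)) throw new Error(`Cannot find module '${pkg}'`)
  return installed[pkg]
}

const logExporter = resolveLogExporter(load, '@opentelemetry/exporter-logs-otlp-proto')
console.log(logExporter === null ? 'log export disabled' : 'log export enabled')
// → log export disabled (traces continue to export)
```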

Configuration options

Complete Configuration

new OtelExporter({
  // Provider configuration (required)
  provider: {
    // Use one of: dash0, signoz, newrelic, traceloop, laminar, custom
  },
 
  // Per-signal toggles. Both default to true.
  signals: {
    traces: true,
    logs: true,
  },
 
  // Export configuration
  timeout: 30000, // Export timeout in milliseconds
  batchSize: 100, // Number of spans/logs per batch
 
  // Debug options
  logLevel: 'info', // 'debug' | 'info' | 'warn' | 'error'
})

OpenTelemetry semantic conventions

The exporter follows OpenTelemetry Semantic Conventions for GenAI v1.38.0, ensuring compatibility with observability platforms:

Span Naming

  • LLM Operations: chat {model}
  • Tool Execution: execute_tool {tool_name}
  • Agent Runs: invoke_agent {agent_id}
  • Workflow Runs: invoke_workflow {workflow_id}

Key Attributes

  • gen_ai.operation.name - Operation type (chat, tool.execute, etc.)
  • gen_ai.provider.name - AI provider (openai, anthropic, etc.)
  • gen_ai.request.model - Model identifier
  • gen_ai.input.messages - Chat history provided to the model
  • gen_ai.output.messages - Messages returned by the model
  • gen_ai.usage.input_tokens - Number of input tokens
  • gen_ai.usage.output_tokens - Number of output tokens
  • gen_ai.request.temperature - Sampling temperature
  • gen_ai.response.finish_reasons - Completion reasons
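As an illustration of these conventions, the attributes on a single chat span might look like the following sketch (the model name and values are hypothetical):

```typescript
// Hypothetical gen_ai.* attributes for one chat span, following the
// conventions listed above; actual values come from your model call.
const chatSpanAttributes: Record<string, string | number | string[]> = {
  'gen_ai.operation.name': 'chat',
  'gen_ai.provider.name': 'openai',
  'gen_ai.request.model': 'gpt-4o-mini',
  'gen_ai.request.temperature': 0.7,
  'gen_ai.usage.input_tokens': 182,
  'gen_ai.usage.output_tokens': 56,
  'gen_ai.response.finish_reasons': ['stop'],
}

// Span names pair the operation with the model identifier: 'chat {model}'.
const spanName = `chat ${chatSpanAttributes['gen_ai.request.model']}`
console.log(spanName)
// → chat gpt-4o-mini
```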

Protocol selection guide

Choose the right protocol package based on your provider:

  • Dash0: gRPC, via @opentelemetry/exporter-trace-otlp-grpc
  • Datadog: gRPC, via @opentelemetry/exporter-trace-otlp-grpc
  • SigNoz: HTTP/Protobuf, via @opentelemetry/exporter-trace-otlp-proto
  • New Relic: HTTP/Protobuf, via @opentelemetry/exporter-trace-otlp-proto
  • Traceloop: HTTP/JSON, via @opentelemetry/exporter-trace-otlp-http
  • Laminar: HTTP/Protobuf, via @opentelemetry/exporter-trace-otlp-proto
  • Custom: varies, depends on your collector

Warning: Make sure to install the correct protocol package for your provider. The exporter will provide a helpful error message if the required package is missing.

Troubleshooting

Missing Dependency Error

If you see an error like:

HTTP/Protobuf exporter is not installed (required for signoz).
To use HTTP/Protobuf export, install the required package:
  npm install @opentelemetry/exporter-trace-otlp-proto

Install the suggested package for your provider.

Common Issues

  1. Wrong protocol package: Verify you installed the correct exporter for your provider
  2. Invalid endpoint: Check endpoint format matches provider requirements
  3. Authentication failures: Verify API keys and headers are correct
