Sentry exporter

Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference

Published: 2026-05-10 · Source: crawler_authoritative

Situation

Mastra observability configuration guide for setting up Sentry as a tracing exporter to monitor AI model performance, token usage, and tool executions using OpenTelemetry semantic conventions.

Insight

The @mastra/sentry package provides a SentryExporter class that sends Mastra traces to Sentry. Installation is supported via npm, pnpm, yarn, and bun (e.g., npm install @mastra/sentry@latest).

The exporter maps Mastra span types to Sentry operations: AGENT_RUN → gen_ai.invoke_agent, MODEL_GENERATION → gen_ai.chat, TOOL_CALL and MCP_TOOL_CALL → gen_ai.execute_tool, workflow spans → workflow.* operations, PROCESSOR_RUN → ai.processor, GENERIC → ai.span. MODEL_STEP and MODEL_CHUNK spans are skipped to reduce trace noise.

For MODEL_GENERATION spans, the exporter captures gen_ai.system, gen_ai.request.model, gen_ai.response.model, gen_ai.response.text, gen_ai.response.tool_calls, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, gen_ai.request.temperature, gen_ai.request.stream, gen_ai.request.messages, and gen_ai.completion_start_time. For TOOL_CALL spans, it captures gen_ai.tool.name, gen_ai.tool.type (always 'function'), gen_ai.tool.call.id, gen_ai.tool.input, gen_ai.tool.output, and tool.success.

Configuration options include dsn (required), environment, tracesSampleRate (0.0-1.0), release, options (additional Sentry.NodeOptions), and logLevel. Sampling tip: use 1.0 in development and 0.1-0.2 for high-load production; to disable tracing entirely, omit tracesSampleRate rather than setting it to 0.

Action

Install @mastra/sentry package, then set SENTRY_DSN environment variable. For zero-config setup, import SentryExporter and pass to Observability configs: new SentryExporter(). For explicit config, pass options: new SentryExporter({ dsn: process.env.SENTRY_DSN!, environment: 'production', tracesSampleRate: 1.0 }). Add to Mastra config under observability.configs.sentry.exporters. Prerequisites: Sentry account at sentry.io, DSN from Project Settings → Client Keys, and optional SENTRY_ENVIRONMENT and SENTRY_RELEASE variables. To sample 10% of transactions for high-load backends: tracesSampleRate: 0.1.

Result

Traces appear in Sentry dashboard with hierarchical parent-child relationships, automatic token usage tracking for model generations, tool call execution monitoring with input/output capture, streaming data aggregation, and automatic error status capture.

Applicability

Requires @mastra/sentry@latest, Node.js environment, Sentry account, and valid DSN. Supports Mastra Observability framework integration.


Original content

Sentry exporter

Sentry is an application monitoring platform with AI-specific tracing capabilities. The Sentry exporter sends your traces to Sentry using OpenTelemetry semantic conventions, providing insights into model performance, token usage, and tool executions.

Installation

npm:

npm install @mastra/sentry@latest

pnpm:

pnpm add @mastra/sentry@latest

Yarn:

yarn add @mastra/sentry@latest

Bun:

bun add @mastra/sentry@latest

Configuration

Prerequisites

  1. Sentry Account: Sign up at sentry.io
  2. DSN: Get your Data Source Name from Project Settings → Client Keys
  3. Environment Variables: Set your configuration
SENTRY_DSN=https://<public_key>@<org>.ingest.sentry.io/...
 
# Optional
SENTRY_ENVIRONMENT=production
SENTRY_RELEASE=1.0.0

Zero-Config Setup

With environment variables set, use the exporter with no configuration:

import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { SentryExporter } from '@mastra/sentry'
 
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: 'my-service',
        exporters: [new SentryExporter()],
      },
    },
  }),
})
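
If SENTRY_DSN might be unset in some environments, a small guard next to this setup makes the misconfiguration explicit at startup. A defensive sketch; this check is not part of @mastra/sentry:

// Fail fast instead of silently running without a trace destination.
if (!process.env.SENTRY_DSN) {
  throw new Error('SENTRY_DSN is not set; the Sentry exporter has nowhere to send traces')
}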

Explicit Configuration

You can also pass credentials directly (takes precedence over environment variables):

import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { SentryExporter } from '@mastra/sentry'
 
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      sentry: {
        serviceName: 'my-service',
        exporters: [
          new SentryExporter({
            dsn: process.env.SENTRY_DSN!,
            environment: 'production',
            tracesSampleRate: 1.0, // Send 100% of transactions to Sentry
          }),
        ],
      },
    },
  }),
})

Configuration options

Complete Configuration

new SentryExporter({
  // Required settings
  dsn: process.env.SENTRY_DSN!, // Data Source Name - tells the SDK where to send events
 
  // Optional settings
  environment: 'production', // Deployment environment (enables filtering issues and alerts by environment)
  tracesSampleRate: 1.0, // Percentage of transactions sent to Sentry (0.0 = 0%, 1.0 = 100%)
  release: '1.0.0', // Version of your code deployed (helps identify regressions and track deployments)
 
  // Advanced Sentry options
  options: {
    // Any additional Sentry.NodeOptions
    integrations: [],
    beforeSend: event => event,
    // ... other Sentry SDK options
  },
 
  // Diagnostic logging
  logLevel: 'info', // debug | info | warn | error
})
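
Because options accepts any Sentry.NodeOptions, SDK hooks such as beforeSend can be used for things like scrubbing data before it leaves the process. A minimal sketch; the scrubbed field is illustrative, not something the exporter sets:

new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  options: {
    // Runs on every event before it is sent; return null to drop it,
    // or return a (possibly modified) event to send it.
    beforeSend: event => {
      if (event.user?.email) {
        delete event.user.email // illustrative scrubbing of a PII field
      }
      return event
    },
  },
})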

Sampling Configuration

Control the percentage of transactions sent to Sentry. This is useful for high-volume applications:

new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  tracesSampleRate: 0.1, // Send 10% of transactions to Sentry (recommended for high-load backends)
})

Tip: Set to 1.0 (100%) for development and 0.1 to 0.2 (10-20%) for production high-load applications. To disable tracing entirely, don’t set tracesSampleRate at all rather than setting it to 0.
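
One way to apply this tip is to pick the rate from the deployment environment instead of hard-coding it. A sketch, assuming NODE_ENV distinguishes development from production:

const isProd = process.env.NODE_ENV === 'production'

new SentryExporter({
  dsn: process.env.SENTRY_DSN!,
  environment: isProd ? 'production' : 'development',
  // 10% sampling in production, full sampling in development;
  // omit tracesSampleRate entirely (per the tip above) to disable tracing.
  tracesSampleRate: isProd ? 0.1 : 1.0,
})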

Span type mapping

Mastra span types are automatically mapped to Sentry operations:

Mastra span type → Sentry operation (notes):

  • AGENT_RUN → gen_ai.invoke_agent (contains tokens from child MODEL_GENERATION span)
  • MODEL_GENERATION → gen_ai.chat (includes usage stats, streaming data)
  • MODEL_STEP → (skipped) (skipped to simplify trace hierarchy)
  • MODEL_CHUNK → (skipped) (data aggregated in MODEL_GENERATION)
  • TOOL_CALL → gen_ai.execute_tool (tool execution with input/output)
  • MCP_TOOL_CALL → gen_ai.execute_tool (MCP tool execution)
  • WORKFLOW_RUN → workflow.run
  • WORKFLOW_STEP → workflow.step
  • WORKFLOW_CONDITIONAL → workflow.conditional
  • WORKFLOW_CONDITIONAL_EVAL → workflow.conditional
  • WORKFLOW_PARALLEL → workflow.parallel
  • WORKFLOW_LOOP → workflow.loop
  • WORKFLOW_SLEEP → workflow.sleep
  • WORKFLOW_WAIT_EVENT → workflow.wait
  • PROCESSOR_RUN → ai.processor
  • GENERIC → ai.span

OpenTelemetry semantic conventions

The exporter uses standard GenAI semantic conventions with Sentry-specific attributes:

For MODEL_GENERATION spans:

  • gen_ai.system: Model provider (e.g., openai, anthropic)
  • gen_ai.request.model: Model identifier (e.g., gpt-5.4)
  • gen_ai.response.model: Response model
  • gen_ai.response.text: Output text response
  • gen_ai.response.tool_calls: Tool calls made during generation (JSON array)
  • gen_ai.usage.input_tokens: Input token count
  • gen_ai.usage.output_tokens: Output token count
  • gen_ai.request.temperature: Temperature parameter
  • gen_ai.request.stream: Whether streaming was requested
  • gen_ai.request.messages: Input messages/prompts (JSON)
  • gen_ai.completion_start_time: Time first token arrived

For TOOL_CALL spans:

  • gen_ai.tool.name: Tool identifier
  • gen_ai.tool.type: function
  • gen_ai.tool.call.id: Tool call ID
  • gen_ai.tool.input: Tool input (JSON)
  • gen_ai.tool.output: Tool output (JSON)
  • tool.success: Whether the tool call succeeded

For AGENT_RUN spans:

  • gen_ai.agent.name: Agent identifier
  • gen_ai.pipeline.name: Agent name (for Sentry AI view)
  • gen_ai.agent.instructions: Agent instructions
  • gen_ai.response.model: Model from child generation
  • gen_ai.response.text: Output text from child generation
  • gen_ai.usage.*: Token usage from child generation
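
For orientation, a hypothetical snapshot of what these attributes might look like on a single MODEL_GENERATION span; the attribute names come from the lists above, and every value is made up for illustration:

const exampleModelGenerationAttributes = {
  'gen_ai.system': 'openai',                    // model provider
  'gen_ai.request.model': 'gpt-4o-mini',        // hypothetical model id
  'gen_ai.usage.input_tokens': 412,             // made-up token counts
  'gen_ai.usage.output_tokens': 96,
  'gen_ai.request.temperature': 0.2,
  'gen_ai.request.stream': true,
  'gen_ai.completion_start_time': '2026-05-10T12:00:01.250Z',
}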

Features

  • Hierarchical traces: Maintains parent-child relationships
  • Token tracking: Automatic token usage tracking for generations
  • Tool call tracking: Captures tool executions with input/output
  • Streaming support: Aggregates streaming responses
  • Error tracking: Automatic error status and exception capture
  • Workflow support: Tracks workflow execution steps
  • Simplified hierarchy: MODEL_STEP and MODEL_CHUNK spans are skipped to reduce noise

Links

See also: