Guide for configuring the Arize OpenTelemetry exporter in Mastra to send traces to the Phoenix or Arize AX observability platforms for AI applications.
Insight
The @mastra/arize package provides an ArizeExporter that sends traces using OpenTelemetry and OpenInference semantic conventions, and is compatible with any OpenTelemetry platform that supports OpenInference. Two platforms are supported: Phoenix (open-source, self-hosted or Phoenix Cloud) and Arize AX (enterprise).
For Phoenix, the required config is an endpoint (ending in /v1/traces) via the PHOENIX_COLLECTOR_ENDPOINT env var; PHOENIX_API_KEY is optional for authenticated instances, and PHOENIX_PROJECT_NAME defaults to 'mastra-service'.
For Arize AX, the ARIZE_SPACE_ID and ARIZE_API_KEY env vars are required; ARIZE_PROJECT_NAME is optional.
The ArizeExporter constructor accepts: endpoint (required for Phoenix), spaceId (required for Arize AX), apiKey (required for authenticated endpoints), projectName, a headers object for custom OTLP headers, logLevel (debug | info | warn | error), batchSize (default 512 spans), timeout (default 30000 ms), and resourceAttributes for custom span attributes.
Custom non-reserved metadata can be added via tracingOptions.metadata in agent.generate() calls; reserved fields (input, output, sessionId, thread/user IDs, OpenInference IDs) are excluded automatically. The exporter implements the OpenInference Semantic Conventions for standardized trace structure.
Action
Install with one of: npm install @mastra/arize@latest, pnpm add @mastra/arize@latest, yarn add @mastra/arize@latest, or bun add @mastra/arize@latest.
Set environment variables: PHOENIX_COLLECTOR_ENDPOINT, PHOENIX_API_KEY, PHOENIX_PROJECT_NAME for Phoenix; ARIZE_SPACE_ID, ARIZE_API_KEY, ARIZE_PROJECT_NAME for Arize AX.
Create the ArizeExporter with no arguments for zero-config setup, or pass explicit config to the constructor.
Add the exporter to the Observability configs: new Observability({ configs: { arize: { serviceName: 'mastra-service', exporters: [new ArizeExporter({…})] } } }).
For local testing with Docker: docker run --pull=always -d --name arize-phoenix -p 6006:6006 -e PHOENIX_SQL_DATABASE_URL="sqlite:///:memory:" arizephoenix/phoenix:latest, then set PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces.
For custom metadata on traces, pass tracingOptions.metadata to agent.generate(): await agent.generate(input, { tracingOptions: { metadata: { companyId: 'acme-co', tier: 'enterprise' } } }).
Result
Traces are exported to Phoenix or Arize AX using OpenTelemetry protocol with OpenInference semantic conventions, providing standardized observability data for AI applications.
Applicability conditions
Compatible with any OpenTelemetry platform that supports OpenInference semantic conventions. For Phoenix: endpoint required with path ending in /v1/traces. For Arize AX: spaceId and apiKey required.
Original content
Arize exporter
Arize provides observability platforms for AI applications through Phoenix (open-source) and Arize AX (enterprise). The Arize exporter sends traces using OpenTelemetry and OpenInference semantic conventions, compatible with any OpenTelemetry platform that supports OpenInference.
Installation
npm:
npm install @mastra/arize@latest
pnpm:
pnpm add @mastra/arize@latest
Yarn:
yarn add @mastra/arize@latest
Bun:
bun add @mastra/arize@latest
Configuration
Phoenix Setup
Phoenix is an open-source observability platform that can be self-hosted or used via Phoenix Cloud.
Prerequisites
Phoenix Instance: Deploy using Docker or sign up at Phoenix Cloud
Endpoint: Your Phoenix endpoint URL (ends in /v1/traces)
API Key: Optional for unauthenticated instances, required for Phoenix Cloud
Environment Variables: Set your configuration
# Required
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces # Or your Phoenix Cloud URL

# Optional
PHOENIX_API_KEY=your-api-key # For authenticated Phoenix instances
PHOENIX_PROJECT_NAME=mastra-service # Defaults to 'mastra-service'
Zero-Config Setup
With environment variables set, use the exporter with no configuration:
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArizeExporter } from '@mastra/arize'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: 'mastra-service',
        exporters: [new ArizeExporter()],
      },
    },
  }),
})
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArizeExporter } from '@mastra/arize'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.PHOENIX_PROJECT_NAME || 'mastra-service',
        exporters: [
          new ArizeExporter({
            endpoint: process.env.PHOENIX_COLLECTOR_ENDPOINT!,
            apiKey: process.env.PHOENIX_API_KEY,
            projectName: process.env.PHOENIX_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
})
Quickstart with Docker: Test locally with an in-memory Phoenix instance:
docker run --pull=always -d --name arize-phoenix -p 6006:6006 -e PHOENIX_SQL_DATABASE_URL="sqlite:///:memory:" arizephoenix/phoenix:latest
Then set PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces.
Arize AX Setup
Arize AX is the enterprise Arize platform. Set the ARIZE_SPACE_ID and ARIZE_API_KEY environment variables (and optionally ARIZE_PROJECT_NAME), then use the exporter with no configuration:
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArizeExporter } from '@mastra/arize'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: 'mastra-service',
        exporters: [new ArizeExporter()],
      },
    },
  }),
})
Explicit Configuration
You can also pass credentials directly (takes precedence over environment variables):
import { Mastra } from '@mastra/core'
import { Observability } from '@mastra/observability'
import { ArizeExporter } from '@mastra/arize'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      arize: {
        serviceName: process.env.ARIZE_PROJECT_NAME || 'mastra-service',
        exporters: [
          new ArizeExporter({
            apiKey: process.env.ARIZE_API_KEY!,
            spaceId: process.env.ARIZE_SPACE_ID!,
            projectName: process.env.ARIZE_PROJECT_NAME,
          }),
        ],
      },
    },
  }),
})
Configuration options
The Arize exporter supports advanced configuration for fine-tuning OpenTelemetry behavior:
Complete Configuration
new ArizeExporter({
  // Phoenix configuration
  endpoint: 'https://your-collector.example.com/v1/traces', // Required for Phoenix

  // Arize AX configuration
  spaceId: 'your-space-id', // Required for Arize AX

  // Shared configuration
  apiKey: 'your-api-key', // Required for authenticated endpoints
  projectName: 'mastra-service', // Optional project name

  // Optional OTLP settings
  headers: {
    'x-custom-header': 'value', // Additional headers for OTLP requests
  },

  // Debug and performance tuning
  logLevel: 'debug', // Logging: debug | info | warn | error
  batchSize: 512, // Batch size before exporting spans
  timeout: 30000, // Timeout in ms before exporting spans

  // Custom resource attributes
  resourceAttributes: {
    'deployment.environment': process.env.NODE_ENV,
    'service.version': process.env.APP_VERSION,
  },
})
Batch Processing Options
Control how traces are batched and exported:
new ArizeExporter({
  endpoint: process.env.PHOENIX_COLLECTOR_ENDPOINT!,
  apiKey: process.env.PHOENIX_API_KEY,

  // Batch processing configuration
  batchSize: 512, // Number of spans to batch (default: 512)
  timeout: 30000, // Max time in ms to wait before export (default: 30000)
})
Non-reserved span attributes are serialized into the OpenInference metadata payload and surface in Arize/Phoenix. You can add them via tracingOptions.metadata:
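A minimal sketch of passing metadata at call time; it assumes an already-configured agent instance named `agent`, and the metadata keys shown are illustrative:

```typescript
// Hypothetical agent instance; the metadata keys are illustrative.
// Non-reserved keys surface under the OpenInference metadata payload.
const result = await agent.generate('Summarize the latest report', {
  tracingOptions: {
    metadata: {
      companyId: 'acme-co',
      tier: 'enterprise',
    },
  },
})
```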
Reserved fields such as input, output, sessionId, thread/user IDs, and OpenInference IDs are excluded automatically.
OpenInference semantic conventions
This exporter implements the OpenInference Semantic Conventions for generative AI applications, providing standardized trace structure across different observability platforms.
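For illustration, OpenInference represents spans as flat key/value attributes; the sketch below shows the general shape an LLM span's attributes might take (attribute names follow the OpenInference semantic conventions; the values are invented examples):

```typescript
// Illustrative flat attributes for an OpenInference LLM span.
// Attribute names follow the OpenInference semantic conventions;
// all values here are invented for the example.
const exampleSpanAttributes: Record<string, string> = {
  'openinference.span.kind': 'LLM',
  'llm.model_name': 'gpt-4o',
  'input.value': 'Summarize the latest report',
  'output.value': 'Here is the summary...',
  // Custom tracingOptions.metadata is serialized as a JSON string.
  'metadata': JSON.stringify({ companyId: 'acme-co', tier: 'enterprise' }),
}
```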