Server Overview - Mastra HTTP Server Configuration

Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference

Published: 2026-05-10 · Source: crawler_authoritative

Situation

Mastra server documentation covering HTTP server configuration, architecture, and features for exposing agents and workflows as API endpoints.

Insight

Mastra runs as an HTTP server using Hono as the underlying framework, generated via mastra build into the .mastra directory. The server handles request routing, middleware execution, authentication, and streaming responses. It exposes API endpoints for all registered agents and workflows, supports custom API routes and middleware, provides authentication across multiple providers (JWT, Clerk, Supabase, Firebase, Auth0, WorkOS), and includes request context for dynamic configuration. Stream data redaction is enabled by default, removing system prompts, tool definitions, and API keys from streaming chunks before they are sent to clients.

Configuration uses a server object passed to the Mastra constructor, with options for port (defaults to the PORT env var or 4111) and host (defaults to the MASTRA_HOST env var or 'localhost'). The OpenAPI specification is available at http://localhost:4111/api/openapi.json and the Swagger UI at http://localhost:4111/swagger-ui; both are disabled in production by default and enabled via the server.build.openAPIDocs and server.build.swaggerUI config options.

The OpenAI Responses API routes are exposed as agent-backed adapters over Mastra agents, memory, and storage; they are currently experimental. The Responses API supports streaming, function calling via tools, stored continuations with previous_response_id, conversation threads via conversation_id, provider passthrough with providerOptions, and JSON output via text.format.

Action

To configure the server, pass a server object to the Mastra constructor with port and host options. Enable OpenAPI docs in production by setting server.build.openAPIDocs and server.build.swaggerUI to true. For custom adapters or frameworks like Express, use the Server Adapters documentation. The Responses API accepts agent_id to select the handling agent, previous_response_id for stored continuations, conversation_id for threads, model to override the agent’s configured model, and providerOptions for provider-specific passthrough.
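The configuration steps above can be sketched as follows. The port, host, and build options are the ones named in this document; this is not a complete option list (see the configuration reference):

```typescript
import { Mastra } from '@mastra/core'

export const mastra = new Mastra({
  server: {
    port: 4111,        // defaults to the PORT env var or 4111
    host: 'localhost', // defaults to the MASTRA_HOST env var or 'localhost'
    build: {
      openAPIDocs: true, // serve /api/openapi.json in production
      swaggerUI: true,   // serve /swagger-ui in production
    },
  },
})
```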

Result

Mastra generates a Hono-based HTTP server that exposes agents and workflows as REST endpoints. The OpenAPI spec and Swagger UI provide interactive API exploration. Streaming responses are automatically redacted of sensitive data. Agents handle Responses API calls through their configured memory and storage.

Applicability conditions

The Responses API routes are experimental. TypeScript requires module: ES2022 and moduleResolution: bundler; CommonJS and node module resolution are not supported. The OpenAPI and Swagger endpoints are disabled in production by default.


Original content

Server overview

Mastra runs as an HTTP server that exposes your agents, workflows, and other functionality as API endpoints. The server handles request routing, middleware execution, authentication, and streaming responses.

Info: This page covers the server configuration options passed to the Mastra constructor. For running Mastra with your own HTTP server (Hono, Express, etc.), visit Server Adapters.

Server architecture

Mastra uses Hono as its underlying HTTP server framework. When you build a Mastra application using mastra build, it generates a Hono-based HTTP server in the .mastra directory.

The server provides:

  • API endpoints for all registered agents and workflows
  • Custom API routes and middleware
  • Authentication across providers
  • Request context for dynamic configuration
  • Stream data redaction for secure responses

Configuration

Configure the server by passing a server object to the Mastra constructor:

import { Mastra } from '@mastra/core'
 
export const mastra = new Mastra({
  server: {
    port: 3000, // Defaults to PORT env var or 4111
    host: '0.0.0.0', // Defaults to MASTRA_HOST env var or 'localhost'
  },
})

Info: Visit the configuration reference for a full list of available server options.

Server features

  • Middleware: Intercept requests for authentication, logging, CORS, or injecting request-specific context.
  • Custom API Routes: Extend the server with your own HTTP endpoints that have access to the Mastra instance.
  • Request Context: Pass request-specific values to agents, tools, and workflows based on runtime conditions.
  • Server Adapters: Run Mastra with Express, Hono, or your own HTTP server instead of the generated server.
  • Custom Adapters: Build adapters for frameworks not officially supported.
  • Mastra Client SDK: Type-safe client for calling agents, workflows, and tools from browser or server environments.
  • Authentication: Secure endpoints with JWT, Clerk, Supabase, Firebase, Auth0, or WorkOS.
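The middleware and custom-route features above might be wired up as in the following sketch. The registerApiRoute helper, the apiRoutes/middleware option names, and the Hono-style handler signature are assumptions here; see the Middleware and Custom API Routes pages for the exact API:

```typescript
import { Mastra } from '@mastra/core'
import { registerApiRoute } from '@mastra/core/server'

export const mastra = new Mastra({
  server: {
    // Middleware runs before route handlers (e.g. auth, logging, CORS).
    middleware: [
      async (c, next) => {
        console.log(`${c.req.method} ${c.req.url}`)
        await next()
      },
    ],
    // A custom endpoint with access to the Mastra instance.
    apiRoutes: [
      registerApiRoute('/health', {
        method: 'GET',
        handler: async (c) => c.json({ status: 'ok' }),
      }),
    ],
  },
})
```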

REST API

You can explore all available endpoints in the OpenAPI specification at http://localhost:4111/api/openapi.json, which details every endpoint and its request and response schemas.

To explore the API interactively, visit the Swagger UI at http://localhost:4111/swagger-ui. Here, you can discover endpoints and test them directly from your browser.

Note: The OpenAPI and Swagger endpoints are disabled in production by default. To enable them, set server.build.openAPIDocs and server.build.swaggerUI to true respectively.

OpenAI Responses API

Mastra exposes OpenAI-compatible Responses and Conversations routes that let you use Mastra Agents as a Responses API. These routes are agent-backed adapters over Mastra agents, memory, and storage, so requests run through the selected Mastra agent instead of acting as a raw provider proxy.

These APIs are currently experimental.

Use agent_id to select the Mastra agent that should handle the request. Initial requests target an agent directly, and stored follow-up turns can continue with previous_response_id. You can also pass model to override the agent’s configured model for a single request. If you omit model, Mastra uses the model already configured on the agent.

The Responses routes support streaming, function calling (tools), stored continuations with previous_response_id, conversation threads through conversation_id, provider-specific passthrough with providerOptions, and JSON output through text.format.
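A minimal sketch of a request body built from the parameters above. The agent_id, previous_response_id, conversation_id, model, and providerOptions fields come from this document; the input field name is an assumption based on the OpenAI-compatible contract, and the authoritative schema is in the Responses API reference:

```typescript
// Request-body shape for the agent-backed Responses routes (sketch only).
interface ResponsesRequest {
  agent_id: string                          // Mastra agent that handles the request
  input: string                             // user input (assumed OpenAI-compatible field)
  previous_response_id?: string             // continue a stored response
  conversation_id?: string                  // attach the turn to a conversation thread
  model?: string                            // override the agent's configured model
  providerOptions?: Record<string, unknown> // provider-specific passthrough
}

export function buildResponsesRequest(
  agentId: string,
  input: string,
  opts: Partial<Omit<ResponsesRequest, 'agent_id' | 'input'>> = {},
): ResponsesRequest {
  return { agent_id: agentId, input, ...opts }
}
```

Omitting model keeps the model already configured on the agent, matching the fallback behavior described above.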

For the full request and response contract, see the Responses API reference and Conversations API reference. For the complete list of HTTP routes, see server routes.

Stream data redaction

When streaming agent responses, the HTTP layer redacts system prompts, tool definitions, API keys, and similar data from each chunk before sending it to clients. This is enabled by default.

This behavior cannot be disabled through the Mastra server configuration; it is configurable only when using server adapters, where stream data redaction is also enabled by default.

TypeScript configuration

Mastra requires module and moduleResolution settings compatible with modern Node.js. Legacy options like CommonJS or node aren’t supported.

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "noEmit": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}

Links

See also: