Streaming Overview

Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference

Published: 2026-05-10 · Source: crawler_authoritative

Situation

Mastra streaming API documentation covering real-time, incremental responses from agents and workflows for developers implementing chat, long-form content, or multi-step workflow scenarios.

Insight

Mastra supports two streaming methods differentiated by AI SDK version: .stream() for V2 models, requiring AI SDK v5+ with the LanguageModelV2 interface, and .streamLegacy() for V1 models using AI SDK v4 with LanguageModelV1.

Agent streaming accepts prompts as a single string, an array of strings for multiple context pieces, or an array of message objects with role and content properties. The textStream property provides a readable stream that emits text chunks incrementally. Additional stream properties include text (a promise resolving to the full response), finishReason (why streaming stopped), and usage (token counts).

For AI SDK v5+ compatibility, Mastra provides the toAISdkV5Stream() utility from @mastra/ai-sdk to convert streams to AI SDK format, with a from option (e.g., from: 'agent'). The toAISdkV5Messages() utility from @mastra/ai-sdk/ui converts message arrays to AI SDK v5+ format.

Workflow streaming returns event-based structured data via Run.stream() rather than text chunks. Events include the workflow-start type, which carries runId, from ('WORKFLOW'), and a payload with stepName, args, stepCallId, startedAt, and status. Workflow stream properties include status, result, and usage. For agents with background tasks, Agent.streamUntilIdle() keeps the stream open until those tasks complete.

Action

For agent streaming: call mastra.getAgent('testAgent').stream([{role: 'user', content: 'prompt'}]) and iterate with for await (const chunk of stream.textStream) to process chunks incrementally. For AI SDK v5+ conversion, import toAISdkV5Stream from @mastra/ai-sdk and pass the stream with {from: 'agent'} option. For workflow streaming: create a run with workflow.createRun(), then call run.stream({inputData: {value: 'data'}}) and iterate the resulting event stream. Use stream.status, stream.result, and stream.usage for workflow metadata.

Result

Agent streams yield progressive text chunks via textStream, full text via text promise, finish reason via finishReason, and token usage via usage. Workflow streams emit events including workflow-start with runId, stepName, args, stepCallId, startedAt, and status.

Applicability conditions

Requires AI SDK v5+ for .stream() with LanguageModelV2; use .streamLegacy() for AI SDK v4 with LanguageModelV1. If you get an error that you are using an AI SDK v4 model, upgrade your model package to its next major version.


Original content

Streaming overview

Mastra supports real-time, incremental responses from agents and workflows, allowing users to see output as it’s generated instead of waiting for completion. This is useful for chat, long-form content, multi-step workflows, or any scenario where immediate feedback matters.

Getting started

Mastra’s streaming API adapts based on your model version:

  • .stream(): For V2 models, supports AI SDK v5 and later (LanguageModelV2).
  • .streamLegacy(): For V1 models, supports AI SDK v4 (LanguageModelV1).

Streaming with agents

You can pass a single string for basic prompts, an array of strings when providing multiple pieces of context, or an array of message objects with role and content for precise control over roles and conversational flows.
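The three accepted prompt shapes can be sketched as plain TypeScript values. The Message type below is illustrative only, not Mastra's own type definition:

```typescript
// Illustrative message shape; not Mastra's official type definition.
type Message = { role: 'user' | 'assistant' | 'system'; content: string }

// 1. A single string for basic prompts
const simplePrompt: string = 'Help me organize my day'

// 2. An array of strings when providing multiple pieces of context
const contextPrompt: string[] = ['I work from 9 to 5', 'Help me organize my day']

// 3. An array of message objects for precise control over roles
const messagePrompt: Message[] = [
  { role: 'system', content: 'You are a scheduling assistant.' },
  { role: 'user', content: 'Help me organize my day' },
]
```

Any of these values could be passed as the argument to the agent's stream call shown below.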

Using Agent.stream()

A textStream breaks the response into chunks as it’s generated, allowing output to stream progressively instead of arriving all at once. Iterate over the textStream using a for await loop to inspect each stream chunk.

const testAgent = mastra.getAgent('testAgent')
 
const stream = await testAgent.stream([{ role: 'user', content: 'Help me organize my day' }])
 
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk)
}

Info: Visit Agent.stream() for more information.

Tip: For agents that dispatch background tasks, use Agent.streamUntilIdle() to keep the stream open until those tasks complete and the agent has had a chance to respond to their results.

Output from Agent.stream()

The output streams the generated response from the agent.

Of course!
To help you organize your day effectively, I need a bit more information.
Here are some questions to consider:
...

Agent stream properties

An agent stream provides access to various response properties:

  • stream.textStream: A readable stream that emits text chunks.
  • stream.text: Promise that resolves to the full text response.
  • stream.finishReason: The reason the agent stopped streaming.
  • stream.usage: Token usage information.
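A self-contained sketch of how these properties are typically consumed, using a hand-rolled mock in place of a real agent stream. The property names follow the list above; the exact shape of usage, and whether finishReason is a plain value or a promise in Mastra, are assumptions here — check the API reference:

```typescript
// A helper that drains a textStream into the full response text.
async function collectText(textStream: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of textStream) {
    text += chunk
  }
  return text
}

// Mock object mirroring the agent stream properties listed above;
// a real stream would come from `await testAgent.stream(...)`.
async function* mockChunks() {
  yield 'Of course! '
  yield 'Here is a plan for your day.'
}

const mockStream = {
  textStream: mockChunks(),
  text: Promise.resolve('Of course! Here is a plan for your day.'),
  finishReason: 'stop',
  usage: { totalTokens: 42 }, // illustrative shape
}
```

Draining textStream and awaiting text yield the same content; the difference is only whether you see it chunk by chunk or all at once.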

AI SDK v5+ Compatibility

AI SDK v5 (and later) uses LanguageModelV2 for model providers. If you get an error that you are using an AI SDK v4 model, upgrade your model package to its next major version.

For integration with AI SDK v5+, use the toAISdkV5Stream() utility from @mastra/ai-sdk to convert Mastra streams to AI SDK-compatible format:

import { toAISdkV5Stream } from '@mastra/ai-sdk'
 
const testAgent = mastra.getAgent('testAgent')
 
const stream = await testAgent.stream([{ role: 'user', content: 'Help me organize my day' }])
 
// Convert to AI SDK v5+ compatible stream
const aiSDKStream = toAISdkV5Stream(stream, { from: 'agent' })

For converting messages to AI SDK v5+ format, use the toAISdkV5Messages() utility from @mastra/ai-sdk/ui:

import { toAISdkV5Messages } from '@mastra/ai-sdk/ui'
 
const messages = [{ role: 'user', content: 'Hello' }]
const aiSDKMessages = toAISdkV5Messages(messages)

Streaming with workflows

Streaming from a workflow returns a sequence of structured events describing the run lifecycle, rather than incremental text chunks. This event-based format makes it possible to track and respond to workflow progress in real time once a run is created using .createRun().

Using Run.stream()

The stream() method returns a ReadableStream of events directly.

const run = await testWorkflow.createRun()
 
const stream = await run.stream({
  inputData: {
    value: 'initial data',
  },
})
 
for await (const chunk of stream) {
  console.log(chunk)
}

Info: Visit Run.stream() for more information.

Output from Run.stream()

The event structure includes runId and from at the top level, making it easier to identify and track workflow runs without digging into the payload.

{
  type: 'workflow-start',
  runId: '1eeaf01a-d2bf-4e3f-8d1b-027795ccd3df',
  from: 'WORKFLOW',
  payload: {
    stepName: 'step-1',
    args: { value: 'initial data' },
    stepCallId: '8e15e618-be0e-4215-a5d6-08e58c152068',
    startedAt: 1755121710066,
    status: 'running'
  }
}
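Because every event carries type, runId, and from at the top level, a consumer can route events without digging into payloads. A minimal sketch, with the WorkflowEvent shape inferred from the sample output above rather than taken from an official type:

```typescript
// Event shape inferred from the sample workflow-start event; not an official type.
type WorkflowEvent = {
  type: string
  runId: string
  from: string
  payload: Record<string, unknown>
}

// Collect the step names of all events matching a given type. In practice
// the events would arrive via `for await (const chunk of stream)`.
function stepNamesFor(events: WorkflowEvent[], type: string): string[] {
  return events
    .filter((e) => e.type === type)
    .map((e) => String(e.payload.stepName))
}

const events: WorkflowEvent[] = [
  {
    type: 'workflow-start',
    runId: '1eeaf01a-d2bf-4e3f-8d1b-027795ccd3df',
    from: 'WORKFLOW',
    payload: { stepName: 'step-1', status: 'running' },
  },
]
```

The same pattern extends to any other event types a run emits: match on the top-level type, then read only the payload fields that type is known to carry.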

Workflow stream properties

A workflow stream provides access to various response properties:

  • stream.status: The status of the workflow run.
  • stream.result: The result of the workflow run.
  • stream.usage: The total token usage of the workflow run.

Links

See also: