Background tasks
Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference
Published: 2026-05-10 · Source: crawler_authoritative
Situation
A guide to configuring and using background tasks in Mastra agents, which dispatch long-running tool calls without blocking the agentic loop. Aimed at developers building agents that integrate slow external services, subagent delegations, or asynchronous workflows.
Insight
Background tasks let an agent dispatch a long-running tool call without blocking the agentic loop. The tool returns an immediate acknowledgement, the LLM continues responding, and the task runs to completion in the background. When it finishes, its result is written to memory, and if you use streamUntilIdle() the agent is re-invoked automatically so the result is processed in the same call. Background tasks require a configured storage backend on the Mastra instance for persistence across process restarts. Configuration can be set at three layers, in priority order: an LLM per-call override via a _background field in the tool arguments, agent-level backgroundTasks.tools config, or tool-level backgroundTasks.enabled. The manager also supports global settings: globalConcurrency (default 10), perAgentConcurrency (default 5), backpressure (default 'queue'), and defaultTimeoutMs (default 300_000, i.e. 5 minutes). Stream chunks emitted include background-task-started, background-task-running, background-task-progress, background-task-output, background-task-completed, background-task-failed, background-task-cancelled, background-task-suspended, and background-task-resumed. Tasks can suspend mid-execution via suspend(data) and resume via mastra.backgroundTaskManager.resume(taskId, resumeData). Lifecycle callbacks can be registered at the tool level (onComplete/onFailed), agent level (onTaskComplete/onTaskFailed), and manager level.
Action
Enable background tasks by setting backgroundTasks.enabled on the Mastra instance with a configured storage backend. At the tool level, set backgroundTasks.enabled: true on the tool definition, with optional timeoutMs and maxRetries. At the agent level, use backgroundTasks.tools to opt in specific tools, set tools: 'all' to opt in every tool, or set disabled: true to force synchronous execution. The LLM can include a _background field in tool arguments to override the configuration per call. Use agent.streamUntilIdle() to keep the stream open until all dispatched background tasks complete and the LLM responds to their results; set maxIdleMs to cap the wait time between turns (default 5 minutes). For subagents, opt in each subagent on the supervisor via backgroundTasks.tools, with per-subagent timeoutMs tuning. A task can suspend by calling suspend(data) from inside execute, releasing its concurrency slot; resume it via mastra.backgroundTaskManager.resume(taskId, resumeData), which populates resumeData in the tool's context on the resumed run. Use agent.resumeStreamUntilIdle(resumeData, { runId, toolCallId, memory }) to continue immediately on the same SSE connection. Register lifecycle callbacks at the manager level via onTaskComplete and onTaskFailed.
Result
The tool returns an immediate acknowledgement, the LLM continues responding, and the task runs to completion in the background. When it finishes, its result is written to memory and agent.streamUntilIdle() re-invokes the agent automatically so the result is processed in the same call. Tasks persist across process restarts when storage is configured.
Applicability
Added in @mastra/[email protected]. Requires a configured storage backend on the Mastra instance. Tasks are persisted so they survive process restarts.
Original content
Background tasks
Added in: @mastra/[email protected]
Background tasks let an agent dispatch a long-running tool call without blocking the agentic loop. The tool returns an immediate acknowledgement, the LLM continues responding, and the task runs to completion in the background. When it finishes, its result is written to memory, and if you use streamUntilIdle() the agent is re-invoked automatically so the result is processed in the same call.
When to use background tasks
Use background tasks when a tool call may take long enough that the user shouldn’t wait for it before seeing a response. Common cases:
- Subagent delegations that themselves run multi-step research or writing.
- Tool calls that hit slow external services, queues, or large data jobs.
- Workflows triggered from a tool call that may take minutes to complete.
For tool calls that return quickly, foreground execution using agent.stream() and agent.generate() is simpler.
Note: Background tasks require a configured storage backend on the Mastra instance. Tasks are persisted so they survive process restarts.
Quickstart
Background tasks are off by default. Enable them by setting backgroundTasks.enabled on the Mastra instance:
import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'
export const mastra = new Mastra({
storage: new LibSQLStore({ id: 'storage', url: 'file:mastra.db' }),
backgroundTasks: {
enabled: true,
globalConcurrency: 10,
perAgentConcurrency: 5,
backpressure: 'queue',
defaultTimeoutMs: 300_000,
},
})
The full set of options is listed in the backgroundTasks configuration reference.
Run a tool in the background
Enabling the manager doesn't run anything in the background by itself; every tool defaults to foreground execution. You can run a tool in the background at one of three layers, in priority order:
- LLM per-call override: the model decides the call should run in the background and includes a _background field in the tool arguments.
- Agent-level config: the agent declares which of its tools are background-eligible.
- Tool-level config: the tool declares itself background-eligible.
Tool-level
Set backgroundTasks.enabled: true on the tool definition. Tools opted in at this layer run in the background whenever called by an agent that has the manager enabled.
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'
export const researchTool = createTool({
id: 'research',
description: 'Run a long research job',
inputSchema: z.object({ topic: z.string() }),
backgroundTasks: {
enabled: true,
timeoutMs: 600_000,
maxRetries: 1,
},
execute: async ({ context }) => {
// ...
},
})
Agent-level
Use backgroundTasks.tools on the agent to opt in specific tools, override timeouts for individual tools, or run all background-eligible tools in the background. Use disabled: true to short-circuit background dispatch for the agent entirely.
import { Agent } from '@mastra/core/agent'
export const researcher = new Agent({
id: 'researcher',
instructions: 'You research topics and answer questions.',
model: 'openai/gpt-5.4',
tools: { researchTool, summarizeTool },
backgroundTasks: {
tools: {
researchTool: { enabled: true, timeoutMs: 600_000 },
summarizeTool: false,
},
},
})
Set tools: 'all' to opt in every tool the agent has.
LLM per-call override
When a tool is registered on an agent that has background tasks enabled, the model can include a _background field in the tool arguments to override the resolved configuration for that specific call. The model includes only what it wants to override; all fields in _background are optional. The override is stripped from the arguments before the tool runs.
{
"topic": "solana",
"_background": { "enabled": true, "timeoutMs": 900_000 }
}Resolution order
When a tool call is dispatched, the resolved background config is computed in this priority order:
- LLM _background override (if present in the call's arguments).
- Agent-level backgroundTasks.tools entry for the tool.
- Tool-level backgroundTasks config.
- Manager defaults (defaultTimeoutMs, defaultRetries).
If the agent has backgroundTasks.disabled: true, every tool call runs synchronously regardless of the layers above.
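To make the resolution concrete, here's a hypothetical call where all three layers set a timeout (all values are illustrative):

```ts
// Tool-level:  backgroundTasks: { enabled: true, timeoutMs: 600_000 }
// Agent-level: backgroundTasks: { tools: { researchTool: { enabled: true, timeoutMs: 300_000 } } }
// LLM call:    { "topic": "solana", "_background": { "timeoutMs": 900000 } }
//
// Resolved config for this call: { enabled: true, timeoutMs: 900_000 }
// timeoutMs comes from the LLM override; enabled falls through to the
// agent entry; anything still unset falls back to the manager defaults.
```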
Background task stream chunks
When a tool call dispatches as a background task, two streams may surface lifecycle events for it: the agent’s own stream and the backgroundTaskManager.stream() SSE stream. Each stream covers a different set of chunk types:
| Chunk type | When it fires | Emitted by |
|---|---|---|
| background-task-started | The task has been enqueued and assigned a taskId. | Agent stream |
| background-task-running | A worker picked up the task and execution started. | Manager stream |
| background-task-progress | Reports the number of currently running background tasks. | Agent stream |
| background-task-output | A streamed output chunk from the task's execute. | Manager stream |
| background-task-completed | The task finished successfully; payload.result matches the eventual tool result. | Manager stream |
| background-task-failed | The task threw or timed out. | Manager stream |
| background-task-cancelled | The task was cancelled before completing. | Manager stream |
| background-task-suspended | The tool called suspend() from inside its execute. | Manager stream |
| background-task-resumed | A suspended task was resumed via manager.resume(taskId, resumeData). | Manager stream |
agent.stream().fullStream only emits the agent-loop chunks (background-task-started, background-task-progress) on its own. agent.streamUntilIdle() emits the same two chunks and additionally subscribes to the manager pubsub for the run’s memory scope and pipes the seven manager chunks (background-task-running, background-task-output, background-task-completed, background-task-failed, background-task-cancelled, background-task-suspended, background-task-resumed) into the same fullStream.
backgroundTaskManager.stream() only emits the seven manager chunks.
The full payload shapes are documented in the background task chunks reference.
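As a minimal sketch of consuming the manager stream (assuming stream() yields chunk objects with the type and payload fields described above; see the chunks reference for exact shapes):

```ts
// Watch manager-side lifecycle chunks for dispatched background tasks.
for await (const chunk of mastra.backgroundTaskManager.stream()) {
  switch (chunk.type) {
    case 'background-task-completed':
      // payload.result matches the eventual tool result
      console.log('task finished', chunk.payload.result)
      break
    case 'background-task-failed':
      console.error('task failed', chunk.payload)
      break
  }
}
```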
Keep the agent stream open with streamUntilIdle()
agent.stream() returns once the LLM emits a final response even if a background task is still running. Use agent.streamUntilIdle() when you want the stream to stay open until every dispatched background task has completed and the LLM has had a chance to respond to the result:
const stream = await agent.streamUntilIdle('Research solana for me', {
memory: { thread: 't1', resource: 'u1' },
maxIdleMs: 5 * 60_000,
})
for await (const chunk of stream.fullStream) {
// chunks from the initial turn AND any continuation turns triggered by
// background task completions flow through here
}
When a background task completes, the result is injected into the agent memory and streamUntilIdle() re-enters the agentic loop so the LLM can react to it. The stream closes when no tasks are running and no completions are queued.
maxIdleMs caps how long the stream waits between turns. The timer only runs while the wrapper is between turns, so a slow first token won’t close the stream. The default is 5 minutes.
Note: Visit Agent.streamUntilIdle() for the full API.
Aggregate properties
streamUntilIdle() returns a MastraModelOutput that looks like the one from stream(), but only fullStream spans the initial turn and any auto-continuations. Aggregate properties (text, toolCalls, toolResults, finishReason, messageList, getFullOutput()) still resolve against the first turn’s internal buffer. If you need an aggregate view across continuations, consume fullStream yourself and accumulate.
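For example, a sketch that accumulates assistant text across every turn (assuming text deltas arrive as chunks with type 'text-delta' and a payload.text field; check the stream chunk types reference for the exact shape):

```ts
// Accumulate text across the initial turn and any continuation turns
// triggered by background task completions.
let fullText = ''
for await (const chunk of stream.fullStream) {
  if (chunk.type === 'text-delta') {
    fullText += chunk.payload.text
  }
}
console.log(fullText)
```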
Subagents in the background
Subagent invocations are dispatched as tool calls under the hood, so the same background configuration applies. The recommended pattern is to opt each subagent in on the supervisor; it's clearer and lets you tune timeoutMs per subagent in one place:
import { Agent } from '@mastra/core/agent'
const supervisor = new Agent({
id: 'supervisor',
instructions: 'Coordinate research and writing using the available agents.',
model: 'openai/gpt-5.4',
agents: { researchAgent, writingAgent },
backgroundTasks: {
tools: {
researchAgent: { enabled: true, timeoutMs: 900_000 },
writingAgent: { enabled: true, timeoutMs: 900_000 },
},
},
})
const stream = await supervisor.streamUntilIdle('Research AI in education and write an article', {
memory: { thread: 't1', resource: 'u1' },
})
Inheriting from the subagent
If a subagent isn't listed under the supervisor's backgroundTasks.tools but has background-eligible tools of its own (either via tool-level backgroundTasks.enabled: true or its own backgroundTasks.tools entry), the framework still dispatches the entire subagent invocation as a background task. The supervisor inherits the subagent's intent: the subagent itself becomes the background task, and its inner tools run in the foreground inside the subagent's loop.
The background config used for the inherited dispatch (for example waitTimeoutMs) is derived from the subagent’s own backgroundTasks config.
const researchAgent = new Agent({
id: 'research-agent',
description: 'Gathers factual information.',
model: 'openai/gpt-5-mini',
tools: { deepResearchTool },
backgroundTasks: {
tools: {
deepResearchTool: { enabled: true, timeoutMs: 600_000 },
},
waitTimeoutMs: 900_000,
},
})
When this researchAgent is delegated to from a supervisor that has no backgroundTasks configuration for it, the supervisor still dispatches the whole researchAgent invocation as a background task, and deepResearchTool runs in the foreground inside that invocation instead of dispatching its own nested background task.
Use this pattern when you want a subagent to behave consistently in the background regardless of which supervisor invokes it. Use the supervisor-side opt-in (above) when you want to tune background behavior centrally per supervisor.
Suspending and resuming
A background task can pause itself mid-execution and wait for an external signal before continuing. This is useful for human approvals, webhooks, or any flow where the next step depends on data that arrives later.
A tool calls suspend(data) from inside its execute, which:
- Persists status: 'suspended' and the data payload on the task record.
- Saves the workflow snapshot so the run survives process restarts.
- Emits a background-task-suspended chunk on the manager stream.
- Releases the concurrency slot so other tasks can run.
Resume the task with mastra.backgroundTaskManager.resume(taskId, resumeData). The resumeData arrives in the tool’s execute options on the resumed run, and the task transitions back to running.
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'
export const reviewTool = createTool({
id: 'review',
description: 'Submit a draft for human review.',
inputSchema: z.object({ draft: z.string() }),
outputSchema: z.object({ approvedBy: z.string(), edits: z.string().optional() }),
  backgroundTasks: { enabled: true },
  execute: async ({ context, suspend, resumeData }) => {
    if (!resumeData) {
      // First run: park the task until a reviewer responds.
      await suspend?.({ awaiting: 'approval', draft: context.draft })
      return { approvedBy: '', edits: undefined }
    }
    // Resumed run: resumeData carries the reviewer's decision.
    const { reviewer, edits } = resumeData as { reviewer: string; edits?: string }
    return { approvedBy: reviewer, edits }
  },
})
The first invocation of execute sees resumeData === undefined and calls suspend. After the task is resumed, the runtime restarts the tool with resumeData populated; the if branch falls through and the tool returns its real result.
To resume the task once an approval arrives:
await mastra.backgroundTaskManager?.resume(taskId, {
reviewer: '[email protected]',
edits: 'Reworded paragraph 3.',
})
What happens to the agent loop
When a task suspends mid-streamUntilIdle(), the wrapper treats it as terminal for the current iteration and closes. To continue the agent immediately when the resume payload is in hand, call agent.resumeStreamUntilIdle(resumeData, { runId, toolCallId, memory }): the resumed background task runs to completion, its result lands in the message list, and the agent runs a follow-up turn, all on the same SSE connection. If you'd rather drive the resume out-of-band, call mastra.backgroundTaskManager.resume(taskId, resumeData) directly; the result still writes into the thread for the next user turn to pick up.
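A minimal sketch of the in-band path, assuming runId and toolCallId were captured earlier (for example from the background-task-suspended chunk; see the chunks reference for its payload shape):

```ts
// Continue the suspended run on the same SSE connection once the
// approval payload is in hand.
const stream = await agent.resumeStreamUntilIdle(
  { reviewer: '[email protected]', edits: 'Reworded paragraph 3.' },
  { runId, toolCallId, memory: { thread: 't1', resource: 'u1' } },
)
for await (const chunk of stream.fullStream) {
  // The resumed task's result and the follow-up turn stream through here.
}
```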
Re-registering the executor on resume
The manager keeps tool executors in process memory. If the process restarts while a task is suspended, the executor closure is gone — the caller of resume() must re-register it first via manager.registerTaskContext(taskId, ...). Tasks dispatched and resumed inside the same process don’t need this.
Cancelling a suspended task
manager.cancel(taskId) works against suspended tasks the same way it works for running ones: the row flips to cancelled, the workflow snapshot is cleaned up, and a task.cancelled event fires.
Lifecycle callbacks
Each layer can register terminal-state callbacks. They don’t replace one another, and success/failure hooks fire for their respective outcomes:
- Tool-level backgroundTasks.onComplete/onFailed: scoped to one tool.
- Agent-level backgroundTasks.onTaskComplete/onTaskFailed: scoped to all tasks dispatched by this agent.
- Manager-level onTaskComplete/onTaskFailed: scoped globally.
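The manager-level hooks are configured on the Mastra instance, as in the example that follows this sketch. For the tool and agent layers, a minimal sketch (assuming the callbacks receive the same task record as the manager-level hooks):

```ts
import { Agent } from '@mastra/core/agent'
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'

// Tool-level hooks: fire only for this tool's background runs.
export const researchTool = createTool({
  id: 'research',
  description: 'Run a long research job',
  inputSchema: z.object({ topic: z.string() }),
  backgroundTasks: {
    enabled: true,
    onComplete: task => console.log('research finished', task.id),
    onFailed: task => console.error('research failed', task.error),
  },
  execute: async ({ context }) => {
    // ...
  },
})

// Agent-level hooks: fire for every task this agent dispatches.
export const researcher = new Agent({
  id: 'researcher',
  instructions: 'You research topics and answer questions.',
  model: 'openai/gpt-5.4',
  tools: { researchTool },
  backgroundTasks: {
    tools: { researchTool: { enabled: true } },
    onTaskComplete: task => console.log('task done', task.id, task.toolName),
    onTaskFailed: task => console.error('task failed', task.id, task.error),
  },
})
```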
export const mastra = new Mastra({
storage,
backgroundTasks: {
enabled: true,
onTaskComplete: task => {
logger.info('Background task complete', { taskId: task.id, toolName: task.toolName })
},
onTaskFailed: task => {
logger.error('Background task failed', { taskId: task.id, error: task.error })
},
},
})
Related
- Agent.streamUntilIdle() reference
- backgroundTasks configuration reference
- Supervisor agents
- Stream chunk types
- Storage
Links
- Platform: Dev Framework · Mastra
- Source: https://mastra.ai/docs/agents/background-tasks
See also: