Agent Approval - Human-in-the-loop Control for AI Agents
Trust: ★★★☆☆ (0.90) · 0 validations · factual
Published: 2026-05-09 · Source: crawler_authoritative
Situation
Developers building AI agents with the Mastra framework who need human oversight for sensitive or costly tool operations
Insight
Mastra provides two mechanisms for pausing AI agent tool execution: pre-execution approval (pauses before the execute function runs) and runtime suspension (pauses during execution via a suspend() call). Pre-execution approval uses two flags combined with OR logic: requireToolApproval at the agent level pauses every tool call, while requireApproval at the tool level pauses only that specific tool. Tools can also call context.agent.suspend() mid-execution, emitting tool-call-suspended chunks with a payload defined by suspendSchema. The stream emits tool-call-approval chunks containing toolCallId, toolName, and args. For supervisor agents coordinating subagents, approval requests propagate up through the delegation chain to surface at the supervisor level. Automatic tool resumption (autoResumeSuspendedTools) detects suspended tools from message history on the next user message and extracts resumeData based on resumeSchema; it requires Memory to be configured and the same thread context.
Action
Set requireApproval: true on individual tool definitions or requireToolApproval: true in stream()/generate() options. For stream(), check for chunk.type === 'tool-call-approval' or 'tool-call-suspended', then call agent.approveToolCall({ runId }) or agent.declineToolCall({ runId }). For runtime suspend, call agent.resumeStream(resumeData, { runId }). For generate(), check output.finishReason === 'suspended', then use agent.approveToolCallGenerate({ runId, toolCallId }) or agent.declineToolCallGenerate({ runId, toolCallId }). For automatic resumption, set autoResumeSuspendedTools: true in defaultOptions, configure Memory, and define resumeSchema on tools. Pass toolCallId when multiple tools may be pending simultaneously.
Applicability conditions
A storage provider must be configured on the Mastra instance or a 'snapshot not found' error occurs. For auto-resume: Memory must be configured, the follow-up must use the same thread, and resumeSchema must be defined. In supervisor agents, toolCallId is required when multiple tool calls are pending.
Original content
Agent approval
Agents sometimes require the same human-in-the-loop oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call before it executes so a human can approve or decline it, or let tools suspend themselves to request additional context from the user.
When to use agent approval
- Destructive or irreversible actions such as deleting records, sending emails, or processing payments.
- Cost-heavy operations like calling expensive third-party APIs where you want to verify arguments first.
- Conditional confirmation where a tool starts executing and then discovers it needs the user to confirm or supply extra data before finishing.
Quickstart
Mark a tool with requireApproval: true, then check for the tool-call-approval chunk in the stream to approve or decline:
import { Agent } from '@mastra/core/agent'
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'
const deleteTool = createTool({
id: 'delete-record',
description: 'Delete a record by ID',
inputSchema: z.object({ id: z.string() }),
outputSchema: z.object({ deleted: z.boolean() }),
requireApproval: true,
execute: async ({ id }) => {
await db.delete(id)
return { deleted: true }
},
})
const agent = new Agent({
id: 'my-agent',
name: 'My Agent',
model: 'openai/gpt-5-mini',
tools: { deleteTool },
})
const stream = await agent.stream('Delete record abc-123')
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-approval') {
const approved = await agent.approveToolCall({ runId: stream.runId })
for await (const c of approved.textStream) process.stdout.write(c)
}
}
Note: Agent approval uses snapshots to capture request state. Configure a storage provider on your Mastra instance or you’ll see a “snapshot not found” error.
How approval works
Mastra offers two distinct mechanisms for pausing tool calls: pre-execution approval and runtime suspension.
Pre-execution approval
Pre-execution approval pauses a tool call before its execute function runs. The LLM still decides which tool to call and provides arguments, but execute doesn’t run until you explicitly approve.
Two flags control this, combined with OR logic. If either is true, the call pauses:
| Flag | Where to set it | Scope |
|---|---|---|
| requireToolApproval: true | stream() / generate() options | Pauses every tool call for that request |
| requireApproval: true | createTool() definition | Pauses calls to that specific tool |
The stream emits a tool-call-approval chunk containing the toolCallId, toolName, and args. Call approveToolCall() or declineToolCall() with the stream’s runId to continue:
const stream = await agent.stream("What's the weather in London?", {
requireToolApproval: true,
})
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-approval') {
console.log('Tool:', chunk.payload.toolName)
console.log('Args:', chunk.payload.args)
// Approve
const approved = await agent.approveToolCall({ runId: stream.runId })
for await (const c of approved.textStream) process.stdout.write(c)
// Or decline
const declined = await agent.declineToolCall({ runId: stream.runId })
for await (const c of declined.textStream) process.stdout.write(c)
}
}
Runtime suspension with suspend()
A tool can also pause during its execute function by calling suspend(). This is useful when the tool starts running and then discovers it needs additional user input or confirmation before it can finish.
The stream emits a tool-call-suspended chunk with a custom payload defined by the tool’s suspendSchema. You resume by calling resumeStream() with data matching the tool’s resumeSchema.
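The consumer side of this protocol can be sketched with a mock stream. This is a hypothetical illustration only: the Chunk type and mockStream below are stand-ins, not Mastra APIs; only the chunk names (tool-call-suspended, suspendPayload) come from the docs above.

```typescript
// Hypothetical sketch of consuming the suspend/resume chunk flow.
// Chunk shapes are illustrative, not the Mastra type definitions.
type SuspendedChunk = {
  type: 'tool-call-suspended'
  payload: { toolCallId: string; suspendPayload: { reason: string } }
}
type TextChunk = { type: 'text-delta'; payload: { text: string } }
type Chunk = SuspendedChunk | TextChunk

// Simulates a stream in which a tool called suspend({ reason: ... }).
async function* mockStream(): AsyncGenerator<Chunk> {
  yield { type: 'text-delta', payload: { text: 'Checking...' } }
  yield {
    type: 'tool-call-suspended',
    payload: { toolCallId: 'tc-1', suspendPayload: { reason: 'Approval required.' } },
  }
}

// Collects suspension payloads; a real consumer would call
// agent.resumeStream(resumeData, { runId }) at this point instead.
async function collectSuspensions(stream: AsyncGenerator<Chunk>): Promise<string[]> {
  const reasons: string[] = []
  for await (const chunk of stream) {
    if (chunk.type === 'tool-call-suspended') {
      reasons.push(chunk.payload.suspendPayload.reason)
    }
  }
  return reasons
}
```

In a real application the collected suspendPayload would be surfaced in the UI, and the user's answer (shaped by resumeSchema) passed to resumeStream().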
Tool approval with generate()
Tool approval also works with generate() for non-streaming use cases. When a tool requires approval, generate() returns immediately with finishReason: 'suspended', a suspendPayload containing the tool call details (toolCallId, toolName, args), and a runId:
const output = await agent.generate('Find user John', {
requireToolApproval: true,
})
if (output.finishReason === 'suspended') {
console.log('Tool requires approval:', output.suspendPayload.toolName)
// Approve
const result = await agent.approveToolCallGenerate({
runId: output.runId,
toolCallId: output.suspendPayload.toolCallId,
})
console.log('Final result:', result.text)
// Or decline
const declined = await agent.declineToolCallGenerate({
runId: output.runId,
toolCallId: output.suspendPayload.toolCallId,
})
}
Stream vs generate comparison
| Aspect | stream() | generate() |
|---|---|---|
| Response type | Streaming chunks | Complete response |
| Approval detection | tool-call-approval chunk | finishReason: 'suspended' |
| Approve method | approveToolCall({ runId }) | approveToolCallGenerate({ runId, toolCallId }) |
| Decline method | declineToolCall({ runId }) | declineToolCallGenerate({ runId, toolCallId }) |
| Result | Stream to iterate | Full output object |
Note: toolCallId is optional on all four methods. Pass it when multiple tool calls may be pending at the same time (common in supervisor agents). When omitted, the agent resumes the most recent suspended tool call.
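When several approvals can be pending at once, one option is to index them by toolCallId as chunks arrive so each can be resolved explicitly. In this sketch only the chunk payload shape (toolCallId, toolName, args) comes from the docs; the Map-based bookkeeping is an illustrative pattern, not a Mastra API.

```typescript
// Hypothetical bookkeeping for multiple pending approvals.
// The Map-based tracking is an assumption for illustration.
type ApprovalChunk = {
  type: 'tool-call-approval'
  payload: { toolCallId: string; toolName: string; args: Record<string, unknown> }
}

// Index pending approvals by toolCallId so each can later be resolved
// individually, e.g. agent.approveToolCall({ runId, toolCallId }).
function indexPending(chunks: ApprovalChunk[]): Map<string, ApprovalChunk['payload']> {
  const pending = new Map<string, ApprovalChunk['payload']>()
  for (const chunk of chunks) {
    if (chunk.type === 'tool-call-approval') {
      pending.set(chunk.payload.toolCallId, chunk.payload)
    }
  }
  return pending
}
```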
Tool-level approval
Instead of pausing every tool call at the agent level, you can mark individual tools as requiring approval. This gives you granular control: only specific tools pause, while others execute immediately.
Approval using requireApproval
Set requireApproval: true on a tool definition. The tool pauses before execution regardless of whether requireToolApproval is set on the agent:
export const testTool = createTool({
id: 'test-tool',
description: 'Fetches weather for a location',
inputSchema: z.object({
location: z.string(),
}),
outputSchema: z.object({
weather: z.string(),
}),
resumeSchema: z.object({
approved: z.boolean(),
}),
execute: async inputData => {
const response = await fetch(`https://wttr.in/${inputData.location}?format=3`)
const weather = await response.text()
return { weather }
},
requireApproval: true,
})
When requireApproval is true, the stream emits tool-call-approval chunks the same way agent-level approval does. Use approveToolCall() or declineToolCall() to continue:
const stream = await agent.stream("What's the weather in London?")
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-approval') {
console.log('Approval required for:', chunk.payload.toolName)
}
}
const handleApproval = async () => {
const approvedStream = await agent.approveToolCall({ runId: stream.runId })
for await (const chunk of approvedStream.textStream) {
process.stdout.write(chunk)
}
process.stdout.write('\n')
}
Approval using suspend()
With this approach, neither the agent nor the tool uses requireApproval. Instead, the tool’s execute function calls suspend() to pause at a specific point and return context or confirmation prompts to the user. This is useful when approval depends on runtime conditions rather than being unconditional.
export const testToolB = createTool({
id: 'test-tool-b',
description: 'Fetches weather for a location',
inputSchema: z.object({
location: z.string(),
}),
outputSchema: z.object({
weather: z.string(),
}),
resumeSchema: z.object({
approved: z.boolean(),
}),
suspendSchema: z.object({
reason: z.string(),
}),
execute: async (inputData, context) => {
const { resumeData: { approved } = {}, suspend } = context?.agent ?? {}
if (!approved) {
return suspend?.({ reason: 'Approval required.' })
}
const response = await fetch(`https://wttr.in/${inputData.location}?format=3`)
const weather = await response.text()
return { weather }
},
})
With this approach the stream includes a tool-call-suspended chunk, and the suspendPayload contains the reason defined by the tool’s suspendSchema. Call resumeStream with the resumeSchema data and runId to continue:
const stream = await agent.stream("What's the weather in London?")
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-suspended') {
console.log(chunk.payload.suspendPayload)
}
}
const handleResume = async () => {
const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId })
for await (const chunk of resumedStream.textStream) {
process.stdout.write(chunk)
}
process.stdout.write('\n')
}
Automatic tool resumption
When using tools that call suspend(), you can enable automatic resumption so the agent resumes suspended tools based on the user’s next message. Set autoResumeSuspendedTools to true in the agent’s default options or per-request:
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
const agent = new Agent({
id: 'my-agent',
name: 'My Agent',
instructions: 'You are a helpful assistant',
model: 'openai/gpt-5-mini',
tools: { weatherTool },
memory: new Memory(),
defaultOptions: {
autoResumeSuspendedTools: true,
},
})
When enabled, the agent detects suspended tools from message history on the next user message, extracts resumeData based on the tool’s resumeSchema, and automatically resumes the tool. The following example shows a complete conversational flow:
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'
const weatherTool = createTool({
id: 'weather-tool',
description: 'Fetches weather for a city',
inputSchema: z.object({
city: z.string(),
}),
outputSchema: z.object({
weather: z.string(),
}),
suspendSchema: z.object({
message: z.string(),
}),
resumeSchema: z.object({
city: z.string(),
}),
execute: async (inputData, context) => {
const { resumeData, suspend } = context?.agent ?? {}
// If no city provided, ask the user
if (!inputData.city && !resumeData?.city) {
return suspend?.({ message: 'What city do you want to know the weather for?' })
}
const city = resumeData?.city ?? inputData.city
const response = await fetch(`https://wttr.in/${city}?format=3`)
const weather = await response.text()
return { weather: `${city}: ${weather}` }
},
})
const stream = await agent.stream("What's the weather like?")
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-suspended') {
console.log(chunk.payload.suspendPayload)
}
}
// User sends follow-up on the same thread
const resumedStream = await agent.stream('San Francisco')
for await (const chunk of resumedStream.textStream) {
process.stdout.write(chunk)
}
User: "What's the weather like?"
Agent: "What city do you want to know the weather for?"
User: "San Francisco"
Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
The second message automatically resumes the suspended tool. The agent extracts { city: "San Francisco" } from the user’s message and passes it as resumeData.
Requirements
For automatic tool resumption to work:
- Memory configured: The agent needs memory to track suspended tools across messages
- Same thread: The follow-up message must use the same memory thread and resource identifiers
- resumeSchema defined: The tool must define a resumeSchema so the agent knows what data structure to extract from the user’s message
Manual vs automatic resumption
| Approach | Use case |
|---|---|
| Manual (resumeStream()) | Programmatic control, webhooks, button clicks, external triggers |
| Automatic (autoResumeSuspendedTools) | Conversational flows where users provide resume data in natural language |
Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
Tool approval: Supervisor agents
A supervisor agent coordinates multiple subagents using .stream() or .generate(). When a subagent calls a tool that requires approval, the request propagates up through the delegation chain and surfaces at the supervisor level:
- The supervisor delegates a task to a subagent.
- The subagent calls a tool that has requireApproval: true or uses suspend().
- The approval request bubbles up to the supervisor.
- You approve or decline at the supervisor level.
- The decision propagates back down to the subagent.
Tool approvals also propagate through multiple levels of delegation. If a supervisor delegates to subagent A, which delegates to subagent B that has a tool with requireApproval: true, the approval request still surfaces at the top-level supervisor.
Approve and decline in supervisor agents
The example below creates a subagent with a tool requiring approval. When the tool triggers an approval request, it surfaces in the supervisor’s stream as a tool-call-approval chunk:
import { Agent } from '@mastra/core/agent'
import { createTool } from '@mastra/core/tools'
import { Memory } from '@mastra/memory'
import { z } from 'zod'
const findUserTool = createTool({
id: 'find-user',
description: 'Finds user by ID in the database',
inputSchema: z.object({
userId: z.string(),
}),
outputSchema: z.object({
user: z.object({
id: z.string(),
name: z.string(),
email: z.string(),
}),
}),
requireApproval: true,
execute: async input => {
const user = await database.findUser(input.userId)
return { user }
},
})
const dataAgent = new Agent({
id: 'data-agent',
name: 'Data Agent',
description: 'Handles database queries and user data retrieval',
model: 'openai/gpt-5-mini',
tools: { findUserTool },
})
const supervisorAgent = new Agent({
id: 'supervisor',
name: 'Supervisor Agent',
instructions: `You coordinate data retrieval tasks.
Delegate to data-agent for user lookups.`,
model: 'openai/gpt-5.4',
agents: { dataAgent },
memory: new Memory(),
})
const stream = await supervisorAgent.stream('Find user with ID 12345')
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-approval') {
console.log('Tool requires approval:', chunk.payload.toolName)
console.log('Arguments:', chunk.payload.args)
// Approve the tool call
const resumeStream = await supervisorAgent.approveToolCall({
runId: stream.runId,
toolCallId: chunk.payload.toolCallId,
})
for await (const resumeChunk of resumeStream.textStream) {
process.stdout.write(resumeChunk)
}
// To decline instead, use:
const declineStream = await supervisorAgent.declineToolCall({
runId: stream.runId,
toolCallId: chunk.payload.toolCallId,
})
}
}
Use suspend() in supervisor agents
Tools can also use suspend() to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way requireApproval does: the suspension surfaces at the supervisor level:
const conditionalTool = createTool({
id: 'conditional-operation',
description: 'Performs an operation that may require confirmation',
inputSchema: z.object({
operation: z.string(),
}),
suspendSchema: z.object({
message: z.string(),
}),
resumeSchema: z.object({
confirmed: z.boolean(),
}),
execute: async (input, context) => {
const { resumeData } = context?.agent ?? {}
if (!resumeData?.confirmed) {
return context?.agent?.suspend({
message: `Confirm: ${input.operation}?`,
})
}
// Proceed with operation
return await performOperation(input.operation)
},
})
// When using this tool through a subagent in supervisor agents
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tool-call-suspended') {
console.log('Tool suspended:', chunk.payload.suspendPayload.message)
// Resume with confirmation
const resumeStream = await supervisorAgent.resumeStream(
{ confirmed: true },
{ runId: stream.runId },
)
for await (const resumeChunk of resumeStream.textStream) {
process.stdout.write(resumeChunk)
}
}
}
Supervisor approval with generate()
Tool approval propagation also works with generate() in supervisor agents:
const output = await supervisorAgent.generate('Find user with ID 12345', {
maxSteps: 10,
})
if (output.finishReason === 'suspended') {
console.log('Tool requires approval:', output.suspendPayload.toolName)
// Approve
const result = await supervisorAgent.approveToolCallGenerate({
runId: output.runId,
toolCallId: output.suspendPayload.toolCallId,
})
console.log('Final result:', result.text)
}
Links
- Platform: Mastra
- Source: https://mastra.ai/docs/agents/agent-approval
See also:
- RequestContext API Documentation
- [[entries/source-url-https-mastra-ai-docs-observability-overview-title-observability|Source URL: https://mastra.ai/docs/observability/overview Title: Observability overview]]
- Workflow Runners - Mastra Deployment Options
- [[entries/source-url-https-mastra-ai-docs-agents-response-caching-title-response-caching|Source URL: https://mastra.ai/docs/agents/response-caching Title: Response caching]]
- [[entries/source-url-https-platform-claude-com-docs-en-agents-and-tools-tool-use-tool-use|Source URL: https://platform.claude.com/docs/en/agents-and-tools/tool-use/tool-use-with-prompt-cachi]]