Processors in Mastra Agents
Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference
Published: 2026-05-10 · Source: crawler_authoritative
Situation
A Mastra agent SDK guide for configuring processors to transform, validate, or control messages in the agent execution pipeline. Relevant for developers building agents with input/output guardrails, content moderation, token management, or custom message-processing logic.
Insight
Processors in Mastra are middleware components that intercept and transform messages at specific points in agent execution. An Agent exposes three processor arrays: inputProcessors (run before messages reach the LLM), outputProcessors (run after the LLM response, before returning to users), and errorProcessors (run when LLM API calls throw).
Built-in processors include:
- ModerationProcessor for content safety (categories: hate, harassment, violence; threshold 0.7; strategy: block)
- TokenLimiter to prevent context window overflow by removing older messages when the token count exceeds a limit (e.g., new TokenLimiter(127000))
- ToolCallFilter to remove tool calls from LLM input, with a filterAfterToolSteps option
- ToolSearchProcessor for dynamic tool discovery via search_tools and load_tool meta-tools
- ProviderHistoryCompat for cross-provider message compatibility
Custom processors implement the Processor interface with these methods:
- processInput() receives a messages snapshot and returns MastraDBMessage[] for a one-time transformation before the agentic loop
- processInputStep() runs at each step of the agentic loop with stepNumber, model, and toolChoice parameters
- processLLMRequest() runs after MessageList conversion to provider format and before the provider call (transient changes don't persist to memory)
- processOutputResult() transforms final messages and receives a result object with text, usage, finishReason, and steps
- processOutputStream() filters streaming chunks (return null to drop a chunk, return a ChunkType to pass it through)
- processOutputStep() runs after each LLM step for validation, with retry capability
Text content lives in message.content.parts as an array, not on message.content directly; iterate parts and filter by part.type === 'text'. Processors can also emit custom data-* chunks via writer.custom() for streaming. The abort() function with the retry: true option enables quality checks and iterative refinement, respecting the maxProcessorRetries limit. onViolation callbacks fire on any policy violation, for logging or alerting. Memory processors are automatically added to the pipeline: [Memory Processors] → [Your inputProcessors] for input, and [Your outputProcessors] → [Memory Processors] for output. Workflows created with createWorkflow() can serve as processors for parallel execution with conditional branching. processAPIError() handles API rejections such as context-length-exceeded errors, and the built-in PrefillErrorHandler auto-handles Anthropic prefill errors.
Action
To use a built-in processor, import it from @mastra/core/processors and add it to the inputProcessors, outputProcessors, or errorProcessors array:
new Agent({
  name: 'agent',
  model: 'openai/gpt-5',
  inputProcessors: [
    new ModerationProcessor({ model: 'openai/gpt-5-mini', categories: ['hate'], threshold: 0.7, strategy: 'block' }),
  ],
  outputProcessors: [...],
})
Processors run in array order. To create a custom processor, implement the Processor interface:
export class CustomProcessor implements Processor {
  id = 'custom'
  async processInput({ messages }): Promise<MastraDBMessage[]> {
    return messages.map(msg => ({
      ...msg,
      content: {
        ...msg.content,
        parts: msg.content.parts?.map(part =>
          part.type === 'text' ? { ...part, text: part.text.toLowerCase() } : part,
        ),
      },
    }))
  }
  async processOutputStream({ part, state }): Promise<ChunkType | null> {
    state.wordCount ??= 0
    if (part.type === 'text-delta') state.wordCount += part.payload.text.split(/\s+/).length
    return part
  }
}
For per-step control, implement processInputStep() and return { model, toolChoice, tools, systemMessages } to override step configuration. To request a retry with feedback: abort('Quality too low', { retry: true, metadata: { score } }). For custom stream events, use writer?.custom({ type: 'data-event-name', data: {...} }) and listen for chunk.type === 'data-event-name' in fullStream. Override processors per call: agent.stream('prompt', { inputProcessors: [new TokenLimiter(2000)], maxProcessorRetries: 5 }). Use the prepareStep() callback on generate/stream as shorthand for processInputStep(). For a workflow-as-processor: import { createWorkflow, createStep } from '@mastra/core/workflows'; build const moderationWorkflow = createWorkflow({ id: 'pipeline', inputSchema: ProcessorStepSchema, outputSchema: ProcessorStepSchema }).parallel([createStep(new PIIDetector()), createStep(new PromptInjectionDetector())]).map(...).commit(); then add the workflow to the inputProcessors array.
Result
Processors transform input messages before LLM processing, validate/filter/modify LLM outputs, handle API errors, manage streaming chunks, enable retry mechanisms with abort(), emit custom events to clients, add metadata to messages accessible via response.uiMessages, and persist state across chunks and steps within a single request.
Original content
Processors
Processors transform, validate, or control messages as they pass through an agent. They run at specific points in the agent’s execution pipeline, allowing you to modify inputs before they reach the language model or outputs before they’re returned to users.
Processors are configured as:
- inputProcessors: Run before messages reach the language model.
- outputProcessors: Run after the language model generates a response, but before it's returned to users.
You can use individual Processor objects or compose them into workflows using Mastra’s workflow primitives. Workflows give you advanced control over processor execution order, parallel processing, and conditional logic.
Some processors implement both input and output logic and can be used in either array depending on where the transformation should occur.
Some built-in processors also persist hidden system reminder messages using <system-reminder>...</system-reminder> text plus metadata.systemReminder. These reminders stay available in raw memory history and retry/prompt reconstruction paths, but standard UI-facing message conversions and default memory recall hide them unless you explicitly opt in.
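As a rough illustration, a persisted reminder message might look like the sketch below; the exact stored shape isn't specified here, so the role and metadata values are assumptions:
// Hypothetical shape of a persisted system reminder (field values are assumptions):
const reminder = {
  role: 'system',
  content: {
    parts: [
      { type: 'text', text: '<system-reminder>Working memory was updated.</system-reminder>' },
    ],
    metadata: { systemReminder: true }, // only metadata.systemReminder is named by this doc
  },
}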
When to use processors
Use processors to:
- Normalize or validate user input
- Add guardrails to your agent
- Detect and prevent prompt injection or jailbreak attempts
- Moderate content for safety or compliance
- Transform messages (e.g., translate languages, filter tool calls)
- Limit token usage or message history length
- Redact sensitive information (PII)
- Apply custom business logic to messages
Mastra includes several processors for common use cases. You can also create custom processors for application-specific requirements.
Quickstart
Import and instantiate the processor, then pass it to the agent’s inputProcessors or outputProcessors array:
import { Agent } from '@mastra/core/agent'
import { ModerationProcessor } from '@mastra/core/processors'
export const moderatedAgent = new Agent({
name: 'moderated-agent',
instructions: 'You are a helpful assistant',
model: 'openai/gpt-5-mini',
inputProcessors: [
new ModerationProcessor({
model: 'openai/gpt-5-mini',
categories: ['hate', 'harassment', 'violence'],
threshold: 0.7,
strategy: 'block',
}),
],
})
Execution order
Processors run in the order they appear in the array:
inputProcessors: [new UnicodeNormalizer(), new PromptInjectionDetector(), new ModerationProcessor()]
For output processors, the order determines the sequence of transformations applied to the model's response.
With memory enabled
When memory is enabled on an agent, memory processors are automatically added to the pipeline:
Input processors:
[Memory Processors] → [Your inputProcessors]
Memory loads message history first, then your processors run.
Output processors:
[Your outputProcessors] → [Memory Processors]
Your processors run first, then memory persists messages.
This ordering ensures that if your output guardrail calls abort(), memory processors are skipped and no messages are saved. See Memory Processors for details.
Attach processors to an agent
Processors are configured on the agent through three arrays:
import { Agent } from '@mastra/core/agent'
import { PrefillErrorHandler, TokenLimiter, ModerationProcessor } from '@mastra/core/processors'
const agent = new Agent({
name: 'support-agent',
model: 'openai/gpt-5',
instructions: '...',
inputProcessors: [
new TokenLimiter(4000),
new ModerationProcessor({ model: 'openai/gpt-4.1-nano' }),
],
outputProcessors: [new ModerationProcessor({ model: 'openai/gpt-4.1-nano' })],
errorProcessors: [new PrefillErrorHandler()],
})
inputProcessors run before the LLM. outputProcessors run during and after the LLM response. errorProcessors run when the LLM API call throws, so they can recover from provider errors.
Each array also accepts a function that returns an array, so processors can be built per-request from RequestContext:
new Agent({
// ...
inputProcessors: ({ requestContext }) => {
const limit = requestContext.get('tokenLimit') ?? 4000
return [new TokenLimiter(limit)]
},
})
Override processors per call
agent.generate() and agent.stream() accept the same three arrays. When you pass one, it replaces the matching array on the agent for that call only. Memory, workspace, and other framework-managed processors still run around your array.
await agent.stream('Summarize this', {
inputProcessors: [new TokenLimiter(2000)],
maxProcessorRetries: 5,
})
Create custom processors
Custom processors implement the Processor interface.
Processor methods receive two arguments for accessing the conversation:
- messages: A snapshot array of MastraDBMessage objects for the current stage.
- messageList: The live MessageList instance. Use it to read other stages, or to add, remove, or replace messages in place.
Text lives in message.content.parts, not on message.content itself. Iterate parts and filter by part.type === 'text' to read user or assistant text. A flattened message.content.content string exists for legacy compatibility and can be used as a fallback. See Message arguments in the Processor reference for full details.
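For example, a minimal helper that reads a message's text under these rules; the helper name is illustrative, not part of the SDK:
import type { MastraDBMessage } from '@mastra/core/memory'
// Illustrative helper: prefer text parts, then the legacy flattened string.
function getMessageText(msg: MastraDBMessage): string {
  // Collect the text parts, discriminating on part.type as described above.
  const fromParts = (msg.content.parts ?? [])
    .flatMap(part => (part.type === 'text' ? [part.text] : []))
    .join(' ')
  if (fromParts) return fromParts
  // Legacy flattened-string fallback; may be absent on newer messages.
  return typeof msg.content.content === 'string' ? msg.content.content : ''
}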
Transform input messages
import type { Processor, ProcessInputArgs } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'
export class CustomInputProcessor implements Processor {
id = 'custom-input'
async processInput({ messages }: ProcessInputArgs): Promise<MastraDBMessage[]> {
// Transform messages before they reach the LLM.
// Text lives in content.parts — iterate parts and rewrite text parts only.
return messages.map(msg => ({
...msg,
content: {
...msg.content,
parts: msg.content.parts?.map(part =>
part.type === 'text' ? { ...part, text: part.text.toLowerCase() } : part,
),
},
}))
}
}
The processInput() method receives messages, systemMessages, and an abort() function. Return a MastraDBMessage[] to replace messages, or { messages, systemMessages } to also modify system messages.
See the Processor reference for all available arguments and return types.
Control each step
While processInput() runs once at the start of agent execution, processInputStep() runs at each step of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.
import type {
Processor,
ProcessInputStepArgs,
ProcessInputStepResult,
} from '@mastra/core/processors'
export class DynamicModelProcessor implements Processor {
id = 'dynamic-model'
async processInputStep({
stepNumber,
model,
toolChoice,
messageList,
}: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
// Use a fast model for initial response
if (stepNumber === 0) {
return { model: 'openai/gpt-5-mini' }
}
// Disable tools after 5 steps to force completion
if (stepNumber > 5) {
return { toolChoice: 'none' }
}
// No changes for other steps
return {}
}
}
The method receives the current stepNumber, model, tools, toolChoice, messages, and more. Return an object with any properties you want to override for that step, for example { model, toolChoice, tools, systemMessages }.
See the Processor reference for all available arguments and return types.
Rewrite the LLM request before the provider call
Use processLLMRequest() when you need to rewrite the final prompt that Mastra sends to the model. This hook runs after Mastra converts the MessageList into the provider-facing prompt format (LanguageModelV2Prompt) and immediately before the provider call.
Choose the hook that matches the scope of the change:
- processInput(): Change the conversation once before the agentic loop starts.
- processInputStep(): Change messages or step configuration before each LLM call.
- processLLMRequest(): Change only the outbound prompt for the current provider call.
Changes returned from processLLMRequest() are transient. They don’t persist back to MessageList, memory, UI history, or future provider calls. This makes the hook a good fit for provider compatibility rewrites, role/content normalization, or other model-specific prompt changes that shouldn’t alter stored conversation history.
The method receives prompt, model, stepNumber, steps, state, and the shared processor context. Calling abort() from processLLMRequest() emits the normal tripwire response and stops the call.
See the Processor reference for all available arguments and return types.
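A minimal sketch of the hook, assuming it can return an updated { prompt } and that the provider-facing prompt is an array of role/content entries (both assumptions; check the reference for the exact shapes):
import type { Processor } from '@mastra/core/processors'
export class PromptTrimmer implements Processor {
  id = 'prompt-trimmer'
  async processLLMRequest({ prompt }) {
    // Transient rewrite of the outbound prompt for this provider call only;
    // nothing returned here is written back to MessageList, memory, or UI history.
    const trimmed = prompt.map(entry =>
      entry.role === 'system' && typeof entry.content === 'string'
        ? { ...entry, content: entry.content.trim() }
        : entry,
    )
    return { prompt: trimmed }
  }
}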
Use the prepareStep() callback
The prepareStep() callback on generate() or stream() is a shorthand for processInputStep(). Internally, Mastra wraps it in a processor that calls your function at each step. It accepts the same arguments and return type as processInputStep(), but doesn’t require creating a class:
await agent.generate('Complex task', {
prepareStep: async ({ stepNumber, model }) => {
if (stepNumber === 0) {
return { model: 'openai/gpt-5-mini' }
}
if (stepNumber > 5) {
return { toolChoice: 'none' }
}
},
})
Transform output messages
import type { Processor } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'
export class CustomOutputProcessor implements Processor {
id = 'custom-output'
async processOutputResult({ messages }): Promise<MastraDBMessage[]> {
// Transform messages after the LLM generates them
return messages.filter(msg => msg.role !== 'system')
}
}
The method also receives a result object with the full generation data: text, usage (token counts), finishReason, and steps (each containing toolCalls, toolResults, etc.). Use it to track usage or inspect tool calls:
import type { Processor } from '@mastra/core/processors'
export class UsageTracker implements Processor {
id = 'usage-tracker'
async processOutputResult({ messages, result }) {
console.log(`Tokens: ${result.usage.inputTokens} in, ${result.usage.outputTokens} out`)
console.log(`Finish reason: ${result.finishReason}`)
return messages
}
}
Filter streamed output
The processOutputStream() method transforms or filters streaming chunks before they reach the client:
import type { Processor } from '@mastra/core/processors'
import type { ChunkType } from '@mastra/core/stream'
export class StreamFilter implements Processor {
id = 'stream-filter'
async processOutputStream({ part }): Promise<ChunkType | null> {
// Drop text-delta chunks that contain the word "secret"
if (part.type === 'text-delta' && part.payload.text.includes('secret')) {
return null
}
// Return the (possibly modified) chunk to emit it
return part
}
}
Return values:
- A ChunkType emits that chunk. Return the original part to pass it through unchanged.
- null or undefined drops the chunk. Both behave the same way, so a method that falls through without returning also drops the chunk.
- Dropping only affects one chunk. To stop the stream entirely, call abort().
To also receive custom data-* chunks emitted by tools via writer.custom(), set processDataParts = true on your processor. This lets you inspect, modify, or block tool-emitted data chunks before they reach the client.
Validate each response
The processOutputStep() method runs after each LLM step, allowing you to validate the response and optionally request a retry:
import type { Processor } from '@mastra/core/processors'
export class ResponseValidator implements Processor {
id = 'response-validator'
async processOutputStep({ text, abort, retryCount }) {
const isValid = await validateResponse(text)
if (!isValid && retryCount < 3) {
abort('Response did not meet requirements. Try again.', { retry: true })
}
return []
}
}
For more on retry behavior, see Retry mechanism in Advanced patterns.
Persist data across chunks and steps
Output methods receive a state object that persists for the lifetime of one request. State is keyed by the processor’s id, so each processor sees only its own data, and it’s shared between processOutputStream, processOutputStep, and processOutputResult. A new state object is created for every new agent.generate() or agent.stream() call.
import type { Processor } from '@mastra/core/processors'
export class WordCounter implements Processor {
id = 'word-counter'
async processOutputStream({ part, state }) {
state.wordCount ??= 0
if (part.type === 'text-delta') {
state.wordCount += part.payload.text.split(/\s+/).filter(Boolean).length
}
return part
}
async processOutputResult({ messages, state }) {
console.log(`Total words: ${state.wordCount}`)
return messages
}
}
Built-in utility processors
Mastra provides utility processors for common tasks:
For security and validation processors, see the Guardrails page for input/output guardrails and moderation processors. For memory-specific processors, see the Memory Processors page for processors that handle message history, semantic recall, and working memory.
TokenLimiter
Prevents context window overflow by removing older messages when the total token count exceeds a specified limit. Prioritizes recent messages and preserves system messages.
import { Agent } from '@mastra/core/agent'
import { TokenLimiter } from '@mastra/core/processors'
const agent = new Agent({
name: 'my-agent',
model: 'openai/gpt-5.4',
inputProcessors: [new TokenLimiter(127000)],
})
See the TokenLimiterProcessor reference for custom encoding, strategy, and count mode options.
ToolCallFilter
Removes tool calls and results from messages sent to the LLM, saving tokens on verbose tool interactions. Optionally exclude only specific tools. This filter only affects the LLM input; filtered messages are still saved to memory.
By default, ToolCallFilter filters the initial input before the agent loop starts. Use filterAfterToolSteps to also filter during each loop step while preserving recent tool-producing steps.
new ToolCallFilter({
filterAfterToolSteps: 2,
})
See the ToolCallFilter reference for configuration options and the Memory Processors page for pre-memory filtering.
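To remove only a specific tool's calls, a sketch using the exclude option from the reference (the tool name is illustrative):
new ToolCallFilter({
  // Strip 'webSearch' calls and results from LLM input; other tool interactions stay visible.
  exclude: ['webSearch'],
})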
ToolSearchProcessor
Enables dynamic tool discovery for agents with large tool libraries. Instead of providing all tools upfront, the processor gives the agent search_tools and load_tool meta-tools to find and load tools by keyword on demand, reducing context token usage.
See the ToolSearchProcessor reference for configuration options and usage examples.
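A minimal attachment sketch; the constructor options aren't documented on this page, so the tools argument below is an assumption (see the reference for the real options):
import { Agent } from '@mastra/core/agent'
import { ToolSearchProcessor } from '@mastra/core/processors'
const toolLibrary = { /* your full tool map goes here */ }
const agent = new Agent({
  name: 'tool-heavy-agent',
  model: 'openai/gpt-5',
  inputProcessors: [
    // Assumed option: hand the processor the full library so the agent can
    // discover tools via search_tools and load them via load_tool on demand.
    new ToolSearchProcessor({ tools: toolLibrary }),
  ],
})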
ProviderHistoryCompat
Handles provider-specific history incompatibilities when agents reuse messages across model providers. It can rewrite the outbound LLM request before the provider call, or recover from known provider API errors and retry.
Add ProviderHistoryCompat explicitly when you need provider history compatibility rules, reactive API error recovery, custom compatibility rules, or predictable processor ordering.
See the ProviderHistoryCompat reference for setup, built-in rules, and custom rule options.
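A sketch of explicit attachment; whether it belongs in inputProcessors, errorProcessors, or both depends on which behaviors you need, and the bare constructor call is an assumption (see the reference for setup):
import { Agent } from '@mastra/core/agent'
import { ProviderHistoryCompat } from '@mastra/core/processors'
const agent = new Agent({
  name: 'multi-provider-agent',
  model: 'openai/gpt-5',
  // Proactive path: rewrite the outbound LLM request before the provider call.
  inputProcessors: [new ProviderHistoryCompat()],
  // Reactive path: recover from known provider API errors and retry.
  errorProcessors: [new ProviderHistoryCompat()],
})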
Advanced patterns
Ensure a final response with maxSteps
When using maxSteps to limit agent execution, the agent may return an empty response if it attempts a tool call on the final step. Use processInputStep() to force a text response on the last step:
import type {
Processor,
ProcessInputStepArgs,
ProcessInputStepResult,
} from '@mastra/core/processors'
export class EnsureFinalResponseProcessor implements Processor {
readonly id = 'ensure-final-response'
private maxSteps: number
constructor(maxSteps: number) {
this.maxSteps = maxSteps
}
async processInputStep({
stepNumber,
systemMessages,
}: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
// On the last step, prevent tool calls and instruct the LLM to summarize
if (stepNumber === this.maxSteps - 1) {
return {
tools: {},
toolChoice: 'none',
systemMessages: [
...systemMessages,
{
role: 'system',
content:
'You have reached the maximum number of steps. Summarize your progress so far and provide a best-effort response. If the task is incomplete, clearly indicate what remains to be done.',
},
],
}
}
return {}
}
}
Add it to inputProcessors and pass the same maxSteps value to generate() or stream():
const MAX_STEPS = 5
const agent = new Agent({
inputProcessors: [new EnsureFinalResponseProcessor(MAX_STEPS)],
// ...
})
await agent.generate('Your prompt', { maxSteps: MAX_STEPS })
Emit custom stream events
Output processors receive a writer object that lets you emit custom data chunks back to the client during streaming. This is useful for use cases like streaming moderation results or sending UI update signals without blocking the original stream.
import type { Processor } from '@mastra/core/processors'
export class ModerationProcessor implements Processor {
id = 'moderation'
async processOutputResult({ messages, writer }) {
// Run moderation on the final output
const text = messages
.filter(m => m.role === 'assistant')
.flatMap(m => m.content.parts?.filter(p => p.type === 'text') ?? [])
.map(p => p.text)
.join(' ')
const result = await runModeration(text)
if (result.requiresChange) {
// Emit a custom event to the client with the moderated text
await writer?.custom({
type: 'data-moderation-update',
data: {
originalText: text,
moderatedText: result.moderatedText,
reason: result.reason,
},
})
}
return messages
}
}
On the client, listen for the custom chunk type in the stream:
const stream = await agent.stream('Hello')
for await (const chunk of stream.fullStream) {
if (chunk.type === 'data-moderation-update') {
// Update the UI with moderated text
updateDisplayedMessage(chunk.data.moderatedText)
}
}
Custom chunk types must use the data- prefix (e.g., data-moderation-update, data-status).
By default, processOutputStream() skips data-* chunks so it doesn’t accidentally operate on tool telemetry or other processors’ output. To inspect, modify, or block these chunks in a processor, set processDataParts = true on that processor:
class ModerationCollector implements Processor {
id = 'moderation-collector'
processDataParts = true
async processOutputStream({ part, state }) {
if (part.type === 'data-moderation-update') {
state.warnings ??= []
state.warnings.push(part.data)
}
return part
}
}
Add metadata to messages
You can add custom metadata to messages in processOutputResult. This metadata is accessible via the response object:
import type { Processor } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'
export class MetadataProcessor implements Processor {
id = 'metadata-processor'
async processOutputResult({
messages,
}: {
messages: MastraDBMessage[]
}): Promise<MastraDBMessage[]> {
return messages.map(msg => {
if (msg.role === 'assistant') {
return {
...msg,
content: {
...msg.content,
metadata: {
...msg.content.metadata,
processedAt: new Date().toISOString(),
customData: 'your data here',
},
},
}
}
return msg
})
}
}
Access the metadata with generate():
const result = await agent.generate('Hello')
// The response includes uiMessages with processor-added metadata
const assistantMessage = result.response?.uiMessages?.find(m => m.role === 'assistant')
console.log(assistantMessage?.metadata?.customData)
For streaming, access metadata from the finish chunk payload or the stream.response promise.
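A sketch of the stream.response route, assuming the resolved value exposes the same uiMessages shape as the generate() response (an assumption; the finish chunk payload is the alternative):
const stream = await agent.stream('Hello')
for await (const chunk of stream.fullStream) {
  // Drain the stream; metadata is read after completion below.
}
// Assumption: stream.response resolves to the same response shape as generate().
const response = await stream.response
const assistantMessage = response?.uiMessages?.find(m => m.role === 'assistant')
console.log(assistantMessage?.metadata?.customData)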
Use workflows as processors
You can use Mastra workflows as processors to create complex processing pipelines with parallel execution, conditional branching, and error handling:
import { createWorkflow, createStep } from '@mastra/core/workflows'
import {
ProcessorStepSchema,
PromptInjectionDetector,
PIIDetector,
ModerationProcessor,
} from '@mastra/core/processors'
import { Agent } from '@mastra/core/agent'
// Create a workflow that runs multiple checks in parallel
const moderationWorkflow = createWorkflow({
id: 'moderation-pipeline',
inputSchema: ProcessorStepSchema,
outputSchema: ProcessorStepSchema,
})
.parallel([
createStep(
new PIIDetector({
strategy: 'redact',
}),
),
createStep(
new PromptInjectionDetector({
strategy: 'block',
}),
),
createStep(
new ModerationProcessor({
strategy: 'block',
}),
),
])
.map(async ({ inputData }) => {
return inputData['processor:pii-detector']
})
.commit()
// Use the workflow as an input processor
const agent = new Agent({
id: 'moderated-agent',
name: 'Moderated Agent',
model: 'openai/gpt-5.4',
inputProcessors: [moderationWorkflow],
})
After a .parallel() step, each branch result is keyed by its processor ID (e.g. processor:pii-detector). Use .map() to select the branch whose output the next step should receive.
If a branch uses a mutating strategy like redact, map to that branch so its transformed messages carry forward. If all branches only block, any branch works, since none of them modify the messages.
When an agent is registered with Mastra, processor workflows are automatically registered as workflows, allowing you to view and debug them in the Studio.
Retry mechanism
Processors can request that the LLM retry its response with feedback. This is useful for implementing quality checks, output validation, or iterative refinement:
import type { Processor } from '@mastra/core/processors'
export class QualityChecker implements Processor {
id = 'quality-checker'
async processOutputStep({ text, abort, retryCount }) {
const qualityScore = await evaluateQuality(text)
if (qualityScore < 0.7 && retryCount < 3) {
// Request a retry with feedback for the LLM
abort('Response quality score too low. Please provide a more detailed answer.', {
retry: true,
metadata: { score: qualityScore },
})
}
return []
}
}
const agent = new Agent({
id: 'quality-agent',
name: 'Quality Agent',
model: 'openai/gpt-5.4',
outputProcessors: [new QualityChecker()],
maxProcessorRetries: 3, // Maximum retry attempts. If unset, retries are disabled (unless errorProcessors are configured, in which case it defaults to 10).
})
The retry mechanism:
- Only works in processOutputStep() and processInputStep() methods
- Replays the step with the abort reason added as context for the LLM
- Tracks retry count via the retryCount parameter
- Respects the maxProcessorRetries limit on the agent
Violation callbacks
All processors expose an onViolation property that fires whenever a policy violation is detected — both when abort() is called (block strategy) and when a processor issues a warning (warn strategy). Use it for alerting, logging, or side effects without affecting the processor’s main logic:
import { ModerationProcessor, CostGuardProcessor } from '@mastra/core/processors'
const moderation = new ModerationProcessor({
model: 'openai/gpt-5-nano',
strategy: 'block',
})
moderation.onViolation = ({ processorId, message, detail }) => {
// Log to external monitoring, send alerts, update dashboards
monitor.track('processor_violation', { processorId, message, detail })
}
const costGuard = new CostGuardProcessor({
maxCost: 10.0,
scope: 'resource',
window: '30d',
})
costGuard.onViolation = ({ processorId, message, detail }) => {
alertSystem.notify(`[${processorId}] ${message}`)
}
The callback receives a ProcessorViolation object with:
- processorId: The ID of the processor that detected the violation
- message: A human-readable description of what was violated
- detail: Processor-specific metadata (e.g. cost usage, detected PII types, moderation categories)
onViolation is part of the base Processor interface, so any custom processor can use it too. The runner automatically invokes it when any processor calls abort(). Errors thrown inside the callback are silently caught to prevent interfering with the processor pipeline.
Abort and tripwire chunks
Calling abort(reason, options) throws a TripWire error that ends processing. On streams, Mastra emits a tripwire chunk clients can detect:
for await (const chunk of stream.fullStream) {
if (chunk.type === 'tripwire') {
console.log('Blocked by', chunk.payload.processorId, '-', chunk.payload.reason)
break
}
}
For agent.generate(), the result exposes the same information as result.tripwire with result.finishReason === 'other'.
abort accepts a second options argument:
- retry: true asks the agent to retry instead of ending. Retries require maxProcessorRetries to be set on the agent or call.
- metadata attaches structured data to the tripwire chunk so downstream consumers can branch on categories like pii, quality, or moderation.
API error handling
The processAPIError method handles LLM API rejections — errors where the API rejects the request (such as 400 or 422 status codes) rather than network or server failures. This lets you modify the request and retry when the API rejects the message format.
import { APICallError } from '@ai-sdk/provider'
import type { Processor, ProcessAPIErrorArgs, ProcessAPIErrorResult } from '@mastra/core/processors'
export class ContextLengthHandler implements Processor {
id = 'context-length-handler'
processAPIError({
error,
messageList,
retryCount,
}: ProcessAPIErrorArgs): ProcessAPIErrorResult | void {
if (retryCount > 0) return
if (APICallError.isInstance(error) && error.message.includes('context length exceeded')) {
const messages = messageList.get.all.db()
if (messages.length > 4) {
messageList.removeByIds([messages[1]!.id, messages[2]!.id])
return { retry: true }
}
}
}
}
Mastra includes a built-in PrefillErrorHandler that automatically handles the Anthropic "assistant message prefill" error. This processor is auto-injected and requires no configuration.
Related documentation
- Guardrails: Security and validation processors
- Memory Processors: Memory-specific processors and automatic integration
- Processor Interface: Full API reference for processors
- ToolSearchProcessor Reference: API reference for dynamic tool search
Links
- Platform: Dev Framework · Mastra
- Source: https://mastra.ai/docs/agents/processors
See also: