Mastra Storage Configuration Guide

Trust: ★★★☆☆ (0.90) · 0 validations · developer_reference

Published: 2026-05-10 · Source: crawler_authoritative

Situation

Configuration guide for setting up storage adapters in Mastra so agents can persist and recall conversation data across interactions.

Insight

Mastra uses storage adapters so agents can remember previous interactions.

  • Supported providers: libSQL, PostgreSQL, MongoDB, Upstash, Redis, Cloudflare D1, Cloudflare KV & Durable Objects, Convex, DynamoDB, LanceDB, and Microsoft SQL Server. libSQL is the easiest to set up because it doesn’t require a separate database server.
  • Configuration scopes: (1) instance-level storage, shared by all agents, workflows, observability traces, and scores; (2) agent-level storage, which overrides instance-level and creates isolated data boundaries per agent.
  • Composite storage via MastraCompositeStore routes different domains (memory, workflows, observability) to different providers, using domain-specific classes such as MemoryLibSQL, WorkflowsPG, and ObservabilityStorageClickhouse.
  • Storage structures are initialized automatically on first interaction.
  • Conversations are organized with two identifiers: Thread (a conversation session) and Resource (the owning entity, such as a user or organization).
  • Thread title generation can be enabled with the generateTitle: true option in the Memory config, optionally with a custom model and instructions.
  • Semantic recall requires a vector database in addition to the standard storage adapter.
  • Record size limits vary by provider: DynamoDB (400 KB), Convex (1 MiB), Cloudflare D1 (1 MiB). Upload base64-encoded attachments to external storage (S3, R2, GCS) and keep URL references instead to avoid these limits.

Actions

  1. Install the storage adapter package: npm install @mastra/libsql (or another provider's package).
  2. Import the store class: import { LibSQLStore } from '@mastra/libsql'.
  3. Configure storage on the Mastra instance or the agent: add storage: new LibSQLStore({ id: 'mastra-storage', url: 'file:./mastra.db' }) to the config. When multiple processes share the database, use an absolute path: file:/absolute/path/to/mastra.db. For agent-level storage, pass a store to Memory: memory: new Memory({ storage: new PostgresStore(...) }).
  4. Pass thread and resource IDs to generate()/stream() calls: { memory: { thread: 'conversation-abc-123', resource: 'user_123' } }.
  5. For large attachments, create an input processor implementing the Processor interface that uploads base64 data to external storage and replaces it with URL references, then add it to the agent config: inputProcessors: [new AttachmentUploader()].
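The steps above can be sketched as a minimal end-to-end setup (the agent ID, thread, and resource values are illustrative, not required names):

```typescript
import { Mastra } from '@mastra/core'
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

// Step 3: instance-level storage shared by every agent.
export const mastra = new Mastra({
  storage: new LibSQLStore({ id: 'mastra-storage', url: 'file:./mastra.db' }),
})

const agent = new Agent({ id: 'support-agent', memory: new Memory() })

// Step 4: identify the conversation (thread) and its owner (resource).
const response = await agent.generate('hello', {
  memory: { thread: 'conversation-abc-123', resource: 'user_123' },
})
```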

Result

Agents persist conversation history, threads, and resources across interactions. All agents share instance-level storage by default; agent-level storage creates isolated data boundaries. Thread titles are auto-generated asynchronously after the first user message. Large attachments are stored externally with URL references to stay under provider record limits.

Prerequisites

Requires @mastra/core and a storage adapter package (@mastra/libsql, @mastra/pg, etc.). Semantic recall requires additional vector database setup. Studio automatically manages thread/resource IDs if not provided.


Original content

Storage

For agents to remember previous interactions, Mastra needs a storage adapter. Use one of the supported providers and pass it to your Mastra instance.

import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'
 
export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: 'file:./mastra.db',
  }),
})

Sharing the database with Studio: When running mastra dev alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:

url: 'file:/absolute/path/to/your/project/mastra.db'

Relative paths like file:./mastra.db resolve based on each process’s working directory, which may differ.
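One way to get a reliable absolute path is to build the URL from a fixed project root, so both processes agree on the location regardless of their working directories. A sketch (PROJECT_ROOT is a placeholder for your environment):

```typescript
import { resolve } from 'node:path'

// Placeholder: set this to your project's absolute root directory.
const PROJECT_ROOT = '/absolute/path/to/your/project'

// Build a libSQL file URL from an absolute root so `mastra dev` and the
// app process resolve to the same database file.
export function mastraDbUrl(root: string = PROJECT_ROOT): string {
  return 'file:' + resolve(root, 'mastra.db')
}
```

Pass the result as the url option of LibSQLStore.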

This configures instance-level storage, which all agents share by default. You can also configure agent-level storage for isolated data boundaries.

Mastra automatically initializes the necessary storage structures on first interaction. See Storage Overview for domain coverage and the schema used by the built-in database-backed domains.

Supported providers

Each provider page includes installation instructions, configuration parameters, and usage examples. Supported providers: libSQL, PostgreSQL, MongoDB, Upstash, Redis, Cloudflare D1, Cloudflare KV & Durable Objects, Convex, DynamoDB, LanceDB, and Microsoft SQL Server.

Tip: libSQL is the easiest way to get started because it doesn’t require running a separate database server.

Configuration scope

Storage can be configured at the instance level (shared by all agents) or at the agent level (isolated to a specific agent).

Instance-level storage

Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same storage backend:

import { Mastra } from '@mastra/core'
import { PostgresStore } from '@mastra/pg'
 
export const mastra = new Mastra({
  storage: new PostgresStore({
    id: 'mastra-storage',
    connectionString: process.env.DATABASE_URL,
  }),
})
 
// Both agents inherit storage from the Mastra instance above
const agent1 = new Agent({ id: 'agent-1', memory: new Memory() })
const agent2 = new Agent({ id: 'agent-2', memory: new Memory() })

This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.

Composite storage

Composite storage is an alternative way to configure instance-level storage. Use MastraCompositeStore to route memory and any other supported domains to different storage providers.

import { Mastra } from '@mastra/core'
import { MastraCompositeStore } from '@mastra/core/storage'
import { MemoryLibSQL } from '@mastra/libsql'
import { WorkflowsPG } from '@mastra/pg'
import { ObservabilityStorageClickhouse } from '@mastra/clickhouse'
 
export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: 'composite',
    domains: {
      memory: new MemoryLibSQL({ url: 'file:./memory.db' }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
      observability: new ObservabilityStorageClickhouse({
        url: process.env.CLICKHOUSE_URL,
        username: process.env.CLICKHOUSE_USERNAME,
        password: process.env.CLICKHOUSE_PASSWORD,
      }),
    },
  }),
})

This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.

Agent-level storage

Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need to keep data separate or use different providers per agent.

import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { PostgresStore } from '@mastra/pg'
 
export const agent = new Agent({
  id: 'agent',
  memory: new Memory({
    storage: new PostgresStore({
      id: 'agent-storage',
      connectionString: process.env.AGENT_DATABASE_URL,
    }),
  }),
})

Threads and resources

Mastra organizes conversations using two identifiers:

  • Thread: A conversation session containing a sequence of messages.
  • Resource: The entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.

Both identifiers are required for agents to store information:

.generate():

const response = await agent.generate('hello', {
  memory: {
    thread: 'conversation-abc-123',
    resource: 'user_123',
  },
})

.stream():

const stream = await agent.stream('hello', {
  memory: {
    thread: 'conversation-abc-123',
    resource: 'user_123',
  },
})

Note: Studio automatically generates a thread and resource ID for you. When calling stream() or generate() yourself, remember to provide these identifiers explicitly.
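Outside Studio you can generate these identifiers yourself. A common pattern is a stable resource ID per owning entity and a fresh thread ID per conversation session; the prefixes below are illustrative, not a Mastra requirement:

```typescript
import { randomUUID } from 'node:crypto'

// One new thread per conversation session.
export function newThreadId(): string {
  return `conversation-${randomUUID()}`
}

// A stable resource ID derived from the owning entity (here, a user).
export function resourceIdForUser(userId: string): string {
  return `user_${userId}`
}
```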

Thread title generation

Mastra can automatically generate descriptive thread titles based on the user’s first message when generateTitle is enabled.

Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.

export const agent = new Agent({
  id: 'agent',
  memory: new Memory({
    options: {
      generateTitle: true,
    },
  }),
})

Title generation runs asynchronously after the agent responds and doesn’t affect response time.

To optimize cost or behavior, provide a smaller model and custom instructions:

export const agent = new Agent({
  id: 'agent',
  memory: new Memory({
    options: {
      generateTitle: {
        model: 'openai/gpt-5-mini',
        instructions: 'Generate a 1 word title',
      },
    },
  }),
})

Semantic recall

Semantic recall has an additional storage requirement: it needs a vector database alongside the standard storage adapter. See Semantic recall for setup and supported vector providers.
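As a rough sketch of what such a setup can look like (the LibSQLVector class and the semanticRecall option names are assumptions that may differ by version; consult the Semantic recall docs for the exact API):

```typescript
import { Memory } from '@mastra/memory'
import { LibSQLStore, LibSQLVector } from '@mastra/libsql'

// Hypothetical sketch: standard storage for messages plus a vector
// store for embeddings. Adapter and option names are assumptions.
export const memory = new Memory({
  storage: new LibSQLStore({ id: 'memory-storage', url: 'file:./mastra.db' }),
  vector: new LibSQLVector({ connectionUrl: 'file:./vector.db' }),
  options: {
    // Recall the 3 most similar past messages plus surrounding context.
    semanticRecall: { topK: 3, messageRange: 2 },
  },
})
```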

Handling large attachments

Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:

  • DynamoDB: 400 KB
  • Convex: 1 MiB
  • Cloudflare D1: 1 MiB

PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
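Before persisting, you can estimate whether a base64 attachment would exceed a provider's record limit. A rough helper (the limit table mirrors the figures above; the provider keys are illustrative):

```typescript
// Per-provider record limits in bytes, matching the figures above.
const LIMIT_BYTES: Record<string, number> = {
  dynamodb: 400 * 1024,         // 400 KB
  convex: 1024 * 1024,          // 1 MiB
  'cloudflare-d1': 1024 * 1024, // 1 MiB
}

// Decoded byte size of the base64 payload in a data URI.
export function decodedSize(dataUri: string): number {
  const base64 = dataUri.split(',')[1] ?? ''
  const padding = base64.endsWith('==') ? 2 : base64.endsWith('=') ? 1 : 0
  return (base64.length * 3) / 4 - padding
}

export function exceedsLimit(dataUri: string, provider: string): boolean {
  const limit = LIMIT_BYTES[provider]
  return limit !== undefined && decodedSize(dataUri) > limit
}
```

Note that a record also holds message text and metadata, so the practical threshold is somewhat below the raw limit.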

To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, Convex file storage, etc.) and replace them with URL references before persistence.

import type { Processor } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'
 
export class AttachmentUploader implements Processor {
  id = 'attachment-uploader'
 
  async processInput({ messages }: { messages: MastraDBMessage[] }) {
    return Promise.all(messages.map(msg => this.processMessage(msg)))
  }
 
  async processMessage(msg: MastraDBMessage) {
    const attachments = msg.content.experimental_attachments
    if (!attachments?.length) return msg
 
    const uploaded = await Promise.all(
      attachments.map(async att => {
        // Skip if already a URL
        if (!att.url?.startsWith('data:')) return att
 
        // Upload base64 data and replace with URL
        const url = await this.upload(att.url, att.contentType)
        return { ...att, url }
      }),
    )
 
    return { ...msg, content: { ...msg.content, experimental_attachments: uploaded } }
  }
 
  async upload(dataUri: string, contentType?: string): Promise<string> {
    const base64 = dataUri.split(',')[1]
    const buffer = Buffer.from(base64, 'base64')
 
    // Replace with your storage provider (S3, R2, GCS, Convex, etc.)
    // return await s3.upload(buffer, contentType);
    throw new Error('Implement upload() with your storage provider')
  }
}

Use the processor with your agent:

import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { AttachmentUploader } from './processors/attachment-uploader'
 
const agent = new Agent({
  id: 'my-agent',
  memory: new Memory({ storage: yourStorage }),
  inputProcessors: [new AttachmentUploader()],
})

Links

See also: