Now available — v2.2

Your AI tools
forget everything.
Fix that.

Every session starts from zero. Smara gives Claude Code, Cursor, and Codex a shared memory that persists, decays naturally, and follows you across tools.

npx @smara/mcp-server --init
Works with your tools
Claude Code · Cursor · Windsurf · Codex · VS Code · Any MCP client

The problem

You explain the same things to your AI.
Every. Single. Session.

"I use Tailwind, not Bootstrap." "The API is at /v2, not /v1." "We deploy to Fly.io." You've said it before. Your AI forgot.

Smara stores context automatically as you work. Next session — in any tool — your AI already knows.

without smara
# Monday in Claude Code
you: We use Postgres, not MySQL
claude: Got it! I'll use Postgres.
# Tuesday in Cursor — new session
cursor: Setting up MySQL connection...
you: No! We use Postgres!
# Wednesday in Claude Code again
claude: What database are you using?
you: ... 🤬
with smara
# Any day, any tool — context is already loaded
[smara] Loading 23 memories for this project...
[smara] Uses Postgres on Fly.io (relevance: 0.97)
[smara] Tailwind CSS, not Bootstrap (0.94)
[smara] API v2 at /api/v2, JWT auth (0.91)
claude: I'll connect to your Postgres on Fly.io and use the v2 API...

How it works

Three steps. Thirty seconds.

1

Install

Add Smara to your MCP config. One JSON block. Works in Claude Code, Cursor, Windsurf, or any MCP client.

{
  "smara": {
    "command": "npx",
    "args": ["-y", "@smara/mcp-server"],
    "env": {"SMARA_API_KEY": "sk_..."}
  }
}
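The exact wrapper around this block depends on your client. In Claude Code, for example, server entries typically nest under a top-level `mcpServers` key in `.mcp.json` — a sketch, so check your client's docs for the precise file and key:

```json
{
  "mcpServers": {
    "smara": {
      "command": "npx",
      "args": ["-y", "@smara/mcp-server"],
      "env": {"SMARA_API_KEY": "sk_..."}
    }
  }
}
```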
2

Code normally

Smara runs silently. It auto-loads context at conversation start and stores new facts as you work. No manual commands.

# You never see this happen:
auto-stored: "Uses Next.js 14 app router"
auto-stored: "API keys in .env.local"
auto-stored: "Prefers server components"
3

Context follows you

Switch to Cursor, open Codex, start a new Claude session — your context is already there. One pool, every tool.

Claude Cursor Codex
same memories, same context
ranked by relevance × freshness

See it in action

Your context, everywhere.

Demo 1 — Install & auto-capture
claude-code — ~/myproject
$ npx @smara/mcp-server --init
Smara MCP server installed
API key configured
you: set up auth — we use Clerk, not Auth0
[smara] stored: "Project uses Clerk for auth, not Auth0"
you: deploy this to Vercel staging branch
[smara] stored: "Deploys to Vercel, staging branch for preview"
// 2 memories captured. Zero effort.

Smara silently captures decisions as you work. No tagging, no commands — just code normally.

Demo 2 — Cross-tool recall
cursor — ~/myproject
// Next day, different tool. New session.
[smara] Loading context for ~/myproject...
[smara] ✓ Auth: Clerk (0.97)
[smara] ✓ Deploy: Vercel staging (0.95)
[smara] ✓ DB: Postgres on Supabase (0.89)
you: add a login page
cursor: I'll set up a Clerk sign-in page and deploy to Vercel staging...
// No re-explaining. It knows.

Switch from Claude Code to Cursor — your context follows. Zero re-explaining.

Demo 3 — Smart decay
smara — memory health
$ smara memories --project myproject
█████████░ Auth: Clerk 0.97 — used 3d ago
████████░░ Deploy: Vercel 0.89 — used 5d ago
██████░░░░ DB: Postgres 0.64 — used 2w ago
███░░░░░░░ Old: tried Redis cache 0.12 — 30d ago
█░░░░░░░░░ Stale: "use port 3001" 0.03 — 60d, contradicted
// Important facts persist. Old noise fades.

Ebbinghaus decay keeps context fresh. Frequently-used facts stay strong. Old noise fades naturally.

Demo 4 — API integration
bash — api call
$ curl https://api.smara.io/v1/memories/search \
  -H "Authorization: Bearer sk_..." \
  -d '{"query": "what database?"}'
{
  "memories": [{ "fact": "Uses Postgres on Supabase",
    "relevance": 0.97, "source": "claude-code",
    "decay": 0.94 }]
}

Simple REST API. Store, search, context — 3 endpoints. Works from any language.

REST API

Three calls. That's it.

No SDKs to install. No schemas to define. Just REST.

POST /v1/memories
curl -X POST https://api.smara.io/v1/memories \
  -H "Authorization: Bearer smara_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id":    "user_42",
    "fact":       "Prefers dark mode. Uses vim keybindings.",
    "importance": 0.7,
    "visibility": "private"  // or "team"
  }'
Response
{ "id": "mem_8f3a...", "decay_score": 1.0, "status": "stored" }
GET /v1/users/:user_id/context
curl https://api.smara.io/v1/users/user_42/context \
  -H "Authorization: Bearer smara_your_key"
Response — full context, ranked by relevance × decay
{ "user_id": "user_42",
  "memory_count": 23,
  "context": [
    { "fact": "Prefers dark mode. Uses vim keybindings.", "score": 0.96 },
    { "fact": "Working on a Rust CLI tool for log parsing", "score": 0.89 }
  ] }
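Calling these endpoints from Python needs nothing beyond the standard library. The sketch below shapes the `POST /v1/memories` call from the example above; splitting it into a build step and a send step is our own convention for illustration, not part of the API.

```python
import json
import urllib.request

BASE = "https://api.smara.io/v1"

def build_store_request(user_id, fact, importance=0.7, visibility="private"):
    # Shapes the POST /v1/memories call shown above. No network traffic here —
    # it only constructs the request object.
    body = json.dumps({
        "user_id": user_id,
        "fact": fact,
        "importance": importance,
        "visibility": visibility,  # "private" or "team"
    }).encode()
    return urllib.request.Request(
        f"{BASE}/memories",
        data=body,
        method="POST",
        headers={"Authorization": "Bearer smara_your_key",
                 "Content-Type": "application/json"},
    )

def store_memory(req):
    # Sends the request and decodes the JSON response
    # (e.g. {"id": "mem_...", "decay_score": 1.0, "status": "stored"}).
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Swap in `requests` or any HTTP client you prefer — the payload shape is all that matters.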

Team collaboration New

Shared memory
for your whole team.

Your AI automatically sorts memories — project decisions go to the team, personal preferences stay private. Override anytime. Search returns both.

Alice (admin) ↔ Team Memory Pool ↔ Bob (member)
team memory flow
# Alice stores a team memory
POST /v1/memories
{ team_id: "eng-team", visibility: "team",
  fact: "Auth service uses JWT with RS256" }
# Bob searches — sees private + team
GET /v1/memories/search?include_team=true
» "Auth service uses JWT with RS256" 0.94
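The same flow from a script: the helper below builds Bob's combined search URL using the `include_team` parameter shown above. The exact query-string shape is an assumption based on this snippet.

```python
from urllib.parse import urlencode

BASE = "https://api.smara.io/v1"

def team_search_url(query, include_team=True):
    # Builds the GET /v1/memories/search URL from the flow above.
    # With include_team=true, results span both private and team memories.
    params = {"query": query, "include_team": str(include_team).lower()}
    return f"{BASE}/memories/search?{urlencode(params)}"
```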

Role-based access

Admin, member, or read-only. Control who can store, search, and manage team memories across the pool.

Private + team memory

AI classifies automatically — "uses RS256" goes to the team, "prefers dark mode" stays private. Override with natural language or explicit visibility field.

Email invitations

Invite teammates with a link. They accept, join the team, and immediately see shared context in their AI tools.

AI Agents Coming soon

Persistent agents with memory, skills, and shared team context. Create once, plug into any LLM or MCP client.

Memory control

Control your memory in plain English.

No dashboards needed. Just tell your AI what you want. Smara's MCP tools handle the rest.

natural language memory control
you: show my memories
claude: [calls list_memories] You have 14 memories. Top 3:
  1. "Uses Postgres on Fly.io" (private, score: 0.97)
  2. "Auth: Clerk" (private, score: 0.94)
  3. "Team convention: tabs not spaces" (team, score: 0.91)
you: make the Postgres one private
claude: [calls update_memory] Done — "Uses Postgres on Fly.io" is now private.
you: share the auth memory with the team
claude: [calls update_memory] "Auth: Clerk" is now visible to your whole team.
you: forget the Redis experiment
claude: [calls delete_memory] Memory deleted. Won't affect future context.

MCP tools

7 tools. All the memory you need.

Every Smara MCP server exposes these tools to your AI. No configuration required.

store_memory write
Store a fact with importance score and optional team or project tag. AI calls this automatically during conversation.
search_memories read
Semantic search across private + team memories. Returns ranked results by similarity × decay score.
get_user_context read
Load full context at conversation start. Called automatically by the MCP server on every new session.
update_memory write
Change a memory's visibility (private/team), importance, or fact content. Natural language triggers this tool.
list_memories read
Paginated list of all memories. Supports filter by source, visibility, or decay score range.
delete_memory write
Permanently delete a memory by ID. "Forget the Redis experiment" triggers this automatically.
get_usage read
Check memory usage, team quota, and API call counts against your current plan limits. No arguments needed.

Ebbinghaus decay

Memory that forgets
on purpose.

Infinite context is noise. Smara uses Ebbinghaus decay scoring — the same curve that models human forgetting.

Recent facts stay sharp. Old trivia fades. Important memories that keep getting reinforced persist forever. Contradictions are auto-resolved — "uses MySQL" gets replaced when you mention Postgres.

R = e^(-t/S), where S = importance × access frequency.

The result: your AI's context window is always fresh, relevant, and right-sized. No bloat, no stale facts.
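As a quick sanity check of the curve, here is the formula in Python. The specific importance and access-frequency values are illustrative; how Smara scales S internally is not specified here.

```python
import math

def retention(t_days, importance, access_freq):
    # Ebbinghaus retention R = e^(-t/S), with memory strength
    # S = importance × access frequency (per the formula above).
    # Units and scaling of S are illustrative assumptions.
    S = importance * access_freq
    return math.exp(-t_days / S)

# A high-importance fact touched often barely decays...
fresh = retention(t_days=3, importance=0.9, access_freq=20)
# ...while a low-importance one-off fades to near zero.
stale = retention(t_days=60, importance=0.3, access_freq=1)
```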

Who it's for

Built for people who ship with AI

Solo developers

You switch between Claude Code and Cursor daily. Smara means you never re-explain your stack, preferences, or project conventions. Just code.

Dev teams

Shared context across team members and tools. New developers inherit institutional knowledge. AI onboards itself from the team memory pool.

AI agent builders

Your AI agents need persistent memory across sessions. Three API calls replace a custom database, embedding pipeline, and retrieval system.

Under the hood

Everything you'd build yourself,
already done.

Universal protocol

MCP for Claude Code, Cursor, Windsurf. REST API for everything else. One memory layer, every tool you use.

Zero-config auto-memory

Install the MCP server. That's it. Context loads at conversation start. New facts store silently as you work.

Ebbinghaus decay

Old trivia fades. Important facts persist. Contradictions auto-resolve. Your context stays fresh without manual cleanup.

Source tagging

Every memory knows its origin — Claude Code, Cursor, Codex, or your API. Filter by source or see everything together.

Team collaboration New

Shared memory pools for your team. Private stays private. Team memories visible to all members. Role-based access.

AI Agents Soon

Persistent agents with memory, skills, and team context. Create once, use across any LLM or MCP client.

FAQ

Questions

What is Smara?

Smara is a persistent memory layer for AI coding tools. It gives Claude Code, Cursor, Windsurf, Codex, and any MCP-compatible client shared context that survives across sessions.

Install with one command: npx @smara/mcp-server --init

How does Ebbinghaus decay work?

Smara uses Ebbinghaus forgetting curves to score memory relevance over time. Important facts persist while trivial details naturally fade, just like human memory.

The formula is R = e^(-t/S) where t is time elapsed and S is memory strength based on importance and access frequency. A memory referenced daily stays near score 1.0; something mentioned once 60 days ago fades toward 0.

What AI tools does Smara work with?

Any MCP-compatible tool: Claude Code, Cursor, Windsurf, Codex, VS Code with Copilot, and any custom MCP client.

Smara also provides a REST API for direct integration from any language or framework — Python, TypeScript, Go, Ruby, anything that can make HTTP requests.

How do teams work? What stays private?

Create a team, invite members by email, and memories are automatically sorted into private or team:

  • Team memories — project decisions, architecture choices, conventions. Visible to all members. Example: "Auth service uses JWT with RS256"
  • Private memories — personal preferences, individual notes. Only you. Example: "I prefer dark mode and tabs over spaces"

Your AI classifies automatically via MCP — no manual tagging. Override by saying "make that private" or setting visibility: "team". Combined search returns both private and team memories together.

Is my data private?

Yes. Each API key isolates your data completely. Memories are encrypted in transit (TLS 1.3) and stored in isolated PostgreSQL schemas. We never use your data for training.

You can delete all memories at any time via the delete_memory MCP tool or the REST API.

How much does Smara cost?

Three plans:

  • Free — $0/mo. 10,000 memories, 1 team/3 members, 2 AI agents, Full API+MCP, Ebbinghaus decay.
  • Developer — $19/mo. 200K memories, 3 teams/10 members, 10 AI agents + 5 custom skills, Priority pipeline, Email support.
  • Pro — $99/mo. 2M memories, Unlimited teams/50 members, Unlimited agents+skills, Dedicated queue, Slack support + SLA.

No credit card required for the free tier. Cancel anytime.

Pricing

Start free.
Scale when you need to.

Free
$0
For trying it out and side projects
  • 10,000 memories
  • 1 team / 3 members
  • 2 AI agents
  • Full API + MCP access
  • Ebbinghaus decay scoring
Get started free
Pro
$99/mo
For teams shipping at scale
  • 2,000,000 memories
  • Unlimited teams / 50 members
  • Unlimited agents + skills
  • Dedicated embedding queue
  • Slack support + SLA
Subscribe

Get your API key

Free tier. 10,000 memories. No credit card required.

Early access — we need your feedback

Smara is in beta.
Help us build it right.

We're a small team shipping fast. The best features come from real developers hitting real problems. Try Smara, break things, and tell us what's missing.

Or reach out directly

[email protected] @parallelromb GitHub