A Rust database that pre-digests knowledge for single-pass transformer consumption. Not a wrapper. A ground-up storage engine that thinks.
AI has a memory problem
Current AI systems consume context in a single pass with no ability to revisit, reflect, or recognize what they are missing.
AI gets a single context window. No re-reading, no follow-ups. Every token must count.
AI can't sense what it doesn't know. It can't feel uncertainty or detect its own knowledge gaps.
AI reads linearly but attends associatively. Raw text dumps waste compute on noise.
Built different
| Feature | Traditional DB | Vector DB | MenteDB |
|---|---|---|---|
| Token budget optimization | | | ✓ |
| Temporal reasoning | | | ✓ |
| Knowledge gap detection | | | ✓ |
| Causal graph traversal | | | ✓ |
| Speculative pre-assembly | | | ✓ |
| Pain signal feedback | | | ✓ |
| Write inference | | | ✓ |
| Semantic search | | ✓ | ✓ |
A cognitive engine, not just storage
Seven core systems that transform MenteDB from a database into an active participant in your AI's reasoning.
Continuous memory ingestion with real time belief updates as conversations unfold.
Automatically derives new knowledge from stored memories at write time, not query time.
Maps conversation paths through topic space to predict where dialogue is heading.
Detects knowledge gaps and creates placeholder memories so the AI knows what it does not know.
Prevents contradictory memories from polluting context by isolating conflicting beliefs.
Records negative feedback and emotional triggers to prevent the AI from repeating mistakes.
Predicts upcoming queries and pre-builds context windows, like branch prediction for knowledge.
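The knowledge-gap idea above can be pictured with a small sketch. This is illustrative Rust, not MenteDB's actual API: `MemoryStore`, `recall`, and `phantoms` are hypothetical names. The point is that a recall miss writes a placeholder memory, so the gap itself becomes something the AI can query.

```rust
use std::collections::HashMap;

// Hypothetical sketch: a memory is either a stored fact or a "phantom"
// placeholder marking something we know we don't know.
#[derive(Debug)]
enum Memory {
    Known(String),
    Phantom,
}

struct MemoryStore {
    memories: HashMap<String, Memory>,
}

impl MemoryStore {
    fn new() -> Self {
        Self { memories: HashMap::new() }
    }

    fn store(&mut self, key: &str, fact: &str) {
        self.memories.insert(key.to_string(), Memory::Known(fact.to_string()));
    }

    /// Recall a fact; on a miss, record a phantom so the gap is remembered.
    fn recall(&mut self, key: &str) -> Option<String> {
        match self.memories.entry(key.to_string()).or_insert(Memory::Phantom) {
            Memory::Known(fact) => Some(fact.clone()),
            Memory::Phantom => None,
        }
    }

    /// List every knowledge gap detected so far.
    fn phantoms(&self) -> Vec<&str> {
        self.memories
            .iter()
            .filter(|(_, m)| matches!(m, Memory::Phantom))
            .map(|(k, _)| k.as_str())
            .collect()
    }
}

fn main() {
    let mut store = MemoryStore::new();
    store.store("user.theme", "dark mode");

    // A hit returns the fact; a miss records a phantom and returns None.
    assert_eq!(store.recall("user.theme"), Some("dark mode".to_string()));
    assert_eq!(store.recall("user.timezone"), None);

    // The gap is now a queryable memory in its own right.
    assert!(store.phantoms().contains(&"user.timezone"));
}
```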
See it in action
Simple by design
Store memories, recall context, and let MenteDB handle the complexity.
```rust
use mentedb::prelude::*;

let db = MenteDb::open("./agent-memory")?;

db.store(MemoryNode::new(
    agent_id,
    MemoryType::Episodic,
    "User prefers dark mode".into(),
    embedding,
))?;

let context = db.recall(r#"RECALL memories WHERE tag = "preferences" LIMIT 10"#)?;

// Pre-assembled, token-budget-optimized
println!("{}", context.format);
```
Six layers, one engine
A purpose-built stack where every layer is designed for AI memory, from storage pages to cognitive processing.
MenteDB ships a production MCP server with 32 tools across 6 categories. Connect Claude, Cursor, or any MCP client in seconds.
`store`, `search`, `recall`, `get`, `forget`, `forget_all`, `ingest`, `process_turn`, `search_text`, `search_vector`, `search_by_tag`, `relate`, `get_related`, `find_path`, `get_subgraph`, `find_contradictions`, `propagate_belief`, `consolidate`, `apply_decay`, `compress`, `evaluate_archival`, `extract_facts`, `gdpr_forget`, `record_pain`, `detect_phantoms`, `resolve_phantom`, `record_trajectory`, `predict_topics`, `detect_interference`, `check_stream`, `write_inference`

```shell
$ mentedb-mcp setup copilot   # or claude, cursor, vscode
```
Works with any MCP-compatible client: Claude Desktop, Cursor, Copilot CLI, VS Code, Windsurf, custom agents. The server exposes all 32 tools over the standard MCP protocol with zero additional configuration.
Learn more about all 32 tools →

- 0.29ms avg insert
- <1ms context assembly at 10K memories
- 90.7% token savings
- 5/5 quality benchmarks passing
- 0% stale beliefs returned
Measured, not promised
Every claim backed by reproducible tests. Quality validated on every commit, performance measured with Criterion.
7/7 benchmark suites passing:

- Superseded memories correctly excluded via graph edges
- 90.7% reduction in memory-retrieval tokens over 20 turns
- 100 turns, 3 projects, 0% stale returns, 0.29ms insert
- U-curve ordering maintains 100% LLM compliance
- 100% useful memories vs. 80% naive (+20pp)
- MenteDB PASS vs. Mem0 FAIL on stale beliefs, 4.8x faster
- 10,000 memories, 6/6 belief changes tracked, 0 stale returns
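The U-curve ordering benchmark refers to the "lost in the middle" effect: LLMs attend best to the start and end of a prompt, so the strongest memories belong at the edges of the assembled context. A minimal sketch of such an ordering (illustrative only, not MenteDB's implementation):

```rust
/// Arrange memories ranked best-first into a U shape: strongest items at the
/// edges of the context window, weakest in the middle, because LLM attention
/// is weakest mid-prompt ("lost in the middle").
fn u_curve<T>(ranked: Vec<T>) -> Vec<T> {
    let mut front = Vec::new();
    let mut back = Vec::new();
    // Alternate ranked items between the front half and the back half.
    for (i, item) in ranked.into_iter().enumerate() {
        if i % 2 == 0 { front.push(item) } else { back.push(item) }
    }
    // Reverse the back half so its strongest item lands at the very end.
    back.reverse();
    front.extend(back);
    front
}

fn main() {
    // Ranked best-first: 1 is most relevant, 5 least.
    // Strongest (1, 2) end up at the edges; weakest (5) in the middle.
    assert_eq!(u_curve(vec![1, 2, 3, 4, 5]), vec![1, 3, 5, 4, 2]);
    assert_eq!(u_curve(vec![1, 2, 3, 4]), vec![1, 3, 4, 2]);
}
```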
Measured across memory counts
| Benchmark | 100 memories | 1,000 memories | 10,000 memories |
|---|---|---|---|
| Insert (batch total) | 13ms | 244ms | 2.65s |
| Context assembly | 217µs | 342µs | 693µs |
Context assembly stays sub-millisecond even at 10K memories, and insert cost grows predictably with store size.
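Reading each insert cell as the total time for the whole batch (an interpretation, since the page does not label the cells), per-insert latency is just total time divided by memory count, which lines up with the sub-millisecond average insert figure quoted earlier:

```rust
// Sanity check on the benchmark table, assuming each insert cell is the
// total wall time for the whole batch: per-insert latency = total / count.
fn per_insert_us(total_ms: f64, count: u32) -> f64 {
    total_ms * 1_000.0 / f64::from(count)
}

fn main() {
    // 2.65s over 10,000 inserts works out to 265µs each, consistent with
    // the ~0.29ms average insert quoted above.
    let at_10k = per_insert_us(2_650.0, 10_000);
    assert!((at_10k - 265.0).abs() < 1e-9);
    println!("{at_10k:.0} µs per insert at 10K memories");
}
```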
Up and running in seconds