Open source · Apache 2.0 · Beta

Your AI's memory,
engineered from scratch

A Rust database that pre-digests knowledge for single-pass transformer consumption. Not a wrapper. A ground-up storage engine that thinks.

The Problem

AI has a memory problem

Current AI systems consume context in a single pass with no ability to revisit, reflect, or recognize what they are missing.

One-Shot Consumption

AI gets a single context window. No re-reading, no follow-ups. Every token must count.

Zero Self-Awareness

AI can't sense what it doesn't know. It can't feel uncertainty or detect its own knowledge gaps.

Flat Context, Associative Thinking

AI reads linearly but attends associatively. Raw text dumps waste compute on noise.

Comparison

Built different

Feature                       Traditional DB   Vector DB   MenteDB
Token budget optimization           —              —          ✓
Temporal reasoning                  —              —          ✓
Knowledge gap detection             —              —          ✓
Causal graph traversal              —              —          ✓
Speculative pre-assembly            —              —          ✓
Pain signal feedback                —              —          ✓
Write inference                     —              —          ✓
Semantic search                     —              ✓          ✓

Features

A cognitive engine, not just storage

Seven core systems that transform MenteDB from a database into an active participant in your AI's reasoning.

Stream Processing

Continuous memory ingestion with real-time belief updates as conversations unfold.

Write Inference

Automatically derives new knowledge from stored memories at write time, not query time.

Trajectory Tracking

Maps conversation paths through topic space to predict where dialogue is heading.

Phantom Memories

Detects knowledge gaps and creates placeholder memories so the AI knows what it does not know.

Interference Shielding

Prevents contradictory memories from polluting context by isolating conflicting beliefs.

Pain Signals

Records negative feedback and emotional triggers to prevent the AI from repeating mistakes.

Speculative Pre-Assembly

Predicts upcoming queries and pre builds context windows, like branch prediction for knowledge.
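Write inference and interference shielding come together in the contradiction flow shown in the examples below. Here is a minimal, self-contained sketch of that idea; every name here, from `Store` to the supersession policy, is illustrative and not MenteDB's actual API:

```rust
// Sketch: write-time inference marks contradicted beliefs as
// superseded when the new memory arrives, so reads never see
// stale state. Illustrative types only, not MenteDB's API.

#[derive(Debug, Clone)]
struct Memory {
    key: String,      // what the belief is about, e.g. "bundler"
    value: String,    // the believed value, e.g. "Vite"
    superseded: bool, // true once a newer belief replaces it
}

struct Store {
    memories: Vec<Memory>,
}

impl Store {
    fn new() -> Self {
        Store { memories: Vec::new() }
    }

    // Write-time inference: detect contradictions on the same key
    // and supersede the old belief before storing the new one.
    fn store(&mut self, key: &str, value: &str) {
        for m in self.memories.iter_mut() {
            if m.key == key && m.value != value && !m.superseded {
                m.superseded = true; // old belief is now stale
            }
        }
        self.memories.push(Memory {
            key: key.to_string(),
            value: value.to_string(),
            superseded: false,
        });
    }

    // Reads only ever see live (non-superseded) beliefs.
    fn recall(&self, key: &str) -> Option<&Memory> {
        self.memories.iter().find(|m| m.key == key && !m.superseded)
    }
}

fn main() {
    let mut db = Store::new();
    db.store("bundler", "Vite");
    db.store("bundler", "Webpack"); // contradicts the stored Vite decision
    println!("{}", db.recall("bundler").unwrap().value); // prints "Webpack"
}
```

The key design point is that the contradiction check happens in `store`, not `recall`: queries stay cheap because stale beliefs were already filtered out at write time.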

Real World Examples

See it in action

Session 1: Store project decisions (React, TypeScript, Vite)
Create causal edges between decisions
 
Session 2: "I switched from Vite to Webpack"
> Contradiction detected with stored Vite decision
> Old memory marked as superseded
> Belief propagation updates downstream
 
Session 3: Developer returns after a week
MenteDB serves resume context:
"You were setting up React/TypeScript.
Switched from Vite to Webpack on March 15.
Open question: production build config."
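Assuming the `RECALL … WHERE tag = … LIMIT …` shape shown in the Code section, the Session 3 resume context could be fetched with a query like this (the tag value here is illustrative):

```
RECALL memories WHERE tag = "project-decisions" LIMIT 10
```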

Code

Simple by design

Store memories, recall context, and let MenteDB handle the complexity.

use mentedb::prelude::*;

let db = MenteDb::open("./agent-memory")?;

db.store(MemoryNode::new(
    agent_id,
    MemoryType::Episodic,
    "User prefers dark mode".into(),
    embedding,
))?;

let context = db.recall(
    r#"RECALL memories WHERE tag = "preferences" LIMIT 10"#
)?;

// Pre-assembled, token-budget-optimized
println!("{}", context.format);

Architecture

Six layers, one engine

A purpose-built stack where every layer is designed for AI memory, from storage pages to cognitive processing.

API: MQL · REST · WebSocket
Cognitive: Stream · Speculative · Pain · Phantom
Intelligence: Belief Propagation · Write Inference
Index: HNSW · Bitmaps · Temporal
Graph: CSR/CSC · Traversal
Storage: Buffer Pool · WAL · Pages

MCP Integration

32 tools. One server. Zero config.

MenteDB ships a production MCP server with 32 tools across 6 categories. Connect Claude, Cursor, or any MCP client in seconds.

Tool Categories

Memory (8 tools): store, search, recall, get, forget, forget_all, ingest, process_turn

Search (3 tools): search_text, search_vector, search_by_tag

Graph (6 tools): relate, get_related, find_path, get_subgraph, find_contradictions, propagate_belief

Consolidation (6 tools): consolidate, apply_decay, compress, evaluate_archival, extract_facts, gdpr_forget

Cognitive (9 tools): record_pain, detect_phantoms, resolve_phantom, record_trajectory, predict_topics, detect_interference, check_stream, write_inference

Install

Requires Rust

$ cargo install mentedb-mcp

Setup (auto-configures your client)

$ mentedb-mcp setup copilot  # or claude, cursor, vscode

Works with any MCP-compatible client: Claude Desktop, Cursor, Copilot CLI, VS Code, Windsurf, custom agents. The server exposes all 32 tools over the standard MCP protocol with zero additional configuration.
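For clients configured by hand, an MCP server entry generally looks like the following. The server name `mentedb` and the bare `command` are assumptions about what `mentedb-mcp setup` writes, not documented output; the `mcpServers` map is the shape used by clients such as Claude Desktop:

```json
{
  "mcpServers": {
    "mentedb": {
      "command": "mentedb-mcp"
    }
  }
}
```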

Learn more about all 32 tools →

0.29ms   Avg Insert
<1ms     Context Assembly at 10K
90.7%    Token Savings
5/5      Quality Benchmarks
0%       Stale Beliefs Returned

Benchmarks

Measured, not promised

Every claim backed by reproducible tests. Quality validated on every commit, performance measured with Criterion.

Quality Benchmarks

7/7 passing

Stale Belief

Superseded memories correctly excluded via graph edges

Delta Savings

90.7% reduction in memory retrieval tokens over 20 turns

Sustained Conversation

100 turns, 3 projects, 0% stale returns, 0.29ms insert

Attention Budget

U-curve ordering maintains 100% LLM compliance

Noise Ratio

100% useful memories vs 80% naive (+20pp improvement)

Mem0 Comparison

MenteDB PASS vs Mem0 FAIL on stale beliefs, 4.8x faster

10K Scale

10,000 memories, 6/6 belief changes tracked, 0 stale returns
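The Attention Budget result refers to U-curve ordering: the most relevant memories go at the start and end of the context, where transformer attention is strongest (primacy and recency), and the least relevant in the middle. A minimal sketch of one such ordering; the function name and exact policy are assumptions, not MenteDB's implementation:

```rust
/// Given items ranked most-relevant first, place them so relevance
/// forms a U-curve across the output: edges strong, middle weak.
/// Illustrative sketch only, not MenteDB's actual ordering code.
fn u_curve_order<T: Clone>(ranked: &[T]) -> Vec<T> {
    let mut front = Vec::new();
    let mut back = Vec::new();
    for (i, item) in ranked.iter().enumerate() {
        if i % 2 == 0 {
            front.push(item.clone()); // odd ranks fill the front half
        } else {
            back.push(item.clone()); // even ranks fill the back half
        }
    }
    back.reverse(); // so the 2nd-most-relevant lands at the very end
    front.extend(back);
    front
}

fn main() {
    let ordered = u_curve_order(&["most", "2nd", "3rd", "4th", "least"]);
    println!("{:?}", ordered); // ["most", "3rd", "least", "4th", "2nd"]
}
```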

Performance (Criterion)

Measured across memory counts

Benchmark           100      1,000    10,000
Insert              13ms     244ms    2.65s
Context Assembly    217µs    342µs    693µs

Context assembly stays sub-millisecond even at 10k memories. Insert scales linearly with predictable throughput.
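The Delta Savings figure above rests on a simple mechanic: within a session, a memory's full text only needs to enter the model's context once, and later turns resend only what is new. A self-contained sketch of that bookkeeping; the `DeltaAssembler` type and its policy are illustrative, not MenteDB's API:

```rust
use std::collections::HashSet;

// Sketch of delta-style context assembly: track which memory ids
// have already been served this session, and emit only the rest.
struct DeltaAssembler {
    served: HashSet<u64>, // ids already in the model's context
}

impl DeltaAssembler {
    fn new() -> Self {
        DeltaAssembler { served: HashSet::new() }
    }

    // Returns only the memories this turn actually needs to add.
    fn assemble<'a>(&mut self, relevant: &'a [(u64, &'a str)]) -> Vec<&'a str> {
        relevant
            .iter()
            .filter(|(id, _)| self.served.insert(*id)) // true only on first sight
            .map(|(_, text)| *text)
            .collect()
    }
}

fn main() {
    let mut asm = DeltaAssembler::new();
    let turn1 = asm.assemble(&[(1, "React"), (2, "TypeScript")]);
    let turn2 = asm.assemble(&[(1, "React"), (2, "TypeScript"), (3, "Webpack")]);
    println!("{} then {} memories sent", turn1.len(), turn2.len()); // 2 then 1
}
```

Over a long conversation the set of already-served memories dominates the set of new ones, which is where a large cumulative token saving would come from.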

Quickstart

Up and running in seconds

Terminal
$ cargo add mentedb
Updating crates.io index
Adding mentedb v0.1.0
 
$ mentedb serve --port 5555
MenteDB v0.1.0
Listening on 0.0.0.0:5555
 
$ curl http://localhost:5555/health
{"status":"ok","version":"0.1.0"}