mnemonic-ai: Persistent Memory for AI Coding Agents
Built and published an npm package that gives AI coding agents long-term memory across sessions, using multi-signal search, knowledge graphs, and reinforcement-based memory decay.
The situation
AI coding agents lose all context between sessions. Existing memory tools are simple vector stores — they work for 50 facts, then fall apart. Duplicates pile up, stale information poisons search results, everything scores the same, and after a few hundred memories the context window is filled with noise instead of signal.
What I tried
Designed a memory system with multiple retrieval signals: keyword matching and semantic similarity run in parallel, results are fused using rank-based scoring, then weighted by relevance, recency, and importance. Built a knowledge graph that auto-extracts entities from saved facts with bidirectional edges and aliases. Added reinforcement-based memory decay so frequently accessed facts stay strong while unused ones fade. Bi-temporal tracking handles contradictions — when a fact is superseded, the old version is hidden from searches but history is preserved. Published on npm with pre-built binaries for macOS and Linux.
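The rank-based fusion step can be sketched roughly as reciprocal rank fusion over the two result lists; this is an illustrative sketch, not the package's actual implementation, and the names and constant are hypothetical:

```typescript
// Reciprocal rank fusion: each result list contributes 1/(k + rank + 1)
// per memory id, so a fact ranked well by BOTH keyword and semantic
// search outscores one ranked well by only the closest embedding.
type Scored = { id: string; score: number };

function rankFuse(keywordHits: string[], semanticHits: string[], k = 60): Scored[] {
  const scores = new Map<string, number>();
  for (const list of [keywordHits, semanticHits]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

The fused score would then be multiplied by recency, importance, and relevance weights before results are returned.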
What I learned
The hardest problem was scoring. Pure vector similarity returns results that look plausible but are wrong. Combining keyword and semantic search with rank fusion was the breakthrough: it surfaces the fact that actually helps, not just the closest embedding. Memory decay needed careful tuning — too aggressive and useful facts disappear, too slow and noise accumulates. The reinforcement model with importance-weighted floors found the right balance.
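A decay model with an importance-weighted floor might look like the following sketch; the half-life, floor coefficient, and reinforcement bonus are illustrative assumptions, not the package's actual tuning:

```typescript
interface Memory {
  strength: number;   // current retrieval weight, 0..1
  importance: number; // 0..1, assigned when the fact is saved
  lastAccess: number; // epoch milliseconds
}

const HALF_LIFE_MS = 1000 * 60 * 60 * 24 * 30; // assumed ~30-day half-life

function decayedStrength(m: Memory, now: number): number {
  // Exponential decay since last access...
  const decayed = m.strength * Math.pow(0.5, (now - m.lastAccess) / HALF_LIFE_MS);
  // ...but important facts never fade below a floor scaled by importance,
  // so "too aggressive" decay cannot erase them entirely.
  const floor = 0.2 * m.importance;
  return Math.max(decayed, floor);
}

function reinforce(m: Memory, now: number): Memory {
  // Each access restores strength and resets the decay clock, so
  // frequently used facts stay strong while unused ones fade.
  return {
    ...m,
    strength: Math.min(1, decayedStrength(m, now) + 0.3),
    lastAccess: now,
  };
}
```

Under this model an unimportant, untouched fact approaches zero after a few half-lives, while a high-importance fact bottoms out at its floor and any access pushes it back up.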