Recursive Large Model (RLM) Memory System - A Model Context Protocol (MCP) server that provides AI agents with persistent memory and semantic file discovery.
The core philosophy: the AI agent is intentionally blinded to the file system. Instead of using `ls`, `grep`, `find`, or `dir`, the agent relies on the MCP server to be its eyes and memory.
┌─────────────────────────────────────────────────────────────────┐
│ YOU (Developer) │
│ │
│ npm start → Opens UI at http://localhost:3848 │
│ View all projects and memories in real-time │
│ Test all tools via the testing interface │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ projects/ directory │
│ rlm-memory-mcp-server/projects/ │
│ ├── jumpinotech/.rlm/ │
│ ├── my-app/.rlm/ │
│ └── another-project/.rlm/ │
└─────────────────────────────────────────────────────────────────┘
▲
│
┌─────────────────────────────────────────────────────────────────┐
│ AI Agents (Claude Code, Codex, etc) │
│ │
│ NEW! Bi-directional communication: │
│ Agent asks: "What files for this task?" → MCP answers │
│ MCP asks: "Is indexing complete?" → Agent confirms │
└─────────────────────────────────────────────────────────────────┘
cd rlm-memory-mcp-server
# Install dependencies
npm install
# Build
npm run build
# Create .env file with your Gemini API key
echo 'GEMINI_API_KEY=your-key-here' > .env
# Start the UI (for you to view memories and test tools)
npm start
# → Opens http://localhost:3848

| Command | Description |
|---|---|
| `npm start` | Start the UI server (for viewing memories + testing) |
| `npm run dev` | Start the UI in development mode with auto-reload |
| `npm run mcp` | Run the MCP server directly (for testing) |
| `npm run build` | Build TypeScript to JavaScript |
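If you want to exercise the built server programmatically instead of through the UI, a minimal sketch using the `@modelcontextprotocol/sdk` client is shown below. This is illustrative only: it assumes the SDK is installed separately and that `dist/index.js` is your build output path.

```typescript
// list-tools.ts - sketch: connect to the built MCP server over stdio and list its tools.
// Assumes @modelcontextprotocol/sdk is installed and `npm run build` has been run.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server as a child process; it speaks JSON-RPC over stdin/stdout.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"], // adjust to your build output path
  });

  const client = new Client({ name: "rlm-smoke-test", version: "0.0.1" });
  await client.connect(transport);

  // Ask the server which tools it exposes (rlm_init, rlm_query, ...).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```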
Create a .env file:
# Required for AI features
GEMINI_API_KEY=your-gemini-api-key
# Optional
UI_PORT=3848
Get a Gemini API key at Google AI Studio.
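As a rough sketch of how these variables are consumed (assuming the server loads them with `dotenv`; the actual handling in the code may differ):

```typescript
// Config sketch: GEMINI_API_KEY enables AI features, UI_PORT defaults to 3848.
import "dotenv/config"; // loads .env into process.env

const geminiApiKey = process.env.GEMINI_API_KEY; // undefined => keyword-based fallback mode
const uiPort = Number(process.env.UI_PORT ?? 3848);

if (!geminiApiKey) {
  console.warn("GEMINI_API_KEY not set - running with keyword-based fallback only.");
}
console.log(`UI will listen on port ${uiPort}`);
```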
Add to ~/.claude.json:
{
"mcpServers": {
"rlm-memory": {
"command": "node",
"args": ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]
}
}
}
Or use CLI:
claude mcp add rlm-memory -- node D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js
Add to ~/.codex/config.toml:
[mcp_servers.rlm-memory]
command = "node"
args = ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]Add to ~/.gemini/mcp.json:
{
"servers": {
"rlm-memory": {
"command": "node",
"args": ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]
}
}
}
For AI Agent Integration: See example_agents.md for concise rules AI agents should follow.
| Tool | Purpose |
|---|---|
| `rlm_init` | Initialize a new project for tracking |
| `rlm_status` | Get project statistics |
| `rlm_list_projects` | List all tracked projects |

| Tool | Purpose |
|---|---|
| `rlm_query` | PRIMARY - Ask the MCP about relevant files for a user request |
| `rlm_recall_memory` | Retrieve relevant past context by keywords |
| `rlm_find_files_by_intent` | Semantic file search by natural language |

| Tool | Purpose |
|---|---|
| `rlm_index_codebase` | Scan & index an existing codebase |
| `rlm_verify_index` | Verify indexing is complete (post-index check) |
| `rlm_smart_memory` | RECOMMENDED - Create memory with rich metadata |
| `rlm_create_memory` | Basic memory creation (legacy) |

| Tool | Purpose |
|---|---|
| `rlm_manage_sitemap` | Delete, move, or update file entries when the codebase changes |
The main tool for AI agent ↔ MCP communication.
The AI agent asks: "The user wants to fix the login button - what files should I look at?" The MCP's Gemini backend searches memory + file map + edit history and returns the relevant files with context.
{
"project_name": "my-app",
"user_request": "The user wants to fix the submit button color on the login form",
"include_memories": true,
"include_suggestions": true,
"max_files": 10
}
Returns:
- `relevant_files`: Files with descriptions, recent changes, component type, and feature area
- `relevant_memories`: Past work related to this request
- `ai_analysis`: Explanation of how to approach the task
- `suggestions`: Tips for the AI agent
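For reference, the response could be modeled roughly like this in TypeScript. The field names are taken from the list above; the exact schema lives in the server's Zod definitions and may differ.

```typescript
// Hypothetical shape of an rlm_query result, inferred from the fields listed above.
interface RlmQueryResult {
  relevant_files: Array<{
    path: string;
    description: string;
    component_type?: string;   // e.g. "button", "form"
    feature_area?: string;     // e.g. "auth", "checkout"
    recent_changes?: string[]; // summaries from the edit history
  }>;
  relevant_memories: Array<{
    user_prompt: string;
    changes_context: string;
    created_at: string;
  }>;
  ai_analysis: string;   // how Gemini suggests approaching the task
  suggestions: string[]; // tips for the AI agent
}
```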
Creates memory entries with rich metadata. The AI agent provides detailed context, and Gemini:
- Extracts optimal keywords for semantic search
- Classifies files by component type (button, form, modal, api-endpoint, etc.)
- Classifies files by feature area (auth, checkout, dashboard, etc.)
- Tracks edit history for each file
{
"project_name": "my-app",
"user_prompt": "Fix the submit button color",
"changes_context": "Changed the submit button in LoginForm to use the primary theme color instead of hardcoded blue. Also added hover state styling.",
"files_modified": [
{
"path": "src/components/LoginForm.tsx",
"change_type": "modified",
"change_summary": "Updated button color to use theme.primary, added hover state"
}
],
"new_features": ["themed-buttons"],
"affected_areas": ["auth", "ui"]
}
After indexing a codebase, this tool asks: "Is this everything? Are you sure?"
{
"project_name": "my-app",
"expected_features": ["authentication", "payment", "dashboard"],
"report_format": "summary"
}
Returns:
- Files indexed grouped by type and feature area
- Potential gaps detected (e.g., "No test files found")
- Confirmation prompt for the AI agent
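Conceptually, the gap detection is a simple pass over the indexed entries. The sketch below is illustrative only and does not reflect the server's actual heuristics; the entry fields mirror the file map described further down.

```typescript
// Sketch: flag likely gaps after indexing, e.g. expected features with no files
// and an absence of test files.
interface IndexedFile {
  path: string;
  feature_area?: string;
}

function detectGaps(files: IndexedFile[], expectedFeatures: string[]): string[] {
  const gaps: string[] = [];
  const coveredAreas = new Set(files.map((f) => f.feature_area));

  for (const feature of expectedFeatures) {
    if (!coveredAreas.has(feature)) {
      gaps.push(`No files indexed for expected feature "${feature}"`);
    }
  }
  if (!files.some((f) => /\.(test|spec)\.[jt]sx?$/.test(f.path))) {
    gaps.push("No test files found");
  }
  return gaps;
}
```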
Keep your sitemap in sync when the codebase changes.
AI agents can use this tool to:
- Delete entries for files that no longer exist
- Move entries when files are renamed/moved
- Update metadata (description, keywords, component_type, feature_area)
{
"project_name": "my-app",
"operations": [
{ "action": "delete", "file_path": "src/old-component.tsx" },
{ "action": "move", "file_path": "src/Button.tsx", "new_path": "src/ui/Button.tsx" },
{
"action": "update",
"file_path": "src/api/auth.ts",
"updates": {
"description": "JWT authentication service",
"keywords": ["jwt", "auth", "token"],
"feature_area": "security"
}
}
]
}
Returns:
- Summary of successful/failed operations
- Detailed results for each operation
- Current sitemap entry count
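Each operation is essentially a small mutation of the file map keyed by path. The sketch below shows the idea only; it is not the server's implementation, and the entry fields are simplified.

```typescript
// Sketch: apply delete/move/update operations to an in-memory sitemap (path -> entry).
type SitemapEntry = { description?: string; keywords?: string[]; feature_area?: string };
type SitemapOp =
  | { action: "delete"; file_path: string }
  | { action: "move"; file_path: string; new_path: string }
  | { action: "update"; file_path: string; updates: Partial<SitemapEntry> };

function applyOps(sitemap: Map<string, SitemapEntry>, ops: SitemapOp[]) {
  const results = ops.map((op) => {
    const entry = sitemap.get(op.file_path);
    if (!entry) return { op, ok: false, reason: "not found" };
    if (op.action === "delete") sitemap.delete(op.file_path);
    if (op.action === "move") {
      sitemap.delete(op.file_path);
      sitemap.set(op.new_path, entry);
    }
    if (op.action === "update") sitemap.set(op.file_path, { ...entry, ...op.updates });
    return { op, ok: true };
  });
  // Mirrors the documented return: per-operation results plus the current entry count.
  return { results, total_entries: sitemap.size };
}
```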
Each file in the map now includes:
- `component_type`: button, form, modal, hook, service, api-endpoint, etc.
- `feature_area`: auth, checkout, dashboard, user-profile, etc.
- `edit_history`: Array of past changes with dates and summaries
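A file map entry can therefore be pictured as something like the following. This is a sketch whose names mirror the fields above, not the server's exact schema.

```typescript
// Sketch of an enriched file map entry as described above.
interface FileMapEntry {
  path: string;
  description: string;
  keywords: string[];
  component_type?: string; // "button" | "form" | "modal" | "hook" | "service" | "api-endpoint" | ...
  feature_area?: string;   // "auth" | "checkout" | "dashboard" | "user-profile" | ...
  edit_history: Array<{
    date: string;         // ISO timestamp of the change
    summary: string;      // what changed and why
    user_prompt?: string; // the request that triggered the edit
  }>;
}
```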
The rlm_find_files_by_intent tool now:
- Uses component type and feature area to narrow results
- Considers edit history for relevance scoring
- Won't return ALL buttons when you ask for ONE specific button
- Provides reasoning for why files were selected
All tools work without a Gemini API key (keyword-based fallback):
- `rlm_query`: Uses weighted keyword matching
- `rlm_smart_memory`: Infers types from file paths
- `rlm_find_files_by_intent`: Basic keyword search
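A minimal sketch of what weighted keyword matching can look like, purely for illustration (not the actual fallback code): words from the request are matched against each entry's stored keywords and description, with curated keywords weighted more heavily.

```typescript
// Sketch: keyword-only relevance scoring used when no Gemini API key is available.
interface ScoredEntry { path: string; description: string; keywords: string[] }

function keywordScore(entry: ScoredEntry, requestWords: string[]): number {
  let score = 0;
  for (const word of requestWords.map((w) => w.toLowerCase())) {
    if (entry.keywords.some((k) => k.toLowerCase().includes(word))) score += 3; // curated keywords weigh most
    if (entry.description.toLowerCase().includes(word)) score += 1;             // description matches weigh less
  }
  return score;
}

// Usage: rank entries for "fix the submit button color"
// entries.sort((a, b) => keywordScore(b, words) - keywordScore(a, words));
```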
{
"project_name": "jumpinotech",
"working_directory": "D:\\projects\\jumpinotech"
}
Creates `projects/jumpinotech/.rlm/` with memory storage.
{
"project_name": "jumpinotech",
"keywords": ["auth", "login", "session"]
}
Returns relevant memories from past work.
{
"project_name": "jumpinotech",
"user_prompt": "I need to fix the submit button color"
}
Uses AI to find relevant files from the semantic map.
{
"project_name": "jumpinotech",
"user_prompt": "Fix login timeout",
"changes_summary": "Increased session timeout from 30min to 2hrs",
"files_modified": ["src/config/auth.ts"],
"keywords": ["auth", "session", "timeout"]
}

{
"project_name": "jumpinotech",
"directory_path": "D:\\projects\\jumpinotech",
"max_files": 200,
"read_content": true
}
Now also extracts `component_type` and `feature_area`, and prompts for verification afterwards.
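When no Gemini key is configured, these fields have to be inferred from the path alone (as noted in the fallback list above). The sketch below illustrates that kind of inference; it is not the server's actual rules.

```typescript
// Sketch: infer component_type and feature_area from a file path when no AI is available.
function inferFromPath(filePath: string): { component_type?: string; feature_area?: string } {
  const lower = filePath.toLowerCase();

  const component_type =
    /button/.test(lower) ? "button" :
    /form/.test(lower) ? "form" :
    /modal|dialog/.test(lower) ? "modal" :
    /\/hooks\/|use[A-Z]/.test(filePath) ? "hook" :
    /\/api\/|route|endpoint/.test(lower) ? "api-endpoint" :
    /service/.test(lower) ? "service" :
    undefined;

  const feature_area =
    /auth|login|session/.test(lower) ? "auth" :
    /checkout|payment|cart/.test(lower) ? "checkout" :
    /dashboard/.test(lower) ? "dashboard" :
    undefined;

  return { component_type, feature_area };
}

// inferFromPath("src/components/LoginForm.tsx") -> { component_type: "form", feature_area: "auth" }
```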
User: "Help me work on this new project"
│
▼
┌──────────────────────────────────────┐
│ 1. rlm_init │
│ Initialize project │
└──────────────────────────────────────┘
│
▼
Ready for RLM workflow!
User: "Index this codebase"
│
▼
┌──────────────────────────────────────┐
│ 1. rlm_init + rlm_index_codebase │
│ Scans directory, builds file map │
│ with AI-generated descriptions │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 2. rlm_verify_index │
│ MCP asks: "Is this everything?" │
│ Shows what was indexed + gaps │
└──────────────────────────────────────┘
│
▼
Project is ready!
User: "Fix the submit button"
│
▼
┌──────────────────────────────────────┐
│ 1. rlm_query (PRIMARY TOOL) │
│ "User wants to fix submit button" │
│ → Gets: Relevant files, past │
│ memories, AI suggestions │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 2. AI reads & fixes the files │
│ Using context from rlm_query │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 3. rlm_smart_memory (MANDATORY!) │
│ Records changes with rich context │
│ Updates file map with edit history│
└──────────────────────────────────────┘
User: "Fix the submit button"
│
▼
┌──────────────────────────────────────┐
│ 1. rlm_recall_memory │
│ keywords: ["submit", "button"] │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 2. rlm_find_files_by_intent │
│ "Fix submit button not working" │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 3. AI reads & fixes the files │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ 4. rlm_create_memory │
│ Records what was done │
└──────────────────────────────────────┘
rlm-memory-mcp-server/
├── src/
│ ├── index.ts # MCP server (for AI agents via stdio)
│ ├── ui/
│ │ └── server.ts # Web UI (for you at localhost:3848)
│ ├── services/
│ │ ├── database.ts # File-based storage
│ │ └── gemini.ts # Gemini AI (semantic search, keywords)
│ ├── tools/
│ │ ├── query.ts # NEW: rlm_query
│ │ ├── smart-memory.ts # NEW: rlm_smart_memory
│ │ ├── verify-index.ts # NEW: rlm_verify_index
│ │ ├── index-codebase.ts # Enhanced with types
│ │ ├── find-files.ts # Enhanced semantic search
│ │ ├── recall-memory.ts
│ │ ├── create-memory.ts
│ │ └── init-status.ts
│ └── schemas/ # Zod validation
├── projects/ # All project data stored here
│ ├── jumpinotech/.rlm/
│ └── my-app/.rlm/
├── dist/ # Built JavaScript
├── .env # Your API keys
└── package.json
Open http://localhost:3848 after running npm start:
- Real-time updates - Auto-refreshes every 5 seconds
- Project browser - See all tracked projects
- Memory viewer - View all memories with timestamps
- File map - See the semantic file index with component types and feature areas
- Search - Filter projects by name
- Tool testing - Test all MCP tools directly from the UI
Centralized storage in projects/ means:
- One place to back up all AI memories
- Easy to view across all projects in the UI
- No cluttering project repos with `.rlm` folders
- Works even if you delete project folders
Yes, it works without a Gemini API key - it falls back to keyword matching; the AI features just won't be as smart.
To back up or move your data, just copy the projects/ folder.
- `rlm_query`: Comprehensive - searches files + memories + edit history, returns AI analysis and suggestions
- `rlm_recall_memory`: Simple - just searches memories by keywords
- `rlm_smart_memory`: Rich metadata - extracts component types and feature areas, tracks edit history
- `rlm_create_memory`: Basic - just stores the memory entry
MIT