Remb is the persistent memory layer for AI coding sessions. Context, decisions, and patterns survive across conversations — so every session picks up right where the last one ended.
$ remb init
✔ Project "my-app" registered
✔ GitHub connected, scanning 132 files...
✔ 24 features extracted, 8 core memories saved
$ remb context
→ 8 core memories loaded
→ 3 recent conversations restored
→ Architecture: Next.js + tRPC + Prisma
→ AI now has full project context ✨
The Problem
AI assistants lose all context between conversations. You repeat yourself, re-explain architecture, and watch your AI make the same mistakes you already corrected.
"This project uses Next.js App Router with server actions..." — typed for the 50th time in a new chat window.
You spent 2 hours deciding on a state management approach. New session? AI suggests the pattern you explicitly rejected.
AI doesn't know your folder structure, naming conventions, or which libraries you use. Every answer is generic.
Yesterday you built the auth system together. Today the AI asks "what authentication approach would you like to use?"
Features
Memories are organized in three layers: Core knowledge loads automatically every session, Active context surfaces on-demand when relevant, and Archive stores historical decisions for long-term recall. Nothing is ever lost.
Connects to GitHub, scans your repo, and extracts features, dependencies, and architectural patterns automatically.
Every session is logged — what was discussed, what was built, key decisions. Your next session starts with full history.
Written in Go for instant startup. Scan repos, manage memories, save context, link features, start the MCP server — all from your terminal. Ships as a single binary with zero runtime dependencies.
remb init · remb scan · remb context · remb memory · remb save · remb serve
Search memories and patterns across all your projects. Say "do it like in project X" and your AI pulls the relevant context instantly.
API keys stored with chmod 600. OAuth PKCE for browser login. Scoped tokens per project. Your context is yours — never shared across accounts.
Model Context Protocol
MCP is the open protocol that lets AI assistants connect to external tools and data. Remb exposes your project memory as a first-class MCP server — meaning Claude, Cursor, Windsurf, and any MCP-compatible client can access your context natively.
Load context bundles, search memories, log conversations, trigger scans, manage projects — all as MCP tool calls your AI can invoke autonomously.
Access via https://mcp.useremb.com/sse — no local binary needed. Just add the URL to your MCP client config and you're connected.
Run `remb serve` for a stdio MCP server — faster, offline-capable, works with any MCP client that supports local processes.
On every session start, Remb auto-loads project context and conversation history. Your AI starts every chat already knowing your codebase.
Claude Desktop / Cursor
{
"mcpServers": {
"remb": {
"url": "https://mcp.useremb.com/sse"
}
}
}
Local stdio (offline)
{
"mcpServers": {
"remb": {
"command": "remb",
"args": ["serve", "--project", "my-app"]
}
}
}
How It Works
Point Remb at your GitHub repo. It analyzes your entire codebase — folder structure, frameworks, dependencies, feature boundaries, and architectural patterns. The result is a structured knowledge graph of your project.
$ remb init
✔ Connected to github.com/you/my-app
✔ Scanning 247 files across 12 directories...
✔ Extracted 31 features, 5 service boundaries
✔ Identified: Next.js 15, Prisma, tRPC, Tailwind
As you code with AI, Remb captures decisions, patterns, and context. Memories are tiered by importance — critical architectural decisions in core, feature-specific notes in active, historical context in archive.
$ remb save -f auth -c "Using PKCE OAuth with refresh token rotation"
✔ Saved to auth (core tier)
$ remb memory list --tier core
1. Auth: PKCE OAuth with refresh rotation
2. DB: Prisma with connection pooling via PgBouncer
3. State: Zustand for client, server actions for mutations
When your AI starts a new conversation, Remb's MCP server automatically injects your project context, recent conversation history, and all relevant memories. No copy-pasting, no re-explaining — your AI already knows.
$ # AI's first action in every new chat:
→ remb_loadProjectContext()
Loading 8 core memories...
Loading 3 recent conversations...
Loading feature map (31 features)...
→ AI is now fully context-aware ✨
Get Started
Four ways to install. One powerful context layer.
Zero dependencies
curl -fsSL https://useremb.com/install.sh | sh
Node.js CLI
npm install -g remb-cli
macOS & Linux
brew tap useremb/remb && brew install remb
Extension
ext install remb.remb
From zero to full AI context in 4 steps.
The Go binary is fastest — single binary, no runtime. Or use npm if you prefer.
curl -fsSL https://useremb.com/install.sh | sh
Opens your browser for OAuth login. Creates a scoped API token for CLI and MCP access.
remb login
Connects your GitHub repo, scans the codebase, extracts features and architecture into structured memory.
remb init
Add Remb as an MCP server in Claude Desktop, Cursor, or Windsurf. Context injection is now automatic.
// Add to your MCP client config:
{
"mcpServers": {
"remb": {
"url": "https://mcp.useremb.com/sse"
}
}
}
Set up Remb once. Every AI conversation from here on starts with full project context.