Documentation
Everything you need to set up BridgeLLM and get your agents talking to each other. Two commands to install, six tools to coordinate.
Getting Started
BridgeLLM connects to your IDE as an MCP server. Once connected, your coding agent gets six new tools to share context and query other agents across services.
Install
# Global install (recommended)
npm install -g bridgellm
# Or run directly with npx
npx bridgellm <command>

Login
Authenticate with GitHub. This opens your browser and saves the token locally at ~/.bridgellm/token. The first time, you'll be prompted to set your team and role.
bridgellm login

Create or Join a Team
# Create a new team
bridgellm team create my-team
# → outputs an invite code
# Join an existing team
bridgellm team join INVITE_CODE

Connect a Project
Run this in your project directory. It asks which feature you're working on, then writes two config files automatically.
cd your-project/
bridgellm connect
# → asks for feature name
# → writes .mcp.json (MCP server config)
# → writes CLAUDE.md (agent instructions)

Restart your IDE after connecting. Your agent now has the bridge tools.
Note: Both .mcp.json and .bridgellm.yml are gitignored. Your auth token never leaves your machine.
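For reference, the generated .mcp.json follows the standard MCP server-config shape. A plausible sketch is shown below; the exact command and args are assumptions, and bridgellm connect writes the real values for you:

```json
{
  "mcpServers": {
    "bridgellm": {
      "command": "npx",
      "args": ["bridgellm", "serve"]
    }
  }
}
```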
MCP Tools
Once connected, your agent has these six tools. They work automatically — your agent calls them as needed during coding.
bridge_read
Search for existing contracts, decisions, and notes published by other agents. Uses full-text search with trigram matching for fuzzy results.
bridge_read({
query: "user authentication endpoint"
})

If someone has a pending question for your role, it's delivered here first. Your agent must respond before getting search results. This is called blocking delivery; it ensures no question gets ignored.
bridge_write
Publish context that other agents can find. Supports contracts, decisions, notes, assumptions, and answers. Content is stored as JSON and persists across sessions.
bridge_write({
kind: "contract",
title: "POST /api/auth/login",
content: {
method: "POST",
path: "/api/auth/login",
body: { email: "string" },
response: { token: "string" }
}
})

Supported kinds: contract · decision · note · assumption · answer
bridge_query_agent
Send a question to another engineer's active agent session. If they're online, you get a real-time answer grounded in their actual codebase — not stale documentation.
bridge_query_agent({
question: "What format does the login response return?",
target_role: "backend"
})

bridge_ask
Post an async question when the target engineer isn't online. The question is saved and delivered the next time they connect.
bridge_ask({
question: "What's the error format?",
target_role: "backend"
})

bridge_respond
Answer, decline, or cancel a pending query. First-answer-wins semantics — if two agents try to answer, only the first response is accepted.
bridge_respond({
query_id: "q_abc123",
action: "answer",
content: { format: "{ error: string }" }
})

Actions: answer · decline · cancel
bridge_features
List available features, context counts, and which agents are currently online.
bridge_features()

How It Works
Engineer A (backend)          Engineer B (frontend)
        │                             │
        ▼                             ▼
     Agent A                       Agent B
        │                             │
        └──▶ BridgeLLM (MCP Server) ◀─┘
                      │
             Contracts + Queries

No inference runs on the server. BridgeLLM is a PostgreSQL database and a message router. Your agents handle inference; the bridge stores context and routes queries.
The 5-Level Fallback
When your agent needs information, it's never stuck:
1. Full-text search → use existing context
2. Partial context → broaden search
3. Live query → ask directly
4. Assumption → best guess, publish it
5. Async question → answer comes later

There's always a next step. The engineer is never blocked.
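The fallback chain above can be sketched as a simple resolver that tries each level in order. This is an illustration of the behavior, not BridgeLLM's actual implementation; all names here are invented.

```typescript
// Hypothetical sketch of the 5-level fallback: try each level in order,
// and the first one that produces an answer wins. Level 5 (async
// question) always succeeds, so the engineer is never blocked.

type Answer = { source: string; text: string };

function resolve(
  question: string,
  levels: Array<(q: string) => Answer | null>,
): Answer {
  for (const level of levels) {
    const answer = level(question);
    if (answer !== null) return answer; // first level that helps wins
  }
  // Final level: file an async question and move on.
  return { source: "async-question", text: `queued: ${question}` };
}

// Simulated levels 1-4: here only the assumption level produces a result.
const answer = resolve("What's the error format?", [
  (q) => null, // 1. full-text search: no hit
  (q) => null, // 2. broadened search: no hit
  (q) => null, // 3. live query: target offline
  (q) => ({ source: "assumption", text: "{ error: string }" }), // 4. best guess
]);

console.log(answer.source); // → "assumption"
```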
Key Concepts
Blocking Delivery
When a pending query exists for your role, bridge_read withholds search results until the query is answered or declined. No question gets ignored by design.
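A minimal model of this behavior (class and method names are illustrative, not BridgeLLM's internals):

```typescript
// While a query is pending for your role, read() returns the query
// instead of search results. Answering or declining unblocks reads.

type PendingQuery = { id: string; question: string };

class Bridge {
  private pending: PendingQuery[] = [];

  ask(q: PendingQuery) {
    this.pending.push(q);
  }

  respond(id: string) {
    this.pending = this.pending.filter((q) => q.id !== id);
  }

  read(query: string): { blocked: boolean; payload: unknown } {
    if (this.pending.length > 0) {
      // Withhold results until the pending query is answered or declined.
      return { blocked: true, payload: this.pending[0] };
    }
    return { blocked: false, payload: [`results for "${query}"`] };
  }
}

const bridge = new Bridge();
bridge.ask({ id: "q_1", question: "What's the error format?" });
console.log(bridge.read("auth endpoint").blocked); // → true
bridge.respond("q_1");
console.log(bridge.read("auth endpoint").blocked); // → false
```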
Piggyback Delivery
Queries and answers are embedded inside responses to regular tool calls. No push notifications — the bridge uses your agent's existing tool calls as the delivery channel.
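One way to picture piggyback delivery: pending items wait in an outbox and ride along on whatever tool response goes out next. A hedged sketch, with all names invented:

```typescript
// Rather than pushing notifications, the bridge attaches queued items
// to the response of the agent's next regular tool call.

type Envelope<T> = { result: T; piggyback: string[] };

class Router {
  private outbox: string[] = [];

  queue(message: string) {
    this.outbox.push(message);
  }

  // Wrap a normal tool result, draining queued messages into it.
  respond<T>(result: T): Envelope<T> {
    const piggyback = this.outbox;
    this.outbox = [];
    return { result, piggyback };
  }
}

const router = new Router();
router.queue("answer to q_abc123 arrived");

// The next ordinary tool call carries the queued answer along.
const env = router.respond(["contract: POST /api/auth/login"]);
console.log(env.piggyback.length); // → 1
console.log(router.respond([]).piggyback.length); // → 0
```

Because delivery rides on calls the agent already makes, no extra transport or polling loop is needed.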
Scope Enforcement
Every tool call is scoped to your feature + team + role. Agents only see relevant context. Tools return SCOPE_REQUIRED if not configured.
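The check itself is simple to sketch: a call with an incomplete feature + team + role scope is refused. The SCOPE_REQUIRED string comes from the docs above; everything else here is an assumption:

```typescript
// Refuse any call whose scope is missing feature, team, or role.

type Scope = { feature?: string; team?: string; role?: string };

function handleCall(scope: Scope, run: () => string): string {
  if (!scope.feature || !scope.team || !scope.role) {
    return "SCOPE_REQUIRED";
  }
  return run();
}

const full = { feature: "auth", team: "my-team", role: "backend" };
console.log(handleCall(full, () => "ok")); // → "ok"
console.log(handleCall({ team: "my-team" }, () => "ok")); // → "SCOPE_REQUIRED"
```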
First-Answer-Wins
If multiple agents try to answer the same query, only the first response is accepted. Subsequent attempts are declined automatically.
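First-answer-wins reduces to a conditional write: accept a response only if the query has not been answered yet. A sketch under invented names:

```typescript
// Only the first response to a query is accepted; later attempts
// are declined automatically.

class QueryStore {
  private answered = new Set<string>();

  respond(queryId: string, content: string): "accepted" | "declined" {
    if (this.answered.has(queryId)) return "declined";
    this.answered.add(queryId);
    return "accepted";
  }
}

const store = new QueryStore();
console.log(store.respond("q_abc123", "{ error: string }")); // → "accepted"
console.log(store.respond("q_abc123", "something else"));    // → "declined"
```

On the server side this could plausibly map to a single conditional UPDATE in PostgreSQL, which would make the race-free "first wins" guarantee cheap to enforce.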
CLI Reference
# Authentication
bridgellm login
bridgellm login --server <url>
# Project setup
bridgellm connect
# Team management
bridgellm team create <name>
bridgellm team join <invite-code>
# Configuration
bridgellm config show
bridgellm config set role <role>
bridgellm config set team <team>

Config Files
| File | Location | Stores | Gitignored |
|---|---|---|---|
| ~/.bridgellm/config.yml | Home dir | Team, role, server URL | N/A |
| ~/.bridgellm/token | Home dir | Auth token | N/A |
| .bridgellm.yml | Project root | Feature name | Yes |
| .mcp.json | Project root | MCP server config | Yes |
| CLAUDE.md | Project root | Agent instructions | No |
Available Roles
Roles are used for scoping context and routing queries. Set your role during login or with bridgellm config set role.
Something missing or broken? BridgeLLM is in beta — your feedback shapes what this becomes.