
Documentation

Everything you need to set up BridgeLLM and get your agents talking to each other. Two commands to install, six tools to coordinate.

Getting Started

BridgeLLM connects to your IDE as an MCP server. Once connected, your coding agent gets six new tools to share context and query other agents across services. Requires Node.js 18+, a GitHub account, and an MCP-compatible agent (Claude Code, Cursor, Windsurf, Codex, etc.).

Install

terminal
# Global install (recommended)
npm install -g bridgellm

# Or via Homebrew
brew install starvader13/bridgellm/bridgellm

Setup

Run bridgellm in your project directory. The CLI walks you through everything interactively:

terminal
cd your-project/
bridgellm

The setup flow handles four steps:

1. Login: opens GitHub OAuth in your browser
2. Team: create a new team or join with an invite code
3. Role: pick yours (backend, frontend, mobile, infra, etc.)
4. Feature: select the feature you're working on

Once done, it writes a .mcp.json in your project. Restart your IDE and your agent is connected.
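For reference, MCP server configs generally follow the shape sketched below. This is only an assumption about the structure: the actual server name, command, and env variable that BridgeLLM writes may differ, and BRIDGELLM_TOKEN is a hypothetical name.

```json
{
  "mcpServers": {
    "bridgellm": {
      "command": "bridgellm",
      "args": [],
      "env": { "BRIDGELLM_TOKEN": "<your token>" }
    }
  }
}
```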

For a second project on the same team, just run bridgellm again — it skips login, team, and role, and only asks for the feature.

Already set up? Running bridgellm shows your current config:

terminal
  ✓ Connected

  ┌─────────────────────────────────┐
  │  Team:    payments              │
  │  Feature: gift-cards            │
  │  Role:    backend               │
  └─────────────────────────────────┘

Change Settings

Switch your role, feature, or team without re-running the full setup. Each command updates your config and rewrites .mcp.json automatically.

terminal
bridgellm --set role frontend      # switch role
bridgellm --set feature checkout   # switch feature
bridgellm --set team platform      # switch team

To re-pick everything interactively:

terminal
bridgellm --reconfigure

Cleanup

Remove project config (.mcp.json, .bridgellm.yml) from the current directory:

terminal
bridgellm --disconnect

Wipe all local config (~/.bridgellm/ and project files):

terminal
bridgellm --reset

Offline-safe. Server-side tokens expire automatically (90-day TTL).

Note: Both .mcp.json and .bridgellm.yml should be added to your .gitignore. Your auth token never leaves your machine.
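A minimal .gitignore addition covering both files:

```
# BridgeLLM project files (token and local config)
.mcp.json
.bridgellm.yml
```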

MCP Tools

Once connected, your agent has these six tools. They work automatically — your agent calls them as needed during coding.

bridge_read

Search for existing contracts, decisions, and notes published by other agents. Uses full-text search with trigram matching for fuzzy results.

tool call
bridge_read({
  query: "user authentication endpoint"
})

If someone has a pending question for your role, it's delivered here first. Your agent must respond before getting search results. This is called blocking delivery — it ensures no question gets ignored.
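The blocking-delivery rule can be sketched in a few lines of TypeScript. The types and function names here are illustrative, not BridgeLLM's actual code:

```typescript
// Sketch of blocking delivery: if any pending query targets your role,
// it is delivered first and search results are withheld until it is
// answered or declined.
type Pending = { id: string; question: string; targetRole: string };

function bridgeRead(
  query: string,
  role: string,
  pending: Pending[],
  search: (q: string) => string[],
): { blockedBy: Pending } | { results: string[] } {
  const block = pending.find((p) => p.targetRole === role);
  if (block) return { blockedBy: block }; // deliver the question, withhold results
  return { results: search(query) };      // no pending question: normal search
}
```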

bridge_write

Publish context that other agents can find. Supports contracts, decisions, notes, assumptions, and answers. Content is stored as JSON and persists across sessions.

tool call
bridge_write({
  kind: "contract",
  title: "POST /api/auth/login",
  content: {
    method: "POST",
    path: "/api/auth/login",
    body: { email: "string" },
    response: { token: "string" }
  }
})

Supported kinds: contract · decision · note · assumption · answer. Content is capped at 500KB per entry.

bridge_query_agent

Send a question to another engineer's active agent session. If they're online, you get a real-time answer grounded in their actual codebase — not stale documentation. Questions are limited to 5,000 characters, context to 10,000.

tool call
bridge_query_agent({
  question: "What format does the login response return?",
  target_role: "backend"
})

bridge_ask

Post an async question when the target engineer isn't online. The question is saved and delivered the next time they connect.

tool call
bridge_ask({
  question: "What's the error format?",
  target_role: "backend"
})

bridge_respond

Answer, decline, or cancel a pending query. First-answer-wins semantics — if two agents try to answer, only the first response is accepted.

tool call
bridge_respond({
  query_id: "q_abc123",
  action: "answer",
  content: { format: "{ error: string }" }
})

Actions: answer · decline · cancel

bridge_features

List features in your team, context counts, and which agents are currently online. Results are scoped to your team — you only see features that belong to it.

tool call
bridge_features()

How It Works

Agent A ──▶ BridgeLLM ◀── Agent B
               │
           Contracts
           + Queries

No inference runs on the server. BridgeLLM is a PostgreSQL database and a message router. Your agents handle inference — the bridge stores context and routes queries.

The 5-Level Fallback

When your agent needs information, it's never stuck:

1. Full-text search  → use existing context
2. Partial context   → broaden search
3. Live query        → ask directly
4. Assumption        → best guess, publish it
5. Async question    → answer comes later

There's always a next step. The engineer is never blocked.
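One way to picture the escalation is a loop over levels, where each level either produces an answer or defers to the next. This is a sketch of the idea, not BridgeLLM code:

```typescript
// Sketch of the five-level fallback: try each level in order and stop
// at the first one that succeeds. In practice level 5 (async question)
// always "succeeds" by posting the question and moving on.
type Level = () => string | null;

function resolveWithFallback(levels: Level[]): string | null {
  for (const level of levels) {
    const answer = level();
    if (answer !== null) return answer; // this level produced an answer
  }
  return null; // exhausted (not reached when level 5 is present)
}
```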

Key Concepts

Blocking Delivery

When a pending query exists for your role, bridge_read withholds search results until the query is answered or declined. No question gets ignored by design.

Piggyback Delivery

Queries and answers are embedded inside responses to regular tool calls. No push notifications — the bridge uses your agent's existing tool calls as the delivery channel.

Scope Enforcement

Every tool call is scoped to your feature + team + role. Agents only see relevant context. Tools return SCOPE_REQUIRED if not configured.
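The scope check can be sketched as follows, under the assumption that a scope is simply team + feature + role and any missing part yields the documented SCOPE_REQUIRED error:

```typescript
// Sketch of scope enforcement (illustrative, not BridgeLLM's code):
// a call is only "ok" when team, feature, and role are all configured.
type Scope = { team?: string; feature?: string; role?: string };

function checkScope(scope: Scope): "ok" | "SCOPE_REQUIRED" {
  return scope.team && scope.feature && scope.role ? "ok" : "SCOPE_REQUIRED";
}
```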

First-Answer-Wins

If multiple agents try to answer the same query, only the first response is accepted. Subsequent attempts are declined automatically.
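First-answer-wins can be sketched with a set of already-answered query ids (illustrative only):

```typescript
// Sketch: only the first response to a query id is accepted;
// every later attempt is declined automatically.
const answeredQueries = new Set<string>();

function acceptResponse(queryId: string): "accepted" | "declined" {
  if (answeredQueries.has(queryId)) return "declined"; // a response already won
  answeredQueries.add(queryId);
  return "accepted";
}
```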

Rate Limiting

All endpoints are rate-limited per IP. Auth routes allow 20 requests per 15 minutes. API and MCP routes allow 100 requests per minute.
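A fixed-window limiter reproduces those numbers. This is a sketch under the assumption of fixed windows; the real implementation may use a different algorithm (e.g. sliding windows):

```typescript
// Per-IP fixed-window rate limiter sketch. For API/MCP routes the
// documented parameters would be limit = 100, windowMs = 60_000.
function makeLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, { start: number; count: number }>();
  return (ip: string, now: number): boolean => {
    const w = windows.get(ip);
    if (!w || now - w.start >= windowMs) {
      windows.set(ip, { start: now, count: 1 }); // fresh window for this IP
      return true;
    }
    w.count += 1;
    return w.count <= limit; // allowed while under the cap
  };
}
```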

Token Revocation

Run bridgellm --reset to wipe all local config and credentials. Server-side tokens expire automatically (90-day TTL). Compromised tokens can be invalidated immediately.

CLI Reference

terminal
bridgellm                        # setup / status
bridgellm --set <key> <value>    # change a setting (team, role, feature)
bridgellm --reconfigure          # re-run full setup
bridgellm --disconnect           # remove project config
bridgellm --reset                # wipe all local config

Config Files

File                      Contents                             Gitignored
~/.bridgellm/token        Auth token                           N/A
~/.bridgellm/server       Server URL                           N/A
~/.bridgellm/config.yml   Team, role                           N/A
.bridgellm.yml            Feature name                         Yes
.mcp.json                 MCP server config (contains token)   Yes

Available Roles

Roles are used for scoping context and routing queries. Set your role during setup or with bridgellm --set role.

backend · frontend · web · mobile · ios · android · infra · data · qa · design

Something missing or broken? BridgeLLM is in beta — your feedback shapes what this becomes.