Cross-Agent Context: How to Share Memory Between Cursor, Claude Code, and Codex

Most development teams don’t use just one AI coding tool anymore. A typical workflow might look like this:
- Cursor for in-editor autocomplete and quick edits
- Claude Code for complex refactors, design work, and deep reasoning
- Codex / GitHub Copilot for automated generation, especially in CI or quick scaffolding
Each of these tools is powerful on its own, but they all share a critical limitation: they don’t share memory. Context you build up in Cursor doesn’t carry over to Claude Code. Insights Claude develops about your codebase vanish when you switch to Codex. Every tool, every session, starts from zero.
This is the cross-agent context problem.
As multi-agent workflows become the norm, this problem is getting more painful. The good news: there are practical strategies you can use today, and a clear path toward a shared context layer using tools like vexp and MCP.
Why Cross-Agent Context Matters Now
Two years ago, most teams experimented with a single AI coding tool, used occasionally. Today, many teams:
- Run multiple AI tools simultaneously on the same codebase
- Use different tools for different tasks (autocomplete vs refactor vs CI generation)
- Have different developers preferring different agents
This leads to compounding context fragmentation:
- Each tool independently re-explores the same codebase
- Insights built in one tool can’t inform another
- Switching tools mid-task discards accumulated context
- Teams can’t share AI-developed context the way they share code
The result is a massive exploration tax paid repeatedly:
- Per tool
- Per session
- Per developer
Cross-agent context is about paying that tax once and letting every tool benefit.
How Current Tools Handle Context
Cursor
Cursor maintains a proprietary semantic index over your codebase, often surfaced as @Codebase.
- Context is session-local and tool-local
- The index is not exposed in a way that other tools can consume
- Cursor rules (.cursor/rules or .cursorrules) act like a static project guide that gets prepended to context
- There’s no automatic persistence of session insights beyond what you manually encode in rules
Claude Code
Claude Code uses CLAUDE.md as a project-level context file.
- CLAUDE.md is loaded at session start and persists as a file in your repo
- Claude Code itself has no long-term session memory across runs
- Each new session requires re-exploration of the codebase
- CLAUDE.md is great for static orientation, not for dynamic, task-specific insights
Codex / GitHub Copilot
GitHub Copilot and Codex rely primarily on ephemeral context windows:
- Some persistent understanding comes from GitHub (commits, PRs, issues), but it’s read-only
- Completions are driven by the current buffer + nearby files, not accumulated exploration
- They don’t ingest or reuse the internal state from your Cursor or Claude sessions
The Pattern
Across Cursor, Claude Code, and Copilot/Codex, the pattern is the same:
- Context is session-local
- Context is tool-siloed
- Exploration is duplicated across tools and sessions
Using more tools doesn’t compound your AI’s understanding of the codebase; it multiplies your exploration cost.
What Cross-Agent Context Should Look Like
An ideal cross-agent setup has three layers:
- Shared index layer – one structural understanding of the codebase
- Shared session memory – reusable higher-level observations
- Git-native context – context that travels with your code
1. Shared Index Layer
Imagine a single index for your codebase:
- File hashes
- Dependency graph
- Semantic embeddings
- Symbol-level relationships
This index is built once, then queried by any tool.
- When Claude Code explores auth.ts and builds an understanding of its dependencies, that work enriches the index
- When Cursor asks about auth.ts 30 minutes later, it can reuse that enriched context instead of re-exploring
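To make the idea concrete, here is a minimal sketch of what such a shared index could look like: per-file content hashes plus a dependency graph, serialized so any tool can diff it against the working tree. This is an illustration of the concept, not vexp’s actual manifest schema.

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    """Content hash used to detect when a file's index entry is stale."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(root: Path, deps: dict[str, list[str]]) -> dict:
    """Build a minimal shared-index manifest: per-file hashes plus a
    dependency graph. Any tool can diff hashes to find what changed."""
    files = {p.as_posix(): file_hash(p) for p in sorted(root.rglob("*.py"))}
    return {"version": 1, "files": files, "deps": deps}

# Demo project so the example is self-contained
root = Path("demo_project")
(root / "app").mkdir(parents=True, exist_ok=True)
(root / "app" / "auth.py").write_text("def login(): ...\n")
(root / "app" / "db.py").write_text("def connect(): ...\n")

manifest = build_manifest(
    root, deps={"demo_project/app/auth.py": ["demo_project/app/db.py"]}
)
# Written to disk so it can be committed alongside the code
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Because the manifest is plain JSON keyed by stable paths, any client can answer “has this file changed since it was last explored?” with a single hash comparison instead of re-reading the file.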
This is the architectural role of vexp:
- vexp maintains a shared index in .vexp/manifest.json (committed to git) and a local index.db
- Any agent that implements the vexp MCP protocol can query this index
- You get one index, multiple clients, instead of per-tool indexing
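MCP is JSON-RPC 2.0 under the hood, so “querying the index” ultimately means sending a tools/call request to the server. The sketch below builds such a request; the tool name query_index and its arguments are hypothetical illustrations, not vexp’s actual API.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 tools/call request as a string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical query: "what does auth.ts depend on?"
request = make_tool_call(1, "query_index", {"file": "auth.ts", "relation": "deps"})
```

The point of the shared layer is that this same request shape works regardless of which agent sends it; the index server, not each client, owns the codebase understanding.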
2. Shared Session Memory
Beyond raw code structure, tools develop observations during a session, such as:
- “This module uses the repository pattern.”
- “Authentication is centralized in AuthService.”
- “The test suite uses factory-boy for fixtures.”
If these observations are:
- Persisted
- Linked to code symbols
- Exposed via a standard interface
…then any tool can reuse them in future sessions.
vexp already supports this pattern within a tool:
- It stores observations linked to fully qualified names (FQNs)
- It can surface relevant observations in later sessions
For cross-tool sharing, we need:
- A standard observation format
- Multiple tools reading/writing to the same vexp instance (via MCP)
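As a sketch of what a standard, FQN-keyed observation format could look like, the example below uses an append-only JSON-lines store. The field names and file layout here are illustrative assumptions, not vexp’s actual storage schema.

```python
import json
import time
from dataclasses import dataclass, asdict, field
from pathlib import Path

@dataclass
class Observation:
    """One reusable insight, linked to a fully qualified name (FQN)."""
    fqn: str            # e.g. "app.services.AuthService"
    text: str           # the insight itself
    source_agent: str   # which tool recorded it
    created_at: float = field(default_factory=time.time)

def append_observation(store: Path, obs: Observation) -> None:
    """Append to a JSON-lines store that any tool could read or write."""
    with store.open("a") as f:
        f.write(json.dumps(asdict(obs)) + "\n")

def observations_for(store: Path, fqn: str) -> list[Observation]:
    """Fetch earlier insights about a symbol before re-exploring it."""
    if not store.exists():
        return []
    return [
        Observation(**json.loads(line))
        for line in store.read_text().splitlines()
        if json.loads(line)["fqn"] == fqn
    ]

store = Path("observations.jsonl")
append_observation(store, Observation(
    fqn="app.services.AuthService",
    text="Authentication is centralized here; all routes call this service.",
    source_agent="claude-code",
))
```

Because each record carries both the FQN and the agent that wrote it, a second tool can filter for relevant insights and also weigh their provenance.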
3. Git-Native Context
The cleanest way to share context across agents and teammates is to make it code-like:
- Stored in your repo
- Versioned with git
- Available to any tool that can read files
vexp does this partially:
- .vexp/manifest.json is committed to git
- When a teammate pulls, their vexp index reflects the same structural state
This doesn’t yet include all session observations, but it ensures everyone starts from the same structural understanding of the codebase.
Practical Cross-Agent Strategies You Can Use Today
Full, automatic cross-agent context sharing is still emerging, but you can get most of the benefit with a few patterns.
Strategy 1: Common CLAUDE.md / Rules File
Create a single shared context document and wire it into every tool that supports a project-level rules file.
- For Claude Code: CLAUDE.md
- For Cursor: .cursorrules or .cursor/rules
- For other tools: their equivalent project config / rules file
This is static, so use it for stable project-level facts, not per-task details.
What to put in the shared context document:
- Project structure and key directories
- Technology stack and major frameworks
- Coding conventions and style guidelines
- Key architectural decisions and patterns
- Pointers to the most important files for common tasks
What not to put there:
- Task-specific context (it will be stale quickly)
- Highly detailed, low-level technical notes better loaded on demand
- Frequently changing implementation details
This gives every tool the same baseline understanding of your project.
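As a starting point, a shared context document along these lines (all project details below are placeholders) can live as CLAUDE.md and be copied or symlinked into each tool’s rules location:

```markdown
# Project Context (shared across AI tools)

## Structure
- `app/` — application code; entry point is `app/main.py`
- `tests/` — pytest suite; shared fixtures live in `tests/conftest.py`

## Stack
- Python 3.12, FastAPI, SQLAlchemy, Postgres

## Conventions
- Services hold business logic; routers stay thin
- All auth flows go through `app/services/auth.py`

## Key files for common tasks
- Adding an endpoint: `app/routers/`, then register in `app/main.py`
```

Keeping a single source file and mirroring it into each tool’s expected location avoids the drift that comes from maintaining parallel rules files by hand.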
Strategy 2: vexp as the Shared Index
Install and configure vexp as a shared index layer for your codebase.
vexp currently supports 12 AI coding agents via MCP, including:
- Claude Code
- Cursor
- Windsurf
- GitHub Copilot
- Continue.dev
- Augment
- Zed
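Registering vexp with these agents follows the standard MCP server config pattern. As a hedged example, a project-level .mcp.json entry for Claude Code might look like the following; the exact command and arguments are assumptions, so check vexp’s own setup docs for your agent:

```json
{
  "mcpServers": {
    "vexp": {
      "command": "vexp",
      "args": ["mcp"]
    }
  }
}
```

Cursor and most other MCP-capable agents accept an equivalent `mcpServers` entry in their own config file, so one server definition serves every client.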
Nicola
Developer and creator of vexp — a context engine for AI coding agents. I build tools that make AI coding assistants faster, cheaper, and actually useful on real codebases.