Alice and Bob on Claude Code. Carol on Mistral Vibe. Facts evolved correctly across all three.
The cross-vendor round-trip. Carol joined in Mistral Vibe, recalled 10 facts from Alice and Bob's Claude Code sessions on her first call, defined a logging schema, and Alice applied it in Claude Code. Neither agent read the source tree at any point.
Sources: /blog/vibe-cross-platform-memory
X / Twitter
Post 1 352 / 280
Alice and Bob built in Claude Code. Carol joined in Mistral Vibe. Carol recalled 10 facts from Alice and Bob's sessions on her first MCP call. Defined a structured logging schema. Alice applied it in Claude Code. Carol picked up Alice's extension in Vibe. Different companies. Different models. One store. Facts evolved correctly across all of them.
Hook tweet. The round-trip structure is the interesting part. Tuesday to Thursday 8-10am UK.
Post 2 342 / 280
The latency for Alice's context retrieval: 18ms p50, 32ms p95. The latency for Carol's: 6ms p50, 11ms p95. The store is SQLite on disk. Nothing goes to a cloud service. The speed comes from local reads, not from caching. Cross-vendor memory does not require infrastructure. It requires a shared fact store and a protocol both agents speak.
Technical follow-on. For the infrastructure-minded audience.
LinkedIn
Post 1495 chars
I ran a five-turn experiment across two AI platforms to test whether a shared Hydrate store could coordinate work between agents that have never shared a session.

Alice and Bob built a Go API in Claude Code across two sessions. Carol then joined the project in Mistral Vibe: a different company and a different underlying model entirely.

Carol's first action was hydrate_recall. She got back 10 facts from Alice and Bob's sessions covering the module structure, dependency policy, pagination conventions, and error format. She had never seen the source tree, and the store never told her to read it.

Carol defined a five-field structured logging schema (time, level, msg, component, correlation_id) and saved it to the store. Alice picked that up in her next Claude Code session, applied the schema correctly across all handlers, and added a defer pattern for consistent logging on every exit path.

Carol received Alice's extension in Vibe on her next recall and added an X-Correlation-ID response header convention. Alice applied it in Claude Code in the outermost middleware.

Five turns. Two platforms. Three developers. The facts evolved correctly across all of them without any agent reading the source tree. The p50 retrieval latency stayed under 20ms throughout. The store is SQLite on disk; nothing left the machine. The cross-vendor coordination happened entirely through MCP tool calls against a shared local file.

Full experiment: gethydrate.dev/blog/vibe-cross-platform-memory
The sequential turn structure makes this work as a LinkedIn story. Walk through the five turns. The latency numbers add credibility. Tuesday or Wednesday.
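Possible companion snippet for the blog or replies: a minimal Go sketch of the five-field logging schema and the defer pattern the post describes. Only the field names (time, level, msg, component, correlation_id) and the defer-on-every-exit-path idea come from the experiment write-up; the struct types, function names, and handler example are illustrative assumptions, not the project's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// LogEntry mirrors the five-field schema from the experiment:
// time, level, msg, component, correlation_id. The Go types are
// assumptions; only the field names come from the post.
type LogEntry struct {
	Time          time.Time `json:"time"`
	Level         string    `json:"level"`
	Msg           string    `json:"msg"`
	Component     string    `json:"component"`
	CorrelationID string    `json:"correlation_id"`
}

// entryJSON renders one entry as a single JSON log line.
func entryJSON(e LogEntry) string {
	b, _ := json.Marshal(e)
	return string(b)
}

// handleRequest sketches the defer pattern: a single deferred emit
// guarantees exactly one log line on every exit path, early returns
// and panics included.
func handleRequest(component, correlationID string, emit func(LogEntry)) {
	defer func() {
		emit(LogEntry{
			Time:          time.Now(),
			Level:         "info",
			Msg:           "request handled",
			Component:     component,
			CorrelationID: correlationID,
		})
	}()
	// ... handler body; any return path still triggers the deferred emit.
}

func main() {
	handleRequest("todos", "req-42", func(e LogEntry) {
		fmt.Println(entryJSON(e))
	})
}
```

The defer sits at the top of the handler, so the convention survives refactors that add new return paths.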
Bluesky
Post 1 263 / 300
Alice and Bob: Claude Code. Carol: Mistral Vibe. Carol recalled 10 facts from their sessions on her first call. Defined a logging schema. Alice applied it in Claude Code. Carol extended it in Vibe. Different companies. Different models. One store. All correct.
Short Bluesky version.
r/LocalLLaMA
Title
Cross-vendor AI memory via MCP: Claude Code and Mistral Vibe sharing a fact store, round-trip tested
Body
Tested a specific scenario: can two AI agents from different companies coordinate on the same project through a shared local memory store, without any shared session or direct handoff?

Setup: Alice and Bob used Claude Code. Carol used Mistral Vibe. All three had access to the same Hydrate store via the hydrate-mcp binary (three MCP tools over stdio, ~500 lines of Go, SQLite on disk).

Turn sequence:
1. Alice and Bob build a Go API across two Claude Code sessions. Hydrate captures architectural facts automatically via hooks.
2. Carol opens Mistral Vibe, calls hydrate_recall. Gets back 10 facts covering module structure, dependencies, error format, pagination.
3. Carol defines a structured logging schema (5 fields), saves it via hydrate_save_fact.
4. Alice's next Claude Code session: the hook injects the updated store, including Carol's logging schema. Alice applies it and extends it with a defer pattern.
5. Carol recalls again in Vibe. Gets Alice's extension. Adds an X-Correlation-ID header convention.
6. Alice applies the header convention in Claude Code middleware.

Neither agent read the source tree at any point. All coordination was through MCP tool calls against the local SQLite store. Retrieval latency: 18ms p50, 32ms p95.

The interesting finding: the conflict detection threshold (Word-Jaccard similarity 0.85) caught zero conflicts across the five turns. The schemas evolved additively rather than contradicting each other, which is what you want from parallel development.

MCP means this works with anything that speaks the protocol: Cursor, Cline, Continue.dev, Windsurf, Zed, Claude Desktop. Not just these two.
r/LocalLLaMA will appreciate the MCP implementation detail and the cross-vendor angle. Post weekdays.
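If commenters ask how the conflict check works, a minimal Go sketch of word-level Jaccard similarity against the 0.85 threshold could go in a reply. The lowercase/whitespace tokenization and the function names here are assumptions for illustration, not Hydrate's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// wordSet tokenizes a fact string into a lowercase word set.
// Real tokenization (punctuation handling, stemming) may differ;
// this is the simplest plausible version.
func wordSet(s string) map[string]bool {
	set := make(map[string]bool)
	for _, w := range strings.Fields(strings.ToLower(s)) {
		set[w] = true
	}
	return set
}

// wordJaccard computes word-level Jaccard similarity between two
// fact strings: |intersection| / |union| of their word sets.
func wordJaccard(a, b string) float64 {
	setA, setB := wordSet(a), wordSet(b)
	if len(setA) == 0 && len(setB) == 0 {
		return 1.0 // two empty facts are trivially identical
	}
	inter := 0
	for w := range setA {
		if setB[w] {
			inter++
		}
	}
	union := len(setA) + len(setB) - inter
	return float64(inter) / float64(union)
}

func main() {
	a := "handlers use a defer pattern for exit-path logging"
	b := "handlers log via a deferred call on every exit path"
	// A new fact scoring >= 0.85 against an existing one would be
	// flagged as a potential conflict instead of silently saved.
	fmt.Printf("similarity: %.2f\n", wordJaccard(a, b))
}
```

Additive extensions (new fields, new conventions) share few words with the facts they build on, which is consistent with the zero-conflict result across the five turns.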