Carol's lint task: 12 turns to 3
Team bench angle on canon propagation. Carol's lint session dropped from 12 turns to 3 because she already knew gopkg.in/yaml.v3 was in go.mod. Alice established that in session one. Carol inherited it without asking.
Sources: /blog/team-bench-39-percent-cheaper
X / Twitter
Post 1 273 / 280
Carol's lint task: 12 turns in the baseline, 3 with Hydrate. The difference: she already knew gopkg.in/yaml.v3 was in go.mod and internal/jsonlines was the canonical reader. Alice established that in session one. Carol inherited it. Zero turns rediscovering the same facts.
Hook tweet. Tuesday to Thursday 8-10am UK. The specific file and package names make this credible. Thread the team context below.
Post 2 271 / 280
Team result: 9 sessions, $1.015, 64 turns. Baseline: 8 sessions, $1.664, 88 turns. 39% lower cost. 27% fewer turns. And the number I did not expect: 53% fewer cache-read tokens. Not more. When Alice's conventions are injected, Bob and Carol never read files to find them.
Stats follow-on. The 53% drop in cache-reads is counterintuitive and worth the explanation.
LinkedIn
Post 1716 chars
I ran a controlled team benchmark: three developers, isolated Docker containers, shared git remote, no direct communication. Baseline: Hydrate disabled. Team run: Hydrate team sync enabled. Same project, same model, same prompts.

Carol's lint task in the baseline took 12 turns. With Hydrate it took 3. The difference was one fact: gopkg.in/yaml.v3 was already in go.mod and internal/jsonlines was the canonical reader. In the baseline, Carol spent turns finding that out by reading files. In the team run, the context block told her before her first prompt.

The headline numbers: 39% lower total cost ($1.015 vs $1.664) and 27% fewer turns (64 vs 88). But the number that surprised me was this: 53% fewer cache-read tokens consumed with Hydrate, not more.

I expected the opposite. The injected context is cached, so cache reads should rise. Instead the per-session total fell. The reason: baseline sessions have developers reading the same files repeatedly across turns. Every Read call on a file already in the conversation generates cache-read tokens. Hydrate replaces those file-read loops with a single injected block, cached once. The injected facts are cached, but the files are never read again.

The mechanism driving this is what I am calling canon propagation. Alice's first session established the codebase's character: module structure, test conventions, dependency policy, error format. In the baseline run, Bob and Carol each spent 3 to 5 turns rediscovering those conventions. In the team run, they arrived already knowing them.

That is the economic argument for team memory. Not smarter agents. Fewer repeated discoveries.

Full benchmark data: gethydrate.dev/blog/team-bench-39-percent-cheaper
Lead with Carol's specific numbers, then the team headline. The 53% cache-read finding is the most interesting detail for LinkedIn. Tuesday or Wednesday.
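Internal note: the cache-read claim in the LinkedIn draft can be illustrated with a toy model if anyone asks in replies. This is a sketch under a simplified prefix-cache assumption (anything already in the conversation is replayed as cache-read tokens once per subsequent turn); the function names and token counts are invented for illustration and are not benchmark data.

```python
# Toy model of why injecting context can LOWER cache-read tokens.
# Assumption: under prefix caching, content already in the conversation
# is replayed as cache-read tokens on every later turn. All numbers
# below are illustrative, not from the benchmark.

def baseline_cache_reads(file_tokens: int, turns: int) -> int:
    # Baseline: the agent Reads the file early, and its contents are
    # replayed from the cache on each of the remaining turns.
    return file_tokens * (turns - 1)

def hydrate_cache_reads(context_tokens: int, turns: int) -> int:
    # Team run: a small injected context block is cached once and
    # replayed each turn; the file itself is never Read at all.
    return context_tokens * (turns - 1)

# A 2,000-token file replayed across a 12-turn baseline session vs a
# 300-token injected summary across a 3-turn session (Carol's shape).
print(baseline_cache_reads(2000, 12))  # 22000
print(hydrate_cache_reads(300, 3))     # 600
```

The two effects compound: the injected block is smaller than the files it replaces, and the session it enables is shorter, so a much smaller prefix is replayed far fewer times. That direction of change is consistent with the 53% drop the post reports.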
Bluesky
Post 1 278 / 300
Carol's lint task: 12 turns without team context, 3 turns with. She already knew gopkg.in/yaml.v3 was in go.mod. Alice put that in the store in session one. 39% lower total cost. 53% fewer cache-read tokens. Three developers, isolated containers, no communication except git.
Bluesky version. Keep the specific filenames; they signal credibility.