# Hydrate social kits
31 kits. Sorted by most recent.
| ID | Title | Summary | Angle | Status | Platforms | Created |
|---|---|---|---|---|---|---|
| 0012 | What 100 developers on Claude Code actually cost in 2026 | Enterprise cost angle. Anthropic quietly revised their Claude Code cost documentation: average developer $6/day to $13/day, 90th percentile $12 to $30. Full 100-developer cost model, benchmarked Hydrate savings ($134K to $234K/year), plus the compounding factors not in that number: onboarding, churn, cross-tool fragmentation. | benchmark | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0006 | 18 sessions, 3 developers, 39% lower cost. And 53% fewer cache reads. | Benchmark angle on the team bench experiment. Three isolated Docker containers, shared git remote, no direct communication. Baseline vs Hydrate team sync. 39% lower cost, 27% fewer turns, and the surprising finding: 53% fewer cache-read tokens with Hydrate, not more. | benchmark | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0002 | Alice built a full Go REST API from 13 facts and zero source files | Technical angle on the tier-migration-verified benchmark. Alice's working directory had only a .git folder. 13 injected facts, 195 tokens, one prompt. Claude produced 5 files with every convention correct. It also cited Bob by name in a code comment. | technical | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0024 | Anthropic doubled their Claude Code cost estimate. Here's what actually changed. | Business Insider reported Anthropic revised their Claude Code documentation without announcement. Average developer daily cost: $6 to $13. 90th percentile: $12 to $30. The explanation is accurate but unhelpful: better models get used harder. For a 100-developer team, that is $343,200 a year (arithmetic sketched below the table). Context re-establishment is not in that number. Hydrate cuts it by 39%. | announcement | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0019 | Capture without consolidation is a notebook you write in but never read back | Two features form a loop. Dream system: offline consolidation between sessions, scans the fact store, detects contradictions, produces a structured report. Inner monologue: lightweight working memory during sessions, runs 7 rules, feeds uncertain signals as seeds to the dream system. Both deterministic, both under a millisecond. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0003 | Bob described the entire codebase without reading a single file | Narrative angle on the tier-migration-verified benchmark. Bob's container had a fresh workspace and no source files. One prompt. Claude listed every convention accurately and then named exactly where its knowledge came from: the hook. | narrative | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0011 | 67% to 97%: the cache hit rate is the whole cost story | Mechanism post. Dick's first session ran at 67% cache hit rate. Every Hydrate-enabled session across all 12 runs hit 97-98%. The 10x pricing differential between fresh input and cached reads makes that gap the entire economic argument. Also why warm cache narrows the Haiku-Sonnet quality gap. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0014 | The 67% to 97% gap explains every number in the benchmark | Mechanism kit. Anthropic charges 10x more for fresh input than cached reads. Without Hydrate, sessions run at ~67% cache hit rate. With Hydrate, 97-98%, every run. That gap, compounded across every turn, is the entire cost story (a blended-price sketch follows the table). Also why warm Haiku closes the quality gap to cold Sonnet. | technical | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0015 | Alice's five decisions became Bob and Carol's starting facts | Canon propagation: how one developer's captured conventions become every developer's session context. From the 18-session Go CLI benchmark. Bob and Carol spent 3-5 turns re-discovering what Alice had already figured out. With Hydrate team sync, those turns collapsed to zero. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0008 | Carol's lint task: 12 turns vs 3 | Benchmark angle on the team memory experiment. Session s6, carol, lint and schema validation task. Baseline 12 turns at $0.244. Team run 3 turns at $0.129. The five-turn exploration pattern named precisely, and what it means at scale. | benchmark | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0020 | Cold-start amnesia: the 6/10 codebase that passed every test | The multi-sprint simulation's quality story. Two teams, three sprints, same REST API. The no-Hydrate team's code was correct. Routes worked. Automated grader still scored it 6/10 vs 8/10. The reason: three sprints of cold-start amnesia, visible in the structure. | contrarian | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0021 | Context assembly latency: the actual numbers | Developers ask what Hydrate adds to their synchronous hook latency before wiring it in. The context-preview endpoint answers that directly: latency_ms on every call. Full path (FTS5, render, conflict scan) is 18ms p50, 32ms p95. MCP recall-only path is 6ms p50, 11ms p95. Both sub-100ms at p95 against a 3-second hook timeout budget (a probe sketch follows the table). | technical | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0030 | Claude Code to Mistral Vibe and back. One fact store. Five turns. | Announcement angle on the cross-vendor memory round trip. Alice and Bob on Claude Code. Carol on Mistral Vibe. Full round trip in 5 turns: Claude Code to Hydrate to Mistral Vibe to Hydrate to Claude Code. Neither agent read the source tree. Each acted on recalled Hydrate facts. The honest friction: the Vibe-to-Claude Code direction requires a manual pull in the current version. | announcement | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0018 | Dehydrate is a hygiene tool. It is not a cost multiplier. | The honest benchmark: a dehydrated civichub project got the same -22% cost saving as the un-dehydrated equivalent. Dehydrate did not add a new saving. It preserved Hydrate's saving at 82% less CLAUDE.md length. When to use it, when to skip it, and the three modes. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0028 | Developer churn does not just cost a salary. It costs every Claude Code session that developer ever ran. | Founder angle on the invisible knowledge drain from developer churn. Tech industry churn: 15 to 25% annually. Without Hydrate, the context from every session a departing developer ran disappears. With Hydrate, it accumulates in the shared store and fires automatically on the new hire's first prompt. Alice's day one: 13 facts, full Go REST API, correct. | founder | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0013 | 37 contradictions in 47 facts | Technical angle on the dream system. 47 facts in a single developer's store, all three dream cycle types run against them. Result: 37 contradictions found. The depth differentiation test (micro/standard/deep each catching what the previous missed). Deterministic, no model call, milliseconds. Reports that fade. | technical | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0005 | Eve's onboarding cost $0.162. Without Hydrate: $0.510. Same 7/10. | Benchmark angle on the onboarding scenario. Eve joins a three-sprint project. Hydrate-with-Haiku vs Sonnet-with-sprint-docs. Same quality, same failures, 3.15x cheaper. The honest finding: Hydrate didn't reduce violations. It changed which model was sufficient. | benchmark | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0031 | Hydrate memory is now on Gemini CLI. One binary. Proven on a real store. | Announcement angle on hydrate-mcp and the Gemini CLI integration. Lead with the proof: real Hydrate store, real MCP handshake, real Gemini session, no mocks. A fact the user told Claude Code weeks ago, about a civichub-c project, was recalled verbatim by Gemini. Why MCP rather than a bespoke plugin. Three tools. What is coming next. Honest trade-off: tool-use vs auto-inject. | announcement | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0010 | Haiku: 2 out of 7 without memory, 7 out of 7 with | Reliability framing on the Haiku benchmark result. Without memory, Haiku fails 5 of 7 sessions. With Hydrate, 7 of 7. The mechanism is cold discovery consuming the context budget before the task does. Compared against Opus: same 7/7 reliability at 9% of the cost. | benchmark | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0009 | Hybrid cell: 12% of Opus cost, identical output | Benchmark angle covering the five-cell comparison table. The Hybrid cell (Sonnet seed, Haiku+Hydrate thereafter) ships 7/7 at $0.20 per session: 12% of raw Opus. The direct empirical counter to the tokenmaxxing problem. | benchmark | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0025 | Hydrate is not a Claude Code plugin | The question I get asked: isn't this just a Claude Code plugin? The hooks are one of two paths. The second path is MCP over stdio, which works with Gemini CLI, Cursor, Cline, Continue, Windsurf, Zed, and Claude Desktop today. If Anthropic deleted the hook API tomorrow, Hydrate would still run everywhere. That's the architecture decision behind the answer. | contrarian | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0017 | Hydrate is not just a Claude Code plugin. It works on Gemini CLI too. | Two integration shapes, one SQLite store. Hooks for Claude Code: auto-inject, fires every turn, zero developer action. MCP for everything else: Gemini CLI, Cursor, Cline, Zed, and any other agent that speaks the protocol. The trade-offs honestly, and the Gemini CLI proof. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0022 | 12 runs, 5 scenarios: what I got wrong about memory and AI quality | The synthesising post from the full benchmark programme. I designed it to answer whether memory improves AI agent quality. It does, sometimes. That is not the main finding. Across all five scenario types, the consistent result was that memory makes cheaper models sufficient, not that it makes agents smarter. | contrarian | draft | X (3) · LinkedIn · Bluesky (2) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0026 | Opus is the wrong default for most of your sessions | Contrarian angle on Claude Code's Opus 4.7 default. The tokeniser alone costs 35% more per token. The million-token context window compounds it across long sessions. Memory changes the equation: a warm-context Haiku session handles tasks that would require Sonnet or Opus cold. Hybrid routing costs $0.20 per session vs $1.67 for raw Opus. Same 7/7 result rate. | contrarian | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0004 | 37 contradictions in one developer's fact store. Why that's not surprising. | Narrative angle on the dream system. The pgvector moment: four sessions later, Claude suggested an operator it had been told was unavailable. Both facts existed. They had never been in the same context window. A memory system that captures but never consolidates accumulates contradiction debt. | narrative | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0007 | Round 2 was 52% cheaper. Here's what baseline developers spent those extra turns doing. | Benchmark angle on the round-by-round dynamics of the team bench experiment. Round 2 (each developer's second task) was the strongest: -52% cost, -44% turns. Five file-read turns collapse to zero. Carol's lint task: 12 turns to 3. | benchmark | draft | X (2) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0029 | Solo developers: one rule for cutting your Claude Code bill without cutting quality | Technical angle on model routing for solo developers. Sonnet for new decisions. Haiku for established implementation. Hydrate as the bridge. The benchmark support, the mechanism (97% cache hit rate), the practical signal for which session is which, and the 80/20 allocation in practice (the rule is sketched as code below the table). | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0023 | Tokenmaxxing: TechCrunch got the diagnosis right, missed the fix | TechCrunch's April 2026 tokenmaxxing piece cited 861% code churn, 9.4x more rework, and 2x throughput at 10x cost. The diagnosis is accurate. The missing piece: a metric that survives contact with reality (cost per shipped session) and the workflow change that improves it. The benchmark table has both. | contrarian | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0016 | One binary, two hooks, one SQLite file. Here's what actually happens. | Explainer for developers who want to understand the mechanism. hydrate init does three things. Two hooks do the work: UserPromptSubmit injects context before every prompt turn, Stop captures new facts after each session (the hook wiring is sketched below the table). The round trip takes under a second at each end. Verified with curl. | technical | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0027 | Why I built an MCP server instead of a Claude Code plugin | Founder reasoning on the decision to build Hydrate as a platform-agnostic memory system with native Claude Code hooks rather than a Claude Code add-on. The unversioned hook API risk, the MCP portability argument, what it means for pricing, and the honest trade-off: MCP loses automatic capture. | founder | draft | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-07 |
| 0001 | Frank at 2am: same fix, half the cost | Narrative angle on the firefighter benchmark. Same bug, same 8/10 fix, same passing build. Haiku-with-memory cost $0.075. Sonnet-without cost $0.138. The whole cost story compressed into one on-call shift. | narrative | ready | X (3) · LinkedIn · Bluesky (1) · Mastodon · Reddit (2) · HN · dev.to | 2026-05-04 |
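
The arithmetic behind kit 0024's headline figure, worked through as a sketch. One assumption is mine, not the kit's: a 264-day working year, which is the only round day count that reproduces $343,200 exactly. The 39% saving is the team benchmark result; the $134K lower bound cited in kit 0012 falls out directly.

```python
# Back-of-envelope model for the 100-developer figures in kits 0012 and 0024.
# ASSUMPTION: a 264-day working year (22 days x 12 months); the kits do not
# state the day count, but 264 reproduces the $343,200 figure exactly.

DEVELOPERS = 100
DAILY_COST = 13.00      # Anthropic's revised average, $/developer/day
WORKING_DAYS = 264      # assumed, see above
HYDRATE_SAVING = 0.39   # cost reduction measured in the 18-session benchmark

annual_spend = DEVELOPERS * DAILY_COST * WORKING_DAYS
annual_saving = annual_spend * HYDRATE_SAVING

print(f"annual spend: ${annual_spend:,.0f}")   # $343,200
print(f"39% saving:   ${annual_saving:,.0f}")  # ~$134,000
```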
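
The blended-price mechanism behind kits 0011 and 0014, sketched from the one number the kits state outright: cached reads priced at a tenth of fresh input. Cache writes and output tokens are deliberately left out, so the ratio illustrates the input-side gap only, not the benchmark's full cost model.

```python
# Effective input-token price as a function of cache hit rate, assuming
# cached reads cost one tenth of fresh input (the 10x differential the
# kits cite). Cache writes and output tokens are ignored for simplicity.

FRESH = 1.0    # relative price of a fresh input token
CACHED = 0.1   # relative price of a cached read

def effective_price(hit_rate: float) -> float:
    """Blended per-token input price at a given cache hit rate."""
    return hit_rate * CACHED + (1.0 - hit_rate) * FRESH

cold = effective_price(0.67)  # typical session without Hydrate
warm = effective_price(0.97)  # every Hydrate-enabled session in the 12 runs

print(f"cold {cold:.3f} vs warm {warm:.3f}: {cold / warm:.1f}x on input alone")
```

Compounded across every turn of a long session, that roughly 3x input-side gap is what the kits call the entire cost story.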
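
Kit 0021's numbers can be probed against a live install. The sketch below is hypothetical in every detail the kit does not confirm: the host, port, and /context-preview path are stand-ins; only the latency_ms response field and the 3-second hook budget come from the kit.

```python
# Probe the context-preview endpoint and compare its self-reported
# latency_ms against the 3-second hook timeout budget from kit 0021.
# The URL is a HYPOTHETICAL stand-in; substitute your instance's actual
# host, port, and path.

import json
import urllib.request

URL = "http://localhost:8080/context-preview"  # hypothetical address

with urllib.request.urlopen(URL, timeout=3) as resp:
    body = json.load(resp)

latency_ms = body["latency_ms"]  # reported on every call, per the kit
BUDGET_MS = 3000                 # synchronous hook timeout budget

print(f"{latency_ms} ms of {BUDGET_MS} ms budget "
      f"({latency_ms / BUDGET_MS:.1%} consumed)")
```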
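
Kit 0029's rule reduces to a branch. The function below is an illustration of the rule as the kit states it, not Hydrate's API; the $0.20 and $1.67 per-session anchors come from kits 0009 and 0026.

```python
# Kit 0029's routing rule as executable pseudologic: Sonnet for sessions
# that establish new decisions, Haiku for implementation work against
# conventions Hydrate has already captured. Names here are illustrative.

def pick_model(new_decisions: bool, warm_context: bool) -> str:
    if new_decisions or not warm_context:
        return "sonnet"  # novel design work: pay for the stronger model
    return "haiku"       # established conventions recalled from the store

# Cost anchors from kits 0009 and 0026: the hybrid cell ships the same
# 7/7 result rate at $0.20 per session against $1.67 for raw Opus.
HYBRID, RAW_OPUS = 0.20, 1.67
print(f"hybrid runs at {HYBRID / RAW_OPUS:.0%} of raw Opus per session")  # 12%
```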
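
Kit 0016's two hooks map onto Claude Code's UserPromptSubmit and Stop hook events. A minimal sketch of the settings shape they imply: the event names and the per-turn/per-session timing come from the kit, while the hydrate subcommands are hypothetical stand-ins for whatever hydrate init actually wires up.

```python
# The two hooks from kit 0016 in the shape Claude Code's settings file
# expects. The "hydrate inject" and "hydrate capture" commands are
# HYPOTHETICAL placeholders, not documented hydrate subcommands.

import json

hooks = {
    "hooks": {
        "UserPromptSubmit": [  # fires before every prompt turn
            {"hooks": [{"type": "command", "command": "hydrate inject"}]}
        ],
        "Stop": [              # fires after each session
            {"hooks": [{"type": "command", "command": "hydrate capture"}]}
        ],
    }
}

print(json.dumps(hooks, indent=2))  # merge into .claude/settings.json
```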