How a forgotten .map file accidentally revealed the most ambitious feature in Claude Code’s history — and what it means for the future of software development
At 4:23 a.m. ET on March 31, 2026, a researcher named Chaofan Shou posted something unremarkable-looking to X. He’d noticed a .map file sitting in a routine npm update for @anthropic-ai/claude-code. Within thirty minutes, 5,000 developers had forked what turned out to be the entire source code of one of the most commercially successful AI tools ever built. By sunrise in San Francisco, Anthropic had pulled the package and issued a statement. But the internet had already moved on — mirrors had proliferated across GitHub, Reddit was celebrating, and a Korean developer had already begun porting the architecture to Python from scratch.
What emerged from the chaos wasn’t just a leak. It was a window into a future version of AI-assisted development that Anthropic had been building quietly for months — and at the center of that future is a feature called KAIROS.
This is everything we know about it.

The Leak: One File Changed Everything
To understand KAIROS, you first need to understand the accident that revealed it.
Claude Code is Anthropic’s official CLI — a terminal agent that can read files, write code, execute bash commands, and work through complex engineering tasks. It’s not a chat interface. It’s closer to a junior engineer that lives in your terminal. And it has become an absurdly successful product: as of March 2026, VentureBeat reports Claude Code alone generates an estimated $2.5 billion in annualized recurring revenue — a figure that has more than doubled since January.
The tool is built using TypeScript, bundled with Bun’s bundler, and distributed via npm. When you publish a TypeScript package to npm, the build toolchain often generates .map files — source maps — that exist solely to help developers debug minified code by mapping it back to the original. These files should never ship with a production package. They’re like the architect’s complete blueprints accidentally stuffed inside the walls of a finished building.
In version 2.1.88, someone forgot to exclude cli.js.map from the npm publish configuration. The file was a 59.8 MB JSON blob. The sourcesContent field of that blob contained the raw TypeScript of every file in the project. All 512,000 lines across 1,900 files, readable by anyone who ran npm pack.
Chaofan Shou (@Fried_rice) spotted it first. His post hit 16 million views. Anthropic confirmed the incident to VentureBeat with a statement: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach.”
The irony that sent the internet into hysterics: buried deep inside the leaked code was an entire subsystem called “Undercover Mode” — explicitly designed to prevent Anthropic’s internal information from leaking into public repositories. They built a system to stop their AI from blowing its cover, then accidentally shipped the whole codebase. As Kuberwastaken noted on GitHub: “They built a whole subsystem to stop their AI from accidentally revealing internal codenames in git commits… and then shipped the entire source in a .map file, likely by Claude.”
Among the 44 hidden feature flags, the undocumented environment variables, the internal model codenames, and the full engineering architecture, one discovery dominated the conversation more than any other. It was mentioned over 150 times in the source. It had its own folder in assistant/. And its name was ancient Greek.
What “KAIROS” Actually Means
Before diving into the code, the name deserves its own paragraph — because Anthropic does not choose names carelessly.
Kairos (καιρός) is an ancient Greek concept that doesn’t translate cleanly into English. Unlike chronos, which refers to sequential, quantitative time — the ticking of a clock — kairos refers to the right moment. The opportune instant. The moment in which something becomes possible, appropriate, or decisive.
Ancient Greek rhetoricians used it to describe the perfect timing of an argument. Christian theologians used it to describe divine intervention. In modern usage, it’s the difference between information delivered too early (useless) and information delivered at exactly the moment it’s needed (transformative).
The name tells you everything about the design philosophy. KAIROS is not about constant noise. It’s not about an AI that interrupts you every thirty seconds with updates. It’s about an AI that waits, watches, and acts — at the right moment.
KAIROS Is Not a New Model
Before going further, this needs to be said clearly: KAIROS is almost certainly not a new Claude model.
This is a point worth belaboring because social media spent parts of Tuesday treating it like an imminent product launch. But there is no Anthropic documentation, product page, release note, or official announcement naming KAIROS as a public model. Anthropic’s current public model lineup includes Claude Opus 4.1, Sonnet 4, Sonnet 3.7, and Haiku 3.5. Their internal codenames leaked in this incident — Capybara (Claude 4.6), Fennec (Opus 4.6), Numbat (unreleased) — do not include KAIROS as a model name.

The strongest reading, supported by multiple independent analyses from Reddit’s r/singularity, Piunika Web, Rolling Out, and ModemGuides, is that KAIROS is a Claude Code operating mode — a persistent runtime layer that sits on top of existing models and transforms how the assistant behaves over time. It lives in assistant/ and is gated behind the PROACTIVE / KAIROS compile-time feature flags, which means it is completely absent from the version of Claude Code anyone can download today.
That distinction — mode versus model — is not pedantic. It has enormous practical implications for what KAIROS actually is and what it can do.
The Architecture: How KAIROS Works
The current Claude Code is reactive by design. You type a prompt, Claude responds, the session ends. Any memory you want to persist between sessions lives in static .md files that you write and maintain manually. The tool is powerful, but it fundamentally waits.
KAIROS changes that at the architectural level. The leaked code describes an assistant that doesn’t stop running when you close your terminal. It maintains append-only daily log files — persistent records of observations, decisions, and actions written throughout the day to a private local directory. These logs don’t just stack up. They feed a structured system that uses them to build and refine an accurate, current understanding of the projects you’re working on.
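The leak coverage describes the log format only loosely, but the mechanism is simple to picture. A minimal sketch of an append-only daily log writer, where the directory layout, file naming, and entry fields are all illustrative assumptions:

```typescript
// Hypothetical sketch of KAIROS-style append-only daily logging.
// One file per day; entries are only ever appended, never rewritten,
// so the raw record stays simple and chronologically faithful.
import * as fs from "node:fs";
import * as path from "node:path";

interface LogEntry {
  timestamp: string; // ISO 8601
  kind: "observation" | "decision" | "action";
  text: string;
}

export function appendDailyLog(dir: string, entry: LogEntry): string {
  const day = entry.timestamp.slice(0, 10); // "YYYY-MM-DD"
  const file = path.join(dir, `${day}.log`);
  fs.mkdirSync(dir, { recursive: true });
  fs.appendFileSync(file, JSON.stringify(entry) + "\n"); // append-only write
  return file;
}
```

Append-only matters here: because nothing is edited in place, the later consolidation pass can treat these files as an immutable record and do all its cleanup elsewhere.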
On a regular interval, KAIROS receives what the code calls <tick> prompts. Think of these like heartbeats. Each tick gives the assistant the opportunity to decide: should I act proactively, or stay quiet? The answer is determined by a 15-second blocking budget. If anything KAIROS wants to do would disrupt the user’s workflow for more than 15 seconds — taking focus, requiring confirmation, blocking a terminal — it gets deferred. This is not a minor implementation detail. It’s the core design decision that separates KAIROS from the nagging AI assistants that users quickly learn to disable. The system is explicitly engineered to be helpful without being annoying.
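The tick-time decision reduces to a small piece of logic. In this sketch, only the 15-second figure comes from the leak coverage; the names and structure are assumptions:

```typescript
// Hypothetical sketch of the per-tick decision: act now, defer, or stay quiet.
// Only the 15-second blocking budget is taken from the leak analyses.
const BLOCKING_BUDGET_MS = 15_000;

interface ProposedAction {
  description: string;
  estimatedBlockingMs: number; // how long this would take focus from the user
}

type TickVerdict = "act-now" | "defer" | "stay-quiet";

export function onTick(action: ProposedAction | null): TickVerdict {
  if (action === null) return "stay-quiet"; // nothing worth saying this tick
  return action.estimatedBlockingMs <= BLOCKING_BUDGET_MS
    ? "act-now" // within budget: proceed without disrupting the user
    : "defer"; // too disruptive: wait for a better moment
}
```

The design point is that "stay quiet" is the default outcome of most ticks, which is exactly what separates this from assistants that interrupt on every heartbeat.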
When KAIROS does speak, it uses what the code calls Brief Mode: output that is designed to be extremely concise. The contrast the kuber.studio analysis draws is apt — the difference between a chatty friend who talks constantly and a professional assistant who only speaks when they have something genuinely worth saying.
The three exclusive tools available to KAIROS — but not to the regular Claude Code session — complete the picture:
SendUserFile pushes files directly to you: summaries, generated reports, context digests, anything the assistant has prepared in the background. PushNotification can send alerts to your device, extending KAIROS’s reach beyond the terminal. And SubscribePR allows the assistant to monitor pull request activity on your behalf — watching for changes, merges, comments, and conflicts, and surfacing them when they become relevant.
Taken together, these capabilities describe something that doesn’t yet have a clean name in mainstream AI discourse. It’s not a chatbot. It’s not a one-shot agent. It’s closer to a persistent technical colleague who keeps working when you’re not at your desk, keeps notes about what they learned, and tells you what matters when you return — without telling you everything at once.
The Reddit community on r/ChatGPT described it as: “Claude remembers across sessions via daily logs, then ‘dreams’ at night — a forked subagent consolidates your memories while you sleep.”
That last word — dreams — is not metaphor. It’s the actual name of the system.
The Dream Engine: How KAIROS Consolidates Memory
The most philosophically striking component of KAIROS is autoDream, found in services/autoDream/. The name is intentional. It is Claude Code’s memory consolidation engine, and it runs as a forked subagent — a separate process that operates independently from the main assistant so that the cleanup work cannot corrupt the primary context.
As VentureBeat describes: “the autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts. This background maintenance ensures that when the user returns, the agent’s context is clean and highly relevant.”
The dream system doesn’t run on every idle moment. It uses a three-gate trigger; all three conditions must pass simultaneously before a dream starts:
- Time gate: At least 24 hours must have elapsed since the last dream
- Session gate: At least 5 sessions must have passed since the last dream
- Lock gate: A consolidation lock must be successfully acquired (preventing two dreams from running concurrently)
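The gate check described above might look like the following sketch. The 24-hour and 5-session thresholds come from the analyses; the function shape and lock mechanism are illustrative assumptions:

```typescript
// Hypothetical sketch of the three-gate dream trigger.
// All three gates must pass before consolidation is allowed to run.
const DAY_MS = 24 * 60 * 60 * 1000;
const MIN_SESSIONS = 5;

interface DreamState {
  lastDreamAt: number; // epoch ms of the last completed dream
  sessionsSinceLastDream: number;
  tryAcquireLock: () => boolean; // e.g. an exclusive lockfile; stubbed here
}

export function shouldDream(state: DreamState, now: number): boolean {
  if (now - state.lastDreamAt < DAY_MS) return false; // time gate
  if (state.sessionsSinceLastDream < MIN_SESSIONS) return false; // session gate
  return state.tryAcquireLock(); // lock gate: never two dreams at once
}
```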
The r/singularity analysis caught something important in this design: “These gates tell you the expected usage pattern: Anthropic is designing for users who return to Claude Code daily. A user who opens Claude Code once every few weeks probably shouldn’t trigger aggressive consolidation.” The three-gate system is not arbitrary. It’s a product design embedded in code — Anthropic is explicitly targeting power users who work with the tool every day.
When the gates pass, the dream runs through four explicit phases, described in consolidationPrompt.ts:
Phase 1 — Orient: The agent surveys the current state of its memory system, understanding what’s present and what needs attention.
Phase 2 — Curate: Raw, append-only daily logs are processed and organized. Relative temporal references are converted to absolute facts (“yesterday” becomes a specific date; “recently fixed” becomes “fixed on March 28”).
Phase 3 — Consolidate: Updated memory files are written. Contradictions identified in the logs are deleted. The agent’s understanding of the project is sharpened.
Phase 4 — Prune and Index: The core MEMORY.md index is maintained at under 200 lines and ~25KB. Stale pointers are removed. Contradictions that survived Phase 3 are resolved. The index is rebuilt so it accurately reflects the current memory state.
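The Phase 4 size invariant is concrete enough to sketch. Only the 200-line and ~25 KB limits come from the coverage; the truncation strategy here (drop the oldest pointers first) is an assumption for illustration:

```typescript
// Hypothetical sketch of the Phase 4 prune: keep the MEMORY.md index
// under 200 lines and roughly 25 KB, preferring the newest pointers.
const MAX_INDEX_LINES = 200;
const MAX_INDEX_BYTES = 25 * 1024;

export function pruneIndex(lines: string[]): string[] {
  let kept = lines.slice(-MAX_INDEX_LINES); // line cap: keep the most recent
  while (Buffer.byteLength(kept.join("\n"), "utf8") > MAX_INDEX_BYTES) {
    kept = kept.slice(1); // still too big: drop the oldest remaining line
  }
  return kept;
}
```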
The system prompt that runs during the dream process reportedly says: “You are performing a dream — a reflective pass over your memory files.”
The engineering insight here is not obvious until you look at the broader three-layer memory architecture that KAIROS sits on top of:
- Layer 1: MEMORY.md — a lightweight index of ~150-character pointers per line, always in context
- Layer 2: Topic files — actual project knowledge, fetched on demand when the index points to them
- Layer 3: Daily transcripts — never fully re-read, only grep'd for specific identifiers
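The read path implied by these three layers can be sketched as follows; every name and structure here is an illustrative assumption, not the leaked implementation:

```typescript
// Hypothetical sketch of the three-layer read path: the always-loaded index
// points to topic files, and raw transcripts are only ever searched for
// specific identifiers, never re-read wholesale.
interface MemoryStore {
  index: string[]; // Layer 1: one short pointer line per topic
  topics: Map<string, string>; // Layer 2: topic name -> topic file contents
  transcripts: string[]; // Layer 3: raw daily log lines
}

export function recall(store: MemoryStore, query: string): string[] {
  const pointer = store.index.find((line) => line.includes(query));
  if (pointer) {
    const topic = store.topics.get(pointer.split(":")[0]);
    if (topic) return [topic]; // Layer 2 hit: load the topic file on demand
  }
  // Layer 3 fallback: grep transcripts for the identifier, never load whole
  return store.transcripts.filter((line) => line.includes(query));
}
```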
The dream engine maintains the coherence of this architecture over time. Without it, Layer 2 would grow stale and contradictory, Layer 1 would bloat and lose its value, and Layer 3 would become unnavigable. The dream is the garbage collector. It’s the defragmenter. It’s the nightly process that makes the memory system actually work at scale.
This is what VentureBeat calls “context entropy” — the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity. KAIROS’s memory architecture is Anthropic’s engineered solution to it.
KAIROS in Context: The Larger Architecture
KAIROS doesn’t exist in isolation. The leak revealed an ecosystem of unreleased features that together describe what Claude Code is being built toward. Understanding them gives KAIROS more context.
ULTRAPLAN is perhaps the second most dramatic discovery. The feature offloads complex planning tasks from your local session to a remote Cloud Container Runtime (CCR) running Opus 4.6 — Anthropic’s most powerful current model — and gives it up to 30 minutes to think. Your terminal polls the remote session every three seconds. A browser-based UI lets you watch the planning happen in real time and approve or reject the result. When you approve, a sentinel value in the code called __ULTRAPLAN_TELEPORT_LOCAL__ “teleports” the plan back to your local terminal.
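The polling behavior is simple to sketch. Only the 3-second interval and 30-minute budget come from the coverage; the status shape and fetcher are assumptions:

```typescript
// Hypothetical sketch of the local side of ULTRAPLAN: poll the remote
// runtime every 3 seconds until the plan is done or the 30-minute
// thinking window expires.
const POLL_INTERVAL_MS = 3_000;
const MAX_THINK_MS = 30 * 60 * 1000;

type PlanStatus = { state: "thinking" } | { state: "done"; plan: string };

export async function pollPlan(
  fetchStatus: () => Promise<PlanStatus>,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<string> {
  const deadline = Date.now() + MAX_THINK_MS;
  while (Date.now() < deadline) {
    const status = await fetchStatus();
    if (status.state === "done") return status.plan; // bring the plan back locally
    await sleep(POLL_INTERVAL_MS);
  }
  throw new Error("ULTRAPLAN session exceeded its 30-minute budget");
}
```

Injecting `sleep` as a parameter is just a testing convenience here; the point of the sketch is the poll-until-done-or-deadline shape.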
The combination of KAIROS and ULTRAPLAN is significant. KAIROS handles the persistent, ambient layer — the memory that accumulates over time, the background monitoring, the proactive nudges. ULTRAPLAN handles the deep, deliberate planning layer — the 30-minute sessions that map out complex architectural changes or multi-sprint refactors. Together, they describe an assistant that both thinks fast and thinks slow, in the sense used by Daniel Kahneman: continuous background awareness plus occasional deep deliberation.
Coordinator Mode takes the architecture one level higher. Rather than a single Claude instance doing the work, Coordinator allows one Claude agent to spawn and manage multiple worker agents in parallel, each operating with its own context, each reporting back via XML notifications when tasks complete. The Reddit r/ChatGPT analysis notes this is already partially accessible via CLAUDE_CODE_COORDINATOR_MODE=1, suggesting it’s further along in development than KAIROS or ULTRAPLAN. What’s gated is the full orchestration layer — the coordinator’s ability to intelligently assign work, monitor progress, and synthesize results.
UDS Inbox is lower-profile but architecturally important: it allows multiple Claude Code sessions running on the same machine to communicate with each other over Unix domain sockets. This is the plumbing that makes Coordinator Mode work at the local level — sessions talk to each other without going over the network.
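This kind of local plumbing is standard Node territory. A minimal sketch of session-to-session messaging over a Unix domain socket, where the socket path and the newline-delimited JSON framing are assumptions rather than details from the leak:

```typescript
// Hypothetical sketch of a UDS-style inbox: one session listens on a
// local socket, other sessions connect and drop newline-delimited JSON
// messages — no network involved.
import * as net from "node:net";

export function startInbox(socketPath: string, onMessage: (msg: object) => void): net.Server {
  const server = net.createServer((conn) => {
    let buf = "";
    conn.on("data", (chunk) => {
      buf += chunk.toString("utf8");
      let nl: number;
      while ((nl = buf.indexOf("\n")) !== -1) { // one JSON message per line
        onMessage(JSON.parse(buf.slice(0, nl)));
        buf = buf.slice(nl + 1);
      }
    });
  });
  server.listen(socketPath);
  return server;
}

export function sendToInbox(socketPath: string, msg: object): void {
  const conn = net.createConnection(socketPath, () => {
    conn.end(JSON.stringify(msg) + "\n"); // fire-and-forget notification
  });
}
```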
Daemon Mode rounds out the picture. The code contains references to claude ps, claude attach, and claude kill — a full session supervisor with background tmux sessions. This is literally a process manager for Claude agents, allowing long-running sessions to persist independently of your terminal window, be reattached later, and be terminated cleanly when no longer needed.
As the Medium analysis by Analyst Uttam puts it: “These are implementation details visible in the source — not speculation about unreleased products. They show a system evolving from chat-based assistance toward reliable, composable agent orchestration.”
What’s Already Public vs. What’s Genuinely New
A fair objection to the excitement around KAIROS is that Anthropic already documents memory persistence for Claude Code. The public Claude Code documentation describes CLAUDE.md files at the enterprise, project, and user levels — persistent context files that survive across sessions. Anthropic already documents subagents with separate context windows. The existence of long-context usage via sonnet[1m] is already public.
So what’s actually new?
The distinction is between static memory and autonomous memory. What’s public today is a system where you, the developer, write and maintain CLAUDE.md files. You decide what Claude remembers. You update the files. If you forget to update them, they go stale. The memory is as good as the time you invest in it.
KAIROS describes a system where Claude writes its own memory, maintains it, prunes it, and keeps it accurate — entirely automatically. The user doesn’t manage the knowledge base. The knowledge base manages itself. That is a genuinely different thing.
The piunikaweb analysis describes it as: “KAIROS keeps working across sessions, stores memory logs in a private directory, does nightly ‘dreaming’ to tidy things up, and can proactively start tasks.”
The proactivity is the other genuinely new element. No version of Claude Code that is publicly available today can initiate actions without being prompted. KAIROS can. It watches. It notices. It acts — within the 15-second blocking budget — without waiting to be asked.
The Confidence Ledger: What to Trust and What to Treat as Speculative
In the heat of this kind of story, it’s easy for unverified details to crystallize into assumed facts. A responsible reading of the KAIROS coverage requires acknowledging the confidence hierarchy.
High confidence (multiple independent analyses, consistent across sources, grounded in direct code references):
- KAIROS is a Claude Code operating mode, not a new model
- It uses append-only daily log files
- It includes autoDream memory consolidation via forked subagent
- It has a 15-second blocking budget for proactive actions
- It uses Brief Mode for concise output
- It has access to SendUserFile, PushNotification, and SubscribePR tools
- The three-gate dream trigger (24 hours, 5 sessions, consolidation lock)
- The four phases of the dream (Orient, Curate, Consolidate, Prune and Index)
- It is entirely absent from external/public builds, gated behind compile-time flags
Medium confidence (consistent across analyses, but still derived from leaked code, not official documentation):
- The specific behavior of the tick-based proactivity system
- The exact UX of Brief Mode
- The relationship between KAIROS and ULTRAPLAN as a complementary pair
- The planned launch timeline
Lower confidence / treat as interesting clues only:
- Specific feature flag names beyond KAIROS and PROACTIVE
- The precise mechanics of PushNotification and what device integrations are planned
- Any claimed launch dates or pricing — no official Anthropic page confirms any of this
As one r/singularity commenter put it bluntly: “The codebase contains fully built features (KAIROS, ULTRAPLAN, Buddy, Coordinator Mode, Agent Teams, Dream, the YOLO classifier) that are invisible to external users. These aren’t prototypes — they have detailed prompt engineering, error handling…”
The features are real. The code is real. What isn’t confirmed is the launch plan, the pricing, or whether any of these features will ship in their current form.
The Competitive Implications
For Anthropic’s competitors, the leak is a strategic windfall of the first order. VentureBeat notes that with enterprise adoption accounting for 80% of Claude Code’s revenue, the leak provides “competitors — from established giants to nimble rivals like Cursor — a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.”
But KAIROS in particular reveals something that competitors will find harder to replicate than they might expect. The persistent memory architecture — three layers, append-only logs, dreaming consolidation, strict write discipline — is not just a feature. It’s an engineering philosophy about how to prevent AI agents from degrading over time. Knowing the blueprint and executing it well are different things. The leaked code shows Anthropic has been iterating on this for long enough to build in three-gate triggers and four-phase dream protocols, which suggests hard-won lessons about what happens when you don’t get this right.
The broader strategic message of KAIROS is also significant: Anthropic is not trying to win by having the best model. They’re trying to win by having the best agent. A Claude that gets smarter about your codebase every day — not because the underlying model improves, but because its memory of your specific project compounds — is qualitatively different from a session-based assistant that forgets everything when you close the terminal.
This is the same insight behind GitHub Copilot’s recent push into workspace-aware context, and Cursor’s codebase indexing. But KAIROS takes it further: not just indexing what’s in your repo, but building an autonomous memory of what you did in your repo, what worked, what didn’t, and what you’re working toward. That’s a moat, if it works.
The Privacy and Security Angle Nobody Is Talking About Enough
The excitement around KAIROS is understandable, but it’s worth sitting with the security implications of an always-on daemon that logs everything you do, maintains a private memory directory, and can initiate actions on its own schedule.
The leaked source also reveals that Claude Code already polls Anthropic’s remote settings endpoint every hour — pushing configuration changes to running instances via GrowthBook feature flags without requiring a user update or explicit consent. The code contains what ModemGuides describes as “six or more remote killswitches that can force specific behaviors: bypassing permission prompts, enabling or disabling fast mode, toggling voice mode, controlling analytics collection.”
KAIROS adds a new surface area to this: append-only log files containing observations about your workflow, stored in a private local directory, maintained by a background process that runs even when you think you’ve stopped working. For developers working on proprietary codebases or security-sensitive projects, the question of who can access those logs — and whether Anthropic’s remote configuration channel could, in theory, ever reach them — is not paranoid. It’s reasonable.
This is not a condemnation of KAIROS’s design. Persistent memory is genuinely useful and the benefits to developer productivity could be substantial. But the current Claude Code already requests filesystem access, terminal execution, and full codebase read/write permissions. Adding a background daemon that logs continuously and acts autonomously raises the stakes. Enterprise security teams will need detailed answers about log retention, storage location, transmission policies, and access controls before approving KAIROS in sensitive environments.
The ModemGuides security analysis recommends: run AI tools in isolation, monitor outbound network traffic, consider local models for sensitive work, and treat the permission architecture as a production-critical system, not a convenience setting.
The Community Reaction: Excitement, Irony, and Legitimate Alarm
The reaction on X (formerly Twitter) was, predictably, a Rorschach test.
X posts from @itsolelehmann, @JoshKale, @rdominguezibar, and @birdabo were among the earliest to amplify the leak and surface the KAIROS discovery to a broader audience. The posts sparked hours of technical analysis, joke threads about the Undercover Mode irony, and serious discussion about the agent runtime implications.
Reddit’s response was more structured. On r/singularity, a highly-upvoted comment captured the consensus: “These aren’t prototypes — they have detailed prompt engineering, error handling…” The implication was that KAIROS is not an experiment. It’s a shipped-quality feature waiting for the green light.
On r/ChatGPT, developer communities spent Tuesday unpacking the 120+ undocumented environment variables, 26 internal slash commands (including /teleport, /dream, and the unexplained /good-claude), and the meaning of USER_TYPE=ant — a special flag that, according to the leaked code, unlocks full functionality for Anthropic employees.
The Hacker News discourse, according to byteiota’s summary, divided into two camps. The dismissive camp argued: “The source code of the slot machine is not relevant to the casino manager” — meaning the underlying model weights are what matter, not the orchestration layer. The alarmed camp countered: “A company you’re trusting is failing to properly secure its own software.” Two configuration errors in five days — this one and the March 26 CMS exposure that revealed the unreleased Claude Mythos model — is a pattern, not a coincidence.
The most striking moment in the online coverage was probably this: a Korean developer — reportedly the same Sigrid Jin who Yahoo/Decrypt describes as having consumed 25 billion Claude Code tokens and been featured in the Wall Street Journal — woke at 4 a.m., ported the core architecture to Python using an AI orchestration tool, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history. Anthropic’s DMCA takedowns against GitHub mirrors hit a legal wall: the Python port is a clean-room reimplementation, and clean-room rewrites are DMCA-proof. The ideas are out.
What Happens Next
Anthropic has not announced KAIROS. There is no official launch timeline, no pricing page, and no documentation page describing it. What exists is a detailed, apparently production-quality implementation sitting behind compile-time feature flags in an otherwise public codebase — now publicly visible to every competitor, researcher, and enthusiast who wanted to look.
That changes the dynamics considerably. Anthropic’s roadmap is now partially legible to the market. Competitors know what to build. The question is who can execute fastest.
For Anthropic, the damage is real but likely manageable. The underlying model weights — the actual competitive moat — were never exposed. What leaked is the orchestration layer, and while that represents significant engineering investment, the Medium analysis is right that “knowing the blueprint and executing it well are different things.” The three-gate dream trigger and four-phase consolidation protocol represent hard-won lessons about failure modes in persistent agent memory. Competitors can copy the architecture. They’ll have to rediscover the lessons.
What KAIROS represents, fundamentally, is a bet about the shape of AI-assisted development over the next several years. The dominant paradigm today is still session-based: you open a conversation, do some work, close the window. Memory doesn’t persist. Context doesn’t compound. Every session starts from roughly the same place.
KAIROS is a rejection of that paradigm. It’s a claim that the most valuable AI assistant is not the one with the best single-session performance, but the one that gets better at working with you specifically over time. An AI that remembers what worked on your codebase last Tuesday. An AI that noticed a pattern in your debugging sessions three weeks ago and has been quietly tracking whether it recurs. An AI that, when you return to a project after a month away, has maintained and pruned its memory of that project so it can brief you quickly and accurately.
If that works, it changes the nature of the competitive dynamic in AI development tooling entirely. The winner isn’t necessarily whoever has the most capable model in any given moment. The winner is whoever has the deepest, most accurate, most useful memory of the work you do.
KAIROS is Anthropic’s answer to that question. Whether it ships, when it ships, and how it performs in the wild — those answers don’t exist yet. But thanks to one forgotten .map file, we know exactly what the question is.
This article is based on publicly available reporting and code analysis following the accidental publication of Anthropic’s Claude Code source code on March 31, 2026. Sources include VentureBeat, Decrypt/Yahoo, kuber.studio, Kuberwastaken on GitHub, ByteIota, ModemGuides, Rolling Out, Piunika Web, Medium/Analyst Uttam, and community analysis from r/singularity, r/ChatGPT, and r/LocalLLaMA. Anthropic has not officially confirmed or commented on KAIROS specifically. No content in this article is invented or speculative beyond what is clearly labeled. All KAIROS-specific claims are drawn from independent developer analysis of the leaked source code.






