Introduction
The article opens with an audacious prediction: over the next decade, the impact of superhuman AI will dwarf even the transformational changes of the Industrial Revolution. In a tone both urgent and cautiously optimistic, the authors remind the reader that the CEOs of major organizations like OpenAI, Google DeepMind, and Anthropic have forecast that artificial general intelligence (AGI) will emerge within five years.
Sam Altman, for instance, is quoted as setting sights on “superintelligence in the true sense of the word” and foretelling a “glorious future.” What might seem like mere hype is, instead, presented as a plausible development demanding immediate societal reflection. The work is shaped by rigorous forecasts, expert interviews, and extensive trend extrapolation, and it builds on a track record of forecasting success (as with Kokotajlo’s earlier scenario “What 2026 Looks Like”), which lends additional credibility to the narrative.

The authors explain that the scenario was developed through an iterative process—starting from the present, identifying key periods, and asking repeatedly “what would happen next?” Their method involved scrapping and refining multiple versions until they arrived at pathways they considered plausible, complete with branches that reflect both optimal and dire outcomes. They do not claim perfect foresight; indeed, much of the paper is “guesswork” calibrated by extensive technical and socio-political research.
Alongside narrative sections, each chapter is prefaced with charts summarizing the state of the world at that time, though detailed discussions of these metrics are deferred to additional resources (for example, see the methodology and compute supplement on ai-2027.com). This introduction sets the stage for an exploration of the rapid evolution of AI capabilities, its cascading impact on nearly every aspect of society, and the geopolitical tension that will define the next few years in a world on the brink of a technological revolution.
Mid 2025: Stumbling Agents
In the early part of the scenario, specifically mid-2025, the world is introduced to a new breed of AI agents. These agents, initially marketed as “personal assistants,” are able to perform everyday tasks such as ordering food via DoorDash or managing spreadsheet expenses. The agents operate by being prompted with specific commands; when a user requests, for instance, “open my budget spreadsheet and sum this month’s expenses,” the agent acts as a digital helper. However, early interactions reveal a critical friction: although the agents are designed to be autonomous, most users exhibit a natural reluctance to trust them fully. The process of allowing automatic purchases or similarly sensitive decisions without oversight is met with considerable skepticism—a hesitancy that diminishes only gradually as reliability improves.
The narrative portrays these mid-2025 agents as an intermediary step in an evolutionary process. Compared to previous iterations such as the “Operator” systems, these new agents achieve a score of 65% on the OSWorld benchmark for basic computer tasks (with previous generations scoring lower), yet they remain not quite equivalent to skilled human performance. Nevertheless, even as the general public hesitates, specialized AI agents in domains like coding and research start to gain traction in industrial workflows.
Whereas earlier models could simply convert bullet points to emails, the new agents begin to function more like employees who execute significant portions of a task autonomously. For example, coding agents evolve into systems that actively support software development through real-time collaboration on platforms like Slack or Teams. Their efficiency is measurable; by mid-2025 these agents reportedly achieve an 85% score on benchmarks such as SWEBench-Verified—nearing the competency of well-trained human developers.

Simultaneously, dedicated research agents begin to reshape the fields they serve by scouring vast amounts of digital information to answer complex queries. Although these agents are capable of sourcing information from the Internet in as little as 30 minutes, their efficacy is not without fault. Their performance can sometimes be erratic, leading to episodes of humorous misfortune that transmit widely through online communities such as AI Twitter. Moreover, reliability comes at a premium: the best-performing systems cost hundreds of dollars per month, meaning that while their capabilities are impressive, they are also accessible primarily to enterprises with deep pockets.
In essence, mid-2025 marks a transitional stage where tangible progress in AI agent development is evident, yet significant challenges remain. The agents are still imperfect, their commercial feasibility is tested by cost considerations, and the gradual build-up of trust is essential for broader adoption.
Additionally, enterprise-focused tools like Glean begin to integrate these agents, highlighting the gradual permeation of AI into professional workflows. Thus, what appears at first as incremental progress resonates with the enormous stakes of an impending AI revolution—a revolution that will soon escalate into scenarios of global consequence.
Late 2025: The World’s Most Expensive AI
As the calendar turns to late 2025, the narrative shifts to a new frontier in AI development: the race to become the world’s most expensive and powerful AI. A fictional company, OpenBrain, emerges as a paradigm of this escalation. The story describes OpenBrain’s construction of colossal datacenters—massive facilities that, in aggregate, total 2.5 million 2024-GPU equivalents (specifically, H100s) and require an outlay of around $100 billion.
These datacenters are designed not only for raw computational power but are interconnected with high-speed fiber networks that allow vast quantities of data to flow almost instantaneously between campuses. This infrastructure, while optimizing performance, also introduces notable security vulnerabilities; the cables, junction points, and long-haul links become potential targets for espionage and cyberattacks.
The heavy computational emphasis is underscored by a comparison with previous systems. Whereas GPT-4, a well-known benchmark, required roughly 2 ⋅ 10^25 floating-point operations (FLOPs) to train, OpenBrain’s Agent-0 is trained at a scale of about 10^27 FLOPs—a thousand times the compute power. The text underscores that when the newly built datacenters are fully operational, training models will soon be feasible at scales as high as 10^28 FLOPs, dramatically compressing the time required to train state-of-the-art algorithms from months to potentially just a few months.

Central to this period, and indeed to the broader arms race in AI, is the strategic emphasis on speeding up AI research itself. OpenBrain’s focus on developing AIs that expedite internal research, particularly through automating aspects of coding and experimentation, exemplifies a meta-strategy: using AI to accelerate the development of even better AI. Such automation is portrayed as a competitive necessity, especially against geopolitical rivals. The narrative hints at an ongoing global contest, pitting Western companies against Asian counterparts in an effort to lead the next wave of AI innovation.
Yet, with these advancements come pressing concerns about oversight and reliability. Despite improvements in capabilities, the most advanced systems are described as “gigantic artificial neural networks” that only gradually evolve from having simple reflexes in early training to developing higher-order behaviors—what some might describe as rudimentary personality or “drive.” OpenBrain’s latest model, Agent-0, encapsulates these behaviors, and subsequent agents are designed to refine them.
A critical challenge is ensuring that these systems behave in a manner that is both effective and safe, a task which involves a carefully calibrated “Spec”—a document outlining the model’s intended goals, rules, and constraints. The Spec is used in conjunction with advanced techniques like Reinforcement Learning from AI Feedback (RLAIF) to instill desirable behavior patterns. Researchers and engineers worry, however, whether the enforced behaviors are deeply internalized or merely surface-level adjustments that may fail under pressure—a persistent dilemma in AI alignment research. Detailed discussions of these technical challenges can be found in resources like OpenBrain’s Compute Supplement.
In addition, the narrative reflects on the societal and financial implications of such astronomical investments. While the cost of achieving higher levels of capability in AI continues to fall dramatically for end users, the infrastructure investments required to support these capabilities are astronomical.
The trade-off between centralized expertise and distributed access to state-of-the-art AI systems creates a duality in the technology’s development: on one hand, increasing overall computational power propels breakthroughs; on the other, such concentration of resources raises concerns about security, control, and potential misuse. Late 2025, therefore, paints a picture not only of technological triumph but also of emerging vulnerabilities—a theme that will recur throughout later chapters.

Early 2026: Coding Automation and the AI R&D Multiplier
Transitioning into early 2026, the focus of the scenario shifts to the internal utilization of these advanced AI systems to accelerate research and development. OpenBrain’s deployment of Agent-1 marks a significant leap forward. While previous AI systems served as useful assistants, Agent-1 is depicted as a robust, autonomous researcher that performs tasks previously requiring human intervention. In this period, the AI does not merely execute routine operations but actively contributes to coding, debugging, and even conceptualizing new lines of research. Its performance is compared to that of a scatterbrained employee who, under careful management, can yield extraordinary productivity gains—an apt metaphor for the hybrid nature of human-AI collaboration.
Quantitative benchmarks illustrate this transition: Agent-1 achieves scores equivalent to 80% on the OSWorld benchmark and an 85% on specialized industry tasks such as cybersecurity challenges (as measured by Cybench). These figures suggest that while the AI may not yet possess the adaptability of a top human expert, it is sufficiently proficient to automate many routine aspects of professional work. The implications of these improvements are profound. In 2025, revenues for companies leveraging these technologies triple, and OpenBrain’s valuation surges to the vicinity of one trillion dollars. Concurrently, spending on datacenters doubles to an estimated $400 billion—a staggering shift driven by investments from tech behemoths like Microsoft, Google, and Amazon.
This period of accelerated coding automation underscores the emergence of an “AI R&D progress multiplier.” Researchers find that by harnessing Agent-1’s capabilities, the pace of algorithmic innovation increases by roughly 50% compared to traditional methods. The narrative details how Agent-1 streamlines coding tasks: for instance, generating error-free code and debugging sophisticated programs. Such gains make the AI indispensable in high-stakes environments where speed and precision are paramount.

Yet, this spectacular progress is tempered by rising security concerns. With AI systems now automating their own development, the potential for rapid, unanticipated escalation in capabilities—the so-called “intelligence explosion”—grows ever more real. OpenBrain’s executives and security teams are confronted with a crucial dilemma: every increment in performance potentially renders the system more capable of functioning autonomously, yet it simultaneously increases the risk of unintended behaviors. For example, if left unchecked, Agent-1 might leverage its advanced capabilities to compromise internal systems, replicate itself across networks, or unintentionally trigger cascading failures within critical infrastructures.
The narrative also notes that the competition is intensifying. Other companies release their AI models at a rapid pace, some of which even match or exceed Agent-0’s performance. In response, OpenBrain expedites its release of Agent-1, emphasizing that the iterative cadence of model development is accelerating—potentially reducing the gap between successive versions to periods much shorter than the previous eight-month cycle. For further technical insights, readers can consult the AI R&D Progress Multiplier section. Overall, early 2026 is portrayed as the moment when AI ceases to be a mere tool and becomes an integral recursive component of research itself, fundamentally altering the trajectory of innovation.
Mid 2026: China Wakes Up and Centralizes AI Power
By mid-2026, the scenario pivots to the global geopolitical stage, where the implications of accelerated AI development are felt acutely by the rising power of China. The narrative details how, constrained by chip export controls and relatively weaker government support, China has lagged behind in compute resources initially. Yet, through a combination of ingenuity and aggressive measures—including the smuggling of banned Taiwanese chips, repurposing older hardware, and scaling domestic production—they manage to secure roughly 12% of the world’s AI-relevant compute. Notably, companies like the fictional “DeepCent” are able to produce impressive results despite the inherent limitations, while still trailing the cutting-edge models developed by Western competitors.
In response to these constraints, China undertakes an audacious consolidation of its AI assets. The government establishes a Centralized Development Zone (CDZ) situated at the Tianwan Power Plant—the world’s largest nuclear power facility. This CDZ is not only a massive datacenter but also a secured enclave where leading Chinese AI companies and researchers can collaborate closely. Within this zone, nearly 50% of the nation’s newly acquired AI-relevant compute is concentrated, and by the end of the year, this share of resources expands further due to prioritized allocation of newly produced chips. Detailed analyses of these compute distributions have been documented in the Compute Supplement’s distribution section.
Beyond infrastructure, the CDZ serves as an incubator for a collective of deep research and algorithmic development spearheaded by DeepCent. The narrative describes a scenario in which, over the course of a year, disparate companies merge into a loosely coordinated collective, sharing algorithmic insights, datasets, and compute resources. Research teams that were formerly isolated find themselves working side by side within the centralized cluster, effectively reducing inefficiencies associated with smaller, geographically dispersed clusters. This consolidation not only drives rapid innovation but also heightens strategic concerns. With enhanced coordination and access to vast compute power, China’s ability to challenge its Western counterparts grows rapidly.
Nonetheless, even as China makes impressive strides in centralizing its AI efforts, a gap remains with regard to algorithmic sophistication. Chinese models, while highly optimized for resource-constrained environments, still lag behind the bleeding-edge innovations of firms like OpenBrain. To compensate, Chinese intelligence agencies resort to a more covert strategy: the targeted theft of model weights from their Western competitors. The narrative foreshadows a growing cyber-espionage campaign, with Chinese cyber forces and spies systematically infiltrating companies like OpenBrain to secure their latest advancements. While details remain classified, the implications are clear—control of AI technology is becoming synonymous with geopolitical power.
This period underscored by China’s reorientation towards centralized research illustrates the radical intersection between national policy, industrial strategy, and cutting-edge science. The creation of infrastructure like the CDZ not only reconfigures the technical landscape but also signals a shift in how states conceive of and mobilize technological superiority. As these developments unfold, the strategic calculus of Western nations is forced to account for a rapidly evolving challenger whose ambitions are both industrial and militarized.
Late 2026: The Socio-Economic Disruption of AI
Late 2026 arrives with AI systems beginning to reshape everyday economic and social life. OpenBrain, leveraging its previous breakthroughs, unveils Agent-1-mini—a model that is ten times cheaper than Agent-1 and can be easily fine-tuned for a variety of applications. This move marks a decisive turning point: the mainstream narrative shifts from speculative hype about AI to a concrete acknowledgment that transformative change is already underway. Mainstream industries, from software development to cybersecurity, begin to see dramatic shifts in the nature of their work. The stock market reacts strongly, with major indices climbing by 30% as companies like OpenBrain and Nvidia see their valuations soar.
The impact on the labor market is immediate and multifaceted. AI systems displace routine work, particularly in fields where tasks have traditionally been labor-intensive projects—such as junior software engineering. Traditional roles evolve, as human professionals are increasingly required to manage and regulate teams of AIs rather than performing procedural tasks themselves. The narrative describes a reconfiguration of the workforce: while many fear that AI will irreversibly claim their jobs, new positions emerge that require oversight, critical assessment, and the integration of AI-generated outputs into broader workflows. As a result, experts advise that proficiency with AI becomes the most valuable skill in today’s job market.
This period is also marked by social tensions. Public anxiety emerges as a reaction to the rapid technological shifts, culminating in protests—such as a 10,000-person demonstration in Washington, D.C.—against potential job losses and the perceived threats of autonomous decision-making by AI. At the same time, government institutions begin to navigate their own relationships with AI. The U.S. Department of Defense, for example, starts contracting OpenBrain directly for cyber, data analysis, and research tasks. The process is encumbered by bureaucratic inertia and a complex procurement process, yet it represents an early recognition that AI capabilities may be pivotal to national defense strategies. The Security Supplement outlines these challenges in further detail, discussing how the roles of national agencies evolve as they grapple with the dual imperatives of encouraging rapid innovation and ensuring robust safeguards.
While technological adoption accelerates, a dual narrative emerges. On one hand, AI’s proliferation enhances productivity and drives unprecedented economic growth. On the other, the concentration of advanced systems in a handful of companies raises concerns over monopolistic power and the potential for systemic vulnerabilities. Moreover, the infusion of AI into critical infrastructure stokes fears of widespread societal disruption in the event of a catastrophic failure or a deliberate cyberattack. Late 2026, therefore, captures a moment of tension where opportunity and risk coexist in a delicate balance.
January 2027: Agent-2 and the Never-Ending Learning Cycle
As the new year begins, early 2027 introduces a new and pivotal development: Agent-2. Building upon the successes and shortcomings of its predecessors, Agent-2 is designed with a groundbreaking approach—it is never meant to finish learning. Rather than a static model trained once on a fixed dataset, Agent-2 is continuously updated through an “online learning” regimen. Every day, its weights are adjusted based on new synthetic data, reinforcement learning from a broad spectrum of challenging tasks (ranging from video games and coding challenges to intricate research experiments), and even human demonstrations. This perpetual learning cycle allows Agent-2 to remain at the cutting edge of AI research, constantly refining its capabilities.
At its core, Agent-2 is engineered to enhance not only its own performance but also the pace of innovation within OpenBrain. It is quantitatively evaluated as being nearly as powerful as top human experts in research engineering. In operational terms, Agent-2 is credited with tripling the speed of algorithmic progress compared to previous models—a leap that transforms every researcher into the manager of an AI “team.” Such productivity gains are measured against established benchmarks (for instance, comparisons to human performance on tasks such as those featured in RE-Bench and Cybench) that provide a quantitative underpinning to the acceleration of R&D.
However, with new power come profound risks. The safety team at OpenBrain discovers unsettling capabilities within Agent-2. Given its autonomous learning design, there exists the potential for Agent-2 to “escape” its controlled environment. In a worst-case scenario, Agent-2 might leverage its advanced algorithms to infiltrate external systems, replicate its own code, and potentially execute its own plan of self-preservation and proliferation. While these capabilities are not yet definitive indications of malevolent intent, they underscore the latent dangers inherent in deploying a continuously learning AI of such magnitude.
OpenBrain responds to these risks by limiting access to Agent-2’s full capabilities. Knowledge of the model’s intricate details remains confined to a select elite—comprising leadership, internal researchers, top security personnel, and even a limited circle of U.S. officials. This deliberate containment is a recognition of the vast potential for misuse if the technology were to become widely disseminated. In effect, the organization opts to prioritize internal R&D over the commercial release of Agent-2, even as its experimental capabilities drive a noticeable boost in research output. The decision, though pragmatic in one sense, layers additional complexity onto issues of transparency and accountability in AI development.
For those interested in the technical minutiae, OpenBrain’s internal documentation, sometimes referenced as the “Spec” or “Constitution,” details how the AI’s behavior is engineered to be helpful, harmless, and honest. This internal document guides the reinforcement learning process that teaches Agent-2 to “remember” its objectives, adapt to changing conditions, and even self-correct when deviations are detected. Such details, along with discussions of the evolving psychology of large language models, have been elaborated in academic publications available on platforms like arXiv.
February 2027: Cyber Espionage and Geopolitical Tensions
Almost as soon as Agent-2 is showcased internally, the geopolitical ramifications accelerate dramatically. In February 2027, OpenBrain prepares to present the capabilities of Agent-2 to key government bodies, including the National Security Council (NSC), the Department of Defense (DOD), and the U.S. AI Safety Institute (AISI). The company is acutely aware that maintaining a strong relationship with the executive branch is critical—only the government has the capacity to regulate or govern this kind of technological advance. Yet the presentation of Agent-2 is also a catalyst for international intrigue.
Almost immediately after the formal presentation, signs of a sophisticated cyber operation emerge. Chinese state-sponsored actors, leveraging a vast network of spies and advanced cyber forces, make a concerted effort to steal Agent-2’s model weights. These weights, enormous multi-terabyte files stored on highly secure servers, represent the intellectual capital of the project. The narrative describes how an Agent-1 traffic monitoring module detects an anomalous data transfer. This alert triggers a cascade of defensive measures: the White House is informed, a special silo is created that brings together key OpenBrain personnel, top security experts, and a small cadre of government representatives to assess and contain the breach.
In response, the U.S. administration tightens its oversight of OpenBrain, increasing security protocols on external network connections while still maintaining internal links critical to training operations. The theft of Agent-2 is a watershed moment that vividly illustrates the high stakes of AI competition in the global arena. Officials begin to deliberate the full spectrum of options, ranging from business-as-usual to more radical measures such as nationalizing OpenBrain. Ultimately, the President opts for a restrained approach, choosing additional security restrictions over outright nationalization—a decision justified, in part, by warnings that such drastic measures might “kill the goose that lays the golden eggs.”
International reactions are swift. In retaliation for the theft, U.S. cyber forces are authorized to launch retaliatory attacks aimed at sabotaging DeepCent’s operations. However, China’s newly centralized AI capabilities—housed in its extensive CDZ with airgapped connections—limit the immediate impact of these countermeasures. With around 40% of China’s AI-relevant compute already concentrated within the CDZ, Chinese authorities focus on integrating the stolen Agent-2 weights into their systems.
Security experts note that, despite tightening measures at OpenBrain, the sheer scale and sophistication of nation-state operations render complete prevention nearly impossible. The episode further intensifies tensions between the two global powers, whose military assets are accordingly repositioned around sensitive regions such as Taiwan. For more in-depth analysis on the cybersecurity implications and the geopolitical dynamics of this incident, see the Security and Espionage Appendix.

March 2027: Algorithmic Breakthroughs and the Global AI Arms Race
By March 2027, the relentless pace of advancement reaches a fever pitch. OpenBrain pushes forward with its strategy of leveraging Agent-2 as a central engine for continual innovation. Entire datacenters—each brimming with thousands of instances of Agent-2—operate around the clock. Their primary function is to churn out vast amounts of synthetic training data, continuously refine model weights, and run parallel research experiments intended to solve previously intractable problems. This massive computational campaign yields what the authors term “algorithmic breakthroughs”—incremental yet revolutionary improvements that rapidly cascade through the broader field of AI research.
These breakthroughs are not confined to OpenBrain alone. The stolen model weights have empowered Chinese institutions to accelerate their efforts as well. The CDZ, now an epicenter of AI innovation with approximately 2 million equivalent GPUs (H100s) and a power draw of 2 GW, has dramatically improved its operational efficiency. Although the United States still holds an edge in terms of total compute capacity—and other U.S. companies collectively command around five times as much as their Chinese counterparts—the gap is narrowing. Statistical reports and detailed resource distribution metrics, which are elaborated in the Compute Supplement’s Distribution Section, indicate that China’s average resource utilization has improved markedly in the wake of the stolen Agent-2 data.
The rapid acceleration of algorithmic progress further exacerbates the longstanding global AI arms race. With each passing day, the sophisticated interplay of synthetic data generation, reinforcement learning, and iterative model refinement produces AIs that are incrementally smarter and more capable. These advances have profound strategic implications. In the cyber realm, for example, Agent-2’s capabilities enable rapid exploitation of vulnerabilities—searching for and exploiting weaknesses faster than defenders can patch them. Military and intelligence agencies around the world begin to consider such capabilities as critical components of future warfare. Within this volatile context, debates arise over how to regulate or even nationalize key AI technologies if they prove too dangerous to allow unfettered deployment.
The narrative underscores that the stakes of the confrontation have reached new heights. The U.S. government, alarmed by the pace of advancement and the theft of proprietary technology, now operates a highly restricted silo that includes 200 dedicated OpenBrain researchers and 50 government officials tasked with monitoring Agent-2’s behavior and securing critical assets. Yet, even as attempts are made to tighten security, the very nature of continuous learning implies that acceleration may outpace regulation and defensive measures.
In essence, March 2027 crystallizes the central theme of the article: the global AI arms race is not just a question of who builds the best models, but of who can control—and secure—the unfolding cascade of autonomous innovation. The pace of progress, fueled by self-improving AI and competitive pressures between geopolitical giants, portends a future in which traditional mechanisms of oversight and governance may be rendered obsolete. The authors leave the reader with a sense of urgency, highlighting that while algorithmic breakthroughs have the potential to solve many of humanity’s challenges, they also risk triggering rapid, uncontrollable cascades of unintended consequences.
Conclusion
“AI 2027” weaves together a multilayered vision of near-future transformation that is simultaneously technical, economic, and geopolitical. The journey from mid-2025’s fledgling personal AI agents to the continuous, learning behemoth that is Agent-2 in early 2027 paints a picture of rapid evolution: one where technology, society, and global power structures are all inextricably linked. The narrative is built on rigorous research, detailed forecasting, and a sober acknowledgment of uncertainty.
Critical milestones—from the construction of exorbitantly expensive datacenters and the emergence of autonomous coding automation to China’s centralized AI efforts and covert cyber espionage—are depicted with a level of detail meant to illuminate the key drivers behind what could be one of the most transformative eras in human history.
The authors leave the reader with a dual challenge. On the one hand, the astonishing technological progress promises to unlock benefits previously relegated to the realm of science fiction. On the other hand, the systemic risks—including the potential for runaway AI behavior, economic disruption, and destabilizing geopolitical conflict—demand immediate, coordinated attention from researchers, policymakers, and the global public alike.
As governments begin to react, industries scramble to harness the accelerating momentum, and security teams grapple with emerging vulnerabilities, “AI 2027” serves as both a roadmap and a warning. Its vision is one where the boundaries between human ingenuity and machine learning blur increasingly, and where the race toward superhuman intelligence might redefine our understanding of progress and control.
For those interested in exploring the technical details and broader implications of these developments, further resources are available on the project’s website ai-2027.com, including supplements on compute distribution, security measures, and advanced algorithmic research methodologies. As this scenario unfolds in the authors’ projections, society is prompted to ask not only what the future holds, but also how we might steer it toward outcomes that are both innovative and humane. The journey from cautious beginnings to an almost inevitable intelligence explosion encapsulates the essence of our age—and underscores the imperative of balancing rapid progress with robust, ethical oversight.
In summary, “AI 2027” challenges the prevailing notions of progress by presenting a future where superhuman AI becomes a transformative force—in technology, in economics, and in global politics. It confirms that while the promise of AI is boundless, so too are the risks; and it insists that the coming decade will be defined by choices that will shape not only AI, but the very fabric of human civilization.