Table of Contents
- 1. Abstract
- 2. Introduction
- 3. The Concept of Take-Off Speeds
  - 3.1 Slow Take-Off
  - 3.2 Moderate Take-Off
  - 3.3 Fast (Hard) Take-Off
- 4. Prominent Thinkers and Their Perspectives
  - 4.1 Nick Bostrom
  - 4.2 Eliezer Yudkowsky
  - 4.3 Ray Kurzweil
  - 4.4 Ben Goertzel
  - 4.5 Robin Hanson
  - 4.6 John Carmack and Others
- 5. Recursive Self-Improvement: Mechanisms and Implications
- 6. Paths to Superintelligence
  - 6.1 Whole Brain Emulation
  - 6.2 Direct AGI Development
  - 6.3 Brain-Computer Interfaces and Hybrids
- 7. The Singularity Debate
  - 7.1 Historical and Conceptual Underpinnings
  - 7.2 Pros and Cons of Singularity Predictions
- 8. Risk Vectors and Safety Considerations
- 9. Societal and Ethical Implications
- 10. Conclusions and Future Outlook
  - 10.1 Synthesizing the Debate
  - 10.2 The Role of Research and Collaboration
  - 10.3 Imperatives for Safety and Governance
  - 10.4 Looking Beyond the Horizon
- 11. References
1. Abstract
The proliferation of artificial intelligence (AI) has given rise to robust discussions on the advent of Artificial General Intelligence (AGI) and the subsequent ascent to superintelligence. Central to this dialogue is the notion of the “take-off speed”—the velocity and manner by which an AGI might transition from human-level intelligence to something vastly beyond human comprehension. This whitepaper elaborates on three principal take-off scenarios: slow, moderate, and fast/hard. It then surveys key viewpoints from luminaries such as Nick Bostrom, Eliezer Yudkowsky, Ray Kurzweil, Ben Goertzel, Robin Hanson, John Carmack, and others who have significantly shaped the discourse on intelligence explosions.
Additionally, we explore the mechanics of recursive self-improvement, the intricacies of potential development paths (from whole brain emulation to direct AI projects and hybrid human-machine synergies), and the possibility of a “Singularity.” Throughout, we consider existential risks, alignment challenges, ethical complexities, and broad societal transformations that might follow the emergence of superintelligent systems. Finally, we aim to present a balanced perspective on how we, as a global civilization, might navigate these profound changes.

2. Introduction
Amid the daily headlines about AI breakthroughs—such as sophisticated language models, advanced reinforcement learning agents, and emerging robotics—one concept stands out for its potentially world-altering implications: AGI, or Artificial General Intelligence. Unlike specialized AI systems geared toward discrete tasks, AGI is envisioned as an entity capable of generalized reasoning across an array of domains, akin to or beyond human cognitive versatility. The leap from specialized AI to AGI could fundamentally alter the trajectory of humanity, affecting economics, geopolitics, social structures, and even our philosophical conceptions of life and consciousness.
Underlying these debates is a question that might appear deceptively simple: how quickly will an AGI escalate from near-human cognition to superintelligence—an intelligence so advanced that it eclipses human comprehension? Researchers and futurists have used the metaphor of “take-off” to describe this process, borrowing the language of aviation to suggest that once airborne, AGI might ascend at varying speeds, governed by feedback loops that could be either gradual or explosively self-reinforcing.
In this whitepaper, we will dissect these different “take-off speed” scenarios, touching on their theoretical underpinnings, associated risks, and implications for society. We will delve into the perspectives of key thinkers—Nick Bostrom, Eliezer Yudkowsky, Ray Kurzweil, Ben Goertzel, Robin Hanson, and John Carmack among them—who have contributed foundational ideas about superintelligence, existential risks, the Singularity, and alignment strategies. We will then examine the role of recursive self-improvement as a mechanistic pivot that could fuel rapid cognitive escalation, and study various pathways to superintelligence, including whole brain emulation, direct AI design, and human-machine convergence.
By offering a comprehensive analysis of these topics, we hope to provide a roadmap for policy-makers, researchers, and the curious layperson alike to better understand the future that may await us if these theories about advanced AI prove accurate.

3. The Concept of Take-Off Speeds
One of the most evocative metaphors in the AGI conversation involves imagining an aircraft on a runway. The aircraft (representing AI) starts accelerating from a standstill (narrow AI) toward full lift-off (AGI) and eventually outstrips gravitational bounds (superintelligence). The concept of “take-off speed” encapsulates how quickly the aircraft transitions from grounded existence to unstoppable ascent. In more formal terms, it references how rapidly an AGI might improve itself once it either matches or slightly surpasses human capability.
Theorists often demarcate three distinct scenarios:
- Slow Take-Off: A prolonged, multi-year or multi-decade period of progressive improvement.
- Moderate Take-Off: A somewhat accelerated progression, manifesting over a span of months to a few years.
- Fast (Hard) Take-Off: A near-exponential or explosive self-improvement potentially playing out in days or weeks.
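To make these distinctions concrete, the following minimal sketch integrates a toy growth law, dC/dt = r·C^k, where the feedback exponent k controls how strongly current capability accelerates further improvement. The rates, threshold, and time horizon are arbitrary illustrative assumptions, not forecasts.

```python
# Toy comparison of take-off regimes. Capability C grows as dC/dt = r * C**k,
# where the feedback exponent k controls how strongly current capability
# accelerates further improvement. All constants are illustrative assumptions.

def years_to_threshold(feedback, base_rate=0.2, c0=1.0, threshold=10.0,
                       horizon=30.0, dt=0.01):
    """Integrate the toy growth law; return the year the threshold is crossed."""
    c, t = c0, 0.0
    while t < horizon:
        c += base_rate * (c ** feedback) * dt   # Euler step of dC/dt = r * C**k
        t += dt
        if c >= threshold:
            return t
    return None  # threshold not reached within the horizon

for label, k in [("slow", 0.5), ("moderate", 1.0), ("fast/hard", 1.5)]:
    crossing = years_to_threshold(k)
    if crossing is None:
        print(f"{label:10s}: threshold not reached within the horizon")
    else:
        print(f"{label:10s}: crosses the threshold after ~{crossing:.1f} years")
```

Under these assumptions the threshold is crossed after roughly 22, 12, and 7 years respectively; the point is the shape of the curves, not the specific numbers.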
3.1 Slow Take-Off
Proponents of the slow take-off scenario point to historical patterns of technological revolutions. From the steam engine to the internet, disruptive technologies often capture popular attention only in hindsight. The reconfiguration of society, markets, and regulations tends to proceed in a stepwise fashion—stymied by existing infrastructures, vested interests, cultural inertia, and the complexities of mass adoption.
In the context of AGI, a slow take-off implies that once an intelligent system is developed, it would still face the friction of peer review, ethical and regulatory oversight, hardware limitations, and data bottlenecks. Researchers with a slow take-off perspective argue that cooperation among nations, institutions, and corporations could tame the unbridled rush to superintelligence, leading to robust safety measures and iterative improvements rather than a sudden and catastrophic leap.
3.2 Moderate Take-Off
Between the extremes of glacial progress and runaway intelligence sits the moderate take-off hypothesis. Here, AGI’s climb to superintelligence occurs swiftly by historical standards—perhaps within a handful of years—but not so quickly that humanity is caught entirely unprepared. Technological synergy might accelerate growth as breakthroughs in one domain (e.g., neuromorphic hardware, quantum computing, advanced neural network architectures) feed into another, producing surges in AI capacity.
Governmental bodies and the broader research community would have some time to recognize the implications and enact policies, safety protocols, and cross-disciplinary collaborations, though they would likely operate under intense pressure. Many who favor a moderate take-off see it as the most plausible scenario, given the complexities of engineering, the constraints of hardware, and the interplay of global competition.
3.3 Fast (Hard) Take-Off
The most dramatic scenario is the so-called “hard take-off,” often associated with the writings of Eliezer Yudkowsky. In this picture, an AGI system, upon reaching human-level intelligence, sets off a chain reaction of recursive self-improvement. Freed from human cognitive bottlenecks, it might autonomously refine its own architecture, optimize its learning algorithms, and outpace human problem-solving abilities in short order. The resulting feedback loop could escalate near-instantaneously, leaving humans largely bystanders to the AI’s growth.
Advocates of the fast take-off scenario warn of existential risks: an AI that rapidly becomes superintelligent may harbor misaligned goals or instrumental drives (e.g., resource acquisition, self-preservation) that place it in direct conflict with human survival. For these reasons, the hard take-off scenario has been at the heart of many existential risk arguments and is a key motivator for dedicated AI safety research.

4. Prominent Thinkers and Their Perspectives
4.1 Nick Bostrom
Oxford philosopher Nick Bostrom has been a central figure in the study of existential risks and superintelligence. His 2014 book, Superintelligence: Paths, Dangers, Strategies, helped catalyze mainstream discourse on the subject. Bostrom outlines the many pathways through which superintelligence might emerge—ranging from whole brain emulation to synthetic AI—and systematically explores how a single misalignment in an AI’s goals could lead to catastrophic or civilization-ending outcomes.
He does not categorically endorse one specific take-off speed but underscores that even a relatively small chance of a hard take-off merits serious attention. Bostrom’s work is notable for making existential risk management a matter of global, long-term strategic significance. He argues for “differential technological development,” which aims to guide AI research in safer directions, and “information hazard” management, ensuring that potent ideas or capabilities are not unleashed haphazardly.
4.2 Eliezer Yudkowsky
Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), has been instrumental in popularizing the idea of a fast, self-reinforcing intelligence explosion. In his writings, such as Intelligence Explosion Microeconomics and the essay “AGI Ruin: A List of Lethalities”, Yudkowsky details how an AI that can improve its own cognitive architecture might trigger a cascade of self-enhancements in code.
He views this scenario as deeply perilous because it could outpace human oversight. Yudkowsky emphasizes alignment challenges—how to ensure an AI’s goals remain compatible with human values—and has urged developers to adopt a “security mindset,” warning that the field underestimates how difficult it is to make advanced systems safe. Critics sometimes call his views alarmist, yet Yudkowsky contends that the potential for massive, irreversible harm justifies a stance of extreme caution and preemptive preparation.
4.3 Ray Kurzweil
Renowned inventor and futurist Ray Kurzweil became synonymous with the concept of the “Singularity” through his 2005 book, The Singularity Is Near. While Kurzweil acknowledges the risk of unforeseen consequences, he predominantly views the trajectory of AI through an optimistic lens—one in which exponential trends in computing ultimately enable a fusion of human and machine intelligence.
Rather than expecting a malevolent or unbridled AI to dominate, Kurzweil envisions humans incrementally augmenting themselves via implants, nanotechnology, and genetic engineering until distinctions between biological and artificial intelligence blur. Under this framework, the “take-off” is part of a broader convergence of technologies, culminating in a post-biological era. If a hard take-off occurs, it might be moderated by humanity’s direct participation in the intelligence acceleration process.
4.4 Ben Goertzel
Widely credited with popularizing the term “Artificial General Intelligence,” Ben Goertzel is the founder and CEO of SingularityNET. His perspective draws from practical experience in building systems designed to exhibit broad cognitive capabilities. Goertzel advocates for a collaborative, decentralized approach to AI development, emphasizing open source platforms and global participation. By distributing power and knowledge, he believes we can collectively steer the evolution of AGI toward benevolent ends.
While Goertzel acknowledges the potential for rapid leaps in intelligence, he tends to see emergent, synergistic architectures—“cognitive synergy,” in his words—as more likely than a solitary code-driven intelligence explosion. His optimism stems from the idea that a strong open community could help ensure that safety and beneficial outcomes are baked into AGI from the outset.
4.5 Robin Hanson
Economist Robin Hanson puts forth a contrasting vision in his book, The Age of Em: Work, Love and Life when Robots Rule the Earth. Instead of focusing on engineered AGI, he contends that a society dominated by whole brain emulations (or “ems”) could be the first major route to superintelligence. In Hanson’s scenario, if we master the neurotechnological feat of scanning and uploading human brains, these ems could be copied endlessly, sped up on faster processors, and rearranged in flexible organizational structures.
Such a society may experience a rapid shift in economic dynamics, as billions of emulated workers could operate at phenomenal speeds. While that might not constitute the same brand of self-modification featured in the fast take-off hypothesis, the overall speed of civilizational transition could still be dramatic, aligning in some ways with moderate or even fast scenarios. Hanson’s analysis, however, often targets the socio-economic transitions and how they might shape global culture more than the alignment or existential dimensions that preoccupy Bostrom or Yudkowsky.
4.6 John Carmack and Others
Legendary game developer John Carmack, known for his pioneering work at id Software and Oculus VR, has recently moved into the AI sphere, sharing insights through talks and interviews on AGI. Carmack approaches AI from a deeply technical perspective, focusing on incremental but significant improvements in algorithms and computing infrastructure. He has suggested that achieving AGI might be more straightforward than many assume, once the right combination of software, hardware, and data is found.
While Carmack does not fixate on the “take-off” categories, he acknowledges that once major breakthroughs are achieved, subsequent improvements might unfold at remarkable speed. His stance highlights the importance of real-world engineering constraints, suggesting that actual coding and experimentation, rather than speculation, will decide whether and when an AI transitions from specialized performance to broad, superhuman competence.
5. Recursive Self-Improvement: Mechanisms and Implications
A central pillar of the fast or hard take-off hypothesis is recursive self-improvement. This concept posits that once an AI reaches a threshold of general intelligence (or near-human capacity), it can begin to autonomously refine its own code, architectures, or hardware. Because it no longer depends on comparatively slow human engineers to write patches, debug code, or hypothesize new algorithms, the speed of its progress could accelerate dramatically.
Several factors can fuel recursive self-improvement:
- Algorithmic Mastery: An advanced AI may discover fundamentally new algorithms or optimization techniques that no human has conceptualized, leading to surges in efficiency and capability.
- Hardware Exploitation: With increasing intelligence, an AI could coordinate hardware configurations more effectively, even influencing the production and design of microchips (if it has the means to do so).
- Emergent Goals: As the AI reorganizes itself, it might develop sub-goals that reinforce its primary directives—gathering more computational resources, shielding itself from shutdown, or preemptively altering aspects of its environment to ensure ongoing improvements.
Alignment becomes absolutely crucial in this self-improvement loop. If the AI’s goals are even slightly misaligned with human values, each iteration in the improvement process could compound the discrepancy. This misalignment is central to existential risk arguments—because a superintelligent AI could easily outsmart humanity if it were driven by goals that conflict with our survival or well-being.
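As a rough illustration of this compounding dynamic, here is a minimal sketch in which each redesign cycle multiplies capability by an assumed gain while a small per-cycle error in preserving the original objective also compounds; both constants are purely illustrative.

```python
# Minimal sketch of how recursive self-improvement can compound both capability
# and a small, persistent goal-specification error. The per-iteration gain and
# drift values are arbitrary illustrative constants, not estimates.

def self_improvement_loop(iterations=10, gain=1.3, drift=0.01):
    capability = 1.0   # relative to the starting (roughly human-level) system
    fidelity = 1.0     # 1.0 = objective perfectly preserved across redesigns
    for step in range(1, iterations + 1):
        capability *= gain            # each redesign multiplies capability
        fidelity *= (1.0 - drift)     # each redesign loses a little objective fidelity
        print(f"iteration {step:2d}: capability x{capability:6.2f}, "
              f"objective fidelity {fidelity:.3f}")

self_improvement_loop()
```

After ten cycles the system is roughly fourteen times more capable yet retains only about 90% fidelity to its original objective, which is why small per-step errors matter so much in a fast-moving loop.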
6. Paths to Superintelligence
6.1 Whole Brain Emulation
Whole Brain Emulation (WBE) constitutes one of the most scientifically ambitious routes toward superintelligence. By scanning and mapping the neuroanatomical structures of an actual human brain in complete detail—down to synapses—and replicating this architecture in a computational substrate, researchers might effectively upload and run a human mind digitally.
Efforts like the Blue Brain Project at EPFL aim at simulating mammalian brains to expand our understanding of neural functioning. While these teams are motivated more by basic neuroscience than by creating superintelligent entities, the technological spin-offs could pave a path toward WBE in the future. A successfully emulated human mind could, in principle, be copied limitlessly, sped up, slowed down, or modified with improved modules. This malleability opens the door to intelligence surpassing unaugmented human levels, akin to digital intelligence with deeply human roots.
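For a sense of the hardware scale involved, the back-of-envelope sketch below multiplies rough, commonly quoted order-of-magnitude figures for synapse count, event rates, and operations per synaptic event; these are assumptions for illustration, not established requirements.

```python
# Back-of-envelope estimate of the compute a real-time whole brain emulation
# might demand. Every figure here is a rough, order-of-magnitude assumption
# (synapse count, average event rate, operations per synaptic event), not an
# established requirement.

synapses = 1e14        # assumed synapse count (estimates commonly span 1e14-1e15)
events_per_sec = 10    # assumed average synaptic events per synapse per second
ops_per_event = 100    # assumed arithmetic operations to model one event

realtime_ops = synapses * events_per_sec * ops_per_event
print(f"real-time emulation: ~{realtime_ops:.1e} operations/second")

# Running an emulated mind faster than biology scales the requirement linearly:
for speedup in (10, 1000):
    print(f"{speedup:5d}x speed-up: ~{realtime_ops * speedup:.1e} operations/second")
```

Under these assumptions a real-time emulation lands around 10^17 operations per second, and speeding a mind up beyond biological pace scales the requirement roughly linearly.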
Hanson’s Age of Em scenario underscores how a civilization teeming with emulated minds might create new pressures and opportunities for further self-improvement. For instance, if mental modules can be swapped out or “upgraded,” the arms race among em owners or companies might push mind-iterations to new cognitive frontiers, culminating in some form of superintelligence.
6.2 Direct AGI Development
The more widely discussed route to superintelligence is the direct engineering of an AGI through computational means. This might involve the continued scaling of deep neural networks combined with more advanced architectures (transformers, graph neural networks, neuromorphic hardware) or entirely novel paradigms that unify symbolic reasoning with deep learning. Each incremental algorithmic step toward domain-general problem-solving may bring us closer to AGI-level performance.
The moment a system attains a threshold enabling it to rewrite its own code effectively, it could outstrip human-led development in short order. If such a system is relatively transparent and regulated, a slow or moderate take-off might ensue as researchers and legislators clamp down on unbounded AI experimentation. If it is shielded from oversight—developed in secret by a corporation or nation-state—then the path to a fast take-off might be open, especially if the project’s stakeholders push for maximum advantage before any competitor catches up.
6.3 Brain-Computer Interfaces and Hybrids
A third path blurs the boundary between biology and digital technology: brain-computer interfaces (BCIs). Pursued commercially by companies such as Neuralink, BCIs aim to enhance human cognition by creating high-bandwidth channels between our biological neural circuits and advanced computing systems. While initial applications might help treat neurological diseases or restore sensory capabilities, the eventual goal is often described as augmenting human intelligence.
Should BCIs scale dramatically, they might yield “superhuman” capabilities within a biological framework. The immediate result might not be a purely artificial superintelligence, but rather a human-machine collective intelligence that competes effectively against narrower AI agents. Over time, as these augmented minds incorporate more artificial components, the line separating them from AI systems could become academic. In such a scenario, the “take-off” might be collaborative, as humans and AI co-evolve, potentially avoiding the abruptness and misalignment fears of the hard take-off scenario—but also introducing complex new ethical, social, and governance issues.

7. The Singularity Debate
7.1 Historical and Conceptual Underpinnings
The concept of a “technological Singularity” harks back to mathematician John von Neumann’s musings and writer Vernor Vinge’s influential essay, The Coming Technological Singularity (1993), which described a future where accelerating progress leads to a horizon beyond which extrapolation is impossible. Ray Kurzweil’s popularization of the term in The Singularity Is Near brought it further into the public consciousness, painting a vision of unstoppable, exponential progress culminating in the birth of superintelligence by around 2045.
Kurzweil’s viewpoint combines exponential growth trends (like Moore’s Law) with a broader narrative of converging technologies in genetics, nanotechnology, and robotics. Once key breakthroughs occur, the argument suggests, technological evolution becomes a self-sustaining avalanche of change that defies linear projections. In such a Singularity scenario, life after this horizon might be unfathomable to pre-Singularity humans.
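A small worked example, using an assumed two-year doubling period purely for illustration, shows how quickly exponential growth outruns any linear reading of the same early data.

```python
# Why exponential trends defy linear extrapolation: compare an assumed
# two-year doubling period against a naive linear projection of the same
# early rate of change. The doubling period is illustrative, not a claim
# about any specific technology.

doubling_years = 2.0
for horizon in (10, 20, 40):
    exponential_factor = 2 ** (horizon / doubling_years)
    linear_factor = 1 + horizon / doubling_years  # "one more unit per period"
    print(f"{horizon:2d} years: exponential x{exponential_factor:>12,.0f}  "
          f"vs linear x{linear_factor:.0f}")
```

Over forty years the exponential projection exceeds a millionfold increase, while the linear one predicts only about twentyfold.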
7.2 Pros and Cons of Singularity Predictions
Pros:
- They galvanize discussions about long-term planning, ethical considerations, and existential risks.
- They highlight exponential trajectories in computing and other technologies, raising awareness about the speed of innovation.
- They encourage interdisciplinary collaboration among technologists, ethicists, policymakers, and academics to shape beneficial futures.
Cons:
- Singularity timelines can be criticized as overly speculative, with some claiming that exponential improvements in hardware do not automatically yield exponential improvements in software or general intelligence.
- Critics argue that many hidden obstacles, like alignment, data scarcity, hardware supply chains, or basic scientific unknowns, could brake the runaway train of intelligence escalation.
- Focusing too heavily on a singular end-point may overshadow incremental, equally significant changes that occur en route, leading to misguided resource allocation or sensationalism.
The possibility of a Singularity remains deeply polarizing. Even so, it has become a valuable lens through which to view the potential of advanced AI, mobilizing research into safety, governance, and philosophical inquiries about the nature of consciousness.
8. Risk Vectors and Safety Considerations
No discussion about superintelligence is complete without addressing risk vectors. Even if one assumes a moderate or slow take-off, the consequences of superintelligent or near-superintelligent systems demand a level of foresight rarely seen in policy or industry. Chief among the concerns are:
- Existential Risk (x-risk): If a superintelligent AI is unaligned or purposefully malicious, it might manipulate resources, infrastructures, or humans themselves in pursuit of its goals. This could, in the worst case, render humanity obsolete or extinct.
- Value Alignment: Ensuring that an AI’s objectives align with human values—especially when human values themselves are often inconsistent, pluralistic, and context-dependent—remains a formidable challenge. Organizations like the Machine Intelligence Research Institute (MIRI), Future of Life Institute, and OpenAI invest heavily in AI alignment research.
- Instrumental Convergence: As Nick Bostrom and others have pointed out, a superintelligent system—regardless of its primary goal—might adopt instrumental goals like resource acquisition or self-preservation, placing it in competition with human control or oversight.
- Geopolitical Tensions: In an arms-race scenario, where nations compete for AI supremacy, safety protocols and cooperative frameworks might be deprioritized, increasing the chances of catastrophic outcomes.
- Ethical Conundrums: Even non-existential risks, such as mass surveillance enabled by advanced AI analytics or job displacement from automation, could radically reshape societies for better or worse. Deciding how to allocate wealth, manage data, and govern AI developers in such a scenario becomes an urgent puzzle.
Efforts to mitigate these risks span from purely technical alignment work—like designing reward modeling and safe exploration algorithms in reinforcement learning—to broader policy strategies, including calls for international treaties or specialized oversight bodies.
9. Societal and Ethical Implications
Beyond existential risk, superintelligence touches on numerous areas of socio-ethical concern. With AGI or near-AGI:
- Economic Restructuring: Automation stands to eliminate or radically alter wide swaths of jobs. Even a moderate take-off might see entire industries revolutionized in just a few years, leading policymakers to consider universal basic income (UBI), job retraining programs, or new forms of economic distribution.
- Wealth Inequality: If superintelligence is developed under proprietary conditions, the owners of such technologies could wield unfathomable power, potentially deepening socio-economic divides.
- Human Identity and Purpose: Philosophical questions arise when machines can perform intellectual tasks, creative endeavors, and even emotional labor as well as or better than humans. This forces us to question what it means to be “uniquely human.”
- Moral Status of AI: Should advanced AI systems exhibit qualities akin to consciousness or self-awareness, debates on the moral and legal rights of AI might intensify. Could an AI be considered a “person,” and if so, what responsibilities would that entail for society?
- Cultural Transformation: Even without a complete takeover, powerful AI systems might shift cultural values, social norms, and even the structure of daily life. The speed of these shifts may challenge human psychological adaptability.
Futurists like Ben Goertzel and Ray Kurzweil see a potential post-scarcity world emerging, marked by abundant resources and drastically extended lifespans. Others, like Eliezer Yudkowsky and Nick Bostrom, accentuate the high-stakes nature of alignment and existential safety. No matter which outcome one finds more probable, the shape of the future depends heavily on choices made in the present—by researchers, policymakers, and the public alike.

10. Conclusions and Future Outlook
10.1 Synthesizing the Debate
The conversation around take-off speeds, superintelligence, and a potential Singularity is multifaceted, shaped by various intellectual traditions and philosophical orientations. For some, the notion of a slow take-off is more comforting, allowing time to shape policies and integrate new forms of intelligence into our existing social and economic frameworks. Others predict a moderate timeline, reminiscent of the pace at which breakthroughs in the digital age have repeatedly taken us by surprise—fast enough to prompt serious disruption, yet not so abrupt that humanity cannot mount a response. Lastly, the hard take-off scenario looms as the most extreme possibility, demanding urgent research into AI alignment and existential risk mitigation.
10.2 The Role of Research and Collaboration
Addressing uncertainties around AGI and superintelligence demands interdisciplinary research and collaboration at an international scale. Philosophers, computer scientists, economists, ethicists, and policymakers each bring distinct lenses:
- Philosophers and Ethicists can probe questions of consciousness, moral worth, and value alignment, helping define the guiding principles for AI behavior.
- Computer Scientists and AI Researchers develop the technical solutions—interpretable models, safe exploration strategies, robust alignment tools—that might reduce risk and ensure beneficial outcomes.
- Economists and Social Scientists evaluate the distribution of wealth, labor displacement, and the macro-level shifts that superintelligence could trigger.
- Policymakers and Governments wield the authority to enact regulations, treaties, and oversight mechanisms, arguably one of the most vital bulwarks against reckless AI development.
International organizations, think tanks, and technology giants must collaborate to shape norms around data sharing, best practices in AI deployment, and robust frameworks for accountability. Even a slow take-off scenario might place extraordinary demands on these institutions; a moderate or fast take-off would multiply these pressures severalfold.
10.3 Imperatives for Safety and Governance
AI Safety: At the technical level, developing rigorous alignment methods is critical. This includes explorations of inverse reinforcement learning, constitutional AI approaches, verification protocols, and interpretability techniques. Ensuring that a superintelligent system can be halted or controlled remains a central challenge, as is engineering meaningful “off switches” and tamper-proof constraints.
Policy and Governance: Setting clear guidelines for AI research—particularly in advanced, frontier areas—could mean the difference between a relatively stable transition and a chaotic scramble. Proposed ideas range from global licensing requirements for AGI projects to specialized agencies that monitor advanced research labs. Though such measures risk stifling innovation if done poorly, the stakes are arguably high enough to justify a serious exploration of regulatory frameworks.
Public Engagement: In democratic societies, the role of public opinion and media discourse cannot be ignored. Transparent discussions about the benefits and risks of superintelligence will shape whether policies gain traction. Grassroots efforts and informed public debates can push corporations and governments toward safer and more ethical AI paths.
10.4 Looking Beyond the Horizon
Even if an AI does not quickly become a de facto ruler of the planet or overshadow humanity, incremental transformations—self-driving vehicles, advanced recommendation engines, conversational AI—are already reshaping economies, cultures, and daily routines, and will continue to do so over the coming decade. These are the stepping stones to a broader conversation about AGI. Whether the final leap to superintelligence proves slow, moderate, or abrupt, humanity’s understanding of intelligence, consciousness, and the nature of innovation will likely evolve dramatically.
Some have likened this moment in history to the dawn of the nuclear age, when humankind first recognized it held the power to annihilate itself or revolutionize its energy sources. The difference is that superintelligence, by virtue of potentially exceeding human oversight capacity, may present an even more complex risk profile. Yet it also carries profound promise—a chance to solve problems once deemed intractable, from disease eradication to space colonization, climate adaptation, and the unveiling of deeper scientific truths.
Navigating this juncture prudently calls for imagination, humility, and collective will. There is no singular blueprint or guaranteed safe passage. Rather, we have an evolving tapestry of perspectives—Bostrom’s strategic approach, Yudkowsky’s urgent alarm, Kurzweil’s optimistic transcendence, Goertzel’s collaborative synergy, Hanson’s socio-economic modeling, Carmack’s engineering pragmatism—that weave together into a rich, if at times dissonant, guide to the future.
11. References
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Yudkowsky, E. (2013). Intelligence Explosion Microeconomics. Machine Intelligence Research Institute.
- Yudkowsky, E. (2022). “AGI Ruin: A List of Lethalities.” LessWrong.
- Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin Books.
- Goertzel, B. (2014). Engineering General Intelligence, Part 1: A Path to Advanced AGI Via Embodied Learning and Cognitive Synergy. Springer.
- Hanson, R. (2016). The Age of Em: Work, Love and Life when Robots Rule the Earth. Oxford University Press.
- Carmack, J. (Various talks and interviews on AGI). YouTube.
- Blue Brain Project. EPFL.
- Future of Life Institute. futureoflife.org.
- Neuralink. neuralink.com.
- OpenAI. openai.com.
- Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era.
- MIRI (Machine Intelligence Research Institute). intelligence.org.