Artificial intelligence has often felt like an arms race. New architectures, bigger models, advanced pipelines—they all promise the next frontier of technological breakthroughs. Yet we still seem far from an AI that genuinely “understands.” We’re chasing an elusive goal: an AI that thinks, adapts, and evolves the way humans do. But now, a fascinating wave of research is gaining traction. Two key players—NDEA and a new startup helmed by renowned AI researcher François Chollet—are taking the limelight with innovative visions of artificial general intelligence (AGI).
NDEA is a relatively new initiative, and it’s not just about scaling up. Instead, it’s about building an AI that continuously learns with no obvious bottlenecks in sight. Chollet’s venture? A brand-new lab focused precisely on AGI. It’s a big statement in a crowded space. Skeptics wonder if these developments are hype or genuine leaps forward. Will these fresh approaches rewrite AI fundamentals? Or will they join the long list of ephemeral trends that leave us stuck with the same old limitations?
Some say we’re on the cusp of a renaissance in machine learning. Others see only incremental tweaks to deep learning. But behind the buzzwords, there’s a wave of change. It’s disruptive and potentially explosive. This article will dive deep. We’ll explore NDEA’s “Deep Learning-Guided Program Synthesis,” an ambitious method that wants to fuse neural networks with symbolic reasoning. We’ll then see how Chollet’s new startup aims to fast-track AGI by harnessing distinct frameworks, bridging the gap between data-driven learning and self-sufficient reasoning. Get ready to peel back the layers of the next AI revolution.
A New Paradigm for Human-Like Learning

It’s one thing to train a neural network on billions of labeled images. It’s another to achieve true adaptability. Humans learn rapidly from small amounts of data. We can generalize from a handful of examples and transfer knowledge across domains. According to The Decoder, NDEA’s new methodology, called “Deep Learning-Guided Program Synthesis,” aims to replicate that kind of versatility. This isn’t about memorizing massive datasets. It’s about building AI systems that deconstruct tasks and reconstruct solutions from limited input.
For years, program synthesis has intrigued researchers. The concept: machine-generated programs that fulfill a high-level specification. Typically, the approach demands specialized solvers or exhaustive search. That’s slow and often impractical. Now, with NDEA’s new direction, deep learning acts as the guiding light. It narrows down the search space and sculpts the path to solutions. The result, if successful, is an AI that can reason about program structures more like humans reason about everyday tasks.
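To make that concrete, here is a minimal sketch of how a learned guide can steer an enumerative program search. The toy DSL, the scoring heuristic standing in for a trained neural model, and every name in it are our own illustrative assumptions, not NDEA’s actual system.

```python
# Minimal sketch of deep learning-guided program synthesis: a tiny DSL
# of integer functions, an enumerative searcher, and a "guide" that
# scores candidate programs so promising ones are tried first. In a
# real system the guide would be a trained neural network; here it is a
# hand-written stand-in so the example stays self-contained.

from itertools import product

# Hypothetical DSL: each primitive maps an int to an int.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}


def run(program, x):
    """Apply a sequence of primitive names to the input value."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x


def guide_score(program, examples):
    """Stand-in for a learned model: prefer short programs whose output
    lands close to the targets on the given examples (lower is better)."""
    error = sum(abs(run(program, x) - y) for x, y in examples)
    return error + 0.1 * len(program)


def synthesize(examples, max_depth=3):
    """Enumerate programs up to max_depth, trying best-scored first."""
    candidates = [p for depth in range(1, max_depth + 1)
                  for p in product(PRIMITIVES, repeat=depth)]
    # The guide orders the search; a neural guide would do this lazily.
    for program in sorted(candidates, key=lambda p: guide_score(p, examples)):
        if all(run(program, x) == y for x, y in examples):
            return program
    return None


if __name__ == "__main__":
    # Few-shot specification of f(x) = (x + 1) * 2
    examples = [(1, 4), (2, 6), (5, 12)]
    print(synthesize(examples))  # ('inc', 'double')
```

The division of labor is the point: the guide only orders the search, while the final artifact is an explicit program that can be read, checked, and reused.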
But will it work at scale? Critics argue that bridging neural networks and symbolic logic remains tricky. The synergy is appealing in theory. In practice, the two paradigms can clash. Deep nets excel at pattern recognition, but they’re notoriously opaque. Symbolic methods are interpretable but often fragile in diverse or noisy real-world settings. Despite those warnings, NDEA is marching forward. They assert that advanced representation learning can unify both. They say that’s how humans do it—we latch onto patterns, then transform them into procedural logic. The big question: can a synthetic intelligence replicate that?
NDEA’s goals aren’t modest. They don’t just want to run code. They want to craft AI that, once it sees a pattern, can reapply that knowledge to novel problems. That’s the essence of “lifelong learning” or “continual learning.” If NDEA’s approach is correct, the AI should never hit a dead end. It should keep evolving, keep upgrading, and keep surprising its own makers.
Chollet’s Big Bet on AGI
François Chollet, best known for creating the Keras deep learning framework, is no stranger to innovation. He championed user-friendly interfaces that helped propel TensorFlow to mass popularity. Now, he’s plotting a new trajectory. According to TechCrunch, Chollet recently founded a new AI lab laser-focused on artificial general intelligence. This isn’t about incremental improvements or refining existing architectures. This is about going all in on what many consider the holy grail of computing: a system with human-like general intelligence.
Hype around AGI has existed for decades. The concept is older than many of today’s leading AI scientists. But the goal has always been elusive. Neural networks soared in popularity once they started beating benchmarks in image classification, speech recognition, and natural language processing. Yet each success typically involved specialized modeling. True generality remained out of reach. Chollet’s vision? A radical rethinking. He believes modern systems lack a “model of the world” that enables meaningful abstraction. In simpler terms, they’re good at correlation, but not so good at building conceptual structures that generalize beyond the training domain.
Chollet’s new AI lab, so far unnamed to the public, reportedly emphasizes efficient generalization, innate curiosity, and minimal reliance on massive labeled data. According to SiliconANGLE, the startup’s early prototypes show promise in puzzle-solving tasks that stump existing neural net architectures. Insider tips hint at a “modular intelligence” approach. That’s Chollet’s trademark style—break down tasks into composable units, each learned independently but integrated at run time. This modular method might let AI pivot from one domain to another with minimal overhead. If it works, it could be a big step toward an AI that doesn’t crumble outside a narrow domain.
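To illustrate what composable units might look like in code, here is a purely hypothetical sketch. The module names and the routing logic are invented; nothing about the lab’s real architecture has been published.

```python
# Hypothetical sketch of a "modular intelligence" layout: independent
# skill modules behind a common interface, composed at run time by a
# simple router. Module names and routing logic are invented; this only
# illustrates the composability idea, not the lab's real design.

from typing import Callable, Dict


class Module:
    """A self-contained skill: decides whether it applies, then acts."""

    def __init__(self, name: str, applies: Callable[[str], bool],
                 solve: Callable[[str], str]):
        self.name, self.applies, self.solve = name, applies, solve


class ModularAgent:
    """Routes each task to whichever module claims it. New modules can
    be registered without retraining or touching the existing ones."""

    def __init__(self):
        self.modules: Dict[str, Module] = {}

    def register(self, module: Module) -> None:
        self.modules[module.name] = module

    def handle(self, task: str) -> str:
        for module in self.modules.values():
            if module.applies(task):
                return module.solve(task)
        return "no module claims this task"


agent = ModularAgent()
agent.register(Module(
    "arithmetic",
    applies=lambda t: t.replace(" ", "").replace("+", "").isdigit(),
    solve=lambda t: str(sum(int(n) for n in t.split("+"))),
))
agent.register(Module(
    "reverse",
    applies=lambda t: t.startswith("reverse:"),
    solve=lambda t: t[len("reverse:"):][::-1],
))

print(agent.handle("2 + 3 + 4"))          # 9
print(agent.handle("reverse:synthesis"))  # sisehtnys
```

The appeal of this shape is that adding a skill does not disturb the ones already in place, which is exactly the pivot-between-domains property the lab is said to be chasing.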
Still, the results remain to be seen. Skeptics abound. Some experts argue that “AGI” is a marketing term these days, slapped onto any AI that might be more flexible than average. Chollet’s reputation lends credence to the endeavor, but the field is full of pitfalls. Nonetheless, the energy is palpable. People are curious. Could Chollet’s lab be the one to do it?
Forget NVIDIA: NDEA’s Quest for Self-Improvement
Chip giant NVIDIA has dominated the AI hardware conversation for years. Its GPUs powered the deep learning revolution, training colossal models like GPT and Vision Transformers. But as VentureBeat reports, NDEA wants to shift the focus. They aren’t satisfied with hardware-driven leaps. Instead, they target a software ecosystem that keeps improving without upper limits. The key phrase: “with no bottlenecks in sight.” That’s audacious in a discipline typically shackled by resource constraints.
The main idea? They claim that “Deep Learning-Guided Program Synthesis” can bypass many scaling issues. Traditional deep learning demands enormous computational resources. But once a model saturates, the marginal gains shrink. In contrast, program synthesis could let the AI generate new, more efficient solutions as it learns. Think of it like a self-expanding library of knowledge. The AI calls up subroutines, modifies them, and rewrites its own code. It’s not fully autonomous—human oversight remains crucial—but the principle is that once the AI masters a building block, it can reapply it anywhere. No more re-training from scratch. No more major hardware leaps.
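As a rough picture of that self-expanding library, consider the toy sketch below: once a routine is solved, it is stored and composed into later solutions instead of being relearned. The class and method names are our own assumptions, not anything NDEA has described.

```python
# A toy picture of the "self-expanding library" idea: once a routine is
# solved, it is stored and becomes a building block for later solutions
# instead of being rediscovered from scratch. Names and routines here
# are our own assumptions, not NDEA's code.


class SkillLibrary:
    def __init__(self):
        self.skills = {}

    def add(self, name, fn):
        """Store a solved routine for later reuse."""
        self.skills[name] = fn

    def compose(self, *names):
        """Build a new skill by chaining existing ones."""
        fns = [self.skills[n] for n in names]

        def composed(x):
            for fn in fns:
                x = fn(x)
            return x

        return composed


library = SkillLibrary()
library.add("normalize", lambda text: text.strip().lower())
library.add("tokenize", lambda text: text.split())

# A later task reuses the earlier building blocks rather than starting over.
preprocess = library.compose("normalize", "tokenize")
library.add("preprocess", preprocess)

print(preprocess("  Deep Learning-Guided Program Synthesis  "))
# ['deep', 'learning-guided', 'program', 'synthesis']
```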
VentureBeat’s article suggests this could eventually challenge the GPU-centric status quo. Why buy more powerful hardware if your algorithmic approach is flexible enough to expand on the fly? That’s the vision, at least. Technology watchers are torn. On one hand, training large language models (LLMs) already costs tens of millions of dollars. On the other, software breakthroughs often outshine raw hardware upgrades over time. We’ve seen it with compilers, operating systems, and even quantum simulation. Maybe NDEA’s approach will highlight a new vector for optimization—one that’s not so dependent on the next best chipset.
The Spirit of Collaboration or Competition?
AGI is a lofty aspiration. It might take a massive, coordinated push among companies, universities, and nonprofits. NDEA’s roots are somewhat mysterious. The initiative seems to include a patchwork of AI researchers from academia and industry. Meanwhile, Chollet’s lab, though fresh, has star power. People wonder: will these two groups collaborate, or will they compete for the same prize?
We can’t say for certain. Neither side has released official statements confirming joint efforts. But rumor has it that NDEA’s “Deep Learning-Guided Program Synthesis” could integrate well with modular architectures. If so, Chollet’s approach might form a natural synergy. Imagine an AI that learns new programmatic solutions on the fly, seamlessly plugging them into a modular framework that orchestrates different tasks. The outcome: a system that iterates, reasons, and reorganizes itself at will. That’s beyond most current neural networks.
But let’s be real. The quest for AGI isn’t exactly a small, friendly race. When massive venture capital pours in, secrecy and competition intensify. Collaboration is nice, but so is building a moat around your technology. Both NDEA and Chollet’s startup want to be first—or at least be recognized as the pioneer that cracked the AGI code. If cooperation happens, it might come later, once fundamental breakthroughs are secure. For now, expect parallel paths with each camp pushing boundaries in its own way.
Balancing Data-Hungry Models with Symbolic Reasoning

One of AI’s biggest challenges is data hunger. Conventional machine learning demands loads of labeled examples. That’s expensive and time-consuming. Worse yet, machine learning models can break down when they see something that deviates significantly from training data. It’s like showing a toddler only one type of cookie and expecting them to recognize a wide array of pastries at first glance. They might be baffled by a croissant. They just haven’t seen anything like it.
That’s why program synthesis is appealing. Once the system learns a general “cookie-making” function, it can adapt to pastries of many forms with minimal new data. Similarly, Chollet’s lab is rumored to be emphasizing domain transfer. By focusing on building robust conceptual hierarchies, the new startup aims to reduce the reliance on brute-force data collection. It’s not about vacuuming the internet for every crumb of text or image. It’s about applying a form of reasoning that learns fundamental patterns, then reworks them for new tasks.
NDEA’s approach, if effective, could also mitigate the dreaded catastrophic forgetting phenomenon. Typical deep learning models forget old tasks when trained on new ones. By embedding knowledge into program-like structures, NDEA’s system might store stable representations that remain intact. That’s a game-changer. Imagine a system that learns to solve math problems, then picks up advanced geometry without losing arithmetic. Or, an AI that masters driving a car and later masters flying a drone, all without overwriting the original skill. That’s the dream. Whether it’s feasible at scale remains to be seen.
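A toy contrast makes the point. In the sketch below, a single shared learner overwrites its old behavior whenever it trains on something new, while a store of named programs keeps every earlier skill intact. This is an illustration under our own assumptions, not a description of NDEA’s mechanism.

```python
# A compact illustration of why program-like storage sidesteps
# catastrophic forgetting. The "monolithic" learner keeps one shared
# rule that each new task overwrites; the "program store" keeps each
# solved task as its own routine. A hypothetical toy, not NDEA's method.


class MonolithicLearner:
    """One shared rule: training on task B clobbers task A."""

    def __init__(self):
        self.rule = None

    def train(self, rule):
        self.rule = rule  # the new task overwrites the old one

    def solve(self, x):
        return self.rule(x)


class ProgramStore:
    """Each task becomes a named routine that stays intact."""

    def __init__(self):
        self.programs = {}

    def train(self, task, rule):
        self.programs[task] = rule

    def solve(self, task, x):
        return self.programs[task](x)


mono, store = MonolithicLearner(), ProgramStore()

mono.train(lambda x: x + 10)                # task A: add ten
store.train("add_ten", lambda x: x + 10)

mono.train(lambda x: x * 3)                 # task B: triple
store.train("triple", lambda x: x * 3)

print(mono.solve(4))               # 12 -- the add-ten skill was overwritten
print(store.solve("add_ten", 4))   # 14 -- the earlier skill is still intact
```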
The Question of Interpretability
Transparency in AI is essential. If we can’t understand how a model makes decisions, we might be wary of using it in high-stakes settings like healthcare or finance. Pure deep neural nets tend to be black boxes—unpacking their internal logic is complicated. But program synthesis offers a clue. Programs are inherently structured. They have syntax, semantics, and defined operations. If “Deep Learning-Guided Program Synthesis” truly merges symbolic representation with learned features, we might get a more interpretable AI.
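A small example shows why that matters: when a decision is an explicit sequence of named steps, the chain of reasoning can be printed and audited. The rule, the names, and the numbers below are invented purely for illustration.

```python
# Sketch of why synthesized programs can be more interpretable than raw
# network weights: the decision is an explicit sequence of named steps,
# so the exact chain of reasoning behind an output can be printed.
# The rule, the exchange rate, and the threshold are invented examples.


def trace_run(program, value):
    """Execute a program step by step and record what each step did."""
    trace = []
    for name, fn in program:
        new_value = fn(value)
        trace.append(f"{name}: {value!r} -> {new_value!r}")
        value = new_value
    return value, trace


# A hypothetical synthesized rule for flagging a transaction for review.
flag_transaction = [
    ("convert_to_eur", lambda amount: round(amount * 0.92, 2)),
    ("above_threshold", lambda eur: eur > 10_000),
]

result, steps = trace_run(flag_transaction, 12_500)
print("flagged:", result)
for step in steps:
    print(" ", step)
# flagged: True
#   convert_to_eur: 12500 -> 11500.0
#   above_threshold: 11500.0 -> True
```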
Analysts say that’s a big selling point for both regulators and industry. Governments worldwide are scrutinizing AI’s ethical dimensions. Companies face lawsuits if their algorithms discriminate. By adopting a more interpretable framework, NDEA and Chollet’s lab could preempt some of these concerns. They might be able to show the chain of reasoning behind a decision, not just the final output. That might pave the way for faster adoption in critical areas like drug discovery, where transparent reasoning is paramount. Patients don’t want a blind guess. They want to know why a specific drug or treatment is chosen.
Still, interpretability often imposes constraints. Symbolic logic is precise but can be rigid. Meanwhile, deep learning is flexible but opaque. The skill lies in merging the two. That’s the high-wire act these new projects are attempting. If they succeed, it might break new ground. If they fail, we’ll learn that bridging that gap is trickier than we thought.
Economies of Scale and the Future
Let’s talk money. AGI research costs a fortune. DeepMind, OpenAI, Meta, and others have spent billions on supercomputers and top-tier researchers. The question is: can new players like NDEA or a startup led by Chollet compete at that scale? Perhaps they have wealthy backers. Or maybe they’ll rely on cheaper, more efficient approaches that outsmart brute-force tactics. It’s worth noting that behind every major AI milestone—be it GPT-level language or AlphaGo-level gameplay—there’s been staggering financial investment.
NDEA’s proposition suggests we might not need to rely on infinite data or monstrous hardware. If the system can bootstrap itself—learning to build better solutions over time—maybe we can avoid that ballooning cost. Similarly, if Chollet’s lab focuses on modular, small-scale intelligence that grows organically, the overhead could be less than that of training a monolithic transformer on an endless supply of text. That’s speculation, but it aligns with the rhetoric in the articles. They’re painting a picture of a new wave in AI that isn’t anchored to big data or big GPU farms. It’s a wave that tries to replicate human-level ingenuity at the core.
But we can’t discount the potential for brand alliances or partnerships. NVIDIA might still have a role by producing specialized hardware for symbolic-deep integration. Or perhaps cloud providers like AWS will pitch in. We see synergy everywhere in tech. The real question is which model or approach will dominate in the next five years.
The Industry’s Reaction
AI veterans are intrigued but guarded. Some hail these new projects as the next big leap. Others see hype overshadowing substance. Merging symbolic reasoning with neural networks isn’t new—it’s been attempted many times under various labels (neuro-symbolic AI, hybrid AI, etc.). The difference now might be timing. We have more powerful computing, better algorithms, and a deeper understanding of deep learning’s strengths and weaknesses.
The excitement is reminiscent of the mid-2010s deep learning surge. Then, a flurry of breakthroughs in image and speech tasks led to an explosion in research, funding, and commercial products. Are we on the brink of another wave of such breakthroughs? Possibly. But sometimes, optimism runs ahead of reality. The crucial test is whether these methods can succeed in real-world scenarios, not just toy benchmarks. The next 12 to 24 months will be revealing.
Meanwhile, job listings at NDEA hint at a multidisciplinary approach. They’re hiring programming language experts, deep learning specialists, cognitive scientists, and even philosophers of mind. Chollet’s new startup is also attracting attention in the job market. People from top AI labs at Google, Meta, and Stanford are rumored to be jumping ship. That suggests both organizations are building formidable teams. If they can stay cohesive, we might witness breakthroughs that alter the AI landscape dramatically.
Public Hopes and Fears

AGI raises perennial concerns. If we create an intelligence on par with humans, what then? Will it displace jobs en masse? Could it spiral out of control? Skeptical voices warn that forging ahead without robust safety measures is perilous. Others argue that AGI’s benefits, from healthcare to climate modeling, are too significant to ignore. The public conversation is a mix of excitement and dread, reminiscent of the hype around self-driving cars or nuclear fusion.
NDEA’s official statements are relatively optimistic. They maintain that an AI capable of self-improvement, guided by deep learning, can be safely stewarded with the right moral and technical scaffolding. They highlight transparent program structures as a built-in safety feature. Chollet has also been vocal about AI safety, advocating a careful approach to advanced systems. His track record suggests a thoughtful stance on ethics, though details about his startup’s guidelines remain scarce.
Watchdogs are paying attention. Regulatory bodies in the EU, U.S., and Asia want to ensure responsible development. It’s not just about controlling labor displacement; it’s also about guarding against potential misuse by malicious actors. As new AI labs spring up, regulators scramble to keep pace. The success or failure of NDEA’s approach could shape future policies. If their “lifelong learning” AI proves stable and controllable, it might become a model for the industry. If not, it could spark new calls for regulation or even moratoria on certain lines of research.
The Technical Crux: Can They Deliver?
All these ideas sound fantastic in press releases. But do they work? Historically, bridging deep learning and symbolic reasoning has proven complex. Each method has strengths but also deep-seated limitations. Symbolic methods struggle with nuance and high-dimensional data. Neural nets excel there but falter when tasked with explicit rule-following or compositionality. The synergy is anything but trivial.
NDEA’s solution is to let deep learning guide the search for programs. If the network can prune the search space intelligently, the overhead might be manageable. Meanwhile, the actual reasoning is handled by interpretable program-like structures. In principle, that could combine the best of both worlds. Yet in practice, we have to see how they handle noise, domain shifts, and adversarial examples.
Chollet’s approach might differ. He’s hinted at a more “modular” neural architecture that retains certain symbolic capacities. His Abstraction and Reasoning Corpus (ARC) benchmark probes whether a system can solve puzzles requiring compositional thinking from only a handful of examples, a test that standard networks have struggled with. It’s a puzzle-based approach that leans on meta-learning and few-shot learning. The new lab presumably expands on that. If they can unify these concepts into a robust system, it might genuinely close the gap to general intelligence.
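For a flavor of what ARC-style tasks look like, here is a stripped-down sketch: a few input/output grid pairs, one candidate transformation, and a held-out test input. Real ARC tasks are far richer, and a real solver would search over many candidate rules; the rule here is hand-coded for illustration.

```python
# A tiny ARC-flavored example: a task is given as a few input/output
# grid pairs, and the system must infer the transformation and apply it
# to a new input. Real ARC tasks are far richer; this only shows the
# few-shot format, with one hand-coded candidate rule standing in for a
# learned or synthesized one.


def mirror_horizontal(grid):
    """Candidate rule: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]


# A few demonstration pairs (the "training" examples for one task).
demonstrations = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 5, 0], [0, 1, 2]], [[0, 5, 5], [2, 1, 0]]),
]

# A real solver would search over candidate rules; here we just verify one.
rule = mirror_horizontal
assert all(rule(inp) == out for inp, out in demonstrations)

# Apply the inferred rule to a held-out test input.
test_input = [[7, 0, 0], [0, 7, 0]]
print(rule(test_input))  # [[0, 0, 7], [0, 7, 0]]
```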
Experts keep a cautious eye on real-world demos. If either group can show progress on standard benchmarks—like bAbI tasks, CLEVR, or more advanced reasoning sets—that would validate their claims. If they can tackle tasks unsolvable by other AI systems, that’d be a watershed moment.
Integration and the Road to AGI
Let’s assume these efforts bear fruit. What might an integrated AI ecosystem look like? Perhaps a system that ingests data from multiple domains: language, vision, robotics, and more. It abstracts core concepts and stores them in symbolic “programs” that it can debug, revise, and reuse. Then, when confronted with a new domain—say, astronomy—the AI leverages its existing knowledge. It might adapt the knowledge to stargazing tasks with minimal additional training.
Imagine the potential for scientific research. The AI scans scientific papers, extracts ideas, and merges them with its existing code library. It hypothesizes new experiments, verifies them, and learns from results. That’s a loop of constant improvement. It’s reminiscent of human scientists, except scaled to machine speed. That’s what NDEA’s “with no bottlenecks in sight” vision references. The self-improving cycle never stops. Each new discovery seeds the next. If there are no fundamental blocks, the system’s capabilities grow exponentially.
Chollet’s new startup presumably converges on a similar notion. An AGI that solves puzzles might also solve real-world problems. Over time, it transitions from puzzle-solving to complex engineering tasks, creative writing, or even autonomous research. The boundary between “smart AI” and “general AI” blurs. We don’t yet know how soon or how robustly that might happen. But many in the community sense that we’re approaching an inflection point.
The Human Element
As we chase smarter systems, we can’t forget the human dimension. AI, especially if it edges toward general intelligence, impacts society profoundly. Employment patterns might shift. Education systems might adapt. Our own creativity might be challenged. Some see a dystopian scenario in which human labor becomes obsolete. Others imagine a utopia of abundance, where AI handles drudgery, leaving humans free to explore, create, and collaborate.
NDEA’s vision emphasizes “learning like humans,” which suggests a possible alignment with how we naturally think and explore. That could also mean a closer synergy between human scientists and AI colleagues. Chollet has long been an advocate of thoughtful design in AI tools, ensuring they empower rather than replace. Yet the transition is uncertain. It will take nuance, governance, and cultural adaptation to ensure these powerful systems serve us all equitably.
It also raises philosophical questions. If an AI truly learns like a human, does that imply consciousness? Probably not by default. But as these systems become more advanced, we might face new ethical dilemmas. Should they have rights? Responsibilities? Or are they just advanced software? These debates have simmered in AI circles for decades. With tangible progress on the horizon, the debates might soon break into mainstream discourse.
Conclusion: A New Dawn or a Mirage?
The arrival of NDEA’s “Deep Learning-Guided Program Synthesis” and François Chollet’s AGI-focused lab sets the stage for a potentially seismic shift in AI. Many are excited, some are skeptical, but everyone is watching. On one side, NDEA envisions an AI that synthesizes its own programs, learning continually like a human. On the other, Chollet’s lab pushes for a modular approach to real general intelligence, free from the narrow constraints of current deep nets.
The synergy of these ideas—neural networks plus symbolic reasoning—could redefine what we think AI can accomplish. If they succeed, the door to genuine AGI might open faster than we ever expected. If they fail, it’ll be another lesson in the stubborn complexity of intelligence. In any case, the conversation is shifting. We’re no longer satisfied with specialized models that excel at one task but fail at another. We want intelligence that’s flexible, interpretable, and constantly learning.
Is it a new dawn? Possibly. Some developments in AI turn out to be mirages. But the momentum behind NDEA, combined with the star power of Chollet, suggests something big is brewing. The next few years could be pivotal. We might look back on 2025 as the year a new chapter began in the AI saga. Or we might just see incremental advances. Only time will tell.
Until then, the race for AGI continues, fueled by ambition, curiosity, and an unshakable belief that real intelligence can be built. Keep watching. We’re all part of this story, whether we’re researchers, end-users, or just fascinated spectators. One way or another, the game is on.