There is a principle, barely a decade old, that quietly explained why public discourse felt so broken for so long. Why the wrong people kept winning arguments. Why seductive, sweeping theories seemed impervious to criticism. Why intellectually rigorous counterpoints vanished into the void while elegant falsehoods spread across continents. The principle was Brandolini’s Law — and until very recently, it was one of the most reliable rules in modern intellectual life.
It may no longer be.

The Law That Explained Everything
In January 2013, an Italian software developer named Alberto Brandolini was doing two things at once: reading Daniel Kahneman’s Thinking, Fast and Slow and watching a televised political debate between former Italian Prime Minister Silvio Berlusconi and journalist Marco Travaglio. The collision of Kahneman’s insights about cognitive asymmetry and the spectacle of misinformation flowing unchecked through live television produced a single, crystallizing tweet.
“The amount of energy needed to refute bullshit,” Brandolini wrote, “is an order of magnitude bigger than to produce it.”
He called it the Bullshit Asymmetry Principle. The internet called it Brandolini’s Law. And it spread — because everyone who had ever tried to argue with a conspiracy theorist, fact-check a demagogue, or rebut a bestselling economic thesis that misused its source data recognized the truth in it immediately.
The law is deceptively simple. Producing a false but confident claim requires almost no cognitive investment. A bullshitter, as philosopher Harry Frankfurt famously analyzed in On Bullshit, operates entirely outside a concern for truth. He is not lying, precisely — he simply doesn’t care whether what he says is true or false. That indifference is, paradoxically, a massive competitive advantage. The person committed to truth must gather evidence, contextualize it, anticipate counterarguments, check sources, and communicate it clearly. The bullshitter just has to say something compelling.
What made the law feel so totalizing was the mechanism underlying it: there are infinitely more ways to be wrong about something than to be right. Every true fact can be distorted in an unlimited number of ways. The bullshitter has infinite raw material to work with. The truth-teller has exactly one target to hit.
The Asymmetry That Won the Culture Wars
For roughly fifty years, this asymmetry shaped the terrain of public intellectual life — particularly in the humanities and social sciences. Not in equal-opportunity ways. It consistently favored those with the leisure to produce expansive, seductive frameworks over those with the rigor to dismantle them.
Consider what it actually cost to write a serious rebuttal to Pierre Bourdieu’s Distinction — his 1979 sociological magnum opus arguing that aesthetic taste is a mechanism of class reproduction. The thesis is elegant, sweeping, and emotionally compelling. Engaging with it seriously requires mastery of the empirical sociology of multiple countries across decades, a working knowledge of philosophy of science, and the willingness to challenge a figure of enormous symbolic prestige within French academia. For a junior scholar, doing so was career suicide. For a senior one, it was a distraction from more productive work. And even if you wrote the rebuttal — who would read three hundred pages of dense methodological critique against three pages of exciting theory?
The same dynamic played out again, louder, with Thomas Piketty’s Capital in the Twenty-First Century, published in France in 2013 and in English in 2014. The book became a cultural event — topping bestseller lists, winning prizes, receiving the kind of breathless praise usually reserved for novels. Paul Krugman called it “the most important economics book of the year — and maybe of the decade.” It sold over 2.5 million copies by 2017.
And it had serious empirical problems.
Economist Phillip Magness, in a detailed empirical critique published in the Journal of Private Enterprise, documented what he and co-author Robert Murphy described as “insufficient citation of sources,” “inadequate explanations of methodologies,” cherry-picking within data presentations, and at least one case of what appeared to be intentional manipulation of a key chart used to support Piketty’s core “r>g” thesis. Mark Warshawsky at the Mercatus Center found the projection of a 670 percent world capital/income ratio by 2100 to rely on “simplistic and unrealistic assumptions.” Financial Times data journalist Chris Giles documented spreadsheet errors and methodological inconsistencies.
These critiques were real, substantive, and rigorously documented. And yet the book’s cultural impact was barely dented. The reason was precisely Brandolini’s asymmetry at work: it took dozens of hours of academic labor across multiple institutions to produce refutations that most readers of the original book never encountered. The cultural moment belonged to the thesis, not the critique.

What AI Just Changed
Here is the shift that most commentators haven’t fully absorbed yet: the marginal cost of refutation has dropped toward zero.
This is not a metaphor. It is a structural change in the economics of intellectual labor.
A modern large language model — equipped with retrieval capabilities and trained on a vast corpus of peer-reviewed literature, primary sources, and empirical datasets — can, in minutes, produce a draft rebuttal of a theoretical claim that would previously have required weeks of concentrated expert work. It can cross-reference citations, flag methodological inconsistencies, surface counter-evidence from fields the original author ignored, and organize the entire response into a clear, readable format. It is not perfect. But it is fast. And it scales.
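The workflow described above can be sketched in miniature. In this toy Python version, crude keyword overlap stands in for a real retriever, and a simple verdict rule stands in for an actual model call; the function names (`retrieve`, `assess_claim`), the corpus entries, and the verdict logic are all illustrative assumptions, not any real fact-checking API.

```python
# Toy sketch of a retrieval-grounded fact-check pipeline.
# Keyword overlap stands in for a real retriever, and the verdict
# rule stands in for an actual language-model call; all names here
# are illustrative, not a real library API.

CORPUS = [
    ("giles-ft-2014", "Chris Giles documented spreadsheet errors in Piketty's wealth data."),
    ("magness-murphy-2015", "Magness and Murphy criticized Piketty's source citations and chart construction."),
    ("unrelated", "Bourdieu argued that aesthetic taste reproduces class distinctions."),
]

def retrieve(claim: str, corpus, k: int = 2):
    """Rank corpus snippets by crude word overlap with the claim."""
    claim_words = set(claim.lower().split())
    scored = []
    for doc_id, text in corpus:
        overlap = len(claim_words & set(text.lower().split()))
        scored.append((overlap, doc_id, text))
    scored.sort(reverse=True)
    # Keep only snippets that share at least one word with the claim.
    return [(doc_id, text) for overlap, doc_id, text in scored[:k] if overlap > 0]

def assess_claim(claim: str, corpus):
    """Return a verdict plus the citations that ground it."""
    evidence = retrieve(claim, corpus)
    if not evidence:
        return {"verdict": "insufficient evidence", "citations": []}
    return {"verdict": "supported", "citations": [doc_id for doc_id, _ in evidence]}

result = assess_claim("Piketty's chart construction drew criticism", CORPUS)
```

The design point is the grounding step: the verdict is only ever issued alongside the retrieved sources that justify it, which is the property the Zurich research below found protective.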
Research published in Frontiers in Artificial Intelligence by Dorian Quelle and Alexandre Bovet at the University of Zurich found that LLM agents, when equipped with retrieval tools and the ability to cite sources, demonstrate “enhanced prowess” in fact-checking tasks compared to models without context. GPT-4, in their framework, outperformed GPT-3.5 on the PolitiFact dataset — though the researchers were careful to note that “accuracy varies based on query language and claim veracity” and that “caution is essential due to inconsistent accuracy.”
The caveat matters. We will return to it.
But even in its imperfect form, the AI-assisted refutation represents something qualitatively new. For the first time since the printing press democratized the production of text, we have a tool that democratizes not just writing — but argumentative infrastructure. The difference is significant. Gutenberg gave everyone the ability to publish. Large language models give everyone the ability to publish rigorously, with citations, structured counterarguments, and empirical support — even if they lack a PhD.
The Collapse of Credential Protection
There is a specific form of intellectual authority that Brandolini’s asymmetry protected: what we might call credential leverage. The idea that a title — professor, director, public intellectual — was itself a form of protection against challenge. That producing a well-crafted theory was automatically worth more than any criticism of it, because the cost of criticism was so prohibitively high that only the most credentialed could afford it. And those inside the same credentialed circles had social incentives not to challenge each other.
As the Brandolini’s Law Wikipedia entry notes, the challenge of refuting bullshit “does not come just from its time-consuming nature, but also from the challenge of defying and confronting one’s community.” Academics who spotted serious flaws in dominant theoretical frameworks often stayed silent — not because they were wrong, but because the social and professional cost of being right publicly was too high. Brandolini’s asymmetry was the mechanism, but intellectual cowardice — rational, strategic intellectual cowardice — was the result.
What happens when the cost of the refutation drops to near zero?
The calculation changes entirely. When any informed person with an internet connection and access to a capable language model can produce a sourced, well-structured rebuttal in under an hour, the credential leverage evaporates. The theory no longer protects itself through the sheer volume of effort required to challenge it. It has to actually be right.
This is not a theoretical prediction. We are watching it happen in real time across social media platforms where contested ideas now regularly meet within-hours pushback from people who, a decade ago, would never have had the research infrastructure to mount a credible challenge.
A Case Study: Virtue Signaling Under Pressure
One of the subtler downstream effects of Brandolini’s asymmetry was the sustainability of a particular social strategy: performative intellectual alignment. The practice of signaling good values, correct affiliations, and progressive credentials without ever being held to serious account for the internal coherence of those positions.
This strategy was effective precisely because the cost of checking it was too high. Pointing out that a prominent advocate of wealth redistribution maintained a private fortune, or that a celebrated environmental theorist had a carbon footprint larger than a small village, required investigation that no one had time to do systematically.
AI-assisted research changes this. What once required a team of journalists with institutional backing now requires a motivated individual with a laptop and a capable language model. Contradiction becomes visible. The gap between stated values and documented behavior narrows to a search query.
This is not about partisan point-scoring. It applies symmetrically. The same tools that expose left-wing hypocrisy expose right-wing demagoguery. The same tools that challenge progressive shibboleths challenge conservative ones. The advantage goes not to any particular ideology but to the side whose claims are actually true.

The Counter-Argument You Cannot Ignore
Here is where intellectual honesty requires a check.
The argument that AI kills Brandolini’s asymmetry in favor of truth assumes that the same technology will be used primarily to refute falsehood. But the 2024 World Economic Forum Global Risks Report ranked misinformation and disinformation as the most dangerous short-term global risk precisely because LLMs have “enabled an explosion in falsified information” and removed the necessity of specialized skills to create synthetic content.
In other words, the same technology that allows an individual to refute a bad theory in fifteen minutes also allows an individual to produce a bad theory — complete with fabricated citations, realistic-sounding data, and coherent prose — in five. The asymmetry may not simply flip; it could intensify in new directions.
The Springer Nature review of LLM fact-checking systems (2026) makes clear that hallucination remains a fundamental challenge: LLMs “tend to produce linguistically consistent but factually inaccurate or entirely fictional text.” A refutation produced by an LLM that itself hallucinates key citations achieves nothing except the illusion of rigor.
The question is not whether AI makes refutation cheaper. It does, demonstrably. The question is whether it makes accurate refutation cheaper faster than it makes plausible-sounding false refutation cheaper. And the answer to that is not yet known.
What research does suggest — from Quelle and Bovet’s work at the University of Zurich — is that LLMs equipped with real-time retrieval and source-citation frameworks perform substantially better than those without. The architecture of grounded, cited reasoning appears to provide meaningful protection against the worst failure modes. But it requires the user to want accurate conclusions, not just winning ones.
This is the crux. AI does not automatically favor truth. It favors argumentative efficiency. Whether that efficiency is deployed in service of reality depends entirely on the person deploying it.
The New Landscape of Intellectual Accountability
If the optimistic version of this thesis holds — even partially — the consequences for public intellectual culture are significant.
The first is that speed of rebuttal becomes a competitive variable in a way it never was before. In the pre-AI era, a book could dominate a news cycle for six months before serious academic criticism appeared in journals that the general public never read. Now, a well-reasoned rebuttal can appear within days on platforms with audiences the size of major newspapers. The temporal window in which a false theory can dominate public consciousness without serious challenge has narrowed dramatically.
The second is that the labor cost of intellectual honesty has decreased. The PhD student who noticed methodological problems in a senior colleague’s work but lacked the time or resources to document them properly now has a tool that can help compile evidence, structure arguments, and identify relevant literature in a fraction of the time. This does not eliminate the career risk of speaking up — but it reduces the effort cost of having something credible to say.
The third is that public discourse may become more legible. One of the worst effects of Brandolini’s asymmetry was not just that false theories won; it was that they created a kind of epistemic fog in which audiences couldn’t distinguish serious scholarship from theatrical confidence. When refutation was expensive, the appearance of authority was often sufficient. When refutation becomes cheap, being right becomes more important than sounding authoritative.
What This Demands of Us
None of this happens automatically. The optimistic scenario requires something from the people using these tools.
It requires the willingness to use AI as a research assistant for truth rather than a generation engine for preferred conclusions. There is a profound difference between asking a language model to “refute this theory” — which will produce a refutation regardless of whether it’s accurate — and asking it to “identify the strongest empirical challenges to this theory, with citations.” The first is motivated reasoning at machine speed. The second is something closer to intellectual seriousness at machine speed.
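The contrast between those two framings can be made concrete. In this sketch, `build_prompt` is a hypothetical helper (not any library's API), and the two framings are paraphrased from the distinction drawn above: one demands a refutation, the other asks for evidence and leaves room for the theory to survive.

```python
# Illustrative contrast between a conclusion-seeking prompt and an
# evidence-seeking one. build_prompt is a hypothetical helper, not a
# real API; the framings paraphrase the distinction in the text.

def build_prompt(theory: str, mode: str) -> str:
    """Frame the same theory for motivated vs. truth-seeking inquiry."""
    if mode == "motivated":
        # Demands a refutation regardless of whether one is warranted.
        return f"Refute this theory: {theory}"
    if mode == "truth_seeking":
        # Asks for the strongest challenges, with sources, while
        # leaving room for the answer "the theory largely holds up."
        return (
            f"Identify the strongest empirical challenges to this theory, "
            f"with citations, and note where the evidence supports it: {theory}"
        )
    raise ValueError(f"unknown mode: {mode}")

motivated = build_prompt("r > g drives rising inequality", "motivated")
careful = build_prompt("r > g drives rising inequality", "truth_seeking")
```

The difference is not cosmetic: the first framing guarantees an adversarial output, while the second makes "the evidence supports it" an admissible answer, which is what separates machine-speed motivated reasoning from machine-speed inquiry.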
It requires epistemic humility about AI’s own limitations. As researchers at the University of Zurich noted, accuracy in LLM fact-checking “varies based on query language and claim veracity.” An AI confident in an incorrect conclusion is not an improvement on a human confident in an incorrect conclusion.
And it requires understanding that Brandolini’s asymmetry is not fully dead — it has shifted terrain. It now operates most powerfully not in the production of elaborate academic theories, which AI can challenge, but in the production of simple emotional narratives. A compelling personal story, a vivid image, a tribal rallying cry — these travel faster than any structured argument, regardless of how cheaply that argument can now be produced. AI has not solved the problem of motivated reasoning. It has made explicit reasoning cheaper.
The World That Is Coming
In 1845, the French economist Frédéric Bastiat articulated a version of Brandolini’s asymmetry before Brandolini was born: “In very few words they can announce a half-truth; and in order to demonstrate that it is incomplete, we are obliged to have recourse to long and dry dissertations.” He was describing the fundamental disadvantage of the person committed to accuracy in a world where eloquent oversimplification was always a competitive threat.
What Bastiat could not have imagined is a tool that compresses the “long and dry dissertation” into something almost as fast as the “half-truth.”
That tool now exists. It is imperfect, limited by hallucination, dependent on user intent, and still being actively improved. The research is ongoing — systematic reviews published into 2025 and 2026 continue mapping both its promise and its failures. But its directional effect on the economics of public reasoning is already visible to anyone paying attention.
For the first time in the history of public discourse, the barrier to entering a rigorous argument is approaching the barrier to entering a confident one. Credentials still matter. Expertise still matters. Original research still matters enormously. But the gatekeeping function of those credentials — the ability to win arguments not by being right but simply by being too expensive to refute — is eroding.
That is an extraordinary development. It will disrupt careers built on intellectual leverage. It will force ideas to compete more directly on the basis of their actual connection to reality. It will be messy, contested, and frequently abused.
But for those who have always believed that true ideas should win — not because of who holds them, but because of how well they correspond to the world — it is a moment worth paying attention to.
Brandolini’s Law may not be dead yet. But for the first time since 2013, it is seriously, credibly wounded. And the weapon that wounded it is sitting in your browser tab.