Artificial intelligence is reshaping how information spreads online, both for better and for worse. In this interview, Professor Noah Giansiracusa, author of Robin Hood Math, explains how AI tools are accelerating the creation and circulation of fake news, the dangers this poses to democratic societies, and why raising public awareness and strengthening institutional accountability are now more urgent than ever.
Recently, you published a book titled Robin Hood Math, which discusses the use of algorithms to create fake news. Could you explain how this happens, and what can be done to raise awareness among social media users about this threat?
I think it helps to consider not just the creation but the entire life cycle of fake news, and how AI and other algorithmic systems influence each stage.
The first step is creation, where AI plays two major roles. First, it can generate highly convincing false media, such as deepfake videos. While this has been possible for years, rapid progress in generative AI has made the tools far easier to access and the results far more convincing. Second, even for text-based fake news (fabricated articles of the kind people have always been able to write themselves), AI dramatically speeds up the process. Instead of producing a handful of articles manually, or relying on human troll farms to create a few hundred, AI can instantly generate thousands at little to no cost.
Next is dissemination. Fake news has no impact unless people actually see it. Here, AI again comes into play. Convincing deepfakes are more likely to go viral, and the ability to mass-produce fake content increases the odds that some of it will spread widely. It becomes a numbers game. AI also assists in running armies of bot accounts that post, share, and engage with fake content, tipping the scales of visibility on social media.
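To get a feel for that numbers game, consider a rough back-of-the-envelope sketch in Python. The per-item chance of going viral used below is purely an assumed illustrative figure, not a measured one.

# If each fake item independently has a small chance of going viral,
# producing many items makes at least one success very likely.
# The 0.1% per-item probability is an assumed, illustrative number.
p_viral = 0.001
for n_items in (10, 100, 1_000, 10_000):
    p_at_least_one = 1 - (1 - p_viral) ** n_items
    print(f"{n_items:>6} items -> {p_at_least_one:.1%} chance at least one goes viral")

Under these assumptions, ten items give roughly a 1% chance that something catches on, a thousand give about 63%, and ten thousand make it a near certainty, which is exactly the economics that cheap mass production of fake content exploits.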
AI is also being used defensively, such as in efforts to detect deepfakes and dismantle bot networks. While detection tools for fake articles and videos remain imperfect, AI-powered moderation has seen some success in targeting these organized bot operations.
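As a concrete illustration of the bot-network side, here is a minimal Python sketch of two coordination signals that moderation systems commonly look for: near-duplicate text posted across accounts, and posting cadences too regular to be human. The function names, thresholds, and sample data are illustrative assumptions, not a description of any platform's actual system.

import difflib
from statistics import pstdev

def near_duplicates(posts, threshold=0.9):
    # Flag pairs of posts whose text is almost identical (copy-paste amplification).
    pairs = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            ratio = difflib.SequenceMatcher(None, posts[i], posts[j]).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs

def looks_scheduled(post_times, max_jitter_seconds=5):
    # Flag an account whose gaps between posts barely vary (machine-like cadence).
    gaps = [t2 - t1 for t1, t2 in zip(post_times, post_times[1:])]
    return len(gaps) >= 3 and pstdev(gaps) <= max_jitter_seconds

# Hypothetical sample data for illustration only.
posts = [
    "Breaking: candidate X caught in shocking scandal, share before it's deleted!",
    "Breaking news: candidate X caught in shocking scandal, share before it's deleted!",
    "Lovely weather for the farmers' market this weekend.",
]
print(near_duplicates(posts))                       # flags the first two posts
print(looks_scheduled([0, 600, 1200, 1800, 2400]))  # True: perfectly regular gaps

Real systems combine many such signals with machine-learned models and network analysis, but even this toy version shows why coordinated behavior is easier to detect than the truthfulness of any individual post.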
As for raising awareness, education is essential but has its limits. Teaching people about the life cycle of fake news, showing examples, and encouraging them to double-check sources can help. However, individuals alone cannot carry this burden—we need stronger institutional support, greater responsibility from tech platforms, and more community-based fact-checking tools like “community notes.”
Do some social media platforms deliberately spread fake news to achieve specific goals? Why don’t these platforms seriously combat this phenomenon?
This is a subtle but important question. Some outlets do deliberately spread fake news, whether for profit (monetizing the traffic it attracts) or for political influence. For example, critics have put Russia’s RT in this category.
But for most major platforms, it’s less about intentionally spreading fake news and more about how moderation policies are set—and how those policies shape what content thrives. Take X (formerly Twitter), for example. Elon Musk argues that the previous moderation approach unfairly stifled free speech and conservative voices. Many liberals counter that his current hands-off approach has allowed trolling and misinformation to flourish.
In both cases, the issue is not platforms deliberately producing fake news, but rather how their rules and enforcement (or lack thereof) indirectly amplify or suppress certain content—and thereby benefit some groups over others.
What are the negative effects of the spread of fake news with the help of algorithms and artificial intelligence?
AI is designed to automate and accelerate processes. That’s wonderful in areas like drug discovery, but when applied to fake news, it’s deeply troubling.
Misinformation has existed for centuries—since the printing press, or even earlier. What’s different now is the sheer speed, scale, and reach enabled by AI. Falsehoods can be created faster, spread more widely, and cost virtually nothing to produce. This puts immense strain on our already fragile information ecosystem—the very foundation democratic societies rely on to function.
How can we detect the algorithms that contribute to the creation of fake news? What role does AI play in this process?
As I mentioned earlier, AI can be used both offensively and defensively. The same technology that creates fake news at scale can also be deployed to detect it—spotting patterns in bot activity, deepfake artifacts, or suspicious posting behaviors.
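One frequently discussed example of the defensive side is scoring how statistically predictable a piece of text looks to a language model, since machine-generated prose tends to read as more predictable than human writing. The sketch below uses the open-source GPT-2 model via the Hugging Face transformers library; the cutoff value is an arbitrary illustrative number, and in practice this kind of perplexity heuristic is weak and easily evaded, which is part of the arms-race problem described next.

# Rough heuristic: text a language model finds very predictable (low perplexity)
# is somewhat more likely to be machine-generated. Weak signal, easily evaded.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    # Average per-token perplexity of the text under GPT-2.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Hypothetical usage: lower scores lean "machine-written". The threshold of 40
# is an arbitrary illustrative value, not a validated cutoff.
article = "The city council announced a sweeping new infrastructure plan on Tuesday."
score = perplexity(article)
print(f"perplexity = {score:.1f} -> {'suspect' if score < 40 else 'less suspect'}")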
The challenge is that this is an arms race: as AI-generated content improves, detection tools must constantly evolve to keep up. For now, the most effective countermeasure has been using AI to dismantle coordinated bot networks, since reliably identifying fake content itself remains out of reach.