The Deepfake Dilemma
In the last few years, AI-generated deepfakes have impersonated world leaders, disrupted political campaigns and seeded viral misinformation—often faster than the truth can catch up. And with global elections at stake, this isn’t just a tech issue. It’s a threat to democracy itself.
To better understand the path forward, we spoke with members of the AI Think Tank—an expert group of leaders working on the cutting edge of artificial intelligence (AI), cybersecurity, data integrity and enterprise software. Their consensus: Fighting deepfakes requires more than clever algorithms. It demands layered strategies that blend detection, education and ethical governance.
“No single entity can combat this threat [deepfakes] alone.”
The Deepfake Challenge: Misinformation at Machine Speed
Synthetic media powered by generative AI can now mimic voices, replicate facial expressions and simulate convincing video—all with alarming precision. That power, while revolutionary in some contexts, has also become a weapon of disinformation.
A report from Insikt Group examined 82 deepfakes targeting public figures in 38 countries in a single year, noting that “Deepfakes aimed at financial gain, election manipulation, character assassination, and spreading non-consensual pornography are on the rise.”
“Deepfakes threaten election integrity and public discourse in our digital world. To counter this, we need a comprehensive strategy,” says Anand Santhanam, Global Principal Delivery Leader at Amazon Web Services. “The organizations that lead on this issue will not only protect their reputations but help preserve the foundation of informed democratic processes.”
“Advanced AI-driven detection tools should be deployed. Blockchain-based verification can help authenticate legitimate content.”
AI Detection Tools: Staying Ahead of the Threat
One of the most pressing challenges is knowing what to watch for and staying ahead of evolving deepfake techniques. Detection tools must be smarter, faster and more adaptive.
“Platforms must implement real-time content moderation, flagging and labeling AI-generated media while providing contextual transparency,” explains Suri Nuthalapati, Data and AI Leader, Americas at Cloudera. “Advanced AI-driven detection tools should be deployed to analyze facial inconsistencies, metadata and voice patterns. Blockchain-based verification can help authenticate legitimate content, reducing the spread of misinformation.”
That’s affirmed by the World Economic Forum, which notes that in the era of deepfakes, “Blockchain technology has the power to restore trust and confidence in the digital ecosystem.”
Yet detection technology is only as effective as its ability to evolve. “These systems must be continuously refined as adversarial techniques evolve,” Santhanam says.
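To make the layered approach concrete, here is a minimal sketch in Python of the kind of multi-signal screening Nuthalapati describes. The three analyze_* functions are hypothetical placeholders for real detectors (a face-forensics classifier, an audio anti-spoofing model, a metadata consistency check); only the score-combination logic is shown, and the weights and threshold are illustrative assumptions.

```python
# Minimal sketch of a layered deepfake-screening pipeline.
# The analyze_* functions are hypothetical placeholders for real
# detectors; each returns a suspicion score in [0, 1].

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    score: float   # combined suspicion score in [0, 1]
    flagged: bool  # True if content should be labeled or reviewed
    signals: dict  # per-detector scores, for contextual transparency

def analyze_face(media_path: str) -> float:
    """Placeholder: score facial inconsistencies (blinking, lighting, warping)."""
    return 0.0  # a real system would run a trained classifier here

def analyze_voice(media_path: str) -> float:
    """Placeholder: score voice-cloning artifacts in the audio track."""
    return 0.0

def analyze_metadata(media_path: str) -> float:
    """Placeholder: score missing or contradictory capture metadata."""
    return 0.0

def screen(media_path: str, threshold: float = 0.6) -> ScreeningResult:
    # Weight the signals; in practice the weights and threshold would be
    # tuned on labeled examples and refined as adversarial techniques evolve.
    signals = {
        "face": analyze_face(media_path),
        "voice": analyze_voice(media_path),
        "metadata": analyze_metadata(media_path),
    }
    weights = {"face": 0.5, "voice": 0.3, "metadata": 0.2}
    score = sum(weights[k] * v for k, v in signals.items())
    return ScreeningResult(score=score, flagged=score >= threshold, signals=signals)

if __name__ == "__main__":
    print(screen("upload.mp4"))
```

Returning the per-detector signals alongside the combined score supports the contextual transparency experts call for: users and moderators can see why a piece of content was flagged, not just that it was.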
Authentication, Provenance and Blockchain Verification
To verify the origin of digital content, many leaders join Suri Nuthalapati in advocating blockchain-based authentication and watermarking technologies. These create a digital “paper trail” for original content, helping platforms and users distinguish real from fake.
“We are currently working on a project that aims to solve some of these problems through the use of blockchain technology and AI,” says Egbert von Frankenberg, CEO of Knightfox App Design. “The aim is to offer a new alternative to image verification through a combined verification mechanism.”
Gordon Pelosse, EVP at AI CERTs, also champions this approach: “Detection systems together with blockchain verification methods and watermarking technology can help monitor content origins.”
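As an illustration of that digital “paper trail,” here is a minimal sketch using only Python’s standard library: content is fingerprinted with SHA-256 and recorded in a hash-chained log, the core mechanism a blockchain-backed registry builds on. This is a sketch under simplifying assumptions; production systems add digital signatures, distributed consensus and richer provenance metadata.

```python
# Minimal sketch of a hash-chained provenance log: each record commits
# to the previous one, so tampering with history is detectable.

import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """SHA-256 fingerprint of a media file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

class ProvenanceLog:
    def __init__(self):
        self.entries = []

    def register(self, path: str, publisher: str) -> dict:
        # Chain each entry to the previous one via its hash.
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "content_hash": fingerprint(path),
            "publisher": publisher,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self, path: str) -> bool:
        """True if this exact file was registered (bit-for-bit match)."""
        digest = fingerprint(path)
        return any(e["content_hash"] == digest for e in self.entries)

if __name__ == "__main__":
    with open("original.jpg", "wb") as f:  # stand-in media file
        f.write(b"\x89fake-image-bytes")
    log = ProvenanceLog()
    log.register("original.jpg", publisher="Example Newsroom")
    print(log.verify("original.jpg"))  # True: bit-for-bit match
```

A verbatim copy of the registered file verifies; any edit, however small, changes the fingerprint and fails. That is the tamper evidence watermarking and blockchain registration aim to provide at internet scale.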
“Teach digital literacy in schools. People need to recognize deepfakes before real harm is done.”
Media Literacy: Another Missing Layer
Technology can detect, but people must discern. That’s why nearly every expert emphasizes the importance of public education and digital literacy.
“Teaching digital literacy in schools and social media campaigns can make a big difference,” says Sarah Choudhary, CEO of Ice Innovations. “People need to be educated on how to recognize deepfakes so they don’t fall for misinformation.”
Aravind Nuthalapati, Cloud Technology Leader at Microsoft, echoes the sentiment: “Promote digital literacy through public education, empowering users to identify manipulation.”
From labeling policies to school curriculum reform, public awareness is the connective tissue between detection and trust.
Deepfake Policies, Standards and Industry Collaboration
Tech solutions must also be backed by stronger platform policies and coordinated governance efforts.
“Introduce clear identification methods for AI-produced media while funding educational programs,” advises Pelosse. “Enhance election content moderation practices while working alongside governments and AI experts to establish ethical guidelines.”
Jim Liddle, Chief Innovation Officer of AI at Nasuni, adds, “Organizations should contribute to establishing industry standards… and disclose when content is AI generated.” Transparency, he argues, builds trust and resilience across the ecosystem.
And while regulation is an eventual necessity, Nikhil Jathar, CTO of AvanSaber, suggests the early focus should be on proactive mitigation. “Clear labeling of AI-generated content, swift removal of harmful deepfakes and collaboration on industry-wide standards are crucial.”
“Platforms and media organizations must take a multilayered approach.”
A Future Built on Collective Accountability
The complexity of the deepfake threat demands a collaborative solution. No single company, government or platform can manage the issue alone.
“Establish cross-industry intelligence sharing networks,” urges Santhanam. “No single entity can combat this threat alone.”
Roman Vinogradov, VP of Product at Improvado, proposes crowdsourced and gamified approaches to verification: “Deploy decentralized verification networks enabling collective fact-checking, transparent labeling standards and global collaboration frameworks.”
And as David Obasiolu, Co-Founder of Vliso, reminds us, “It’s critical that platforms and media organizations take a multilayered approach, blending technical solutions, stronger policy enforcement and real investment in public education.”
Actionable Strategies for Media Leaders Fighting Fakes
Based on guidance from AI Think Tank members, here are six key steps platforms and media organizations should consider when working to guard against deepfakes:
- Implement real-time detection tools that analyze visual anomalies, voice discrepancies and metadata.
- Use blockchain or watermarking systems to verify original content and flag tampering.
- Clearly label AI-generated content and provide users with contextual transparency (a minimal labeling sketch follows this list).
- Invest in digital literacy campaigns, both through public outreach and educational institutions.
- Create cross-industry coalitions for intelligence sharing, ethical AI practices and threat tracking.
- Define internal standards and disclosure policies for how AI is used across content creation and moderation.
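For the labeling step above, here is a minimal sketch of what a machine-readable AI-disclosure label might look like. The field names are illustrative assumptions rather than a published standard; real deployments would more likely adopt an emerging industry standard such as C2PA content credentials.

```python
# Minimal sketch of a machine-readable AI-disclosure label attached to
# published content. Field names are illustrative, not a formal standard.

import json
from datetime import datetime, timezone

def make_disclosure(content_id: str, ai_generated: bool,
                    tools: list[str], reviewed_by_human: bool) -> str:
    label = {
        "content_id": content_id,
        "ai_generated": ai_generated,         # was generative AI used at all?
        "generation_tools": tools,            # which models/tools, if disclosed
        "human_reviewed": reviewed_by_human,  # editorial oversight flag
        "labeled_at": datetime.now(timezone.utc).isoformat(),
        "label_version": "1.0",
    }
    return json.dumps(label, indent=2)

print(make_disclosure("article-2024-0117", True,
                      ["text-to-image model"], reviewed_by_human=True))
```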
Deepfakes: A Test of Our Digital Future
The rise of deepfakes presents one of the most urgent challenges of the AI era. It tests not only the limits of technology but the strength of our social and democratic systems.
As the members of the AI Think Tank illustrate, no single tool or tactic will be enough. Platforms must invest in adaptive technologies, governments must establish flexible yet enforceable standards, and the public must be equipped with the literacy to question what they see and hear.
Because in the race between truth and deception, those who stay passive will lose the plot—and the public’s trust along with it.