Can Government Be AI-Native? Elsa, Public Trust and the Future of GovTech
The FDA’s debut of Elsa—a generative artificial intelligence (AI) system designed to assist with internal scientific reviews—marks a turning point in the U.S. government’s relationship with AI. Rather than simply adopting AI tools to speed up existing workflows, Elsa signals an ambition to reimagine how government agencies operate at a systemic level.
We asked members of the AI Think Tank—leaders responsible for enterprise AI adoption, governance and innovation—to weigh in. What opportunities and risks do they see as AI is more deeply embedded into public infrastructure? How might this shape trust, transparency and citizen engagement over the long term?
“The real opportunity lies in AI’s ability to eliminate bureaucratic friction that makes people feel like government doesn’t work for them.”
A Shift from Bureaucracy to Intelligence
Jim Liddle, Chief Innovation Officer of Data Intelligence and AI at Nasuni, calls Elsa the “government’s first serious attempt to become an AI-native institution.” For Liddle, “it’s not just about faster drug reviews—it’s more about reimagining how government could work in the age of AI.”
“The real opportunity lies in AI’s ability to eliminate bureaucratic friction that makes people feel like government doesn’t work for them,” he says. But Liddle warns that trust could erode quickly if AI decisions become too opaque.
“The risk isn’t technical failure—it’s what I refer to as the ‘black box’ problem at scale,” he explains. “People already distrust decisions they can’t understand, and AI has the potential to amplify this.” Agencies have a choice, according to Liddle: Use AI to increase transparency and accountability, or hide behind its complexity.
Efficiency Gains, But Not Without Guardrails
Rodney Mason, Head of Marketing and Brand Partnerships at LTK, believes Elsa has clear upside. “AI can enhance operational efficiencies by automating labor-intensive tasks, such as document reviews, data analysis and regulatory reviews,” he says.
This, he argues, lets human experts focus on high-value decisions that benefit the public. But Mason is cautious. “Efficiency gains must not come at the cost of due diligence in high-stakes scenarios such as approving medical innovations where errors could ripple into public health crises.”
Mason advocates for robust ethical frameworks and clear accountability measures to ensure AI serves the public interest, not just internal performance metrics.
Smarter Review, Greater Scrutiny
Anand Santhanam, Global Principal Delivery Leader at AWS, emphasizes the potential for pattern recognition at scale. “AI can spot patterns across thousands of documents that human reviewers might miss,” he says. That could lead to earlier detection of risks and faster approval of treatments.
However, Santhanam also notes the importance of perception. “When people learn that AI influenced a drug approval decision, they’ll want to know exactly how and why,” he says. Transparency is critical. “The key is positioning AI as a powerful research assistant, not as a replacement for human judgment in matters of public safety.”
“One careless misstep in a review could slow approvals—or worse, risk patient safety.”
Accountability Versus Speed
Divya Parekh, Founder of The DP Group, is excited about what Elsa represents—but equally cautious. “Elsa isn’t just another app—it’s a chance to reclaim human bandwidth,” she says. “Scientific reviewers can offload rote tasks and reclaim hours for deep analysis and stakeholder dialogue.”
Yet Parekh is wary of blind spots. “One careless misstep in a review could slow approvals—or worse, risk patient safety,” she warns. Her prescription: Pair AI with “immutable audit trails” and “transparent rationales for every AI suggestion. Do that,” she says, “and government becomes not just faster, but more accountable—and worthy of our confidence.”
Proactive Intelligence, Not Reactive Systems
Roman Vinogradov, VP of Product at Improvado, sees Elsa as a pivot toward smarter governance. This is about moving “from reactive governance to proactive intelligence,” he says.
Vinogradov is optimistic that AI can accelerate everything from policy evaluation to scientific analysis—but only if agencies commit to explainability. “If algorithms make decisions without clear auditability, public trust erodes,” he says. “Transparent AI isn’t just good policy—it’s the cornerstone of digital democracy.”
Supporting, Not Replacing, Human Judgment
Suri Nuthalapati, Data and AI Leader at Cloudera, believes AI should augment, not replace, government decision-making. “The biggest opportunity lies in boosting efficiency—streamlining reviews, accelerating insights and reducing bottlenecks in complex regulatory workflows,” he says.
But he underscores the risk of overreliance, noting that opaque models in high-stakes environments like public health or safety can erode trust fast. His advice: Build auditable systems, maintain human oversight and disclose data sources. “Done right, AI can modernize government without compromising integrity.”
“AI can help us achieve the aims of the right and the left in this country. We just need to try.”
Transformative Potential—If Designed Right
Peter Guagenti, CEO of EverWorker, believes AI like Elsa has the power to help government “do more with more”—a reversal of the long-standing mantra of doing more with less.
“AI makes it possible for every worker to access capabilities that they lacked the skills to do for themselves,” he says. “In an era with a massive budget deficit and rising inflation, AI can finally change the calculus of the desire for better government services while still reducing taxpayer costs.”
Guagenti also sees the potential bipartisan benefits of AI. “AI can help us achieve the aims of the right and the left in this country. We just need to try.”
Why Infrastructure and Governance Should Go Hand in Hand
Nikhil Jathar, CTO of AvanSaber, notes that Elsa’s real benefit may lie in accelerating decisions at scale. “AI can analyze thousands of clinical trial documents in hours versus weeks of manual review,” he says, noting that this compresses timelines for potentially life-saving innovations.
Still, Jathar cautions that the risks—like bias, opacity or undermined accountability—aren’t theoretical. If implementation lacks explainability, public confidence will suffer. He advocates for “mandatory algorithmic auditing, public disclosure of AI decision frameworks and human oversight for critical determinations. This ensures AI enhances rather than replaces human judgment in governance.”
Human-in-the-Loop: A Key AI Safeguard
Vishal Bhalla, CEO and Founder of AnalytAIX, believes the most powerful benefit of AI in government is reducing cycle times—whether it’s for drug approvals or complaint resolution. “Faster, data-driven decisions can improve health outcomes, lower the cost of products and make government more responsive,” he says.
But Bhalla stresses that trust hinges on oversight. “That’s why human-in-the-loop systems are critical to maintaining oversight, ethics and public trust,” he says. “AI must always serve people.”
Rebuilding—Or Undermining—Public Trust
David Obasiolu, Principal Consultant and AI Security and Governance Engineer at Vliso AI, sees Elsa as a moment of reckoning. “When systems like this start informing public decisions, trust becomes the benchmark, not just speed,” he says.
He believes governments must establish fairness, transparency and oversight as default design principles. “Without that foundation, we risk building fast but shallow systems that undermine long-term legitimacy.”
A New Standard for Digital Democracy
Gordon Pelosse, EVP of Partnerships and Enterprise Strategy at AI CERTs, calls Elsa a symbol of what’s possible—but also what’s at stake. Faster processes and scaled expertise can redefine how government works, he says.
But Pelosse emphasizes that AI used in high-stakes decisions—from healthcare to safety—demands more than performance. “Without transparency, explainability and accountability, public trust can break down,” he warns, referring to the debut of Elsa as “a crucial test.”
“Ultimately, the future of GovTech rests on balancing rapid innovation with transparent governance.”
Balancing Innovation with Oversight
Aravind Nuthalapati, Cloud Technology Leader for Data and AI at Microsoft, says, “Integrating generative AI like Elsa into GovTech can dramatically boost operational efficiency, accelerate informed decision-making and proactively highlight critical insights from vast data pools.”
Yet Nuthalapati is clear-eyed about the stakes. “Bias, data privacy concerns and reduced transparency” remain critical risks. The future of GovTech isn’t just automation—it’s explainable, auditable systems that uphold public trust, he explains. He emphasizes the need for robust frameworks that require human oversight and open access to decision-making processes. “Ultimately, the future of GovTech rests on balancing rapid innovation with transparent governance.”
Takeaways for AI Leaders and Policymakers
- AI in government should focus on efficiency and transparency.
- Tools like Elsa are most effective when paired with clear human oversight.
- Avoid the “black box” problem—AI decisions must be explainable.
- Use AI to eliminate bureaucracy, not accountability.
- The most trusted systems will offer both speed and ethical clarity.
Risks and Rewards of Tools Like Elsa
Elsa may be the first of many generative AI tools to reshape how government operates, but its success will depend on more than its algorithms. Leaders must design AI systems not just for accuracy or speed but for auditability, trust and public understanding. The real challenge isn’t making AI work for government—it’s making sure it works for the people government serves.