
Building Trust in AI: Strategies Leaders Can Use Now

As AI innovation accelerates faster than regulation, senior leaders face rising pressure to build trust with customers, regulators and internal teams. Members of the Senior Executive AI Think Tank share how they embed transparency, safety and accountability into fast-moving development cycles, the trade-offs they accept or reject, and the governance practices that allow organizations to innovate confidently without sacrificing integrity.

by AI Editorial Team on December 1, 2025

As artificial intelligence advances at breakneck speed, the question of trust has become more urgent than ever. How do senior leaders ensure that innovation doesn’t outpace safety—and that every stakeholder, from customers to regulators and employees, retains confidence in rapidly evolving AI systems? Members of the Senior Executive AI Think Tank—a curated group of seasoned AI leaders and ethics experts—are confronting this challenge head-on. With backgrounds at Microsoft, Salesforce, Morgan Stanley and beyond, these executives are uniquely positioned to share practical, real-world strategies for building trust even in regulatory gray areas.

And their insights come at a critical moment: A recent global study by KPMG found that only 46% of people worldwide are willing to trust AI systems, despite widespread adoption and optimism about AI’s benefits. That “trust gap” is more than just a perception issue—it’s a barrier to realizing AI’s full business potential. Against this backdrop, the Think Tank’s lessons are not theoretical, but actionable frameworks for leading organizations in a world where regulation lags, public concern mounts and the stakes for getting trust wrong have never been higher.

Human-Centered Design as the Foundation of Trust

For Vishal Bhalla, CEO and Founder of AnalytAIX, trust in AI begins with a simple principle: Systems must remain “always in service of the human.” He emphasizes real-time human-AI collaboration during and after training as the fastest path to credibility. 

Bhalla notes that this approach not only strengthens alignment but also accelerates adoption. “We have gained a fair amount of traction and even won awards when our clients see that it is indeed real,” he explains, especially when subject matter experts stay involved long enough to feel confident stepping in to retrain or redirect the system as needed. Trust is earned through transparency and participation—not promises.

Interpretable Systems and Open Dialogue Build Early-Stage Trust

Monojit Banerjee, Lead in the AI platform organization at Salesforce, believes trust begins with visibility into how AI systems behave. He stresses the importance of interpretability tools such as Sparse Autoencoders (SAE) to provide what he calls a “more scientific approach” to explaining model behavior—an antidote to vague, “vibe-based” descriptions that leave stakeholders uneasy. “Using technologies such as SAE and other interpretability tools… helps with both rapid development and building trust in AI systems,” he says.

Banerjee pairs technical clarity with transparent communication, noting that open dialogue through blogs and whitepapers helps stakeholders navigate emerging ethical norms. For Banerjee, interpretability is not a luxury—it’s a requirement for moving fast responsibly.
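Banerjee does not describe his team’s exact setup, but the sparse autoencoder approach he references generally works by reconstructing a model’s internal activations through an overcomplete, sparsity-penalized hidden layer, so that individual hidden features are easier to inspect and label. The sketch below is a minimal, illustrative PyTorch example under that assumption; the dimensions, coefficients and the random tensors standing in for captured activations are all hypothetical.

```python
# Illustrative only: a minimal sparse autoencoder (SAE) of the kind used in
# interpretability work. Real pipelines train on activations captured from a
# specific layer of a production model; random tensors stand in for them here.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)   # overcomplete: d_hidden > d_model
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse feature code
        reconstruction = self.decoder(features)
        return reconstruction, features

d_model, d_hidden, l1_coeff = 256, 1024, 1e-3   # hypothetical sizes
sae = SparseAutoencoder(d_model, d_hidden)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)

for step in range(100):
    batch = torch.randn(64, d_model)             # stand-in for captured activations
    reconstruction, features = sae(batch)
    # The reconstruction term keeps the code faithful to the original activations;
    # the L1 term pushes most features to zero so the active ones are easier to label.
    loss = nn.functional.mse_loss(reconstruction, batch) + l1_coeff * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```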

Transparency and Cross-Functional Alignment as Speed Multipliers

For Pradeep Kumar Muthukamatchi, Principal Cloud Architect at Microsoft, trust starts with clear communication tailored to each audience—customers, regulators and internal teams. “I prioritize clear communication of AI capabilities, limitations and risks,” he says, noting that responsible AI principles are embedded into design reviews and rewarded culturally. His philosophy challenges the idea that governance slows innovation: “Speed is important, but trust compounds, so I choose to move fast with alignment, not in isolation.”

Similarly, Prashant Kondle, Digital/AI Transformation Specialist at Ivis Technologies, argues that rapid innovation demands structured accountability. He has developed an AI Risk Assurance Framework to create a shared foundation across teams. “It establishes clear guardrails for assessing model integrity, bias, explainability and readiness for governance,” he explains, “so every team operates with a shared understanding of ethical and regulatory expectations.”

“Embed ethics, compliance and risk assessment into the AI development lifecycle, rather than as afterthoughts.”


– Gordon Pelosse, Executive Vice President of Partnerships and Enterprise Strategy at AiCerts


Ethics Embedded in the AI Lifecycle

Gordon Pelosse, Executive Vice President of Partnerships and Enterprise Strategy at AiCerts, says trust in AI stems from building systems that are understandable and accountable from the ground up. He emphasizes the importance of documenting model limitations and uncertainties: “Ensure stakeholders can understand how and why our AI systems make decisions.”

Pelosse ties this clarity to operational discipline. He stresses that organizations must “embed ethics, compliance and risk assessment into the AI development lifecycle, rather than as afterthoughts,” and be willing to slow down for validation, red-team testing or bias audits. Taking shortcuts, he says, is not a trade-off worth entertaining; trust is the prerequisite for innovation that lasts.

“If your engineers and your ethics team aren’t in the same room, you’re already behind.”


– Divya Parekh, Founder of The DP Group


Responsible Speed and Aligned Acceleration

The pressure to innovate quickly can often conflict with ethical responsibilities. Divya Parekh, Founder of The DP Group, cautions: “Trust isn’t a press release. It’s built through transparent communication, co-created guardrails and alignment between intent and impact. If your engineers and your ethics team aren’t in the same room, you’re already behind.” She calls this “aligned acceleration”—moving fast while protecting credibility and integrity.

Dileep Rai, Manager of Oracle Cloud Technology at Hachette Book Group (HBG), highlights practical steps to operationalize this principle. “I anchor AI innovation in transparent data use, human-in-the-loop oversight for sensitive decisions, and ongoing risk reviews tied to business value. Speed without conscience creates fragility. Speed with clarity creates trust.”
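Rai does not detail his implementation, but the human-in-the-loop pattern he describes usually reduces to a simple routing rule: low-risk, high-confidence predictions flow through automatically, while sensitive or uncertain ones are queued for a person. A minimal Python sketch under that assumption follows; the threshold, field names and example decisions are illustrative, not drawn from his systems.

```python
# Illustrative human-in-the-loop gate: route sensitive or low-confidence AI
# decisions to a reviewer instead of acting on them automatically.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str        # what the decision is about, e.g. an order or account
    action: str         # what the model recommends
    confidence: float   # model confidence in [0, 1]
    sensitive: bool     # flagged by policy, e.g. credit, safety, personal data

CONFIDENCE_FLOOR = 0.90  # hypothetical threshold agreed with risk owners

def route(decision: Decision) -> str:
    """Return 'auto' to execute immediately or 'review' to queue for a human."""
    if decision.sensitive or decision.confidence < CONFIDENCE_FLOOR:
        return "review"
    return "auto"

print(route(Decision("order-1042", "approve_refund", 0.97, sensitive=False)))  # auto
print(route(Decision("account-77", "close_account", 0.99, sensitive=True)))    # review
```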

Stress-Testing for Real-World Reliability

For Sarah Choudhary, CEO of Ice Innovations, trust is not earned through assurances but through rigorous testing. She integrates internal red team reviews into every innovation sprint to “test for bias, misuse and explainability gaps.” This constant scrutiny ensures that performance, ethics and safety evolve together rather than sequentially—a key differentiator at a time when AI risks are becoming more complex and intertwined.
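Choudhary’s red-team checklist isn’t public, but one common way to make such reviews repeatable is to encode a handful of checks as tests that run every sprint and can fail a release. The sketch below is one illustrative example, not her process: it computes a simple demographic-parity gap on stand-in predictions and blocks the build if the gap exceeds an agreed limit.

```python
# Illustrative sprint gate: a simple fairness check that can fail a build.
# The metric (demographic parity gap) and threshold are placeholders; real
# red-team reviews cover bias, misuse and explainability far more broadly.
MAX_PARITY_GAP = 0.05  # hypothetical limit agreed with the risk/ethics team

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Stand-in predictions (1 = approved) for two groups in a holdout set.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

gap = demographic_parity_gap(predictions)
if gap > MAX_PARITY_GAP:
    raise SystemExit(f"Bias gate failed: parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}")
print(f"Bias gate passed: parity gap {gap:.2f}")
```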

Choudhary also views transparency as a competitive advantage, especially when formal regulation lags behind technology. She leans on voluntary disclosure and cross-industry collaboration to keep systems accountable and trustworthy. “The fundamental trade-off isn’t speed versus safety; it’s between short-term wins and long-term credibility,” she says.

Governance Built Directly Into the Architecture

For Raghu Para of Ford Motor Company, trust is not a layer added to AI systems but a structural component designed from the outset. He builds transparency, explainability and auditability directly into model workflows so “regulators can trace decisions, customers can see fairness and employees can trust the process.” This embedded governance ensures that every decision can be interrogated—not just the outputs but the reasoning behind them.

Para practices what he calls “responsible scaffolding,” which includes sandbox testing, bias checks and model cards to maintain clarity even when innovation moves faster than regulation. Speed matters, but clarity ensures AI “doesn’t just move fast, but also moves right.”
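Para’s “responsible scaffolding” isn’t spelled out in code, but one of its ingredients, model cards, can be as lightweight as a structured record that ships with every model version. The following sketch is a minimal illustration; the fields and example values are hypothetical rather than any standard schema or Ford practice.

```python
# Illustrative model card: a structured record shipped alongside each model
# version so limitations and evaluation context stay attached to the artifact.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    out_of_scope_use: str
    training_data_summary: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="demand-forecaster",            # hypothetical model
    version="2.3.1",
    intended_use="Weekly demand forecasts for inventory planning.",
    out_of_scope_use="Pricing or credit decisions about individuals.",
    training_data_summary="Three years of anonymized order history.",
    evaluation_metrics={"mape": 0.081, "bias_audit": "passed"},
    known_limitations=["Degrades on products with under 8 weeks of history."],
)

# Publishing the card as JSON next to the model makes it easy to audit later.
print(json.dumps(asdict(card), indent=2))
```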

Transparent Communication as a Cultural Norm

For Roman Vinogradov, VP of Product at Improvado, trust is built through consistent, proactive communication across teams and customers. He recommends leaders “engage stakeholders regularly, sharing objectives, challenges and how you address safety concerns,” emphasizing that early disclosure prevents misalignment later. Vinogradov believes that responsible experimentation—being open about risks while pushing boundaries—is essential when regulation lags.

He also notes that trust often requires accepting slower rollout timelines in exchange for more rigorous testing or user feedback loops. For Vinogradov, transparency is not a box-checking exercise but a cultural operating model.

Accountable Velocity and Distributed Responsibility

For Chandrakanth Lekkala, Principal Data Engineer at Narwal.ai, the core challenge is designing systems that maintain transparency at the speed of deployment, not through slow, periodic reviews. He calls this principle “accountable velocity,” which shifts the focus from “Can we build this?” to “Who bears the downstream costs?” This shift requires distributed accountability across teams rather than relying on centralized oversight alone. “Trust comes from making consequences visible and immediate,” he explains, “not aspirational principles, but enforceable mechanisms where stakeholders share accountability.”
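Lekkala doesn’t describe what those enforceable mechanisms look like in practice; one lightweight illustration, assuming nothing about his actual systems, is a decision ledger that refuses to record an automated action unless a named team owns its consequences. The owner names and decision types below are hypothetical.

```python
# Illustrative decision ledger: every automated action is logged with a named
# accountable owner and a timestamp, so consequences can be traced to a team
# immediately rather than reconstructed after the fact.
from datetime import datetime, timezone

LEDGER: list[dict] = []

# Hypothetical mapping from decision type to the team that owns its outcomes.
ACCOUNTABLE_OWNERS = {
    "credit_limit_change": "risk-engineering",
    "content_takedown": "trust-and-safety",
}

def record_decision(decision_type: str, subject: str, outcome: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_type": decision_type,
        "subject": subject,
        "outcome": outcome,
        # Raises KeyError if no team has claimed ownership of this decision type.
        "accountable_owner": ACCOUNTABLE_OWNERS[decision_type],
    }
    LEDGER.append(entry)
    return entry

print(record_decision("credit_limit_change", "account-9001", "increased"))
```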

“Trust isn’t earned through principles. It’s stress-tested through breakage.”


– Bhubalan Mani, Lead for Supply Chain Technology and Analytics at GARMIN


Reversibility and Crisis Readiness

Bhubalan Mani, Lead for Supply Chain Technology and Analytics at GARMIN, believes trust is proven through failure—and recovery. “Trust isn’t earned through principles. It’s stress-tested through breakage,” he says, emphasizing adversarial red teaming and crisis simulation as operational necessities. Mani builds “reversibility architectures” that ensure every deployment includes kill switches and same-day rollback paths.

He also refuses to launch systems without predefined shutdown criteria, even under revenue pressure. “The trade-off I accept: higher upfront friction establishing reversibility architectures,” he says. But for him, speed without reversibility is an unacceptable risk.
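Mani doesn’t detail Garmin’s implementation, but the kill-switch half of a “reversibility architecture” is often just a runtime flag checked before every model call, with a deterministic fallback behind it. The sketch below is a deliberately simplified illustration under that assumption; the flag store, function names and fallback logic are placeholders.

```python
# Illustrative kill switch: every prediction path checks a runtime flag and
# falls back to a deterministic rule when the model is disabled. In production
# the flag would live in a config service so it can be flipped without a deploy.
KILL_SWITCH = {"recommendation_model_enabled": True}  # stand-in flag store

def model_predict(features: dict) -> str:
    return "personalized_offer"        # placeholder for a real model call

def rule_based_fallback(features: dict) -> str:
    return "default_offer"             # safe, well-understood behavior

def predict(features: dict) -> str:
    if not KILL_SWITCH["recommendation_model_enabled"]:
        return rule_based_fallback(features)
    try:
        return model_predict(features)
    except Exception:
        # Fail closed: any model error degrades to the fallback, not an outage.
        return rule_based_fallback(features)

print(predict({"customer": "c-123"}))                  # personalized_offer
KILL_SWITCH["recommendation_model_enabled"] = False    # operator flips the switch
print(predict({"customer": "c-123"}))                  # default_offer
```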

Trust as a Growth Strategy

Aditya Vikram Kashyap, Vice President of Firmwide Innovation at Morgan Stanley, argues that trust is not an optional byproduct of AI—it is the foundation for sustainable growth. He emphasizes that all systems must be auditable and understandable: “If a model’s outcome cannot be explained in clear language, it does not ship.” For him, speed alone is meaningless if the consequences of deployment cannot be fully owned.

Kashyap rejects the perceived trade-off between speed and integrity. Instead, he argues that “trust paired with ambition scales.” He believes that the real marker of leadership in AI is not rapid deployment, but the confidence that clients, regulators and employees can rely on the system without hesitation.

How to Start Building Trust Now

  • Build trust through human-in-the-loop collaboration. Engage stakeholders actively in training and deployment so AI aligns with human intent and expectations.
  • Use interpretable AI and open dialogue. Employ tools like Sparse Autoencoders and share ethical reasoning through blogs or whitepapers to create trust and consensus on ethical norms.
  • Prioritize transparency and communication. Clearly explain AI capabilities, limitations and risks to each stakeholder to build confidence and alignment across teams.
  • Embed transparent guardrails across the AI lifecycle. Establish measurable standards for bias, explainability and governance so teams operate with shared ethical expectations.
  • Document model limitations and uncertainties. Make ethics, compliance and risk assessment integral to the AI development lifecycle, not an afterthought.
  • Adopt “aligned acceleration” for responsible speed. Move fast without compromising integrity, ensuring engineering and ethics teams collaborate closely from the start.
  • Apply principle-based constraints and responsible velocity. Prototype quickly, but enforce fairness, auditability and human oversight to prevent fragile or unsafe deployments.
  • Stress-test AI systems with red teams and reversibility measures. Simulate crises and implement kill switches or rollback protocols to ensure trust through reliability under pressure.
  • Embed governance directly into AI architectures. Include sandbox testing, bias checks and model cards so decisions are transparent, explainable and auditable.
  • Engage stakeholders proactively. Regularly share objectives, challenges and safety measures to foster a collaborative culture that allows responsible experimentation.
  • Establish accountable velocity and distributed responsibility. Make consequences visible and immediate to ensure stakeholders share ownership of outcomes.
  • Institutionalize adversarial testing and crisis simulations. Build “muscle memory” for failure and enforce shutdown criteria to prevent irreversible errors.
  • Treat trust as a strategic growth advantage. Systems that are auditable, explainable and reliable accelerate adoption, credibility and long-term business impact.

Moving Fast, Moving Right

As AI continues to advance faster than regulation and societal norms, building trust across customers, employees and regulators is not an option but a strategic imperative. Insights from the Senior Executive AI Think Tank show that transparency, ethical guardrails, rigorous testing and human-in-the-loop collaboration are essential to moving fast without compromising credibility or safety. Trust is not a one-time effort; it compounds over time through consistent, accountable and auditable practices that align innovation with stakeholder expectations.

Looking ahead, organizations that embed these principles into their AI strategy will gain a competitive edge, fostering adoption, long-term credibility and sustainable impact. The real opportunity lies in advancing with speed while remaining responsible, ensuring the AI-driven future we create is both ambitious and trustworthy—a future that stakeholders can rely on with confidence.

