AI tools are proliferating across enterprises at unprecedented speed. Yet implementation does not guarantee adoption. According to a McKinsey report on generative AI adoption, while organizations are investing heavily, many struggle to translate experimentation into sustained value. The gap is rarely technical—it is behavioral.
Members of the Senior Executive AI Think Tank, a curated group of experts in enterprise AI, generative AI and machine learning strategy, agree: whether AI becomes a trusted decision-support system—or a tool employees quietly resist—depends largely on the signals sent by the C-suite.
Executives shape consequence structures, model risk tolerance, determine measurement standards and define what success looks like. In short, employees learn how to treat AI by watching how leaders treat it.
Below, Think Tank members share what C-suite leaders most often get wrong—and what they must do differently to ensure their organizations gain real, measurable value from AI.
Signals Over Systems: The Trust Equation
Pawan Anand, Associate Vice President of Communications, Media and Technology at Persistent Systems, an AI-led digital engineering and enterprise modernization partner, sees a consistent pattern across large-scale deployments.
“AI adoption by non-executive employees depends less on the model and more on the signals the C-suite sends,” Anand says. “If leaders frame AI as cost-cutting or surveillance, teams comply publicly and resist privately.”
In contrast, he notes, trust grows when leaders model its use transparently.
“If they model its use, admit its limits and stay accountable for outcomes, trust grows,” Anand says.
For Anand, ensuring positive value requires structural clarity.
“Leaders ensure positive value by defining clear human ownership, rewarding thoughtful overrides and measuring decision quality—not tool usage,” he explains. “When it is safer to use AI than to ignore it, adoption becomes real.”
Make Judgment Visible
Andre Shojaie, an executive leader in AI governance and digital strategy and Founder of HumanLearn, argues that accountability modeling is more powerful than training budgets.
“What most shapes employee adoption is not training or access to tools,” Shojaie says. “It is how the C-suite models accountability when AI is involved.”
He warns against executives using algorithmic recommendations as a shield.
“When leaders treat AI recommendations as something they can hide behind, teams learn to resist quietly or ignore the system,” he explains. “When leaders openly explain how AI informed a decision and where human judgment overruled it, trust follows.”
In his view, AI earns legitimacy as a decision-support system when leaders make judgment visible.
“Teams gain real value when AI clarifies what matters, sharpens reasoning and still leaves responsibility with humans,” Shojaie says.
Celebrate the Good Catch
Mohan Krishna Mannava, Data Analytics Leader at Texas Health, believes leaders often undermine trust by focusing too narrowly on efficiency.
“To gain employee trust, the C-suite must move away from efficiency talk, which employees often interpret as job cuts,” he says.
More importantly, he argues, executives must normalize failure.
“Leaders influence teams most by how they react to AI failure,” Mannava says. “If the C-suite only rewards speed, employees will quietly resist or blindly follow flawed data.”
Instead, he urges leaders to celebrate overrides.
“True trust is built when leaders celebrate ‘the good catch,’ publicly rewarding an employee who overrides the AI to prevent a mistake,” he says. “Stop measuring how often AI is used. Instead, measure the specific instances where a human’s context made the AI’s answer better.”
Model It Yourself
Jason Barnard, Founder and CEO of Kalicube, a premium digital branding consultancy that leverages billions of data points to shape online perception, emphasizes executive visibility.
“My experience: Getting to grips with AI tools myself and sharing those experiences—good and bad—helps immensely,” Barnard says. “Teams follow what leaders actually do.”
He offers a practical analogy: “Treat AI like a new hire who needs context, correction and patience. Measure quality and celebrate the employees who catch AI errors and improve processes.”
Barnard also expands the lens beyond internal adoption.
“Public AI systems—Google, ChatGPT, Perplexity, Claude, Gemini—are interacting with your prospects 24/7,” he says. “They are effectively untrained employees representing your brand right now. Are they getting your story right?”
For Barnard, the leadership question extends outward.
“Who’s training the AI already selling for you—or more likely against you?”
Address the Survival Component
Jim Liddle, an entrepreneur and enterprise AI strategist with more than 25 years building and scaling companies, says leaders underestimate psychological resistance.
“For many, AI resistance is a survival problem,” Liddle says. “You cannot ask someone to enthusiastically work or train their own replacement and expect genuine buy-in.”
He believes clarity about role evolution is critical: “People need to know their job is safe and how their role evolves—not disappears—using AI.”
He also highlights underinvestment in training.
“Execs budget for the AI software and expect adoption to be free,” he says. “Employees can easily hit a roadblock and abandon use of the AI tools before anyone notices.”
Sustained investment, he argues, must continue “over weeks, months and even into the next year” to pay off in employee buy-in.
Redefine Success Metrics
Pradeep Kumar Muthukamatchi, Principal Cloud Architect at Microsoft, says intent defines perception.
“The single biggest influence is the intent behind deployment,” he says. “If the C-suite prioritizes efficiency over empowerment, teams will view AI as a threat and quietly sabotage it.”
He advocates for involvement and explainability.
“Trust is only earned when leaders involve employees early, allowing them to ‘teach’ the AI,” he says. “Employees cannot trust a ‘black box’ that contradicts their intuition without rationale.”
Ultimately, success must be reframed.
“Measure decision quality, not just speed,” he says. “When leaders visibly use AI to augment their own strategic thinking, it signals that the technology is a tool for elevation, not replacement.”
Build Consequence Architecture
Bhubalan Mani, Lead of Supply Chain Technology and Analytics at GARMIN, frames adoption as a systems issue.
“Employees don’t resist AI. They resist accountability vacuums where using AI becomes riskier than ignoring it,” he says.
He points to research showing that in companies where leadership drives the AI strategy, 62% of employees are fully engaged.
“The difference is consequence architecture,” Mani says. “When leaders hold employees accountable for AI errors without modeling risk, adoption becomes theater.”
The key, he says, is measurement.
“Organizations tracking usage breed compliance,” Mani explains. “Those measuring decision quality where humans caught errors build trust.”
When executives publicly share AI failures and protect employees who override the AI with better context, positive impact rises.
Embed Learning Into Work
Su Belagodu, Managing Partner at Intellectus Advisors, emphasizes structural integration.
“Non-executive teams adapt to AI when learning is built into their daily work, not treated as extra credit,” Belagodu says.
To her, leadership’s role is architectural.
“Leadership enables this by creating clarity around where AI fits, making experimentation safe and tying AI use to real deliverables,” Belagodu says. “Adoption follows structure, not enthusiasm.”
Eliminate the Fear Signal
Daria Rudnik, Team Architect and Executive Leadership Coach at Daria Rudnik Coaching & Consulting, hears a consistent concern.
“Some people are afraid of the technology itself, but for many, it’s about what using AI might signal,” she says. “I hear this often: ‘I’m not afraid AI will replace me. I’m afraid my manager will think it can.’”
She believes this is why C-suite behavior, and clarity around intent, is so important.
“It’s not about doing more with less, but about doing more with more,” Rudnik says. “When leaders openly state that AI needs human judgment, context and oversight, and that skillful use of AI elevates rather than replaces roles, adoption becomes safer.”
She notes that teams will engage when they believe AI is a way to grow their impact—and not merely another way to “optimize.”
Create a Culture Where AI Thrives
Chandrakanth Lekkala, Principal Data Engineer at Narwal.ai, says AI adoption is critically shaped by a handful of C-suite behaviors: visibly using AI tools themselves, allocating dedicated budgets, celebrating experiments publicly and sharing transparent metrics.
“Leadership authenticity and consistent support determine whether AI becomes genuinely valuable,” he says.
He also stresses psychological safety.
“Establish an environment where questioning AI outputs is encouraged,” Lekkala says, “and create feedback channels where employees shape AI implementation.”
Radical Transparency Drives Trust
Uttam Kumar, Engineering Manager at American Eagle Outfitters, highlights the importance of transparency.
“The decision to be radically transparent about the goals of AI implementation is the primary driver of workforce trust,” he says.
When leaders clarify that AI removes “drudgery”—such as manual stock counting in retail—rather than headcount, resistance declines.
He also calls for frontline inclusion.
“Value is gained when executives involve frontline employees in the early design phase, ensuring the AI solves real-world pain points,” Kumar says.
Technology, he reminds leaders, “is only as effective as the people who use it.”
Reward What You Want Repeated
Aishwarya Shah, independent researcher, brings the conversation back to incentives.
“AI adoption rises or dies based on what leaders reward,” she says.
She stresses visible usage and guardrails.
“Employees trust AI when the C-suite uses it visibly, sets clear guardrails and holds leaders accountable for outcomes—not just experimentation.”
In her view, behavior outpaces policy.
“When leadership is silent, inconsistent or risk-averse, AI gets ignored,” Shah says. “How leaders model AI use, fund it and govern it determines whether it becomes trusted or sidelined.”
How Leaders Can Earn Employee Buy-In
- Signal empowerment, not surveillance. Frame AI as a tool that strengthens human judgment rather than replaces it.
- Make decision-making visible. Explain how AI informed choices and where human reasoning prevailed.
- Celebrate intelligent overrides. Publicly reward employees who catch AI errors and improve outcomes.
- Model AI use yourself. Share your experiences openly to normalize learning and iteration.
- Address survival concerns directly. Clarify how roles will evolve and invest in sustained training.
- Measure decision quality over usage rates. Track where human insight improved AI output.
- Design consequence architecture. Make it safer to use AI thoughtfully than to ignore it.
- Embed AI into daily workflows. Build learning and experimentation into core deliverables.
- Eliminate fear signals. Explicitly state that AI elevates roles rather than signaling redundancy.
- Demonstrate authentic commitment. Fund, govern and visibly support AI initiatives consistently.
- Practice radical transparency. Clarify goals early and involve frontline employees in design.
- Reward responsible use. Incentivize accountability and protect employees who engage thoughtfully.
Adoption Is a Leadership Choice
Enterprise AI initiatives rarely fail because the models are weak. They fail because the leadership signals are mixed.
Employees are extraordinarily perceptive. They notice what executives reward, what they tolerate, what they measure and what they ignore. If AI is framed as a cost lever, it will be treated as a threat. If it is tracked through usage dashboards alone, it will become a compliance exercise. If leaders hide behind its outputs, trust will erode quietly.
But when executives model curiosity, make their judgment visible, protect thoughtful overrides and invest in role evolution—not just software licenses—AI adoption shifts from reluctant experimentation to sustained value creation.
In the end, AI does not transform organizations—leadership behavior does. Companies extracting real advantage from AI don’t necessarily have the most advanced models, but they do have leaders who send the clearest signals.
