How Companies Can Scale AI Beyond Pilot Projects
The Hidden Barrier Between AI Pilots and Real Business Value

Many companies have launched AI pilots but struggle to translate those experiments into sustained business value. Senior Executive AI Think Tank members explain why the challenge is less about technology and more about operating models, leadership accountability, data infrastructure and organizational learning—and what executives must do to finally scale AI.

by AI Editorial Team on March 12, 2026

Across industries, executives are investing aggressively in artificial intelligence. Yet despite billions spent on experimentation, relatively few organizations have turned AI pilots into scalable platforms that generate repeatable value.

According to PwC’s Global CEO Survey, 56% of CEOs report they’ve seen neither revenue nor cost benefits from investments in AI—a signal that experimentation alone is not enough to create enterprise impact.

Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in enterprise AI, machine learning and digital transformation—say the problem is rarely technical. Instead, organizations struggle with leadership alignment, operating models, governance and cultural change.

Below, their insights reveal a consistent theme: Scaling AI requires redesigning how companies operate—not simply deploying more technology.

Leadership and Operating Models Determine AI Success

Sathish Anumula, Enterprise and Business Architect for IBM Corporation, argues that the biggest barrier to scaling AI isn’t the technology itself but the way organizations are structured to absorb it.

“The gap between AI pilots and enterprise-scale platforms is rarely a technology problem,” Anumula says. “It’s a leadership and operating model challenge.”

Companies often prove that AI works in controlled environments but lack the infrastructure and governance to deploy it broadly. As a result, initiatives stall in what Anumula calls the “pilot plateau.”

“Organizations prove value in sandboxes but lack the infrastructure and organizational will to industrialize,” he explains.

To move forward, leaders must shift their mindset away from isolated use cases toward platform thinking.

“Scaling AI requires investing in shared data infrastructure, reusable pipelines and embedded governance,” he says. “It demands treating data as a product, reorganizing around outcomes with federated AI talent, and executive sponsorship that is operational—not ceremonial.”

For Anumula, the question isn’t whether AI technology is ready. It’s whether organizations are ready to use it.

“The technology has never been more capable,” he says. “The real question is whether our organizations are built to absorb that capability at scale.”

“Organizations keep asking ‘How do we adopt AI for work?’ when the right question is ‘How do we redesign work around AI?’”

– Tipu Swaran, Technology Strategy and Digital Transformation Executive at a major financial services company

Redesigning Work for an AI-Native Organization

For Tipu Swaran, a Technology Strategy and Digital Transformation Executive at a major financial services company, scaling AI requires something deeper than adopting new tools.

“Most organizations treat AI scaling as a technology problem when it’s actually an operating model problem,” Swaran says. “Organizations keep asking ‘How do we adopt AI for work?’ when the right question is ‘How do we redesign work around AI?’”

This distinction is critical. Simply layering AI onto legacy processes often creates complexity without generating meaningful productivity gains.

“Leaders must reimagine the fabric of how decisions are made, how work is done and how value gets created,” Swaran says.

This means integrating AI across strategy, organizational design and technology architecture simultaneously.

“Like every transformational technology, AI presents two diverging paths,” he explains. “Bolt it onto legacy structures or reimagine the operating model around it.”

The consequences of each approach can be dramatic.

“The first leads to unrealized potential,” Swaran says. “The second leads to competitive advantage.”

Industry-Specific Platforms Drive Real Impact

Another reason organizations struggle to scale AI is that many pilots rely on generic models that lack operational relevance.

Justin Newell, CEO of INFORM, says sustainable impact comes when AI is designed specifically for industry workflows.

“Companies get stuck when AI is treated as a standalone layer instead of an integrated operational engine,” Newell says.

INFORM develops AI-driven decision software for logistics, aviation, manufacturing and supply chains—sectors where operational complexity demands specialized solutions.

“The pilot trap is averted when software is verticalized,” he explains. “Designed specifically for managing complexities in logistics, supply chain, auto, aviation and production.”

Generic AI tools often struggle to navigate real-world operational constraints. Industry expertise becomes essential.

“Success means shifting from generic models to repeatable, industry-specific applications,” Newell says.

When AI platforms reflect the realities of operational environments, they can move quickly from experimentation to deployment.

“Scalability comes when AI is built by those who understand industry bottlenecks and focus on immediate operational needs,” he says.

At that point, AI stops being an experiment and becomes infrastructure.

Clear Evaluation Metrics Prevent ‘Pilot Attachment’

Even when AI pilots succeed technically, organizations often struggle to decide which ones should scale.

Daria Rudnik, Team Architect and Executive Leadership Coach at Daria Rudnik Coaching & Consulting, says the issue frequently comes down to unclear evaluation criteria.

“The problem many organizations face is that they are ready to jump into experimentation,” Rudnik says. “But very few think about how they will judge success across them.”

She recalls working with a company running multiple AI experiments simultaneously.

“When it was time to scale, teams felt attached to their tools,” she explains. “There was no shared understanding of what ‘working’ meant.”

The company eventually created a cross-functional team to establish shared metrics and business criteria.

“They defined common criteria and business metrics first, then selected the tools that met them,” Rudnik says. “Scaling requires clear evaluation standards, transparent trade-offs and early communication.”

Without these guardrails, AI experimentation can quickly spiral into fragmented initiatives competing for attention and resources.

“Companies that manage to move past the experimentation stage into production are very clear that pilots are temporary,” Rudnik says. “The next step is to evaluate results and move forward with what fits the organization’s needs.”
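The shared-criteria approach Rudnik describes can be sketched as a simple scorecard: weighted business metrics are defined once, before tools are compared, and every pilot is rated against the same criteria. The criteria, weights, and pilot names below are illustrative assumptions, not drawn from the source.

```python
# Hypothetical pilot scorecard: shared, weighted criteria defined up front,
# then applied uniformly so selection is not driven by team attachment.

CRITERIA = {                      # weights sum to 1.0
    "business_impact": 0.4,       # projected cost savings or revenue lift
    "integration_effort": 0.3,    # fit with existing systems (higher = easier)
    "adoption_readiness": 0.3,    # user and change-management readiness
}

def score_pilot(ratings: dict) -> float:
    """Weighted score on a 0-10 scale; ratings use the shared criteria only."""
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

pilots = {
    "invoice_triage": {"business_impact": 8, "integration_effort": 6, "adoption_readiness": 7},
    "chat_summarizer": {"business_impact": 5, "integration_effort": 9, "adoption_readiness": 6},
}

# Pilots ranked by the shared criteria, not by team preference.
ranked = sorted(pilots, key=lambda p: score_pilot(pilots[p]), reverse=True)
print(ranked)  # ['invoice_triage', 'chat_summarizer']
```

Because the weights are agreed before any pilot is scored, trade-offs are transparent and the losing teams can see exactly why a tool was not selected.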

Industrializing AI Requires Product Thinking

For many leaders, the biggest shift required to scale AI is moving from project thinking to product thinking.

Pawan Anand, Associate Vice President of Communications, Media and Technology at Persistent Systems, sees organizations struggle when AI remains tied to temporary experiments.

“Moving from pilots to platforms requires shifting from experimentation to industrialization,” Anand says.

AI systems must be embedded directly into operational workflows.

“AI must be tied to a clear business owner, shared data architecture, reusable pipelines and embedded governance,” he explains.

Too often, companies optimize algorithms but ignore the systems around them.

“The hard part is redesigning workflows so AI is native to operations.”

Without this shift, organizations fall into a cycle of “pilot fatigue.”

“Fragmented data, no funding model, unclear accountability and change resistance,” Anand says. “Organizations optimize models but never reengineer the system around them.”

Building scalable AI means building infrastructure—not just models.

Product Ownership Sustains AI Value

Similarly, Uttam Kumar, Engineering Manager at American Eagle Outfitters, believes AI should be treated as a living product rather than a one-time deployment.

“Most organizations get stuck because they treat AI as a series of IT projects with a start and stop date,” Kumar says. “You must shift toward a product-centric operating model where cross-functional squads own the entire lifecycle.”

These teams combine data scientists, product managers and business leads who understand customer needs.

“This ensures the AI actually solves a friction point in the customer journey,” he says.

Without sustained ownership, early success can quickly fade.

“Models often wither once the initial pilot funding dries up,” Kumar explains.

Treating AI as a product, however, ensures it evolves alongside the business.

“Viewing AI as a core product is the only way to ensure it keeps delivering value to the customer over the long term,” he adds.

“Most organizations don’t fail to scale AI because of technology; they fail because they never decide what AI is accountable for.”

– Andre Shojaie, an executive leader in AI governance and digital strategy and Founder of HumanLearn

Governance and Accountability Unlock Scale

For Andre Shojaie, an executive leader in AI governance and digital strategy and Founder of HumanLearn, the real issue preventing scale is accountability.

“Most organizations don’t fail to scale AI because of technology,” Shojaie says. “They fail because they never decide what AI is accountable for.”

Pilots succeed because they operate in controlled environments, he says, but “platforms fail because no one owns outcomes across products, data and decisions.”

Shojaie argues AI must be treated as a core operating capability.

“That means clear decision rights, shared data contracts and governance tied to business outcomes rather than model performance.”

The biggest bottleneck is often the organizational middle layer.

“Strategy says AI matters, teams experiment locally, but no one redesigns processes or roles,” he says.

Until that happens, AI remains trapped in experimentation.
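A "shared data contract" of the kind Shojaie mentions can be made concrete as a declared schema, with a named owner, that producing and consuming teams validate against identically. The field names, owner, and validation logic below are a minimal illustrative sketch, not a description of any specific tool.

```python
# Minimal sketch of a shared data contract: a declared schema with a named
# accountable owner, validated the same way by producers and consumers.

CUSTOMER_EVENTS_CONTRACT = {
    "owner": "customer-data-team",   # accountable party (hypothetical)
    "fields": {                      # field name -> required Python type
        "customer_id": str,
        "event_type": str,
        "amount": float,
    },
}

def validate(record: dict, contract: dict) -> list:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = []
    for name, expected in contract["fields"].items():
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], expected):
            errors.append(f"{name}: expected {expected.__name__}")
    return errors

good = {"customer_id": "c-42", "event_type": "purchase", "amount": 19.99}
print(validate(good, CUSTOMER_EVENTS_CONTRACT))  # [] -- record honors the contract
```

The point of the sketch is the accountability structure, not the code: the contract names who owns the data, and a violation is a governance event with a clear owner rather than a silent pipeline failure.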

Infrastructure and MLOps Form the Backbone

From a technical standpoint, scalable AI depends on robust infrastructure.

Pradeep Kumar Muthukamatchi, Principal Cloud Architect at Microsoft, says many organizations accumulate technical debt during experimentation.

“Organizations often stall in ‘pilot purgatory’ because they lack a unified MLOps backbone,” he says.

Instead of building reusable platforms, teams create one-off solutions.

“True scalability demands centralized infrastructure that standardizes data ingestion, governance and deployment pipelines.”

AI models should be managed like any other enterprise software.

“You must treat models like software—prioritizing continuous monitoring, versioning and clear business ROI.”

Without that discipline, experimentation rarely evolves into enterprise capability.
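Muthukamatchi's "treat models like software" discipline can be sketched as a minimal registry entry: every deployed model carries a version, an accountable owner, and a monitored metric with a declared threshold that triggers rollback review. All names and numbers here are illustrative assumptions.

```python
# Minimal sketch: models registered like software releases, with continuous
# monitoring against a declared threshold that flags a release for review.

from dataclasses import dataclass

@dataclass
class ModelRelease:
    name: str
    version: str      # semantic version, like any other software artifact
    owner: str        # accountable business/engineering owner
    metric: str       # the production metric being monitored
    threshold: float  # below this value, the release is flagged for rollback

    def healthy(self, observed: float) -> bool:
        """True while the monitored metric stays at or above the threshold."""
        return observed >= self.threshold

release = ModelRelease(
    name="demand-forecast", version="2.1.0", owner="supply-chain-ml",
    metric="forecast_accuracy", threshold=0.85,
)

print(release.healthy(0.91))  # True: within threshold, keep serving
print(release.healthy(0.78))  # False: flag for review or rollback
```

In practice this structure usually lives in a model registry rather than application code, but the discipline is the same: no model reaches production without a version, an owner, and a monitored exit condition.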

Data Platforms Are the Foundation for Scalable AI

Chandrakanth Lekkala, Principal Data Engineer at Narwal.ai, helps enterprises modernize digital ecosystems through cloud-native data platforms, AI engineering and quality engineering. In that work, Lekkala frequently sees organizations struggle because their infrastructure was never designed for production-scale AI.

“Placing AI in scaled platforms needs to be viewed as a product, not a project,” Lekkala says.

Many organizations successfully launch pilots but underestimate the complexity of operationalizing them. As pilots multiply, technical debt accumulates and data pipelines become fragmented.

“Top-tier executives need to institute proper governance, standardization of infrastructure and executive sponsorship with quantifiable ROI objectives,” he explains.

Without those foundations, pilots often fail to translate into operational systems.

“Organizations usually hit three areas,” Lekkala says. “They lack the data infrastructure to run at production scale; they struggle to integrate AI into existing systems and workflows; and they underestimate the change management required to engage users.”

Lekkala says overcoming these barriers requires more than technical expertise—it demands coordinated leadership across technology, operations and business teams.

“Delivering success requires cross-functional interaction, committed platform workforces and a transition from an experimental mentality to operational perfection,” he says.

Learning Velocity Defines AI Advantage

For Bhubalan Mani, Lead of Supply Chain Technology and Analytics at GARMIN, the organizations that succeed with AI measure something different.

“AI at scale is not a tech problem,” Mani says. “It is a learning speed problem.”

Many companies focus on adoption metrics rather than improving decisions.

“Firms stall because they measure adoption, not decision quality or reuse,” Mani says. “They optimize for demos instead of capabilities that compound across products and functions.”

Successful organizations treat each deployment as infrastructure.

“Every model in production must leave behind shared data pipelines, governance patterns and change playbooks that make the next build faster,” Mani says.

This accelerates the next innovation cycle.

“Leaders stop asking ‘Which pilot wins?’ and start asking ‘How much cheaper and smarter does this pilot make the next one?’” he says. “Learning velocity—not model accuracy—becomes the defining KPI of an AI platform.”
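Mani's "learning velocity" can be tracked as a simple KPI: how much faster each deployment reaches production than the one before, as shared pipelines and playbooks compound. The deployment figures below are illustrative assumptions.

```python
# Illustrative learning-velocity KPI: fractional reduction in time-to-production
# from one deployment to the next, driven by reuse of shared infrastructure.

weeks_to_production = [24, 16, 10, 7]   # successive deployments (assumed data)

def learning_velocity(history: list) -> list:
    """Fractional speed-up of each deployment versus its predecessor."""
    return [round(1 - b / a, 2) for a, b in zip(history, history[1:])]

print(learning_velocity(weeks_to_production))  # [0.33, 0.38, 0.3]
```

A flat or shrinking series is the warning sign Mani describes: each pilot is being built from scratch instead of leaving reusable infrastructure behind.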

“At the pilot stage, the person who built it knows its limits. At scale, the person using it doesn’t.”

– Jim Liddle, entrepreneur, investor and enterprise AI strategist

Culture Determines Whether AI Amplifies Good Decisions

Finally, Jim Liddle, entrepreneur, investor and enterprise AI strategist, believes scaling AI ultimately depends on organizational thinking.

“Most organizations don’t have a scaling problem,” Liddle says. “They have a thinking problem.”

Liddle notes that pilots succeed because knowledgeable teams oversee them closely.

“At the pilot stage, the person who built it knows its limits,” he says. “At scale, the person using it doesn’t.”

Companies often assume deploying AI automatically improves decision-making.

“AI doesn’t make organizations smarter,” Liddle says. “It amplifies whatever thinking culture already exists.”

If decision-making processes are weak, AI simply accelerates mistakes.

“If the culture is shallow, AI just moves faster to the wrong answer.”

How to Move Beyond the Pilot Stage

  • Treat AI scaling as an operating model challenge. Leaders must redesign data infrastructure, governance and team structures to absorb AI capabilities.
  • Redesign work around AI rather than adding AI to existing workflows. Organizations that rethink how work happens unlock significantly more value.
  • Develop industry-specific AI platforms. Solutions grounded in operational expertise scale faster and deliver measurable results.
  • Define evaluation criteria before launching multiple pilots. Shared metrics and business objectives prevent experimentation from becoming fragmented.
  • Industrialize AI with clear ownership and reusable infrastructure. Operational workflows, not just algorithms, determine whether AI creates lasting value.
  • Adopt a product-centric model for AI systems. Cross-functional teams should own the entire lifecycle to ensure long-term impact.
  • Create governance and accountability for AI outcomes. Decision rights, data contracts and outcome-based incentives are essential for scale.
  • Build a unified MLOps backbone. Standardized pipelines and monitoring systems turn experimentation into repeatable capability.
  • Build AI-ready data infrastructure before attempting large-scale deployment. Standardized platforms, strong governance and cross-functional collaboration are essential to integrate AI into real systems and sustain enterprise-scale performance.
  • Measure learning velocity rather than pilot success. Each deployment should accelerate future innovation.
  • Strengthen decision culture before scaling AI. AI amplifies existing thinking patterns—good or bad.

Turning AI Investment Into Enterprise Value

The path from AI pilot to enterprise platform is far less about algorithms than about organizational design. Leaders must rethink operating models, build shared infrastructure and establish clear accountability for outcomes.

As members of the Senior Executive AI Think Tank emphasize, companies that succeed with AI treat it not as a technology deployment but as a foundational business capability. Those that fail to make this shift risk remaining stuck in pilot purgatory—while more adaptive competitors turn experimentation into sustained advantage.

