AI Is Now Strategy—Here’s How Org Charts Must Change

As AI becomes central to competitive strategy, executive teams face a structural reckoning. Insights from members of the Senior Executive AI Think Tank reveal how AI councils, Chief AI Officers and other new roles can drive alignment, accountability and measurable business impact—without slowing innovation.

by AI Editorial Team on January 22, 2026

As AI becomes inseparable from competitive strategy, executives are confronting a difficult question: Who actually owns AI? Traditional org charts, designed for slower cycles of change, often fail to clarify accountability when algorithms influence revenue, risk and brand trust simultaneously.

Without oversight and clear ownership, issues like “shadow AI” deployments, which raise compliance and reputational risk, can quickly get out of hand. To prevent this, executive teams are rethinking AI councils, Chief AI Officers and cross-functional pods as strategic infrastructure—not bureaucratic overhead.

Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in machine learning, generative AI and enterprise AI deployment—argue that this structure matters, but not in the way most organizations assume. Below, they break down how leading organizations are restructuring for AI: what belongs at the center, what should be embedded in the business and how executive teams can assign clear ownership without slowing innovation.

“When accountability is fragmented, AI drifts into shadow use. When control is overcentralized, innovation suffocates.”


– Aditya Vikram Kashyap, Vice President of Firmwide Innovation at Morgan Stanley


Centralized Authority With Real Decision Power

As AI becomes enterprise infrastructure, members of the Senior Executive AI Think Tank agree on one point: AI needs clear executive ownership with real authority. Not as a symbolic role or an IT subfunction, but positioned close to the CEO to shape strategy, investment and risk. Without that clarity, AI initiatives proliferate without cohesion, and accountability quickly erodes.

Mohan Krishna Mannava, Data Analytics Leader at Texas Health, argues that alignment starts with a Chief AI Officer reporting directly to the CEO or COO, accountable for both financial impact and risk. “Governance must be centralized in a lean AI council,” he says, while execution is pushed into the business. Crucially, Mannava emphasizes separating development from oversight, assigning AI ethics, compliance and audit responsibility to independent risk teams to prevent blind spots as systems scale.

That hybrid model resonates with Pradeep Kumar Muthukamatchi, Principal Cloud Architect at Microsoft, who warns against isolating AI in innovation labs. “Executive teams must dismantle the AI silo,” he says, replacing standalone AI efforts with a central Chief AI Officer (CAIO) function that sets governance, data standards and platforms, paired with cross-functional pods embedded in operational teams. The result, he argues, is direct accountability for outcomes within P&L ownership, ensuring AI remains a competitive lever rather than a cost center.

Aditya Vikram Kashyap, Vice President of Firmwide Innovation at Morgan Stanley, frames the issue as one of authority, not technology. “AI reshapes authority long before it improves performance,” he says. Centralized ownership of standards and escalation, combined with decentralized execution, creates the balance organizations need. “When accountability is fragmented, AI drifts into shadow use,” Kashyap says. “When control is overcentralized, innovation suffocates.”

Federated Execution Through Embedded AI Pods

While authority may sit at the center, execution must live inside the business. AI delivers its greatest value when cross-functional teams are embedded directly within operational units—close to real workflows, domain expertise and measurable outcomes.

Sathish Anumula, Sr. Customer Success Manager and Architect at IBM Corporation, describes this as a hub-and-spoke model: A central AI hub establishes infrastructure, standards and compliance, while agile AI pods operate inside functions like marketing, supply chain and manufacturing. “These pods mix engineers with experts in the field,” Anumula says, maintaining a clear reporting line to business leaders for ROI, with a dotted line to the CAIO for technical governance. The result is centralized oversight without slowing delivery.

Uttam Kumar, Engineering Manager at American Eagle Outfitters, sees similar benefits, advocating for decentralized, agile “AI pods” or “friction teams” embedded across functions, “allowing for rapid experimentation and domain-specific solution deployment.” Central coordination, Kumar says, should be lightweight—focused on shared platforms and standards—so AI is treated as a utility rather than a siloed initiative.

Roman Vinogradov, VP of Product at Improvado, emphasizes that this structure accelerates adoption because teams understand how AI connects to their goals. “Cross-functional AI pods tackle specific projects or challenges,” he says, ensuring accountability while fostering collaboration. “Regular check-ins will help track progress and facilitate communication among teams.”

Governance, Risk and the Fight Against Shadow AI

As AI scales, its failure modes become more consequential—and more visible. Several Think Tank members stress that governance is not about slowing innovation, but about protecting the enterprise when models drift, bias emerges or unapproved tools proliferate.

Bhubalan Mani, Lead of Supply Chain Technology and Analytics at GARMIN, argues that most organizations focus on who builds AI rather than who owns outcomes when it fails. “The org chart should define who owns accountability when AI fails,” he says. For Mani, that means a Chief AI Officer who sets priorities and kill criteria, supported by a lean AI council that defines the boundaries for safe experimentation.

Dileep Rai, Manager of Oracle Cloud Technology at Hachette Book Group (HBG), frames governance as a shift from hierarchy to stewardship. AI, he says, should be guided by cross-functional councils that bind strategy, risk, data and operations into a single rhythm. “Small AI pods embedded in business units ensure local accountability,” he says, while the council maintains ethical and architectural coherence across the enterprise.

From a technical standpoint, Chandrakanth Lekkala, Principal Data Engineer at Narwal.ai, reinforces the need for explicit decision rights. Hybrid reporting structures, RACI clarity and ethics review boards with veto power ensure accountability doesn’t erode as AI adoption accelerates. “This matrix balances centralized strategy with decentralized execution,” Lekkala says, making governance operational rather than theoretical.
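As an illustration, decision rights of the kind Lekkala describes can be made operational by encoding them as structured data that tooling can query and validate. The sketch below is a minimal, hypothetical example; the role names and activities are illustrative assumptions, not a framework drawn from the article.

```python
# Hypothetical RACI matrix for AI governance activities.
# Role and activity names are illustrative assumptions, not a prescribed model.
RACI = {
    "model_deployment":   {"R": "AI Pod",       "A": "Business P&L Owner",
                           "C": "CAIO Office",  "I": "AI Council"},
    "ethics_review":      {"R": "Ethics Board", "A": "Ethics Board",
                           "C": "Legal",        "I": "CAIO Office"},
    "platform_standards": {"R": "CAIO Office",  "A": "CAIO",
                           "C": "CTO",          "I": "AI Pods"},
}

def accountable_for(activity: str) -> str:
    """Return the single Accountable owner for a given activity."""
    return RACI[activity]["A"]

def validate(matrix: dict) -> bool:
    """Check that every activity assigns all four RACI roles."""
    return all(set(row) == {"R", "A", "C", "I"} for row in matrix.values())
```

Kept in version control and checked in CI, a table like this makes "who approves, who monitors" auditable rather than tribal knowledge—the operational governance Lekkala argues for.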

“When teams know who owns the vision, who owns delivery and how fast decisions get made, AI stops being hype.”


– Divya Parekh, Founder of The DP Group


Alignment Over Org Charts

Several Think Tank members push back on the assumption that better org charts automatically lead to better AI outcomes. In their view, alignment, trust and clarity of purpose matter far more than titles or reporting lines.

Greg Shewmaker, CEO of r.Potential, cautions that structure alone cannot create value. “The real need is to orchestrate human and digital capacity in a way that allocates AI effectively and reflects each company’s unique capabilities, constraints and trust levels,” he says. Without clarity on how AI amplifies real work, he adds, pilots remain disconnected and accountability remains elusive.

Divya Parekh, Founder of The DP Group, agrees that structure is secondary to alignment. She advocates starting with an AI council to clarify ownership and decision speed, then building cross-functional pods that move work forward. “The structure is not the secret,” Parekh says. “When teams know who owns the vision, who owns delivery and how fast decisions get made, AI stops being hype.”

Daria Rudnik, Team Architect and Executive Leadership Coach at Daria Rudnik Coaching & Consulting, takes the argument further, questioning whether traditional org charts are useful at all. “AI shouldn’t trigger new org charts or isolated AI units,” she says. Instead, she urges leaders to organize around skills, problems and shared outcomes, using AI to enable adaptive collaboration rather than reinforcing rigid hierarchies.

“If AI is mismanaged internally, it can distort the company’s ‘soul’ and corrupt its external message.”


– Jason Barnard, Founder and CEO of Kalicube


AI as Shared Enterprise Capability, Not a Silo

As AI increasingly shapes how organizations operate and are perceived, ownership itself must be redefined. AI cannot belong to a single function without distorting enterprise coherence.

Jason Barnard, Founder and CEO of Kalicube, frames this as a brand risk. AI influences internal behavior, which in turn shapes the external narrative. “If AI is mismanaged internally, it can distort the company’s ‘soul’ and corrupt its external message,” Barnard says. Because AI systems amplify these signals at scale, misalignment becomes difficult to correct once it takes hold.

The solution, says Barnard, is to establish AI pods per department, overseen by an AI Orchestrator, to guarantee internal alignment and external consistency. When responsibility is shared and alignment is explicit, AI reinforces enterprise identity. When it is fragmented, it quietly undermines it.

Key Tips When Restructuring for AI

  • Appoint a Chief AI Officer (CAIO) reporting to the CEO or COO. Give this role clear accountability for AI strategy, investment, risk management and financial impact.
  • Establish a lean AI Council. Include key C-suite members (CTO, CFO, Legal, Risk) to oversee standards, governance and escalation pathways.
  • Separate oversight from execution. Assign ethics, compliance and audit responsibilities to independent risk functions to prevent conflicts of interest and blind spots.
  • Create cross-functional AI pods within operational teams. Each pod (marketing, supply chain, operations) should include both technical and domain experts.
  • Use a hub-and-spoke model. The central hub sets standards, platforms and compliance; pods deliver tangible business outcomes.
  • Focus on measurable impact. Assign accountability to business leaders for ROI and outcomes, while the CAIO maintains technical oversight.
  • Define clear decision rights and RACI matrices to track who owns outcomes, who approves initiatives, and who monitors ethical and operational risks.
  • Set up ethics review boards and kill criteria to prevent runaway AI experiments or shadow AI deployments.
  • Centralize oversight, decentralize execution. Ensure councils coordinate strategy, risk and policy, while embedded teams deliver value locally.
  • Focus on clarity, not titles. Ensure teams understand who owns the AI vision, who delivers outcomes and how decisions are made.
  • Organize around skills, problems and outcomes rather than rigid hierarchies. Use AI to reveal strengths and enable cross-functional collaboration.
  • Communicate purpose and impact. Teams should see AI as a tool to amplify work, not as a separate initiative or side function.
  • Maintain enterprise alignment. Ensure AI initiatives reinforce brand, culture and strategic priorities rather than creating fragmented or conflicting outcomes.

AI Doesn’t Need More Titles—It Needs Ownership

As AI reshapes competitive dynamics, executive teams are discovering that governance is strategy. Titles alone cannot create alignment, and decentralization without oversight invites risk. The most resilient organizations are those that treat AI as enterprise infrastructure—centrally guided, locally executed and collectively owned.

AI will continue to compress decision cycles and blur functional boundaries, but leaders who design for accountability, adaptability and trust today will be best positioned to turn AI from a technological advantage into a durable operating model.
