Mohan Krishna Mannava
Data Analytics Leader, Texas Health
Published content

expert panel
The notion of a “steady state” has quietly disappeared from modern enterprise leadership. In its place is a reality defined by continuous disruption, where artificial intelligence is not just accelerating change but compounding it. Organizations are no longer transforming in phases—they are operating in a constant state of reinvention. For executives, this requires a shift from managing change as an event to leading within change as an environment. Members of the Senior Executive AI Think Tank—a curated group of experts in machine learning, generative AI and enterprise AI applications—bring a front-line perspective to this challenge. Their work across healthcare, cloud architecture, enterprise platforms and AI governance shows that the organizations that succeed are not those with the most advanced tools, but those with the most adaptive operating models and leadership mindsets. According to McKinsey’s 2025 report on the state of AI, companies are rapidly scaling AI adoption, yet many struggle to translate that investment into sustained business value—often because their structures, decision-making processes and cultures are not designed for continuous change. To help their fellow leaders better cope with these evolving demands, Think Tank members outline the capabilities executives can no longer treat as optional. Through real-world insights and expert perspectives, they explore how leaders are redesigning operating models, reshaping team expectations and building organizations that don’t just withstand disruption, but continuously learn and perform within it.

expert panel
Mar 11, 2026
Europe has spent the last decade establishing itself as the global leader in technology regulation. The General Data Protection Regulation (GDPR) reshaped how organizations handle personal data worldwide, and the European Union’s landmark AI Act aims to set guardrails for high-risk AI systems across industries. Yet policymakers now appear willing to recalibrate. European officials have begun discussing potential simplifications or delays to portions of the AI Act and related digital rules as they confront a widening innovation gap with the U.S. and China. The EU’s strict regulatory framework has slowed the pace of large-scale AI experimentation compared with other global tech hubs, putting the bloc at a distinct disadvantage in the market. Members of the Senior Executive AI Think Tank—a curated network of leaders specializing in machine learning, generative AI and enterprise AI strategy—say the debate isn’t simply about regulation versus innovation. Instead, they argue that Europe’s regulatory approach has quietly limited several categories of AI development, from cross-border data platforms to real-time industrial automation. If policymakers move forward with regulatory adjustments, the ripple effects could be significant: Startups may gain the freedom to experiment faster, enterprises may finally scale AI deployments beyond pilot programs and the EU could evolve from global rule-setter into a more formidable technology competitor. Below, Think Tank members explain what Europe may have been holding back—and what could happen next.

expert panel
AI tools are proliferating across enterprises at unprecedented speed. Yet implementation does not guarantee adoption. According to a McKinsey report on generative AI adoption, while organizations are investing heavily, many struggle to translate experimentation into sustained value. The gap is rarely technical—it is behavioral. Members of the Senior Executive AI Think Tank, a curated group of experts in enterprise AI, generative AI and machine learning strategy, agree: whether AI becomes a trusted decision-support system—or a tool employees quietly resist—depends largely on the signals sent by the C-suite. Executives shape consequence structures, model risk tolerance, determine measurement standards and define what success looks like. In short, employees learn how to treat AI by watching how leaders treat it. Below, Think Tank members share what C-suite leaders most often get wrong—and what they must do differently to ensure their organizations gain real, measurable value from AI.

expert panel
Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information. Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren’t yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

expert panel
As AI becomes inseparable from competitive strategy, executives are confronting a difficult question: Who actually owns AI? Traditional org charts, designed for slower cycles of change, often fail to clarify accountability when algorithms influence revenue, risk and brand trust simultaneously. Without oversight and clear ownership, issues like “shadow AI” deployments that increase compliance and reputational risk can quickly get out of hand. To prevent this problem, executive teams are rethinking AI councils, Chief AI Officers and cross-functional pods as strategic infrastructure—not bureaucratic overhead. Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in machine learning, generative AI and enterprise AI deployment—argue that this structure matters, but not in the way most organizations assume. Below, they break down how leading organizations are restructuring for AI: what belongs at the center, what should be embedded in the business and how executive teams can assign clear ownership without slowing innovation.

expert panel
AI agents are no longer experimental tools tucked inside innovation labs. They are drafting contracts, recommending prices, screening candidates and reshaping how decisions are made across companies. As adoption accelerates, however, many organizations are discovering a sobering truth: Knowing how to use AI is not the same as knowing when not to. Members of the Senior Executive AI Think Tank—a curated group of technologists, executives and strategists shaping the future of applied AI—agree that the next frontier of AI maturity is literacy rooted in judgment. Training programs must now prepare employees not just to operate AI agents, but to question them, override them and escalate concerns when outputs conflict with human values, domain expertise or organizational risk. That concern is well founded: Organizations relying on unchecked automation face higher reputational and compliance risk, even when systems appear highly accurate. Similarly, confident but incorrect AI outputs—often called “hallucinations”—are becoming one of the biggest enterprise risks as generative AI scales. Against that backdrop, Senior Executive AI Think Tank members outline what effective AI literacy training must look like in practice—and why leaders must act now.








