
Mohan Krishna Mannava

Data Analytics Leader, Texas Health

Dallas, TX

Published content

Is Europe Now Ready to Unleash Its AI Potential?

expert panel

Europe has spent the last decade establishing itself as the global leader in technology regulation. The General Data Protection Regulation (GDPR) reshaped how organizations handle personal data worldwide, and the European Union’s landmark AI Act aims to set guardrails for high-risk AI systems across industries. Yet policymakers now appear willing to recalibrate. European officials have begun discussing potential simplifications or delays to portions of the AI Act and related digital rules as they confront a widening innovation gap with the U.S. and China. The EU’s strict regulatory framework has slowed the pace of large-scale AI experimentation compared with other global tech hubs, putting European firms at a distinct disadvantage in the market.

Members of the Senior Executive AI Think Tank—a curated network of leaders specializing in machine learning, generative AI and enterprise AI strategy—say the debate isn’t simply about regulation versus innovation. Instead, they argue that Europe’s regulatory approach has quietly limited several categories of AI development, from cross-border data platforms to real-time industrial automation. If policymakers move forward with regulatory adjustments, the ripple effects could be significant: Startups may gain the freedom to experiment faster, enterprises may finally scale AI deployments beyond pilot programs and the EU could evolve from global rule-setter into a more formidable technology competitor. Below, Think Tank members explain what Europe may have been holding back—and what could happen next.

The Hidden Leadership Signals That Make or Break AI Adoption

expert panel

AI tools are proliferating across enterprises at unprecedented speed. Yet implementation does not guarantee adoption. According to a McKinsey report on generative AI adoption, while organizations are investing heavily, many struggle to translate experimentation into sustained value. The gap is rarely technical—it is behavioral.

Members of the Senior Executive AI Think Tank, a curated group of experts in enterprise AI, generative AI and machine learning strategy, agree: Whether AI becomes a trusted decision-support system—or a tool employees quietly resist—depends largely on the signals sent by the C-suite. Executives shape consequence structures, model risk tolerance, determine measurement standards and define what success looks like. In short, employees learn how to treat AI by watching how leaders treat it. Below, Think Tank members share what C-suite leaders most often get wrong—and what they must do differently to ensure their organizations gain real, measurable value from AI.

How to Keep Enterprise AI Knowledge Accurate, Current and Secure

expert panel

Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information.

Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren’t yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

AI Is Now Strategy—Here’s How Org Charts Must Change

expert panel

As AI becomes inseparable from competitive strategy, executives are confronting a difficult question: Who actually owns AI? Traditional org charts, designed for slower cycles of change, often fail to clarify accountability when algorithms influence revenue, risk and brand trust simultaneously. Without clear ownership and oversight, issues like “shadow AI” deployments that increase compliance and reputational risk can quickly get out of hand. To prevent this problem, executive teams are rethinking AI councils, Chief AI Officers and cross-functional pods as strategic infrastructure—not bureaucratic overhead.

Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in machine learning, generative AI and enterprise AI deployment—argue that this structure matters, but not in the way most organizations assume. Below, they break down how leading organizations are restructuring for AI: what belongs at the center, what should be embedded in the business and how executive teams can assign clear ownership without slowing innovation.

How to Build AI Literacy That Empowers—and Protects—Your Workforce

expert panel

AI agents are no longer experimental tools tucked inside innovation labs. They are drafting contracts, recommending prices, screening candidates and reshaping how decisions are made across companies. As adoption accelerates, however, many organizations are discovering a sobering truth: Knowing how to use AI is not the same as knowing when not to. Members of the Senior Executive AI Think Tank—a curated group of technologists, executives and strategists shaping the future of applied AI—agree that the next frontier of AI maturity is literacy rooted in judgment. Training programs must now prepare employees not just to operate AI agents, but to question them, override them and escalate concerns when outputs conflict with human values, domain expertise or organizational risk.

That concern is well founded: Organizations relying on unchecked automation face higher reputational and compliance risk, even when systems appear highly accurate. Similarly, confident but incorrect AI outputs—often called “hallucinations”—are becoming one of the biggest enterprise risks as generative AI scales. Against that backdrop, Senior Executive AI Think Tank members outline what effective AI literacy training must look like in practice—and why leaders must act now.

What the Disney–OpenAI Deal Means for Tomorrow's Media

expert panel

The recent Disney–OpenAI partnership represents a turning point in the convergence of entertainment and artificial intelligence. By investing $1 billion in OpenAI and securing a three-year licensing deal for over 200 characters, Disney positions itself not only as a content powerhouse but as a first-mover in AI-driven storytelling, setting new competitive benchmarks for legacy media companies. This partnership also shines a light on the way generative AI is reshaping IP licensing, content production and audience engagement at scale. Jeff Katzenberg, former CEO of DreamWorks Animation, says AI could reduce the costs of creating an animated film by 90%, drastically changing the way creative works have historically been produced.

So what does this mean for the future of storytelling in media? And how can legacy media companies integrate frontier AI capabilities into content ecosystems without compromising IP, brand integrity or creative quality? Members of the Senior Executive AI Think Tank—a curated group of experts specializing in machine learning, generative AI and enterprise AI applications—see the Disney–OpenAI alliance as a strategic signal that AI is moving from a peripheral tool to a core creative and operational engine. Below, they provide expert analysis and actionable strategies to help leaders navigate this rapidly evolving landscape.
