Raghu Para
Ford Motor Company
Published content

Expert Panel
Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information. Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren't yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

Expert Panel
The recent Disney–OpenAI partnership represents a turning point in the convergence of entertainment and artificial intelligence. By investing $1 billion in OpenAI and securing a three-year licensing deal for over 200 characters, Disney positions itself not only as a content powerhouse but as a first mover in AI-driven storytelling, setting new competitive benchmarks for legacy media companies. The partnership also highlights how generative AI is reshaping IP licensing, content production and audience engagement at scale. Jeff Katzenberg, former CEO of DreamWorks Animation, says AI could reduce the cost of creating an animated film by 90%, drastically changing the way creative works have historically been produced. So what does this mean for the future of storytelling in media? And how can legacy media companies integrate frontier AI capabilities into their content ecosystems without compromising IP, brand integrity or creative quality? Members of the Senior Executive AI Think Tank—a curated group of experts specializing in machine learning, generative AI and enterprise AI applications—see the Disney–OpenAI alliance as a strategic signal that AI is moving from a peripheral tool to a core creative and operational engine. Below, they provide expert analysis and actionable strategies to help leaders navigate this rapidly evolving landscape.

Expert Panel
AI infrastructure spending has entered an era of historic scale. Microsoft, Google, Amazon and others have collectively committed hundreds of billions of dollars to expand compute capacity, even as analysts warn that parts of the market may be racing ahead of sustainable demand. For enterprise leaders outside Big Tech, the stakes are just as high, but the margin for error is far smaller. While AI investment continues to accelerate, many organizations struggle to connect infrastructure outlays to near-term financial returns, raising concerns about capital efficiency and long-term value creation. Members of the Senior Executive AI Think Tank—a curated group of executives and leaders shaping enterprise AI strategy—argue that the debate should center not on whether to invest, but on how. What follows is a playbook drawn directly from their insights—detailing how seasoned leaders evaluate billion-dollar bets, stage risk intelligently and ensure AI infrastructure becomes a durable advantage rather than an expensive monument to hype.

Expert Panel
In the race to feed AI’s insatiable appetite for training data, model builders are increasingly butting heads with the platforms that host the content they depend on. The latest flashpoint is Reddit’s lawsuit against Perplexity AI, which accuses the company of “industrial-scale” evasion of anti-scraping protections and the indirect harvesting of Reddit posts through search engine caches. The case raises a knotty question: When is public web content a legitimate training resource, and when is it legally or ethically off-limits? Responses are arriving from both the marketplace and governments, with emerging startups helping content creators monetize AI-harvested data and Europe’s Artificial Intelligence Act requiring firms to disclose summaries of copyrighted training data. The members of the Senior Executive AI Think Tank bring a practical and experienced perspective to the discussion of what responsible data acquisition should look like. Here, they break down where ethical and legal lines should be drawn and what responsible access must entail for AI developers, and they share practical tips to help platforms rethink their data-licensing and access-control strategies.

Expert Panel
As artificial intelligence advances at breakneck speed, the question of trust has become more urgent than ever. How do senior leaders ensure that innovation doesn’t outpace safety—and that every stakeholder, from customers to regulators and employees, retains confidence in rapidly evolving AI systems? Members of the Senior Executive AI Think Tank—a curated group of seasoned AI leaders and ethics experts—are confronting this challenge head-on. With backgrounds at Microsoft, Salesforce, Morgan Stanley and beyond, these executives are uniquely positioned to share practical, real-world strategies for building trust even in regulatory gray areas. And their insights come at a critical moment: A recent global study by KPMG found that only 46% of people worldwide are willing to trust AI systems, despite widespread adoption and optimism about AI’s benefits. That “trust gap” is more than just a perception issue—it’s a barrier to realizing AI’s full business potential. Against this backdrop, the Think Tank’s lessons are not theoretical; they are actionable frameworks for leading organizations in a world where regulation lags, public concern mounts and the stakes of getting trust wrong have never been higher.

Expert Panel
As artificial intelligence continues its rapid advance—from foundational models to enterprise-scale deployments—questions about sustainability are taking on new urgency. While much of the discourse has centered on the carbon footprint of data centers and model training, sustainable AI must also address long-term economic, labor and societal impacts: How will value from AI be shared? Who bears the downstream risks? Well-designed systems matter not only for performance, but also for fairness, trust and longevity. The Senior Executive AI Think Tank brings together seasoned experts in machine learning, generative AI and enterprise AI applications who offer deep insight into these challenges and opportunities. Below, they explore what truly sustainable AI looks like—beyond energy metrics—and who should be held accountable for achieving it.

