About
Divya Parekh, a Thinkers50-recognized leadership coach and AI adoption advisor, is a strategic founder and executive partner who helps leaders and organizations build future-ready performance in an AI-accelerated world. She blends executive coaching, leadership development, and practical AI integration to increase decision velocity, strengthen execution, and create cultures built on clarity and accountability. Her work bridges strategy and psychology with real-world systems leaders can actually use, turning complexity into focused action and measurable results.
Published content

expert panel
In boardrooms around the world, artificial intelligence has shifted from experimentation to execution. Enterprise leaders are no longer asking whether to deploy AI—they are asking how to scale it across jurisdictions that disagree on what “responsible” looks like. The regulatory map is anything but uniform. The European Union’s risk-based AI Act framework takes a precautionary stance, while the United States continues to rely on sector-specific oversight and executive guidance. At the same time, public trust remains fragile. According to Edelman’s 2024 Trust Barometer, a majority of global respondents report concern that innovation is moving too quickly without sufficient safeguards—an anxiety that directly affects adoption, investment and brand reputation. For AI leaders, this divergence creates both friction and opportunity. The organizations that treat ethics and governance as strategic design challenges—not compliance checklists—will be positioned to expand confidently across markets. Members of the Senior Executive AI Think Tank—a curated group of machine learning, generative AI and enterprise AI experts—argue that navigating global AI complexity requires a shift in mindset. Innovation and compliance are not opposing forces. When structured intentionally, they reinforce one another. The following strategies outline how leaders can operationalize that balance in practice.

expert panel
For many workers, learning artificial intelligence tools has quietly become "a second job"—one layered onto already full workloads, unclear expectations and rising anxiety about job security. Instead of freeing time and cognitive energy, AI initiatives often increase pressure, leaving employees feeling overworked or even disposable. A 2024 McKinsey report on generative AI adoption found that employees are more likely to experience burnout when AI tools are introduced without role redesign or workload reduction, even as productivity expectations rise. Similarly, a recent study from The Upwork Research Institute reveals that while 96% of executives expect AI to improve worker productivity, 77% of employees feel it has only increased their workload (with an alarming 1 in 3 employees saying they will quit their jobs within the next six months due to burnout). Members of the Senior Executive AI Think Tank—a curated group of leaders in machine learning, generative AI and enterprise AI applications—note that this growing problem is not necessarily due to employee resistance or lack of technical ability, but to how organizations sequence AI adoption, structure learning and communicate intent. Below, Think Tank members offer a clear roadmap for introducing AI as a system-level change—not an extracurricular obligation—to help ensure this technology empowers people rather than exhausts them.

expert panel
Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information. Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren't yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

expert panel
As AI becomes inseparable from competitive strategy, executives are confronting a difficult question: Who actually owns AI? Traditional org charts, designed for slower cycles of change, often fail to clarify accountability when algorithms influence revenue, risk and brand trust simultaneously. Without oversight and clear ownership, issues like "shadow AI" deployments, which increase compliance and reputational risk, can quickly get out of hand. To prevent this, executive teams are rethinking AI councils, Chief AI Officers and cross-functional pods as strategic infrastructure—not bureaucratic overhead. Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in machine learning, generative AI and enterprise AI deployment—argue that structure matters, but not in the way most organizations assume. Below, they break down how leading organizations are restructuring for AI: what belongs at the center, what should be embedded in the business and how executive teams can assign clear ownership without slowing innovation.

expert panel
As artificial intelligence advances at breakneck speed, the question of trust has become more urgent than ever. How do senior leaders ensure that innovation doesn’t outpace safety—and that every stakeholder, from customers to regulators and employees, retains confidence in rapidly evolving AI systems? Members of the Senior Executive AI Think Tank—a curated group of seasoned AI leaders and ethics experts—are confronting this challenge head-on. With backgrounds at Microsoft, Salesforce, Morgan Stanley and beyond, these executives are uniquely positioned to share practical, real-world strategies for building trust even in regulatory gray areas. And their insights come at a critical moment: A recent global study by KPMG found that only 46% of people worldwide are willing to trust AI systems, despite widespread adoption and optimism about AI’s benefits. That “trust gap” is more than just a perception issue—it’s a barrier to realizing AI’s full business potential. Against this backdrop, the Think Tank’s lessons are not theoretical, but actionable frameworks for leading organizations in a world where regulation lags, public concern mounts and the stakes for getting trust wrong have never been higher.

article
Influence is now a core leadership skill. Award-winning authority builder Divya Parekh explains why executives need a visible voice and how to build authentic, trust-driven influence that strengthens credibility, attracts top talent, and drives meaningful impact.
Company details
THE DP GROUP, LLC
Company bio
The DP Group is a leadership and management consulting firm that helps executives and organizations build AI-ready performance systems while protecting the human core that drives results. We partner with CEOs and senior leaders to reduce noise, accelerate decision-making, and embed practical AI into everyday execution, from communication and planning to performance management and strategic delivery. The work is both strategic and deeply human: we strengthen clarity, accountability, and culture so leaders can drive outcomes without burning out their people or eroding trust. The result is measurable and felt: faster execution, sharper priorities, stronger leadership presence, and teams that can move with confidence in an AI-accelerated world.