
Divya Parekh

Founder, DivyaParekh.com

Published content

How to Create Smart AI Training That's Empowering, Not Frustrating

expert panel

For many workers, learning artificial intelligence tools has quietly become “a second job”—one layered onto already full workloads, unclear expectations and rising anxiety about job security. Instead of freeing time and cognitive energy, AI initiatives often increase pressure, leaving employees feeling overworked or even disposable. A 2024 McKinsey report on generative AI adoption found that employees are more likely to experience burnout when AI tools are introduced without role redesign or workload reduction, even as productivity expectations rise. Similarly, a recent study from The Upwork Research Institute reveals that while 96% of executives expect AI to improve worker productivity, 77% of employees feel it has only increased their workload (with an alarming 1 in 3 employees saying they will quit their jobs within the next six months due to burnout). Members of the Senior Executive AI Think Tank—a curated group of leaders in machine learning, generative AI and enterprise AI applications—note that this growing problem stems not necessarily from employee resistance or a lack of technical ability, but from how organizations sequence AI adoption, structure learning and communicate intent. Below, Think Tank members offer a clear roadmap for introducing AI as a system-level change—not an extracurricular obligation—to help ensure this technology empowers people rather than exhausts them.

How to Keep Enterprise AI Knowledge Accurate, Current and Secure

expert panel

Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information. Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren’t yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

AI Is Now Strategy—Here’s How Org Charts Must Change

expert panel

As AI becomes inseparable from competitive strategy, executives are confronting a difficult question: Who actually owns AI? Traditional org charts, designed for slower cycles of change, often fail to clarify accountability when algorithms influence revenue, risk and brand trust simultaneously. Without oversight and clear ownership, issues like “shadow AI” deployments that increase compliance and reputational risk can quickly get out of hand. To prevent this, executive teams are rethinking AI councils, Chief AI Officers and cross-functional pods as strategic infrastructure—not bureaucratic overhead. Members of the Senior Executive AI Think Tank—a curated group of leaders specializing in machine learning, generative AI and enterprise AI deployment—argue that structure matters, but not in the way most organizations assume. Below, they break down how leading organizations are restructuring for AI: what belongs at the center, what should be embedded in the business and how executive teams can assign clear ownership without slowing innovation.

Building Trust in AI: Strategies Leaders Can Use Now

expert panel

As artificial intelligence advances at breakneck speed, the question of trust has become more urgent than ever. How do senior leaders ensure that innovation doesn’t outpace safety—and that every stakeholder, from customers to regulators and employees, retains confidence in rapidly evolving AI systems? Members of the Senior Executive AI Think Tank—a curated group of seasoned AI leaders and ethics experts—are confronting this challenge head-on. With backgrounds at Microsoft, Salesforce, Morgan Stanley and beyond, these executives are uniquely positioned to share practical, real-world strategies for building trust even in regulatory gray areas. And their insights come at a critical moment: A recent global study by KPMG found that only 46% of people worldwide are willing to trust AI systems, despite widespread adoption and optimism about AI’s benefits. That “trust gap” is more than just a perception issue—it’s a barrier to realizing AI’s full business potential. Against this backdrop, the Think Tank’s lessons are not theoretical, but actionable frameworks for leading organizations in a world where regulation lags, public concern mounts and the stakes for getting trust wrong have never been higher.

Building Executive Presence Online: A Guide for Today’s Leaders

article

Influence is now a core leadership skill. Award-winning authority builder Divya Parekh explains why executives need a visible voice and how to build authentic, trust-driven influence that strengthens credibility, attracts top talent, and drives meaningful impact.

How to Govern 'Shadow AI' Use Without Killing Creativity

expert panel

As enterprises scale their use of artificial intelligence, a subtle but potent risk is emerging: employees are increasingly turning to external AI tools without oversight. According to a 2025 report by 1Password, around one in four employees is using unapproved AI technology at work. This kind of “shadow AI” challenges traditional governance, security and alignment frameworks. But should it be banned outright? Or can it be harnessed to spur innovation and encourage creativity and experimentation? The Senior Executive AI Think Tank—a curated group of senior leaders specializing in machine learning, generative AI and enterprise AI applications—has pooled its collective wisdom to help organizations transform unmanaged AI usage from a hidden threat into a structured lever of innovation, enhancing speed, agility and enterprise alignment.

Company details

DivyaParekh.com

Industry

Management Consulting