About
★ VP-Head of AI Product Engineering ★ Currently building, leading, launching Agentic AI Enterprise SaaS products ★ Expertise in Enterprise AI SaaS, Fintech, Marketplaces ★ Startups, Workday, Oracle ★ Gartner Thought Leader, Author of AI SaaS x Product Engineering Newsletter
Preeti Shukla
Published content

Expert Panel
No longer confined to analytics dashboards and recommendation engines, AI systems are now initiating transactions, approving workflows, flagging anomalies and even orchestrating other software agents. With this sudden increase in autonomy, business leaders are left asking: Where should humans step back, and where must they stay firmly in control?

According to a 2025 McKinsey survey on the state of AI, nearly nine out of 10 organizations now report using AI in at least one business function, yet most are still early in scaling these technologies, and many lack robust governance and risk controls. As artificial intelligence advances from advisory tools to agentic systems capable of multi-step planning and execution, the leadership challenge shifts: defining not just what AI can do, but what it should do.

Members of the Senior Executive AI Think Tank, a curated group of experts in machine learning, generative AI and enterprise-scale transformation, argue that the real issue isn't capability but accountability. Whatever their industry, they converge on one theme: the boundary between human judgment and machine decision-making must be dynamic, evidence-based and anchored in responsibility. Here is how they recommend drawing, and redrawing, that line.
