Jim Liddle

Entrepreneur | Investor | Advisor | Enterprise AI Strategist

London, UK

About

Jim Liddle is a serial entrepreneur, executive leader, and technologist with 25+ years building and scaling companies from the ground up, from early product code to global market success. Liddle successfully exited a previous venture to a leading cloud storage and data management unicorn. He is experienced across the full business lifecycle: founding, fundraising, scaling, and exit. A seasoned speaker on AI and data strategy, he focuses on how organizations can responsibly and effectively implement AI, from initial data strategy to AI use cases, infrastructure and governance. Hands-on with emerging technology, Liddle stays close to the detail of how AI, data, and architecture converge to drive innovation, efficiency, and growth in the enterprise.

Published content

How to Create Smart AI Training That's Empowering, Not Frustrating

expert panel

For many workers, learning artificial intelligence tools has quietly become “a second job”—one layered onto already full workloads, unclear expectations and rising anxiety about job security. Instead of freeing time and cognitive energy, AI initiatives often increase pressure, leaving employees feeling overworked or even disposable. A 2024 McKinsey report on generative AI adoption found that employees are more likely to experience burnout when AI tools are introduced without role redesign or workload reduction, even as productivity expectations rise. Similarly, a recent study from The Upwork Research Institute reveals that while 96% of executives expect AI to improve worker productivity, 77% of employees feel it has only increased their workload—and an alarming one in three employees say they will quit their jobs within the next six months due to burnout. Members of the Senior Executive AI Think Tank—a curated group of leaders in machine learning, generative AI and enterprise AI applications—note that this growing problem is not necessarily due to employee resistance or lack of technical ability, but to how organizations sequence AI adoption, structure learning and communicate intent. Below, Think Tank members offer a clear roadmap for introducing AI as a system-level change—not an extracurricular obligation—to help ensure this technology empowers people rather than exhausts them.

How to Keep Enterprise AI Knowledge Accurate, Current and Secure

expert panel

Internal AI assistants are quickly becoming the connective tissue of modern enterprises, answering employee questions, accelerating sales cycles and guiding operational decisions. Yet as adoption grows, a quiet risk is emerging: AI systems are only as reliable as the knowledge they consume. Members of the Senior Executive AI Think Tank—a curated group of leaders working at the forefront of enterprise AI—warn that many organizations are underestimating the complexity of managing proprietary knowledge at scale. While executives often focus on model selection or vendor strategy, accuracy failures more often stem from outdated documents, weak governance and unclear ownership of information. Research from MIT Sloan Management Review shows that generative AI tools often produce biased or inaccurate outputs because they rely on vast, unvetted datasets, and that most responsible-AI programs aren’t yet equipped to mitigate these risks—reinforcing the need for disciplined, enterprise-level knowledge governance. As organizations move from experimentation to production, Think Tank members offer key strategies for rethinking how knowledge is curated, validated and secured—without institutionalizing misinformation at machine speed.

How to Build AI Literacy That Empowers—and Protects—Your Workforce

expert panel

AI agents are no longer experimental tools tucked inside innovation labs. They are drafting contracts, recommending prices, screening candidates and reshaping how decisions are made across companies. As adoption accelerates, however, many organizations are discovering a sobering truth: Knowing how to use AI is not the same as knowing when not to. Members of the Senior Executive AI Think Tank—a curated group of technologists, executives and strategists shaping the future of applied AI—agree that the next frontier of AI maturity is literacy rooted in judgment. Training programs must now prepare employees not just to operate AI agents, but to question them, override them and escalate concerns when outputs conflict with human values, domain expertise or organizational risk. That concern is well founded: Organizations relying on unchecked automation face higher reputational and compliance risk, even when their systems appear highly accurate. Similarly, confident but incorrect AI outputs—often called “hallucinations”—are becoming one of the biggest enterprise risks as generative AI scales. Against that backdrop, Senior Executive AI Think Tank members outline what effective AI literacy training must look like in practice—and why leaders must act now.

Execs: How to Fund AI Infrastructure With Confidence

expert panel

AI infrastructure spending has entered an era of historic scale. Microsoft, Google, Amazon and others have collectively committed hundreds of billions of dollars to expand compute capacity, even as analysts warn that parts of the market may be racing ahead of sustainable demand. For enterprise leaders outside Big Tech, the stakes are just as high, but the margin for error is far smaller. While AI investment continues to accelerate, many organizations struggle to connect infrastructure outlays to near-term financial returns, raising concerns about capital efficiency and long-term value creation. Members of the Senior Executive AI Think Tank—a curated group of executives and leaders shaping enterprise AI strategy—argue that the debate should not center on whether to invest, but how. What follows is a playbook drawn directly from their insights—detailing how seasoned leaders evaluate billion-dollar bets, stage risk intelligently and ensure AI infrastructure becomes a durable advantage rather than an expensive monument to hype.

AI Agents Are the New Customers—Is Your Business Ready?

expert panel

The launch of Google’s new AI shopping tools—including conversational search, agentic checkout and the ability for an AI to call stores for you—marks a turning point. These innovations raise a fundamental question for retailers and brands: What happens when the “customer” is no longer a human browsing or clicking, but an algorithm executing on behalf of a human? Google expects this new model to simplify shopping at scale, using its Shopping Graph—with more than 50 billion product listings—and its Gemini AI models to power agentic checkout and store-calling. Yet the transition toward “agentic commerce” is fraught with both risk and opportunity. Drawing on their expertise in machine learning, generative AI and enterprise AI applications, the members of the Senior Executive AI Think Tank explore this new form of commerce, how this shift could upend traditional consumer relationships and what merchants must do now to stay visible—and profitable.

Data Integrity: Expert Strategies for AI Builders and Content Hosts

expert panel

In the race to feed AI’s insatiable appetite for training data, model builders are increasingly butting heads with the platforms that host the content they depend on. The latest flashpoint is Reddit’s lawsuit against Perplexity AI, which accuses the company of “industrial-scale” evasion of anti-scraping protections and the indirect harvesting of Reddit posts through search engine caches. The case raises a knotty question: When is public web content a legitimate training resource, and when is it legally or ethically off-limits? Responses are arriving from both the marketplace and governments, with emerging startups helping content creators monetize AI-harvested data and Europe advancing the Artificial Intelligence Act, which would require firms to disclose or summarize copyrighted training data. The members of the Senior Executive AI Think Tank bring a practical, experienced perspective to the discussion of what responsible data acquisition should look like. Here, they break down where ethical and legal lines should be drawn and what responsible access must entail for AI developers, and they share insightful tips to help platforms rethink their data-licensing and access-control strategies.

Company details

N/A

Industry

Information Technology & Services