Jim Liddle

Entrepreneur | Investor | Advisor | Enterprise AI Strategist

London, UK

About

Jim Liddle is a serial entrepreneur, executive leader, and technologist with 25+ years building and scaling companies from the ground up, taking them from early product code to global market success. Liddle successfully exited a previous venture to a leading cloud storage and data management unicorn, and is experienced across the full business lifecycle: founding, fundraising, scaling, and exit. A seasoned speaker on AI and data strategy, he focuses on how organizations can responsibly and effectively implement AI, from initial data strategy through to AI use cases, infrastructure, and governance. Hands-on with emerging technology, Liddle stays close to the detail of how AI, data, and architecture converge to drive innovation, efficiency, and growth in the enterprise.

Published content

Data Integrity: Expert Strategies for AI Builders and Content Hosts

expert panel

In the race to feed AI’s insatiable appetite for training data, model builders are increasingly butting heads with the platforms that host the content they depend on. The latest flashpoint is Reddit’s lawsuit against Perplexity AI, which accuses the company of “industrial-scale” evasion of anti-scraping protections and the indirect harvesting of Reddit posts through search engine caches. The case raises a knotty question: When is public web content a legitimate training resource, and when is it legally or ethically off-limits? Responses are arriving from both the marketplace and governments, with emerging startups helping content creators monetize AI-harvested data and Europe advancing the Artificial Intelligence Act, which would require firms to disclose or summarize copyrighted training data. The members of the Senior Executive AI Think Tank bring a practical, experienced perspective to the discussion of what responsible data acquisition should look like. Here, they break down where ethical and legal lines should be drawn, explain what responsible access must entail for AI developers, and share practical tips to help platforms rethink their data-licensing and access-control strategies.

How to Govern 'Shadow AI' Use Without Killing Creativity

expert panel

As enterprises scale their use of artificial intelligence, a subtle but potent risk is emerging: employees are increasingly turning to external AI tools without oversight. According to a 2025 report by 1Password, around one in four employees is using unapproved AI technology at work. This kind of “shadow AI” challenges traditional governance, security and alignment frameworks. But should it be banned outright? Or can it be harnessed to spur innovation and encourage creativity and experimentation? The Senior Executive AI Think Tank—a curated group of senior leaders specializing in machine learning, generative AI and enterprise AI applications—has pooled its collective wisdom to help organizations transform unmanaged AI usage from a hidden threat into a structured lever of innovation, enhancing speed, agility and enterprise alignment.

Building AI Products With Limited Resources in a Centralized Landscape

expert panel

As major players like OpenAI, Google, Amazon and Anthropic continue to dominate AI infrastructure, smaller businesses and startups face a growing concern: how to compete in a landscape shaped by centralized compute, model development and vast resources. Major tech firms have invested billions in foundation models and own substantial portions of the infrastructure underlying generative AI. This can make it challenging for smaller companies not only to get off the ground, but to get ahead. The Senior Executive AI Think Tank brings together seasoned experts in machine learning, generative AI and enterprise AI applications who believe that smaller firms can still win—in different ways. This article explores their insights on how startups should pivot from trying to match scale to leveraging agility, domain expertise and smarter infrastructure choices.

The AI Model Debate: Weighing Cost, Control and Competitive Edge

expert panel

As enterprise AI adoption accelerates, so too does the complexity of choosing the right foundation. Should companies invest in proprietary platforms like GPT-4 or Claude, or build on open-source models such as Meta’s Llama or Mistral? The answer increasingly lies not in technical specs alone, but in how each option aligns with an organization’s cost structure, data governance needs and long-term innovation strategy. Recent research from McKinsey & Company underscores the growing momentum behind open systems: Over 50% of enterprises already report using open-source AI tools across their technology stack, and 76% expect to increase usage in the coming years. At the same time, proprietary platforms offer speed, reliability and white-glove scalability—often the shortest path to business impact. The trade-offs are real and consequential. To help executive decision-makers navigate these choices, we turned to members of the Senior Executive AI Think Tank—a group of enterprise AI, machine learning and innovation leaders who are shaping the way organizations operationalize artificial intelligence. In the sections below, they break down the pros and cons of each approach and offer actionable guidance on when to build, when to buy and how to orchestrate the right AI model strategy for your organization’s evolving needs.

What TIME Missed: Where AI Can Make the Greatest Impact Next

expert panel

The recent release of TIME’s 2025 TIME100 AI list underscores how much attention is focused on foundation models, generative agents and consumer‑facing AI tools. Yet a closer look suggests that many powerful AI applications are still flying under the radar. That’s where the Senior Executive AI Think Tank comes in—a curated group of experts in machine learning, generative AI and enterprise AI applications who combine technical depth with executive perspective. In this article, they use real-world insight to examine which industries and use cases are underrepresented in lists like TIME’s and explore the biggest AI frontiers that deserve attention now.

Amazon, Kiro and 'Vibe Coding': What Engineers Should Expect Now

expert panel

Earlier this year, Amazon Web Services introduced Kiro, a new agentic AI‑Integrated Development Environment (IDE) designed to transform how software gets built—moving beyond prototype experimentation and toward structured, production‑grade code. The trend of vibe coding—loosely defined as using powerful AI agents to generate code directly from intuitive prompts—has been gaining attention. At the same time, tools like Kiro are being launched to offer guardrails and structure, addressing many of the common pitfalls of rapid AI‑driven development. The Senior Executive AI Think Tank, a curated group of experts in machine learning, generative AI and enterprise AI applications, has examined what the adoption of AI vibe coding—and especially of tools like Kiro—might mean for engineering teams and the future of product development, and offers actionable strategies for how firms can respond, adapt and lead in the next wave of AI‑augmented development.

Company details

N/A

Industry

Information Technology & Services