In the world of talent acquisition and technical hiring, a seismic shift is quietly underway. Earlier this year, Meta announced it is piloting a new kind of coding interview: one in which candidates may use AI assistants during the process. According to a report in Wired, Meta states it “is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”
Meanwhile, companies like Anthropic have reversed their stance on AI use during hiring, and Google is bringing in-person interviews back into its process in response to AI-assisted cheating. These developments raise provocative questions for senior HR and talent leaders: Should companies shift to evaluating candidates on how effectively they work with AI rather than solely on how they perform without it? And if so, how should hiring, assessment and performance metrics evolve accordingly?
The Senior Executive HR Think Tank—a curated group of senior leaders in employee experience, talent acquisition, DEI, performance management and AI in HR—is watching this shift closely. Below, they examine how this trend may reshape hiring practices and offer actionable strategies for implementation.
“Letting a developer use AI tools frees the individual to focus on asking the right questions and formulating the right hypotheses.”
The New Reality of Developer Workflows
When Meta announced its pilot coding interview that lets candidates use AI assistants, Heide Abelli, CEO and Co-Founder of SageX Inc., wasn’t surprised. “Letting a developer use AI tools frees the individual to focus on asking the right questions and formulating the right hypotheses,” she says. “It will help Meta identify the most imaginative developers in the spirit of Einstein, who said, ‘Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.’”
Developers rarely code in isolation; they collaborate with intelligent systems daily. Traditional interviews that test memorization of algorithms are, in her view, exactly the wrong approach: “What needs to be tested now is a software developer’s raw creativity, penchant for imagination and problem-solving capability.”
“Developers who effectively leverage AI tools demonstrate enhanced productivity. However, foundational knowledge remains critical.”
Reimagining Assessment: Tool Use as a Skill
For Ulrike Hildebrand, Strategic HR Advisor and Senior Consultant at Pin-Point Solutions, LLC, integrating AI into coding interviews is not a novelty—it’s essential. “AI has become integral to modern software development,” she says. “Developers who effectively leverage AI tools demonstrate enhanced productivity. However, foundational knowledge remains critical.”
Hildebrand emphasizes that proficiency with AI doesn’t replace technical fundamentals—it amplifies them. “Developers must evaluate AI outputs for accuracy, apply sound judgment and ensure solutions address the actual problem,” she explains. This shift toward AI-assisted interviews, she notes, reflects the way AI now augments rather than replaces human expertise. Evaluating both fundamentals and AI fluency gives hiring teams a clearer picture of how candidates will perform in the real world.
AI in Hiring: Risks and Ethical Considerations
AI in hiring promises efficiency—but also introduces complexity. Laci Loew, Fellow, HR Analyst and People Scientist at the Global Curiosity Institute, notes that “AI can evaluate candidates by processing large amounts of data quickly and identifying patterns of thinking or behavior that might not be immediately obvious to human evaluators.” If designed correctly, she adds, AI can even reduce human bias.
But Loew warns that biased data can lead to biased hiring outcomes. “AI systems are only as good as the data they’re trained on,” she says. “If that data contains biases, AI can perpetuate or even amplify them.”
To mitigate this, Loew advocates for a balanced “human-plus-machine” approach: “The best outcome is achieved by combining the talents that are uniquely human with the data-driven wisdom of AI.”
The Kitchen Test: Real Tools Matter
For Steve Degnan, Advisor, Board Member and former CHRO, Meta’s move feels like common sense. “That seems the same as asking an applicant for a chef’s job to whip something up in a test kitchen,” he says. “It’s a tool they will use, so why not?”
Degnan argues that interviews should mirror the real work environment. Developers rely on AI daily—pretending they don’t is artificial. But he cautions against using AI merely for show. “Asking applicants to create the most fictitious but believable interview answers they can, using AI, might not be such a great idea,” he says. The key is relevance, not novelty.
“Any technology organization that seeks to defeat cheating in the interview, and wants to understand the way AI helps a given engineer, should try this approach.”
Innovation in Hiring: Testing for AI Fluency
Jason Elkin, Co-Founder and Chief Innovation Officer at EQUALS TRUE, considers Meta’s experiment a very smart move. “The core skillset of the new software engineer now includes their ability to work with AI,” he says. “How this new approach to candidate vetting will work out is anyone’s guess, but Meta will keep iterating until a practical, scalable solution presents itself.”
Elkin notes that allowing AI during interviews also discourages cheating. “Any technology organization that seeks to defeat cheating in the interview, and wants to understand the way AI helps a given engineer, should try this approach,” he says. By legitimizing AI use, employers can evaluate how effectively candidates collaborate with the technology rather than policing whether they hide it.
What HR Leaders Can Do Next
- Measure how candidates use AI, not just whether they know it. Build metrics around candidate prompts, evaluation of AI responses, debugging of AI output and justification of decisions, and weight them as heavily as algorithmic correctness.
- Maintain foundational skill measurement. Ensure assessments still cover coding fundamentals, logic and technical judgment—AI fluency complements, not replaces, core skills.
- Guard against bias in AI-enabled hiring. Audit tool usage, ensure training data used in AI assessments is fair and inclusive, and combine AI metrics with human-evaluated soft skills such as adaptability, empathy and cultural fit.
- Mirror the real work environment. Give candidates access to the same tools they would use on the job so assessments measure how they will actually perform.
- Start small, iterate and scale. Follow Meta’s lead with a limited, phased rollout: Prototype new interview formats, collect data, refine the process and then scale across roles and candidate populations.
The Future of Hiring Is AI Collaboration
The hiring landscape is entering an era where the ability to collaborate with AI is becoming as essential as the ability to code. As Meta’s announcement illustrates, the firms that lead will be those that recognize this reality, update their assessment frameworks and build hiring practices aligned with tool-augmented workflows.
For talent leaders, this means more than simply adjusting interview scripts—it demands a re-imagining of what “preparedness” looks like in a world where human-AI collaboration is the norm. By focusing on both core technical fundamentals and the candidate’s ability to use AI thoughtfully, equitably and creatively, companies can better predict who will excel in the modern workplace. The future of hiring isn’t just about what a candidate knows—it’s increasingly about how they work with the tools that will define their career.
