The data that finally made the problem undeniable#
For about two years, the "AI cheating in interviews" story was anecdotal. Hiring managers talked about candidates who seemed too polished, pauses that felt scripted, and new hires who could not perform in the role they had interviewed so well for. The data to quantify it was missing.
That data has started arriving. The picture it paints is worse than anecdote suggested, and it explains why remote technical interviewing is in active retreat across the industry.
The verified numbers#
Fabric analysis of 19,368 interviews (State of AI Interview Cheating report): 38.5% of the dataset was flagged for AI assistance. More striking is the trajectory — the flag rate jumped from 9% in July 2025 to 45% by September 2025.
Gartner survey of 3,000 job seekers (cited in Computerworld coverage): 6% of job seekers admitted to interview fraud. Separately, Gartner found that 72.4% of recruiting leaders are now conducting in-person interviews specifically to combat fraud.
Blind user survey: 20% of US workers admitted to secretly using AI in job interviews. 59% of hiring managers reported suspecting AI misuse in recent interviews.
Coda Search/Staffing data (cited by Computerworld): in-person interview requests jumped from 5% of roles in 2024 to 30% of roles in 2025. Google, Cisco, and McKinsey are publicly confirmed to have reverted to more in-person interviewing.
HackerRank recruiter survey (Stopping AI Cheating in Remote Tech Assessments): over 60% of engineering and TA leaders list assessment security as their top concern for 2025-2026.
ACCA (Association of Chartered Certified Accountants) announced that it will end remote exams in March 2026, specifically citing AI-assisted cheating as the reason. This is a professional accounting qualification affecting hundreds of thousands of candidates annually.
Columbia University suspension case: Chungin "Roy" Lee was suspended from Columbia in 2025 after building Interview Coder, which later rebranded as Cluely. Cluely then raised $5.3 million in seed funding in April 2025 on the explicit premise of helping users cheat in interviews and beyond.
The tools that make this possible#
For anyone unfamiliar, the tools for AI-assisted interview cheating have become cheap and sophisticated:
Cluely (cluely.com) is the highest-profile product. It runs invisibly in the background, screenshots coding problems or conversational prompts, and displays AI-generated responses on a second screen or subtle overlay. The company explicitly markets it as a tool to "cheat on everything" — interviews, exams, sales calls, first dates.
Multi-monitor setups with screen-sharing AI: candidate shares their main screen, a second screen shows the interview via browser, an AI tool watches the video and audio feeds, generates answers, displays them on the second screen.
Earpiece-paired AI assistants: wireless earbuds paired with a phone running an AI model that listens and whispers answers. Hardware has become small enough that detection via webcam is difficult.
Post-submission code generation for take-homes: the candidate simply pastes the problem into Claude or ChatGPT and submits the output.
This is not a fringe phenomenon. Cluely raised serious seed funding at a point when its explicit pitch was enabling interview cheating. Investors priced it as a real business. That is a market signal about how many people will pay for this category.
The industry response#
The response has been more decisive than anyone predicted at the start of 2024.
Return to in-person: as documented above, Google, Cisco, McKinsey, and many smaller firms are requiring in-person interviewing for senior or sensitive roles. Gartner's 72.4% figure is striking — nearly three in four recruiting leaders are conducting in-person interviews specifically because of AI cheating concerns.
Proctoring software: some companies have rolled out AI-based proctoring tools that watch candidates via webcam, track eye movements, and flag suspicious behaviour. These have their own reliability and civil-liberties problems — the Liang et al. 2023 Stanford study and the list of universities that have disabled AI detection tools show why this is not a clean answer.
End of remote assessments in some certification bodies: the ACCA decision is the clearest example, but other professional certification bodies are likely to follow.
Shift in what interviews evaluate: some companies have moved weight from live coding toward system design, behavioural conversations, and deep technical discussion, where real-time AI assistance is harder. The trade-off is that these formats test different skills from the ones the old coding interviews measured.
Paid trial periods: for contract roles especially, paid 1-2 week trial periods at the start have become more common. This produces actual work output and is the highest-signal evaluation available.
"AI-aware" interviews: a smaller number of companies explicitly allow AI use during the interview and evaluate on how well the candidate directs it. This matches the world candidates will actually work in but is harder to evaluate consistently.
Why this is worse than the surface-level story#
The numbers above describe an equilibrium that is costly for everyone:
Honest candidates pay a tax. Every hour of proctoring software, every in-person mandate, every paranoid interview process is a cost imposed on every candidate to defend against the fraction who are cheating. A thoughtful, honest engineer has to sit through invasive monitoring, fly to a city they do not need to visit, or do a 16-hour take-home project, to compensate for cheating they did not commit.
The incentive structure inverts. A candidate who does not use AI during interviews competes on paper against candidates who do. The incentive to cheat grows because not cheating is now a disadvantage, which is exactly the dynamic behind the jump in flag rates from 9% in July to 45% in September 2025.
Signal from interviews has degraded. Companies that still rely primarily on the interview are making more bad hires. Additional signals (references, trial periods, multi-stage processes) are expensive but necessary.
Junior candidates are hit hardest. Senior candidates have past work, reputation, GitHub history, references. Juniors have only the interview, and the interview has the worst signal. The junior hiring market is measurably worse now, compounding the problem that AI coding tools already created for entry-level roles.
Trust in remote work erodes. The interview cheating story has become part of the broader "remote work does not work" narrative, even though the two are not really the same thing. It has influenced return-to-office decisions at companies that do not have interview fraud problems.
What works in hiring now#
Based on conversations with hiring managers and practical experience running interviews in the current environment, a workable approach:
- No screen-shared coding tests for roles where you can plausibly require in-person or shared-live-editor alternatives. The cheating surface is too wide for screen-share alone.
- Small take-homes with live walkthrough. A 2-hour take-home that the candidate does on their own time, followed by a 45-minute live follow-up where they defend specific implementation choices. Catches AI submissions because candidates cannot defend code they did not actually write.
- Senior interviews weighted toward system design and architecture rather than coding. Harder to cheat in real time, and more representative of senior work.
- Reference checks: three for any serious role. Always good practice, now effectively mandatory.
- Paid trial periods for contract work. The single highest-signal evaluation available.
- Honesty about tool use. Candidates who openly describe their AI workflow during interviews are not the problem. Candidates who secretly use a tool and cannot explain their own output are.
Where this goes next#
Regulation is coming. GDPR scrutiny of proctoring software in Germany and the wider EU is already under way. Employment law concerns about invasive monitoring will likely produce explicit rules within two to three years.
Detection will improve but will not fully solve. The tools for detecting AI-coached speech and code will get better. They will not reach 100% reliability, and the false-positive rate is the critical metric — already documented to be too high for many uses.
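To see why the false-positive rate is the critical metric, a quick Bayes calculation helps. The sketch below uses the survey figures cited above as base rates, but the detector's true-positive and false-positive rates (90% and 10%) are assumptions chosen purely for illustration; no vendor publishes verified numbers.

```python
# Illustrative Bayes calculation: how the false-positive rate dominates
# the usefulness of an AI-cheating detector. The TPR and FPR values
# below are assumptions for illustration, not measured figures.

def flagged_is_cheater(base_rate: float, tpr: float, fpr: float) -> float:
    """P(candidate actually cheated | detector flagged them)."""
    true_flags = base_rate * tpr          # cheaters correctly flagged
    false_flags = (1 - base_rate) * fpr   # honest candidates wrongly flagged
    return true_flags / (true_flags + false_flags)

# If 20% of candidates cheat (the Blind self-report figure), a detector
# that catches 90% of cheaters but wrongly flags 10% of honest candidates
# still accuses an innocent person roughly 1 time in 3.
p = flagged_is_cheater(base_rate=0.20, tpr=0.90, fpr=0.10)
print(f"P(cheated | flagged) = {p:.2f}")  # 0.69

# If only 6% cheat (the Gartner self-admission figure), most flags
# land on honest candidates.
p_low = flagged_is_cheater(base_rate=0.06, tpr=0.90, fpr=0.10)
print(f"P(cheated | flagged) = {p_low:.2f}")  # 0.36
```

The uncomfortable implication: the lower the real cheating rate in your candidate pool, the worse the detector performs as an accusation engine, even if its headline accuracy never changes.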
The arms race continues. Cluely and similar products will get harder to detect. Detection tools will get more invasive. Neither side wins cleanly. This is the new steady state.
Interviews will bifurcate. Senior roles will shift toward long, conversational, reference-heavy processes that are slow and expensive but accurate. Junior roles will keep doing coding interviews with aggressive proctoring and in-person components. The middle will be messy.
The honest summary: AI has made the hiring process worse for almost everyone. The companies that take this seriously, redesign their processes thoughtfully, and accept the real costs are getting better outcomes than those pretending the problem is smaller than it is. If you are on either side of the hiring table, the minimum is to stop pretending.
Further reading#
- AI Detection Tools Are Broken on the related problem in content authenticity.
- Vibe Coding Is a Lie on the underlying shift in what engineers do that makes interviews harder to design.
- The Class of 2023 Retrospective on the broader pattern of AI tools changing faster than the systems around them adapt.
Sources#
- Fabric State of AI Interview Cheating 2026: https://fabrichq.ai/blogs/state-of-ai-interview-cheating-in-2026-insights-from-19-368-interviews
- Computerworld on in-person return: https://www.computerworld.com/article/4044734/to-counter-ai-cheating-companies-bring-back-in-person-job-interviews.html
- HackerRank 2025 playbook: https://www.hackerrank.com/writing/stopping-ai-cheating-remote-tech-assessments-2025-playbook-recruiters
- TechCrunch on Cluely / Interview Coder: https://techcrunch.com/2025/04/21/columbia-student-suspended-over-interview-cheating-tool-raises-5-3m-to-cheat-on-everything/
- Cluely product page: https://cluely.com
- Harvard Gazette on AI cheating at work and school: https://news.harvard.edu/gazette/story/2025/10/the-fear-wholesale-cheating-with-ai-at-work-school-the-reality-its-complicated/
- Liang et al. Stanford study on AI detector bias: https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7
- List of universities disabling AI detectors: https://www.pleasedu.org/resources/schools-that-banned-ai-detectors
