TL;DR
AI interview cheating is not a fringe behavior. It is widespread, systematic, and organized into a clear hierarchy of methods from basic to sophisticated.
Nobody plans to cheat when they first open the interview link. But somewhere between "I need this job" and "ChatGPT is right there," 83% of candidates say they would use AI assistance if they thought they could get away with it, according to Codepanion.
That number tracks with what Fabric sees in practice. Across 19,368 interviews analyzed between July 2025 and January 2026, 38.5% triggered cheating flags. The rate jumped from 9% in July 2025 to 45% by September and stayed elevated.
This post is a field guide to how candidates cheat, organized from their perspective. What they do step by step, why each method feels justified in the moment, and how it gets caught. It is useful for employers designing interview processes and for candidates considering whether cheating is worth the risk.
Method 1: Tab Switching and Second Screens (18% of Cases)
This is where most cheating starts. It is the gateway behavior.
What the candidate does: Opens a second browser tab or window with ChatGPT, Google, or relevant documentation. When the AI asks a question, they switch tabs, type the question, read the answer, and switch back. More sophisticated versions use a second monitor or a phone propped below the camera frame.
Why it feels justified: "I am not really cheating. I am just looking something up, like I would on the job." Candidates rationalize this as open-book testing. The line between "I know this but want to confirm" and "I have no idea but ChatGPT does" blurs easily.
How it gets caught: Tab switching is the easiest cheating method to detect. The browser reports focus loss events. Screen-sharing detection tools flag when the interview window loses focus. Even second-screen usage creates detectable patterns: eye movement shifts to a fixed position off-camera, and response timing shows a consistent delay after each question.
This is also the least effective method. The candidate has to read, comprehend, and rephrase the answer, which takes time and produces visible behavioral shifts. Detection systems caught this method long before AI cheating tools existed.
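In practice, the browser-side signal comes from standard `blur`/`focus` (or `visibilitychange`) events that fire when the interview window loses and regains focus. A minimal sketch of turning a logged event stream into total out-of-focus time — the event format here is hypothetical, not any particular platform's schema:

```python
from dataclasses import dataclass

@dataclass
class FocusEvent:
    timestamp: float  # seconds since interview start
    kind: str         # "blur" (window lost focus) or "focus" (regained)

def total_blur_seconds(events: list[FocusEvent], interview_end: float) -> float:
    """Sum the time the interview window spent out of focus.

    Assumes events are sorted by timestamp; an unmatched "blur"
    counts until the end of the interview.
    """
    total = 0.0
    blur_start = None
    for e in events:
        if e.kind == "blur" and blur_start is None:
            blur_start = e.timestamp
        elif e.kind == "focus" and blur_start is not None:
            total += e.timestamp - blur_start
            blur_start = None
    if blur_start is not None:
        total += interview_end - blur_start
    return total

events = [FocusEvent(30.0, "blur"), FocusEvent(42.0, "focus"),
          FocusEvent(95.0, "blur"), FocusEvent(110.0, "focus")]
print(total_blur_seconds(events, interview_end=600.0))  # 27.0
```

Correlating those blur windows with question timestamps is what makes the signal damning: focus loss that begins seconds after each question lands is hard to explain away.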
Method 2: Voice Mode LLMs (34% of Cases)
The second most common approach, and much harder to detect than tab switching.
What the candidate does: Opens ChatGPT voice mode or Gemini on a separate device (phone or tablet, usually out of camera view). The second device's microphone picks up the interview audio playing through the room. When the AI interviewer asks a question, the LLM generates a response and whispers it through earbuds, or the candidate reads it on the phone screen.

Why it feels justified: "Everyone uses ChatGPT. If the company wanted to test my memory, they would give me a written exam." The normalization of AI assistants in daily work makes this feel like a natural extension rather than deception.
How it gets caught: Voice mode creates two detectable signatures. First, the timing pattern. The candidate hears the question, the LLM processes it (2 to 4 seconds), and the candidate begins responding. This creates an unusually consistent delay that does not vary with question complexity. A human thinking through a difficult question takes longer than one recalling a simple fact. A candidate relaying LLM answers shows near-identical response latency regardless of difficulty.
Second, speech patterns change. When a candidate shifts from speaking naturally in introductory small talk to delivering polished, structured responses to technical questions, the inconsistency is detectable. Vocabulary richness increases suddenly. Filler words disappear. The candidate sounds like two different people.
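The timing signature can be checked with a few lines of arithmetic: measure the delay between each question ending and the answer starting, then test whether the delays are suspiciously uniform. A sketch with illustrative thresholds — the 2 to 5 second band and the spread cutoff are assumptions for demonstration, not any platform's actual parameters:

```python
import statistics

def latency_flag(latencies: list[float],
                 min_spread: float = 1.0,
                 relay_band: tuple[float, float] = (2.0, 5.0)) -> bool:
    """Flag suspiciously uniform response delays.

    A human's think time varies with question difficulty; a candidate
    relaying LLM output shows near-constant latency sitting inside the
    model's processing band. Thresholds here are illustrative.
    """
    if len(latencies) < 4:
        return False  # not enough observations to judge
    spread = statistics.stdev(latencies)
    lo, hi = relay_band
    in_band = all(lo <= t <= hi for t in latencies)
    return spread < min_spread and in_band

# Human-like: delays track difficulty (quick recall vs. long reasoning).
print(latency_flag([1.2, 6.8, 2.5, 11.0, 3.1]))  # False
# Relay-like: near-identical delay on every question.
print(latency_flag([3.1, 3.4, 3.0, 3.3, 3.2]))   # True
```

A real system would combine this with the speech-pattern shift described above rather than flag on timing alone.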
Method 3: Dedicated Cheating Tools (45% of Cases)
This is the most common method and the hardest to detect, because these tools are specifically designed to evade interview platforms.
What the candidate does: Installs a tool like Cluely, Interview Coder, Leetcode Wizard, or Final Round AI before the interview. These tools run as invisible overlays on the screen. They capture the interview audio, process it through an LLM in real time, and display suggested answers directly on the candidate's screen without any visible browser element or tab switch.
The candidate reads the suggested answer from the overlay while appearing to look at the camera. From the platform's perspective, no tabs were switched, no clipboard was used, and no external application was detected.
Why it feels justified: "The system is broken. Companies use AI to screen me out, so I will use AI to screen myself in." This is the most common rationalization, and it has an element of logic. 62% of hiring professionals admit that candidates are now better at faking with AI than recruiters are at detecting it, per WecreateProblems. Candidates see an uneven playing field and decide to level it.
How it gets caught: Invisible overlays do not switch tabs, but they do produce behavioral signatures. Eye tracking shows steady left-to-right horizontal scanning, the signature of reading text, rather than the scattered eye movement of someone recalling information.
Response timing shows the tool's processing lag: a consistent 3 to 5 second pause after every question as the audio is captured, processed, and displayed. And the quality-experience mismatch becomes apparent when a candidate with 2 years on their resume delivers senior-level architectural answers with textbook precision.
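The reading-pattern signal can be sketched as a simple heuristic over gaze samples: count the fraction of consecutive fixations that step left-to-right with little vertical drift. The normalized coordinates and thresholds below are illustrative, not a production eye-tracking model:

```python
def reading_score(gaze: list[tuple[float, float]]) -> float:
    """Fraction of consecutive gaze samples that move left-to-right
    with little vertical drift, the signature of reading a line of text.

    Coordinates are normalized screen positions (x, y) in [0, 1];
    thresholds are illustrative, not tuned.
    """
    if len(gaze) < 2:
        return 0.0
    reading_steps = 0
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        if 0.0 < x1 - x0 < 0.15 and abs(y1 - y0) < 0.03:
            reading_steps += 1
    return reading_steps / (len(gaze) - 1)

# Steady left-to-right sweep across one line: maximal score.
line = [(0.1 * i, 0.50) for i in range(8)]
print(reading_score(line))  # 1.0
# Scattered recall-like gaze: near-zero score.
print(reading_score([(0.8, 0.2), (0.1, 0.7), (0.6, 0.1)]))  # 0.0
```

A high score sustained during answers, combined with the consistent post-question lag, is what separates overlay use from ordinary glancing around.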
For a deeper look at the detection architecture behind each of these layers, see our technical deep dive on cheating detection.
Method 4: Live Help From Another Person (3% of Cases)
The oldest trick in the book, updated for remote interviews.
What the candidate does: Has someone else in the room (or on a phone call) who listens to the questions and provides answers. In extreme cases, this is full proxy fraud: a different person takes the interview entirely while the candidate's face appears on camera.
Why it feels justified: "I know the material, I just interview poorly." Some candidates genuinely believe they would perform well on the job but struggle with interview anxiety. Having a coach in the room feels like accommodation rather than fraud.
How it gets caught: Multi-voice detection catches cases where a second voice is present in the room audio. For proxy fraud, resume-based personalization creates a trap: the AI asks follow-up questions about specific projects and experiences listed on the candidate's resume. A proxy who did not live those experiences struggles to maintain consistency across deep follow-ups.
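A toy sketch of the multi-voice check: compare each utterance's voice embedding against the candidate's enrolled voiceprint and flag any utterance that diverges. The vectors and threshold here are placeholders; a real pipeline would get embeddings from a speaker-diarization or speaker-verification model:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def second_voice_present(reference: list[float],
                         utterances: list[list[float]],
                         threshold: float = 0.8) -> bool:
    """Flag the interview if any utterance embedding diverges from the
    candidate's enrolled voiceprint. Embeddings here are toy 3-d vectors;
    real speaker embeddings are high-dimensional model outputs."""
    return any(cosine(reference, u) < threshold for u in utterances)

ref = [0.9, 0.1, 0.2]
same = [[0.88, 0.12, 0.21], [0.91, 0.09, 0.19]]
mixed = same + [[0.1, 0.9, 0.3]]  # a different voice mid-interview
print(second_voice_present(ref, same))   # False
print(second_voice_present(ref, mixed))  # True
```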
In a Gartner survey, 6% of candidates admitted to interview impersonation or proxy fraud. The actual rate is likely higher, but this method's low share (3%) in Fabric's data suggests that the combination of video presence, voice detection, and resume-based questioning makes it difficult to execute at scale.
The Psychology Behind Why Cheating Feels Rational
Understanding why candidates cheat is as important as understanding how.
59% of hiring managers suspect candidates are using AI to misrepresent themselves, per HireTruffle. The suspicion is warranted, but so is the candidate's logic.
The job market for junior candidates is brutal. Hundreds of applications yield a handful of interviews. Each interview carries outsized stakes. A $20 to $50 monthly subscription to a cheating tool seems trivially cheap compared to the salary difference between getting and not getting the job.
This is a classic prisoner's dilemma. If every candidate cheats, the honest ones lose. If nobody cheats, the system works. But nobody can guarantee what the other candidates are doing. So the individually rational choice is to cheat, even though the collectively rational outcome is honesty.
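The calculation can be made concrete with a back-of-the-envelope expected-payoff model. All numbers below are illustrative assumptions, not measured values; the point is how the detection rate flips the rational choice:

```python
def expected_payoff(cheat: bool, p_caught: float,
                    value: float = 100.0, edge: float = 20.0,
                    penalty: float = 150.0) -> float:
    """Expected payoff for one candidate, holding rivals fixed.

    `value` is the benefit of landing the job, `edge` the boost
    cheating gives an uncaught candidate, `penalty` the cost of a
    cheating flag that follows them. All values are illustrative.
    """
    if not cheat:
        return value
    return (value + edge) * (1 - p_caught) - penalty * p_caught

# Weak detection: cheating has the higher expected payoff.
print(expected_payoff(True, p_caught=0.05) > expected_payoff(False, 0.05))  # True
# Reliable detection: honesty dominates.
print(expected_payoff(True, p_caught=0.5) > expected_payoff(False, 0.5))    # False
```

Under these toy numbers, the break-even detection rate is what matters: raise the perceived probability of getting caught and the incentive to cheat evaporates, which is the argument of the next paragraphs.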
Google and McKinsey reintroduced mandatory in-person interviews by mid-2025, per CNBC. This is the blunt-force response to the cheating problem: eliminate remote interviews entirely. It works, but it sacrifices the speed, scale, and accessibility advantages that made remote hiring attractive.
The better approach is detection that makes cheating reliably detectable, which changes the risk-reward calculation. If candidates believe cheating will be caught, the rational choice shifts back to honesty.
What Hiring Teams Should Do About It
The cheating problem does not have a single solution. It requires layered defenses that make cheating both harder to execute and riskier to attempt.
Use platforms with behavioral detection. Tab-switch monitoring catches only the 18% of cases that rely on a second tab or screen. Behavioral signal analysis covers the other 82%. For a breakdown of how AI interviews work and what evaluation looks like, see our complete guide.
Randomize questions. Fixed question sets are a gift to cheating tool databases. Randomized questions from a large pool, combined with adaptive follow-ups, make pre-generated answers useless.
Design for depth, not breadth. A cheating tool can generate a surface-level answer to any question. It struggles with the fourth follow-up question that asks the candidate to reconcile their answer with a constraint they mentioned earlier. Depth exposes genuine understanding.
Review flagged evidence. Fabric flags cheating with timestamped evidence and a 3-5% false positive rate. Do not auto-reject. Review the evidence, give borderline cases the benefit of the doubt, and make final decisions with human judgment.
Communicate the policy. Tell candidates before the interview that cheating detection is active. This shifts the cost-benefit analysis before they even open a cheating tool.
Fabric handles detection automatically across all four cheating categories. Try a free interview to see the detection in action, or book a demo to discuss how the integrity system fits your hiring process.
FAQ
What percentage of candidates cheat in AI interviews?
Fabric's analysis of 19,368 interviews found 38.5% triggered cheating flags. Surveys suggest 83% would cheat if they believed they could avoid detection. The gap between willingness and action is narrowing as cheating tools become easier to use.
What are the most common cheating methods?
Dedicated tools like Cluely and Interview Coder account for 45% of cases. Voice mode LLMs (ChatGPT, Gemini) make up 34%. Tab switching and second screens account for 18%. Live help from another person is 3%.
Can employers tell when a candidate is using ChatGPT during an interview?
Yes. AI-assisted responses create detectable patterns in response timing, speech characteristics, and eye movement. The timing signature alone (consistent delay regardless of question difficulty) is a strong indicator. Platforms like Fabric analyze 20+ behavioral signals simultaneously.
Should companies reject all flagged candidates?
No. Fabric's false positive rate is 3-5%, meaning some flagged candidates were not cheating. Review timestamped evidence for each case. Auto-rejection risks eliminating honest candidates, while evidence review allows informed decisions.
Is it worth cheating in an AI interview?
From a risk perspective, no. Modern detection systems flag cheating with high accuracy, and a cheating flag on your record is worse than a low interview score. Companies share candidate data, and a cheating flag can follow you across applications.
