There’s a narrative building in hiring circles right now: AI means candidates can cheat on assessments. Take a photo, upload it to ChatGPT, get the perfect answer. Therefore, online assessments are broken.
It sounds urgent. It’s also not really true.
Candidates have always been able to cheat. They could ask a friend to sit the assessment. Google the answers. Share questions on Reddit. Reapply having seen it all before. None of this required AI. All AI has done is make that “friend” faster, cheaper, and available at 2am.
The underlying problem hasn’t changed. If your assessment can be outsourced, it always could.
This is a design problem, not an AI problem
Here’s the bit nobody wants to say out loud: some assessments were already fragile. AI has just made it obvious.
If your assessment uses generic, reusable questions with a clear “right” answer, tests theory rather than application, and sits entirely in the abstract, then yes, it’s easier to game now. But that doesn’t mean assessments are broken. It means those assessments are. There’s a difference.
The better question isn’t “can candidates cheat?” It’s “can this assessment be easily completed by someone, or something, other than the candidate?” Because if it can, you’re not measuring the person. You’re measuring their access to help.
Better design beats better policing
The instinct is to lock things down. Proctoring. Time limits. Browser restrictions. But that’s treating the symptom.
What actually makes an assessment hard to game is specificity. Assessments that are built around a real role, in a real company, with real trade-offs. Where there isn’t a clean “right” answer, just a series of judgement calls that reveal how someone actually thinks.
Real jobs involve incomplete information, competing priorities, and context that matters. AI is excellent at producing idealised answers. It’s far less useful when the question is messy, situational, and specific to an environment it’s never seen.
What’s actually happening here
This isn’t the death of assessments. It’s the death of lazy ones.
Generic, off-the-shelf, one-size-fits-all assessments were already a weak proxy for job performance. AI has just accelerated their irrelevance.
And that’s probably a good thing. Because hiring shouldn’t be about who can reverse-engineer what you’re looking for. It should be about who can actually do the job and who understands what it involves.
AI hasn’t broken hiring assessments. It’s raised the bar on what counts as a good one.
If your assessment could always be outsourced, that was always the problem. AI has just made it impossible to ignore.