Let’s get one thing clear upfront: yes, you can measure whether a hiring assessment is working. But the way most organisations try to do it is completely backwards.
There’s a dangerous obsession in modern hiring: the belief that every decision must be justified by a tidy line on a graph. Nowhere is this more common than in how companies try to “validate” pre-hire assessments.
Here’s the game as it’s commonly played:
Run an assessment. Wait six months. Pull performance data from managers. Try to correlate the original assessment score with performance outcomes. If the numbers don’t line up neatly? The assessment is declared ineffective. Cancelled. Scrapped. On to the next shiny solution.
On the surface, this feels logical. Empirical. Scientific. But probe just a little and it collapses under its own weight.
The Illusion of Clean Correlation
The first problem is what happens between assessment and performance review. It’s not just a gap; it’s a noisy, messy, uncontrolled experiment.
You’re dealing with:
- Manager bias and inconsistency in ratings
- Contextual factors like team dynamics, onboarding quality, and resourcing
- Varying definitions of “performance” (sometimes it’s a KPI, sometimes it’s “gets on well with clients”)
Even if performance data is partially objective (targets hit, deals closed), the decision to rate someone highly is rarely made in a vacuum. It’s a stew of formal data, gut feeling, and internal politics. Expecting that to correlate neatly with an assessment score from six months ago is… optimistic.
And Then There’s the Sample Bias
By the time someone’s getting a performance review, they’ve already made it through a heavily filtered hiring funnel. Your assessment has probably screened out 90-95% of applicants. You’re now left with the top 5-10%, sometimes the top 1%.
So when someone says, “but we didn’t see a strong correlation between assessment score and performance rating,” what they’re really saying is: “we tried to rank elite candidates against each other and got murky results.”
Which should surprise absolutely no one. You’ve already narrowed the field to people who are probably the best fit. Psychometricians call this range restriction: truncate a distribution to its top slice, and correlations within that slice shrink even when the underlying relationship is strong.
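You can watch range restriction happen in a toy simulation. Everything below is invented – a latent ability driving both scores, arbitrary noise levels, a 5% hire rate – but the pattern is general: the assessment genuinely predicts performance across all applicants, yet looks far weaker among the people you actually hired.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated applicants

# One latent trait drives both the assessment score and later job
# performance; each is observed with its own noise, and manager
# ratings are the noisier of the two.
ability = rng.normal(0, 1, n)
assessment = ability + rng.normal(0, 0.5, n)
performance = ability + rng.normal(0, 1.0, n)

# Correlation across ALL applicants: clearly strong.
r_all = np.corrcoef(assessment, performance)[0, 1]
print(f"All applicants: r = {r_all:.2f}")

# Now hire only the top 5% by assessment score and re-run it.
hired = assessment >= np.quantile(assessment, 0.95)
r_hired = np.corrcoef(assessment[hired], performance[hired])[0, 1]
print(f"Hired top 5%:   r = {r_hired:.2f}")
```

On this setup the first figure comes out around 0.6 and the second around 0.3: same assessment, same underlying relationship, half the apparent signal, purely because of who is left in the sample.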
Hiring Is About Probability, Not Certainty
The core mistake in assessment validation is treating hiring like physics: predictable, repeatable, and precise. In reality, hiring is messy. It’s a game of influencing probability, not delivering guaranteed outcomes.
The point of a good assessment isn’t to find the “perfect” hire with mathematical precision. It’s to shift the odds in your favour. To move from a 1-in-5 chance of success to 1-in-2. To reduce chaos, not eliminate it.
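To put rough numbers on that shift, here’s a hypothetical back-of-envelope; the hiring volume and cost per mis-hire below are invented, so substitute your own.

```python
# Hypothetical figures: what does moving the success rate from
# 1-in-5 to 1-in-2 mean across a year of hiring?
hires_per_year = 20
cost_of_mis_hire = 30_000  # replacement, ramp time, disruption (assumed)

for success_rate in (0.2, 0.5):
    mis_hires = hires_per_year * (1 - success_rate)
    print(f"Success rate {success_rate:.0%}: ~{mis_hires:.0f} mis-hires, "
          f"~£{mis_hires * cost_of_mis_hire:,.0f} at risk")
```

The exact figures don’t matter; the point is that shifting the base rate is where the value lives.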
Think about it like employer branding. You don’t invest in it because it guarantees great applicants. You invest because it increases the likelihood that better people will apply. Assessments work the same way. They increase the chances that the right people progress through the hiring funnel, and that the wrong ones opt out before wasting your team’s time.
So What Is Validity?
We often get asked: Is your assessment valid?
The answer is: yes, if it’s working.
And yes, you can measure whether it’s working. But you need to measure the right things.
For example, is your assessment helping you to:
- Reduce new hire attrition?
- Improve the overall performance of your new starters?
- Shorten time to decision?
- Give candidates a more engaging and realistic hiring experience?
If so, your assessment is valid. Because that’s what validation should mean: a tool that helps you make better decisions and achieve better outcomes. Not whether Simon, who scored 84%, outperformed Priya, who scored 78%, on a manager rating that fluctuates depending on whether the KPIs were set in January or July.
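Measuring those outcomes is not exotic. Here’s a minimal sketch, assuming nothing more than an export from your ATS or HRIS; the Hire record and both cohorts are invented placeholders, comparing hires made before and after the assessment went live.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class Hire:
    applied: date                 # application date
    decided: date                 # offer/reject decision date
    still_employed_at_12mo: bool  # simple attrition flag

def time_to_decision_days(cohort: list[Hire]) -> float:
    return mean((h.decided - h.applied).days for h in cohort)

def retention_12mo(cohort: list[Hire]) -> float:
    return sum(h.still_employed_at_12mo for h in cohort) / len(cohort)

# Invented placeholder cohorts: before vs after the assessment went live.
before = [Hire(date(2023, 1, 9), date(2023, 2, 20), False),
          Hire(date(2023, 3, 6), date(2023, 4, 10), True)]
after = [Hire(date(2024, 1, 8), date(2024, 1, 26), True),
         Hire(date(2024, 2, 5), date(2024, 2, 23), True)]

print(f"Time to decision: {time_to_decision_days(before):.0f}d -> "
      f"{time_to_decision_days(after):.0f}d")
print(f"12-month retention: {retention_12mo(before):.0%} -> "
      f"{retention_12mo(after):.0%}")
```

Track the same handful of numbers for new-starter performance and candidate feedback and you have an outcome-level validity case without ever ranking Simon against Priya.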
The truth is: good assessments don’t create certainty. They reduce chaos. They clear the path. They tilt the odds in your favour.
Embracing the Mess
If we want hiring to be more effective, we need to get comfortable with a messier truth: Success in hiring is not a function of measuring the perfect variable. It’s the result of influencing the right conditions, over time, with enough honesty, discipline and curiosity to keep learning.
To put it another way: hiring is the act of selecting raw material, not ready-made products. A good assessment increases the probability of picking high-potential material. But what you do after the hire – onboarding, training, managing – still makes or breaks the result.
As the behavioural thinker Rory Sutherland might put it, we need “less spreadsheet theatre and more probabilistic thinking”. And that starts by asking better questions, not just running the numbers you already have.