Most hiring teams think they have a volume problem.
Too many applications.
Not enough time.
So the obvious fix is automation.
Score candidates. Rank them. Filter them. Move faster.
On the surface, it works.
But here is the issue.
In trying to handle scale, many hiring teams have quietly crossed a line they do not fully understand.
They have moved from using automation to support decisions
to letting automation make decisions.
And that is exactly where GDPR Article 22 comes in.
It is not new. It has been in place since 2018.
But it is suddenly very relevant, because modern hiring processes are now doing exactly what it was designed to prevent:
making decisions about people without meaningful human involvement.
This is not a theoretical risk.
It is already happening in many assessment-scoring, CV-filtering, and AI-driven screening tools.
And most teams would struggle to explain, clearly and confidently, how their current process avoids it.
This article breaks it down in simple terms:
- what Article 22 actually says
- where hiring teams are getting it wrong
- and what a defensible, practical approach looks like today
Because this is not just a compliance issue.
It is a design flaw in how hiring works.
What GDPR Article 22 actually is
Article 22 gives people the right:
not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
In recruitment, that is very straightforward.
A “significant decision” includes:
- being rejected for a job
- being filtered out before interview
- being blocked from progressing
So if a system is making those decisions on its own, you are in scope.
What “solely automated decision-making” really means
This is where most teams get it wrong.
A process is solely automated if:
- no human reviews the decision before it is applied
- the system output is treated as final
- a recruiter is not actively checking or challenging outcomes
Designing the model is not enough.
Setting thresholds is not enough.
👉 A human has to be involved in the actual decision, not just the setup.
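To make that concrete, here is a minimal Python sketch of the two patterns. The threshold, field names, and functions are illustrative assumptions, not any real tool's API; the point is where the decision actually gets made.

```python
from dataclasses import dataclass

PASS_THRESHOLD = 70.0  # illustrative cut-off, not a recommendation

@dataclass
class Candidate:
    id: str
    score: float

def solely_automated(c: Candidate) -> str:
    # The pattern Article 22 targets: the score IS the decision,
    # applied with no human review before it takes effect.
    return "progress" if c.score >= PASS_THRESHOLD else "reject"

def human_in_the_loop(c: Candidate) -> dict:
    # The system only recommends; a recruiter makes or validates
    # the actual decision and can challenge or override the score.
    return {
        "recommendation": "progress" if c.score >= PASS_THRESHOLD else "reject",
        "decision": None,  # set by a recruiter, never by the system
    }
```

In the first pattern, the score is the decision. In the second, it is only an input to one.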
Why GDPR Article 22 matters for recruitment teams now
Because hiring has changed faster than compliance has caught up.
Three shifts:
1) Application volume has exploded
AI makes it easy to apply to dozens of roles quickly
2) Filtering happens earlier
Decisions are being made before any human interaction
3) Automation is being trusted too much
Scores are treated as truth, not signals
The Information Commissioner’s Office has been clear:
automation should support hiring decisions, not replace human judgement
What breaches actually look like in hiring 🚨
This is not theoretical. These patterns are common.
❌ Example 1: Auto rejection after an assessment
- candidate completes an online assessment
- system scores below threshold
- candidate is automatically rejected
- no recruiter ever reviews it
👉 Likely breach of Article 22
❌ Example 2: Automated CV ranking and filtering
- CV parsing tool ranks candidates
- bottom 70 percent are filtered out
- recruiters only see top candidates
👉 If no human reviews the rejected group, this is risky
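Example 2 is the easiest to miss, because nothing is ever explicitly "rejected". A hypothetical sketch of the pattern (the names and fraction are illustrative):

```python
from typing import Callable

def shortlist(candidates: list, score: Callable, keep_fraction: float = 0.3) -> list:
    """Rank everyone, keep the top slice, silently drop the rest."""
    ranked = sorted(candidates, key=score, reverse=True)
    cut = int(len(ranked) * keep_fraction)
    # The bottom 70 percent never reach a recruiter's screen. No one
    # explicitly "decided" to reject them, yet they are out; that
    # invisibility is what makes the pattern risky under Article 22.
    return ranked[:cut]
```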
❌ Example 3: AI video interview scoring
- candidates complete recorded interviews
- AI scores tone, language, answers
- candidates below a score are rejected automatically
👉 High risk if there is no human validation
Real-world signals this is being taken seriously
Regulators have already acted in adjacent areas.
- The Information Commissioner’s Office has issued detailed warnings on recruitment automation and fairness
- European regulators have investigated AI-driven hiring tools for bias and transparency failures
- The Federal Trade Commission has warned companies about automated hiring discrimination
Even where fines are not yet widespread in hiring specifically, the direction is obvious.
What “good” looks like instead
You do not need to remove automation.
You need to use it properly.
A defensible model looks like this (sketched in code after the list):
✔ Automation does the heavy lifting
- scoring
- ranking
- prioritising
✔ Humans make or validate decisions
- especially near cut-off points
- where outcomes are uncertain
- where context matters
✔ Decisions are explainable
- you can say why someone was rejected
- not just “the system said so”
✔ Outcomes are monitored
- are certain groups being filtered out more?
- are humans frequently overriding the system?
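Putting those pieces together, a threshold-band routing rule is one simple way to express this in code. A minimal sketch, assuming an illustrative cut-off and band width (both are assumptions, not recommendations):

```python
from dataclasses import dataclass

PASS_THRESHOLD = 70.0  # illustrative cut-off, not a recommendation
REVIEW_BAND = 10.0     # width of the "context matters" zone around the cut-off

@dataclass
class Candidate:
    id: str
    score: float

def route(c: Candidate) -> str:
    """Automation prioritises; humans make or validate the decision."""
    if abs(c.score - PASS_THRESHOLD) <= REVIEW_BAND:
        return "flag_for_recruiter_review"  # near the cut-off: judgement call
    if c.score > PASS_THRESHOLD:
        return "recommend_progress"         # recruiter confirms progression
    return "recommend_reject"               # never applied without human sign-off
```

The design choice that matters: the system never applies a rejection itself. It recommends, flags, and prioritises, and a recruiter makes or validates the final call, with the most attention going to candidates nearest the cut-off.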
How ThriveMap helps you ensure compliance with Article 22
ThriveMap is designed to support human decision-making, not replace it. Assessments are built by humans around the real requirements of the role, with transparent scoring models and defined weightings. Candidate responses are automatically scored and ranked to handle scale, but decisions are not left entirely to the system. Instead, candidates around key decision thresholds are flagged for recruiter review, where a human can validate, challenge, or override the outcome based on context. All decisions are logged, including where human judgement differs from automated scoring, allowing continuous monitoring of accuracy, bias, and threshold effectiveness. The result is a model where automation improves efficiency, while meaningful human involvement is maintained at the points where decisions have the greatest impact.
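To show what that kind of logging and monitoring could look like in practice, here is a hypothetical sketch. It illustrates the checks described above, not ThriveMap's actual implementation; all names and fields are assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    candidate_id: str
    system_recommendation: str  # "progress" or "reject"
    human_decision: str         # final outcome after recruiter review
    group: str                  # monitoring category, where lawfully collected

def override_rate(log: list[DecisionRecord]) -> float:
    """Share of decisions where the human disagreed with the system."""
    if not log:
        return 0.0
    return sum(r.system_recommendation != r.human_decision for r in log) / len(log)

def rejection_rate_by_group(log: list[DecisionRecord]) -> dict[str, float]:
    """Are certain groups being filtered out more than others?"""
    counts = defaultdict(lambda: [0, 0])  # group -> [rejected, seen]
    for r in log:
        counts[r.group][0] += r.human_decision == "reject"
        counts[r.group][1] += 1
    return {g: rejected / seen for g, (rejected, seen) in counts.items()}
```

Two numbers worth watching: a persistently high override rate suggests the scoring or thresholds need revisiting, and a skewed rejection rate by group suggests the process may be filtering unfairly.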
ThriveMap is designed to support human decision making, not replace it. Assessments are built by humans around the real requirements of the role, with transparent scoring models and defined weightings. Candidate responses are then automatically scored and ranked to handle scale, but decisions are not left entirely to the system. Instead, candidates around key decision thresholds are flagged for recruiter review, where a human can validate, challenge, or override the outcome based on context. All decisions are logged, including where human judgement differs from automated scoring, allowing continuous monitoring of accuracy, bias, and threshold effectiveness. This creates a model where automation improves efficiency, while meaningful human involvement is maintained at the points where decisions have the greatest impact.