GDPR Article 22 for hiring: Automated decision making explained

4 minute read

Posted by Emily Hill on 7 May 2026

Most hiring teams think they have a volume problem.

Too many applications.
Not enough time.
So the obvious fix is automation.

Score candidates. Rank them. Filter them. Move faster.

On the surface, it works.

But here is the issue.

In trying to handle scale, many hiring teams have quietly crossed a line they do not fully understand.

They have moved from using automation to support decisions
to letting automation make decisions.
And that is exactly where GDPR Article 22 comes in.

It is not new. It has been in place since 2018.

But it is suddenly very relevant, because modern hiring processes are now doing exactly what it was designed to prevent:

making decisions about people without meaningful human involvement

This is not a theoretical risk.

It is already happening in assessment scoring, CV filtering, and AI-driven screening tools.

And most teams would struggle to explain, clearly and confidently, how their current process avoids it.

This article breaks it down in simple terms:

  • what Article 22 actually says
  • where hiring teams are getting it wrong
  • and what a defensible, practical approach looks like today

Because this is not just a compliance issue.

It is a design flaw in how hiring works.

What GDPR Article 22 actually is

Article 22 gives people the right:

not to be subject to decisions made solely by automated processing if those decisions significantly affect them

In recruitment, that is very straightforward.

A “significant decision” includes:

  • being rejected for a job
  • being filtered out before interview
  • being blocked from progressing

So if a system is making those decisions on its own, you are in scope.

What “solely automated decision making” really means

This is where most teams get it wrong.

A process is solely automated if:

  • no human reviews the decision before it is applied
  • the system output is treated as final
  • a recruiter is not actively checking or challenging outcomes

Designing the model is not enough.
Setting thresholds is not enough.

👉 A human has to be involved in the actual decision, not just the setup.
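To make that concrete, here is a minimal sketch of what "involved in the actual decision" means in practice. This is illustrative Python with hypothetical names, not any particular vendor's API: the system only ever produces a recommendation, and nothing is applied until a named human reviews it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """What the system produces: a suggestion, not a decision."""
    candidate_id: str
    score: float
    suggested_outcome: str  # e.g. "advance" or "reject"

@dataclass
class Decision:
    candidate_id: str
    outcome: str
    reviewed_by: str  # the named human who made or validated it

def finalise(rec: Recommendation, reviewer: Optional[str],
             override: Optional[str] = None) -> Decision:
    """Refuse to apply any outcome without a human reviewer.
    The reviewer can accept the suggestion or override it."""
    if reviewer is None:
        raise ValueError("no human reviewer: outcome cannot be applied")
    return Decision(rec.candidate_id, override or rec.suggested_outcome, reviewer)
```

The point is structural: the system's output type is a recommendation, and only a human can turn it into a decision that is recorded against their name.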

Why GDPR Article 22 matters for recruitment teams now

Because hiring has changed faster than compliance has caught up.

Three shifts:

1) Application volume has exploded

AI makes it easy to apply to dozens of roles quickly

2) Filtering happens earlier

Decisions are being made before any human interaction

3) Automation is being trusted too much

Scores are treated as truth, not signals

The Information Commissioner’s Office has been clear:

automation should support hiring decisions, not replace human judgement

What breaches actually look like in hiring 🚨

This is not theoretical. These patterns are common.

❌ Example 1: Auto rejection after an assessment

  • candidate completes an online assessment
  • system scores below threshold
  • candidate is automatically rejected
  • no recruiter ever reviews it

👉 Likely breach of Article 22

❌ Example 2: CV screening tools with no review

  • CV parsing tool ranks candidates
  • bottom 70 percent are filtered out
  • recruiters only see top candidates

👉 If no human reviews the rejected group, this is risky

❌ Example 3: AI video interview scoring

  • candidates complete recorded interviews
  • AI scores tone, language, answers
  • candidates below a score are rejected automatically

👉 High risk if there is no human validation

Real world signals this is being taken seriously

Regulators have already acted in adjacent areas.

  • The Information Commissioner’s Office has issued detailed warnings on recruitment automation and fairness
  • European regulators have investigated AI-driven hiring tools for bias and transparency failures
  • The Federal Trade Commission has warned companies about automated hiring discrimination

Even where fines are not yet widespread in hiring specifically, the direction is obvious.

What “good” looks like instead

You do not need to remove automation.

You need to use it properly.

A defensible model looks like:

✔ Automation does the heavy lifting

  • scoring
  • ranking
  • prioritising

✔ Humans make or validate decisions

  • especially near cut-off points
  • where outcomes are uncertain
  • where context matters
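One simple way to build that in, sketched in Python with illustrative numbers (the cut-off and band values here are hypothetical, not recommendations): only clear passes move automatically, and anything near the line is routed to a person.

```python
def route(score: float, cutoff: float = 0.6, band: float = 0.1) -> str:
    """Route a scored candidate.
    Clear passes advance automatically; scores near the cut-off go to
    human review; clear fails are queued for a recruiter to confirm
    rather than being auto-rejected."""
    if score >= cutoff + band:
        return "advance"
    if score <= cutoff - band:
        return "recruiter_confirms_rejection"
    return "human_review"
```

This keeps the efficiency of automation while concentrating human judgement exactly where outcomes are most uncertain.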

✔ Decisions are explainable

  • you can say why someone was rejected
  • not just “the system said so”

✔ Outcomes are monitored

  • are certain groups being filtered out more?
  • are humans frequently overriding the system?
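Both of those monitoring questions are easy to automate. A rough sketch (Python, hypothetical data shapes), using the common four-fifths heuristic as one way to flag groups being filtered out at a disproportionate rate:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, advanced: bool). Returns rate per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths heuristic)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best and r / best < threshold]

def override_rate(decisions):
    """decisions: list of (system_outcome, human_outcome).
    A high override rate suggests the thresholds need recalibrating."""
    overrides = sum(1 for s, h in decisions if s != h)
    return overrides / len(decisions) if decisions else 0.0
```

If `flag_disparities` keeps returning the same group, or `override_rate` climbs, that is the system telling you its scoring no longer matches human judgement.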

How ThriveMap helps you to ensure compliance with Article 22

ThriveMap is designed to support human decision making, not replace it. Assessments are built by humans around the real requirements of the role, with transparent scoring models and defined weightings. Candidate responses are then automatically scored and ranked to handle scale, but decisions are not left entirely to the system. Instead, candidates around key decision thresholds are flagged for recruiter review, where a human can validate, challenge, or override the outcome based on context. All decisions are logged, including where human judgement differs from automated scoring, allowing continuous monitoring of accuracy, bias, and threshold effectiveness. This creates a model where automation improves efficiency, while meaningful human involvement is maintained at the points where decisions have the greatest impact.
