
March 15, 2026

How to Make AI Hiring Decisions Explainable to Candidates (Legally and Ethically)
Section 1: Why Explainability in AI Hiring Is Becoming Mandatory
For a long time, companies treated their AI hiring tools as a black box. You fed in résumés, scores came out, and candidates were either advanced or declined. No one asked too many questions — not HR, not legal, and not the candidates themselves. That era is ending, and it is ending fast.
AI hiring explainability — the ability to clearly describe why an AI system made a particular hiring decision — is no longer just an ethical nicety. Across the US, EU, and UK, regulators are building frameworks that require employers to be able to explain, audit, and in some cases justify AI-assisted decisions to the people those decisions affect.
This isn't only about compliance, either. Candidates today are sophisticated. They read the news. They know AI is involved in hiring. When someone gets a rejection and suspects — rightly or wrongly — that an algorithm filtered them out unfairly, they talk about it. They post about it. They file complaints. And if your organization cannot explain what happened and why, that silence reads as guilt.
The good news is that explainability is achievable. It doesn't require you to dismantle your AI stack or go back to manual screening. It requires a deliberate process, clear communication, and in some jurisdictions, a bit of legal infrastructure. This guide walks through all of it — from what the law actually requires to what a good AI explanation looks like in practice.
📊 The numbers make the case clearly:
• 83% of large US employers now use some form of AI or algorithmic tool in hiring (Harvard Business School, 2022)
• 67% of candidates say they want to understand how AI was used in the process that evaluated them (IBM Institute for Business Value, 2023)
• New York City's Local Law 144 — the first of its kind — took effect in 2023 and requires bias audits and candidate notices for AI hiring tools
• Under GDPR Article 22, EU residents have a legal right to human review of fully automated decisions that significantly affect them
• The EEOC has explicitly stated that employers cannot use the "vendor made us do it" defense when AI screening results in discriminatory outcomes
Section 2: What Explainability Means in AI Hiring (Legal + Practical)
When people talk about AI hiring explainability, they sometimes mean different things. Let's be precise, because the differences matter — both for compliance and for how you actually build a process around it.
At its most basic level, explainability means being able to answer the question: "Why did the AI score or rank this candidate this way?" That sounds simple. In practice, it involves three distinct layers.
Technical explainability
This is about understanding what the model is actually doing internally — which features or signals it weighed most heavily, how those features relate to the target outcome, and whether the model's reasoning is coherent. This level is primarily for your data team, your AI vendor, and your legal and compliance functions. Candidates don't need this. Auditors might.
Operational explainability
This is for your HR and recruiting teams. It means they understand, at a functional level, what the AI is evaluating, what thresholds it uses, what the scores mean, and where human judgment takes over. If your recruiters can't explain to a hiring manager how the tool works, they certainly can't explain it to a candidate.
Candidate-facing explainability
This is what this guide focuses on most heavily. It means giving candidates a clear, honest, human-readable account of how AI was used in their evaluation and, where relevant, what factors influenced the outcome. It doesn't mean exposing proprietary model architecture. It means respecting the person enough to tell them something meaningful rather than nothing.
The legal frameworks discussed in Section 3 primarily operate at the candidate-facing level, though they implicitly require technical and operational explainability to function properly.
Section 3: When You Are Legally Required to Explain AI Decisions
The legal landscape around AI hiring explainability varies significantly by geography. Here is a clear breakdown of the key frameworks and what they actually require.
United States: EEOC and NYC Local Law 144
At the federal level, no single US law requires employers to explain AI hiring decisions. However, the Equal Employment Opportunity Commission (EEOC) has made clear that existing anti-discrimination law — particularly Title VII — applies fully to AI hiring tools. If an AI screening tool creates disparate impact against a protected class, the employer is liable, regardless of whether the tool was built in-house or purchased from a vendor.
The EEOC's 2023 technical assistance guidance explicitly stated that employers should conduct ongoing bias testing of their AI tools and be prepared to explain and defend selection procedures. This isn't legally binding in the same way a statute is, but it signals enforcement posture clearly.
New York City's Local Law 144 is the most concrete AI hiring regulation in the US. It applies to employers and employment agencies that use automated employment decision tools (AEDTs) to screen candidates for jobs in New York City. Key requirements include an annual bias audit by an independent auditor, public posting of the audit results, and advance notice to candidates that an AEDT is being used — with information about the characteristics or categories used.
Several other US states, including California, Illinois, Maryland, and Washington, have passed or are advancing similar AI hiring legislation. The direction of travel is clear: expect disclosure and audit requirements to expand.
European Union: GDPR Article 22 and the EU AI Act
GDPR Article 22 gives EU data subjects the right not to be subject to a decision based solely on automated processing when that decision produces legal or similarly significant effects on them. Hiring decisions qualify. This means that if your AI system makes a fully automated decision — with no meaningful human involvement — EU candidates have the right to request human review, receive an explanation of the decision, and contest the outcome.
The explanation you're required to provide under GDPR must be meaningful. The Article 29 Working Party (now the European Data Protection Board) has been explicit that a vague statement like "your application was processed by an automated system" does not satisfy the obligation. Candidates are entitled to understand the logic involved — not necessarily the algorithm's source code, but the key factors that influenced the decision.
The EU AI Act, which entered into force in 2024, classifies AI systems used in employment as high-risk. This carries additional obligations: transparency to candidates, logging and documentation requirements, human oversight mechanisms, and accuracy and robustness testing. Employers who deploy third-party AI hiring tools will need to ensure their vendors comply with these obligations and provide the documentation necessary to do so.
United Kingdom
Post-Brexit, the UK retained a version of GDPR (UK GDPR) with Article 22 rights broadly intact. The ICO (Information Commissioner's Office) has published guidance on explaining AI decisions that is practical and worth reading directly. UK employers using AI in hiring should have a documented explanation process, ensure candidates are informed, and have a clear route for candidates to request human review of automated decisions.
The UK Equality Act 2010 also applies: AI tools cannot be used in ways that produce discriminatory outcomes, and employers remain responsible for demonstrating that their selection processes are non-discriminatory.
"The question isn't whether your AI hiring tool is biased. The question is whether you'd be able to prove it isn't — to a regulator, a candidate, or a jury."
— Employment law practitioner, speaking at the 2023 SHRM Annual Conference
Section 4: Transparency vs Explainability (Critical Difference)
These two words get used interchangeably, but they are not the same thing, and conflating them leads to compliance gaps that can hurt you.
Transparency is disclosure. It means telling candidates that AI is involved in your process, what type of tool is being used, and broadly what it does. Under NYC Local Law 144, for example, the notice requirement is primarily a transparency obligation. You're telling people something is happening. You're not yet explaining how a specific outcome was reached.
Explainability goes further. It means giving a candidate a meaningful account of why their specific application was assessed the way it was. It requires being able to trace a decision back through the process and communicate the key factors — in terms the candidate can understand and, if necessary, challenge.
A company can be fully transparent (it publishes that it uses AI screening) while being completely unexplainable (no one can actually tell a candidate why they were rejected). This gap is where legal risk accumulates.
The best-run organizations treat transparency as the floor and explainability as the standard. They go beyond notifying candidates that AI exists and build processes that can actually account for what the AI did.
Think of it like a bank loan. Transparency is the sign in the window that says "credit decisions are made by algorithm." Explainability is the letter that tells you your application was declined because of your debt-to-income ratio and the length of your credit history. One is a disclosure. The other is an account. Candidates deserve accounts.
Section 5: Why Most Companies Fail at AI Explainability
Despite growing legal pressure and rising candidate expectations, the majority of employers using AI hiring tools cannot actually explain those tools' decisions to candidates in any meaningful way. Here's why.
They don't fully understand their own tools
Many HR teams bought AI screening tools because the sales pitch was compelling, the demo looked slick, and integration was relatively painless. But they never fully understood what features the model evaluates, how it weights them, or what "passing" a screen actually means. If your team can't explain the tool to themselves, they can't explain it to candidates.
They confuse process transparency with outcome explainability
Most candidate-facing communications are written at the process level: "We use an automated screening tool to evaluate all applications." That tells a candidate nothing about their specific outcome. It's the equivalent of a hospital saying "we use diagnostic equipment" and leaving it there.
Their vendors don't provide explanation infrastructure
Some AI hiring vendors treat their models as proprietary black boxes. They may provide aggregate bias audit results (under legal pressure) but no mechanism for per-candidate explanation. If your vendor can't tell you why a specific candidate scored the way they did, you cannot tell the candidate either.
There's no internal accountability for explainability
In most organizations, AI hiring tools fall into a gap between IT (who manages the vendor), HR (who uses the output), and Legal (who handles complaints). No one owns explainability as a function. So when a candidate asks why they were rejected, the question bounces around and usually produces a generic, unhelpful response.
They fear explaining because they don't trust their tool
This is the most uncomfortable reason of all. Some organizations know their AI tools produce questionable results and have made a calculation that saying nothing is safer than saying something that might expose them. This is a short-term legal strategy with long-term reputational and liability consequences.
Section 6: How to Design an Explainable AI Hiring Process
Building explainability into your AI hiring process isn't a one-time project. It's an architecture decision that touches your tooling, your workflows, your communications, and your governance. Here's how to approach it.
Start with documentation at the point of deployment
Before your AI tool touches a single application, you should be able to document: what it evaluates, what signals or features it uses, what the scoring or ranking output means, what threshold decisions have been set and by whom, and what human review process follows AI screening. If you cannot complete this documentation, you are not ready to deploy the tool in a legally defensible way.
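One lightweight way to enforce this is to treat the documentation as a structured artifact rather than a wiki page, so deployment can be blocked until every question has an answer. Here is a minimal sketch in Python; all of the field names are hypothetical, and your own record should mirror whatever your governance policy actually requires.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolDeploymentRecord:
    """Hypothetical pre-deployment documentation for an AI hiring tool.

    If any field cannot be completed, the tool is not ready to screen
    live applications in a legally defensible way.
    """
    tool_name: str                   # vendor and product name
    tool_version: str                # model/version identifier
    evaluates: list[str]             # what the tool assesses, in plain terms
    signals_used: list[str]          # features/signals the vendor says it uses
    output_meaning: str              # what the score or ranking actually means
    thresholds: dict[str, float]     # threshold decisions currently in effect
    threshold_set_by: str            # who set the thresholds, and when
    human_review_step: str           # the human review that follows AI screening
    documented_on: date = field(default_factory=date.today)

    def is_deployment_ready(self) -> bool:
        """True only if every documentation question has a real answer."""
        return all([
            self.tool_name, self.tool_version, self.evaluates,
            self.signals_used, self.output_meaning, self.thresholds,
            self.threshold_set_by, self.human_review_step,
        ])
```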
Build explanation generation into your vendor contract
When you procure or renew an AI hiring tool, require the vendor to provide per-candidate explanation outputs — not just aggregate statistics. This might look like a plain-text summary of the top factors that influenced a candidate's score, or a score breakdown across evaluated dimensions. Make this a contractual requirement, not a request.
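It also helps to specify, in the contract itself, the shape of the per-candidate output you expect. The sketch below is one possible shape, not any real vendor's API; every field name is illustrative.

```python
from dataclasses import dataclass

@dataclass
class CandidateExplanation:
    """Illustrative per-candidate explanation payload to require from a vendor."""
    candidate_id: str
    overall_score: float
    dimension_scores: dict[str, float]  # e.g. {"relevant_experience": 0.7}
    top_factors: list[str]              # plain-text factors that most shaped the score
    narrative_summary: str              # human-readable, suitable for candidate-facing use
    model_version: str                  # so the decision traces to a specific model
```

If a vendor cannot commit to producing something like this for every screened candidate, that tells you a great deal about how explainable their tool really is.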
Designate an explainability owner
Someone in your organization needs to own AI explainability as a responsibility. This person — whether a senior HR business partner, a legal counsel, or a dedicated AI governance role — is accountable for ensuring the explanation process works, for fielding candidate challenges, and for keeping documentation current as tools evolve.
Separate AI screening from final decisions with a clear handoff
One of the most effective explainability architectures is one where AI is explicitly advisory, not determinative. The AI scores and ranks. A human reviews and decides. This separation means you can explain AI outputs without claiming they were the final word, which significantly reduces legal exposure under GDPR Article 22 and similar frameworks.
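That separation is easier to prove if it is modeled in your workflow itself: the AI step can record a score but can never move an application to a final state. A minimal sketch, assuming hypothetical stage names and a simple application record:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    APPLIED = "applied"
    AI_SCREENED = "ai_screened"  # AI has scored; nothing is final yet
    ADVANCED = "advanced"
    REJECTED = "rejected"

@dataclass
class Application:
    candidate_id: str
    stage: Stage = Stage.APPLIED
    ai_score: float | None = None
    reviewer: str | None = None  # the human who made the final call

def record_ai_score(app: Application, score: float) -> None:
    """The AI step can only move an application to AI_SCREENED."""
    assert app.stage is Stage.APPLIED
    app.ai_score = score
    app.stage = Stage.AI_SCREENED

def record_human_decision(app: Application, reviewer: str, advance: bool) -> None:
    """Only a named human reviewer can produce a final outcome."""
    assert app.stage is Stage.AI_SCREENED and reviewer, "human review is mandatory"
    app.reviewer = reviewer
    app.stage = Stage.ADVANCED if advance else Stage.REJECTED
```

The workflow guarantees the sequence; your governance still has to guarantee that the human review is meaningful rather than a rubber stamp, which is what GDPR actually requires.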
Audit regularly and document audit results
Explainability is undermined if your AI is producing systematically biased outputs that you can't account for. Regular audits — ideally by an independent party, as required under NYC Local Law 144 — serve two purposes: they catch problems early, and they create the documented evidence that your process is working as intended.
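As a rough illustration of what an audit looks for (and emphatically not a substitute for the independent audit methodology that Local Law 144's implementing rules prescribe), here is the EEOC's four-fifths rule of thumb applied to selection rates by group. All data below is invented.

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio of each group's selection rate vs the highest rate.

    `outcomes` maps group -> (selected, total applicants). Ratios below
    0.8 are a conventional red flag for adverse impact -- a screening
    heuristic, not a legal determination.
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items() if total}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

example = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(example))  # group_b ratio = 0.625 -> investigate
```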
Section 7: Writing AI Hiring Explanations Candidates Understand
The hardest part of AI hiring explainability isn't the legal infrastructure — it's the writing. Giving candidates an explanation that is accurate, meaningful, legally sound, and human-readable all at once requires genuine craft. Here is what good looks like.
Use plain language, not legal or technical jargon
Your explanation should be readable by someone with no technical background. Avoid phrases like "our automated scoring system applied a predictive algorithm." Say instead: "Our screening tool evaluates applications based on job-relevant criteria including experience in [area], specific qualifications listed in the job description, and responses to application questions."
Be specific about what was evaluated
Generic explanations are unhelpful and potentially non-compliant. A good explanation tells the candidate what dimensions were assessed. You don't need to give them a score breakdown if that's proprietary, but you do need to tell them what mattered. "Your application was evaluated on years of relevant experience, educational background, and qualifications specific to this role" is better than "your application did not meet our requirements."
Acknowledge where the AI stopped and the human started
Candidates are entitled to know whether a human was involved in their assessment and at what stage. If AI screening filtered their application and no human reviewed it, say so clearly. If AI produced a ranking and a recruiter made the final call, explain that too. Honesty here builds trust and reduces complaints.
Offer a route to human review
Particularly for EU and UK candidates, your explanation should include a mechanism to request human review of an AI decision. This doesn't need to be a complex process — it can be as simple as a contact email — but it needs to be real and it needs to be staffed.
Avoid language that sounds defensive or dismissive
Phrases like "the system determined you were not a fit" or "our AI objectively assessed your qualifications" are legally and ethically problematic. They imply the AI is infallible and the decision is unchallengeable, which it is not. Use language that is matter-of-fact without being dismissive: "Based on the criteria set for this role, your application did not meet the threshold at this stage. Here is what was evaluated."
Section 8: Real Examples of AI Hiring Decision Explanations
Abstract principles are only useful up to a point. Here are concrete examples of AI hiring explanation language — both poor and good — across different scenarios.
Automated rejection notice
"Thank you for applying. After careful review, we have decided to move forward with other candidates. We appreciate your interest in [Company]."
This tells the candidate nothing. It doesn't acknowledge AI involvement, doesn't explain what was evaluated, and provides no route to challenge or seek clarity.
AI-assisted rejection with basic explanation
"Thank you for applying for the [Role] position. Your application was reviewed using an automated screening tool that evaluates candidates based on qualifications relevant to this role, including years of experience in [field], technical skills listed in the job description, and responses to application questions. On this occasion, your application did not meet the threshold we set for advancing to the next stage. If you believe this assessment does not accurately reflect your qualifications, you may request a human review of your application by contacting [email] within 30 days."
This is significantly better. It discloses AI involvement, describes what was evaluated, is honest about the outcome, and provides a route to human review.
Full candidate-facing explanation (EU-compliant)
"Your application for [Role] at [Company] was evaluated using an automated tool that assesses job-relevant qualifications. The tool evaluated: (1) your stated years of experience in [specific area], (2) the presence of specific qualifications listed as required in the job description — specifically [qualification A] and [qualification B] — and (3) your responses to the screening questions about [topic]. Your application scored below the threshold set by our recruiting team for this stage. This threshold was set based on minimum job requirements and does not assess your overall suitability for a career in this field. No other information was used in this automated assessment. A human recruiter has been notified of all results and can be reached at [email] if you wish to request a review. Under applicable data protection law, you have the right to request that a human re-evaluate your application."
This version is legally robust, genuinely informative, and treats the candidate as an adult. It's the standard organizations should be aiming for.
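If your vendor supplies structured explanation data (the kind of payload described in Section 6), letters like this don't need to be written by hand each time. A minimal sketch of a template renderer, with hypothetical names throughout; generated letters should still be reviewed by a human before sending:

```python
TEMPLATE = """\
Your application for {role} at {company} was evaluated using an automated
tool that assesses job-relevant qualifications. The tool evaluated:
{criteria}
Your application scored below the threshold set by our recruiting team for
this stage. No other information was used in this automated assessment.
A human recruiter can be reached at {contact} if you wish to request a
review. Under applicable data protection law, you have the right to request
that a human re-evaluate your application."""

def render_explanation(role: str, company: str,
                       criteria: list[str], contact: str) -> str:
    """Render a candidate-facing explanation from structured decision data."""
    numbered = "\n".join(f"({i}) {c}" for i, c in enumerate(criteria, start=1))
    return TEMPLATE.format(role=role, company=company,
                           criteria=numbered, contact=contact)

print(render_explanation(
    role="Data Analyst", company="Acme",
    criteria=["stated years of experience in analytics",
              "presence of the SQL qualification listed as required",
              "responses to the screening questions about reporting tools"],
    contact="recruiting@example.com"))
```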
Section 9: What to Do When Candidates Challenge AI Decisions
Even with the best explanation process in place, some candidates will challenge AI hiring decisions. This is not a problem to be avoided — it is a feature of a fair system. Here's how to handle it well.
Take every challenge seriously
The worst thing you can do when a candidate challenges an AI decision is respond with a form letter that restates the original rejection. Challenges deserve genuine human attention. Someone with actual authority over the hiring process — not a junior coordinator — should review the challenge.
Have a documented review process
Your response to AI decision challenges should follow a documented process. That process should include: who receives the challenge, who reviews it, what they review (the AI output, the candidate's full application, the job criteria), what the possible outcomes are (advance the candidate, uphold the rejection with additional explanation, escalate to legal), and what timeline you commit to.
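Encoding that process as a record with required fields and a fixed set of outcomes makes it auditable by construction. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ChallengeOutcome(Enum):
    ADVANCE_CANDIDATE = "advance_candidate"
    UPHOLD_WITH_EXPLANATION = "uphold_with_explanation"
    ESCALATE_TO_LEGAL = "escalate_to_legal"

@dataclass
class ChallengeReview:
    """Log entry for a candidate challenge to an AI decision."""
    candidate_id: str
    received_on: date
    received_by: str               # who received the challenge
    reviewed_by: str               # someone with real authority over hiring
    materials_reviewed: list[str]  # AI output, full application, job criteria
    outcome: ChallengeOutcome
    response_due: date             # the timeline you committed to
    alleges_discrimination: bool = False

    def __post_init__(self) -> None:
        # Per policy: discrimination allegations always go to legal.
        if self.alleges_discrimination:
            self.outcome = ChallengeOutcome.ESCALATE_TO_LEGAL
```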
Document everything
Every AI decision challenge, how it was reviewed, who reviewed it, and what the outcome was should be logged. This documentation serves multiple purposes: it helps you identify patterns that might indicate a systemic problem with your AI tool, and it creates a record that demonstrates good faith in the event of a regulatory inquiry or legal challenge.
Know when to escalate to legal
If a candidate's challenge alleges discrimination — particularly if they are a member of a protected class and allege that the AI treated them differently — escalate to your legal team immediately. Do not attempt to resolve discrimination allegations through standard HR processes without legal oversight.
Use challenges to improve your system
Patterns in candidate challenges are valuable data. If multiple candidates from a particular background are challenging the same decision type, that's a signal worth investigating. Build a quarterly review of challenge patterns into your AI governance process.
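The quarterly review doesn't need heavy tooling; counting challenges by decision type and model version and flagging clusters is a reasonable starting point. A rough sketch using only the standard library, with invented record fields:

```python
from collections import Counter

def challenge_patterns(challenges: list[dict], min_count: int = 3) -> list[tuple]:
    """Flag (decision_type, tool_version) pairs with repeated challenges.

    A cluster of challenges against the same decision type under the
    same model version is a signal worth investigating, not proof of
    a problem.
    """
    counts = Counter((c["decision_type"], c["tool_version"]) for c in challenges)
    return [(key, n) for key, n in counts.most_common() if n >= min_count]

log = [
    {"decision_type": "resume_screen_reject", "tool_version": "2.1"},
    {"decision_type": "resume_screen_reject", "tool_version": "2.1"},
    {"decision_type": "resume_screen_reject", "tool_version": "2.1"},
    {"decision_type": "assessment_reject", "tool_version": "2.1"},
]
print(challenge_patterns(log))  # [(('resume_screen_reject', '2.1'), 3)]
```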
Section 10: Building a Legally Defensible Explainability System
Legal defensibility in AI hiring explainability isn't about having the perfect answer. It's about demonstrating that you had a deliberate, documented, and consistently applied process. Here are the components of a system that will hold up.
An AI use policy, in writing
Document your organization's AI hiring policy: which tools you use, for what purposes, at what stages of the funnel, and what human oversight exists. This document should be reviewed by legal and updated whenever your tooling changes.
Per-decision logging
For every candidate who passes through AI screening, there should be a log that records: the date, the tool version, the output (score or ranking), the criteria applied, and whether and how human review followed. This log doesn't need to be complex — a structured database record is fine — but it must exist and be retained.
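An append-only table is enough. A minimal sketch using SQLite from the Python standard library; the schema and column names are illustrative, not a prescribed format:

```python
import sqlite3

conn = sqlite3.connect("ai_screening_log.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS screening_log (
        candidate_id   TEXT NOT NULL,
        screened_on    TEXT NOT NULL,    -- ISO date
        tool_version   TEXT NOT NULL,    -- traces the decision to a model version
        output         REAL NOT NULL,    -- score or rank the tool produced
        criteria       TEXT NOT NULL,    -- criteria applied (e.g. a JSON list)
        human_reviewed INTEGER NOT NULL, -- 0/1: did human review follow?
        reviewer       TEXT              -- who reviewed, if anyone
    )
""")
conn.execute(
    "INSERT INTO screening_log VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("cand-001", "2026-03-15", "2.1", 0.42,
     '["relevant_experience", "required_qualifications"]',
     1, "recruiter@example.com"),
)
conn.commit()
```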
Candidate notice infrastructure
Before candidates submit applications, they should receive notice that AI tools will be used in their evaluation. This notice should describe the type of tool, the characteristics it evaluates, and their rights (including the right to request human review in applicable jurisdictions). This notice should be documented and version-controlled.
Third-party audit cadence
Annual independent bias audits are mandatory in New York City and increasingly expected elsewhere. Build these into your calendar. Retain the reports. Make them available to candidates as required by law and, where appropriate, proactively.
Choosing auditability-first tools
When evaluating AI hiring tools, auditability and explainability features should be primary selection criteria — not afterthoughts. Tools differ significantly in how much insight they provide into individual decisions. When comparing platforms like ninjahire vs linkedin recruiter or ninjahire vs hireez, the question of what per-candidate explanation data the platform can surface is fundamental. Some platforms build explainability into their core architecture; others treat it as a bolt-on. That difference matters enormously when a candidate or regulator asks questions.
Similarly, if you're evaluating newer AI-native screening tools, look carefully at what documentation they provide around model logic, what audit outputs they can generate, and whether they have a documented process for supporting employer compliance. These aren't nice-to-have features; they're infrastructure for your legal risk management.
Section 11: Role of AI Vendors in Explainability (and How They Compare)
Employers often underestimate how much their AI vendor's choices shape their own explainability posture. If your vendor doesn't provide per-candidate explanation data, doesn't publish their evaluation criteria, and doesn't offer audit support, you are operating blind — and you bear the legal consequences regardless.
Here's what to look for in a vendor, and how different tool categories compare on explainability.
| Feature | What to look for | Red flags |
|---|---|---|
| Per-candidate explanation output | Score breakdowns, top factors, narrative summaries | Aggregate stats only, no individual-level data |
| Bias audit support | Built-in audit tooling or documented third-party audit process | "We don't do audits" or proprietary results not shareable |
| Criteria documentation | Clear description of what signals the model evaluates | Vague references to "AI matching" with no specifics |
| Model version control | Documented versioning so past decisions can be traced to a model version | Continuous updates with no versioning or changelog |
| Human override mechanisms | Built-in tools for recruiters to override or annotate AI decisions | AI scores treated as final, no override workflow |
| Candidate notice support | Template language, configurable disclosures in candidate communications | No candidate-facing disclosure features |
| Regulatory compliance documentation | Published compliance stance on GDPR, NYC LL144, EU AI Act | Silence or vague claims about compliance |
When organizations compare tools like ninjahire vs converzai or ninjahire vs tenzo ai, explainability infrastructure is one of the most important — and least-discussed — dimensions. Most vendor comparisons focus on features, integrations, and price. Legal teams should be asking about audit logs, per-candidate output, and compliance documentation before a contract is signed.
Platforms like ninjahire vs heymilo may differ substantially in terms of transparency architecture even if they look similar from a feature surface perspective. Ask vendors directly: "If a candidate in the EU asks for an explanation of their AI screening result, what data can you provide me to support that response?" The answer will be illuminating.
A vendor who treats explainability as a compliance burden is a liability. A vendor who treats it as a product feature — who has genuinely built systems to make explanation easy — is a partner. That distinction should drive your procurement decisions.
Section 12: Benefits of Explainable AI Hiring
The conversation around AI hiring explainability often stays in the register of risk management — what you need to do to avoid regulatory action or legal challenge. That framing misses something important: explainable AI hiring is actually better hiring, for reasons that go beyond compliance.
It forces you to build better criteria
You cannot explain what you have not defined. When you commit to giving candidates a meaningful account of what was evaluated, you are forced to articulate your selection criteria with real precision. That process — defining what genuinely matters for a role — improves hiring quality independently of any AI tool.
It builds employer brand
Candidates talk. A rejection that comes with a clear, respectful explanation of what was evaluated leaves a meaningfully different impression than a generic form response. Organizations that communicate with candidates honestly — even in rejection — are seen as employers of integrity. This matters for talent attraction over time.
It catches tool problems early
The discipline of explaining AI decisions exposes model behavior that might otherwise stay hidden. If your explanations start producing patterns that seem wrong — if the factors being cited don't match the actual job requirements, or if certain groups are being systematically flagged for the same reasons — you catch those problems early, before they accumulate into a discrimination claim.
It reduces complaint volume over time
Counterintuitively, organizations that give better explanations receive fewer hostile challenges. When candidates understand what was evaluated and why they didn't advance, the emotional charge of a rejection is lower. Ambiguity breeds frustration. Clarity — even disappointing clarity — is easier to accept.
It prepares you for regulation
The regulatory direction globally is toward greater AI accountability in hiring. Organizations that build explainability infrastructure now will not be scrambling to comply when new requirements land. They'll have the documentation, the processes, and the vendor relationships already in place.
Section 13: Key Takeaway
AI hiring explainability is not a technical problem. It's a governance, communication, and accountability problem — and it's one that every organization using AI in hiring needs to take seriously right now. The legal frameworks are here. Candidate expectations are rising. The organizations that get ahead of this will have a recruiting advantage, not just a compliance advantage.
Start by understanding your current AI tools, demanding explanation infrastructure from your vendors, and building the internal processes to translate AI outputs into human-readable accounts. Then communicate with candidates honestly. That combination — rigorous process, honest communication — is the foundation of explainable AI hiring done right.
Section 14: FAQs
What is AI explainability in recruitment?
AI explainability in recruitment refers to the ability to provide meaningful, human-readable accounts of how an AI system evaluated a candidate — what criteria were used, how the candidate's profile compared to those criteria, and what factors influenced the outcome. It goes beyond disclosing that AI was used and extends to explaining what the AI did in a specific case.
How do you explain AI hiring decisions to candidates?
A good AI hiring explanation tells candidates: (1) that AI was used and for what purpose, (2) what criteria or dimensions were evaluated, (3) how their application compared to those criteria at a general level, and (4) their rights — including the right to request human review in applicable jurisdictions. The explanation should be in plain language, honest, and specific enough to be meaningful rather than generic.
Are employers required to explain AI hiring decisions?
It depends on jurisdiction. In the EU, GDPR Article 22 gives candidates the right to a meaningful explanation and human review of fully automated decisions that significantly affect them. In New York City, Local Law 144 requires candidate notice and bias audit disclosures. In the UK, similar obligations exist under UK GDPR. In the broader US, no single federal statute mandates explanations, but the EEOC has made clear that employers are responsible for the outcomes of AI hiring tools under existing anti-discrimination law.
Can candidates challenge AI hiring decisions?
Yes. Under GDPR and UK GDPR, candidates have an explicit right to request human review of automated decisions. In the US, candidates can file EEOC complaints if they believe an AI hiring tool produced discriminatory outcomes. Beyond formal legal mechanisms, any candidate can ask an employer to explain and reconsider an AI-driven decision, and employers with good-faith processes should have a mechanism to receive and respond to such requests.
What is the difference between AI transparency and AI explainability in hiring?
Transparency is about disclosure — informing candidates that AI is used in your hiring process. Explainability is about accountability — being able to describe why a specific AI decision was reached in a specific case. Organizations need both. Transparency without explainability leaves candidates informed but not understood. Explainability without transparency creates a trust gap. Together, they form the foundation of ethical AI hiring.
What does NYC Local Law 144 require for AI hiring tools?
NYC Local Law 144 requires employers and employment agencies that use automated employment decision tools (AEDTs) to screen candidates for positions in New York City to: conduct an annual independent bias audit of the tool, publish the results of that audit publicly, and provide candidates with advance notice that an AEDT is being used, along with information about the characteristics the tool evaluates.
How does GDPR affect AI hiring decisions?
Under GDPR Article 22, EU data subjects (including job candidates) have the right not to be subject to decisions made solely by automated processing when those decisions produce significant effects on them. Hiring decisions qualify. Employers must provide a meaningful explanation of the logic involved, offer candidates the right to request human review, and give candidates the ability to contest the decision. GDPR also requires lawful basis for processing personal data in AI screening, and data minimization — the AI should evaluate only what is genuinely relevant to the role.
What should I ask an AI hiring vendor about explainability?
Key questions include: Can you provide per-candidate explanation data? What criteria does your model evaluate and how are they weighted? Do you conduct bias audits, and can you share the methodology and results? What version control exists for your model? How do you support employers in meeting GDPR Article 22 and NYC Local Law 144 obligations? These questions will quickly reveal whether a vendor has built explainability into their core architecture or is treating it as an afterthought.