Compliance & Ethics


Manish Barwa

March 15, 2026

What to Do When a Candidate Claims Your AI Screening Is Biased

Section 1: Why Bias Complaints in AI Hiring Are Increasing

Candidates are more aware than they used to be. They read the same headlines about AI bias in hiring that HR professionals do, and when they receive a rejection that feels unexplained or sudden, the hypothesis that an algorithm made the call — unfairly — is not a stretch. It is the conclusion a growing number of people are reaching, and a growing number are acting on it.

AI screening bias complaints are not uniform in their nature or their legal weight, but the trend in volume is clear. EEOC charges in the US that reference AI or algorithmic tools in hiring have increased year over year. In Europe, data protection authorities have started receiving complaints specifically about automated decision-making in recruitment under GDPR Article 22. Employment tribunals in the UK have seen early cases involving AI screening. The legal infrastructure for these complaints is being built in real time, and it is becoming more candidate-friendly, not less.

Part of what is driving this is better candidate access to information. When a candidate applies through an AI-enabled platform that sends them an async interview, they often know it is AI. When they receive a rejection 12 hours later, they connect the dots. Some of those connections are accurate. Some are not. The organization that receives the complaint needs to be able to tell the difference — and more importantly, needs to be able to demonstrate it.

  • 79% of large employers now use some form of AI in hiring (SHRM, 2024)
  • 4x increase in AI-related hiring complaints to the EEOC since 2021
  • GDPR Article 22: the provision governing automated decisions that affect individuals — increasingly cited in EU hiring complaints

This guide is written for HR leaders and TA teams who have received a complaint — or who are preparing for the possibility. It walks through what to do, in what order, and how to handle both the investigation and the candidate communication in a way that is defensible, fair, and operationally sound.

Section 2: What a Bias Complaint Actually Means — and What It Doesn't

An AI hiring bias complaint is a candidate's assertion that an automated or AI-assisted system used in your hiring process treated them unfairly — typically on the basis of a protected characteristic such as race, gender, age, disability, or national origin. The complaint may be informal (an email expressing concern) or formal (a legal claim or regulatory filing). The fact that a complaint has been made does not mean bias occurred — but it does require a structured response.

The first thing to understand is that bias complaints exist on a spectrum. At one end, you have a candidate who is frustrated about a rejection and suspects AI played a role, with no specific evidence or legal framing. At the other end, you have a formal charge with a regulatory body alleging discriminatory impact on a protected class, potentially backed by data or legal counsel. How you respond operationally differs based on where on that spectrum the complaint sits — but the initial preservation and documentation steps are the same regardless.

It also helps to separate what the candidate is actually claiming from what they might be feeling. A candidate who says "the AI rejected me because I am a woman" may be expressing a genuine experience of unfairness rather than stating a factual conclusion. That experience deserves to be taken seriously on its own terms, even if your investigation ultimately finds no technical evidence of gender-based disparate impact. The distinction between the candidate's experience and the system's behavior is important — and keeping them separate during the investigation process produces better outcomes than conflating them.

What a bias complaint is not, in isolation, is proof of bias. AI screening tools can produce incorrect outcomes for individual candidates for reasons that have nothing to do with protected characteristics — a poorly calibrated question, an ambiguous scoring rubric, a technical glitch. These are real problems worth investigating and fixing, but they are distinct from systematic bias. Your investigation needs to look for both.

Section 3: Immediate Response — What to Do Within the First 24 Hours

The first 24 hours after receiving a bias complaint matter more than most HR teams realize. Not because you can resolve the complaint in that window — you cannot — but because the decisions made in that window shape your ability to investigate and respond credibly later. Two things need to happen immediately, and three things must actively not happen.

First: acknowledge the complaint to the candidate promptly and without prejudging the outcome. A response that says something like, "We have received your concern, we take it seriously, and we are reviewing it internally," does several things at once. It stops the candidate from feeling ignored, which often escalates informal complaints into formal ones. It creates a paper trail showing your organization responded in good faith. And it sets a professional tone for whatever comes next. This acknowledgment does not need to be elaborate. It needs to be timely, calm, and genuinely non-committal — you are not admitting fault, and you are not dismissing the complaint.

Second: stop any further use of the specific screening configuration or output that the candidate has complained about, pending initial review. This is a precautionary step, not a concession. If the AI scoring produced an outcome that is now in question, you do not want to continue making downstream decisions based on that output while you are investigating whether it was valid. Pause advancements or rejections that were pending and dependent on the same screening run.

What must not happen: do not delete, alter, or export records in any way that could be characterized as sanitizing the evidentiary trail. Do not have informal conversations with the AI vendor that might inadvertently shape their official response before you understand the facts. And do not assign the investigation to the same person who managed the hiring process being complained about — the investigator needs to be independent of the outcome.

If the complaint arrives as a formal legal filing rather than an informal email, involve your legal counsel before taking any other step. The guidance in this article is operational, not legal advice, and formal complaints carry procedural requirements that require legal oversight from the start.

Section 4: Step 1 — Preserve Records Before Doing Anything Else

Record preservation is the unglamorous foundation of any credible bias investigation, and it is frequently handled poorly because it does not feel urgent in the way that candidate communication or internal escalation does. It should be treated as the first operational step, not an afterthought.

What needs to be preserved immediately includes:

  • the candidate's full application record, including resume, any pre-application data, and all communications;
  • the complete AI screening record for that candidate — their responses, the scoring output, the rubric applied, and any automated decision flags;
  • the configuration of the AI screening tool at the time the candidate went through it, including question set, scoring weights, and advancement thresholds;
  • any recruiter notes or actions taken in relation to this candidate after AI screening; and
  • all communications between your organization and the candidate, including automated emails.

In practice, this means placing a litigation hold on the relevant records in your ATS and AI platform before anyone involved in the hiring process takes any further action. If your AI screening vendor hosts data on their servers, you need to formally request that they preserve the relevant records as well, and do so in writing with a timestamp. If a formal legal complaint arrives and you cannot produce the original screening data because it was overwritten by a routine data purge, your ability to defend your process is severely compromised regardless of whether the process was actually fair.
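A preservation hold is more defensible when the exported records are hashed and timestamped at the moment of collection, so you can later show they were not altered. The sketch below, with a hypothetical export directory layout, builds a simple manifest of SHA-256 hashes; it is an illustration of the idea, not any particular platform's export tooling.

```python
# Sketch: build a timestamped integrity manifest for exported records.
# The export directory and file names are hypothetical; adapt to your ATS exports.
import hashlib
import datetime
from pathlib import Path

def build_preservation_manifest(export_dir: str) -> dict:
    """Hash every exported record so its integrity can be verified later."""
    manifest = {
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "records": [],
    }
    for path in sorted(Path(export_dir).glob("*")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        manifest["records"].append({"file": path.name, "sha256": digest})
    return manifest

# Usage idea: write the manifest as JSON alongside the exports and keep a
# copy somewhere the hiring team cannot modify it.
# manifest = build_preservation_manifest("holds/candidate-4821")
```

Storing one copy of the manifest outside the systems under investigation is the design point: if a record later differs from its recorded hash, you know the trail was disturbed.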

The most common documentation failure in AI bias investigations is not malicious — it is simply that nobody thought to preserve vendor-side logs before the standard retention period expired. Contact your AI vendor within 24 hours of receiving a complaint and request a formal hold on all records relating to the relevant screening session and configuration.

If your current AI screening platform does not produce per-candidate decision logs that are exportable and timestamp-verified, that is a separate problem worth addressing — but it is also immediately relevant to your ability to investigate this specific complaint. Note the gap, document what records do exist, and work with what you have while flagging the deficiency for remediation.

Section 5: Step 2 — Identify the Type of Complaint

Not all bias complaints require the same investigation. The type of claim being made — and the evidence or context the candidate provides — determines what you are looking for, which systems you need to review, and how quickly the situation could escalate. Categorizing the complaint early helps you allocate investigation resources appropriately.

| Complaint Type | What the Candidate Claims | What to Investigate | Escalation Risk |
| --- | --- | --- | --- |
| Individual anecdote | "The AI rejected me unfairly; I believe I was qualified" | Screening record for that candidate; scoring output vs rubric; comparison with similarly qualified candidates who were advanced | Low unless the candidate pursues formally |
| Protected characteristic claim | "I was rejected because of my gender / race / age / disability" | All of the above, plus analysis of AI outputs across the full candidate pool for that role and period, looking for disparate impact by the claimed characteristic | Medium to high; potential EEOC or EHRC filing |
| Automated decision objection (GDPR) | "I was not told AI was making decisions about me; I want an explanation or human review" | Transparency disclosures in your hiring process; whether human review was genuinely in the loop; whether the AI output was the sole decision basis | Medium; a formal complaint to a DPA is straightforward to file |
| Systemic pattern claim | "Your AI systematically screens out candidates like me; I have compared notes with others" | Full statistical analysis of screening outcomes across the relevant demographic; may require external audit | High; class action or regulatory investigation potential |
| Technical error claim | "The AI misunderstood my answers or scored me incorrectly due to a technical issue" | Raw screening data; transcription accuracy if voice-based; rubric calibration; vendor error logs | Low to medium; usually resolvable with vendor cooperation |

In practice, many initial complaints blend two or more of these types — a candidate who says "the AI rejected me because I am over 50" is making both a protected characteristic claim and implicitly challenging the automated decision. The classification exercise is not about fitting the complaint into a neat box — it is about making sure your investigation covers all the relevant dimensions rather than only the most visible one.

Section 6: Step 3 — Conduct a Structured Internal Investigation

The investigation needs to be methodical, documented, and independent. Those three qualities matter equally. Methodical means following a defined process rather than looking selectively at the evidence that happens to be easiest to access. Documented means keeping a written record of every step, finding, and conclusion of the investigation — not for legal posturing, but because an undocumented investigation produces no defensible findings. Independent means the person conducting the review was not involved in the hiring decision being reviewed and has no incentive to reach a particular conclusion.

  1. Review the individual screening record. Pull the complete AI screening output for the complaining candidate. Review their responses, the scoring against the rubric, and the threshold at which they were advanced or declined. Compare this against the scoring of candidates who were advanced from the same screening run. Look for any anomalies — unusually low scores on dimensions where the candidate's responses seem strong, or scoring patterns that do not follow the rubric logic.

  2. Review the screening configuration. Pull the configuration record for the screening that was in use when this candidate applied. Check the question set, scoring weights, hard filters, and any automated decision rules. Was the configuration appropriate for the role? Were any criteria included that could function as proxies for protected characteristics — for example, specific educational institutions, particular communication style preferences, or geographic filters that correlate with demographics?

  3. Analyze outcomes across the relevant pool. For protected characteristic claims specifically, the individual record is necessary but not sufficient. You need to look at outcomes across the full candidate pool for that role and period, and analyze whether any protected groups were advanced or declined at rates that differ significantly from the overall population. This is a statistical exercise, and depending on the volume of candidates, it may require someone with basic data analysis skills.

  4. Review human intervention points. Trace the actual decision chain. Did a recruiter review AI outputs before advancing or declining candidates, or were decisions automated end-to-end? If a human was nominally in the loop, were they actually exercising judgment or simply rubber-stamping AI recommendations? The answer to this question matters both for the investigation findings and for your regulatory exposure.

  5. Document findings in writing before drawing conclusions. Before you decide whether bias occurred, write down what the evidence shows — clearly, factually, without characterization. What did the scoring show? What did the pool analysis show? What was the human oversight process? The conclusion should follow from the documented evidence, not precede it.
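The pool-level analysis in step 3 can be sketched as a selection-rate comparison. The four-fifths (80%) rule used in US adverse impact analysis flags any group whose selection rate falls below 80% of the highest group's rate. The group labels and counts below are illustrative, not real data.

```python
# Sketch: four-fifths (80%) rule check across demographic groups.
# Counts are illustrative; in practice they come from your ATS export.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (advanced, total applicants)."""
    return {g: advanced / total for g, (advanced, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict) -> dict:
    """True means the group's rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate < 0.8 * best for g, rate in rates.items()}

# Illustrative pool: 120 of 400 applicants in group A advanced (30%),
# 45 of 300 in group B (15%) -- half of group A's rate, so B is flagged.
pool = {"group_a": (120, 400), "group_b": (45, 300)}
print(four_fifths_flags(pool))  # {'group_a': False, 'group_b': True}
```

A flag from this check is a signal to investigate further, not a legal finding in itself; small pools in particular need statistical care before drawing conclusions.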

Section 7: When to Involve Legal Counsel

There is a threshold at which an internal HR investigation becomes insufficient, and recognizing it early is important. Crossing that threshold without legal involvement exposes your organization to procedural errors that can make a defensible situation difficult and an already difficult situation worse.

Involve legal counsel immediately if the complaint arrives as a formal EEOC charge, employment tribunal claim, GDPR complaint to a data protection authority, or any other regulatory filing. At that point, the investigation is no longer purely an internal HR exercise — it is a legal matter with specific procedural requirements, privilege considerations, and response deadlines.

Involve legal counsel promptly — within the first few days, not weeks — if the complaint includes any language suggesting the candidate has legal representation, if the complaint references specific protected characteristics with enough detail to suggest a potential disparate impact claim, or if your initial review surfaces evidence that the AI system may have produced systematically different outcomes for different demographic groups.

The internal HR team should handle informal, individual complaints where the investigation reveals either a technical error that can be straightforwardly corrected, or a genuine qualification mismatch that the AI screening correctly identified. Everything above that threshold benefits from legal oversight — not to suppress the investigation, but to ensure the process and the conclusions are defensible.

"The instinct to handle everything internally and quietly is understandable, but it often backfires. When a complaint that warranted legal involvement gets managed purely as an HR matter, the organization often finds itself having made procedural errors that complicate things later. Getting legal in early costs less than getting them in after a mistake."

— Employment law practitioner, London

Section 8: How to Communicate With the Candidate

Candidate communication during a bias complaint investigation is one of the most consequential things you manage, and it is frequently handled either with excessive caution — meaning near silence — or with well-intentioned openness that inadvertently creates admissions. Neither serves the candidate or the organization well.

The communication principles are: be timely, be honest about what you can and cannot share at each stage, do not make commitments you cannot keep, and maintain a tone that treats the candidate as a person raising a legitimate concern rather than a threat to be managed.

The initial acknowledgment — within 24 hours — should confirm receipt, indicate that you are taking the concern seriously, and give a realistic timeline for when the candidate can expect a substantive response. If you tell someone they will hear from you in five business days, contact them in five business days even if the investigation is not complete. An unexplained silence after a commitment is almost always worse than an honest update that says the review is taking longer than expected.

During the investigation, avoid speculative language in any communication with the candidate. Do not say things like "we think this was a technical error" or "we believe our AI is unbiased" — you do not know either of these things until the investigation is complete. Candidates and their advisors pay close attention to these communications, and informal statements made during an investigation can become significant later.

Once the investigation is complete, the candidate deserves a substantive response that explains what was reviewed, what was found, and what — if anything — is being changed. If the investigation found a legitimate error, acknowledge it and explain what is being corrected. If the investigation found no evidence of bias but the candidate's experience of the process was poor, acknowledge that too and explain what improvements are being made. A response that says "we investigated and found everything was fine," with no further explanation, rarely satisfies a candidate who raised a genuine concern and often generates escalation that a more substantive response would have prevented.

Section 9: Remediation Actions Based on Findings

What you do after the investigation depends entirely on what the investigation found. There is no single remediation path — the action should be proportionate to, and specific to, the problem identified. Applying the same response to every complaint regardless of findings is not credible and does not fix anything.

| Finding | Appropriate Remediation | Candidate Communication |
| --- | --- | --- |
| Technical error in AI scoring | Correct the scoring error; re-evaluate the candidate's application manually; report the bug to the vendor with documentation | Acknowledge the error directly; offer re-consideration in the process if the role is still open; apologize without excessive qualification |
| Miscalibrated screening criteria (not intentional bias) | Review and update the screening rubric; retrain or reconfigure the tool; audit other recent screening runs using the same configuration | Explain that the screening criteria were reviewed and updated; offer manual review of the candidate's application |
| Proxy discrimination in criteria design | Remove or revise the proxy criteria immediately; conduct a full audit of outcomes under the previous configuration; consider an external audit; brief legal counsel | Acknowledge that criteria are being revised; do not admit legal liability without legal guidance; offer re-evaluation |
| Disparate impact in statistical outcomes | Suspend or reconfigure the AI screening tool pending a full audit; conduct adverse impact analysis across the full candidate pool; brief legal counsel; consider proactive outreach to affected candidates | Legal counsel should be involved in candidate communications at this stage; acknowledge that a review is underway without prejudging findings |
| No evidence of bias; legitimate qualification gap | No change to the screening process required; document the investigation and findings; review whether candidate communications were sufficiently clear and transparent | Provide a clear explanation of the criteria applied and how the candidate's application was assessed; acknowledge their experience without conceding a finding of bias |
| GDPR / transparency compliance failure | Update candidate-facing disclosures immediately; review and strengthen human oversight documentation; assess whether a DPA notification is required | Acknowledge the disclosure gap directly; provide the explanation the candidate was entitled to receive; confirm what changes are being made |

The remediation step is also where systemic process improvement decisions get made. A single complaint that reveals a calibration problem is an opportunity to fix something that was probably affecting more candidates than just the one who complained. Most people who experience a biased or broken hiring process do not raise a formal complaint — they just disengage. Treating each complaint as signal about a broader process question, rather than as an isolated incident to resolve and close, is what separates organizations that improve over time from those that manage each complaint individually while the underlying problem persists.

Section 10: How Your AI Vendor Impacts Your Ability to Respond

When a bias complaint arrives, your ability to investigate it effectively depends heavily on what your AI vendor can give you. This is a dependency that most HR teams do not think about until they need it — at which point, the differences between vendors become very concrete very quickly.

The minimum you need from a vendor when a complaint arrives is: per-candidate decision logs with timestamps, the specific scoring output and rubric applied to that candidate, the configuration of the system at the time of the screening, and a point of contact who can support your investigation with technical clarification. Vendors who cannot provide these within a reasonable timeframe are not just unhelpful — they are a compliance liability for your organization.

When teams are evaluating platforms in advance of deployment, these questions should be front of mind. The comparison between NinjaHire vs LinkedIn Recruiter, for instance, often surfaces differences in how each platform handles audit trail access — large platforms with complex data architectures frequently have slower, more bureaucratic processes for extracting specific candidate records than purpose-built AI screening tools where the data model is simpler and more accessible.

Explainability is a related dimension. When a candidate asks why the AI scored them a certain way, you need to be able to provide a coherent answer. That requires the AI system to produce outputs that are human-readable and tied to specific response content — not a proprietary score that the vendor cannot explain in plain language. Evaluating platforms like NinjaHire vs ConverzAI on explainability means asking specifically: if a candidate challenges their score, can I show them which responses contributed to it and how?
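What "explainable" means in practice can be sketched as a scoring record that ties each rubric dimension to the response content that drove it. Every field name below is hypothetical — this is not any vendor's actual schema, just an illustration of the shape of output that makes a challenge answerable.

```python
# Sketch: a per-candidate scoring record that supports explanation on challenge.
# All identifiers and field names are hypothetical, for illustration only.
scoring_record = {
    "candidate_id": "c-0001",            # illustrative identifier
    "rubric_version": "2026-01-sales",   # ties the score to a specific rubric
    "dimensions": [
        {
            "name": "stakeholder_communication",
            "score": 3,
            "max_score": 5,
            "evidence": "Response to Q2, 00:41-01:10",  # what drove the score
        },
    ],
}

def explain(record: dict) -> list:
    """Render one human-readable line per scored dimension."""
    return [
        f"{d['name']}: {d['score']}/{d['max_score']} (based on {d['evidence']})"
        for d in record["dimensions"]
    ]

print(explain(scoring_record)[0])
# stakeholder_communication: 3/5 (based on Response to Q2, 00:41-01:10)
```

If your platform's export cannot be rendered into something like this — a score per dimension with a pointer to the evidence behind it — answering a candidate's "why" question becomes guesswork.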

Bias audit capability is the third vendor-dependent factor. Some platforms provide built-in adverse impact analysis tools that allow you to run outcome comparisons across demographic groups. Others produce raw data that you need to export and analyze externally. The difference matters when an investigation requires statistical analysis — it determines whether your team can do the analysis quickly in-house or needs to bring in an external analyst. When reviewing options like NinjaHire vs Tenzo AI, the depth of built-in reporting and the format of exportable data are worth examining specifically in the context of a hypothetical complaint scenario.

Vendor responsiveness and contractual support obligations are practical factors that are easy to overlook during normal operations but become significant during a complaint. Does your vendor agreement include a service level commitment for compliance-related data requests? Is there a named contact for legal or compliance issues, or does every request go through a general support queue? Reviewing NinjaHire vs hireEZ or similar comparisons through the lens of post-complaint support — rather than just feature sets — often changes which platform looks most suitable for organizations where compliance risk is a material consideration.

Finally, how a vendor handles their own EU AI Act or EEOC compliance obligations affects your exposure directly. A vendor whose tool was not built with adverse impact testing, whose training data is opaque, and who cannot produce conformity documentation transfers a portion of that compliance risk to every organization that deploys their tool. The NinjaHire vs HeyMilo comparison, among others, highlights how differently AI recruiting platforms approach documentation and compliance infrastructure — which is an evaluation criterion that matters more when something goes wrong than when everything is running smoothly.

Section 11: Improving Your Process After a Complaint

A resolved complaint is the best possible prompt for a structured process review. The candidate who raised the issue has effectively given you a stress test of your AI hiring infrastructure, your investigation procedures, and your candidate communication capability — all at once. Whether or not the complaint revealed actual bias, the process of handling it almost always reveals something worth improving.

The most common improvement that emerges from bias complaint investigations is the transparency gap. Organizations frequently discover that their candidate-facing disclosures about AI use were insufficient, too legalistic to be understood, or missing entirely from key touchpoints in the hiring journey. Fixing this is straightforward and should happen immediately after any complaint, regardless of outcome.

The second most common improvement is the human oversight gap. Many organizations have human review nominally built into their AI screening process, but the investigation reveals that in practice, recruiters were advancing or declining candidates based almost entirely on AI ranking without meaningful independent review. Rebuilding this in a way that is genuine rather than performative often requires changing workflows — not just policies — so that the AI output is one input among several rather than the determinative factor.

"After we worked through a complaint, we realized our hiring managers were looking at the AI score first and the candidate record second. The AI was supposed to be an aid to their judgment. In practice it had become a substitute for it. That is a process design problem, not an AI problem, and we fixed it by changing the order in which information was presented in our review interface."

— Head of Talent Acquisition, European SaaS company

Building a regular bias audit cadence into your ongoing process is the most durable improvement you can make. Rather than waiting for a complaint to trigger a statistical review of AI outcomes, schedule a quarterly adverse impact analysis as a standing operational activity. This surfaces problems before they generate complaints, demonstrates proactive good faith to regulators, and gives your team early data on any drift in screening outcomes over time.
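A quarterly cadence can be reduced to a simple drift check: compare each group's selection rate against the previous quarter and flag drops that exceed a chosen threshold. The threshold here is an illustrative policy choice, not a legal standard, and the counts are made up.

```python
# Sketch: quarter-over-quarter drift check on a group's selection rate.
# The 5-point threshold is an illustrative policy choice, not a legal standard.
def rate(advanced: int, total: int) -> float:
    return advanced / total

def drift_alert(prev: tuple, curr: tuple, max_drop: float = 0.05) -> bool:
    """Flag when a group's selection rate drops by more than max_drop (absolute)."""
    return rate(*prev) - rate(*curr) > max_drop

# Group X advanced 30/100 last quarter and 22/100 this quarter: an 8-point drop.
print(drift_alert((30, 100), (22, 100)))  # True -> schedule a closer review
```

An alert here does not establish bias; it establishes that someone should look at the screening configuration and pool composition before the next quarter compounds the trend.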

Section 12: Bias Complaint Response Checklist

Use this as a working reference for the first week after a complaint is received. It is not a legal checklist — it is an operational one designed to make sure nothing essential gets missed in the initial response period.

  • Acknowledge the complaint to the candidate within 24 hours — professionally, non-committally, and with a clear next-step timeline
  • Pause any pending hiring decisions that depend on the screening output in question
  • Place an immediate hold on all relevant records: candidate application, AI screening output, configuration record, recruiter actions, and all candidate communications
  • Contact your AI vendor in writing to request a preservation hold on server-side records related to the screening session
  • Categorize the type of complaint using the framework in Section 5 and determine the appropriate investigation scope
  • Assign an independent investigator who was not involved in the original hiring decision
  • Review the individual screening record: responses, scoring, rubric, advancement threshold
  • Review the screening configuration for criteria that may function as proxies for protected characteristics
  • Conduct a pool-level outcome analysis if the complaint involves a protected characteristic claim
  • Document all investigation findings in writing before drawing conclusions
  • Assess whether legal counsel needs to be involved based on complaint type and initial findings
  • Communicate findings to the candidate with a substantive explanation and any applicable remediation offer
  • Implement process improvements identified during the investigation before the next hiring cycle
  • Schedule a post-complaint review 60 days out to confirm improvements are functioning as intended

Section 13: Key Takeaway

A bias complaint against your AI screening process is not a verdict — it is a question that demands a structured answer. The organizations that handle these complaints well are the ones that respond promptly without overreacting, investigate methodically without cutting corners on the uncomfortable questions, communicate honestly without making inadvertent admissions, and use the experience to genuinely improve their process rather than simply close the file. The same discipline that produces a good investigation also produces a hiring process that is less likely to generate legitimate complaints in the first place: clear documentation, genuine human oversight, transparent candidate communication, and a vendor whose tool can be audited when it needs to be.

AI Screening You Can Audit, Explain, and Stand Behind

NinjaHire gives hiring teams per-candidate decision logs, explainable scoring outputs, and human oversight workflows designed to hold up under scrutiny. Try it free and see how a compliant AI screening setup actually works.

Try for Free

Section 14: Frequently Asked Questions

What should I do if a candidate claims my AI screening is biased?

Respond to the candidate within 24 hours with a professional acknowledgment that does not admit fault or dismiss the concern. Simultaneously preserve all relevant records — the candidate's screening data, the AI configuration, and all communications — before any routine data processes can overwrite them. Then categorize the type of complaint, assign an independent investigator, and follow a structured investigation process that looks at both the individual case and the broader pool of outcomes. The initial response speed and the quality of your record preservation are the two factors that most determine how well you can manage the complaint in the weeks that follow.

Is AI hiring actually biased?

AI hiring tools can produce biased outcomes, but bias is not inherent to AI — it depends on how the system was built, what data it was trained on, how screening criteria were designed, and how the tool is used in practice. Well-designed AI screening tools that are regularly audited for adverse impact and configured with criteria that are directly job-relevant produce outcomes that are often more consistent than human-only screening. Poorly designed tools, or well-designed tools used with poorly chosen criteria, can and do produce disparate outcomes across protected groups. The honest answer is: it depends on the specific tool and how it is deployed.

What are the investigation steps for an AI hiring bias complaint?

The investigation should follow five steps in order: review the individual candidate's screening record against the scoring rubric; review the screening configuration for any criteria that could function as proxies for protected characteristics; analyze outcomes across the full candidate pool for that role and period, looking for disparate impact patterns; trace the actual human oversight process to determine whether meaningful human judgment was applied; and document all findings in writing before drawing conclusions. The investigation should be conducted by someone independent of the original hiring process, and all steps should be documented contemporaneously.
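For the pool-level analysis in step three, many teams start with the four-fifths (80%) rule from the EEOC's Uniform Guidelines: compare each group's selection rate to the highest group's rate, and flag any group falling below 80% of that benchmark for closer review. A minimal sketch of that comparison — the group names, counts, and helper functions here are illustrative, not a substitute for a proper statistical analysis:

```python
# Hypothetical sketch of a four-fifths (80%) rule check on one role's
# candidate pool. Group labels and counts are made-up examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the classic four-fifths rule)."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {g: (rate / benchmark) < threshold for g, rate in rates.items()}

# Illustrative pool for one role and screening period:
pool = {
    "group_a": (30, 100),  # 30% selection rate
    "group_b": (18, 90),   # 20% selection rate
}
flags = adverse_impact_flags(pool)
# group_b: 0.20 / 0.30 ≈ 0.67, below 0.8 -> flagged for closer review
```

A flag from a check like this is a trigger for deeper investigation, not a finding of bias in itself: small pools produce noisy rates, and a statistically rigorous review should account for sample size before drawing conclusions.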

When should legal counsel be involved in an AI bias complaint?

Legal counsel should be involved immediately if the complaint arrives as a formal regulatory filing — an EEOC charge, employment tribunal claim, or GDPR complaint to a data protection authority. Legal counsel should be involved promptly — within a few days — if the complaint references specific protected characteristics in a way that suggests potential disparate impact, if the candidate appears to have legal representation, or if your initial review surfaces evidence of systematic outcome differences across demographic groups. Internal HR teams can handle informal, individual complaints where the investigation finds either a straightforward technical error or a legitimate qualification mismatch with no systemic implications.

What records do I need to preserve when a bias complaint arrives?

You need to preserve: the candidate's complete application record; the AI screening output for that candidate including responses, scores, and rubric; the configuration of the AI tool at the time of the screening; recruiter notes and actions relating to the candidate; all communications between your organization and the candidate; and vendor-side logs if the AI platform stores data server-side. Contact your vendor in writing within 24 hours to request a preservation hold. If you discover that per-candidate decision logs do not exist or have already been purged, document that gap — it becomes part of the investigation findings and may have implications for future vendor selection.

How do I communicate with a candidate during a bias investigation?

Be timely, be clear about what you can and cannot share at each stage, and do not make commitments you cannot keep. Acknowledge the complaint within 24 hours. Give a realistic timeline for a substantive response and honor it. During the investigation, avoid speculative language — do not suggest what the outcome might be before the review is complete. Once the investigation is finished, provide a substantive explanation of what was reviewed and what was found. If an error occurred, acknowledge it directly and explain what is being corrected. If the investigation found no evidence of bias, explain the criteria applied and how the candidate's application was assessed — a response that simply says "we found no problem," with no further explanation, rarely satisfies a genuine concern.

How can organizations prevent AI hiring bias complaints?

Prevention starts with screening criteria design — criteria should be directly job-relevant, reviewed for potential proxy effects, and tested before deployment at scale. Candidate transparency is the second layer: candidates should know AI is being used, what role it plays, and how they can request more information. Genuine human oversight — not rubber-stamping AI recommendations — is the third layer, and it requires workflow design that positions AI output as one input among several rather than the determinative factor. Regular adverse impact auditing, scheduled as a routine operational activity rather than triggered by complaints, surfaces problems early. And selecting vendors who can support audit and investigation requests is the infrastructure layer that makes everything else possible when it is needed.