Compliance & Ethics

AI hiring laws by state: what US employers need to know in 2026

Praneeth Patlola
Founder, Ninjahire
5 min read

March 15, 2026

AI Hiring Laws by State: What US Employers Need to Know in 2026


The legal landscape around artificial intelligence in recruiting has shifted from a theoretical concern into an active compliance obligation. Whether you're running a distributed team across five states or staffing a single headquarters in New York, using AI to screen, rank, or evaluate candidates now carries regulatory weight that didn't exist just a few years ago.

This isn't a story about AI replacing recruiters or some distant regulatory future. It's a practical look at what the law actually requires, which jurisdictions have moved first, and how employers can build hiring processes that are both efficient and defensible.

The pace of state-level legislation is accelerating. Employers who rely solely on their vendors to manage compliance are taking on significant legal and reputational risk. Understanding what these laws require, and building your hiring operations accordingly, is no longer optional.


Why AI Hiring Compliance Has Become an Operational Priority

For most of the last decade, employers adopted AI-assisted hiring tools with relatively little regulatory friction. Applicant tracking systems, resume parsing engines, and video interview platforms promised to reduce bias and improve efficiency. Legal scrutiny was limited. That's changed.

Multiple states have now passed enforceable legislation. Federal agencies have issued formal guidance. Candidate advocacy groups are filing complaints. And plaintiff attorneys are beginning to develop case law around automated hiring discrimination. The question employers face is not whether to take this seriously, but how quickly they can operationalize a compliant approach.

A common misconception is that if your vendor is reputable and handles the algorithmic work, compliance is their problem. It isn't. Every employment law framework that has addressed AI hiring to date places the legal obligation on the employer, not the technology provider. Your vendor may conduct bias audits and publish results, but you are the one responsible for disclosing them to candidates, complying with local notice requirements, and ensuring your deployment of their tool doesn't produce disparate outcomes in your workforce.

That shift in accountability is the central operational reality of AI hiring compliance in 2026.

The Federal Baseline: EEOC, Title VII, and the ADA

Before examining state-specific rules, it's worth grounding the conversation in the federal framework that applies everywhere in the US. While Congress has not yet passed comprehensive AI hiring legislation, existing civil rights and employment laws create a baseline of obligations that interact directly with automated hiring tools.

EEOC Guidance on AI and Automated Systems

The Equal Employment Opportunity Commission has made clear, through guidance documents and technical assistance publications, that employers using automated systems to make or influence employment decisions remain fully responsible for ensuring those decisions comply with Title VII of the Civil Rights Act and the Americans with Disabilities Act. The fact that an algorithm, rather than a human recruiter, performed the screening does not create a legal exception.

Title VII prohibits employment practices that cause disparate impact based on race, color, religion, sex, or national origin, unless the employer can demonstrate the practice is job-related and consistent with business necessity. AI hiring tools that systematically score candidates in ways that correlate with protected characteristics, even unintentionally, can trigger disparate impact liability. This has been an established legal theory for decades; it now applies directly to machine learning systems.

ADA Accommodations and Automated Interviews

The ADA adds a layer that many employers overlook. Candidates with disabilities may be disadvantaged by AI tools that analyze speech patterns, facial expressions, or typing behaviors, since these systems were often trained on data that doesn't adequately represent people with certain physical or neurological conditions. Employers are required to provide reasonable accommodations, which in a recruiting context means offering alternative assessment pathways for candidates who cannot effectively participate in an AI-evaluated process. Building those pathways into your hiring workflow from the start is both a legal obligation and a straightforward operational design choice.

What Legally Counts as an AI Hiring Tool

One of the practical challenges with AI hiring compliance is understanding exactly which tools are covered by emerging regulations. Definitions vary by jurisdiction, but a few categories appear consistently across the current legislative landscape.

Resume screening and ranking systems that use machine learning to sort, score, or filter applications are almost universally within scope. Automated video interview platforms that analyze candidate speech, tone, word choice, or facial expressions fall under multiple state laws. Candidate scoring engines that generate numerical assessments or recommendation outputs based on structured or unstructured data are covered. Chatbot screening tools that conduct preliminary candidate interviews and route applicants based on their responses are also within scope in most jurisdictions.

What's less clear, and actively being debated in several states, is whether a simple keyword filter in an ATS constitutes an automated employment decision tool, or whether a recruiter who uses AI-generated summaries as one input among many is subject to the same disclosure requirements as a fully automated screening pipeline. These edge cases matter for compliance program design, and employers should work with employment counsel to establish clear internal definitions before a regulatory question forces the issue.


NYC Local Law 144: The Benchmark Everyone Is Watching

New York City's Local Law 144, which became fully enforceable in July 2023 after a series of delays, remains the most operationally significant AI hiring regulation in the United States for employers with any presence in New York City. It has become the de facto model that other jurisdictions are studying and, in several cases, expanding upon.

What the Law Covers

Local Law 144 applies to employers and employment agencies that use automated employment decision tools, or AEDTs, to screen candidates for employment or employees for promotion in New York City roles. The law defines an AEDT as a computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, such as a score, classification, or recommendation, which employers use to substantially assist or replace discretionary decision-making in hiring or promotion.

That definition is deliberately broad. If a tool materially influences which candidates a recruiter sees, or filters out applicants before a human ever reviews them, it likely qualifies as an AEDT under Local Law 144. The word substantially has been a source of interpretive discussion, but employers should not rely on a narrow reading when designing their compliance programs.

The Bias Audit Requirement

The core of Local Law 144 is the bias audit requirement. Before deploying an AEDT, and annually thereafter, employers must obtain an independent bias audit of the tool. The audit must be conducted by an independent auditor, meaning someone with no financial conflict with the employer or vendor, and must assess whether the tool produces disparate impact based on sex, race, and ethnicity.

The audit methodology follows the four-fifths rule, also known as the 80 percent rule, drawn from the EEOC's Uniform Guidelines on Employee Selection Procedures. Essentially, if the selection rate for a protected group is less than 80 percent of the selection rate for the most-selected group, that represents a potential adverse impact that must be disclosed.
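
To make the arithmetic concrete, here is a minimal sketch of the four-fifths comparison in Python, using hypothetical applicant counts. A real Local Law 144 audit must follow the DCWP's published methodology and be performed by an independent auditor; this only illustrates the underlying ratio calculation.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# The group labels and counts are hypothetical illustration data, not audit output.

def selection_rates(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps a group label to (selected, total_applicants)."""
    return {g: selected / total for g, (selected, total) in groups.items()}

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(groups)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    applicants = {
        "group_a": (48, 120),   # 40.0% selected
        "group_b": (30, 100),   # 30.0% selected
    }
    for group, ratio in impact_ratios(applicants).items():
        flag = "potential adverse impact" if ratio < 0.8 else "above 80% threshold"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this illustration, group_b's selection rate is 75 percent of group_a's, which falls below the 0.8 threshold and would surface as potential adverse impact in the audit summary.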

The audit results must be published on the employer's website for at least six months after the tool is used in hiring. If the employer doesn't have a website, the results must be made available to candidates upon request. This public disclosure requirement has teeth: a bias audit that reveals adverse impact doesn't automatically prohibit use of the tool, but it does create public accountability and a documented record that plaintiff attorneys can reference.

Candidate Notification Requirements

Beyond the audit, Local Law 144 requires employers to notify candidates that an AEDT will be used in their evaluation. This notice must be provided at least 10 business days before the tool is applied, must describe the type of data the tool uses, and must explain that candidates can request an alternative selection process or accommodation if available.

For NYC-based recruiting teams, this means incorporating disclosure language into job postings or application materials and building an accommodation workflow that can actually be fulfilled. A disclosure that promises an alternative process but provides no operational pathway to request or receive one creates liability rather than protection.
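
One practical implication is that the application workflow needs to know, for each candidate, the earliest date the AEDT may be applied. Below is a rough sketch of that date arithmetic, assuming notice goes out on the application date and counting only weekdays; holiday handling and the exact counting convention are assumptions to confirm with counsel.

```python
# Sketch: earliest date an AEDT may be applied, 10 business days after notice.
# Weekend-only skipping; holidays and the counting convention are assumptions
# to confirm with employment counsel.
from datetime import date, timedelta

def earliest_aedt_date(notice_date: date, business_days: int = 10) -> date:
    current = notice_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

print(earliest_aedt_date(date(2026, 3, 16)))  # notice on a Monday -> 2026-03-30
```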

Enforcement and Penalties

The New York City Department of Consumer and Worker Protection enforces Local Law 144. Civil penalties run up to $500 for a first violation and from $500 to $1,500 for each subsequent violation. Each day of non-compliance can constitute a separate violation, and each candidate affected may represent a distinct violation. For an employer processing high volumes of applications using a non-compliant AEDT, the cumulative exposure can become significant quickly.

Importantly, the law has private right of action implications that continue to develop. While the city agency handles civil penalties directly, candidates who were screened using non-compliant tools may also pursue broader discrimination claims under New York City Human Rights Law, which is one of the most plaintiff-friendly employment statutes in the country.


Illinois AI Video Interview Act: Consent, Data, and Expanding Scope

Illinois was ahead of the curve on video interview regulation. The Illinois Artificial Intelligence Video Interview Act, effective since January 2020 and expanded in subsequent legislative sessions, applies to any employer using AI to analyze video interviews of candidates for positions based in Illinois. Given that remote hiring has made geographic jurisdiction more complex, employers recruiting for fully remote roles filled by Illinois residents should treat this law as applicable.

Consent and Explanation Requirements

Before an employer uses AI to evaluate a video interview, it must notify the applicant that AI may be used and explain how the AI works. This includes describing the general characteristics the technology evaluates, such as the candidate's language patterns, facial expressions, or responses to structured questions. Employers must also obtain the candidate's explicit consent before proceeding.

This is a meaningful operational requirement. The consent must be affirmative, not implied by the act of submitting an application. Employers deploying asynchronous video interview platforms with AI scoring components need a documented consent workflow that predates the candidate's recorded session.

Data Deletion Obligations

Illinois also imposes strict data-handling requirements. Employers may share the candidate's video only with persons whose expertise or technology is necessary to evaluate the applicant. And upon a candidate's request, the employer must delete the video, and instruct everyone who received a copy to do the same, within 30 days of that request.

For employers managing candidates across dozens of open roles with multiple ATS integrations, building a video data retention and deletion workflow that actually executes on these timelines is a non-trivial operational challenge. Many compliance failures in this area aren't intentional. They're the result of inadequate data governance systems that don't surface retention deadlines or connect vendor storage policies to employer obligations.
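
There is no prescribed implementation, but even a lightweight tracker that records when each deletion obligation was triggered and when it comes due can close the gap between a vendor's storage policy and the employer's own deadline. Below is a minimal sketch assuming a 30-day window measured from the candidate's request; the field names and the upstream systems that would populate them are hypothetical.

```python
# Sketch of a retention tracker for interview-video deletion obligations.
# The 30-day-from-request window reflects the Illinois requirement; field names
# and the systems feeding this data are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DeletionObligation:
    candidate_id: str
    video_id: str
    requested_on: date            # date the candidate asked for deletion
    deadline_days: int = 30

    @property
    def due_by(self) -> date:
        return self.requested_on + timedelta(days=self.deadline_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.due_by

# Usage: surface anything already overdue or due within the next 7 days.
obligations = [
    DeletionObligation("cand-001", "vid-789", requested_on=date(2026, 2, 20)),
]
today = date(2026, 3, 15)
for ob in obligations:
    if ob.is_overdue(today) or ob.due_by <= today + timedelta(days=7):
        status = "OVERDUE" if ob.is_overdue(today) else "due soon"
        print(f"{ob.candidate_id}: delete {ob.video_id} by {ob.due_by} ({status})")
```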

Legislative Expansion: What Illinois Added

A 2021 amendment, effective in 2022, requires employers that rely solely on AI analysis to decide whether an applicant advances to an in-person interview to collect race and ethnicity data about those applicants and report it annually to the Illinois Department of Commerce and Economic Opportunity. This expanded reporting obligation is designed to surface patterns of disparate impact across the employer's candidate pool. It represents a meaningful step beyond disclosure into affirmative monitoring, and it signals where other states may head as their AI hiring frameworks mature.

California's Emerging AI Hiring Regulation Landscape

California has not yet enacted a single comprehensive AI hiring law equivalent to NYC Local Law 144, but its regulatory environment is among the most consequential in the country for employers. That influence comes from two directions: existing privacy law and an active legislative pipeline that is moving toward dedicated AI employment regulation.

CCPA and Candidate Data

The California Consumer Privacy Act, and its amendments under the California Privacy Rights Act, apply to the personal data of job applicants. For employers using AI hiring tools that process candidate data, this creates obligations around data transparency, deletion rights, and limitations on the sale or sharing of applicant information. Candidates in California have the right to know what data is being collected about them, to request deletion of that data, and in some contexts to opt out of certain data processing activities.

For AI recruiting tools that build predictive profiles from candidate behavior, this framework requires careful attention. An AI system that infers personality traits, communication styles, or cultural fit scores from application data may be processing sensitive personal information in ways that trigger CPRA obligations even if those outputs are used only internally.

Proposed California AI Legislation

California's legislature has considered multiple AI bills affecting employment in recent sessions. While not all have become law, the trajectory is clear: legislators are moving toward requirements that include algorithmic impact assessments, transparency obligations for AI systems used in consequential decisions, and heightened scrutiny of automated tools in hiring. Employers operating in California should treat current CPRA obligations as a floor, not a ceiling, and structure their AI hiring governance accordingly.

Colorado SB 205: High-Risk AI and Governance Obligations

Colorado's SB 205, signed into law in 2024 and effective in 2026, takes a broader approach than most state AI hiring regulations. Rather than focusing exclusively on employment decisions, it establishes a governance framework for high-risk artificial intelligence systems used in consequential decisions, and hiring decisions are explicitly included within that scope.

What Makes a System High-Risk Under Colorado Law

Under SB 205, a high-risk AI system in the employment context is one that makes or substantially assists a consequential decision involving hiring, firing, promotion, or compensation. The law requires developers of these systems to implement risk management programs and provide deployers, meaning employers, with documentation about the system's known limitations and appropriate use cases. For employers, this translates into an obligation to conduct impact assessments before deploying high-risk AI systems and to implement measures that protect applicants from algorithmic discrimination.

Governance and Documentation Requirements

Colorado employers using covered AI hiring tools must maintain a risk management policy, conduct annual impact assessments, provide notice to affected individuals that a high-risk AI system was used in a decision about them, and give those individuals the opportunity to appeal or seek human review. The appeals and review requirement is particularly significant: it means employers can't simply automate hiring decisions and move on. They need a human review pathway that is operationally functional, not just stated in policy documents.

Enforcement rests with the Colorado Attorney General's office, which can investigate violations and seek civil penalties; the law does not create a private right of action. Candidates who believe they were harmed by a non-compliant AI hiring system can still pursue claims under existing federal and state anti-discrimination law.


State-by-State Overview: Current AI Hiring Regulations

New York City, NY — Local Law 144 (AEDT Law)
Applies to: Employers using AEDTs to screen candidates for hiring or promotion in NYC roles
Key employer obligations: Annual independent bias audit; public disclosure of audit results; candidate notice 10 business days before use; accommodation pathway
Enforcement status: Active — enforced by the DCWP; civil penalties apply

Illinois — AI Video Interview Act (2020, expanded 2021)
Applies to: Employers using AI analysis of video interviews for Illinois-based roles
Key employer obligations: Candidate consent; explanation of the AI criteria evaluated; video deletion on request; demographic reporting
Enforcement status: Active — the statute does not specify an enforcement mechanism; private claims remain untested

Colorado — SB 205 (signed 2024, effective 2026)
Applies to: Deployers of high-risk AI systems used in consequential employment decisions
Key employer obligations: Impact assessments; risk management policy; candidate notice; human review pathway; annual reassessment
Enforcement status: Active in 2026 — enforced exclusively by the Colorado Attorney General

California — CPRA / proposed AI employment bills
Applies to: Employers processing applicant personal data; broader AI governance pending
Key employer obligations: Data transparency; deletion rights; limitations on profiling; anticipated algorithmic impact assessments
Enforcement status: CPRA active; dedicated AI hiring legislation pending

Maryland — HB 1202 (2020); interview notice requirements
Applies to: Employers using facial recognition or AI analysis in interviews
Key employer obligations: Applicant consent (signed waiver) before facial recognition is used in an interview; proposed impact assessment requirements in pending bills
Enforcement status: Partial — facial recognition consent requirement active; broader bills in progress

Washington State — Proposed automated decision legislation
Applies to: Employers using automated systems in employment decisions
Key employer obligations: Transparency, opt-out rights, and impact assessments under the proposed framework
Enforcement status: Pending — active legislative activity as of 2025-2026

Texas — Texas Responsible AI Governance Act (proposed)
Applies to: Deployers of high-risk AI systems, including in the employment context
Key employer obligations: Risk management; bias assessments; candidate notice
Enforcement status: Proposed — under legislative review

Federal (US-wide) — Title VII, ADA, EEOC guidance
Applies to: All US employers using AI in hiring decisions
Key employer obligations: No disparate impact on protected classes; ADA accommodations for AI-assessed processes
Enforcement status: Active — EEOC guidance in effect; Title VII litigation risk ongoing

Multi-State Employers: The Compliance Complexity Problem

If your organization hires across multiple states, or if you're a fully distributed employer filling roles that could be performed from anywhere in the US, you're operating in a genuinely complex jurisdictional environment. The laws don't necessarily align, and in some cases they overlap or create conflicting requirements.

The most practical approach for multi-state employers is to design hiring processes that satisfy the most demanding applicable standard, then overlay any jurisdiction-specific obligations on top. For most employers deploying AI tools at meaningful scale, that means treating NYC Local Law 144 and Colorado SB 205 as the baseline framework, layering Illinois consent and data deletion requirements onto any role where Illinois residents may apply, and applying CPRA candidate data obligations for California-based applicants. As additional states pass regulations, this layered model scales more cleanly than running a separate process for each jurisdiction.
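
One way to make the layered model concrete is to encode each jurisdiction's obligations as data and derive the union of requirements for any given role. The jurisdiction keys and obligation labels in the sketch below are illustrative shorthand, not a complete statement of any law.

```python
# Sketch: derive the union of compliance obligations for a role from the
# jurisdictions where candidates may apply. Labels are illustrative shorthand.

OBLIGATIONS = {
    "nyc_ll144":       {"annual_bias_audit", "public_audit_results",
                        "candidate_notice_10_business_days", "accommodation_pathway"},
    "illinois_aivia":  {"affirmative_consent", "ai_explanation",
                        "video_deletion_on_request", "demographic_reporting"},
    "colorado_sb205":  {"impact_assessment", "risk_management_policy",
                        "candidate_notice", "human_review_pathway"},
    "california_cpra": {"data_transparency", "deletion_rights", "profiling_limits"},
    "federal_baseline": {"no_disparate_impact", "ada_accommodations"},
}

def obligations_for(jurisdictions: list[str]) -> set[str]:
    """Union of obligations across every jurisdiction in scope, plus the federal floor."""
    required = set(OBLIGATIONS["federal_baseline"])
    for j in jurisdictions:
        required |= OBLIGATIONS.get(j, set())
    return required

# A fully remote role reachable by NYC, Illinois, and Colorado candidates:
print(sorted(obligations_for(["nyc_ll144", "illinois_aivia", "colorado_sb205"])))
```

A fully remote role reachable from all of these jurisdictions would need every obligation in the combined set, which is exactly the most-demanding-standard approach described above.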

Remote hiring adds additional complexity. When a candidate in Illinois applies via a platform that uses AI scoring, and the hiring decision is being made by a team in Texas for a role that could be performed anywhere, which state's law governs? The current consensus among employment attorneys is that the candidate's location at the time of application, combined with the location the role is being performed, determines applicable obligations. For most remote roles, this means assuming the broadest possible geographic scope when designing your compliance framework.

Building a Practical AI Hiring Compliance Framework

Compliance with AI hiring laws is not a one-time vendor check. It's an ongoing operational discipline that lives at the intersection of your recruiting process, your legal team, and your technology stack. Here's how to think about building that discipline in a way that's sustainable at scale.

Map Your Hiring Jurisdictions

Start by understanding where your candidates are applying from and where your roles will be performed. For distributed employers, this often means maintaining a living map of all applicable state and local AI hiring regulations, updated as new laws pass or take effect. Many compliance teams fail at this step not because they don't care, but because they haven't assigned clear ownership for tracking regulatory changes.

Audit Your AI Vendor Relationships

Every AI hiring tool you deploy creates shared compliance obligations. Before signing or renewing a contract with any vendor that performs automated screening, scoring, or evaluation of candidates, you need specific answers to a set of questions. Has the tool been subject to an independent bias audit? Are audit results available for your review and public disclosure? What data does the tool collect and how long is it retained? Can you configure the tool to satisfy your disclosure and accommodation obligations? Does the vendor support deletion workflows that align with Illinois and other state requirements?

These are not difficult questions to ask. But many employers still aren't asking them, and vendors vary widely in how they respond. A vendor who can't answer these questions clearly, or who deflects to vague contractual language, is a liability risk that needs to be addressed before the tool is deployed.

Design Disclosure and Accommodation Into the Process

Candidate notification shouldn't be buried in a terms-of-service document or footnoted at the bottom of an application. Under NYC Local Law 144, it must be provided 10 business days before the tool is applied. Under the Illinois AI Video Interview Act, it must be explicit and affirmative before the interview begins. Designing those touchpoints into your application flow from the start is straightforward if it's treated as a workflow design problem rather than a legal afterthought.

Accommodation pathways are the operational piece most often underdeveloped. A disclosure that tells candidates they can request an alternative process is meaningless if no one in your recruiting team knows how to handle that request. Define the alternative process, document it, train your recruiters on it, and make sure candidates who request it actually receive a timely response.

Document Everything

In the event of a complaint, audit, or litigation, your documentation is your defense. Maintain records of bias audit results, disclosure materials and their delivery dates, consent records for video interviews, vendor contracts specifying compliance obligations, and any accommodation requests and how they were handled. This documentation should be retained according to your jurisdiction's applicable employment record retention rules, which in most states means at minimum one to three years for hiring records.

The Biggest Employer Liability Risks in AI Hiring

It's worth being specific about where the actual exposure lies, because compliance discussions can sometimes become so abstract that the real risks get obscured.

Disparate impact claims are the highest-value litigation risk. If your AI screening tool systematically produces lower scores for candidates of a particular race, gender, or age group, you have a Title VII or ADEA problem regardless of whether the tool was built with any discriminatory intent. Adverse impact analysis, which is what the NYC bias audit process formalizes, is the mechanism for identifying and addressing this risk proactively.

Accessibility failures create ADA liability. A candidate with a stutter who performs poorly on an AI-analyzed video interview may have grounds for a disability discrimination claim if no accommodation was offered or if the accommodation process was so poorly executed that it was effectively unavailable.

Disclosure violations are often the most straightforward enforcement target. Unlike disparate impact claims, which require statistical analysis and often expert testimony, a failure to provide required candidate notice can be proven simply by showing that the notice wasn't given. Regulatory agencies looking to establish enforcement precedent tend to start with clear, documentable violations. Disclosure compliance is relatively low-cost to get right and high-risk to get wrong.

Vendor dependency risk is underappreciated. If your AI vendor changes their product, discontinues a compliance feature, fails a bias audit, or exits the market, your compliance program breaks. Employers who have built their entire AI hiring compliance posture around a vendor-managed process have no resilience when vendor circumstances change. Internal governance processes that exist independently of any specific vendor are the appropriate long-term answer.

What to Ask Your AI Hiring Vendors Right Now

If you're currently using any AI tool in your hiring process, or evaluating one for adoption, here are the questions that should be part of every vendor conversation.

  • Has an independent bias audit been conducted on this tool within the last 12 months, and can we see the full audit report, not just a summary?
  • Are audit results published in a format that satisfies NYC Local Law 144 public disclosure requirements?
  • How does the tool collect and store candidate data, and what deletion controls are available to satisfy Illinois and California obligations?
  • Can the tool generate documentation of which candidates were screened using AI, when, and what criteria were applied? This is essential for audit and accommodation workflows.
  • What is the vendor's process for addressing algorithm updates that might alter the tool's impact profile, and will we be notified before such changes are deployed?
  • Does the tool provide explainability outputs that allow recruiters to understand why a candidate was scored or ranked in a particular way?

Vendors who welcome these questions are demonstrating that they've built compliance considerations into their product development. Vendors who treat these as difficult or unusual questions are giving you important information about the risk profile of their tool.


Internal Governance: HR, Legal, and the Recruiter's Role

Compliance with AI hiring laws ultimately depends on how well your internal teams coordinate. Legal counsel needs to track regulatory changes and translate them into operational requirements. HR and recruiting operations need to implement those requirements in actual hiring workflows. Recruiters need to understand what they're responsible for disclosing, documenting, and escalating.

In practice, this means establishing a clear owner for AI hiring compliance, whether that's a senior HR leader, a compliance officer, or a cross-functional team. It means conducting quarterly or semi-annual reviews of all AI tools in your recruiting stack against current regulatory requirements. And it means creating recruiter training that is practical and role-specific, not a one-time policy acknowledgment that doesn't change daily behavior.

The distinction between ethical AI and legally compliant AI is worth making explicit here. An AI tool can technically pass a bias audit and still produce hiring outcomes that thoughtful practitioners would find problematic. The law sets a floor, not a ceiling. Employers who treat legal compliance as a complete answer to AI ethics in hiring are missing something important, both for candidate trust and for the quality of hiring decisions.

Candidate Trust as a Competitive Factor

There's a business case for transparency that sits alongside the legal argument. Candidates are increasingly aware that AI is being used in hiring processes, and they have strongly held views about it. Research consistently shows that candidates are more willing to accept AI-assisted decisions when they understand how the process works, when they feel the process was fair, and when they know they had an opportunity to present themselves as whole people rather than as data points.

Employers who proactively disclose their use of AI, clearly explain what the tool evaluates, and offer genuine accommodation pathways are building a candidate experience that reflects positively on their brand. Employers who deploy AI tools invisibly, provide no accommodation options, and offer no transparency about why candidates were or weren't progressed are creating an experience that generates trust deficits, even when their tools are technically compliant.

The Future of AI Hiring Regulation in the US

The regulatory trajectory is clear: more states will pass AI hiring laws, federal legislation is increasingly likely within the next two to three years, and the requirements will generally become more demanding rather than less. The early state laws focused primarily on bias audits and disclosure. Emerging legislation is adding impact assessments, human review pathways, individual appeal rights, and in some proposals, pre-deployment regulatory approval for high-risk systems.

Employers who build robust compliance infrastructure now will find it far easier to adapt to new requirements than those who are still scrambling to meet the 2023 and 2024 standards. The foundational elements, bias auditing, candidate transparency, accommodation pathways, vendor governance, and documentation, are common to every regulatory framework currently on the table. Building those capabilities now is not just about current compliance. It's about building an operational posture that can evolve as the legal landscape does.

The employers navigating AI hiring compliance most effectively aren't treating it as a legal department problem. They're treating it as a process design discipline, one that makes their hiring operations more transparent, more defensible, and more trusted by the candidates who move through them.

That framing, compliance as operational excellence rather than compliance as risk avoidance, is what separates organizations that are building durable hiring programs from those that are reacting to regulatory pressure one law at a time.

AI Hiring Compliance Is Now an Operational Requirement

The employers adapting fastest are building transparency, documentation, and candidate trust directly into their hiring workflows. Don't wait for a regulatory audit to find the gaps in yours.

Try Compliant AI Hiring Workflows for Free

Frequently Asked Questions

Are AI hiring tools legal in the United States?

Yes, AI hiring tools are legal in the United States, but their use is subject to a growing body of federal and state regulations. At the federal level, existing laws including Title VII of the Civil Rights Act and the Americans with Disabilities Act apply to AI-assisted hiring decisions. Several states and localities, including New York City, Illinois, and Colorado, have passed specific laws requiring employers to conduct bias audits, disclose AI use to candidates, and provide accommodation pathways. Legal use depends on compliance with all applicable requirements in the jurisdictions where candidates are located or roles will be performed.

What states currently have AI hiring regulations?

As of 2026, the most substantive AI hiring regulations are in New York City (Local Law 144, requiring annual bias audits and candidate notice), Illinois (the AI Video Interview Act, covering AI-analyzed video interviews), and Colorado (SB 205, establishing a high-risk AI governance framework effective 2026). California has strong applicant data obligations under the CPRA and active legislative proposals for dedicated AI employment regulation. Maryland has enacted targeted notice requirements for AI-analyzed interviews. Washington State and Texas have significant proposed legislation in progress, and the federal regulatory environment continues to develop through EEOC guidance and proposed legislation.

What is NYC Local Law 144 and how does it affect employers?

NYC Local Law 144 requires employers that use automated employment decision tools (AEDTs) to screen candidates for New York City roles to obtain an independent bias audit of those tools before deployment and annually thereafter. Audit results must be published on the employer's website. Employers must also notify candidates at least 10 business days before an AEDT is used in their evaluation, describe what data the tool analyzes, and offer an accommodation or alternative process upon request. The law is enforced by the NYC Department of Consumer and Worker Protection, with civil penalties of up to $500 for a first violation and $500 to $1,500 for each subsequent violation, and each day of non-compliance can count as a separate violation.

What is an AI bias audit and do employers have to conduct one?

An AI bias audit is an independent assessment of whether an AI hiring tool produces disparate impact, meaning systematically lower selection rates for candidates in protected groups based on characteristics such as race, sex, or ethnicity. Under NYC Local Law 144, employers must obtain an independent bias audit before deploying covered tools and annually thereafter, and must publish the results. Illinois and Colorado also require impact assessments, though the specific methodology and disclosure requirements differ. Employers in jurisdictions without explicit audit mandates should still consider voluntary auditing as a risk management measure, given potential Title VII liability for adverse impact in AI-assisted hiring.

What are employers required to disclose to candidates about AI hiring tools?

Disclosure requirements vary by jurisdiction. Under NYC Local Law 144, employers must notify candidates at least 10 business days before using an AEDT, disclose the type of data the tool collects and uses, and inform candidates of any accommodation options. Under the Illinois AI Video Interview Act, employers must notify candidates that AI will be used to analyze video interviews, explain what characteristics the AI evaluates, and obtain affirmative consent before proceeding. Colorado SB 205 requires employers to notify individuals that a high-risk AI system was used in a decision about them and explain the basis for that decision. At minimum, employers should treat proactive disclosure as a best practice regardless of jurisdiction, both for legal protection and to build candidate trust.