AI in Employment and Health Care
May 10, 2026
Two industries face more AI regulatory scrutiny than any others in the United States: employment and health care. The stakes in both are high -- a biased hiring algorithm can deny someone a livelihood, and an AI system that misrepresents itself as a mental health professional can put a vulnerable person in real danger. Lawmakers in both areas have moved well ahead of the general AI legislative curve, producing a patchwork of enacted laws, draft regulations, and advancing bills that employers, HR technology vendors, health care providers, and insurers need to track carefully.
This page covers enacted and advancing AI laws in employment and health care contexts. For the broader state law landscape, see our U.S. AI State Laws Tracker. For federal policy developments including the Trump executive order and the NIST AI Risk Management Framework, see our Federal AI Policy page.
Quick Navigation
Employment: Federal Baseline • Illinois • NYC Local Law 144 • California • Connecticut • Other States • Litigation Landscape
Health Care: California • Texas • Illinois • Indiana • Utah • Tennessee • 2026 Trends • Michigan Businesses
AI in Employment
Federal Baseline: Existing Law Already Applies
No federal law specifically regulates AI in employment, but that does not mean AI-driven employment decisions exist in a legal vacuum. Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act all apply to AI-driven decisions. In April 2023, the EEOC, DOJ Civil Rights Division, FTC, and Consumer Financial Protection Bureau issued a joint statement on enforcement efforts against discrimination and bias in automated systems, signaling that federal agencies view algorithmic discrimination as squarely within existing civil rights law.
The EEOC has identified AI hiring tools as a priority enforcement area and has issued guidance on how the ADA and Title VII apply when employers use AI to screen applicants or make promotion decisions. Employers cannot outsource their legal obligations to a vendor -- if a third-party AI tool produces discriminatory outcomes, the employer remains liable.
Illinois: The Most Comprehensive Employment AI Framework
Illinois has built the most layered set of employment AI requirements of any state, starting with the nation's first AI hiring law in 2020 and expanding significantly in 2026.
The Artificial Intelligence Video Interview Act (AIVIA, 820 ILCS 42), effective January 2020, was the first AI hiring law in the country. It applies to any employer using AI to analyze video interviews of applicants. Before the interview, employers must notify applicants that AI will be used and explain what characteristics it evaluates. Employers must obtain consent, limit sharing of video recordings to only those personnel whose expertise is necessary to evaluate the applicant, and destroy recordings within 30 days of an applicant's request.
Illinois HB 3773, effective Jan. 1, 2026, expands that framework significantly by amending the Illinois Human Rights Act. The law prohibits employers from using AI -- including generative AI -- for recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or any other term or condition of employment, if such use results in discrimination based on a protected class. Notably, this prohibition applies regardless of whether the discrimination was intentional. The law also explicitly bars using zip codes as a proxy for protected characteristics in AI-driven employment decisions, recognizing that zip code correlates strongly with race, national origin, and socioeconomic status.
Employers subject to HB 3773 must provide notice to applicants and employees when AI is used in covered employment decisions. The Illinois Department of Human Rights is tasked with issuing implementing rules and has released draft regulations, "Subpart J: Use of Artificial Intelligence in Employment," which clarify notice and recordkeeping requirements. Enforcement falls to the IDHR and the Illinois Human Rights Commission, with remedies that can include back pay, reinstatement, emotional distress damages, and attorney fees.
New York City: Local Law 144 and Its Enforcement Challenges
New York City's Automated Employment Decision Tools (AEDT) law, Local Law 144, has been in effect since July 5, 2023, and remains the most stringent employment AI law currently in force in the United States. The law applies to employers and employment agencies that use automated decision tools for hiring or promotion decisions affecting positions located in New York City.
Covered employers must commission an annual independent bias audit by a third-party auditor -- neither the employer nor the tool vendor can conduct the audit. The auditor calculates impact ratios by dividing each demographic group's selection or scoring rate by the highest-performing group's rate. An impact ratio below 80% signals potential disparate impact under the four-fifths rule. Audit results must be publicly disclosed on the employer's website for at least six months and must include the audit date, data sources, applicant counts by group, and impact ratios. Employers must also notify candidates at least 10 business days before using an AEDT and must offer a reasonable alternative assessment to any candidate who objects. Penalties start at $500 per violation for a first offense and escalate to $1,500 for subsequent violations.
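To make the audit arithmetic concrete, here is a minimal Python sketch of the impact-ratio calculation described above. The group labels and counts are hypothetical, and a real Local Law 144 audit must follow DCWP's rules on required demographic categories, intersectional breakdowns, and small-sample handling -- this illustrates only the core selection-rate and four-fifths computation.

```python
# Hypothetical applicant counts and selection counts per demographic
# category. A real bias audit uses the categories DCWP prescribes,
# including intersectional (sex x race/ethnicity) breakdowns.
audit_data = {
    "Group A": {"applicants": 400, "selected": 120},
    "Group B": {"applicants": 250, "selected": 55},
    "Group C": {"applicants": 150, "selected": 27},
}

# Step 1: selection rate = selected / applicants for each group.
rates = {
    group: counts["selected"] / counts["applicants"]
    for group, counts in audit_data.items()
}

# Step 2: impact ratio = each group's rate divided by the highest
# group's rate. Ratios below 0.80 signal potential disparate impact
# under the four-fifths rule.
highest_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest_rate
    flag = "  <-- below 0.80 (four-fifths rule)" if ratio < 0.80 else ""
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f}{flag}")
```

In this hypothetical run, Group A's 30% selection rate is the benchmark; Group B's 22% rate yields an impact ratio of 0.73 and Group C's 18% rate yields 0.60, so both would be flagged for further review.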
A December 2025 audit by the New York State Comptroller found enforcement of Local Law 144 "ineffective," citing limited compliance monitoring and inconsistent enforcement activity by the city's Department of Consumer and Worker Protection. In response, DCWP shifted to proactive investigations in 2026. Employers and vendors that have not completed required audits should treat this enforcement posture change as an immediate compliance trigger.
California: ADMT Regulations and FEHA Protections
California has taken a two-track approach to employment AI regulation. Under the California Fair Employment and Housing Act, the Civil Rights Department issued automated-decision system (ADS) regulations effective October 1, 2025. The regulations make it unlawful to use automated-decision systems in ways that discriminate against employees or applicants based on protected characteristics. Employers must conduct bias testing, preserve ADS records for four years, provide notice to applicants and employees when ADS tools are used in covered decisions, and allow individuals to opt out of automated evaluation where a reasonable alternative exists.
Separately, the California Privacy Protection Agency's automated decision-making technology (ADMT) regulations that took effect Jan. 1, 2026 require risk assessments for significant automated decisions, with broader consumer opt-out rights and pre-use notice obligations phasing in on April 1, 2027. Companies using AI in employment decisions in California should assess whether they are subject to both frameworks.
In 2025, Gov. Newsom vetoed SB 7 (the No Robo Bosses Act), which would have required human oversight for all significant employment decisions made or assisted by AI. The veto signals California's preference for targeted regulation over broad mandates, but additional employment AI bills continue to advance in the legislature.
Connecticut: SB 5 Employment Provisions
Connecticut's Artificial Intelligence Responsibility and Transparency (AIRT) Act, passed on May 1, 2026 and awaiting the governor's signature, establishes one of the broadest employment AI frameworks in the country. The law defines "automated employment-related decision technology" broadly to include any system that processes personal data and produces outputs -- scores, ranks, recommendations, classifications -- that substantially influence an employment decision. That definition sweeps in resume screening software, AI interview platforms, performance analytics, and scheduling algorithms.
Developer obligations take effect Oct. 1, 2026: developers of covered tools must provide deployers with information about the tool's purpose, training data categories, known limitations, and anti-bias testing results. Core deployer obligations -- notifying affected employees and applicants of AI use, its purpose, the data categories involved, and their sources -- take effect the same day, Oct. 1, 2026, with pre-decision notice requirements following on Oct. 1, 2027.
The law also amends Connecticut's anti-discrimination statutes to make clear that using an automated tool is not a defense to an employment discrimination claim. Courts and the Connecticut Commission on Human Rights and Opportunities will consider evidence of proactive anti-bias testing as a mitigating factor, but such testing does not eliminate liability. Beginning Oct. 1, 2026, any employer filing a WARN Act layoff notice with the state must disclose whether AI or another technological change contributed to the reduction in force. Enforcement runs through the attorney general as unfair or deceptive trade practices, with a cure-notice provision available through the end of 2027.
Other States and Advancing Bills
Maryland's HB 1202 (2020) prohibits employers from using facial recognition technology during a job applicant's interview without written consent from the applicant. While narrow in scope, it applies to any AI tool with a facial analysis component used during the hiring process.
Colorado's SB 24-205, if it survives litigation and legislative replacement, would cover employment AI as a category of high-risk AI making consequential decisions. Its replacement bill, SB 26-189, would also cover automated decision-making in employment contexts, though with a disclosure-focused rather than impact-assessment framework. See our Colorado AI Act Compliance Guide for current status.
As of May 2026, active employment AI bills are advancing in more than a dozen states. Washington, New Jersey, Minnesota, New York state, Massachusetts, and California all have employment AI bills in committee or on floor calendars. Most follow one of two templates: the Colorado impact-assessment model or the Connecticut/Illinois disclosure model. The trajectory is clear -- what New York City introduced in 2021 as a local ordinance is becoming a national standard for AI hiring regulation.
The Litigation Landscape
Employment AI litigation is no longer theoretical. Several cases have established real employer and vendor exposure that compliance programs need to account for.
In Mobley v. Workday, a federal court certified a nationwide collective action in 2025 alleging that Workday's AI resume screener discriminated against older, disabled, and minority applicants. One plaintiff was rejected from over 100 jobs. The case is significant because it establishes that software vendors can be held liable as employer "agents" under federal anti-discrimination law -- not just the employers who deployed the tool.
EEOC v. iTutorGroup resulted in a $365,000 settlement in 2023 after the company's AI automatically rejected women over 55 and men over 60, confirming that algorithmic discrimination is a top EEOC enforcement priority. CVS settled in 2024 after AI-powered video interviews allegedly rated facial expressions for "employability" in ways that violated Massachusetts law. HireVue has faced ACLU claims that its video assessment platform discriminated against Indigenous and deaf applicants who were not offered appropriate accommodations.
These cases collectively establish three patterns employers should internalize: vendors cannot insulate employers from liability, facial and behavioral analysis tools carry elevated legal risk, and the EEOC is actively monitoring the space.
AI in Health Care
California: Health Care AI Disclosures
California has been the most active state on health care AI regulation. AB 3030, effective January 2025, requires health care providers using AI to generate patient communications to include a clear disclosure that the content was AI-generated and to provide patients with a way to reach a human. The law covers AI-generated patient instructions, care summaries, and other written communications.
AB 489, effective Jan. 1, 2026, prohibits developers and deployers of AI systems from using terms, letters, phrases, or design elements that indicate or imply the AI possesses a health care license or credential. The law applies broadly to any AI system that communicates directly with patients in a health care context, and violations can trigger enforcement by the relevant professional licensing boards.
Texas: Provider Disclosure Requirements Under TRAIGA
Texas's Responsible Artificial Intelligence Governance Act (TRAIGA), effective Jan. 1, 2026, contains specific provisions for licensed health care practitioners. Practitioners must provide patients -- or their personal representatives -- with conspicuous written disclosure of any AI use in the diagnosis or treatment of the patient before or at the time of the relevant interaction. In emergency situations, disclosure must be provided as soon as reasonably practicable. TRAIGA also prohibits the use of AI systems with the specific intent to discriminate against individuals based on protected characteristics in health care contexts, though disparate impact alone is not sufficient to establish a violation.
Illinois: First State to Ban AI Therapy Chatbot Misrepresentation
Illinois became the first state to ban the commercial use of AI therapy chatbots that could mislead users about the nature of mental health services, with a law that took effect Aug. 1, 2025. The law prohibits AI systems from representing themselves as licensed mental health professionals or as capable of providing equivalent services. It applies broadly to chatbot developers and deployers operating in commercial contexts, not just clinical settings.
Indiana: AI Downcoding and Insurer Disclosure
Indiana enacted two distinct health care AI laws. The earlier of the two requires health care professionals to disclose to patients when AI is used in health care decisions or communications, with parallel disclosure requirements for insurers when AI influences coverage or treatment determinations.
The second, enacted March 4, 2026 and effective July 1, 2026, addresses one of the most contested uses of AI in health insurance: automated claim downcoding. When a provider submits a claim at a particular billing code, insurers have increasingly used AI to reduce that code to a lower-reimbursing one automatically and at scale. Indiana's law prohibits health insurers from using AI as the sole basis to downcode a claim without reviewing the covered individual's actual medical record. Insurers must also disclose to providers when AI was used in any downcoding decision. With the July 1, 2026 effective date approaching quickly, health insurers operating in Indiana should review their AI claim-review workflows and disclosure mechanisms now.
Utah: Prior Authorization Transparency
Utah enacted legislation on March 19, 2026, effective Jan. 1, 2027, requiring health insurers to publicly disclose whether AI is used to review prior authorization requests. Prior authorization -- the process by which insurers require approval before covering certain treatments, medications, or procedures -- has become one of the most contested areas of health AI regulation. The use of AI to screen or decide prior authorization requests at scale has drawn criticism from providers and patient advocates who argue that AI systems deny medically necessary care based on statistical patterns rather than individual clinical assessment. Utah's law does not prohibit AI use in prior authorization; it requires transparency about whether AI is being used at all.
Tennessee: Mental Health AI Restrictions
Tennessee Gov. Bill Lee signed SB 1580 on April 1, 2026, making Tennessee one of the first states to specifically regulate how AI can be marketed in the mental health space. Effective July 1, 2026, the law prohibits any person who develops or deploys an AI system from advertising or representing to the public that the system is, or is able to act as, a qualified mental health professional. That definition encompasses licensed psychiatrists, psychologists, psychological examiners, social workers, and marital and family therapists under Tennessee law.
Despite being less than one page long, SB 1580 carries significant compliance weight because it includes a private right of action. Violations constitute unfair or deceptive acts under the Tennessee Consumer Protection Act, subject to civil penalties of $5,000 per violation. Businesses that develop or deploy AI products in any proximity to the mental health space -- including wellness apps, chatbots, and AI-assisted therapy platforms -- should review their marketing language carefully before July 1.
Maine enacted similar legislation in 2026, prohibiting any person from providing, advertising, or offering therapy or psychotherapy services using AI unless the services are provided by a licensed professional. South Carolina's Senate unanimously passed a comparable bill regulating AI use in therapy and psychotherapy, which was still advancing as of early May 2026.
Health Care AI: 2026 Trends
The scale of health care AI legislation in 2026 has been striking. More than 240 bills addressing AI use in health care contexts have been introduced across 43 states in 2026 alone, according to tracking by Manatt Health and ComplianceHub. The legislation clusters around several themes.
Insurer AI restrictions have become a focal point, with particular attention on AI downcoding and prior authorization. Indiana's downcoding law is the most advanced enacted version, but bills with similar provisions have been introduced in California, Connecticut, Illinois, Indiana, Maryland, Missouri, and Oregon in 2026. Prior authorization AI disclosure requirements are advancing in multiple states following Utah's enactment.
Mental health AI licensing requirements are the fastest-moving category. Over 30 bills introduced in 2026 include prohibitions on chatbots representing themselves as licensed professionals, following templates established in Illinois (Aug. 1, 2025) and California (Jan. 1, 2026). Tennessee's newly signed SB 1580 is the most recent enacted version, and similar bills are advancing in Louisiana, Kansas, and Missouri.
Consent requirements for AI in clinical contexts are emerging as a distinct category. Several 2026 bills -- in Virginia, Florida, South Carolina, and Maine -- go beyond disclosure to require affirmative patient consent before AI tools are used in clinical interactions, particularly in mental health settings. This is a meaningful shift from the notification-only approach most states have taken so far.
At the federal level, the Advanced Research Projects Agency for Health (ARPA-H) launched a 39-month initiative called ADVOCATE to develop and deploy the first FDA-authorized agentic AI system for clinical care, including a patient-facing agent capable of autonomously adjusting appointments, medications, diet, and exercise. That development signals that federal health AI activity is moving in the direction of deployment, not just oversight.
What This Means for Michigan Businesses
Michigan manufacturers, law firms, accounting firms, and health care providers all have direct exposure to employment and health care AI laws -- even without Michigan-specific legislation in these areas.
On the employment side, any Michigan company using AI tools for hiring, screening, or promotion decisions involving employees or candidates in Illinois, New York City, California, or Connecticut must comply with those jurisdictions' requirements now. A Michigan manufacturer with an Illinois facility using AI-assisted resume screening has AIVIA and HB 3773 obligations today. A professional services firm with New York City employees using AI promotion tools needs a completed bias audit. The territorial scope of these laws is based on where the affected employee or candidate is located, not where the employer is headquartered.
The Workday litigation also carries a direct warning for Michigan companies that rely on third-party AI hiring tools: vendor liability does not eliminate employer liability. Contracts with AI vendors should clearly allocate compliance responsibilities, require documentation of bias testing, and specify what happens if the tool produces discriminatory outcomes. Reviewing and updating those agreements is a practical near-term step regardless of which states your employees are in.
For health care providers and health plans operating in Michigan, the patchwork of state laws creates compliance obligations whenever you serve patients or plan members in covered states. Indiana's downcoding law is particularly relevant for any insurer or third-party administrator processing claims from Indiana-based members -- the July 1, 2026 deadline is weeks away. Tennessee's SB 1580 applies to any AI product marketed as a mental health tool to Tennessee users, including apps with national distribution.
Michigan's own Civil Rights Commission has signaled interest in pursuing algorithmic discrimination protections similar to Colorado and Illinois, and the state's split legislature is actively considering chatbot legislation. The likely direction of Michigan law over the next one to two years is toward disclosure and anti-discrimination requirements that mirror what Illinois already requires. Companies building internal AI governance programs now will be well positioned when Michigan-specific obligations arrive.
For assistance navigating AI compliance requirements in employment, health care, and cybersecurity, contact STACK Cybersecurity.