Can Your Employer Use AI to Decide Whether to Hire or Fire You in California?
Algorithms are making employment decisions about California workers right now. Automated systems screen resumes, score interviews, rank candidates for promotion, flag employees for discipline, and even determine who gets laid off. Many workers never know an algorithm played a role in the decision that changed their career — and that is exactly the problem California’s updated Fair Employment and Housing Act regulations are designed to address.
Effective October 1, 2025, the California Civil Rights Council finalized regulations making clear that the use of artificial intelligence and automated decision systems in employment is subject to the same anti-discrimination protections that have governed human decision-making for decades under FEHA. If an AI tool produces discriminatory outcomes based on race, gender, age, disability, or any other protected characteristic, the employer is liable — even if the employer did not intend to discriminate and even if a third-party vendor built the tool.
The Nourmand Law Firm, APC represents California employees who have experienced employment discrimination through hiring algorithms, automated performance evaluations, and AI-driven termination decisions. If you believe an employer’s use of technology played a role in denying you a job, a promotion, or your continued employment, call 800-700-WAGE for a free consultation.
What Qualifies as an Automated Decision System Under California Law?
An automated decision system (ADS) under the updated FEHA regulations is any computational process — including artificial intelligence, machine learning, algorithms, or statistical modeling — that makes or assists in making decisions about job applicants or employees. The definition is deliberately broad.
Common examples of automated decision systems used in California workplaces include resume-screening software that filters applicants based on keyword matches or predictive scoring, video interview platforms that analyze speech patterns and facial expressions, scheduling algorithms that assign shifts or determine availability, performance-tracking tools that generate productivity scores in warehouses and distribution centers, and workforce reduction models that rank employees for layoff eligibility.
Basic office software such as email, spreadsheets, and word processors does not fall within the definition. But any tool that evaluates, scores, ranks, or recommends employment actions crosses the threshold — and subjects the employer to FEHA liability if the results discriminate against protected groups.
How Can AI Tools Discriminate Against California Workers?
AI systems learn from historical data. When that data reflects decades of biased hiring patterns, pay disparities, or discriminatory promotion practices, the algorithm reproduces those patterns at scale — often without anyone recognizing the bias until the damage is done.
A resume-screening tool trained on a company’s past hiring data might systematically downrank applicants with names associated with certain racial or ethnic backgrounds. An interview-scoring platform could penalize candidates with speech patterns linked to a disability or non-native English proficiency, which implicates both disability discrimination and national origin discrimination under Government Code § 12940.
Performance-monitoring algorithms in warehouses throughout Riverside, San Bernardino, and Stockton may impose productivity quotas that fail to account for workers who need reasonable accommodations under FEHA. When a worker with a disability or a medical condition cannot meet an algorithm’s threshold, and the employer terminates that worker based on the automated score, the result is unlawful discrimination — regardless of whether a human manager pressed the final button.
Lawsuits filed in 2025 and early 2026 have alleged that major AI hiring platforms generate opaque scores that exclude older workers and perpetuate racial bias, with the logic behind those decisions hidden from both the affected workers and regulators.
What Does California Law Require of Employers Who Use AI in Employment Decisions?
The updated FEHA regulations impose several concrete obligations on employers using automated decision systems:
Anti-bias testing. Employers should conduct bias audits of their AI tools before and after adoption. Courts will evaluate the quality, scope, recency, and results of any testing — and whether the employer acted on findings that revealed discriminatory patterns. One-time vendor assurances or outdated audits will carry little weight in a discrimination claim.
Record retention. Employers must preserve all ADS-related data for at least four years, including the input data, scoring criteria, output rankings, and results of any bias testing. This expanded retention period — doubled from the previous two-year requirement — gives workers and their attorneys a longer window to obtain evidence supporting discrimination claims.
Vendor accountability. An employer cannot escape liability by pointing to a third-party vendor. Under the regulations, anyone acting on behalf of the employer to perform a FEHA-regulated activity — including recruitment, screening, or promotion decisions conducted through an automated system — is considered the employer’s agent. If the vendor’s tool discriminates, the employer is responsible.
Reasonable accommodation obligations. Employers must consider whether an applicant or employee needs a reasonable accommodation when an ADS is part of the screening or evaluation process. A chatbot interview that cannot accommodate a hearing-impaired applicant, or a timed assessment that disadvantages a worker with a cognitive disability, may violate FEHA’s interactive process requirements.
What Rights Do California Workers Have When AI Is Used Against Them?
Workers subjected to adverse employment actions driven by automated decision systems have the same rights as workers harmed by human decision-makers. Under Government Code § 12940, it is unlawful for an employer to discriminate against any individual because of a protected characteristic — and the method of discrimination does not change the analysis. An algorithm that produces a discriminatory outcome violates the statute just as a biased manager would.
California workers can file complaints with the California Civil Rights Department (CRD), which has enforcement authority over FEHA violations including those arising from AI-driven discrimination. Workers may also pursue private lawsuits seeking damages for lost wages, emotional distress, and — in egregious cases — punitive damages.
One of the most significant challenges for workers in AI discrimination cases is transparency. Many automated systems operate as opaque processes where the scoring criteria, weighting factors, and training data are hidden from the people whose careers depend on them. The four-year record-retention requirement under the new regulations gives workers and their legal counsel a stronger foundation to demand discovery of ADS data in litigation.
Workers in industries that rely heavily on algorithmic management — including logistics, warehousing, healthcare staffing, gig platforms, and food service — should pay particular attention to these protections. Employees in Bakersfield, Fontana, Long Beach, Oakland, and throughout the Central Valley and Inland Empire work in sectors where automated scheduling, productivity tracking, and workforce reduction tools are already widespread.
What Should You Do If You Suspect AI-Driven Discrimination?
Document everything you can about the decision that affected your employment. If you were denied a job, ask the employer whether automated tools were used in the screening process. If you were terminated or disciplined based on a performance score, request information about how that score was calculated and what data was used.
Under the updated FEHA regulations, applicants and employees should receive notice explaining when and how automated decision systems are being used in employment decisions. If your employer failed to provide that notice, that failure itself may support a discrimination claim.
Keep records of your qualifications, performance history, and any communications suggesting that the employer’s stated reason for the adverse action does not hold up. If the timing of the decision aligns with a request for accommodation, a pregnancy disclosure, a complaint about harassment, or any other protected activity, that pattern may indicate that the automated system served as a pretext for unlawful retaliation.
Hold Employers Accountable for Algorithmic Discrimination
The Nourmand Law Firm, APC has fought for the rights of California employees for more than two decades, recovering millions of dollars in discrimination, wrongful termination, and wage theft cases across the state. Whether a biased manager or a biased algorithm caused the harm, the legal protections are the same — and so is the firm’s commitment to holding employers accountable. Call 800-700-WAGE or contact us online for a free, confidential consultation. Se Habla Español.