Question: I heard that California is going to adopt new laws that limit an employer’s ability to use AI when making decisions regarding employees.  Is that true?

Answer: The California Civil Rights Department (“CRD”) has long been considering regulations governing California employers’ use of artificial intelligence (“AI”) and automated decision-making systems (“Automated Decision Systems”) to make, or facilitate, employment decisions such as recruitment, hiring, and promotion.

On March 21, 2025, the Civil Rights Council, a branch of the CRD, approved the final version of its proposed “Employment Regulations regarding Automated Decision Systems.”  If adopted, these regulations (the “AI Regulations” or “Regulations”) would regulate an employer’s use of AI and other automated decision-making systems in the employment context.  Broadly speaking, the AI Regulations would clarify that California employers may not use AI or Automated Decision Systems in a manner which violates existing anti-discrimination laws, like the Fair Employment and Housing Act (“FEHA”).

The definitions in the Regulations provide guidance on how broadly they would apply. First, the Regulations define AI as any machine-based system which “infers, from the input it received, how to generate outputs,” like “predictions, content, recommendations, or decisions.” This includes machine learning systems, which can “use and learn from its own analysis of data or experience and apply [that] learning automatically in future calculations or tasks.” This definition is not limited to standalone tools like ChatGPT; covered AI could be built into various HR software programs.

Second, “Automated Decision Systems” are any “computational processes” that make a decision, or facilitate human decision-making, “regarding an employment benefit,” such as hiring or promotion decisions.  Automated Decision Systems include systems derived from AI and machine learning, like puzzles, tests, or games that employers use to evaluate job applicants or employees, target job advertisements, screen resumes, or process similar data “acquired from third parties.”  However, more traditional tools, like word-processing or spreadsheet programs, are generally not covered.  For example, an employer’s use of ChatGPT to make employment decisions would be subject to the AI Regulations, but an employer’s use of Microsoft Word would not.

Third, the term “agent” would be defined to include any person who relies—in whole or in part—on an Automated Decision System to perform employment functions, like applicant recruitment or screening, hiring, promotion, or decisions regarding pay or benefits.  And because agents are considered “employers” under the FEHA, they could be subject to liability for the discriminatory use of Automated Decision Systems.

The AI Regulations would confirm that AI cannot be used to circumvent California’s anti-discrimination laws.  In the hiring context, the Regulations would prohibit employers from using an Automated Decision System or “selection criteria” (which include a “qualification standard, employment test, or proxy”) that discriminate against an applicant, or class of applicants, on any basis protected by the FEHA.  For example, an employer may not screen job applicants using tests, questions, games, or puzzles that are “likely to elicit information about a disability.”  Similarly, employers generally may not use AI to screen for an applicant’s prior criminal history.

The AI Regulations are currently pending final adoption by the California Office of Administrative Law.  If approved, California would become one of the first jurisdictions to issue comprehensive regulations for the use of AI in the employment context.  Employers who use AI to make, or facilitate, employment decisions should keep in touch with their labor counsel regarding the status of the final adoption of the AI Regulations.