Question: I own a small, local business and lack the time or resources to recruit new employees. I’m considering using an artificial intelligence (AI) system to automate my recruitment and onboarding process. Would this also help eliminate the legal risks with this process?
Answer: Not necessarily. While AI may enhance efficiency and productivity, there are concerns about its potential to perpetuate bias and discrimination. Employers who use AI tools in the workplace must still ensure that their use of this new technology does not violate long-standing federal and state anti-discrimination laws. The rise of AI, and the speed at which employers are implementing it in the workplace, has prompted federal and state regulators to act. The result, for now, is a patchwork of evolving federal and state rules and regulations.
For example, last year the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance on the use of AI systems in a range of HR-related tasks. The main takeaway from this guidance is that an AI-driven process may inadvertently discriminate against protected groups. For instance, if a recruitment algorithm screens out individuals with a gap in employment, that algorithm could have a disparate impact on individuals who took extended time off due to protected medical conditions. In its guidance, the EEOC places the burden of compliance on employers, meaning an employer may be liable for the effect of an algorithm created by a third-party vendor.
More recently, on May 17, 2024, the California Civil Rights Council proposed new regulations aimed at ensuring AI tools prevent—rather than perpetuate—discrimination based on protected characteristics. Among other updates, the new regulations would:
- Clarify that it is a violation of California law to use an automated decision-making system if it harms applicants or employees based on protected characteristics.
- Ensure employers maintain employment records, including automated decision-making data, for a minimum of four years.
- Affirm that the use of an automated decision-making system alone does not replace the requirement for an individualized assessment when considering an applicant’s criminal history.
- Clarify that third parties are prohibited from aiding and abetting employment discrimination, including through the design, sale, or use of an automated decision-making system.
- Provide clear examples of tests or challenges used in automated decision-making system assessments that may constitute unlawful medical or psychological inquiries.
A similar development is occurring in the California Legislature. Earlier this year, the California State Assembly Privacy and Consumer Protection Committee introduced Assembly Bill 2930, which seeks to prohibit bias and “algorithmic discrimination” by automated decision tools (ADTs).
Under the proposed legislation, employers who use ADTs to make consequential decisions must conduct impact assessments, in which they evaluate the benefits and risks of using such technology. The proposed legislation also requires employers to provide certain notices to impacted employees. If AB 2930 becomes law, it would apply to employers with 25 or more employees, as well as to employers with fewer than 25 employees that deployed an ADT impacting more than 999 people per year. Moreover, the impact assessments would need to be completed by January 1, 2026.
These developments highlight the importance of addressing potential bias and discrimination in automated systems. The use of AI in employment decisions is still new, and it will likely soon be subject to additional laws and regulations. Employers should consult their labor and employment counsel to stay current on the rules governing AI in the workplace.
