By Sauntharya Manikandan, J.D. Candidate, 2026
Overview
On May 17, 2024, the California Civil Rights Department (CRD) took a step toward addressing the intersection of technology and employment with its proposed modifications to employment regulations concerning automated decision systems. The initiative reflects lawmakers’ efforts to catch up with the far-reaching implications of AI in our lives. While the regulations represent a step in the right direction, gaps around transparency, algorithmic bias, and accountability still need to be addressed to ensure the fair and effective use of automated decision systems.
Background
Automated decision-making in employment is not new, but its sophistication has evolved dramatically. In the 1970s, for example, Applicant Tracking Systems (ATS) were rudimentary, offering basic data entry with limited ability to analyze or report on candidate information. With the rise of the internet in the 1990s, ATS integrated online job boards and more advanced algorithms, allowing recruiters to quickly evaluate and rank candidates against specific criteria. By 2023, an estimated 97.4% of Fortune 500 companies had adopted advanced AI-driven ATS trained on extensive datasets, including historical hiring and performance data, to identify patterns linked to high performance.
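To make the ranking mechanics concrete, here is a minimal sketch of how a keyword-based ATS of that era might score resumes against hiring criteria. The skills, weights, and resumes are entirely hypothetical, chosen only for illustration; real systems are far more elaborate.

```python
# Hypothetical sketch of keyword-based ATS scoring. The skills, weights,
# and resumes are invented for illustration, not drawn from any real product.

REQUIRED_SKILLS = {"python": 3.0, "sql": 2.0, "project management": 1.5}

def score_resume(resume_text: str) -> float:
    """Sum the weights of the required skills the resume mentions."""
    text = resume_text.lower()
    return sum(w for skill, w in REQUIRED_SKILLS.items() if skill in text)

candidates = {
    "Candidate A": "Built Python ETL pipelines and maintained SQL dashboards.",
    "Candidate B": "Handled project management for a retail product launch.",
}

# Rank candidates from highest to lowest score, as a recruiter's queue would.
for name in sorted(candidates, key=lambda n: score_resume(candidates[n]), reverse=True):
    print(name, score_resume(candidates[name]))
```

A scheme like this is fast but brittle: a qualified candidate who phrases a skill differently never surfaces, which foreshadows the fairness problems discussed next.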
While AI systems are recognized for improving efficiency in hiring and recruitment, they also pose significant risks, including bias, discrimination, data privacy concerns, and a lack of transparency and accountability. These issues have culminated in legal actions, such as the Equal Employment Opportunity Commission’s (EEOC) first AI hiring discrimination lawsuit, brought against a company whose hiring program automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. To avoid similar lawsuits, companies like Amazon have abandoned AI hiring algorithms that demonstrated bias against women because the systems were trained on datasets composed predominantly of men’s resumes. Given AI’s documented unreliability, including biased algorithms, errors, and opaque decision-making, there is a clear need for regulations that ensure AI technologies promote fairness, transparency, and inclusivity in hiring.
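Because this bias mechanism can feel abstract, the following toy sketch, using invented data, shows how a pattern-matching model trained on skewed historical hires can come to penalize a term that merely correlates with gender, much as reporting on Amazon’s tool described. It is a deliberately crude stand-in for a real machine-learning pipeline.

```python
# Toy illustration with invented data: a crude "learn patterns from past
# hires" model. Because past hires skew male, a token correlated with women
# rarely co-occurs with a hire, so the model learns to penalize it.

history = [  # (resume tokens, was_hired) records from hypothetical past hiring
    ({"java", "golf"}, True),
    ({"java", "golf"}, True),
    ({"python", "golf"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

def token_hire_rate(token: str) -> float:
    """Fraction of past resumes containing `token` that resulted in a hire."""
    outcomes = [hired for tokens, hired in history if token in tokens]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def score(resume: set) -> float:
    """Average the learned per-token hire rates over a new resume."""
    return sum(token_hire_rate(t) for t in resume) / len(resume)

# Two equally qualified candidates: the model ranks the second far lower
# solely because of a gender-correlated token inherited from biased history.
print(score({"python", "golf"}))               # 0.75
print(score({"python", "womens_chess_club"}))  # 0.25
```

Notably, no protected characteristic appears anywhere in the model, yet the disparate outcome emerges anyway, which is why regulators focus on training data and outcomes rather than on explicit use of protected attributes.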
Proposed Changes
Amid growing concerns over AI and automated decision-making, both federal and state governments are under pressure to establish clear regulations and provide guidance. This pressure is reflected in initiatives like the White House’s Blueprint for an AI Bill of Rights, the EEOC’s guidelines on algorithmic fairness, and California’s leadership in AI regulation.
For example, the California Civil Rights Council is leveraging its regulatory authority to protect residents from unlawful discrimination, particularly in employment practices involving automated decision systems. These proposed regulations are part of the Council’s larger role in enforcing the Fair Employment and Housing Act (FEHA), which safeguards California employees from discrimination, retaliation, and harassment based on protected characteristics or conditions.
Some key highlights of the proposed modifications include:
- Expanded Definitions: The term “agent,” previously limited to direct representatives of employers, would expand to include third parties acting on employers’ behalf in hiring, thereby extending accountability to vendors, contractors, and other external entities for the outcomes of automated employment tools.
- Employer Defense: It would be unlawful to use qualification standards, such as employment tests or Automated Decision Systems (ADS), that screen out applicants based on protected characteristics unless the employer can show the criteria are job-related and consistent with business necessity.
- Record Keeping Obligations: The retention period for records would extend from two to four years, and the requirement to maintain these records would broaden to cover any person or organization that offers, supplies, or applies an ADS or selection criteria on behalf of an employer.
- Consideration of Criminal History: The use of ADS to investigate applicants’ criminal histories would be prohibited. If an employer rescinds an offer due to such history, the employer must furnish the applicant with the relevant reports and assessment criteria used by the ADS.
While these modifications represent progress, critics question whether they can achieve their intended goals. A report assessing the proposed regulations highlights several shortcomings, such as a lack of clarity around how employers should demonstrate job-relatedness and business necessity, which could invite ambiguous or fabricated evidence and complicate compliance. Additionally, although the proposal would strengthen record-keeping requirements, it contains no mandate to produce impact reports or share information with the Civil Rights Department (formerly the Department of Fair Employment and Housing, or DFEH), undermining efforts toward algorithmic accountability. These gaps mirror those identified in a recent evaluation of New York City’s algorithmic transparency law (Local Law 144), which requires employers to conduct annual bias audits of their algorithmic hiring tools and disclose the findings. Researchers at Cornell found that few employers complied, as vague definitions allowed companies to avoid accountability. Job seekers also had difficulty accessing the audit documents, which were often buried or poorly organized on company websites.
To address these regulatory shortcomings, it is helpful to look to the EU AI Act, which takes a stricter approach to AI regulation. The Act extends requirements beyond AI developers, placing obligations on a range of stakeholders involved in the technology. It also adopts a risk-based framework, classifying AI systems into four tiers: unacceptable, high, limited, and minimal risk. By mandating that AI systems be explainable, undergo thorough assessments, and remain subject to post-market monitoring, the Act aims to ensure ongoing compliance. It further imposes stringent penalties for non-compliance, including fines of up to €35 million or 7% of global annual turnover, whichever is higher. The EU AI Act ultimately sets a strong precedent for strengthening AI regulations.
This approach offers valuable lessons for California’s AI governance. To truly protect job seekers and ensure fair practices, California lawmakers must provide clear definitions and compliance guidelines. Strengthening accountability measures, including mandatory impact assessments and transparent reporting, will be vital to preventing discrimination and fostering trust in AI technologies. Without these enhancements, the regulations risk being ineffective, echoing the shortcomings of similar laws like New York City’s. By aligning its rules with international standards such as the EU AI Act, California can better promote transparency, accountability, and fairness in the use of AI.