By Ilke Okan, LL.M., 2025
Automated decision-making technologies (ADMTs) are systems that analyze personal data to predict or decide outcomes about individuals, often without human input. These tools can determine whether someone is approved for a loan, lands a job interview, or is flagged for extra screening—all with limited transparency. As artificial intelligence (AI) and ADMTs become increasingly embedded in daily life—from hiring and credit scoring to healthcare and education—regulators are under growing pressure to draw clear lines between innovation and accountability.
The California Consumer Privacy Act (CCPA), already a landmark in U.S. privacy legislation, is now addressing these technologies through a new set of rules proposed by the California Privacy Protection Agency (CPPA). The European Union’s General Data Protection Regulation (GDPR), particularly Article 22, addresses similar concerns by regulating decisions made solely through automated processing.
This blog post unpacks the proposed CCPA rules on ADMTs and compares them with the GDPR’s Article 22 protections on automated decision-making. It argues that for California to lead in responsible AI governance, it must strike a careful balance between fostering innovation and protecting fundamental privacy rights—before fragmented standards and under-enforced rules leave consumers behind.
Key Points of Divergence: CCPA vs. GDPR on ADMTs
The CCPA amendments seek to regulate ADMTs used to analyze or predict personal characteristics like behavior, job performance, health, preferences, and more. The rules provide clarity on definitions, consumer rights, and business responsibilities. The definition of ADMT is narrowed under § 7001(f) to cover only tools that replace human judgment or serve as a key factor in significant decisions—excluding systems that merely assist human input. Under § 7220(c), businesses must provide consumers with pre-use notices when ADMTs are used to make impactful decisions, such as hiring or loan approvals, explaining how the system works and what rights apply. In addition, § 7150(b) and § 7153 require companies to conduct internal risk assessments (particularly when profiling is involved) and share relevant information with third parties in plain language. Finally, businesses must update their privacy policies in line with § 7220 to inform consumers of their right to opt out of ADMT use in high-stakes contexts and offer clear guidance on how to exercise that right.
While the CCPA’s proposed rules are promising, they still diverge in key respects from the GDPR—Europe’s comprehensive data protection law that has been setting the standard since 2018.
- Consent: Opt-Out vs. Opt-In
Perhaps the most fundamental difference between the CCPA and GDPR lies in their consent frameworks. GDPR Article 22 gives individuals the right not to be subject to a decision based solely on automated processing, unless they’ve given explicit consent, or the decision is necessary for a contract or authorized by law. This is an opt-in regime.
By contrast, § 7221 of the CCPA proposal offers a more flexible opt-out right. Consumers can choose not to have their data used in ADMTs, but only under certain circumstances. For instance, under § 7200, there’s no opt-out option when ADMT is used for fraud prevention, hiring, or educational profiling—areas where such tools are likely to have high impact. Making opt-out rights universal—or at least extending opt-in consent for sensitive data—would strengthen protections and align California law with global standards.
- Right to Explanation and Appeal
Both frameworks allow consumers to request information about the logic behind ADMTs and the likely outcome of a decision. However, GDPR Article 22(3) provides an unconditional right to human review of automated decisions and a detailed explanation of the algorithm’s rationale.
Under § 7221(b) of the CCPA, companies are not required to offer a right to appeal unless they also deny the opt-out option. This “either-or” tradeoff undermines consumers’ ability to challenge decisions made by opaque systems and leaves them vulnerable to automated errors or discrimination.
- Scope and Exceptions
The GDPR applies to any fully automated decision-making process that produces legal or similarly significant effects, such as affecting someone’s employment or creditworthiness. It casts a broad net, with few exceptions. The CCPA, in contrast, includes multiple carve-outs. For example, the rules do not apply when an ADMT is used solely for training purposes that do not result in decisions or profiling about the consumer, even though such training still involves the processing of personal data.
Moreover, the GDPR treats sensitive data—like biometric or health information—with heightened protection and requires stronger justifications for its use. The CCPA’s current draft doesn’t clearly adopt such a layered approach. Explicitly extending opt-in consent to the processing of sensitive information would be a step in the right direction.
Why Alignment Matters
Many companies subject to the CCPA are also governed by the GDPR due to the global nature of digital commerce. Divergences between the two frameworks increase compliance costs and create regulatory uncertainty. When rules vary across jurisdictions, companies either over-comply (wasting resources) or under-comply (exposing themselves to enforcement, litigation, or reputational fallout). This kind of fragmentation can also weaken enforcement and make it harder for consumers to understand their rights across jurisdictions.
Beyond inefficiency, the lack of alignment invites regulatory arbitrage: companies may exploit weaker rules in one jurisdiction to avoid accountability in another. This undermines trust in privacy protections and risks entrenching harmful practices, especially in high-stakes contexts like employment, credit, or healthcare. Harmonizing standards around consumer rights, transparency, and risk mitigation would reduce legal friction, give consumers consistent protections, and build public confidence in algorithmic systems.
Recent case law in the EU shows how these protections work in practice. In SCHUFA (CJEU Case C-634/21), Europe’s highest court held that automated credit scores used to assess individuals’ eligibility for services qualified as automated decision-making under Article 22 of the GDPR. Because the scores significantly shaped contractual outcomes, the court found that individuals were entitled to safeguards such as explanation, human review, and the right to contest the decision. The case highlights how strong ADMT protections can meaningfully shape corporate behavior and clarify obligations even in complex, third-party scoring systems.

Aligning with GDPR standards would also allow U.S. companies to piggyback on existing compliance mechanisms. Many already have opt-in frameworks, human review procedures, and clear disclosures in place. Modifying the CCPA rules to reflect these expectations would reduce redundancy and promote efficiency.
Final Thoughts
California has long been a pioneer in privacy law, and its newest efforts to regulate ADMTs continue that legacy. These proposed rules show a forward-thinking approach to addressing emerging risks, while keeping consumer empowerment and transparency at the forefront.
Rather than reinventing the wheel, California is thoughtfully building on global best practices—like those found in the GDPR—while tailoring them to the U.S. context. The current draft sets a strong foundation, and as the rulemaking process continues, there’s a unique opportunity to further refine and elevate these protections. With the CCPA’s momentum, California is well-positioned to shape the future of responsible AI governance—not just for the state, but for the nation.