Your company uses an applicant tracking system that scores resumes and surfaces a ranked shortlist. You use a credit decisioning model that approves or declines loan applications. Your customer service platform routes users to different support tiers based on behavioral scoring. Until recently, none of these were regulated at the algorithmic level in California. As of late 2025 and into 2026, all three are.
California has not enacted a single "Algorithmic Accountability Act." What it has done is more consequential: built a layered framework across three separate regulatory instruments — the California Civil Rights Council's employment regulations, the CPPA's ADMT and risk assessment rules under the CCPA, and a collection of AI-specific statutes — that together create meaningful, enforceable algorithmic accountability obligations for businesses operating at scale. The penalty structure is not academic. The CPPA currently has hundreds of active investigations. FEHA discrimination claims carry no cap on compensatory damages. The CCPA imposes fines of up to $7,988 per intentional violation, assessed per consumer.

The California Civil Rights Council finalized regulations that took effect October 1, 2025, amending Title 2 of the California Code of Regulations to clarify how the Fair Employment and Housing Act applies to automated-decision systems. These regulations apply to every employer in California with five or more employees — including organizations headquartered outside California with at least one California employee — that uses any "computational process that makes a decision or facilitates human decision making regarding an employment benefit" derived from or using artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques.
The definition of ADS is deliberately broad. It captures sophisticated neural networks and simple rule-based scoring systems alike. It includes systems that generate predictions, content, recommendations, or decisions. It explicitly covers screening tools that analyze voice tone, facial expressions, puzzle game results, dexterity tests, and other behavioral or physical assessments. The definition also covers AI tools operated by third-party vendors — HR technology platforms, recruitment software providers, workforce analytics tools — with the critical caveat that employers cannot outsource their compliance liability. Bias introduced by a vendor's algorithm remains the employer's legal and ethical responsibility.
The operative substantive requirement is that employers cannot use an ADS that creates a disparate impact on any class protected by FEHA — race, gender, age, national origin, religion, disability, sexual orientation, and others — regardless of intent. Discriminatory effect, not discriminatory purpose, is the standard. Anti-bias testing is not technically mandated, but the regulations make its presence or absence directly relevant to both claims and defenses: the presence of documented, timely, repeatable anti-bias testing supports an employer's defense; its absence is admissible evidence in a claim. A one-time validation at launch is explicitly insufficient — the regulations require ongoing, systematic bias monitoring integrated into regular governance practices.
Applicants and employees must receive pre-use notices explaining when and how ADS tools will be used, what information influences the system, how outputs contribute to decisions, and what rights they have to opt out or appeal. Post-use notices are required when adverse employment actions are taken based on ADS outputs. All ADS-related data — inputs, model parameters, decision criteria, audit results, correspondence — must be retained for at least four years. Human oversight is not optional. Decision flows must invite human review, challenge, and intervention; the regulations explicitly reject efficiency as a justification for eliminating human judgment from consequential employment decisions.
The California Privacy Protection Agency finalized comprehensive regulations under the CCPA on September 22, 2025, taking effect January 1, 2026. The ADMT provisions are distinct from and broader than the employment-context CRC regulations — they apply across all significant decision domains, not just employment — and carry different timing for full compliance.
Risk assessment requirements began January 1, 2026. Businesses subject to the CCPA that engage in processing activities presenting a significant risk to consumers' privacy must conduct and document privacy risk assessments before those activities are initiated. Covered activities include using ADMT for significant decisions, processing personal information to train ADMT, and using automated processing to infer sensitive personal information. For processing already underway before January 1, 2026, assessments must be completed by December 31, 2027. Attestations and assessment summaries are due to the CPPA by April 1, 2028.
Consumer rights obligations under the ADMT regulations apply to businesses using ADMT for significant decisions starting January 1, 2027 for existing systems, and immediately for new systems deployed after that date. Significant decisions include those affecting financial or lending services, housing, insurance, healthcare, education, employment, and access to essential services. Advertising — a domain covered by earlier draft versions of the regulations — was removed from the final text.
Understanding the full operational picture of California's ADMT compliance timeline, what "significant decisions" means in practice, and how the risk assessment requirements interact with consumer rights obligations is essential for any team trying to map its AI systems against California's actual requirements rather than the earlier draft versions that many compliance teams planned against.
The consumer rights structure requires that businesses provide pre-use notices before collecting personal information for use in ADMT or before applying ADMT to already-collected data. Those notices must describe how the ADMT works, what data influences its outputs, how outputs are used in decisions, and what happens when consumers opt out. Businesses must offer at least two accessible opt-out mechanisms and must have a genuine non-ADMT alternative for consumers who exercise that right. Access rights allow consumers to request information about the logic of the specific ADMT decision that affected them, including the parameters that generated the output.
The CPPA deliberately removed all references to "artificial intelligence" from the final ADMT regulations. The operative definition is functional: any technology that processes personal information and uses computation to replace or substantially replace human decision-making. This captures both simple rule-based systems and sophisticated ML models, provided they are operating on personal information and producing outputs that substitute for human judgment in covered decision domains.
AB 2013, the Generative AI Training Data Transparency Act, took effect January 1, 2026. It applies to developers of generative AI systems that are publicly accessible within California and that were trained using datasets including personal information. Developers must publish documentation describing their training data sources, the types of data included, the time period covered, whether the data includes personal information, whether it includes copyrighted material, and how the data was licensed or acquired. The obligation is retroactive to systems released or substantially modified on or after January 1, 2022.
Several other enacted California AI statutes took effect on or around January 1, 2026: AB 316 limits affirmative defenses in civil liability claims involving AI-generated or AI-modified content; AB 325 prohibits anticompetitive use of common pricing algorithms under California antitrust law; the Transparency in Frontier AI Act (SB 53) requires developers of the most advanced "frontier" AI models to publish safety protocols and report critical safety incidents; and AB 489 prohibits AI from falsely implying healthcare licensure.
Several California bills targeting algorithmic accountability specifically failed to become law, and it is important to understand what these bills would have required — both to avoid planning compliance programs around them and to understand the direction of future legislation.
AB 2930, introduced in 2024, would have imposed algorithmic impact assessment requirements across industries for any developer or deployer of "automated decision tools" — defined as systems that make or substantially assist decisions affecting employment, housing, credit, education, healthcare, and other consequential domains. It would have required pre-deployment impact assessments, bias audits with demographic disaggregation, public summaries, and consumer rights against automated decisions. It would have created a private right of action. The bill was withdrawn before reaching the Governor's desk after amendments narrowed its scope. It represents the most detailed legislative proposal for algorithmic accountability California has seen and is likely to resurface in some form.
SB 1047, the comprehensive "frontier AI safety" bill that attracted enormous industry attention in 2024, was vetoed by Governor Newsom in September 2024. The Governor's reasoning — that regulating AI based on training cost thresholds could create a false sense of security while leaving potentially dangerous smaller systems unregulated — has shaped subsequent California AI policy toward more targeted, use-case-specific regulation rather than model-level governance.
Any company that operates in California, employs California residents, or deploys products and services to California consumers may fall within the framework depending on which layer applies. The CCPA ADMT requirements apply to CCPA-covered businesses — those meeting the $26.6 million annual revenue threshold, the 100,000 California consumer data volume threshold, or the 50% revenue-from-data-sale threshold. The CRC employment regulations apply to any employer with five or more employees anywhere using ADS for employment decisions affecting California employees or applicants. The AI-specific statutes apply based on the nature of the product and its California market presence.
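For teams doing an initial scoping pass, the coverage logic can be encoded directly. The sketch below is a simplified illustration of the three CCPA thresholds described above; the BusinessProfile fields and the is_ccpa_covered helper are hypothetical names, the figures are the thresholds cited here, and none of this substitutes for a legal applicability analysis.

```python
from dataclasses import dataclass


@dataclass
class BusinessProfile:
    annual_gross_revenue_usd: float            # worldwide gross annual revenue
    ca_consumers_processed: int                # CA consumers/households whose data is processed
    revenue_share_from_selling_data: float     # fraction of revenue from selling/sharing personal information


def is_ccpa_covered(p: BusinessProfile) -> bool:
    """A business is covered if it meets ANY one of the three thresholds."""
    return (
        p.annual_gross_revenue_usd > 26_600_000
        or p.ca_consumers_processed >= 100_000
        or p.revenue_share_from_selling_data >= 0.5
    )


# Example: revenue alone triggers coverage even with a small consumer footprint.
print(is_ccpa_covered(BusinessProfile(30_000_000, 20_000, 0.05)))  # True
```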
High-risk use cases that fall within multiple regulatory layers simultaneously include hiring and recruiting algorithms (CRC regulations plus potential CPPA ADMT obligations), credit scoring and lending decisioning (CPPA ADMT for significant decisions affecting financial services), healthcare AI that influences clinical decisions or insurance determinations, and algorithmic pricing systems that may implicate AB 325's anticompetitive provisions. Maintaining a centralized inventory of every AI system and automated decision tool deployed across the enterprise — mapped to the regulatory regime that applies to each — is the operational prerequisite for compliance across this layered framework.

Prioritizing user privacy is essential. Secure Privacy's free Privacy by Design Checklist helps you integrate privacy considerations into your development and data management processes.
DOWNLOAD YOUR PRIVACY BY DESIGN CHECKLIST
The practical compliance programme starts with a system inventory that goes beyond the ML models owned by the central data science team. It must capture algorithmic tools embedded in HR platforms, CRM systems, customer scoring models, ad targeting algorithms, and any third-party software that makes or substantially assists consequential decisions. For each system, the inventory should record: the decision domain, the population affected, whether California consumers or employees are in scope, which regulatory layer applies, the current status of bias testing and documentation, and the gap between current state and required state.
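As one way to make that inventory concrete, the sketch below models a single inventory record with the fields listed above. The schema, enum values, and example system are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum


class RegulatoryLayer(Enum):
    CRC_EMPLOYMENT_ADS = "CRC employment ADS regulations"
    CPPA_ADMT = "CPPA ADMT and risk assessment rules"
    AI_STATUTE = "AI-specific statute"


@dataclass
class AISystemRecord:
    name: str
    owner_team: str
    decision_domain: str                      # e.g. "hiring", "lending", "support routing"
    population_affected: str                  # e.g. "job applicants", "loan applicants"
    california_in_scope: bool                 # California consumers or employees affected?
    regulatory_layers: list[RegulatoryLayer] = field(default_factory=list)
    bias_testing_status: str = "not started"  # "not started" | "one-time" | "ongoing"
    documentation_status: str = "incomplete"
    compliance_gaps: list[str] = field(default_factory=list)


# Example entry: a third-party resume screener used for California applicants.
resume_screener = AISystemRecord(
    name="vendor-resume-screener",
    owner_team="talent-acquisition",
    decision_domain="hiring",
    population_affected="job applicants",
    california_in_scope=True,
    regulatory_layers=[RegulatoryLayer.CRC_EMPLOYMENT_ADS, RegulatoryLayer.CPPA_ADMT],
    compliance_gaps=["no ongoing bias monitoring", "pre-use notice not yet drafted"],
)
```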
Risk assessments for CPPA-covered processing that began before January 1, 2026 must be completed by December 31, 2027. Risk assessments for new processing activities must be completed before those activities are initiated. The assessment must evaluate the purposes and benefits of the ADMT, the logic of how it works, foreseeable negative impacts, planned safeguards, and whether the privacy risks outweigh the benefits. Automating the risk assessment process so that assessments are initiated as a gate in the deployment pipeline rather than as a downstream documentation activity is the structural change that makes risk assessment compliance operationally sustainable at scale.
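One way to realize that structural change is to make the check a blocking step in the release pipeline, sketched below under assumed internal conventions: the RiskAssessment record, the review interval, and the deployment_gate function are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RiskAssessment:
    system_name: str
    completed_on: date
    approved_by: str


MAX_ASSESSMENT_AGE = timedelta(days=3 * 365)  # example internal re-review cycle, not a statutory figure


def deployment_gate(system_name: str,
                    makes_significant_decision: bool,
                    assessment: RiskAssessment | None,
                    today: date) -> None:
    """Block deployment of a covered system without a current, approved risk assessment."""
    if not makes_significant_decision:
        return
    if assessment is None:
        raise RuntimeError(f"{system_name}: no risk assessment on file; deployment blocked")
    if today - assessment.completed_on > MAX_ASSESSMENT_AGE:
        raise RuntimeError(f"{system_name}: risk assessment is stale; re-assess before deploying")


# Example: called from a CI/CD step so the assessment happens before release, not after.
deployment_gate(
    system_name="credit-decisioning-v4",
    makes_significant_decision=True,
    assessment=RiskAssessment("credit-decisioning-v4", date(2026, 3, 1), "privacy-office"),
    today=date(2026, 6, 1),
)
```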
Bias testing and audit programmes for employment ADS must be ongoing, not one-time. The CRC regulations identify six factors relevant to the quality of anti-bias testing: the statistical methods used, whether the testing covers all relevant protected classes, the recency and frequency of testing, whether it covers both the algorithm and its application in practice, the scope of data used in testing, and how the employer responded to results. A single pre-launch validation without a subsequent monitoring programme does not satisfy this standard. Bias is not a static property — it can shift as data distributions change, user populations evolve, and model versions are updated.
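As an illustration of one commonly used statistic (not a complete testing methodology), the sketch below computes selection-rate impact ratios across groups and flags any group that falls below an assumed 0.8 review threshold, the informal four-fifths rule of thumb. The group names, data, and threshold are hypothetical.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}


# Hypothetical quarterly snapshot: (selected, total) per group.
snapshot = {"group_a": (120, 400), "group_b": (70, 350), "group_c": (45, 300)}
for group, ratio in impact_ratios(snapshot).items():
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{status}]")
```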
Documentation and record retention must be treated as ongoing operational infrastructure rather than point-in-time compliance outputs. The CRC requires four-year retention of ADS-related data, including inputs, model parameters, audit results, and correspondence. The CPPA risk assessment regime requires five-year retention of completed assessments and their supporting documentation. Technical documentation — model cards, data provenance records, performance benchmarks, bias testing results — must be maintained in sync with the actual deployed system rather than reflecting the state at launch.
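A minimal sketch of how those periods might be encoded as a retention schedule follows; the record-type names and the helper that computes the earliest permissible disposal date are hypothetical, and counsel should own the actual policy.

```python
from datetime import date

# Retention periods described above: CRC employment ADS records (4 years),
# CPPA risk assessments and supporting documentation (5 years).
RETENTION_YEARS = {
    "employment_ads_record": 4,
    "cppa_risk_assessment": 5,
}


def earliest_disposal_date(record_type: str, created_on: date) -> date:
    """Earliest date a record of this type may be disposed of under the schedule."""
    years = RETENTION_YEARS[record_type]
    return created_on.replace(year=created_on.year + years)


print(earliest_disposal_date("employment_ads_record", date(2026, 2, 1)))  # 2030-02-01
```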
California's emerging algorithmic accountability framework and the EU AI Act address overlapping concerns through structurally different approaches. The EU AI Act is product-focused and provider-centric: it classifies AI systems by inherent risk tier, imposes conformity assessment obligations on providers before market entry, and requires technical documentation, human oversight implementation, and post-market monitoring as product-level requirements. California's framework is deployer-centric and rights-based: it imposes obligations on businesses using automated systems against consumers and employees, and it operationalizes accountability through consumer rights, risk assessments, and anti-bias testing rather than technical conformity assessments.
The practical overlap is significant: employment AI, credit scoring, healthcare AI, and several other high-stakes domains are designated high-risk under EU AI Act Annex III and subject to significant decision restrictions under California's ADMT framework. Organizations subject to both frameworks building governance programmes should integrate their risk assessment, documentation, and human oversight requirements across both rather than maintaining parallel programmes. The Fundamental Rights Impact Assessment required by the EU AI Act for deployers of high-risk employment and credit systems shares substantive overlap with the risk assessment documentation California's CPPA regulations require, making integrated assessment the more efficient compliance path.
The EU AI Act's August 2, 2026 enforcement deadline for Annex III high-risk systems is a parallel forcing function for organizations that deploy employment AI, credit scoring, biometric identification, or other regulated categories in both US and EU markets. Building compliance infrastructure now that satisfies both frameworks simultaneously is structurally more efficient than sequential compliance programmes.
Assuming that "no regulation yet" means no action is needed is the most consequential planning error. Two of the three California regulatory layers — the CRC employment regulations and the CPPA risk assessment requirements — are already operative. The ADMT consumer rights obligations are 8 months away for existing systems. The window for proactive programme design is now; reactive compliance in response to investigations or litigation is structurally more expensive and strategically less effective.
Relying on vendor compliance representations without independent verification is the employment-context equivalent of outsourcing liability that cannot legally be outsourced. The CRC regulations are explicit: a vendor's bias audit does not satisfy the employer's obligation. The employer must understand how the tool works, what data it uses, what protected-class impacts it produces, and how it responds to adverse results. Vendor contracts should include audit rights, testing data sharing, and co-operation obligations for regulatory responses.
Treating bias testing as a pre-launch checkbox rather than an ongoing governance practice creates the specific vulnerability the CRC regulations target. A single validation result from 18 months ago, against training data that predates the current user population, is not evidence of current compliance. Fairness monitoring should be integrated into the same operational infrastructure that tracks model performance, with threshold-based escalation when protected-class disparity metrics deteriorate.
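The sketch below illustrates that integration pattern under assumed names: a monitoring job that tracks the worst-group impact ratio over time and opens an escalation whenever it drops below a configured threshold. The notify() hook, the threshold, and the data are hypothetical.

```python
from datetime import date

DISPARITY_ALERT_THRESHOLD = 0.8  # example floor for the worst-group impact ratio


def notify(owner: str, message: str) -> None:
    # Placeholder for a real ticketing or alerting integration.
    print(f"[escalation -> {owner}] {message}")


def check_fairness_trend(system: str, owner: str, history: dict[date, float]) -> None:
    """history maps a monitoring date to that day's worst-group impact ratio."""
    for day, ratio in sorted(history.items()):
        if ratio < DISPARITY_ALERT_THRESHOLD:
            notify(owner, f"{system}: impact ratio {ratio:.2f} on {day} is below "
                          f"{DISPARITY_ALERT_THRESHOLD}; open a review and document the response")


# Example: run on the same cadence as model-performance tracking.
check_fairness_trend(
    "resume-screener",
    owner="hr-governance",
    history={date(2026, 5, 1): 0.91, date(2026, 6, 1): 0.84, date(2026, 7, 1): 0.74},
)
```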
Algorithmic accountability is the principle that organizations deploying automated decision systems must be able to explain, audit, and be held responsible for the outcomes those systems produce — particularly when those outcomes affect protected-class members or carry significant consequences for individuals.
California now has algorithmic accountability obligations in force through multiple enacted regulatory instruments. The CRC employment ADS regulations are effective October 1, 2025. The CPPA ADMT and risk assessment regulations are effective January 1, 2026, with consumer rights obligations for existing systems by January 1, 2027. Multiple AI-specific statutes also took effect January 1, 2026.
An algorithmic risk assessment is a structured evaluation of the potential harms an automated decision system may cause — particularly bias, discrimination, or privacy impacts — conducted before deployment and updated when the system or its processing context changes materially. California's CPPA risk assessment framework operationalizes this requirement for covered processing activities.
Whether testing and audits are legally required depends on the regulatory layer. For employment ADS, anti-bias testing is not technically mandated, but its presence or absence directly affects both claims and defenses under FEHA. For CPPA-covered ADMT processing, risk assessments are legally required. Cybersecurity audits for large businesses are phased in between 2028 and 2030.
Organizations can prepare by inventorying every automated decision system in scope, mapping each against the applicable regulatory layer, completing or scheduling required risk assessments, implementing bias monitoring as an ongoing practice rather than a launch activity, building consumer-facing notice and opt-out infrastructure, and maintaining documentation with four-year retention for employment systems and five-year retention for CPPA risk assessments.
California's algorithmic accountability framework is not a single coming law. It is a set of existing obligations — some operative since October 2025, some since January 2026, some arriving in January 2027 — that together constitute the most comprehensive algorithmic governance regime in the United States. The organizations that will navigate this environment most effectively are those building governance infrastructure now: AI system inventories, bias testing programmes, risk assessment workflows, and documentation systems that treat accountability as an operational discipline rather than a regulatory compliance event.