Sacramento, California — As artificial intelligence (AI) reshapes industries, companies are increasingly deploying AI tools in the workplace. In response, California regulators are taking proactive measures to address the legal implications of these evolving technologies.
On July 24, 2025, the California Privacy Protection Agency (CPPA) established new regulations under the California Consumer Privacy Act (CCPA) that focus on automated decision-making technology (ADMT). These new rules will become effective once they receive approval from the Office of Administrative Law.
ADMT is defined broadly to encompass any technology that processes personal information to replace or significantly alter human decision-making. In the workplace, this covers tools for application screening, performance evaluation, and productivity monitoring, as well as systems that influence key employment decisions such as hiring and termination. The regulations specifically target significant decisions affecting the terms and conditions of employment, rather than minor functions like spell-checking.
One significant aspect of the new regulations is that businesses cannot avoid accountability by outsourcing ADMT to third-party vendors. Companies are still liable for the actions of these vendors and must actively work with them to ensure compliance with the regulatory framework. In some situations, companies may need to conduct risk assessments to weigh the privacy risks of using ADMT against its perceived benefits.
Employers utilizing ADMT must establish specific protocols governing its use. This includes notifying employees, job applicants, and other affected individuals before implementing such technology. Required disclosures include the purpose of using ADMT, an overview of how the technology operates, opt-out rights, access to processed data, and information about protections against retaliation.
For organizations currently employing ADMT, there is a deadline of January 1, 2027, to comply with these notice requirements.
As the regulatory landscape surrounding AI continues to evolve, compliance is not a one-time task but an ongoing process. Businesses should regularly review and update their privacy policies and practices in light of legal developments. Engaging experienced legal counsel can help businesses navigate the complexities of AI-related regulations and minimize liability risks.
For further insights into the emerging AI regulations and their implications for workplace compliance and litigation, interested parties can join a webinar led by CDF’s Privacy and AI Practice Group titled “AI, Algorithms & Employer Protection — What You Need to Know,” scheduled for August 20.
Disclaimer: This article was automatically generated by OpenAI. The information, including people, facts, circumstances, and the narrative, may contain inaccuracies. Requests for article removal, retraction, or correction can be sent via email to contact@publiclawlibrary.org.