Originally published by Truyo.
In the evolving landscape of artificial intelligence (AI) regulation, the United States finds itself at a crossroads, with two significant pieces of legislation vying to shape the future of AI governance: the California Automated Decisionmaking Technology law and the American Data Privacy and Protection Act (ADPPA).
While the ADPPA is not strictly an AI law, it serves as a back door into AI regulation. Several of its sections resemble California's ADMT "rules" (which are not new legislation but regulations issued under the CCPA) and contain material provisions that would govern automated decision-making algorithms, even if somewhat narrowly. The ADPPA would also further empower the FTC to pursue AI regulation. Both frameworks seek to address growing concerns about AI's potential to discriminate and to infringe on individuals' privacy rights. Let's delve into a comparative analysis of the two.
California Automated Decisionmaking Technology Law
California, often a trailblazer in privacy legislation, introduced the California Automated Decisionmaking Technology (ADMT) rules, aimed at curbing the potential harms associated with the use of personal information in automated decision-making processes. The regulation casts a wide net, covering not just consumer data but also extending protections to employees, job applicants, and business-to-business contacts.
Under the ADMT rules, businesses must provide consumers with pre-use notice before applying ADMT to their personal information. The notice must explain how the ADMT will be used, describe opt-out mechanisms, and identify avenues for accessing information about its use. The rules also grant consumers the right to opt out of ADMT applications and require businesses to offer multiple methods for doing so.
American Data Privacy and Protection Act (ADPPA)
In contrast, the ADPPA focuses on regulating AI at a federal level, introducing provisions such as Section 207: Civil Rights and Algorithms. This section prohibits covered entities from collecting, processing, or transferring covered data in a manner that discriminates based on protected characteristics. It mandates algorithm design evaluations and impact assessments to mitigate potential discriminatory impacts.
The ADPPA empowers the Federal Trade Commission (FTC) to enforce its provisions, with the creation of a Bureau of Privacy dedicated to oversight. Companies subject to Section 207 are required to evaluate algorithms, submit evaluations to the FTC, and conduct annual impact assessments. However, unlike the ADMT law, the ADPPA does not provide consumers with direct opt-out rights regarding the use of AI.
Comparing CA ADMT & ADPPA
While both regulatory frameworks aim to address the risks associated with AI, they diverge in their approach and scope. The ADMT law focuses on providing consumers with transparency and control over the use of their personal information in automated decision-making processes. In contrast, the ADPPA places greater emphasis on algorithmic accountability and evaluation to prevent discrimination.
One notable distinction is the locus of enforcement and oversight. The ADPPA entrusts the FTC with enforcement authority, while the ADMT rules rely on California's regulatory bodies. Additionally, the ADPPA reaches large data holders specifically, requiring them to conduct comprehensive evaluations and assessments.
Challenges and Future Implications
Both regulatory frameworks face challenges, including concerns over preemption and the need for clarity in defining terms such as discrimination and profiling. Businesses must navigate these complexities to ensure compliance and mitigate legal risks.
Looking ahead, the convergence of these regulatory efforts may pave the way for a unified approach to AI governance in the United States. Collaboration between state and federal agencies, along with stakeholder engagement, will be crucial in shaping an effective and equitable regulatory landscape for AI.