A Swiss-based data privacy, AI, and risk intelligence consulting firm specializing in helping tech companies streamline data privacy compliance.
Contact@custodia-privacy.com
New privacy regulations are transforming AI governance, mandating personal accountability, risk assessments, and dedicated oversight for responsible deployment.

The recent approval, in a major U.S. state, of new regulations focusing on cybersecurity audits, risk assessments, and automated decision-making technology (ADMT) marks a significant evolution in privacy regulation. These provisions introduce a heightened level of personal accountability for designated individuals within organizations and mandate dedicated oversight roles for artificial intelligence (AI) and cybersecurity. While rooted in data privacy, these developments lay crucial groundwork for the burgeoning field of AI governance, underscoring how established privacy principles are amplified and recontextualized in an AI-driven world.
A central theme emerging from these new regulations is the imposition of personal accountability on senior leaders, such as CEOs, presidents, CISOs, CPOs, and general counsels, for privacy compliance. This "personal liability" extends beyond corporate responsibility to individual oversight. Interpreted through an AI governance lens, this principle is transformative. When AI systems make critical decisions or process sensitive personal data, accountability for ensuring their ethical, lawful, and privacy-compliant operation will increasingly fall on specific, identifiable individuals. This mandates that leaders not only understand the privacy implications of their AI initiatives but also actively implement robust governance frameworks, conduct thorough risk assessments, and ensure ongoing compliance. The executive certification requirement for privacy programs will inevitably broaden to encompass AI programs that process personal data, forcing a granular understanding of AI's impacts from the top down.
The regulations emphasize the necessity of cybersecurity audits and comprehensive risk assessments. Crucially, they also explicitly introduce "AI assessments" and "ADMT impact assessments." This is a direct and critical bridge from data privacy to AI governance. Just as Data Protection Impact Assessments (DPIAs) are essential for identifying and mitigating privacy risks in data processing, ADMT impact assessments serve a similar, yet expanded, purpose for AI systems. These assessments must go beyond traditional privacy concerns to evaluate AI-specific risks, such as algorithmic bias, lack of transparency or explainability, potential for discrimination, data poisoning, and the amplification of societal harms. By mandating such assessments, the regulations highlight that understanding, documenting, and mitigating the multifaceted risks of AI systems are not optional but are foundational prerequisites for their responsible deployment.
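To make one dimension of such an assessment concrete, the sketch below computes a demographic parity ratio, a common fairness metric that an ADMT impact assessment might record for an automated screening tool. It is an illustration only, not a regulatory template: the group labels, sample data, and the 0.8 flag threshold (the informal "four-fifths rule") are assumptions for the example.

```python
from collections import defaultdict

def demographic_parity_ratio(decisions):
    """Return (min/max selection-rate ratio, per-group rates).

    `decisions` is a list of (group, favorable) pairs. A ratio well
    below 1.0 suggests the ADMT favors one group over another and
    warrants deeper review in the impact assessment.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical outcomes from an automated screening tool.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)

ratio, rates = demographic_parity_ratio(sample)
print(f"selection rates: {rates}")
print(f"parity ratio: {ratio:.2f}")  # below 0.8 flags potential disparate impact
```

A real assessment would address the other risks named above, such as explainability, data poisoning, and downstream harms, with their own documented tests; this metric simply illustrates how one finding can be made reproducible and auditable.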
Underlying effective privacy compliance are robust data governance practices, including data mapping, data inventory, access controls, data quality, and data retention policies. These practices are not merely ancillary to AI governance; they are its indispensable bedrock. AI models are inherently data-hungry, and their outputs are only as reliable, fair, and secure as the data they are trained on and operate with. The regulations' emphasis on strong data privacy governance implicitly underscores the need for meticulous data quality checks to prevent biased datasets from propagating discriminatory AI outcomes, rigorous data minimization to reduce the attack surface for AI systems, and transparent data lineage to explain AI decisions. Without a strong privacy-centric data governance foundation, any attempt at AI governance will be built on precarious ground, making it impossible to ensure the integrity, fairness, and lawfulness of AI systems.
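As a minimal sketch of how such practices can be made machine-checkable, the example below models a data inventory entry and flags records held past their retention period. The dataset names, fields, and retention windows are hypothetical, chosen for illustration rather than drawn from any statute or from the regulations discussed here.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InventoryRecord:
    """One entry in a data map / data inventory."""
    dataset: str
    purpose: str
    contains_personal_data: bool
    collected_on: date
    retention_days: int  # retention period set by policy

    def past_retention(self, today: date) -> bool:
        """True if the data should already have been deleted or anonymized."""
        return today > self.collected_on + timedelta(days=self.retention_days)

# Hypothetical inventory entries.
inventory = [
    InventoryRecord("support_tickets", "customer support", True,
                    date(2022, 1, 10), retention_days=730),
    InventoryRecord("model_training_set", "ADMT training data", True,
                    date(2024, 9, 1), retention_days=365),
]

today = date(2025, 7, 1)
for record in inventory:
    if record.contains_personal_data and record.past_retention(today):
        print(f"retention breach: {record.dataset} ({record.purpose})")
```

The same inventory structure naturally anchors the other practices named above: access controls and data lineage can reference each dataset entry, and data quality checks can be attached per record.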
A noteworthy provision in the new regulations is the requirement for "dedicated AI and cybersecurity oversight roles" among covered entities. This explicit mention formalizes the evolving organizational structure needed to manage advanced technologies. Just as privacy and cybersecurity functions have matured into specialized roles (e.g., CPO, CISO), AI governance necessitates its own dedicated leadership and teams. These roles are critical for developing and implementing AI governance strategies, establishing ethical AI principles, overseeing AI risk assessments, ensuring compliance with evolving regulations, and fostering a culture of responsible AI innovation. The establishment of such roles signifies a formal recognition that governing AI is a distinct and complex challenge requiring specialized expertise and strategic direction.
In conclusion, the new regulatory landscape, with its emphasis on personal accountability, comprehensive risk assessments including ADMT impact assessments, robust data governance, and dedicated AI oversight roles, provides a potent framework for understanding and building AI governance. It underscores that AI governance is not a wholly separate discipline but rather an essential evolution and amplification of established data privacy principles and practices. Navigating the complexities of AI, ensuring its ethical development and deployment, and protecting individual rights in an automated world will require dedicated expertise, robust data governance, and structured frameworks that proactively address the unique challenges AI presents.