California Governor Gavin Newsom Signs Landmark AI Safety Regulation
In a significant move to regulate artificial intelligence, California Governor Gavin Newsom signed a new state law on [date] that requires major AI companies to publicly disclose their safety protocols. The law also creates mechanisms for reporting critical safety incidents and extends whistleblower protections to AI company employees.
The regulation, which takes effect in 2024, requires AI companies with more than $100 million in annual revenue to submit regular reports detailing how they plan to mitigate the risks associated with advanced AI models. These reports will be made public, allowing researchers, policymakers, and the general public to scrutinize the safety measures in place.
"This law is a crucial step towards ensuring that AI development prioritizes human safety and well-being," said Governor Newsom in a statement. "We must work together to address the potential risks of advanced AI and ensure that these technologies are developed responsibly."
The new regulation also establishes CalCompute, a government consortium tasked with creating a framework for evaluating the safety and efficacy of AI systems. This framework will provide a standardized approach for assessing the risks associated with AI development.
According to Dr. Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab (SAIL), "This law is a game-changer for AI research and development. By requiring transparency and accountability, we can ensure that AI systems are developed in a way that prioritizes human values and safety."
Background and context:
California has been at the forefront of AI regulation efforts, with several other states following suit. The new law builds on previous legislation aimed at addressing the potential risks associated with AI development.
The regulation is also seen as a response to growing concerns about the lack of transparency in AI development. Critics argue that major tech companies have failed to provide adequate information about their AI systems, making it difficult for policymakers and researchers to assess the safety risks.
Additional perspectives:
Industry experts note that the new regulation will require significant changes in how AI companies operate. "This law is a wake-up call for the industry," said Dr. Andrew Ng, co-founder of Google Brain. "We need to rethink our approach to AI development and prioritize transparency and accountability."
Current status and next developments:
Companies covered by the law must submit their first reports by [date]. The CalCompute consortium will then begin work on its framework for evaluating the safety and efficacy of AI systems.
As the industry continues to evolve, policymakers and researchers are urging caution. "We need to be careful not to stifle innovation," said Dr. Fei-Fei Li. "But we also need to ensure that AI development prioritizes human values and safety."
The signing of this landmark regulation marks a significant step toward regulating AI in California, but it is only a beginning. Policymakers and researchers will be watching closely to ensure that AI systems are developed responsibly and transparently.
*Reporting by Fortune.*