The Meteoric Rise and Spectacular Fall of CA’s SB 1047: A Lesson in Preemptive Regulation
This summer, California’s SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” made waves across the tech scene. The polarized response included praise from Elon Musk and some AI researchers, and apprehension from OpenAI and others within the industry.
The bill’s proponents heralded it as a roadmap for national legislation and the answer to fears surrounding AI. Those more hesitant warned that such stringent regulation could stop American innovation in its tracks, eroding the nation’s competitive edge and denying the public the benefits that technological advancement can bring. At the end of last month, California Governor Gavin Newsom vetoed the bill, and given the concerns expressed by experts, it seems to have been a prudent move.
SB 1047 was one of many state-level AI bills introduced over the last year. Its notoriety stemmed from its preemptive approach, which imposed heavy regulatory burdens on AI models before they were even deployed. To comply with SB 1047, developers would have had to guarantee that the safeguards they put in place were sufficient to prevent their AI from causing harm. The bill’s authors provided no clarity as to what measures would meet that standard, leaving developers to guess at how to satisfy vague requirements. These standards would have applied to companies and developers outside of California as well, effectively imposing compliance obligations far beyond the state’s borders.
The advocates of SB 1047 relied on fear of the unknown rather than a commitment to addressing outcomes with measured pragmatism. Adam Thierer, a senior fellow at the R Street Institute, identifies this approach as the legislation’s ultimate downfall: “this is where SB 1047 went wrong, essentially treating the very act of creating powerful computational systems as inherently risky. It would be unwise to regulate computers, data systems, and large AI models to address hypotheticals.”
Governor Newsom has historically been eager to regulate AI, and his rejection of the bill was a welcome surprise. In his veto statement, he astutely observed that AI, like technology generally, progresses at a rapid pace, and that hasty fear-based laws do little to address potential harms: “I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”
The veto is only a small victory in the larger fight against a patchwork of state laws that prevents the U.S. from leading in AI. Still, it is a hopeful sign of a changing culture around the technology. AI has a particular capacity to strike fear into the imaginations of lawmakers. When visions of robot apocalypse and looming catastrophe are replaced with a levelheaded assessment of real risks, reliance on existing laws and regulations, and the technology’s remarkable potential for good, reason can prevail.