If you’re one of the 123 million viewers who watched the Super Bowl last Sunday, chances are you saw an advertisement mentioning artificial intelligence. The Super Bowl commercials made clear that AI is everywhere. Once a futuristic and mysterious technology, artificial intelligence has become an inescapable part of the technological and cultural landscape. But for every silly commercial and ironic meme, there is a new piece of legislation calling for preemptive regulation of AI problems that have yet to arise.

The White House Executive Order on AI from October 2023 exemplifies this excessively bureaucratic approach. The 53-page document catalogs the risks artificial intelligence poses to nearly every industry in America and directs regulators across the federal government to closely monitor and control the development of the technology.

Not to be outdone by the White House, state lawmakers have introduced an estimated 200 AI-related bills since last year. While the bills have varied in scope and success, one thing has become obvious: states will regulate AI.

California is at the forefront of the rush to regulate AI and has introduced a series of expansive bills. One bill in particular epitomizes the problems with this rush to regulation: SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill requires developers to subject covered AI models to rigorous safety testing before release and to build in a shutdown capability that can halt a model if it begins to act unsafely. A developer that fails to anticipate a flaw in its model can face fines and other penalties.

This bill places a huge burden on developers, especially smaller start-ups, because it requires months or even years of expensive preemptive troubleshooting. If passed, it would chill both progress and competition in AI, since smaller firms would effectively be priced out of the market.

Rather than rushing to irresponsibly overregulate AI, states can use existing laws and regulations to foster a landscape of safe AI innovation. Lawmakers can look at what requirements and safeguards are already on the books to address bad behavior from AI when it arises. For example, well-established civil rights laws can be used to punish discriminatory practices by an AI model, rather than creating new laws and agencies to combat machine discrimination specifically.

This approach creates an environment where large and small AI companies alike can compete to make the best products, and it lets consumers judge the results for themselves. An atmosphere of freedom and innovation will benefit everyone far more than legislation that favors well-established tech giants.

The future of AI will be shaped by the outcomes of bills like SB 1047. San Francisco didn’t win the Super Bowl, and rushing to regulate AI would be just another defeat. State leaders should enforce existing requirements and promote freedom so that AI developers can create good technology without excessive burdens.