Suddenly, artificial intelligence (AI) seems to be everywhere, especially in the news. As with most innovations, a never-ending stream of articles has warned of dystopian terrors, with little evidence to support the concerns. Often these fears read as if they were cut and pasted from the script of an apocalyptic Terminator film, when in fact a much better fictional reference for our future is The Jetsons' Rosey the Robot.

Britannica provides a useful and realistic description of AI: “It is the ability of a computer operation to perform tasks that we might associate with intelligent beings.” But how does it work? As Capterra helpfully describes, “Artificial intelligence works by applying logic-based techniques, including machine learning, to interpret events, automate tasks, and complete actions with limited human intervention. The analysis and techniques are used to build data models with fast and iterative processing that are fed into machines via software to act like humans.” Given its foundations in math, perhaps it is not a surprise that AI has yet to match a human, particularly when nuance, innuendo, and other subtleties are involved.
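To ground that description in something concrete, here is a minimal sketch of the machine-learning technique Capterra mentions. The dataset, library, and model choice below are illustrative assumptions of ours, not anything prescribed by the quote: a program fits a statistical model to labeled examples through iterative processing, then completes a classification task with limited human intervention.

```python
# A minimal, illustrative sketch of the "machine learning" piece of AI:
# fit a simple statistical model to labeled examples, then let it act
# on new data with limited human intervention. Requires scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labeled examples: flower measurements (inputs) and species (answers).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Fast and iterative processing": the model repeatedly adjusts its
# internal parameters until its predictions fit the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The trained model now completes the task on examples it has never seen.
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.0%}")
```

Nothing in this sketch “thinks.” It is applied statistics, which is exactly why nuance and innuendo remain out of reach.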

Buying the hype, legislators and regulators have been quick to act. According to the Bryan Cave law firm, 11 states have already enacted legislation governing the use of AI, with legislation proposed in 12 more. Not to be left out of the government-control gold rush, Congress has stepped into the fray, perhaps most notably with a proposal to establish a Federal Digital Platform Commission. Designed to comprehensively regulate digital platforms with heavy-handed government restrictions, the proposed legislation would also explicitly cover AI products, given that the definition of a digital platform was changed this year to include companies that offer “content primarily generated by algorithmic processes.”

Even ahead of this broad grant of power, regulators are champing at the bit. According to the Goodwin law firm, the watch list of U.S. regulatory activity on AI is already a long one.

Why all the fuss? That is hard to pin down, although much of it is fear of the new. AI is a tool much like any other, not so different from the computer, of whatever size, that you are using right now. But the steady diet of techno-dystopian movies and literature that pervades our pop culture has created a fear of technology based on imagination rather than fact.

Rarely do the wonders of advanced innovation make for a good story. Is anyone interested in a show about how amazing it is that you can connect to the internet while flying 30,000 feet above the Earth? Or that virtually everyone over the age of 13 now carries a supercomputer in their pocket? Quickly forgotten is the fact that software assists us daily, whether to avoid traffic, find the least expensive gasoline, eat healthier, watch movies, play games, book a better-priced hotel, price-shop airfare, or hail a cab, to name but a few. AI is no different.

Some of the overreaction is rooted in fear of jobs being eliminated. In the 19th century, English textile workers, calling themselves Luddites after their imaginary leader General Ludd, smashed mechanized looms and knitting frames. These artisans feared that machines automating parts of their jobs would eliminate those jobs entirely. Instead, the industrial age that followed grew economies immensely, providing careers and a better standard of living for the world.

The choice, though, is not between AI-specific laws and nothing. Dozens of laws and agencies across the country are already empowered to address the harms being hypothesized. For example, if AI generates false information about a person, libel and defamation law is well established. If a medical professional wants to use AI in their work, the state board of medicine will have something to say about that before it happens.

Adding redundant regulatory bodies, new layers of repetitive laws, or prophylactic rules will quickly constrain the marketplace with enforcement actions and lawsuits. The existing regulatory agencies and laws are not going anywhere anytime soon. Instead of strangling AI out of irrational fear today, it is better to allow AI to develop, and to act only when, or if, a real harm arises that existing law somehow does not address.