Artificial intelligence (AI) has long been treated as a convenient backdrop for apocalypse scenarios. When tools like ChatGPT entered the mainstream a few years ago, predictions of a looming robot takeover were rampant. With the technology’s rapid development and stunning capabilities, it’s easy to imagine extreme outcomes. When these speculations drive policy, the result is usually “precautionary legislation” that serves up government overreach. Sound AI policy, on the other hand, will first look to existing laws, regulations, and resources offered by the free market before layering new measures onto a technology with great promise for both the American people and the economy.

The newest trend in AI regulation is age verification. As is so common in discussions of AI risk, age verification legislation ignores existing resources in favor of a worst-case scenario. It also advances the belief that the government must step in, rather than parents and the free market, to create a safe AI experience.

Age verification, a method that uses personal information to confirm that users are old enough to access online content, has rapidly become many policymakers’ default strategy when worries about youth safety are raised. As artificial intelligence has become more common, parents and lawmakers have raised concerns about the potential for inappropriate interactions between children and AI chatbots. Like most AI regulation, the Children Harmed by AI Technology (CHAT) Act and the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act were crafted for a world where options are so limited and consumers are so ignorant that only a government-imposed solution will do.

Introduced by U.S. Senator Jon Husted (R-OH), the CHAT Act mandates age verification so that minors cannot access AI chatbots without a parent’s consent. Even with parental consent, underage users may access only a modified bot: sexual content is unavailable, parents can receive notifications if conversations include self-harm, and regular reminders of the chatbot’s non-human status are displayed. Similarly, the GUARD Act, introduced by U.S. Senator Josh Hawley (R-MO), would use age verification to ban minors from AI chatbot companions entirely. The GUARD Act also includes criminal penalties for companies that fail to comply with its age verification mandates and subsequent blocking procedures. Mandatory age verification is an expensive and invasive process, and complying with the requirements of these bills, if passed and signed into law, would inevitably raise both development and litigation costs for American AI companies. In the fast-paced AI market, American companies would struggle to balance competitiveness with navigating a regulatory web.

The bigger problem, though, is that parents can already do these things without the government’s help. Two major companies with popular chatbot features, OpenAI and Meta, have introduced ways for parents to link their accounts with those of their children. Parents can link accounts and select a modified version for children that restricts certain content. We highlighted these innovations in our 2025 Tools for Keeping Kids Safe Online and will continue to celebrate the strides taken by the free market in response to legitimate consumer concerns. Not only do these tools already achieve the goals that politicians are promising the government can deliver, they do so without the compliance costs and chilling effect on innovation that so often accompany burdensome AI regulation. Furthermore, these tools avoid the data privacy risks that accompany age verification laws, risks that are far from hypothetical.

These market innovations are more than just helpful to individual families; they are good for states and communities like those in Louisiana. AI holds great promise for our state. Louisiana businesses rank high nationally in AI adoption, with small businesses relying on the technology to compete and excel. Louisiana’s schools are using AI in the classroom to supplement traditional instruction and ensure that students receive tailored interventions. Our universities are also gaining national recognition by leveraging and innovating with AI in medicine, agriculture, and beyond. In addition, Louisiana’s economy is looking forward to the jobs and business that AI data centers will bring. In short, Louisiana has a vested interest in AI remaining accessible, affordable, and innovative, and those goals become more difficult to achieve when panic leads to stifling regulatory burdens.

A quick inventory of the tools currently available to parents, and of American AI companies’ willingness to adapt to their customers’ desires, indicates that the CHAT Act and the GUARD Act favor a fiction over reality. Families deserve the best possible tools to keep their kids safe online, and they shouldn’t have to co-parent with the government or turn over their private information to get them. Instead of an apocalypse scenario, policymakers can imagine what American ingenuity and empowered parents can do for their families and their states.


Links to Learn More

Is It the Government’s Job to Make Sure Chatbots Are Safe for Kids? | Cato at Liberty Blog

The CHAT Act Won’t Protect Kids, but it Might Break the Internet | Reason Magazine

Recent AI Legislation Roundup: The Good, the Bad, the Ugly | Pelican Institute for Public Policy