AI: What Are the Real Risks?

A few weeks ago, the White House stunned people all across the world when it convened some of the nation’s largest technology companies and announced a planned executive order regarding artificial intelligence (AI). It wasn’t a celebratory event, as one might expect, to highlight immense advances in technology, preview how AI can expand educational and economic opportunity, and explain how the U.S. can harness this innovation to position itself as a world leader. Instead, the Administration convened the meeting to interrogate companies on how they plan to “manage the risks posed by AI” and “guard against…broader societal effects…such as the effects on fairness and bias.”

Some leading technology experts described the summons to Washington as more an exercise in public intimidation than an effort to secure “voluntary commitments” from the AI giants. With an exciting and unknown technology seemingly bursting on the scene overnight, it’s understandable that citizens, businesses, and governments alike might be concerned. But what’s truly alarming is government leaders rushing to “do something” about a problem that doesn’t yet exist, potentially stifling what could be one of the greatest inventions in modern history.

AI certainly isn’t new. For years it has been used to find and predict relationships in data, powering credit scoring, medical diagnoses, and public safety tools. AI can also adapt to context and learn from human behavior, enabling voice recognition, customized social media feeds, and search engine results.

The newest, most exciting form of AI could serve as a new tool for increased productivity, economic growth, and human flourishing. Generative AI trains on vast collections of documents to learn what words and phrases mean, so that a large language model (LLM) can generate brand-new content in response to an individual’s prompt (request). Current examples of this technology include ChatGPT, DALL-E, Google Bard, Meta’s LLaMA, and Midjourney. One prompt might be, “Write a paragraph about the Biden administration and artificial intelligence,” or even, “Write an article that describes the implications of federal overregulation of technology.”

The individual doing the prompting receives a draft, which he or she then bears the responsibility of reviewing and fact-checking against the initial prompt for accuracy. The technology pulls from content published across the Internet since its beginning, some of it true and accurate and some not, so personal research and responsibility are necessary steps on the way to a reliable final product. Some forms of AI pull only from an organization’s or individual’s own documents, minimizing the chance of drawing on inaccurate or fictional content and allowing the user to tailor the output to a familiar style.
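The idea behind drawing only on one’s own documents can be sketched in a few lines of code. The following is a deliberately simplified toy, not any real product’s implementation: it scores a user’s own documents by how many words they share with the prompt and hands the best match to the model as trusted source material. Real systems use far more sophisticated matching, but the principle is the same.

```python
import re

def tokens(text):
    """Lowercase words with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(prompt, passage):
    """Count how many distinct words the prompt and passage share."""
    return len(tokens(prompt) & tokens(passage))

def best_passage(prompt, documents):
    """Return the user's own passage most relevant to the prompt."""
    return max(documents, key=lambda passage: score(prompt, passage))

# A user's own documents (hypothetical examples)
documents = [
    "Our company was founded in 2010 and serves customers in Louisiana.",
    "The quarterly budget report covers marketing and operations expenses.",
    "Employee handbook: vacation policy and remote work guidelines.",
]

prompt = "Write a paragraph about our Louisiana customers"
print(best_passage(prompt, documents))
# prints the first document, the one sharing the most words with the prompt
```

Because the selected passage comes from the user’s own files rather than the open Internet, the resulting draft is grounded in material the user already knows to be accurate.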

Just imagine the possibilities for writers, artists, educators, small business owners, corporations…even politicians writing campaign speeches or testimony for a bill!

As this technology advances, it’s true that there are some risks and unknowns. It’s right and good for society to be asking questions about what this means for our world. But that doesn’t mean that government should rush to enact more regulations.

Hundreds, if not thousands, of federal and state regulations are already on the books addressing deceptive marketing, defamation, theft of intellectual property, misleading election information, mistakes by licensed professionals, and yes, even bias and discrimination. In the education space, plenty of rules and disciplinary policies already address cheating and plagiarism. As AI advances and its use becomes more widespread, if regulators discover gaps in those policies, they should plug them by targeting the specific problems that existing mechanisms cannot reach. It shouldn’t matter whether the problem arose from AI or from some other behavior; state and federal governments don’t need volumes of new AI-specific laws and regulations addressing the same undesired outcomes, possibly with different standards and consequences.

State and national leaders can embrace this new technology, learn from its early use, watch it grow, and enable innovation and opportunity we haven’t yet dreamed of. That patience would allow time to work through any questions, issues, or concerns that arise and to address them swiftly and in a targeted fashion. This is the healthy relationship federal and state governments can have with providers, and it could allow the U.S. to truly lead in the AI space rather than threatening and regulating the technology to death before it has a chance to demonstrate what it can do for us.

What can—or should—government do about AI, if anything at all? This past spring, the Louisiana Legislature passed a resolution urging and requesting the Joint Legislative Committee on Technology and Cybersecurity to “study the impact of artificial intelligence on operations, procurement, and policy, and submit a written report of its findings to the House Committee on Commerce and the Senate Committee on Commerce, Consumer Protection and International Affairs not later than sixty days prior to the beginning of the 2024 Regular Session of the Legislature of Louisiana,” which is early January 2024. It’s critical that the committee, as it learns more about this rapidly emerging technology, weigh the true risks here (those associated with premature regulation and overregulation) and allow Louisianans to reap the benefits. In so doing, it can also show our leaders in Washington, from the White House to the FCC, a better path to prosperity.
