May 8, 2023

In late November of last year, ChatGPT shook the world into a frenzy over the future of artificial intelligence. ChatGPT belongs to a family of generative AI models that process vast troves of existing data and produce coherent written, audio, and visual content. You can, for example, view an image of a Tyrannosaurus Rex shot on a Canon 300mm f/2.8 lens within seconds by typing in the relevant prompt. These generative AI models are revolutionizing industries ranging from healthcare and education to finance and entertainment.
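For readers curious what "typing in the relevant prompt" looks like in practice, here is a minimal sketch using OpenAI's image-generation endpoint via its Python client (v0.x, current as of this writing); the prompt text and the printed-URL handling are illustrative assumptions, not a recommendation of any particular tool.

```python
# A minimal sketch of prompt-driven image generation, assuming the
# OpenAI Python client (v0.x) and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The prompt below is illustrative; any descriptive text works.
response = openai.Image.create(
    prompt="A Tyrannosaurus Rex photographed with a Canon 300mm f/2.8 lens",
    n=1,
    size="1024x1024",
)

# The API responds with a URL pointing to the generated image.
print(response["data"][0]["url"])
```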

With such rapid development of AI come growing fears that it will be used in harmful ways. Lina Khan, chair of the Federal Trade Commission, argues in a New York Times op-ed that generative AI needs to be strictly regulated to avoid consumer harms and anti-competitive practices. “Alongside tools that create deep fake videos and voice clones,” she writes, “these technologies can be used to facilitate fraud and extortion on a massive scale.”

Imposing blanket regulations on generative AI models, however, would significantly disrupt innovation and delay the life-changing benefits that come with it. Even if governments had the proper regulatory tools at hand, the pace at which AI technologies are advancing would outstrip them, rendering any stringent regulation ineffective.

Instead, we should opt for “softer” mechanisms, such as industry best practices and informal negotiations among stakeholders, that encourage bottom-up governance. This way, innovation can continue apace while decentralized forms of governance emerge to address the potential harms associated with generative AI systems.

In addition to the concerns raised by Khan, many also worry that this new generation of AI will increase online sexual exploitation, harassment, and disinformation. To address these issues, companies like OpenAI and Google are enlisting and training human reviewers to remove malicious and inaccurate content. In some cases, OpenAI sets limits on user behavior through its Terms of Service. These examples represent a growing effort among AI companies to build bottom-up governance that doesn’t require heavy-handed regulation.

In collaboration with industry leaders and other stakeholders, the National Institute of Standards and Technology (NIST) created a framework for reducing negative biases in AI systems, which “is intended to enable the development and use of AI in ways that will increase trustworthiness, advance usefulness, and address potential harms.”

The Institute of Electrical and Electronics Engineers (IEEE), a professional association with more than 420,000 members in 160 countries, published a report outlining industry standards that “ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”

By consulting with numerous stakeholders, NIST and the IEEE are creating decentralized frameworks that inform the AI community about best practices and establish a set of standards that can mitigate problems before they arise.

Of course, no panacea exists that will eliminate the uncertainty swirling around generative AI. But fostering spontaneous governing institutions will empower innovations that improve everything from marketing and customer service to art and music. One marketing firm recently predicted that 30 percent of marketing content will be AI-generated by 2025. Even more striking, it expects that AI will contribute 90 percent of the text and video generation for a major Hollywood film by 2030.

By increasing productivity, moreover, generative AI is expanding the technology frontier. Researchers from MIT, for example, are exploring ways in which generative AI can “accelerate the development of new drugs and reduce the likelihood of adverse side effects.” Companies are also using AI to generate computer code, accelerating software development and enhancing user experiences.

In stark contrast to these emerging bottom-up governing structures, the European Parliament is considering a proposal known as the AI Act that would impose burdensome regulations on AI companies across the European Union. Some stakeholders, such as the Future of Life Institute, argue that generative AI must undergo “conformity assessments,” strict requirements that AI companies would need to fulfill before their technologies are released to the public. The institute also published an open letter with more than 27,000 signatures, including Elon Musk’s, calling for AI labs to suspend development for at least six months until “a set of shared safety protocols” has been fully implemented.

As calls to regulate AI grow louder in Europe, the White House released its Blueprint for an AI Bill of Rights, outlining what the Biden administration believes to be its top priorities “for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy.”

Unfortunately, these proposals neglect the bottom-up governance systems that have evolved in the absence of rigid government oversight. It’s true that there are serious risks in an unbounded world where AI can be used to track millions of people through facial recognition, spread disinformation, and scam families by emulating a relative’s voice. But as Adam Thierer writes in his book Evasive Entrepreneurs and the Future of Governance, the “softer” approach embraces an attitude of permissionless innovation, creating a more productive environment in which we can solve complex problems without jeopardizing the incentive to innovate and discover new ways of doing things.

With AI developing at such a rapid clip, it’s easy to let our fears overwhelm us. After all, our attitude toward new technology ultimately shapes the regulatory landscape we adopt. Rather than letting those fears flood our senses, we should remain cautiously optimistic about the incredible possibilities that generative AI models have unlocked. Embracing a bottom-up approach to governance, though not perfect, is the best strategy for ensuring that the benefits of AI become a reality without endangering the values we cherish.

Michael N. Peterson

Michael is the Content Specialist at an academic institution in the Washington, D.C. area.

He is currently pursuing an MA in economics at George Mason University. Michael’s studies focus on development economics and institutional analysis.
