Who Is Going to Regulate AI?

Testifying before Congress on May 16, OpenAI chief executive Sam Altman said it was time for regulators to start setting limits on powerful AI systems. “As this technology advances we understand that people are anxious about how it could change the way we live. We are too,” Altman told a Senate committee. “If this technology goes wrong, it can go quite wrong,” he said, claiming it could do “significant harm to the world.” He agreed with lawmakers that government oversight will be critical to mitigating the risks.

Barely on lawmakers’ radars a year ago, artificial intelligence is now the subject of fierce debate among governments around the globe over the pros and cons of regulating, or even prohibiting, some of its uses. The question business leaders should be focused on at this moment, however, is not how or even when AI will be regulated, but by whom. Whether Congress, the European Commission, China, or even U.S. states or courts take the lead will determine both the speed and trajectory of AI’s transformation of the global economy, potentially protecting some industries or limiting the ability of all companies to use the technology to interact directly with consumers.

Since the November 2022 release of OpenAI’s ChatGPT, its generative AI chatbot built on a large language model (LLM), use of generative AI has exploded. According to data compiled by Statista, ChatGPT reached one million users in five days, blowing away previously warp-speed internet product introductions including Facebook, Spotify, and Netflix. Midjourney and DALL-E, generative models that create custom illustrations based on user input, have likewise exploded in popularity, generating millions of images every day. Generative AI certainly meets the criteria for what one of us previously co-defined as a “Big Bang Disruptor”: a new technology that, from the moment of release, offers users an experience that is both better and cheaper than those with which it competes.

Such a remarkable take-up is naturally cause for excitement, and, for incumbent businesses, alarm. The potential for LLMs seems limitless, perhaps revolutionizing everything from search to content generation, customer service to education, and beyond. Unlike more targeted Big Bang Disruptions, ChatGPT and other LLMs are uber-disruptors, breaking longstanding rules not just in one industry but in all of them, all at the same time.

Given the potential scale of this disruption — as well as issues such as privacy, bias, and even national security — it’s reasonable for lawmakers to take notice. Think of Goethe’s poem “The Sorcerer’s Apprentice,” animated in Disney’s classic movie Fantasia, where the sorcerer returns to his workshop to find that his apprentice has unleashed forces that have quickly gone out of control, threatening to destroy everything in sight until the magician restores order. Many of those concerned about AI’s possible unintended consequences, including developers such as Altman, are looking to lawmakers to fill the sorcerer’s role.

Here Comes Everybody

In the United States, multiple actors are jostling to lead the regulation of AI.

First, there’s Congress, where Senate Majority Leader Chuck Schumer is calling for preemptive legislation to establish regulatory “guardrails” on AI products and services. The guardrails focus on user transparency, government reporting, and “aligning these systems with American values and ensuring that AI developers deliver on their promise to create a better world.” The vagueness of this proposal, however, isn’t promising.

Second, there’s the Biden Administration, where there is some competition among federal agencies to implement a White House blueprint for an AI Bill of Rights, which was introduced last October. The blueprint is similarly general, calling for developers to ensure “safe and effective” systems that don’t discriminate or violate expectations of privacy, that disclose when a user is engaging with an automated system, and that offer human “fallbacks” for users who request them. So far, however, it defines none of these key terms.

At the Department of Commerce, the National Telecommunications and Information Administration (NTIA) has opened an inquiry about the usefulness of audits and certifications for AI systems. The agency has requested comments on dozens of questions about accountability for AI systems, including whether, when, how, and by whom new applications should be assessed, certified, or audited, and what kind of criteria to include in these reviews. Here, the specificity of the inquiry seems headed in the right direction.

Federal Trade Commission Chair Lina Khan, meanwhile, is taking a different approach, claiming her agency already has jurisdiction over LLMs, and rattling the agency’s anti-competitive and consumer protection sabers in the direction of the new technology. Khan speculates that AI could exacerbate existing problems in tech, including “collusion, monopolization, mergers, price discrimination, and unfair methods of competition.” Generative AI, the FTC chair also believes, “risks turbocharging fraud” with its ability to create false but convincing content. And, she notes, LLMs could — intentionally or otherwise — violate existing privacy and anti-discrimination laws by crafting responses to user requests based on biased datasets.

Then, there are more provincial efforts: AI-related legislation has already been introduced in at least 17 states. Some of these proposed laws would incentivize local development of AI products, while others would limit AI’s use in applications such as health care and hiring. Many states have created, or are considering creating, their own task forces to recommend future legislation.

So far, there are few specifics in any of these proposals, with the kinds of hypothetical harms from AI falling into existing categories, including misinformation and copyright and trademark abuse. In any case, regulators will likely have little impact on the technology’s development in the short term. Many of the proposed regulations would require Congress to give additional legal authority to the agencies, which seems unlikely in the current political climate. Even then, the application of new rules will be a matter for the courts, which translates to years of grinding process. And governments have historically had trouble attracting the kind of technical expertise required even to define the kinds of new harms LLMs and other AI applications may cause.

Among the federal proposals, credit the Department of Commerce for asking the right questions. But it is unclear whether Secretary Gina Raimondo has the legal authority to create a sustainable certification process, or the political clout to get the tech industry to support NTIA’s efforts. Further, as the Department acknowledges, its inquiry is only part of the larger White House effort to create a trusted environment for AI services, an objective that would require previously unseen levels of coordination and cooperation across numerous government silos.

These discussions are also taking place against the backdrop of monumental changes in American law that will likely determine who ultimately wins the role of AI’s principal regulator. Recent Supreme Court decisions have dramatically altered the legal landscape for business law, shifting power from federal regulators to the courts and the states, adding even more fragmentation, uncertainty, and delay to enforcement actions. The Court has given a green light to businesses hoping to challenge agency rulemaking, for example, by demanding more specific instructions from Congress, effectively outsourcing to federal judges the final decision on whether adopted rules will ever take effect. Meanwhile, of course, the technology will continue to evolve at its own accelerated pace.

Together, these limitations suggest that major regulation is more likely to come first from outside of the U.S.

For competition law, and its application to technology companies in particular, the momentum over the last few decades has already relocated from the U.S. to Europe. As the EU continues to pass substantial new internet legislation, Congress dithers, leaving the FTC and other federal agencies largely without the tools or resources to compete with their European counterparts. The European Parliament recently approved the AI Act, a 100-page statute that would preemptively ban applications deemed to have “unacceptable” levels of risk, require others to obtain pre-approval and licenses before use in the EU, and impose substantial fines on developers for a range of violations.

Regulators in China are also moving quickly, both to incentivize home-grown AI products and services and to define how they can and cannot operate. Not only could these rules limit how non-Chinese companies interact with over a billion potential Chinese users, but, by virtue of being first, they could also become the de facto legal regime for future applications.

What Businesses Should Do Now

It’s far from clear that any combination of government action — legislative, regulatory, or judicial — can really achieve the balancing act of maximizing the value of AI while minimizing its potential harm to the economy or society more broadly. As with all revolutionary technologies, the ability of governments to effectively regulate LLMs will almost certainly fall short. This is no criticism of lawmakers and regulators, but a side effect of the basic fact that law advances incrementally while technology evolves exponentially.

In the interim, business leaders and academics should take their cue from the Department of Commerce’s ongoing initiative and begin developing nongovernmental regulators, audits, and certification processes that identify ethical and trusted AI products and services, create market incentives to purchase them, and make clear which applications are and are not reliable.

There is, of course, a long history of successful (and unsuccessful) self-regulatory bodies, going all the way back to the Middle Ages and merchant “courts” that enforced the norms of medieval markets. Today, numerous groups, including the International Organization for Standardization (ISO), develop and certify corporate compliance with a remarkably wide range of standards, best practices, and ratings. In the information age, similar efforts have addressed everything from corporate standards for dealing with authoritarian regimes to the development of the very software and protocols that make up the internet itself.

Some government regulation is inevitable. Still, the most promising way not to provoke the sorcerer would be to avoid making too big a mess in the first place.
