ChatGPT creator OpenAI threatens to leave the EU over AI Act

OpenAI’s Sam Altman wants no business with the European Union if its laws are too overbearing.

At a conference in London yesterday (May 24), the co-founder and CEO of the company behind the vastly popular ChatGPT said OpenAI would leave the bloc if it were unable to comply with the region’s artificial intelligence (AI) regulations, which are still a work in progress.

As part of its rule-making, the EU is seeking to implement transparency measures in so-called general purpose AI. “Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training,” the European Parliament noted on May 11.

The need for transparency around the data used to train these algorithms has long been a concern for regulators in European countries; that was the basis for Italy’s temporary ban on ChatGPT in March. Altman isn’t making any promises on that front. “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible,” Altman said, according to Time.

OpenAI has previously argued that individual countries should not set narrow rules, like defining what an AI should be allowed to say, or water down the technology’s potential by setting the threshold for regulation too low.

Quotable: Altman is hoping for lighter regulation from the EU

“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it.”—Sam Altman to Reuters

Company of interest: Alphabet

Unlike others in wait-and-watch mode, Google’s parent company is taking a proactive approach to collaborating with EU lawmakers. Alphabet CEO Sundar Pichai is forging “An ‘AI Pact’ ahead of the EU AI Act” on a “voluntary basis ahead of the legal deadline of the AI regulation,” Thierry Breton, the European commissioner for internal market, tweeted on May 24.

The case for regulating ChatGPT and other generative AI

Since its public debut in November last year, ChatGPT has given rise to a slew of use cases, from cutting the time needed to write code from nine weeks to a few days, to creating AI videos from start to finish, to diagnosing patients. But for all its merits, generative AI also raises concerns, like helping bad actors spread disinformation and generally eroding the critical thinking skills of the larger population.

Therefore, critics are calling for increased oversight. And the chorus is only growing.

While the EU is the closest to creating a first-of-its-kind regulation for ChatGPT-like AI tools, other countries have also put the issue on their priority lists. The two largest democracies in the world, the US and India, are looking to draw up regulations, too. Last month, China shared draft rules on how the technology should be used, including specifying that content produced by ChatGPT-esque systems should align with the country’s core ideology of socialism.

Country of Interest: India

Altman is slated to visit India next month, where he will reportedly meet top government officials. But swaying them in his favor won’t be easy: India has its own views on how AI must be deployed in the country, union minister Rajeev Chandrasekhar warned.

“Sam Altman is obviously a smart man. He has his own ideas about how AI should be regulated,” Chandrasekhar told Moneycontrol. “We certainly think we have some smart brains in India as well and we have our own views on how AI should have guardrails.”

Fun fact: ChatGPT supports the regulation of ChatGPT

In December 2022, in response to a request from the magazine Scientific American, ChatGPT penned a thoughtful essay on how ChatGPT should be regulated. It made the case for striking a balance, noting that “overly strict regulations could stifle innovation” but “insufficient regulation could lead to abuses of the technology.”

One more thing: OpenAI calls for an international governing body

In a May 22 blog post titled “Governance of Superintelligence,” Altman, together with OpenAI’s head of Nonprofit & Strategic Initiatives Greg Brockman and the company’s co-founder and chief scientist Ilya Sutskever, said that AI will eventually need to be supervised by a monitoring body akin to the International Atomic Energy Agency (IAEA), an intergovernmental organization that seeks to promote the peaceful use of nuclear energy and to inhibit its use for any military purpose.

This international body would “inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.,” the trio wrote. “It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.”

Related stories

🕵️ ChatGPT is getting more nuanced

🇮🇹 Italy has banned ChatGPT, but will its clampdown work?

🇨🇳 China wants to require a security review of AI services before they’re released
