ChatGPT Creator Is Worried About AI

OpenAI CEO Sam Altman expressed concerns today about the future of artificial intelligence (AI), warning that the technology requires regulation because of its potential unintended consequences.

Altman’s appearance before the Senate Judiciary Committee carried a tone of wariness, presenting an ominous foreboding of a future mired in misinformation, election interference, workers displaced by machines, and AI systems smarter than the humans who built them.

Concern about AI has almost always existed, though it has recently been exacerbated by tech companies—including Altman’s own AI research lab—spending billions on research to get ahead of the next frontier, according to NerdWallet. Even Geoffrey Hinton, dubbed the “godfather” of AI for his foundational work in the field, left Google and has warned about the difficulty of preventing “bad actors from using [AI] for bad things,” according to The New York Times.

Altman, whose OpenAI developed ChatGPT, expressed a similar sentiment during his first trip to Capitol Hill.

“I think if this technology goes wrong, it can go quite wrong,” Altman told senators. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

Sam Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023, in Washington, D.C. Altman expressed concern about the future of AI and implored for regulations to exist to prevent potential negative scenarios.
Win McNamee/Getty

U.S. Senator Richard Blumenthal, chair of the subcommittee, said, “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls.”

Asked by Democratic Senator Amy Klobuchar about his plans regarding upcoming elections and misinformation about candidates, polling places and election rules, Altman said he is “quite concerned” about potential impacts.

“This is not social media. This is different,” he said. “And so, the response that we need is different. This is a tool that a user is using to generate content more efficiently than before. They can change it; they can test the accuracy of it; if they don’t like it they can get another version.”

John McGinnis, a law professor at Northwestern University who has written about AI, told Newsweek by phone that he is more hesitant to share the tech gurus’ concerns. He agreed with Hinton that any tool in the wrong hands can lead to problems, though he doesn’t believe AI itself could run amok and turn malevolent.

But he is “generally very skeptical” of regulations that would slow down AI, which he said is “actually very useful” in dealing with threats such as climate change.

“Moreover, I think we have to understand this in a geopolitical sense,” he said. “Anything that slows down AI here will not slow down AI in places like China. The fact is that AI…is very intertwined with national security, and the United States needs to remain a leader in AI.

“That doesn’t mean that you couldn’t regulate some functions that AI is doing along the lines of regulating this for other activities, even if it weren’t AI, that seems right to me. But in general, I’m a little concerned about that.”

Regarding misinformation, McGinnis said it has existed since the beginning of the republic, and that there is “real danger” in letting the government decide what counts as misinformation.

Last month, Google and Alphabet CEO Sundar Pichai told 60 Minutes that “every product of every company” will be impacted by AI’s rapid development. He, like Altman and Hinton, has trepidation about the possible repercussions.

Google has its own chatbot called Bard.

“This is going to impact every product across every company,” Pichai said. “For example, you could be a radiologist, if you think about five to 10 years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘These are the most serious cases you need to look at first.'”

The other major concern, expressed by McGinnis and others, is that those in Congress who are making decisions regarding regulations are either too old or not well-versed enough to comprehend the technology.

While the U.S. House in the 118th Congress is younger than in previous years, the Senate is older, with an average age of 65, according to the Pew Research Center.

McGinnis said that existing government agencies can use AI to root out discrimination, for example, which he said is “different from having a more wide-ranging and untethered regulation of AI”—an approach he says would be a “disaster.”

“Most of [the senators] don’t study these things,” McGinnis said. “So, I think it is a real problem that they’re putting down regulations. I think the idea, ‘Well, we’re just gonna give it over to some agency to regulate’ because, of course, they couldn’t write very knowledgeable rules.

“But I think agencies themselves have terrible problems in this regard and getting up to speed and keeping on top of the next AI development. After all, there’s huge money to be made. There’s always a problem in agencies that the best people in the field aren’t on government salaries, but this is kind of going to be that problem on steroids.”
