China Focuses on OpenAI Turmoil in A.I. Race with U.S.

The drama at pioneering artificial intelligence company OpenAI over the firing and re-hiring of CEO Sam Altman has prompted questions in China, which seeks to dominate AI globally by 2030 and is investing billions of dollars to win the race.

Wallstreetcn.com, a prominent Chinese-language economy and finance website, said the addition of political insider Lawrence H. Summers, a former Treasury Secretary, to OpenAI’s board would more closely connect the developer of ChatGPT to the U.S. government and other powerful interest groups.

“As OpenAI’s influence draws more and more attention from the government, Summers’ emergence can help OpenAI better build connections with governments, businesses and academia,” it said. “Putting Summers on the board is of important strategic meaning for OpenAI,” the Shanghai-based Chinese-language media said, adding that the private company was increasingly of policy importance.

OpenAI did not immediately respond to a request for comment.

Sam Altman, CEO of OpenAI, in a selfie with an attendee at the Asia-Pacific Economic Cooperation (APEC) Leaders’ Week in San Francisco, California, on November 16, 2023. China is putting the turmoil at OpenAI under scrutiny. Photo by JULIE JAMMOT/AFP via Getty Images

Summers’ appointment came after days of turbulence that saw several original board members leave and a slimmed-down, four-person board installed, with Altman reinstated as CEO.

The state of AI research and development in the U.S. is of intense interest to Beijing, which is pouring funding into research institutes and into building a surveillance and business ecosystem.

Its focus includes the most advanced form of AI, so-called “human-like” artificial general intelligence, and it has published plans intended to put the country on track to dominate the field by 2030.

China has moved to regulate AI more quickly than the U.S., prompted by fears that the transformative technology could one day challenge the power of the Communist Party.

Yet China has also joined in initial international commitments to try to safely manage a technology which scientists believe may one day have the power to transform human civilization — but could also end it.

The AI safety group Future of Life Institute together with more than 30,000 scientists called earlier this year for a six-month moratorium on new AI systems, but the plea fell on deaf ears.

Power of AI, Fakes Require Responsible Handling
A photo illustration created in Washington, DC, on November 16, 2023 shows an AI girl generator on a cell phone in front of a computer screen. Photo by STEFANI REYNOLDS/AFP via Getty Images

China’s state media outlet Xinhua has so far largely restricted itself to matter-of-fact accounts of the nearly week-long drama at OpenAI that began last Saturday.

But the state-run English language Global Times reported on speculation in China over what it could mean—and quoted industry observers as saying that the turmoil at OpenAI reflected the different environments for AI development in the U.S. and China.

Important Moment

“Some viewed the return of Altman as a victory for commercialization, pessimistically stating that if AI does one day threaten humanity, what happened this week will be an important moment in history,” the Global Times said. “A few humans tried to stop AI from advancing too fast, but they were powerless in the face of capitalists.”

A Washington, D.C., technology center and think tank that has led research into China’s own AI programs said it was “unlikely” the drama at OpenAI would affect those programs – or hurt the U.S.

“While OpenAI is one of the U.S.’s leading AI labs, our competitive environment and labor mobility probably means any potential impacts will be minimized for the U.S.,” said Tessa Baker, the Acting Director of Communications at the Center for Security and Emerging Technology (CSET), in an emailed statement to Newsweek.

On Oct. 30, U.S. President Joe Biden issued an Executive Order saying that the government planned to more carefully manage multiple safety aspects of the technology while also encouraging business. Congress has held many hearings on the issue, and more government regulation may be coming.

“Given the abundance of legislative proposals, it seemed likely that Congress would try to regulate these firms and technologies,” CSET’s Baker said. “The October 30 EO also took steps to address the risks of unconstrained tech development by private firms by calling for greater transparency and testing of large AI models.”

The events at OpenAI had “probably increased the already high interest in addressing the challenges posed by these technologies,” Baker said.

A researcher at CSET, Helen Toner, was among the ousted board members of OpenAI. Baker referred questions about Toner to OpenAI.

Johann Laux, an AI ethicist at the Oxford Internet Institute, told Newsweek: “The recent events have demonstrated just how fragile AI governance is at the moment. With OpenAI, we all could witness corporate governance turmoil at a key player in the AI industry.”

“Weak governance – whether in industry or in actual government – is detrimental to AI safety,” Laux said, adding, “In the end, the beliefs and attitudes of people in charge matter. Who gets to sit on the board of a company or who has a seat at the table in regulatory bodies has a huge influence on how technology is rolled out.”

More broadly, a leading technology researcher at CSET said recently that America must guard against hubris in its approach to technology and leadership. China’s AI programs should be taken seriously, said William C. Hannas, a research professor at Georgetown University and a lead analyst at CSET. Some AI analysts have downplayed or even dismissed China’s efforts, arguing that the U.S. is ahead in key areas, including large language models such as ChatGPT.

“The result of decades of success in the United States—in science, wealth, and global power—is a dysfunctional hubris that blinds our country to genuine threats from peer competitors,” Hannas said in testimony on Nov. 9 to a Congressional committee, the Subcommittee on Courts, Intellectual Property and the Internet.

“By contrast, two centuries of ‘national humiliation’ and the bare needs for survival fuel China’s desire to compete and restore its former glory. China is clear about its intent to dominate all aspects of AI by 2030. Are we taking it seriously?” Hannas asked.

“While tech companies and the U.S. Government can come to terms on problems of common concern, such as AI safety and securing a global competitive advantage, the relationship is generally strained. The PRC government, for its part, can focus S&T investment and compel compliance with national plans,” Hannas said in the testimony, which took place before the boardroom events at OpenAI.

A conundrum the U.S. had to work out was “ensuring AI’s safe development and alignment with (Western) values without ceding ground to less cautious competitors (China),” Hannas said.