Americans Are Concerned About AI Data Collection, New Poll Shows

Most Americans who have an awareness of emerging artificial intelligence (AI) technology are worried that companies won’t use AI tools responsibly, according to survey results released this week by Pew Research Center.

Public discourse about AI has increased this year, due in part to the wide adoption of ChatGPT, a chatbot unveiled last November by the AI company OpenAI. Users can converse with ChatGPT through text, image and audio prompts. Global monthly visits to the ChatGPT website were estimated at 1.43 billion in August, according to Reuters.

Technology leaders say AI development holds positive potential, particularly in the healthcare, drug development and transportation industries. But AI also carries risk and uncertainty, as no one knows for certain what the technology could one day become.

Above, the Vizgard FortifAI software-based artificial intelligence (AI) engine for surveillance monitoring is displayed during the DroneX Expo at ExCeL on September 26, 2023, in London, England. New survey results suggest most Americans who are familiar with AI are concerned about how data collected using the technology will be used.
John Keeble/Getty Images

In May, Pew Research Center polled 5,101 U.S. adults to gauge their thoughts on modern-day data privacy. Six in 10 respondents said they had heard or read "a little" about AI, while 27 percent said they had heard or read "a lot." Only 13 percent of respondents indicated they had no familiarity with AI. These results were nearly identical to those found in a December 2022 poll about AI awareness, according to survey results clarified for Newsweek by Pew researchers.

Of those who expressed an awareness of what AI is, 81 percent said they are worried the data collected by companies using AI won’t be used in a way that will make them “comfortable,” while 80 percent said they are worried collected data will be used in a way “not originally intended.” Even so, 62 percent said personal data collected through AI could be used to make life “easier.”

When asked how they would feel about specific AI data uses, a majority of respondents with knowledge of AI—55 percent—told pollsters it is "unacceptable" to use AI to determine who qualifies for public assistance, compared with 28 percent who said such use would be acceptable.

Nearly half of American adults don’t want social media companies using their personal data for personalized user experiences, and 44 percent don’t like the idea of AI being used to identify people through voice analysis, according to the poll’s results.

Many leaders in the political and technology spaces have called for guardrails to guide AI development in safe and responsible ways. Humans cannot risk making wrong decisions in AI development because some AI tools could one day "take over," computer scientist Geoffrey Hinton recently told 60 Minutes. Hinton, known as the "Godfather of AI," pointed to ChatGPT as an example, saying it "may well be able to reason better than us" later this decade.

To begin addressing some of the risks associated with AI development, a summit on AI safety is taking place next month in the United Kingdom, with its participants expected to focus on frontier AI. The global summit is set to be the first of its kind.
