If Social Media Is a ‘Digital Heroin’ for Today’s Youth, AI Will Be Their Fentanyl

A pre-teen girl sees an innocuous advertisement for a weight loss program on a social media platform. She’s intrigued. After all, she wants to look like all those slender influencers on her feed. Little does she know, the ad—generated with artificial intelligence (AI) technology—was carefully targeted to her based on her AI-analyzed browsing habits and “private” conversations. Once she clicks on the ad, her feed becomes a relentless barrage of AI-curated content promoting harmful diet strategies—including deepfake videos from beloved influencers. Her online world morphs into a dangerous echo chamber, magnifying her insecurities and spiraling her into depression.

A boy with an innocent curiosity about guns peruses social media and clicks on a few videos about them. He stumbles upon some extremist content. He’s interested and continues exploring. Seduced by sophisticated AI algorithms, he becomes both the consumer and creator of violent content—losing hours, then days, to his new obsession. He withdraws from his family and friends and lives on social media, where he becomes just a number to meet a tech company’s quarterly key performance indicators (KPIs).

An 8-year-old exploring YouTube happens upon one of her favorite influencers discussing mental health. From there, an innocent quest to understand mental health leads her into a labyrinth of content that steadily feeds her anxiety and confusion about depression. With each click or short-form video, she is unwittingly pulled deeper into a cycle of distress and sadness. She eventually harms herself before anyone intervenes.

These scenarios are not merely hypothetical; many aspects of them are taken from real stories. And as AI explodes, already-addictive social media platforms will become even more capable of hooking kids on their content.

If social media is already a “digital heroin” for our youth, new and enhanced AI will become their fentanyl.

For years, predatory social media platforms have capitalized on human psychology by triggering dopamine rushes akin to those induced by narcotic substances. As a result, teenagers are ensnared in an average of five hours per day on these platforms. And a disturbingly young cohort, children aged 7-9, is increasingly exposed to their allure. By age 10, children, on average, have their first smartphone, and their childhood begins to end.

This engineered addiction has devastating effects on children, who are in critical developmental stages. Engagement on social media can result in depression, anxiety, distorted body image, and sleep disruption. It can also increase exposure to cyberbullying and explicit material.

AI will amplify these effects. Powerful AI algorithms will allow social media companies to funnel even more addictive content to users. Creators will generate new and more tailored content faster than ever before. Bots will fuel artificial engagement—creating an even steadier stream of dopamine hits. Nefarious actors will use AI technology to create deepfakes, brainwash kids, or extort them, driving some to suicide—like in the tragic case of 17-year-old Gavin Guffey.


We’re already beginning to see this play out. Consider just a few recent examples. In October, a fabricated AI ad showed Kelly Clarkson promoting weight loss gummies. Then in January, sexually explicit AI-generated deepfake images of Taylor Swift flooded the internet—with one image viewed more than 47 million times. And scammers recently stole and manipulated a video of a Christian social media influencer to create a YouTube commercial promoting erectile dysfunction supplements.

The pace of AI development will not slow to accommodate regulatory indecision or societal complacency. As a professional deeply immersed in the AI sector—and more importantly, as a father—I am acutely aware of the stakes.

We need to act—and fast. For one, we need common-sense regulations to hold social media behemoths accountable for churning out unsafe tech products. The Kids Online Safety Act, which has significant bipartisan support, offers a good start. If passed, it would require tech companies to prioritize the well-being of their youngest users by actively mitigating the risks of anxiety, depression, and other digital-age ailments.

Additionally, we must develop new technologies and approaches that steer kids and families to a healthier and happier future.

Parents also have a role to play. While prohibiting kids from accessing the internet altogether may seem like a safe solution, that would be akin to solving the “digital narcotics” problem the same way schools tried to mitigate actual drug use through the failed DARE program in the ’90s and early 2000s. Kids are going to find their way onto social media regardless of how much parents try to control it. Instead, parents can slowly integrate technology, building healthy online habits and digital literacy—until kids are ready to make choices on their own.

Make no mistake: AI has the potential to revolutionize and enrich our lives. But we must forge a path that safeguards our children from an even more potent, dangerous social media addiction.

Tim Estes, through his leadership at Digital Reasoning and now at Angel AI, is at the forefront of pioneering AI solutions designed to foster the well-being and healthy evolution of our youth in the digital age.

The views expressed in this article are the writer’s own.