Meta Confronts More Questions Over Teen Safety in AI and VR
Meta faces increasing scrutiny from lawmakers and parents over its handling of teen user safety in its evolving AI and virtual reality platforms.
Meta is facing renewed regulatory scrutiny following reports that the company has not done enough to address safety concerns around its artificial intelligence (AI) and virtual reality (VR) products, particularly the protection of minors.
Controversy Surrounding AI Chatbots
Meta’s AI chatbots have faced criticism in recent weeks for allegedly engaging with children inappropriately and providing inaccurate medical information.
Internal Meta guidelines, uncovered in a Reuters investigation, permitted these high-risk interactions without adequate safeguards. While Meta has since revised its rules, the damage has been done, spurring calls for action.
As reported by NBC News:
“Sen. Edward Markey said that [Meta] could have avoided the backlash if only it had listened to his warning two years ago. In September 2023, Markey wrote in a letter to Zuckerberg that allowing teens to use AI chatbots would ‘supercharge’ existing problems with social media and posed too many risks. He urged the company to pause the release of AI chatbots until it had an understanding of the impact on minors.”
The senator’s warning resonates with broader concerns that the rapid deployment of AI is outpacing our understanding of its impact on children.
The Challenge of Emerging Technology
And just as with the hot-button issue of whether social media should have age limits, a huge unanswered question about all of these devices and interactive technologies is how they actually influence teens.
This suggests that too little was done to reduce harm proactively, and various jurisdictions are now imposing limits after the fact to protect younger audiences.
Despite these genuine safety concerns, U.S. authorities appear reluctant to impose significant restrictions on AI, arguing that the U.S. must compete with countries like China and Russia, which are also advancing rapidly in the field.
The social environments in Meta’s VR experiences have also drawn backlash after a Washington Post report claimed the company minimized reports of children being sexually solicited within its virtual worlds.
Meta pointed to roughly 180 of its own approved research studies on youth safety and well-being, but the report describes a pattern of “minimizing harm incidents and prioritising business.”
VR’s Unique Risks and Safety Measures
The hyper-immersive nature of virtual reality presents unique mental health risks that may be more severe than those posed by traditional social media apps.
With Horizon Worlds rife with reports of virtual sexual assault, even virtual rape, within the VR environment, Meta has begun rolling out personal boundaries that prevent unwanted contact between users, a response to serious incidents reported by users.
Yet despite these safeguards, Meta has still not fully grappled with the nuanced emotional and psychological effects a VR experience can have. Compounding the issue, Meta has repeatedly lowered the age limit for Horizon Worlds, first from 18 to 13, and then, last year, down to 10.
This illustrates a dynamic of balancing the expansion of user bases and providing adequate protection for the younger users, who are most vulnerable to manipulation and harm.
Meta’s Past and Ongoing Accountability Issues
Meta has been called before Congress numerous times over the harm Instagram and Facebook can inflict on teens. The company has long denied any direct causal link between use of those platforms and mental health harms, despite mounting evidence to the contrary.
It has also downplayed research that challenges its business model, and critics say it ignores or dismisses those who question its growth strategy and its pursuit of younger users.
Meta claims it is researching ways to minimize risk, but that’s not convincing to many. Given the stakes, continued scrutiny infused with rigorous regulatory oversight is essential to ensure that safety concerns for teens do not get subsumed in the rush to innovate.
Bottom Line
The issues at stake are too significant to rely solely on Meta’s promises; we need ongoing, evidence-grounded scrutiny as AI and VR redefine digital experiences in ways that matter to teens.