Parents are gonna “like” this.
Meta has added “Teen Accounts” to Facebook and Messenger to limit who can contact minors and screen the content they’re exposed to.
On Tuesday, the tech giant announced that users under 18 will automatically be enrolled in these accounts in an attempt “to give parents more peace of mind across Meta apps” and curb exposure to inappropriate content.
Meta told TechCrunch that teens will only receive messages from people they follow or have messaged before. Only their friends will be able to see and reply to their stories, while tags, mentions, and comments will be limited to those in their network.
Teens will also receive notifications prompting them to close the apps after logging an hour of screen time, and their apps will be placed in “quiet mode” at night.
Users under 16 will need a parent’s permission to make their settings less strict.
These protections will be rolled out in the US, UK, Australia, and Canada before expanding to other locations.
Similar safety features were added to Instagram last year, as watchdogs and lawmakers have continued to crack down on social media companies’ failure to protect children amid concern about rising mental health issues linked to the apps.
Along with the features recently added to Facebook and Messenger, Instagram allows parents to view which accounts their child has recently messaged, set daily time limits and block teens from using the app during specific time periods.
In the most recent update, released Tuesday, Meta also added protections that prevent teens under 16 from going “live,” receiving “unwanted images,” or unblurring images suspected of containing nudity without a parent’s permission.
Meta claims that 97% of teens aged 13 to 15 have kept these built-in restrictions on their accounts since they were first added last year and that 94% of parents say these restrictions are “helpful.”
However, since these changes were first introduced, many online safety and parenting groups have insisted that the safety upgrades are inadequate.
Last summer US Surgeon General Vivek Murthy called for the implementation of a tobacco-style “warning label” for social media apps to raise awareness about their potential mental health risks, including depression and anxiety.
Last fall, a coalition of state attorneys general sued Meta, alleging the company has relied on addictive features to hook kids and boost profits at the expense of their mental health.
“Meta can push out as many ‘kid-’ or ‘teen’-focused ‘features’ as it wants, it won’t change the fact that its core business model is predicated on profiting off and encouraging children and teenagers to become addicted to its products – and American parents are wise to the hustle and demanding legislative action,” Tech Oversight Project director Sacha Haworth said in a statement to The Post at the time.
Another watchdog, the Tech Transparency Project, argued that Meta has “claimed for years to already be implementing” versions of the features detailed in the initial move.
For example, Meta originally announced plans to make teen accounts private by default and to limit their interactions with strangers as far back as 2021, according to previous blog posts.