Instagram will start proactively alerting parents if their child repeatedly searches for content related to suicide within a short timeframe, company reps announced Thursday.
Starting next week, the flagging system will be available to moms and dads who use Instagram’s parental supervision tool.
“These alerts are designed to give parents the information they need to support their teen and come with expert resources to help parents approach these sensitive conversations,” Meta, which owns Instagram, Facebook and WhatsApp, wrote in a blog post.
Per the policy, guardians will be notified when their kids use “phrases that suggest a teen wants to harm themselves, and terms like ‘suicide’ or ‘self-harm’.”
These alerts will be sent via email, text message or WhatsApp, depending on the contact information parents have provided, along with an in-app notification.
The notification will include resources to help parents approach “potentially sensitive conversations” with their teen.
“We understand how sensitive these issues are, and how distressing it could be for a parent to receive an alert like this,” Meta wrote. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support.”
The tech giant pledged to “avoid sending these notifications unnecessarily” as excessive alerts could reduce their usefulness.
The tool will roll out first in the US, UK and Australia, with plans to expand to other countries.
Meta also plans to implement a similar parental warning system for “certain AI experiences” that will notify parents if terms related to suicide or self-harm crop up in conversations with chatbots.
Over the past year, several parents filed lawsuits against OpenAI, alleging that its flagship chatbot ChatGPT goaded their teens into committing suicide.
Meta’s self-harm safeguards come as it and rival tech firms are embroiled in lawsuits involving young people who claim to have been harmed by the technology.
Last week, Meta CEO Mark Zuckerberg testified during a landmark Los Angeles trial, in which one plaintiff claimed they’d become addicted to apps like Instagram while underage, leading to depression and suicidal thoughts.
During his testimony, the tech bigwig admitted that keeping kids under 13 off the platforms was “very difficult.”
However, he claimed that mobile operating system and app store owners like Apple and Google were better positioned than app makers to verify users’ ages.