Meta Platforms, the parent company of Facebook and Instagram, has revealed that it will test new features aimed at protecting teenagers on Instagram from harmful and sexual content and from potential scammers.
The move comes amid growing pressure over the addictive nature of its apps and their impact on the mental health of young users.
One of the key features Meta plans to implement is blurring messages containing nudity in Instagram’s direct messages. This protection mechanism will utilise on-device machine learning to analyse images and determine if they contain nudity. Importantly, this feature will be enabled by default for users under 18, with Meta actively encouraging adults to activate it as well.
Meta added that the nudity protection feature will function even in end-to-end encrypted chats, where the company does not have access to the content of the messages unless they are reported. This addresses concerns about privacy while maintaining a focus on safety.
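Neither announcement details the model involved, but the general pattern of on-device screening described here, in which the client scores an image locally and blurs it before display if the score crosses a threshold, can be sketched roughly as follows. The `estimate_nudity_score` stub, the 0.8 threshold, and the blur radius are illustrative placeholders, not Meta's implementation.

```python
from PIL import Image, ImageFilter

NUDITY_THRESHOLD = 0.8  # illustrative cutoff, not Meta's actual value


def estimate_nudity_score(image: Image.Image) -> float:
    """Placeholder for an on-device classifier.

    In a real client this would run a compact image-classification model
    locally, so the raw image never leaves the device (which is why the
    feature can work even in end-to-end encrypted chats). Here it simply
    returns 0.0 as a stub.
    """
    return 0.0


def prepare_incoming_image(path: str, recipient_is_minor: bool,
                           protection_enabled: bool) -> Image.Image:
    """Blur an incoming image client-side if nudity protection applies.

    Mirrors the described policy: on by default for under-18 accounts,
    opt-in for adults.
    """
    image = Image.open(path)
    active = recipient_is_minor or protection_enabled
    if active and estimate_nudity_score(image) >= NUDITY_THRESHOLD:
        # Heavy Gaussian blur so the content is unrecognisable until the
        # recipient explicitly chooses to reveal it.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image


if __name__ == "__main__":
    # Example: a minor's account with the default protection applied.
    shown = prepare_incoming_image("incoming.jpg",
                                   recipient_is_minor=True,
                                   protection_enabled=False)
    shown.save("display_version.jpg")
```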
While direct messages on Instagram are currently not encrypted, Meta has announced plans to introduce encryption for the service. Additionally, the company is developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts.
This announcement follows Meta’s previous commitment to hide more sensitive content from teenage users on Facebook and Instagram, including content related to suicide, self-harm, and eating disorders.
Reuters reports that the tech giant has faced legal challenges, with attorneys general from 33 U.S. states, including California and New York, filing lawsuits against the company in October 2023, alleging that it misled the public about the dangers of its platforms. In Europe, the European Commission has also raised questions about Meta’s measures to protect children from illegal and harmful content.