Meta is doubling down on its efforts to protect young users on Instagram as the tech giant plans to deploy an AI-powered “adult classifier” tool early next year.
This tool will analyze a user’s activity, such as the accounts they follow and the content they engage with, to determine whether they are under 18.
If the AI suspects a user is underage, it will automatically switch their account to a teen account with more privacy restrictions — regardless of the age they’ve claimed.
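Meta has not published how the classifier actually works, but the behavior the company describes, behavioral signals combined into a score that can override a user’s stated age, can be sketched in a few lines. Everything in the snippet below (the feature names, weights, and threshold) is a hypothetical illustration, not Meta’s implementation:

```python
# Hypothetical illustration only: Meta has not disclosed its classifier's design.
# This toy sketch scores a few activity signals and flags likely-under-18 accounts.
from dataclasses import dataclass

@dataclass
class ActivitySignals:
    # Illustrative features; the real system's inputs are not public.
    teen_content_engagement: float  # 0.0-1.0, share of engagement with teen-oriented content
    followed_teen_accounts: float   # 0.0-1.0, share of followed accounts flagged as teen accounts
    stated_age: int                 # the age the user claimed at signup

def estimate_under18_score(s: ActivitySignals) -> float:
    """Combine signals into a rough 0-1 likelihood that the user is under 18."""
    score = 0.6 * s.teen_content_engagement + 0.4 * s.followed_teen_accounts
    # A claimed adult age lowers the score but, per the article, cannot override it.
    if s.stated_age >= 18:
        score *= 0.9
    return min(score, 1.0)

def apply_age_policy(s: ActivitySignals, threshold: float = 0.8) -> str:
    """Switch the account to teen restrictions when the score clears the threshold,
    regardless of the stated age, mirroring the behavior described above."""
    if estimate_under18_score(s) >= threshold:
        return "teen_account"  # stricter privacy defaults applied
    return "standard_account"

if __name__ == "__main__":
    suspected_minor = ActivitySignals(0.95, 0.9, stated_age=21)
    print(apply_age_policy(suspected_minor))  # -> teen_account
```

The key design point, as reported, is the last branch: the behavioral score wins over the self-reported age, which is exactly why false positives matter.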
While this move is meant to safeguard young users, it raises questions about the accuracy of the AI and the potential for false positives.
Meta has yet to disclose the tool’s precision, and it has not detailed an appeal process for users mistakenly flagged as underage.
To further tighten age verification, Meta will also implement measures to prevent teens from manually changing their age. Users who attempt to do so will be required to provide proof of identity, either by uploading a government-issued ID or sharing a video selfie.
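Purely as an illustration, the age-change gate Meta describes amounts to a simple rule: an account claiming to move from under 18 to 18-plus must carry a verified proof of identity first. The function and type names below are invented for the sketch, not Meta’s actual API:

```python
# Hypothetical sketch of the age-change gate described in the article.
from enum import Enum
from typing import Optional

class ProofMethod(Enum):
    GOVERNMENT_ID = "government_id"  # uploaded government-issued ID
    VIDEO_SELFIE = "video_selfie"    # age-estimation video selfie

def can_change_age(current_age: int, requested_age: int,
                   verified_proof: Optional[ProofMethod]) -> bool:
    """Allow a teen to claim an adult age only after an identity check passes."""
    if current_age < 18 and requested_age >= 18:
        return verified_proof is not None
    return True

# A teen raising their age without proof is rejected:
assert can_change_age(16, 21, verified_proof=None) is False
# The same change goes through once an ID upload or video selfie is verified:
assert can_change_age(16, 21, verified_proof=ProofMethod.GOVERNMENT_ID) is True
```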
Meta’s move comes amid increasing scrutiny from US lawmakers and parents concerned about the impact of social media on young people.
The company has previously faced criticism for failing to adequately protect underage users from harmful content and online predators.