Meta is developing a new AI-powered system to detect users who have lied about their age on its platforms, Facebook and Instagram. This move is part of the company’s ongoing efforts to protect young users from harmful content and comply with increasing regulatory scrutiny.
The new AI system, dubbed the “adult classifier,” will use machine learning to analyze a user’s account data, including their profile, follower list, interaction history, and even public birthday posts from friends. This analysis is intended to let the system sort users into one of two age brackets, over or under 18, with greater accuracy than self-reported birthdates.
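Meta has not published how the classifier works, but the description above amounts to a binary classifier over account-level signals. As a purely illustrative sketch, the toy logistic model below combines invented features (all names, weights, and thresholds are hypothetical, not Meta's) into an over/under-18 prediction:

```python
import math

# Purely illustrative: Meta has not disclosed its model. This toy
# logistic classifier scores hypothetical account signals and returns
# a probability that the account holder is over 18. Every feature
# name and weight here is invented for the example.
WEIGHTS = {
    "account_age_years": 0.8,       # long-lived accounts skew adult
    "follows_adult_topics": 1.2,    # e.g. finance or news pages
    "birthday_post_signal": -2.5,   # friends' posts implying age < 18
    "teen_interaction_ratio": -1.5, # share of interactions with teen accounts
}
BIAS = -0.5

def adult_probability(features: dict) -> float:
    """Return P(user is over 18) under the toy model."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

def classify(features: dict, threshold: float = 0.5) -> str:
    """Map the probability to one of the two age brackets."""
    return "18+" if adult_probability(features) >= threshold else "under 18"
```

For example, an account that is several years old and mostly interacts with adult accounts would score as "18+", while a new account whose friends post "happy 16th" messages would score as "under 18". A real system would of course learn its weights from labeled data rather than hand-set them.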
Meta announced plans for this technology in September and is set to begin live testing on Instagram early next year. This new tool will join other measures already in place, such as age verification processes, parental consent options, and advanced protections for teen accounts. These measures aim to create a safer online environment for young users and mitigate the risks associated with harmful content.
The development of this AI system comes as various regions around the world consider implementing age restrictions on social media access. Australia, Denmark, the U.S., and the U.K. are among the countries exploring these measures. While Meta has suggested that app stores should be responsible for enforcing age restrictions, this proposal has not gained significant traction.
While the new AI system may improve age verification, young users are likely to keep finding ways around it. A multi-layered approach, combining platform-level detection, app store restrictions, and parental involvement, may therefore be necessary to effectively protect young users and keep pace with evolving regulations.