YouTube is tightening its child-safety measures with a new artificial intelligence (AI) system designed to catch underage users who pretend to be adults on the platform.
The machine-learning system estimates a user's actual age by analyzing signals such as the types of videos watched and how long the account has existed. James Beser, director of product management for YouTube Youth, said this approach ensures that protections apply "regardless of the birthday listed in the account."
If YouTube suspects a user is underage, that user will be asked to verify their age through methods such as a government ID, a selfie, or a credit card check. The move builds on existing safeguards and is meant to strengthen protections for younger viewers on the platform.
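YouTube has not disclosed how its classifier works beyond naming the kinds of signals it considers, so the Python sketch below is purely illustrative: the signal names, weights, thresholds, and verification options are assumptions, used only to show how viewing patterns and account longevity might feed an age estimate that then triggers a verification prompt.

```python
# Illustrative sketch only: YouTube has not published its model, so every
# signal name, weight, and threshold here is an assumption, not its API.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int       # how long the account has existed
    kids_content_ratio: float   # share of watch time on child-oriented videos (0.0-1.0)
    declared_age: int           # age implied by the birthday listed on the account


def estimate_is_underage(signals: AccountSignals) -> bool:
    """Rough heuristic stand-in for a trained ML age classifier."""
    score = 0.0
    if signals.account_age_days < 365:
        score += 0.3                            # newer accounts carry less history
    score += signals.kids_content_ratio * 0.7   # heavy child-content viewing is a strong signal
    return score >= 0.5                         # flag when the combined signal crosses a threshold


def required_checks(signals: AccountSignals) -> list[str]:
    """If the estimate contradicts the declared adult age, ask for proof (as the article describes)."""
    if signals.declared_age >= 18 and estimate_is_underage(signals):
        return ["government_id", "selfie", "credit_card"]
    return []


if __name__ == "__main__":
    user = AccountSignals(account_age_days=120, kids_content_ratio=0.8, declared_age=21)
    print(required_checks(user))  # -> ['government_id', 'selfie', 'credit_card']
```

In practice the score would come from a model trained on behavioral data rather than hand-set weights; the sketch only captures the reported flow of estimate first, then request proof of age when the estimate and the stated birthday disagree.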
Globally, social media platforms like YouTube, TikTok, and Instagram face criticism for exposing minors to harmful content. In Australia, a strict new law will ban children under 16 from joining these platforms starting December 10. The legislation, one of the toughest worldwide, has drawn interest from other nations considering similar restrictions.
Communications Minister Anika Wells revealed that nearly 40% of Australian children had encountered harmful content on YouTube. She said the upcoming ban is needed to shield young users from addictive algorithms and inappropriate material.
While YouTube insists it is primarily a video-sharing platform and not social media, regulators argue its massive reach makes it equally responsible for child protection.