Meta is reportedly testing AI-driven systems that analyse physical attributes such as height and bone structure to identify underage users on its platforms, reflecting technology companies' growing focus on digital safety and age verification.
The company is exploring artificial intelligence tools capable of estimating whether users may be underage by analysing visual indicators in photos and videos. The development is part of broader industry efforts to strengthen safeguards for minors as social media platforms face increasing scrutiny from regulators and policymakers worldwide.
According to reports, the technology uses AI-based image analysis to identify age-related physical characteristics, including skeletal proportions and facial development patterns. The system is expected to complement existing age verification methods, which often rely on self-declared information and behavioural signals.
The move comes as digital platforms face mounting pressure to improve child safety measures and prevent underage access to certain online experiences. Governments across multiple markets are introducing stricter regulations around youth protection, data privacy, and social media usage by minors.
Meta has been investing in AI-powered moderation and safety systems across its platforms, including Instagram and Facebook, to detect harmful content, suspicious behaviour, and potential policy violations. The latest reported testing signals an expansion of AI applications into age detection and identity verification.
Industry observers note that verifying user age online remains one of the biggest challenges for digital platforms. Many services rely on birthdates entered by users, which can be easily manipulated. AI-based systems are increasingly being explored as a way to improve accuracy and automate safety enforcement.
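To illustrate the kind of logic such a system might apply, the sketch below combines a self-declared age with a model's visual estimate and flags accounts for human review when the two disagree sharply. Everything here is hypothetical: the names, thresholds, and the idea of a single confidence score are illustrative assumptions, not details of Meta's undisclosed system.

```python
# Hypothetical sketch: reconciling a self-declared age with a
# model-estimated age. All names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class AgeSignal:
    declared_age: int           # from the birthdate the user entered
    estimated_age: float        # from an image-analysis model (assumed)
    estimate_confidence: float  # model confidence in [0, 1] (assumed)


def needs_review(signal: AgeSignal, min_age: int = 13,
                 gap_threshold: float = 4.0,
                 confidence_floor: float = 0.7) -> bool:
    """Flag an account when a confident visual estimate suggests the user
    may be under the platform minimum despite a declared age above it."""
    if signal.estimate_confidence < confidence_floor:
        return False  # weak estimate: defer to the declared age
    looks_underage = signal.estimated_age < min_age
    large_gap = (signal.declared_age - signal.estimated_age) >= gap_threshold
    return signal.declared_age >= min_age and looks_underage and large_gap


# Example: user claims to be 18, model confidently estimates 12
print(needs_review(AgeSignal(declared_age=18, estimated_age=12.0,
                             estimate_confidence=0.9)))  # True
```

The point of the sketch is that the visual estimate supplements, rather than replaces, the declared birthdate: a low-confidence estimate falls back to the user's own claim, routing only confident mismatches to review.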
The reported use of physical analysis tools such as height and bone structure estimation highlights the growing sophistication of computer vision technologies. Advances in machine learning and biometric analysis have enabled AI systems to interpret visual data with greater precision, creating new applications in security, authentication, and moderation.
At the same time, the development is likely to intensify debates around privacy, consent, and biometric data usage. Digital rights advocates have raised concerns about how companies collect, process, and store sensitive user information, particularly when AI systems analyse physical characteristics.
Experts say platforms implementing such technologies will need to ensure transparency around how the systems operate and how user data is handled. Questions around algorithmic bias, accuracy, and false positives are also expected to become central to discussions about AI-driven age detection.
Meta has previously introduced features aimed at protecting younger users, including default privacy settings for teenagers, parental supervision tools, and restrictions on certain types of advertising. The reported AI testing appears to be part of the company’s broader strategy to strengthen youth safety measures through automation and advanced analytics.
The use of AI for identity and age verification is becoming more common across the technology sector. Platforms in gaming, social media, and digital payments are increasingly exploring biometric and behavioural analysis tools to improve compliance and security standards.
For marketers and advertisers, stronger age verification systems could also affect audience targeting and campaign delivery. Brands operating on digital platforms are facing growing expectations to ensure responsible advertising practices, particularly when campaigns involve younger audiences.
Industry analysts believe AI-powered moderation and verification systems will continue to expand as regulators demand greater accountability from technology companies. Platforms are increasingly expected to demonstrate proactive measures to protect minors and prevent misuse.
However, implementing such systems at scale presents operational and ethical challenges. AI models must balance safety goals with privacy rights, while also ensuring consistent performance across different demographics and regions.
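One common way to check the demographic consistency described above is to compare false-positive rates across groups, i.e. how often adults in each group are wrongly flagged as underage. The sketch below is illustrative only, with made-up data and hypothetical group labels.

```python
# Illustrative sketch: per-group false-positive rates for an
# age-detection model. The data and group names are invented.
from collections import defaultdict


def false_positive_rate(records):
    """records: iterable of (group, flagged_underage, actually_underage).
    Returns the share of adults wrongly flagged, per group."""
    flagged = defaultdict(int)
    adults = defaultdict(int)
    for group, flagged_underage, actually_underage in records:
        if not actually_underage:          # only adults can be false positives
            adults[group] += 1
            if flagged_underage:
                flagged[group] += 1
    return {g: flagged[g] / adults[g] for g in adults}


sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rate(sample))  # {'group_a': 0.5, 'group_b': 0.0}
```

A large gap between groups in a metric like this is one concrete signal of the algorithmic bias that experts expect to dominate discussions of AI-driven age detection.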
Meta’s reported experimentation with AI-driven age analysis reflects a broader shift towards automated trust and safety infrastructure within digital ecosystems. As online platforms handle growing volumes of user-generated content and interactions, companies are relying more heavily on AI systems to manage compliance and moderation tasks.
The development highlights how AI is becoming central not only to advertising and engagement technologies, but also to governance and platform safety operations. As scrutiny around digital safety intensifies globally, technology companies are expected to continue investing in AI-led verification and moderation tools.