Part D – Age Estimation

Part D of the Age Assurance Technology Trial focuses specifically on age estimation – a method of determining an individual’s likely age or age range by analysing physical or behavioural characteristics using artificial intelligence or machine learning models. Unlike age verification, which relies on known and validated dates of birth, age estimation applies biometric or statistical techniques (such as facial analysis, voice modelling or motion pattern recognition) to predict age without the need for formal identity documents.

Findings on Age Estimation

Age estimation can be effectively implemented in the Australian context and is already in active use. It provides a fast, low-friction, document-free method of age assurance well-suited to binary threshold decisions (e.g. 13+, 16+, 18+). Several systems are already deployed across sectors such as social media, e-commerce and youth platforms. These implementations align with emerging international standards, including ISO/IEC FDIS 27566-1 and IEEE 2089.1.
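The binary threshold decisions described above can be sketched in code. This is an illustrative sketch only, not any vendor's actual API: the function name and the buffer value are assumptions, showing how a point estimate of age might be mapped to a yes/no outcome at a threshold such as 18+.

```python
# Illustrative sketch (assumed names, not a real product API):
# map a model's age estimate to a binary threshold decision.

def passes_threshold(predicted_age: float, threshold: int, buffer: float = 2.0) -> bool:
    """Return True only if the estimate clears the threshold with a
    safety buffer, reducing false acceptances near the boundary."""
    return predicted_age - buffer >= threshold

print(passes_threshold(24.3, 18))  # well clear of 18+: True
print(passes_threshold(17.2, 18))  # under the buffered threshold: False
```

The buffer reflects a common design choice in threshold-based systems: users whose estimate falls near the boundary are not passed outright, which trades a little friction for fewer false acceptances.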

Most systems are technically deployable across standard devices and environments, though edge-case limitations remain. The Trial found no substantial technological limitations to adoption. However, performance may degrade under poor lighting, occlusion, extreme angles or low-resolution input - conditions requiring ongoing optimisation for reliable real-world use.

Provider claims regarding performance were generally substantiated through independent evaluation. System outputs under test conditions aligned with stated model accuracy, confidence thresholds and demographic performance metrics. A minority of early-stage systems lacked complete transparency, but most vendors provided sufficient documentation and verification.

Age estimation must be configured to context - there is no one-size-fits-all approach. Vendors offered different deployment models (e.g. real-time or asynchronous), with configurable thresholds, fallback methods and policy tuning to match the risk profile and regulatory context of the relying party.
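The configurable-threshold-plus-fallback pattern described above can be sketched as follows. All configuration names, buffer values and fallback labels here are hypothetical illustrations of the idea, not settings from any Trial participant.

```python
# Hypothetical sketch of context-specific configuration: a relying party
# tunes the threshold, buffer and fallback method per risk profile.

CONFIGS = {
    "social_media_13plus": {"threshold": 13, "buffer": 1.0, "fallback": "parental_consent"},
    "gambling_18plus":     {"threshold": 18, "buffer": 3.0, "fallback": "document_verification"},
}

def decide(predicted_age: float, context: str) -> str:
    cfg = CONFIGS[context]
    if predicted_age - cfg["buffer"] >= cfg["threshold"]:
        return "pass"           # confident pass: no further checks needed
    if predicted_age + cfg["buffer"] < cfg["threshold"]:
        return "fail"           # confident fail
    return cfg["fallback"]      # near the boundary: escalate to fallback

print(decide(25.0, "gambling_18plus"))  # "pass"
print(decide(17.5, "gambling_18plus"))  # "document_verification"
```

A higher-risk context (gambling) gets a wider buffer and a stronger fallback than a lower-risk one, which is the essence of configuring estimation to context rather than applying one global setting.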

The age estimation sector is innovative, fast-moving and responsive to privacy and fairness challenges. Providers are iterating rapidly - introducing on-device AI, synthetic data augmentation and lower-latency models, while integrating privacy-preserving architectures such as edge inference and federated learning.


Demographic performance consistency is improving but requires continued focus. While many systems showed broadly fair results, some exhibited reduced accuracy for non-Caucasian users, older adults or female-presenting users near age thresholds. Underrepresentation of Indigenous populations, particularly Aboriginal and Torres Strait Islander Peoples, remains a challenge that vendors are beginning to address through dataset expansion and fairness auditing.

Accuracy varies in suboptimal conditions, highlighting the need for robustness improvements. Test scenarios involving occluded faces, substandard lighting, unusual angles or low-quality cameras showed increased false rejections or misclassifications. Technical refinements are needed to maintain reliability in these edge cases and avoid excluding eligible users.

Vendors are actively mitigating adversarial threats, including spoofing and injection attacks. Most systems implemented ISO/IEC 30107-aligned presentation attack detection and began preparing for injection risk countermeasures (as defined in ISO/IEC AWI 25456). Project DefAI and other certification initiatives will further support resilience to manipulation and deepfakes.

Contextual signals (e.g. parental controls or device settings) were sometimes integrated but not treated as authoritative. These elements were typically used as supplementary inputs to support or refine user journeys. Age estimation decisions remained based on real-time, independently derived evidence - not on self-declared, inferred or parental assertions.
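The supplementary-not-authoritative role of contextual signals can be sketched as a routing function. The signal and outcome names below are illustrative assumptions: the point is that the independently derived estimate drives the decision, and contextual signals only refine the journey.

```python
# Illustrative sketch (assumed names): contextual signals such as a
# child-oriented device profile refine routing but never overturn the
# real-time, independently derived age estimation decision.

def route_user(estimate_passes: bool, child_device_profile: bool) -> str:
    if not estimate_passes:
        # The estimate alone drives denial; a corroborating child
        # profile just selects the appropriate experience directly.
        return "child_experience" if child_device_profile else "fallback_check"
    # A child profile cannot overturn a passing estimate, but it can
    # add a confirmation step rather than being treated as authoritative.
    return "adult_with_recheck" if child_device_profile else "adult_experience"

print(route_user(True, False))   # "adult_experience"
print(route_user(False, True))   # "child_experience"
```

Note that `child_device_profile` never flips a fail into a pass or vice versa; it only changes which journey the user is routed through.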

Providers are aligning with emerging international standards and demonstrating readiness for certification. Many systems reflected the privacy, transparency and proportionality expectations in ISO/IEC FDIS 27566-1 - especially Clauses 5.3 (Privacy), 5.7 (Fairness), 6.2 (Friction Minimisation) and 6.4 (Confidence Expression) - as well as information security standards like ISO/IEC 27001 and biometric assurance practices under IEEE 2089.1.

Case Studies


Privately

Privately offers a lightweight, on-device facial age estimation system designed for privacy-by-design deployments, especially in youth-focused contexts such as education and family settings.


Needemand

Needemand’s BorderAge solution offers a compelling example of a non-facial, privacy-preserving age estimation modality. Unlike most Trial participants, Needemand does not rely on facial analysis, voice or any biometric traits traditionally associated with identity. Instead, it uses hand gesture dynamics, captured via a device’s camera, to determine whether a user is likely an adult or a child.


Persona

Persona provides facial age estimation with fallback to ID verification. It includes audit-backed fairness metrics, a governed update process and a privacy-preserving design with opt-out controls.


Unissey


Rigr AI

Rigr AI uses AI-driven facial age estimation with privacy-preserving, on-device or edge-enabled architecture to deliver real-time age assurance without storing biometric data, supporting diverse, low-friction digital contexts.


Verifymy

Verifymy provides flexible age verification (AV) solutions integrated with digital wallets, document verification and cross-jurisdictional datasets. It supports selective disclosure and privacy-first age checks, delivering binary outcomes (e.g. “Over 18: Yes”) via APIs and reusable credentials for platforms such as gambling, e-commerce and education.


Luciditi

Luciditi provides facial age estimation, document verification via selfie-ID match, NFC passport reading and open banking or telco records, with fallback to a reusable digital ID app.


Yoti

Yoti provides low-friction, high-trust verification with one-time and reusable tokens. Its platform consistently prioritised privacy, simplicity and user control throughout the Trial, making it a standout example of minimising user friction while maintaining assurance.

© 2026 Age Assurance Technology Trial