Social media firms could use facial recognition to verify children’s ages
Facial recognition may soon verify kids' ages on social media, with Ofcom pushing stricter rules under the Online Safety Act.
Millions of children under 13 could lose their social media accounts under new guidance set to be introduced by Ofcom, the UK’s communications regulator.
The move, part of the Online Safety Act, aims to tackle the widespread issue of underage users accessing platforms like Instagram, TikTok, and Snapchat.
Why are age verification measures being updated?
Despite platforms requiring users to be at least 13 years old, Ofcom estimates that 60% of eight- to 11-year-olds have social media accounts. Many underage children create profiles by lying about their age, with over a fifth claiming to be adults when signing up.
Jon Higham, Ofcom’s head of online safety policy, said the issue is a significant one: “It doesn’t take a genius to work out that children are going to lie about their age. So we think there’s a big issue there.”
In response, Ofcom is exploring the use of facial recognition technology to verify ages. Higham described the technology as “highly accurate and effective” at distinguishing between children and adults. Platforms would use it to identify underage users and apply specific protections or remove accounts where necessary.
What are the consequences for tech firms?
The Online Safety Act grants Ofcom the authority to impose strict penalties on platforms that fail to protect children. Companies could face fines of up to 10% of their global turnover, and executives who repeatedly breach the rules could even face prison sentences.
The new rules are expected to come into force next spring, with Ofcom publishing its guidance shortly afterwards. The implementation timeline remains uncertain, however, and final codes and compliance deadlines could stretch later into 2025.
How do tech companies currently verify ages?
Many social media platforms have already introduced measures such as ID scanning, parental consent, and facial age estimation. Yet Ofcom research highlights gaps in enforcement. A majority of children said they were never asked to confirm their age when creating accounts, and only 18% of users on Instagram, 19% on TikTok, and 14% on Snapchat recalled being asked to verify their date of birth.
This lack of consistent checks has left children exposed to harmful content, a key concern for regulators. Earlier this month, Ofcom issued additional rules requiring platforms to tackle illegal and harmful content as part of their compliance with the Online Safety Act.
What does this mean for parents and children?
For parents, the proposed use of facial recognition technology offers the prospect of stricter protections for young users. Platforms would be better equipped to enforce age limits and shield children from harmful material.
However, there are also concerns about the privacy implications of using AI-driven facial recognition technology. Parents may wish to understand how their child’s data is collected and used before fully embracing these measures.
In the meantime, parents can take proactive steps to protect their children online by monitoring usage, using parental controls, and having open discussions about safe social media habits.