The UK’s Online Safety Act and Age Checks: A Bad Idea Done Badly

Let’s talk about the Online Safety Act.

You’ve probably seen headlines about it: it’s supposed to make the internet safer, especially for kids. On paper, that sounds great. Who wouldn’t want less harmful content online?

But once you look under the bonnet, things start to fall apart. Fast.

I work with tech every day, helping people navigate the messy bits. So when I see legislation like this, I can’t help but think: who exactly is this helping? Because it’s not the users. And it’s definitely not making the internet any simpler or safer.

Age verification: good intentions, messy execution

One of the big pillars of this new law is age verification. The idea is that websites need to make sure users are old enough to access certain content. Again, sounds reasonable. But the methods? They’re all over the place.

We’re talking passport uploads, facial scans, credit card checks - even handing off your details to third-party companies you’ve never heard of. Some of them aren’t even based in the UK (!).

Think about that for a second. You’re uploading sensitive ID - the kind of stuff we’re always telling people to lock down - to an overseas company, just to access a website. What could possibly go wrong?

And it’s not hypothetical. In 2024, a major age verification company, VeriFast, was hacked, exposing biometric data from thousands of users. This is an industry with no universal standards and very little oversight.

The privacy trap

Most people assume that if a UK law requires something, the data stays in the UK. That’s not the case here.

Take Yoti, one of the most widely used age assurance providers. They publicly admit that when AI can’t verify your age, the check is done manually, by staff based in India. Another company, Persona, is based in the US and stores uploaded ID photos for up to seven days.

None of this is illegal. But it raises serious concerns. What happens to that data after you close the browser tab? How is it stored? Who else gets to see it?

And it’s not just adult sites being hit. This law applies to any platform that could be “potentially harmful to children”. That means forums, blogs, games, chat apps - basically half the internet. Even Reddit has introduced mandatory ID checks for UK users accessing certain communities.

Workarounds, VPNs and wasted effort

Here’s the stupid bit - the people this law is supposed to protect (teenagers) are the most likely to dodge it.

Since the first enforcement deadline hit in July 2025, Proton VPN has seen signups in the UK jump by over 1400%. If you’re a 14-year-old who wants to get around this stuff, all it takes is a free app and five minutes on TikTok.

So we’re building this massive privacy-invading system… that doesn’t even work. Classic.

A regulatory black hole

Even if you’re fine with age checks in principle, the way it’s been rolled out is a mess.

There’s no approved list of providers. No universal technical standard. No independent regulator making sure these companies handle your data properly. In fact, some age assurance providers are allowed to use your ID data for training algorithms, advertising, or “research”, as long as they include it somewhere in the small print.

That’s not privacy. That’s wishful thinking.

Groups like the Open Rights Group and the Wikimedia Foundation have already raised red flags. Wikimedia is now challenging the law in court, arguing that enforcing contributor identity checks could put volunteers at risk, especially in countries where editing Wikipedia can be dangerous.

So what do we do?

I’m not saying we shouldn’t protect kids online. Of course we should. But not like this.

Real online safety isn’t about blanket ID checks. It’s about education, platform design, parental tools, and holding tech giants accountable. It’s about making the internet better for everyone, not just building digital border checks that don’t even work.

This is a mess

The Online Safety Act is a classic case of overpromising and underdelivering. It’s invasive, it’s inconsistent, and it opens the door to privacy violations most people don’t even realise they’re agreeing to.

If you value privacy, freedom, or just a bit of common sense online, this matters. We all want a safer internet. But it shouldn’t come at the cost of our rights.

We need a better approach. One that’s actually safe, not just pretending to be.

Ashley Adkins, Founder @ Adkinsio | Helping Business Work Smarter