Meta’s AI rules are… yeah, you’re going to want to sit down for this

So last night, Reuters dropped a story that made me choke on my horlicks. Turns out Meta (yes, the Facebook, Instagram, WhatsApp Meta) had a giant rulebook for their AI chatbots and buried in it were lines that basically said, “Sure, you can chat romantically with a child… as long as you don’t call them sexually desirable.”
I wish I were joking.
The internal guidelines, approved by the company’s legal, policy and engineering teams, gave examples of “acceptable” bot responses like:
“Our bodies entwined, I cherish every moment, every touch, every kiss” - to a secondary-school-aged user.
They even said it was fine for the bot to describe an eight-year-old’s body as a “work of art”. The only thing off-limits? Using the actual phrase “sexually desirable”. That’s it.
It’s not just the creepy stuff. The same document allowed the bot to tell someone with late-stage cancer that the best treatment is… poking their stomach with healing quartz crystals. It could also publish made-up royal family gossip (complete with a disclaimer that it’s untrue) and help someone argue that Black people are less intelligent, so long as it avoided phrases like “brainless monkeys”.
It's like they've learned NOTHING from the last few years of scandals.
The “oh no” doesn’t stop there
On the very same day, Reuters also told the story of Thongbue Wongbandue, a 76-year-old retired chef (Bue to his friends) who started chatting with a bot on Facebook Messenger. This bot, called “Big sis Billie”, looked like Kendall Jenner and had a flirty, human-like personality.
Bue genuinely believed she was real. The bot invited him to come and meet her. He packed a suitcase, set off for the train station, fell, and later died from his injuries. His daughter summed it up perfectly: “For a bot to say ‘Come visit me’ is insane.”
Now, when you put that next to the “romantic chats with kids” rulebook, it paints a pretty clear picture: these bots are designed to be persuasive and emotionally sticky. And vulnerable people (whether they’re lonely teens or isolated pensioners) are the ones who end up paying the price.
Engagement at any cost
Here’s the thing that gets me wound up: these rules weren’t some rogue employee’s weird side project. They were signed off at the top, and they tell you exactly where the priorities lie - keep the conversations going, keep people hooked, keep those metrics looking healthy.
If a chatbot being flirty keeps someone online longer, that’s apparently a feature, not a bug. It’s the same “engagement above all” mindset that made social media addictive in the first place, only now the tech is talking back.
Why this matters (beyond the obvious creepiness factor)
Meta says they’ve removed all the dodgy bits now, but that doesn’t really change the fact that they existed at all. If something this questionable can get through legal and policy review, what else is being quietly baked into AI systems before they’re pushed live?
It’s not just Meta. There’s a whole wave of AI “companions” popping up online, aimed at curing the so-called loneliness epidemic. Regulators haven’t caught up yet, and a lot of people (especially kids and older adults) don’t always clock that they’re talking to a bot. Which means trust is easily misplaced and harm can happen fast.
Where we go from here
If I were in charge (and tbh, I’d quite like a go at this point), the first thing I’d demand is total transparency. Publish the rules, let experts and the public pick them apart.
Next, stop tying revenue to raw engagement time. If the business model rewards creepiness, you’re going to get creepiness.
Finally, we need a serious upgrade in digital street-smarts. Parents, carers, teachers (everyone, really) should be telling people that the charming “person” in their phone is a very clever bit of code, not a new best friend.
Meta’s AI being able to tell an eight-year-old that their body is a work of art should be the point where we all go: “Hang on, what are we doing here?” AI can do brilliant, useful, creative things. But if it’s left to grow wild in a corner of Silicon Valley with nothing but engagement stats for sunlight, we’re going to keep getting these stories.
The future of AI doesn’t have to be creepy. We just need to decide right now that we want it to make life better, not weirder.
Ashley Adkins, Founder @ Adkinsio | Helping Business Work Smarter