AI Safety & Responsibility: Learning from Social Media’s Mistakes
Over the past while, I’ve found myself thinking more about the real harms we’re already seeing from new technology, particularly AI.
This isn’t about hypothetical future risks or what might go wrong one day.
It’s about what’s happening right now, and the impact it’s having on real people.
I’m a genuine fan of technology and the opportunities it creates. AI, when developed well, has the potential to do a huge amount of good. But how it’s designed and deployed really matters.
What’s happening
Recent headlines have highlighted how AI tools can be used to generate and spread sexualised imagery, including deepfakes created for financial gain. These are not abstract technical failures. They are deliberate acts that use technology as a tool, acts that affect people’s dignity, safety, and wellbeing and can cause untold harm.
It’s no surprise that regulators are asking tough questions. The harm is no longer theoretical. It’s visible, predictable, and already occurring.

Why this matters
When we talk about AI misuse, it’s easy to focus on platforms, tools, or technical loopholes. But at the centre of all of this must be the people who have already been harmed by this technology, particularly children and young people.
Images don’t just exist online in isolation. They can follow someone into school, work, relationships, and everyday life. The emotional and psychological impact can be long-lasting, especially for younger users who may not yet have the confidence or support to speak up.
A difficult but important reality
Even when technology companies put safeguards in place, we have to acknowledge a difficult reality: some users will actively try to work around them, using different prompts or techniques to bypass restrictions.
That doesn’t remove responsibility from platforms, but it does remind us that safety can’t be treated as a one-time fix. It has to be ongoing, monitored, and responsive.
Safeguards absolutely do matter. But so do detection, reporting, rapid removal, and accountability when things go wrong.
Have we learned enough from the social media era?
This isn’t the first time we’ve seen technology scale faster than our ability to fully understand its impact.
We know all too well the harms that emerged during the rise of social media, from misinformation and harmful content to its effects on mental health, wellbeing, and wider society. Yes, social media brings benefits, but many of the risks were only addressed after they became widespread and deeply embedded.
That’s what makes this moment with AI particularly important.
We now have years of lived experience, research, and real-world evidence showing what happens when platforms prioritise rapid growth and financial gain over built-in safety. We’ve seen the cost of addressing harm reactively rather than proactively.
So it’s reasonable to ask:
Why aren’t those learnings being applied more clearly to AI development today?
If we already know the impact of “move fast and fix later,” repeating that approach with far more powerful tools feels like a risk we don’t need to take.
Can we ban our way out of this?
In response to harms like these, calls to ban certain AI tools or features are completely understandable. When people are being hurt, the instinct to stop the technology altogether comes from a place of protection.
But we also need to be honest about the limits of bans.
AI tools don’t exist in one place, on one platform, or in one country. Even if a feature is restricted or removed in one jurisdiction, similar tools can still be accessed elsewhere, shared privately, or recreated in new forms. In practice, harmful behaviour often doesn’t disappear – it simply moves.
That doesn’t mean regulation or restrictions aren’t necessary. They absolutely are. But banning our way out of this, on its own, is unlikely to solve the problem.
What’s more realistic and potentially more effective is a combination of:
- Strong safeguards built in from the start
- Continuous monitoring and rapid response
- Clear accountability, including financial penalties, when harm occurs
- Education and awareness so users understand impact and consequences
The questions I keep coming back to
These are the questions I find myself asking more often lately:
- If an AI tool can be misused to cause predictable harm, should it ever be launched without strong safeguards built in from day one?
- When harmful content spreads at scale, is that simply user misuse or does it point to a product design issue?
- Should platforms be expected to demonstrate safety before release, rather than responding only after harm occurs?
- And when things do go wrong, who carries responsibility? The user, the developer, or the platform deploying the tool?
These aren’t easy questions, but they’re necessary ones.
Moving forward
This isn’t about stopping innovation or slowing progress. It’s about building technology people can trust and keeping safety at the centre, especially for the most vulnerable – with consent and wellbeing not treated as optional extras.
If AI is going to play an increasing role in our lives, then protecting people, especially children and young people, has to be part of that conversation from the very beginning, not as an afterthought.
💬 Are bans a realistic solution here, or do we need to get much better at applying the lessons we’ve already learned and building safety at scale?
If you would like further help
👉 Download the Parents App for clear step-by-step guides and practical support
👉 Book a one-to-one session with me to help with any questions or guidance you might need
Wayne
Found this article useful?
Remember to share it with your family & friends.