AI-generated child abuse content
I’ve been talking for a while now about the importance of awareness, education and safeguarding around #AI. The need for this has been reinforced in recent days by the IWF, which has urged the Prime Minister to act on the threat: ‘IWF “sounds alarm” on first confirmed AI-generated images of child sexual abuse’.
The concern is that child abuse content could be generated at the ‘touch of a button’.
With advancements in AI technology and AI image generators, it is highly likely this material will become more widespread, more easily available and therefore rapidly ‘normalised’. Susie Hargreaves of the IWF said ‘the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery’.
According to the IWF article:
‘In just five weeks*, the IWF investigated 29 reports of URLs containing suspected AI-generated child sexual abuse imagery. Of these, the IWF confirmed seven URLs containing AI-generated child sexual abuse imagery’.
‘The pages removed by the IWF included Category A and Category B material, with children as young as 3 to 6 years old. Both females and males were depicted’.
Their analysts have also discovered an online ‘manual’ dedicated to helping offenders refine their prompts and train AI to return more and more realistic results.
DALL-E 2, a popular image generator from ChatGPT creator OpenAI, and Midjourney both say they limit their software’s training data to restrict its ability to make certain content, and block some text inputs.
The first global summit on AI safety will be held in the UK in the autumn, with Rishi Sunak calling for international cooperation to mitigate the risks posed by the technology.
The IWF is calling on AI companies and governments worldwide to do more to prevent the abuse of AI tools, and to protect users from the spread of AI-generated child sexual abuse imagery.
So where do we go from here? Our worry is we are not moving quickly enough and learning from what has come before.
‘Previous warnings have suggested that the problem may largely be hiding in plain sight. In late 2020, figures revealed that over 90% of the 69 million sexual abuse images reported in the US were not hidden on some dark web database, but were being sent over Facebook messenger’. Digit.fyi (August 2021)
Susie Hargreaves (IWF Chief Executive) says that easy and wide accessibility to this technology could be ‘potentially devastating’ to safeguarding children online.
Chris Farrimond (Director of Threat Leadership at the NCA) says that this technology ‘will potentially make it easier for abusers to commit a range of child sexual abuse offences’.
My team, trusted colleagues around the world and I have been talking for some time about the importance of awareness, education and practical safeguarding around AI, and how to take this forward with schools and families.
We need more collaboration to get out in front of the speeding AI train and slow it down. We fear this may already be a much bigger concern than the numbers reflect.
NCA figures show that those who abuse children make up the largest group of offenders in the UK: up to 830,000 people pose some degree of risk to children, targeting them through social media and gaming platforms.
We are at a crossroads with AI. It is vital we choose the right leadership and course to effectively safeguard children and young people.
Stay safe online,