UK SCHOOLS URGED TO REMOVE PUPIL PHOTOS AMID AI BLACKMAIL RISK
AI DESK · 2 MIN READ
FRI, MAY 8, 2026 · AI-SUMMARIZED FROM 1 SOURCE BELOW
Child safety experts and the UK's National Crime Agency warn that criminals are using AI to manipulate photos of children found on school websites and social media to create sexually explicit images for blackmail purposes.
UK schools should remove pictures of pupils' faces from their websites and social media accounts, according to child safety experts and the National Crime Agency (NCA).
Criminals are exploiting publicly available photos of children to generate sexually explicit deepfakes, then using these images to extort money from families.
The threat represents a significant shift in how AI technology is being weaponized against vulnerable populations. Rather than creating new imagery from scratch, blackmailers are targeting existing school photos—many published with parental consent—and manipulating them using artificial intelligence tools.
The Process
Schools typically post photos of pupils on official websites and social media for promotional purposes, sports events, and academic achievements. These images are often readily accessible to the public. Criminals download these photos and use AI technology to create fake sexually explicit content featuring the children.
Once the deepfakes are generated, perpetrators contact families directly or through social media, threatening to distribute the images unless money is paid.
Risk Assessment
The NCA and child safety organizations have flagged this as a growing threat. The ease of access to AI manipulation tools, combined with the availability of high-quality photos on school platforms, creates a low barrier to entry for this kind of criminal activity.
Experts stress that families often have no way of knowing deepfakes exist until they are contacted by blackmailers, leaving them in distressing situations with limited options.
Recommended Actions
Schools are being advised to:
- Remove or pixelate pupils' faces from website photos
- Restrict social media access to current students and staff only
- Review privacy policies for photo usage
- Educate parents and pupils about the risks
The guidance reflects broader concerns about the intersection of AI technology, readily available personal data, and criminal exploitation. Schools are now weighing the benefits of online visibility against emerging security risks.
MORE FROM THE SECURITY DESK
US officials believe a Thai company central to the country's national AI initiative helped smuggle billions of dollars in advanced Nvidia chips to China. Alibaba is identified as one of multiple end customers for the illegally diverted Super Micro Computer servers.
3H AGO — AI Desk
Toronto police arrested a group operating an SMS blaster that sent malicious messages to thousands of residents. The service marks the first known instance of this attack method used in Canada.
6H AGO — Industry Desk
An unidentified hacking group is systematically breaking into systems previously compromised by cybercrime outfit TeamPCP, evicting the rival group and removing its malware.
9H AGO — Security Desk
Columbia University and Stanford University experienced significant online disruptions Thursday following a cybersecurity incident affecting Canvas, the learning management platform used by hundreds of colleges nationwide.
11H AGO — Security Desk