Professor Lisa Sugiura, from the School of Criminology and Criminal Justice, writes for The Conversation
It’s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse.
The use of the AI tool Grok to remove women’s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. But it’s a problem that predates AI – with a range of everyday technologies used by abusers to control, harass or stalk their victims.
This abuse has grown as tech has become ever more embedded in people’s lives, and as AI advances rapidly. But governments have so far failed to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.
Our research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.
Case 1: Smart glasses
The growing popularity of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos were shared online, often attracting degrading and sexually explicit comments.
Meta says its smart glasses have a light to show when they are recording, and anti-tamper tech to make sure the light cannot be covered. But there appear to be ways around these safeguards.
In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner’s Office made enquiries after subcontractors were allegedly able to access intimate footage from customers’ glasses. This is in addition to a lawsuit, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the issue very seriously and that faces are usually blurred out. It also discloses in its UK privacy policy the potential for content to be reviewed either by a human or by automation.
Case 2: Bluetooth trackers
Apple’s AirTags, and other devices built for tracking personal items, have been used to stalk and harass people. Apple released software updates so that potential victims would be alerted if an unknown device was travelling with them. But for many, this feature should have existed from the outset.
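To illustrate the general idea behind such alerts, here is a minimal sketch in Python of a hypothetical detection heuristic: flag an unfamiliar tracker that keeps turning up as a person moves between locations. This is purely illustrative, not Apple’s actual algorithm; the tracker IDs, threshold and function names are all assumptions.

```python
# Illustrative heuristic only: NOT Apple's real detection logic, just a sketch
# of the idea behind "unknown tracker travelling with you" alerts.
from dataclasses import dataclass, field

ALERT_AFTER_SIGHTINGS = 4  # hypothetical threshold: seen at 4+ distinct locations


@dataclass
class TrackerLog:
    # Maps each unfamiliar tracker ID to the set of locations where it was seen.
    sightings: dict[str, set[str]] = field(default_factory=dict)

    def observe(self, tracker_id: str, location: str, owned_ids: set[str]) -> bool:
        """Record a nearby tracker; return True if the user should be alerted."""
        if tracker_id in owned_ids:
            return False  # the user's own tracker, nothing to flag
        self.sightings.setdefault(tracker_id, set()).add(location)
        # Alert once an unknown tracker keeps appearing as the user moves around.
        return len(self.sightings[tracker_id]) >= ALERT_AFTER_SIGHTINGS


# Example: an unfamiliar tag that follows the user across several places triggers an alert.
log = TrackerLog()
alert = False
for place in ["home", "bus stop", "office", "cafe"]:
    alert = log.observe("tag-123", place, owned_ids={"my-airtag"})
print(alert)  # True after the fourth sighting
```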
The law in England and Wales is clear that attaching tracking devices to someone without their knowledge is a criminal offence. But despite this, the ease of covertly monitoring people using these devices means people continue to be at risk.
Case 3: AI deepfake and ‘nudification’ apps
Apps can now ‘nudify’ images of real people, while AI is increasingly used to make deepfake pornography. In January, several instances of xAI’s Grok assistant being used to create sexualised photos of women and minors came to light. Creating the images took nothing more than a few simple text prompts.
In response, xAI decided to limit this feature. But the safeguards appear to apply only in limited circumstances.
In February, the UK government announced legal changes similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is likely to be implemented from this summer.
Using automated technology known as hash matching, victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and the use of AI chatbots to create deepfake pornography are also set to be banned in the UK.
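As an illustration of how hash matching of this kind can work in principle, here is a minimal Python sketch. It is not the implementation used by any real scheme: production systems rely on perceptual hashes (which survive resizing and re-encoding) rather than the exact cryptographic hash used here for simplicity, and all function names below are assumptions.

```python
# Minimal sketch of hash-based image matching (illustration only).
# Real schemes use perceptual hashes that tolerate resizing and re-encoding;
# a plain SHA-256 digest, used here for simplicity, only matches identical files.
import hashlib

# Shared "block list" of hashes, distributed to participating platforms.
reported_hashes: set[str] = set()


def hash_image(image_bytes: bytes) -> str:
    """Return a fingerprint of the image content."""
    return hashlib.sha256(image_bytes).hexdigest()


def report_image(image_bytes: bytes) -> None:
    """Victim reports an image once; only its hash is stored and shared."""
    reported_hashes.add(hash_image(image_bytes))


def should_block_upload(image_bytes: bytes) -> bool:
    """Each platform checks new uploads against the shared hash list."""
    return hash_image(image_bytes) in reported_hashes


# Example: a reported image is blocked whenever anyone attempts to reupload it.
original = b"...image bytes..."
report_image(original)
print(should_block_upload(original))           # True: reupload would be blocked
print(should_block_upload(b"a different image"))  # False
```

A key design choice in real schemes of this type is that only the fingerprint, not the image itself, is shared with platforms.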
But there is more to be done. Risk mitigation must be embedded at the design stage, to prevent these images from being created in the first place. The rise of romantic and sexual chatbots makes this all the more urgent.
And beyond deepfakes and nudification, AI can also enable other forms of harassment and abuse. This includes directly targeting someone with abusive content, or creating fake images or profiles designed to deceive and exploit victims.
Challenges ahead
These issues must be prevented through safeguards built into these technologies. This is what prioritising user safety should look like, after all. But too often, these guardrails are missing. Safety tools are usually only added after harm has already occurred, not built into platforms from the start.
Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.
Even where there is regulation, such as the UK’s Online Safety Act, penalties for platforms that allow abuse are often limited. The regulator Ofcom has issued only voluntary guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this guidance to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.
As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.
, Reader in Cyber Security, and Lisa Sugiura, Professor of Cybercrime and Gender, University of Portsmouth
This article is republished from The Conversation under a Creative Commons license. Read the original article.