
AI and Intimate Image Abuse: How Irish law is equipped to deal with the rise of sexual abuse in the form of deepfakes


- Asha Rait, Junior Editor


In the current digital age, with a smartphone camera in everyone’s pocket, the threat of the distribution of intimate images has become an increasing concern for many individuals. The availability of social media has made the distribution of such images easier, and Irish lawmakers have had to adapt to these developments. Legislation such as the Harassment, Harmful Communications and Related Offences Act 2020 has provided a legal framework to criminalise and prosecute the non-consensual sharing of intimate images. However, as technologies such as generative artificial intelligence (AI) develop, the scale and sophistication of image-based sexual abuse has intensified. Growing concerns surrounding the use of AI and deepfakes have sparked new conversations about whether the law is equipped to protect individuals from the harms of these new technologies and those abusing them.

 

What are deepfakes, and what issues do they pose in relation to sexual abuse?

 

Deepfakes are images, videos, or audio that have been edited or generated using artificial intelligence, often swapping faces, altering speech, and depicting individuals saying or doing things they never did. The term itself originates from a 2017 Reddit user named “deepfakes,” who, among other users, shared pornographic videos in which celebrities’ faces were superimposed on the bodies of performers. As generative AI has developed and become widely accessible, the deepfakes being produced have grown increasingly sophisticated. While it was once easy to differentiate between authentic content and deepfakes, advances in this technology have made doing so increasingly difficult. In recent weeks, Grok AI, a generative AI tool developed by Elon Musk’s xAI, has featured regularly across the news for its use in the production of deepfake images, specifically deepfake pornography. While this has dominated headlines over the last number of weeks, the issue of deepfake pornography online is not a new one.


Back in 2023, Security Hero published its State of Deepfakes report, which analysed almost 100,000 deepfake videos across 85 different platforms. The report found that deepfake pornography makes up 98% of all deepfake videos online, and that 99% of individuals targeted in deepfake pornography are women. Such sites have been readily available online, with many of these images hidden behind a paywall, allowing perpetrators to profit from their distribution. One example was a site known as MrDeepFakes, shut down in May 2025. Researchers at Stanford University and the University of California found that, as of November 2023, MrDeepFakes hosted 43,000 sexual deepfake videos depicting 3,800 individuals; these videos had been watched more than 1.5 billion times.

 

However, since 2023, the scope of this issue and the technology available to perpetrators have increased exponentially. From 19 December 2025, Grok AI’s image-generating feature skyrocketed in popularity, until it was placed behind a paywall on 9 January 2026. Ofcom, the UK media watchdog, opened an investigation into Grok AI after finding that the tool allowed users to prompt it to alter images of clothed women and children, making them appear in bikinis and sexually suggestive poses. The British-American charity the Centre for Countering Digital Hate has estimated that, in the 11-day period during which the feature was available on the platform, around 3 million sexualised images were produced, including 23,000 that appeared to depict children. Researchers have described the tool as “an industrial-scale machine for the production of sexual abuse material”. The European Union has subsequently launched an investigation into X in response to the production of sexually explicit images and potential child sexual abuse material.

 

The current law in Ireland, and calls for reform 

 

While tech companies are under investigation for being complicit in allowing the generation of these images, where does the law stand on the creation and distribution of deepfake images by individuals? In Ireland, the Harassment, Harmful Communications and Related Offences Act 2020, otherwise known as Coco’s Law, was enacted in February 2021. The legislation criminalises the distribution of intimate images, defined as “any visual representation … made by any means including photographic, film, video or digital representation...” depicting a person in an intimate way, thereby bringing deepfakes within its scope. In England and Wales, the law on the distribution of sexually explicit material online was significantly strengthened by the Online Safety Act 2023, which came into effect in early 2024 and also covers the sharing of deepfake images. While the inclusion of deepfakes in these definitions was an important step in adapting to the way abuse is changing online, the Grok AI controversy has brought key gaps in the law to the forefront of the conversation: chiefly, that while the distribution of abusive deepfake images is illegal, their creation is not.

 

On 12 January 2026, the UK Technology Secretary, Liz Kendall, announced that the creation of non-consensual intimate images is now a specific criminal offence in England and Wales under the Data (Use and Access) Act. Effective from February, this legislation is a major step in adapting the law to protect individuals from image-based abuse. In Ireland, calls have been made for the Government to step in and enact similar laws. Fine Gael TD Grace Boland has confirmed her intention to bring forward proposals to “ensure that those who without consent, create or request fabricated intimate images are held fully accountable.” 

 

Holding platforms accountable is critical, as much deepfake abuse is enabled by private companies, such as Grok AI’s developer xAI, that benefit from user engagement without implementing sufficient safeguards. However, while the Irish Government and the European Union take the time to determine how to regulate these technologies, they continue to be used to facilitate image-based sexual abuse. The implementation of laws that criminalise the creation of these deepfake images by individuals, as in England and Wales, is a crucial step in ensuring the protection of individuals and holding abusers accountable.

 
 
 
