Deepfakes and Image-Based Abuse: Is It Illegal and What Are Your Options?

The digital landscape has evolved at a pace that often outstrips our legal and ethical frameworks. One of the most distressing products of this rapid advancement is the rise of deepfakes—AI-generated media that can replace a person’s likeness with startling accuracy. While the technology has creative applications in cinema and art, it has also been weaponised for image-based abuse. This typically involves the non-consensual creation and distribution of sexually explicit material, a practice often referred to as “nudification.”

The psychological impact of seeing one’s likeness manipulated and shared without permission is profound. It is a violation of bodily autonomy and digital privacy that can lead to severe reputational damage, emotional trauma, and professional fallout. Fortunately, in 2026, the law has finally begun to catch up with the technology, providing victims with clearer pathways to justice and protection.

The Legal Status of Deepfakes in 2026

The short answer is yes, deepfake-based abuse is illegal in many major jurisdictions, including the UK, the US, and Australia. The legal focus has shifted from merely the “sharing” of images to the “creation” and “solicitation” of them, closing loopholes that previously allowed perpetrators to hide behind the “synthetic” nature of the media.

The UK Legal Landscape

In the United Kingdom, the Data (Use and Access) Act 2025, which came into full force in early 2026, made it a specific criminal offence to create or request the creation of a sexually explicit deepfake of an adult without their consent. This is a standalone offence; it does not matter if the image was never shared or if the victim never saw it. The act of creation itself is the crime.

Under the Online Safety Act, these crimes are now classified as “priority offences.” This forces social media platforms and search engines to proactively remove such content or face massive fines from Ofcom of up to £18 million or 10% of global revenue, whichever is greater. For cases involving children, the laws are even stricter, with deepfakes of minors falling under existing child sexual abuse material (CSAM) legislation, regardless of whether the child is “real” or AI-generated.

The US Federal Position

In the United States, the federal TAKE IT DOWN Act of 2025 has revolutionised the prosecution of these crimes. It is now a federal felony to publish or threaten to publish non-consensual deepfake pornography. Prosecutors no longer need to prove physical contact; they only need to show the intent to harass, harm, or extort the victim. Civil remedies have also been bolstered, allowing victims to sue perpetrators for damages even if a criminal case is not pursued.

Australia and Beyond

Australia’s Criminal Code Amendment (Deepfake Sexual Material) Act 2024 has similarly criminalised the transmission of deepfake sexual material. The law treats the creation of such material as an “aggravated offence,” leading to harsher custodial sentences. These global shifts signal a unified front against digital violence, making it clear that “synthetic” does not mean “victimless.”

Identifying Image-Based Abuse

Image-based abuse is not limited to “nudifying” a standard photo. It encompasses a broad range of malicious activities facilitated by generative AI. Understanding these categories can help you identify when a crime has been committed against you or someone you know.

  • Non-Consensual Deepfake Pornography: Creating sexualised videos or images of a person using AI.
  • Sextortion: Using a deepfake (or the threat of creating one) to blackmail a victim for money, more images, or sexual favours.
  • Doxing with Deepfakes: Attaching a person’s real identity and contact information to synthetic explicit material.
  • Harassment and Cyber-bullying: Using manipulated images to humiliate an individual within their community, school, or workplace.

Immediate Steps to Take if You Are a Victim

Discovery of a deepfake can feel overwhelming, but taking swift, methodical action is essential for both your mental well-being and the strength of a potential legal case.

1. Document the Evidence

Before the content is deleted or the account is blocked, you must preserve the evidence.

  • Take Screenshots: Ensure the URL, the username of the uploader, the date, and the platform are visible.
  • Save URLs: Copy the direct link to the post or profile.
  • Do Not Share: While you need to save the evidence, do not forward it to friends or family, as this can inadvertently contribute to its spread. Save it to a secure, encrypted folder or a physical USB drive.
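One practical way to strengthen the evidential value of saved material is to record a cryptographic checksum at the time of capture: if the file's hash is noted alongside a timestamp, you can later demonstrate that the copy was not altered. A minimal sketch using only Python's standard library (the file name in the comment is a placeholder, not a real case file):

```python
import hashlib


def fingerprint_evidence(path: str) -> str:
    """Return the SHA-256 hex digest of a saved evidence file.

    Reading in chunks keeps memory use constant even for large
    screen recordings or archived pages.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Record the digest next to a timestamp in your notes, e.g.:
# screenshot_2026-01-14.png  sha256=<digest>  captured=2026-01-14T10:32Z
```

The same digest can be recomputed at any point; if it still matches your contemporaneous note, the file is byte-for-byte identical to what you originally captured.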

2. Report to the Platform

Most major social media platforms now have dedicated reporting categories for “Non-consensual Intimate Imagery” or “Manipulated Media.”

  • Meta (Facebook/Instagram): Use the “Adult Nudity/Sexual Exploitation” reporting tool.
  • X (formerly Twitter): Select “Non-Consensual Nudity.”
  • Google: You can request the removal of non-consensual explicit images from search results through their legal help page.

3. Seek Professional Removal Assistance

Sometimes, reporting to the platform is not enough, especially if the content is hosted on “tube” sites or niche forums. Organisations like StopNCII.org use “hashing” technology to help stop your images from being uploaded in the first place. By creating a digital fingerprint of the image, platforms can automatically detect and block it without a human ever having to view the original file.
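The “digital fingerprint” described above is a perceptual hash: unlike a cryptographic hash, it is designed so that visually similar images (for example, a re-encoded or slightly resized copy) produce similar fingerprints. Production systems use far more robust algorithms than this, but the core idea can be sketched with a toy “average hash” in pure Python:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean. `pixels` is a flat list of
    grayscale values (e.g. a small thumbnail); real systems use far
    more robust transforms, but the matching idea is the same."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]


def hamming_distance(h1, h2):
    """Count differing bits; a small distance flags a near-duplicate."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))


# A re-encoded copy shifts pixel values slightly but keeps the same
# bright/dark pattern, so its hash stays close to the original's:
original = [10, 200, 30, 180, 20, 190, 40, 170]
reencoded = [12, 198, 28, 182, 22, 188, 38, 172]
print(hamming_distance(average_hash(original), average_hash(reencoded)))  # 0
```

Because only the fingerprint is shared with platforms, the intimate image itself never leaves the victim's device, which is how such systems can block uploads without anyone viewing the original file.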

Legal and Regulatory Options

If you know the identity of the perpetrator, or even if you do not, there are legal avenues available to hold them accountable and protect your future.

Reporting to the Police

You should report the incident to your local police force. In the UK, you can do this online or by calling 101, the police non-emergency number. Be prepared to provide the documentation you collected in the first step. Even if the perpetrator is anonymous, police digital forensics teams can often trace IP addresses or payment methods linked to the creation of the material.

Civil Lawsuits

In many jurisdictions, you can pursue a civil claim for “intentional infliction of emotional distress,” “invasion of privacy,” or “defamation.” A civil case has a lower burden of proof than a criminal case and can result in financial compensation for the harm caused. In the US, the TAKE IT DOWN Act specifically provides for these civil remedies.

Takedown Orders

Under modern legislation, victims can apply for 48-hour takedown orders. These are court-mandated notices that require a website or individual to remove the content within two days. Failure to comply with these orders can result in contempt of court or significant daily fines for the platform.

Safeguarding Your Digital Identity

While the blame lies entirely with the perpetrator, there are proactive steps you can take to make it harder for malicious actors to harvest your data for deepfake creation.

Tighten Privacy Settings

Ensure your social media profiles are set to private. Many deepfake tools require a large dataset of your face from different angles to create a convincing fake. By limiting who can see your photos, you significantly reduce the “training material” available to hackers.

Use Watermarking

If you are a public figure or a creator, consider using subtle digital watermarks or “poisoning” tools like Nightshade or Glaze. These tools make minute changes to the pixels of your photos that are invisible to the human eye but cause AI models to “break” or produce distorted results when they try to use your image for training.

Monitor Your Likeness

Set up Google Alerts for your name and use reverse image search tools like PimEyes periodically to see where your likeness is appearing online. Early detection is the best way to prevent an image from going viral.


Mental Health and Support Resources

The trauma of image-based abuse is real and valid. You do not have to navigate this alone. Many specialist charities offer free, confidential advice and emotional support tailored to digital survivors.

  • Revenge Porn Helpline (UK): Removal assistance and legal advice
  • eSafety Commissioner (Australia): Regulatory support and content removal
  • Cyber Civil Rights Initiative (US): Advocacy and victim resources
  • 1800RESPECT (Australia): Trauma-informed counselling

It is important to remember that this is not your fault. The technology was misused by a perpetrator to commit a crime. Focusing on your recovery while the legal system handles the consequences is a vital part of the healing process.


The Path Forward

As AI continues to integrate into our daily lives, the fight against deepfake abuse will require constant vigilance from lawmakers, tech companies, and users. We are moving toward a world where “digital consent” is as protected as physical consent. By staying informed of your rights and the tools available to you, you can regain control of your digital narrative and ensure that technology serves as a tool for connection, not a weapon for abuse.
