Deceptive Synthetic Media refers to images, videos, or audio that have been altered or entirely created using artificial intelligence or other digital software to falsely depict real individuals, often with the goal of deceiving, harassing, or exploiting targets. Such deceptive synthetic media is created without the consent or knowledge of the individuals depicted.
Deceptive synthetic media is frequently weaponized to harm an individual’s reputation and violate their autonomy. Women are disproportionately targeted through the creation of sexualized synthetic content, in which their likeness is manipulated or generated to depict them in explicit scenarios they never consented to or engaged in. When such content is distributed—particularly on public platforms—without the individual's knowledge or permission, it constitutes a form of intimate imagery abuse. This abuse not only distorts public perception of the victim but also causes lasting emotional, social, and professional harm.
Deceptive synthetic media is also increasingly used to advance political agendas by discrediting or undermining public figures. Politicians, activists, and journalists have been targeted through fabricated videos, audio clips, or images that falsely depict them engaging in unethical, illegal, or scandalous behavior (a form of online impersonation). Such media can be used to erode public trust, disrupt election campaigns, or stoke political polarization. The speed and realism with which this content can circulate—especially on social media—make it a powerful tool for misinformation and manipulation, with serious implications for democratic institutions and public discourse.
While the technology is globally accessible, cultural attitudes toward women's rights and digital privacy influence how this abuse is perceived and addressed. Some regions may have stronger legal frameworks or social support systems, while others may normalize this abuse or lack enforcement mechanisms.
Low - Creating convincing synthetic media previously required high technical skill, but generative AI tools have made creation accessible to general users with minimal technical knowledge.
The advancement of AI, especially in deep learning and generative models, has significantly increased the risks associated with deceptive synthetic media. Deepfake technology now enables the rapid and realistic generation of audio, video, and images that are often indistinguishable from authentic content. As a result, malicious actors can fabricate convincing material that depicts individuals—disproportionately women and marginalized groups—in sexual, criminal, or compromising situations they were never involved in.
One security company found that explicit deepfakes increased fourfold from 2022 to 2023 (IIA Tools Landscape Analysis, 2025). This trend not only facilitates intimate imagery abuse and reputational harm, but also amplifies misinformation and disinformation with broader political, social, and economic consequences.