Intimate image abuse (IIA), formerly (and misleadingly) termed "revenge porn," involves producing, reproducing, and/or sharing intimate images or videos without the depicted person's consent. The shift away from the older moniker emphasizes that the issue is not rooted solely in spiteful ex-partners seeking retaliation; at its core, IIA is a serious violation of privacy and consent. IIA is a persistent challenge for platforms that permit explicit content, because distinguishing consensual from non-consensual material is not always possible from content alone. Additionally, sexual content that was consensually recorded can later be shared non-consensually, further complicating an already fraught dilemma.
While the vast majority of platforms prohibit IIA (under this or a similar name) in their Terms of Service or Community Standards, a few nefarious sites are explicitly designed for this kind of content. Because consent often cannot be determined from content alone, many platforms that ostensibly prohibit IIA nevertheless inadvertently host it alongside consensual content.
Yet the omnipresence of the internet means that once something is uploaded, eradicating it entirely is nearly impossible.
Digital platforms today have limited means to address the hosting of IIA, though legal regimes, cross-platform collaboration, and better reporting mechanisms may offer some hope for moving toward robust takedowns. In the meantime, the role platforms can play most effectively is to ensure they are not actively promoting or enabling the discovery of IIA. Web searches for an individual’s name should only return intimate imagery for people who have intentionally associated their name or image with adult material online. Known IIA websites can be added to deny-lists that prevent users from sharing or distributing links to their content. Approaches like these might seem low stakes, but for a user whose privacy has been violated, they can make a world of difference in how they are perceived and able to represent themselves in the world.
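The deny-list approach is straightforward to sketch in code. The example below is a minimal illustration, not a production implementation: the hardcoded domain set and the function names (`is_denied_link`, `accept_post`) are hypothetical, and a real system would source its list from an internal or industry-shared feed and also handle malformed URLs, redirects, and link shorteners.

```python
from urllib.parse import urlparse

# Hypothetical deny-list of domains known to host IIA. In practice this
# would be sourced from an internal or industry-shared feed, not hardcoded.
IIA_DENYLIST = {
    "iia-site.example",
    "another-iia-host.example",
}

def is_denied_link(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is deny-listed."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check "a.b.c.example", then "b.c.example", and so on,
    # so subdomains of a deny-listed site are caught as well.
    return any(".".join(parts[i:]) in IIA_DENYLIST for i in range(len(parts)))

def accept_post(urls: list[str]) -> bool:
    """Reject a user post if any shared link points at a deny-listed site."""
    return not any(is_denied_link(u) for u in urls)
```

Matching on parent domains as well as the exact host means that subdomain variations of a deny-listed site (e.g. a mirror on `cdn.iia-site.example`) are blocked without requiring a separate list entry.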
The perpetrator may also commit other types of abuse while perpetrating IIA.
Cultural variation
Which kinds of images are considered “intimate” varies across cultures, and this variation is a major factor in the difficulty of mitigating this form of abuse. Actions that are innocuous in one culture may be taboo in another.
Examples include:
- An image of two people holding hands, or of a person who appears to be in a relationship
- A person who is normally veiled shown without a veil
- A person shown next to materials that are forbidden or taboo in their culture
In Pakistan's remote Kohistan region, a woman was reportedly killed by family members after the circulation of a digitally altered photograph showing her holding hands with a man (Mao et al., 2023). In Bangladesh, a deepfake of a woman politician in a bikini led to public criticism (Verma & Zakrzewski, 2024).
In conservative societies, even minor image manipulations can severely damage a person's reputation and safety, while similar manipulations might carry lesser consequences elsewhere.
Companies and policies that intend to address this form of abuse at a global scale must contend with these differences in interpretation across their user populations.
References
- India Ministry of Home Affairs, National Crime Records Bureau. (2022). Crime in India 2022 - Statistics Volume-I. https://www.ncrb.gov.in/uploads/nationalcrimerecordsbureau/custom/1701607577CrimeinIndia2022Book1.pdf
- Joyful Heart Foundation. (2024). Image-based abuse. https://www.joyfulheartfoundation.org/learn/image-based-abuse
- Kaspersky. (2024). The Naked Truth - How intimate image sharing is reshaping our world. https://media.kasperskydaily.com/wp-content/uploads/sites/86/2024/07/15164921/The-Naked-Truth-Kaspersky.pdf
- Mao, F., Ng, K., & Zubair, M. (2023, November 28). Pakistan: Woman killed after being seen with man in viral photo. BBC News. https://www.bbc.com/news/world-asia-67551554
- Papachristou, K. (2023). Revenge Porn Helpline 2023 report. Revenge Porn Helpline. https://revengepornhelpline.org.uk/assets/documents/revenge-porn-helpline-report-2023.pdf
- Powell, A., Flynn, A., & Hindes, S. (2022, December). Technology-facilitated abuse: National survey of Australian adults’ experiences. ANROWS - Australia’s National Research Organisation for Women’s Safety. https://www.anrows.org.au/publication/technology-facilitated-abuse-national-survey-of-australian-adults-experiences/
- Revenge Porn Helpline. (2024, April 18). Reports to the Revenge Porn Helpline increased by 106% in 2023. https://revengepornhelpline.org.uk/news/reports-to-the-revenge-porn-helpline-increased-by-106-in-2023/
- Ruvalcaba, Y., & Eaton, A. A. (2019). Nonconsensual pornography among U.S. Adults: A sexual scripts framework on victimization, perpetration, and health correlates for women and men. Psychology of Violence, 10(1). https://doi.org/10.1037/vio0000233
- UN Women. (2021). Violence against women in the online space: insights from a multi-country study in the Arab States. UN Women – Arab States. https://arabstates.unwomen.org/en/digital-library/publications/2021/11/violence-against-women-in-the-online-space
- UN Women. (2023). The dark side of digitalization: Technology-facilitated violence against women in Eastern Europe and Central Asia. UN Women – Europe and Central Asia. https://eca.unwomen.org/en/digital-library/publications/2023/11/the-dark-side-of-digitalization-technology-facilitated-violence-against-women-in-eastern-europe-and-central-asia
- UN Women. (2024). Frequently asked questions: Tech-facilitated gender-based violence. UN Women – Headquarters. https://www.unwomen.org/en/what-we-do/ending-violence-against-women/faqs/tech-facilitated-gender-based-violence
- Vengattil, M., & Kalra, A. (2022, July 21). Facebook’s growth woes in India: too much nudity, not enough women. Reuters. https://www.reuters.com/technology/facebooks-growth-woes-india-too-much-nudity-not-enough-women-2022-07-21/
- Verma, P., & Zakrzewski, C. (2024, April 23). AI deepfakes threaten to upend global elections. No one can stop them. Washington Post. https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/
AI Risks and Opportunities
As AI improves and becomes more ubiquitous, the risk that deceptive synthetic media will be successfully used as a form of IIA increases: such media is becoming harder to identify, and the skill required to develop convincing images is falling.