Online impersonation involves creating accounts, profiles, or digital content that falsely represent someone without their consent. This includes fake social media profiles, unauthorized use of photos, and pretending to be someone else in digital communications.
Impersonation often overlaps with controlling, stalking, threatening, and harassing behaviors, and can involve unauthorized access to digital services and defamation (Koukopoulos et al., 2025).
Common manifestations include:
- Creating social media accounts using someone's identity
- Sending messages while pretending to be the target
- Using AI to create deepfake (deceptive synthetic media) videos or images
- Posting harmful content under the target's identity
Perpetrators typically aim to discredit victims, gain status, or extort money or sexual content (Humane Intelligence, 2025).
The eSafety Commissioner (2023) notes that those with high-profile jobs face increased risk, especially if they are younger or have disabilities.
Skill required
Medium - someone with average computer literacy could perpetrate this kind of abuse.
Advances in generative AI have significantly lowered the technical barrier. While sophisticated impersonation previously required specialized knowledge, user-friendly AI tools now enable the creation of realistic fake content with minimal technical skill (Humane Intelligence, 2025).
References
- Australian eSafety Commissioner. (2025). Gendered violence. https://www.esafety.gov.au/key-topics/gendered-violence
- Basuroy, T. (2024, June 24). Number of online impersonation offenses reported across India in 2022, by leading state. Statista. https://www.statista.com/statistics/1097572/india-number-of-online-impersonation-offences-by-leading-state/
- Federal Trade Commission. (2024, February 9). As nationwide fraud losses top $10 billion in 2023, FTC steps up efforts to protect the public. https://www.ftc.gov/news-events/news/press-releases/2024/02/nationwide-fraud-losses-top-10-billion-2023-ftc-steps-efforts-protect-public
- Giannelis, M. (2022, January 3). Fake Facebook accounts is an ongoing problem. Tech Business News. https://www.techbusinessnews.com.au/news/fake-facebook-accounts-is-an-ongoing-problem/
- Humane Intelligence. (2025). Digital violence, real world harm: Evaluating survivor-centric tools for intimate image abuse in the age of gen AI.
- Koukopoulos, N., Janickyj, M., & Tanczer, L. M. (2025). Defining and conceptualizing technology-facilitated abuse ("tech abuse"): Findings of a global Delphi study. Journal of Interpersonal Violence. https://doi.org/10.1177/08862605241310465
- Security Hero. (2023). 2023 state of deepfakes: Realities, threats, and impact. https://www.securityhero.io/state-of-deepfakes/#overview-of-current-state
- World Economic Forum. (2025, March 25). The intervention journey: A roadmap to effective digital safety measures. https://www.weforum.org/publications/the-intervention-journey-a-roadmap-to-effective-digital-safety-measures/
AI Risks and Opportunities
Risks
Generative AI has dramatically lowered the barriers to creating convincing impersonations. One security company reports a roughly fourfold increase in explicit deepfake content from 2022 to 2023 (Security Hero, 2023). These tools demand less technical skill while producing more convincing results, enabling perpetrators to create realistic impersonations that falsely portray individuals in compromising situations.
Opportunities
AI also enables protective measures, including automatic evidence-capture systems that preserve metadata and AI-powered detection tools such as TikTok's content credentials system, which labels AI-generated material (World Economic Forum, 2025).
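The evidence-capture idea can be illustrated with a minimal sketch (a hypothetical example using only the Python standard library, not the implementation of any specific tool): hashing a captured file and recording a UTC timestamp produces a tamper-evident record that a survivor or investigator could later use to show the evidence has not been altered.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def capture_evidence(path: str) -> dict:
    """Build a tamper-evident record for a captured file.

    The SHA-256 digest makes any later alteration of the file
    detectable; the UTC timestamp documents when the capture
    occurred. (Illustrative sketch only.)
    """
    data = Path(path).read_bytes()
    return {
        "file": path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

# Demo with a placeholder file standing in for captured evidence
Path("evidence.bin").write_bytes(b"captured content")
record = capture_evidence("evidence.bin")

# Store the record as a sidecar file next to the evidence
Path("evidence.bin.json").write_text(json.dumps(record, indent=2))
```

Real evidence-capture tools would also preserve platform metadata (URLs, account names, message headers) alongside the hash; the sidecar-record pattern shown here is just the integrity-preserving core.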