TFGBV Taxonomy
Abuse Type: Online harassment

Last Updated 6/5/25
Definition: Pervasive or severe, unwanted digital communication targeting an individual or group with the intention to intimidate, threaten, or cause psychological distress.
Sub Types:
Doxxing, Inappropriate content, Cyberstalking
Perpetrators:
Formal group, Informal group, Nation-state, Stranger, Personal connection
Perpetrator Intents:
Silence, Punitive intent, Entertainment, Aggrandizement
Targets:
Public figure; Organization, group, community; Society; Private individual
Impact Types:
Abuse normalization, Economic harm, Psychological & emotional harm, Self-censorship, Social & political harm
Synonyms:
Cyberbullying, Online abuse, Digital harassment
Skill Required: Low

Online harassment is an umbrella term that encompasses a wide range of behaviors, carried out via technology, intended to silence or intimidate a target; many of these behaviors are presented elsewhere in this taxonomy.

It often includes repeated hostile messages, direct threats of violence, and offensive or abusive comments (including expressions of sexism, racism, xenophobia, homophobia, transphobia, or ableism). It may also include coordinated pile-on attacks or sustained harassment campaigns by formal or informal groups intended to overwhelm the target.

Many other forms of TFGBV are often encompassed by online harassment, including cyberstalking, online impersonation, intimate image abuse, inappropriate content, and deceptive synthetic media.

The harassment often escalates across platforms and can include doxxing personal information to facilitate offline harm.

Some forms that do not have their own entries in this taxonomy include spamming, sending viruses, and hacking.

While anyone can be targeted by online harassment, women and gender minorities are disproportionately targeted.

Cultural variation

Harassment patterns vary significantly across regions, with intersectional discrimination adding complexity.

Sharing a picture of a woman with a man who is not her husband, or of a woman not wearing a hijab, can be considered online harassment in some contexts.

Cultural contexts shape which topics or identities become targets, with women journalists in many regions facing higher rates of harassment for covering certain subjects.

Research by UNFPA shows that Black, Asian, and minority ethnic (BAME) LGBTQIA+ people in the UK experience TFGBV at more than twice the rate of white LGBTQIA+ people (20% vs. 9%).

Skill required

Low: requires minimal technical knowledge, primarily involving standard platform features such as messaging, commenting, and account creation.

References

  • Anti-Bullying Alliance. (n.d.). What is online bullying? Anti-Bullying Alliance. https://anti-bullyingalliance.org.uk/tools-information/all-about-bullying/online-bullying/what-online-bullying
  • Durham University. (n.d.). What is online harassment? Report and Support - Durham University. https://reportandsupport.durham.ac.uk/support/what-is-online-harassment
  • Koukopoulos, N., Janickyj, M., & Tanczer, L. M. (2025). Defining and Conceptualizing Technology-Facilitated Abuse (“Tech Abuse”): Findings of a Global Delphi Study. Journal of Interpersonal Violence. https://doi.org/10.1177/08862605241310465
  • Plan International. (2020). The State of the World's Girls 2020 - Free to be online? Girls' and young women's experiences of online harassment. https://plan-international.org/uploads/2022/02/sotwgr2020-execsummary-en-3.pdf
  • Powell, A., Flynn, A., & Hindes, S. (2022, December). Technology-facilitated abuse: National survey of Australian adults’ experiences. ANROWS - Australia’s National Research Organisation for Women’s Safety. https://www.anrows.org.au/publication/technology-facilitated-abuse-national-survey-of-australian-adults-experiences/
  • UK Police. (2025). Advice and information - Stalking and harassment. Police.UK. https://www.police.uk/advice/advice-and-information/beta-stalking-and-harassment/what-is-stalking-harassment/
  • UNESCO, & IRCAI. (2024). Challenging systematic prejudices: an investigation into Gender Bias in Large Language Models. United Nations Educational, Scientific and Cultural Organization (UNESCO). https://unesdoc.unesco.org/ark:/48223/pf0000388971
  • UNFPA Technical Division, Gender and Human Rights Branch. (2021, December 1). Technology-facilitated gender-based violence: Making all spaces safe. United Nations Population Fund (UNFPA). https://www.unfpa.org/publications/technology-facilitated-gender-based-violence-making-all-spaces-safe

AI Risks and Opportunities

Risks

AI systems can amplify harassment through biased content recommendation algorithms that reward inflammatory content. UNESCO research demonstrates that large language models perpetuate gender bias, with some generating misogynistic content in 20% of instances.

The advancement of AI makes it easier than ever to create deceptive synthetic media, which can be a particularly harmful form of online harassment.

AI tools also lower the barrier to creating bots, another vector for online harassment.

Opportunities

AI also offers detection opportunities: natural language processing can identify harassment patterns, and automated systems can support content moderation.

AI could also be used to support survivors of online harassment in requesting content takedowns when the harassment involves posted images.
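As an illustration of the pattern-detection idea above, the following is a minimal, hypothetical sketch of a keyword-and-repetition heuristic. Real moderation systems use trained language models rather than keyword lists; the term list, signal names, and function below are invented purely for illustration.

```python
from collections import Counter

# Illustrative only: real systems use trained classifiers, not keyword lists.
FLAGGED_TERMS = {"idiot", "worthless", "shut up"}  # hypothetical term list

def harassment_signals(messages):
    """Score a list of (sender, text) messages for two simple signals:
    flagged-term hits, and identical messages repeated by one sender."""
    term_hits = 0
    texts_by_sender = Counter()
    for sender, text in messages:
        lowered = text.lower()
        term_hits += sum(term in lowered for term in FLAGGED_TERMS)
        texts_by_sender[(sender, lowered)] += 1
    # Repetition: identical messages sent more than once by the same sender.
    repeats = sum(count - 1 for count in texts_by_sender.values() if count > 1)
    return {"term_hits": term_hits, "repeats": repeats}

signals = harassment_signals([
    ("a", "You are worthless"),
    ("a", "You are worthless"),
    ("b", "Nice photo!"),
])
```

A production pipeline would feed such signals, alongside model scores, into human review rather than acting on them automatically.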

Prevalence

  • 58 per cent of women and girls aged 15-25 had experienced online harassment (Plan International, 2020).
  • The majority of girls are harassed for the first time between the ages of 14 and 16 (Plan International, 2020).
  • 50 per cent of girls said they face more online harassment than street harassment (Plan International, 2020).
  • 37 per cent of girls who identified as being from an ethnic minority and had faced harassment said they were harassed because of it (Plan International, 2020).
  • 42 per cent of girls who identified as LGBTIQ+ and had faced harassment said they were harassed because of it (Plan International, 2020).

Mitigation Strategies

  • Real-time prompts for reconsideration: nudge users to reconsider harmful behavior before posting.
  • Safety onboarding & awareness training: new-user onboarding and ongoing awareness raising.
  • Update ranking model: move away from engagement-based content ranking.
  • Quarantine borderline content: implement quarantine systems for gray-area content.
  • Default to highest privacy settings: default privacy settings to minimize user vulnerability.
  • Rate limits on low-trust accounts: rate limits on interactions from new or unverified accounts.
  • Transparent feedback and reporting: enhanced feedback mechanisms for reporting and transparency.
  • User-controlled content filters: filters to empower users in managing content exposure.
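One of the mitigations above, rate limits on interactions from new or unverified accounts, can be sketched as a token-bucket check keyed by account trust tier. The tiers, budgets, and refill periods below are invented for illustration; real platforms tune these values empirically.

```python
import time

# Hypothetical limits: stricter interaction budgets for newer, unverified accounts.
LIMITS = {
    "new_unverified": (5, 60.0),   # 5 interactions per 60 seconds
    "established": (60, 60.0),     # 60 interactions per 60 seconds
}

class RateLimiter:
    """Token-bucket limiter per account trust tier (illustrative sketch)."""

    def __init__(self, tier, clock=time.monotonic):
        self.capacity, self.period = LIMITS[tier]
        self.tokens = float(self.capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.capacity / self.period,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

For example, a "new_unverified" account would be allowed five interactions in quick succession and then throttled until tokens refill, while "established" accounts get a larger budget.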