This site is an interactive taxonomy — a guide to naming, understanding, and building toward solutions for technology-facilitated gender-based violence. Its aim is to serve policymakers, product managers, and platform executives by mapping abuse patterns, their impacts, and recommended interventions.
The taxonomy is currently a prototype; we are actively collecting feedback from industry, civil society, academia, and the public ahead of its formal launch.
Perpetrators with specific intents commit acts of abuse against targets, leading to impacts on their lives.
Responsible organizations can implement mitigation strategies to reduce the likelihood of the abuse and the severity of the impact.
With advancements in artificial intelligence (AI), many of these risks are the same in nature as those associated with earlier technological advancements:
The key difference is that AI has the potential to amplify each of these risks at a scale beyond any past technological advancement.
AI does have the potential to provide Trust & Safety (T&S) professionals with parallel opportunities:
But because much of the work of T&S professionals is inherently reactive, the pace of these changes poses a serious additional threat.
With this taxonomy, our aim is to support T&S professionals throughout the AI ecosystem (including AI generation organizations, model hosting platforms, social media platforms, third-party tools/NGOs, and regulatory entities) in coalescing around a shared language and understanding of the scope of what we are facing.
The more effectively these professionals can communicate with one another, the more they can work together proactively to mitigate these risks as — or before — they take hold in the real world.
______
Do you have comments on anything in this taxonomy? We’d love to hear your feedback!