The information on this page is adapted with permission from Prevention by Design by lead authors Lena Slachmuijlder and Sofia Bonilla.
Replace engagement-driven content ranking models with systems that prioritize trustworthiness and content quality. Shifting from engagement-based to trust-based ranking reduces the spread of harmful content. By deprioritizing sensationalist and borderline-violating material, platforms can reduce the prevalence of TFGBV and align content promotion with safety and inclusivity, building trust in the platform over the medium to long term.
Engagement-based ranking systems uprank and recommend content based on user interactions such as clicks, likes, and shares, using those interactions to predict engagement with similar content. These systems consistently favor highly engaging material over quality or trustworthiness, leading to the proliferation of divisive, misleading, or sensational content. Meta CEO Mark Zuckerberg, describing this ‘natural engagement pattern’, wrote: “One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content."
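As a rough illustration of the difference (not any platform's actual formula), a ranking score can be re-weighted so that source authority and content quality dominate predicted engagement, with borderline-violating material explicitly demoted. The signal names and weights below are hypothetical.

```python
# Hypothetical sketch: re-weighting a ranking score so that quality and
# trustworthiness signals outweigh predicted engagement. Signal names and
# weights are invented for illustration; they are not any platform's real model.

def engagement_score(item: dict) -> float:
    # Classic engagement-based ranking: predicted clicks/likes/shares dominate.
    return 0.5 * item["p_click"] + 0.3 * item["p_like"] + 0.2 * item["p_share"]

def trust_based_score(item: dict) -> float:
    # Trust-based ranking: source authority and content quality dominate,
    # and likely-violating ("borderline") content is explicitly down-weighted.
    base = (
        0.45 * item["source_authority"]   # e.g. trusted/authoritative source signal
        + 0.35 * item["quality_score"]    # e.g. informativeness, originality
        + 0.20 * engagement_score(item)   # engagement still counts, but less
    )
    return base * (1.0 - item["p_borderline_violation"])  # demote borderline content

items = [
    {"p_click": 0.9, "p_like": 0.8, "p_share": 0.7,
     "source_authority": 0.2, "quality_score": 0.3, "p_borderline_violation": 0.6},
    {"p_click": 0.4, "p_like": 0.3, "p_share": 0.2,
     "source_authority": 0.9, "quality_score": 0.8, "p_borderline_violation": 0.0},
]

# Under engagement ranking the sensational item wins; under trust-based
# ranking the authoritative item is promoted instead.
print(sorted(items, key=engagement_score, reverse=True)[0]["source_authority"])   # 0.2
print(sorted(items, key=trust_based_score, reverse=True)[0]["source_authority"])  # 0.9
```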
Examples
Pinterest: Pinterest does not rely solely on engagement signals for ranking and recommendations.
LinkedIn: Prioritizes content relevance and professional value, using quality metrics to rank posts.
YouTube & Google Search: YouTube promotes watch time over clicks, incentivizing creators to focus on informative, engaging content. In 2024, YouTube reported that its systems are “trained to elevate authoritative sources higher in search results, particularly in sensitive contexts” – mitigating risks while optimizing for high-quality information in its ranking framework. Google Search predicts quality using a wide variety of signals, including long-established information retrieval signals (the most famous being PageRank, Google’s founding algorithm). As a result, users get results from trusted medical organizations and other authoritative sources when using Google Search, especially around sensitive topics. Replacing engagement-based ranking with trust-based models offers a clear pathway to reducing the amplification of TFGBV while maintaining platform integrity.
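PageRank itself is a published algorithm; the sketch below shows its basic power-iteration form on a toy link graph. The graph, damping factor, and page names are illustrative only – Google's production ranking combines many additional signals.

```python
# Minimal PageRank power iteration on a toy link graph (illustrative only).

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: many pages link to "health_org", so it accumulates the most rank,
# mirroring how heavily-referenced authoritative sources rise in results.
toy_links = {
    "health_org": ["blog_a"],
    "blog_a": ["health_org"],
    "blog_b": ["health_org", "blog_a"],
    "forum": ["health_org"],
}
print(pagerank(toy_links))
```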
TikTok reduces the discoverability of search results in areas that violate its Community Guidelines. For example, users searching for content that TikTok deems violative of these guidelines in relation to hate - such as content relating to specific individuals who promote hateful ideologies - will not be shown directly related results, and related terms will not appear in suggested search lists or predictive text. This intervention supports TikTok’s scaled removal of content by limiting the discoverability of hateful content.
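The sketch below illustrates the general pattern of suppressing suggestions and predictive text for blocked terms. The blocklist and matching rules are hypothetical placeholders, not TikTok's implementation.

```python
# Hypothetical sketch of suppressing search suggestions for violative terms.
# The blocklist and matching logic are invented for illustration.

BLOCKED_TERMS = {"example hateful ideology", "example hate figure"}  # placeholder terms

def filter_suggestions(query: str, candidates: list[str]) -> list[str]:
    """Drop suggested/predictive search terms that relate to blocked topics."""
    query = query.lower().strip()
    if any(term in query for term in BLOCKED_TERMS):
        return []  # violative query: show no directly related suggestions at all
    return [
        c for c in candidates
        if not any(term in c.lower() for term in BLOCKED_TERMS)
    ]

print(filter_suggestions("example hate figure quotes", ["example hate figure speech"]))
# []
print(filter_suggestions("cooking", ["cooking pasta", "example hateful ideology recipes"]))
# ['cooking pasta']
```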
TikTok publicly reported that in the period July-September 2023 it removed 92% of Hate Speech and Hateful Behaviour Content proactively, before anyone reported it, and 87% of this content was removed within 24 hours. For Violent and Hateful Organisations and Individuals, TikTok says it removed 98% of content proactively, before anyone reported it, and 83% was removed within 24 hours (eSafety Commissioner, 2024).
When building a new ranking model and using prevalence rates, it is important to remember that prevalence does not tell the full story.
Who is seeing the 2% of Violent and Hateful content (for example) that was not proactively removed? What is the total amount of content (10,000 items or 10 million)? Was that 2% distributed evenly around the globe, or were those exposures concentrated in a specific country or minority group?
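One way to answer these questions is to break exposures down by geography or community rather than reporting a single global rate. The sketch below uses an invented exposure log to show how a low global prevalence can still hide a heavy concentration of harm in one place.

```python
# Illustrative sketch: prevalence alone hides who was exposed. Given a
# hypothetical log of views of content that escaped proactive removal,
# break exposures down by country to see whether they are concentrated.

from collections import Counter

# Hypothetical exposure log: (country, views_of_non_removed_violative_content)
exposures = [
    ("US", 1_200), ("BR", 900), ("KE", 18_000), ("KE", 22_000), ("IN", 1_500),
]

total_views = sum(views for _, views in exposures)
by_country = Counter()
for country, views in exposures:
    by_country[country] += views

for country, views in by_country.most_common():
    share = views / total_views
    print(f"{country}: {views:>6} views ({share:.0%} of all exposures)")

# In this invented example roughly 92% of exposures land in one country, so a
# low global prevalence rate can still mean intense harm for one community.
```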