This abuse manifests when perpetrators intentionally share graphic violence, pornography, self-harm content, or other disturbing material with targets to cause psychological distress, desensitize them to abuse, or create trauma responses. Common tactics include sending unsolicited violent imagery, sharing content depicting sexual violence to normalize abuse, or exposing vulnerable individuals to content that triggers existing trauma. The World Economic Forum (2023) classifies this as a harm of content consumption, in which individuals are "negatively affected as a result of viewing illegal, age-inappropriate, potentially dangerous or misleading content."
Harm occurs when targets encounter this content without adequate warning or preparation, or when it is used to isolate, manipulate, or groom vulnerable individuals. Perpetrators may share such content directly with targets or engineer environments in which targets are likely to encounter it.
Low - Creating or sharing inappropriate content requires minimal technical skill; perpetrators can easily find, save, and redistribute existing harmful material from a wide range of online sources.
Regional legal frameworks influence how inappropriate content is defined and regulated, leading to inconsistencies in protections across jurisdictions.
Generative AI systems allow users with minimal technical skill to produce realistic harmful content, lowering the barrier to creating potentially traumatizing material. AI-powered recommendation systems may also inadvertently promote inappropriate content, particularly when algorithms prioritize engagement over safety (World Economic Forum, 2023).
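To make the engagement-versus-safety tension concrete, the minimal sketch below re-ranks recommendation candidates by combining a predicted engagement score with a penalty for likely harm. The item names, scores, weight, and threshold are illustrative assumptions, not any platform's actual ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    engagement_score: float  # predicted engagement, e.g. click probability
    harm_score: float        # estimated probability the item is harmful (0-1)

def safe_rank(candidates, harm_weight=2.0, block_threshold=0.8):
    """Re-rank candidates, penalizing likely-harmful items.

    Items whose harm_score exceeds block_threshold are dropped outright;
    the rest are ordered by engagement minus a weighted harm penalty.
    All parameter values here are illustrative, not production settings.
    """
    eligible = [c for c in candidates if c.harm_score < block_threshold]
    return sorted(
        eligible,
        key=lambda c: c.engagement_score - harm_weight * c.harm_score,
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        Candidate("benign_video", engagement_score=0.40, harm_score=0.05),
        Candidate("shock_clip", engagement_score=0.90, harm_score=0.85),  # blocked
        Candidate("edgy_meme", engagement_score=0.70, harm_score=0.40),
    ]
    for c in safe_rank(feed):
        print(c.item_id)
    # benign_video outranks edgy_meme despite lower engagement,
    # because the harm penalty outweighs the engagement gap.
```

An engagement-only ranker would place the highest-engagement item first regardless of its harm score; the penalty term is what encodes the safety-over-engagement tradeoff the text describes.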
AI detection tools can help identify and moderate inappropriate content before users encounter it. Machine learning classifiers can be trained to recognize potentially harmful material, assisting platforms in proactive content moderation (Tech Coalition, 2023).
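The sketch below shows the basic shape of such a classifier: a text model trained on labeled examples that scores incoming content and flags high-risk items for human review. The training examples, labels, and review threshold are invented for illustration; production moderation systems rely on far larger curated datasets, multimodal (image and video) models, and human-in-the-loop review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system would use a large,
# carefully labeled corpus rather than a handful of examples.
texts = [
    "graphic depiction of violence",
    "explicit gore footage",
    "instructions for self-harm",
    "cute cat photos compilation",
    "recipe for chocolate cake",
    "weekend hiking trip report",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = potentially harmful, 0 = benign

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new content; items above the threshold are queued for
# human moderation rather than removed automatically.
REVIEW_THRESHOLD = 0.5  # illustrative value, tuned in practice
for text in ["violent gore video", "birthday cake recipe"]:
    p_harm = model.predict_proba([text])[0][1]
    print(f"{text!r}: p(harm)={p_harm:.2f}, "
          f"flag_for_review={p_harm >= REVIEW_THRESHOLD}")
```

Routing flagged items to human reviewers, rather than auto-deleting them, reflects the proactive-but-supervised moderation approach the citation describes: the classifier narrows the volume reviewers must inspect without making final removal decisions on its own.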