Abstract
Despite the growing role of content moderation online, particularly in mental health spaces, there is limited research into the effectiveness of platform practices and a lack of user-driven evidence to guide regulation. This study aimed to explore user accounts of the moderation of self-harm and suicide (SH/S) content online, including experiences of being moderated and perspectives on moderation practices. Where participants were also moderators, their experiences of moderating SH/S content were also explored. Fourteen participants were interviewed at baseline, eight at 3 months, and seven at 6 months; between interviews, they completed daily diaries of their online use. Thematic analysis was used to explore perspectives. Three key themes were identified: ‘content reporting behaviour’, exploring factors influencing decisions to report content; ‘perceptions of having content blocked’, exploring experiences and speculative accounts of SH/S content moderation; and ‘content moderation and moderators’, examining participant views on moderation approaches and their experiences of moderating. The study revealed challenges in moderating SH/S content online and highlighted inadequacies in current procedures. Participants struggled to self-moderate in online SH/S spaces, indicating the need for proactive platform-level strategies. Additionally, whilst the lived experience of moderators was valued, the associated risks emphasised the need for supportive measures. Policymakers and industry leaders should prioritise transparent and consistent moderation practices.
| Original language | English |
|---|---|
| Article number | 8 |
| Number of pages | 21 |
| Journal | Digital Society |
| Volume | 4 |
| Issue number | 1 |
| Early online date | 18 Feb 2025 |
| DOIs | |
| Publication status | Published - 1 Apr 2025 |
Bibliographical note
Publisher Copyright: © The Author(s) 2025.