Abstract
Background:
Social networking site (SNS) users may experience mental health difficulties themselves or engage with mental health–related content on these platforms. While SNSs use moderation systems and user tools to limit harmful content availability, concerns persist regarding the implementation and effectiveness of these methods.
Objective:
This study aimed to use an ethnographic walkthrough method to critically evaluate 4 SNSs—Instagram, TikTok, Tumblr, and Tellmi.
Methods:
Walkthrough methods were used to identify and analyze the mental health content moderation practices and the safety and well-being resources of SNS platforms. We completed systematic checklists for each of the SNS platforms and then used thematic analysis to interpret the data.
Results:
Findings highlighted both successes and challenges in balancing user safety and content moderation across platforms. While varied mental health resources were available on platforms, several issues emerged, including redundancy of information, broken links, and a lack of non–US-centric resources. In addition, despite the presence of several self-moderation tool options, there was insufficient evidence of user education and testing around these features, potentially limiting their effectiveness. Platforms also faced difficulties addressing harmful mental health content due to unclear language around what was allowed or disallowed. This was especially evident in the management of mental health–related terminology, where the emergence of “algospeak,” in which users adopt alternative codewords or phrases to avoid having content removed or banned by moderation systems, highlighted how easily users bypass platform censorship. Furthermore, platforms did not detail support for reporters or reportees of mental health–related content, leaving these users vulnerable.
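The “algospeak” finding describes users substituting benign-looking terms for flagged ones to slip past keyword-based filters. As an illustrative sketch only (the blocklist, the function, and the substitute terms are hypothetical examples, not any platform’s actual moderation system), a naive exact-match filter shows why such substitutions work:

```python
# Hypothetical sketch: a naive blocklist moderator of the kind that
# "algospeak" trivially evades. The term list and function are
# illustrative assumptions, not any real platform's system.

BLOCKED_TERMS = {"suicide", "self-harm"}  # hypothetical blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any exact blocklisted term."""
    words = post.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# A post using the plain term is caught...
print(is_flagged("struggling with suicide ideation"))  # True
# ...but a common algospeak substitution passes straight through.
print(is_flagged("struggling with unalive ideation"))  # False
```

Because matching is exact, every codeword forces moderators into a reactive update cycle, which is consistent with the study’s observation that users easily bypass platform censorship.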
Conclusions:
Our study resulted in the production of preliminary recommendations for platforms regarding potential mental health content moderation and well-being procedures and tools. We also emphasized the need for more inclusive user-centered design, feedback, and research to improve SNS safety and moderation features.
| Original language | English |
|---|---|
| Article number | e69817 |
| Journal | JMIR Human Factors |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 29 May 2025 |
Bibliographical note
Publisher Copyright: © Zoë Haime, Lucy Biddle.