Exploring safe moderation of online suicide and self-harm content

Theme: Mental health

Workstream: Psychological interventions

Status: This project is ongoing

The aim of this project is to assess the impact of blocking online self-harm content. People who access self-harm content online can find that it affects their mental health. Policymakers are therefore particularly concerned about how to manage online content relating to suicide or content that encourages users to self-harm.

Information about these topics must be moderated safely and appropriately online to protect vulnerable users from its effects. However, anecdotal evidence suggests that moderation can have unintended consequences for blocked users: it can make them feel isolated and stigmatised. There has also been little research into the best ways to moderate such content and reach out to users whose posts are blocked.

During our project we will analyse data collected for the DELVE study. DELVE collected information from a group of individuals who accessed and created self-harm and/or suicide content. Our analysis will focus on their experiences of moderation, including having their own content blocked and fulfilling moderator roles within online groups and communities.

We will also collect data in collaboration with Tellmi, a mental health app for young people. We will analyse responses from users who have been notified that their post has been withheld. Once this analysis is complete, we will conduct in-depth interviews with a sample of users whose posts were withheld.

Our aim is to develop guidance for industry and online communities. We will consider disseminating this guidance via a policy briefing and a Samaritans Online Excellence programme webinar, channels that reach a range of audiences including safety leads from leading online platforms. We will also explore opportunities to extend the work with Samaritans or other relevant funding bodies.