What should you do if you think a user is in immediate danger (self-harm, violence)?

If you encounter a post or comment on Reddit suggesting that a user is in immediate danger from self-harm or violence, whether expressed as intent, a plan, or imminent action, there are both platform-specific and broader steps you can take to respond appropriately.

Within Reddit, the platform has an automated crisis response system that activates when certain keywords related to suicide or self-harm appear in posts or comments. It displays a message to the user with links to crisis resources, including the 988 Suicide & Crisis Lifeline in the United States (call or text 988; the older number, 1-800-273-8255, still connects) and the Crisis Text Line (text HOME to 741741). You can also prompt this system yourself by reporting the post and selecting the option indicating that someone is considering suicide or serious self-harm.

In addition to using the report function, responding compassionately in the thread itself can be meaningful. A brief, non-judgmental reply that acknowledges the person's pain and provides crisis contact information, without engaging in debate or asking probing questions, gives them a direct resource in a moment when they are already online. For situations outside the US, organizations like the Crisis Text Line and the International Association for Suicide Prevention maintain directories of crisis resources by country.

For posts that describe a specific, credible, and imminent plan to harm another person, rather than generalized anger or frustration, the same reporting mechanism applies, and Reddit's safety team can contact the relevant authorities when real-world harm appears genuinely imminent. Users in the United States can also contact local emergency services directly if the post contains enough geographic information to identify the person's location.

Whenever possible, the goal should be to connect the person with professional crisis support rather than to place the responsibility for intervention on another Reddit user.
More to read
How can you configure privacy settings to minimize data collection and tracking?
What are best practices for avoiding doxxing yourself (sharing identifying details)?
How do you anonymize screenshots or posts that include sensitive info?
How should you think about posting content involving your workplace, family, or minors?
What types of scams are common on Reddit (crypto, giveaways, phishing)?
How do you recognize fake customer-service accounts or impersonation attempts?
How should you respond if someone asks you to move a conversation to another platform?
How do you avoid malware or phishing links in comments and DMs?
What is doxxing, and how does Reddit's policy treat it?
How does Reddit enforce policies on non-consensual intimate imagery?
What steps can you take if your account is compromised or hacked?
How can you use Reddit safely from high-risk environments (activism, sensitive topics)?
How do you verify that "official" help or mod messages are legitimate?
How can you appeal a site-wide suspension or report a false positive?
How do you keep a healthy relationship with Reddit to avoid burnout or doomscrolling?
Reddit Course — Part 5 (Q223–270)
What do common acronyms like AITA, TIFU, TIL, ELI5, LPT, CMV, and TL;DR stand for?
How do flairs like "Serious," "Answered," or "Update" shift expectations for behavior?
What is "shitposting," and when is it acceptable or unwelcome?
What is a "copypasta," and how does it spread across communities?