Reddit's Rule 3 covers non-consensual intimate media (NCIM), formerly called "involuntary pornography," and the platform treats violations of this rule with zero tolerance. The current policy covers any intimate or sexually explicit media of a real, identifiable person posted without that person's consent, including leaked private photographs, screenshotted sexting messages redistributed without permission, "upskirt" or "creepshot" imagery, AI-generated deepfakes depicting real individuals in sexual contexts, and content soliciting "lookalike" sexual material targeting a specific person.

Enforcement combines automated detection with human review. Reddit employs hash-matching technology, the same type used by other major platforms, that compares uploaded images against a database of known NCIM; when a match is found, the content is removed automatically. Reddit also collaborates with StopNCII.org, a tool that lets victims of non-consensual intimate image sharing create a hash of their images without uploading them; that hash is then distributed to participating platforms to prevent future uploads. Victims can use this resource proactively to protect themselves across multiple platforms at once.

User reports play an important role in the enforcement pipeline. When someone reports content under the NCIM category, the report goes to Reddit's safety team for review rather than to the community's volunteer moderators, a routing that reflects the severity and sensitivity of the issue. Confirmed violations result in content removal and permanent account suspension for the poster; Reddit has stated that users who post this type of content are banned rather than warned.

The policy extends to AI-generated content: sexually explicit AI-generated imagery depicting real, identifiable individuals is explicitly prohibited under the current rule. Fictional AI-generated sexual content not depicting real people falls outside this specific rule, though it may be subject to other policies.
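The hash-matching workflow described above can be illustrated with a minimal sketch. This is a hypothetical example, not Reddit's actual system (which is not public): it uses a SHA-256 digest over raw bytes, whereas production systems typically use perceptual hashes (such as PDQ or PhotoDNA) that still match after resizing or re-encoding. The function names and the sample blocklist here are invented for illustration.

```python
import hashlib

def media_hash(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence.
    (Real platforms use perceptual hashes, which tolerate re-encoding;
    a cryptographic hash like SHA-256 only matches identical bytes.)"""
    return hashlib.sha256(data).hexdigest()

def is_known_ncim(data: bytes, blocklist: set[str]) -> bool:
    """Check an upload against a database of known-violating hashes."""
    return media_hash(data) in blocklist

# A victim-side tool like StopNCII computes the hash locally, so only
# the digest -- never the image itself -- is shared with platforms.
blocklist = {media_hash(b"example-private-image-bytes")}

print(is_known_ncim(b"example-private-image-bytes", blocklist))  # True
print(is_known_ncim(b"some-other-upload", blocklist))            # False
```

The key privacy property is that the blocklist stores only digests: a platform can recognize a re-uploaded image without ever possessing or receiving the original file from the victim.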
How does Reddit enforce policies on non-consensual intimate imagery?