Knowledge Base entry

How do you deal with early trolls and low-effort spam in a fresh community?


A new, unestablished community is an attractive target for trolls and spammers precisely because it lacks the enforcement history, the community norms, and the moderator experience that make established subreddits harder to disrupt. Addressing early bad actors quickly and consistently is essential for establishing that the community takes its rules seriously.

The first line of defense is AutoModerator configuration. Even before you have human moderators, you can write AutoModerator rules that automatically remove posts and comments matching common spam patterns — links to specific domains, posts from accounts with karma below a threshold, accounts created within the last 30 days, and content containing known spam phrases. Setting new accounts to a "held for approval" state in the early weeks of a subreddit's existence gives moderators time to review contributions from unknown accounts before they are publicly visible.

When a troll appears, remove their content and issue a warning or ban promptly and without extended engagement. Trolls typically seek a reaction, and moderators who engage in lengthy debates about enforcement decisions in the comments are providing exactly the interaction the troll wants. A short, factual removal reason applied consistently is far more effective. Document the ban with a mod note so that if the troll creates an alternate account, there is a record of the pattern.

For low-effort spam, the removal reason matters. A polite, specific explanation of why the post was removed — "This post was removed because it consists of a link without context (Rule 2). You're welcome to repost with a description of why this is relevant to the community" — turns an enforcement action into an educational opportunity. New members who receive a respectful removal reason often repost correctly and become good community participants. Those who respond angrily or immediately violate the rule again self-identify as bad-faith actors who warrant stricter treatment.
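As a rough sketch, the patterns described above can be expressed as AutoModerator rules in the subreddit's wiki config. The domains and phrases below are placeholders, not a recommended blocklist, and thresholds should be tuned to your community:

```yaml
---
# Remove link posts to known spam domains (placeholder domains)
type: link submission
domain: [spamsite1.example, spamsite2.example]
action: remove
action_reason: "Spam domain"
---
# Hold content from brand-new or low-karma accounts for mod review
# ("filter" removes it but keeps it visible in the mod queue)
type: any
author:
    account_age: "< 30 days"
    combined_karma: "< 10"
action: filter
action_reason: "New or low-karma account held for review"
---
# Remove content containing known spam phrases (placeholder phrases)
type: any
body (includes): ["free gift card", "click here to claim"]
action: remove
action_reason: "Matched spam phrase"
```

The `filter` action is what implements the "held for approval" behavior: the item is removed from public view but queued for a moderator to approve or confirm the removal.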
Building a modest approved-submitter list in the early days — adding members who have demonstrated good-faith participation — creates a core of trusted voices whose content is less likely to be caught by spam filters, which reduces the false-positive rate of automated enforcement.
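AutoModerator can take advantage of that list directly: the `is_contributor` author check matches approved submitters, so adding `is_contributor: false` to a filtering rule exempts trusted members from it. A sketch, assuming a new-account filter rule like the one a young subreddit would typically run (thresholds are illustrative):

```yaml
---
# Hold new/low-karma accounts for review, but exempt approved submitters
# (is_contributor: false means the rule only matches non-approved authors)
type: any
author:
    account_age: "< 30 days"
    combined_karma: "< 10"
    is_contributor: false
action: filter
action_reason: "New account held for review (approved submitters exempt)"
```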