Reddit declares war on bad bot activity

Reddit is introducing changes to prioritize interactions between people. The approach is straightforward: unless an account is labeled otherwise, users should be able to assume they are engaging with another person. To back this up, Reddit plans to verify that users are human without requiring them to disclose their real-world identity.


How does it work?

Verified profiles for brands, publishers, and creators launched in late 2025 to help their content gain acceptance in relevant communities.

The next step is standardizing how automated accounts appear on Reddit. Accounts that use automation will carry an [App] label so users can recognize that they are interacting with software. Developers can register apps built on Reddit’s Developer Platform, along with other compliant automated accounts, to receive this label. Accounts that violate rules will continue to be removed.

This labeling system already exists within the Developer Platform ecosystem. Previously, content from apps was labeled, but starting March 31, 2026, labels will appear on account profiles instead.

There will be two types of labels. Apps built on the Developer Platform will be marked as Developer Platform App, while identified or registered automated accounts outside the platform will be marked as App.

“For folks not yet building on the Developer Platform, we’ll be notifying accounts we’ve identified as apps in this first phase of labeling today, and whether you receive a notification or not, this is where we could use your help. Register your existing apps. Registration will help our team better understand usage and have the best way to contact you (and apps that register before the end of June may be eligible to claim a porting bounty). Since accounts with automations will be labeled as Apps, we’ll encourage separate accounts for automations and personal use,” Reddit explained.

Reddit reports removing around 100,000 accounts per day, often before users encounter them, and it will continue to remove spam and malicious bot activity.

If an account shows signs of automation or non-human behavior, Reddit may ask for confirmation that a person is behind it. Accounts that fail verification may face restrictions, and users will get more flexible options for reporting suspected bot accounts.

What about privacy?

Reddit is exploring methods to confirm human presence while maintaining privacy. The company is prioritizing decentralized and private approaches that do not require long-term storage of identity data and that comply with regulations.

“When confirming that there is a human behind an account, we prefer third-party tools that keep a distance between verification and Reddit itself. Any system we use will not expose your real-world identity to Reddit nor your Reddit username or activity to any third party. There are a handful of ways to do this, and I’m sure there will be more,” Steve Huffman, CEO of Reddit, said.

The company is considering options such as passkeys, third-party biometric verification, and third-party government ID services.

AI-generated content

Reddit plans to monitor AI-generated content at a sitewide level while allowing communities to set their own standards. The current focus is on confirming that accounts represent real people, as AI tools are part of how users communicate.
