MENLO PARK – Meta has announced a major policy shift to curb the spread of “unoriginal” content on Facebook, targeting accounts that repeatedly repost text, images, or videos without meaningful transformation.
In a blog post on Monday, the company said it has already removed 10 million accounts this year for impersonating well-known creators and taken action against another 500,000 profiles involved in spam or fake engagement. Going forward, Facebook will penalize accounts that simply recycle material—particularly spam networks and impersonators—by reducing their reach in algorithmic feeds and banning them from monetization programs.
Meta stressed that creators who add value to existing content, such as through commentary, reaction videos, or creative trends, will not be affected. Instead, the crackdown is aimed at low-effort reposting and content farms, many of which rely on generative AI to mass-produce repetitive posts.
The company is also testing a feature that automatically adds links to duplicate videos, directing viewers to the original upload so that credit goes to the creator who made it.
This move mirrors YouTube’s recent steps to limit mass-produced, AI-generated videos that have been flooding the platform. The rise of what some critics call “AI slop”—bland, low-quality automated content—has sparked frustration among original creators.
Meta’s latest enforcement comes amid growing complaints over its automated moderation. A petition signed by nearly 30,000 users has demanded better human oversight and clearer appeals processes, citing wrongful account suspensions.
The updated policies will roll out gradually over the next few months, giving creators time to adapt. To help users understand how their posts are evaluated, Facebook’s Professional Dashboard now offers detailed post-level insights, including indicators for potential demotion or monetization risks.
In its Transparency Report, Meta revealed that about 3% of Facebook’s monthly active users are fake accounts. In the first quarter of 2025 alone, it acted on 1 billion fake profiles.
Alongside these efforts, Meta is also leaning more on community-based fact-checking in the US—similar to X’s Community Notes—rather than relying solely on internal moderation teams.