Newly revealed internal documents from Meta, the American tech giant, show that around 10% of the company's 2024 revenue, roughly $16 billion, came from advertisements promoting scams and prohibited goods. The figures point to serious flaws in Meta's advertising systems, which have allowed billions of users across its platforms to be repeatedly exposed to fraudulent content.

According to the leaked files, Meta’s social networks—including Facebook, Instagram, and WhatsApp—display an estimated 15 billion fraudulent ads to users every single day. Despite this staggering volume, the company has failed to effectively detect or block the majority of these non-compliant ads for at least the past three years.

The internal reports indicate that a significant share of online scams in the United States are facilitated through Meta's platforms: roughly one-third of successful scams in the country have been linked to content distributed via Facebook, Instagram, or WhatsApp. Internal findings also suggest that scammers view Meta as a preferred platform, believing it is easier to run fraudulent ads there than on competitors such as Google.

Meta's advertising policies have come under intense scrutiny. The company's automated systems are designed to flag suspicious advertisers, but they ban an account only when the assessed probability of fraud exceeds 95%. For suspected advertisers below that threshold, Meta employs a controversial tactic: charging them higher ad rates as a penalty. In practice, this lets high-risk advertisers stay on the platform and effectively pay for continued visibility.
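To make the reported decision rule concrete, the sketch below models it as a simple policy function. The 95% ban threshold is the figure described in the documents; the surcharge multiplier, the data structure, and all names are hypothetical illustrations, not Meta's actual systems.

```python
from dataclasses import dataclass

# Illustrative only: 0.95 is the ban threshold reported in the leaked documents;
# the surcharge level and every identifier here are assumptions for this sketch.
BAN_THRESHOLD = 0.95
PENALTY_SURCHARGE = 1.30  # assumed example: 30% higher ad rates below the threshold


@dataclass
class Advertiser:
    account_id: str
    fraud_risk: float  # model-estimated probability the advertiser is fraudulent


def enforcement_action(advertiser: Advertiser) -> dict:
    """Return the action a threshold-based policy like the one described would take."""
    if advertiser.fraud_risk >= BAN_THRESHOLD:
        # Only near-certain cases are removed outright.
        return {"account_id": advertiser.account_id, "action": "ban"}
    # Everyone else keeps running ads, but pays a higher rate.
    return {
        "account_id": advertiser.account_id,
        "action": "surcharge",
        "rate_multiplier": PENALTY_SURCHARGE,
    }


if __name__ == "__main__":
    for risk in (0.97, 0.80, 0.40):
        print(enforcement_action(Advertiser("acct-demo", risk)))
```

The point of the sketch is only that a very high ban threshold leaves a wide band of suspected but unbanned advertisers who remain monetizable.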

Moreover, the documents reveal a stark disparity in how Meta enforces its rules. Smaller advertisers can be banned after as few as eight violations related to financial fraud, whereas large clients have kept active ad accounts despite accumulating more than 500 violations.

The platform’s algorithmic ad delivery system further exacerbates the problem. Because Meta tailors ads based on user behavior, individuals who click on fraudulent content are more likely to be shown similar ads in the future. This creates a dangerous feedback loop, particularly for users who are more susceptible to scams.
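A toy simulation helps illustrate why click-optimised delivery can trap susceptible users. Everything in the sketch below (the click rates, the learning rate, the update rule) is an assumption chosen only to show the dynamic; it does not describe Meta's actual ranking algorithm.

```python
import random

# Toy model of the feedback loop described above: delivery weight shifts toward
# whatever category of ad a user clicks. All parameters are assumptions.
random.seed(0)


def simulate(user_clicks_scams: bool, rounds: int = 1000) -> float:
    scam_weight = 0.5      # start with an even mix of scam-like and ordinary ads
    learning_rate = 0.05   # how strongly one click shifts future delivery
    scam_impressions = 0
    for _ in range(rounds):
        shows_scam = random.random() < scam_weight
        if shows_scam:
            scam_impressions += 1
            clicked = user_clicks_scams and random.random() < 0.3
        else:
            clicked = random.random() < 0.05
        if clicked:
            # Click-optimised delivery: move the weight toward the clicked category.
            target = 1.0 if shows_scam else 0.0
            scam_weight += learning_rate * (target - scam_weight)
    return scam_impressions / rounds


print("susceptible user: %.0f%% scam impressions" % (100 * simulate(True)))
print("other user:       %.0f%% scam impressions" % (100 * simulate(False)))
```

Under these assumed parameters, the user who clicks scam ads ends up seeing a steadily growing share of them, while the other user's share drifts downward, which is the feedback loop the documents describe.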

Meta has publicly claimed it is committed to fighting fraud. In a statement, a company spokesperson said, “We do not want fraudulent content on our platforms—our users don’t want it, legitimate advertisers don’t want it, and neither do we.” The spokesperson also noted that user reports of scam ads have fallen by 58% over the past 18 months, and that more than 134 million scam ads have been removed in 2025.

However, the internal documents tell a different story. They show that while Meta has taken some steps to curb scam advertising, those efforts are constrained by financial considerations. A dedicated team tasked with reviewing suspicious ads has been given a strict directive: enforcement actions must not cost the company more than 0.15% of total revenue, roughly $135 million.

This financial ceiling underscores the priority Meta places on protecting its bottom line. In fact, the company has openly acknowledged in internal memos that while it intends to reduce reliance on revenue from high-risk scam ads, it is also concerned that such reductions could negatively affect overall business performance.

The scale of the issue has not gone unnoticed by regulators. The U.S. Securities and Exchange Commission (SEC) has launched a formal investigation into Meta's role in facilitating financial fraud through its advertising networks. In the UK, regulatory data shows that in 2023, Meta platforms were linked to 54% of all losses from payment-related scams, a larger share than any other social media service.

As scrutiny intensifies, pressure is mounting on Meta to overhaul its advertising policies, improve oversight, and prioritize user safety over short-term profits. But with billions of dollars at stake, the challenge of balancing compliance, ethics, and revenue growth remains immense.