What the SCAM Act and New Mexico v. Meta Mean for the Fight Against Online Fraud
Published by Barnali Pal Sinha
Posted on March 13, 2026
9 min read · Last updated: March 13, 2026

In September 2023, investigators working for the New Mexico Attorney General's office created a Facebook profile for a fictitious 13-year-old girl named “Issa Bee.” She had just moved from Texas to rural New Mexico. She posted about school, the cafeteria, and the school bus. Her musical tastes ran to Olivia Rodrigo and Harry Styles. One post lamented the loss of her last baby tooth. Another marked her first day of seventh grade. Within weeks, Issa had accumulated 5,000 Facebook friends, the maximum the platform allows, and more than 6,700 followers. Most interactions came from adult users who attempted to initiate private conversations and encouraged the account to move discussions to external messaging platforms such as WhatsApp and Telegram. Not one of those posts, accounts, or messages was removed by the platform. Issa Bee was not real. But the predators contacting her were.
Issa Bee's story didn't happen in a vacuum. For years, researchers, journalists, and regulators have documented the structural conditions that make stories like hers predictable. The BBC reported on networks of exploitative content on Facebook as far back as 2016. The Wall Street Journal published a detailed investigation in June 2023, finding that Instagram's algorithms were actively connecting users to networks trading in child sexual abuse material. Stanford's Internet Observatory reached similar conclusions.
The challenge, across all this reporting and research, was structural. Platforms operating under Section 230 of the Communications Decency Act, passed in 1996, faced limited legal exposure for harmful content generated or amplified on their services. Recent legislative findings have stated directly that courts interpreted Section 230 “too broadly, granting sweeping immunity even to online platforms alleged to facilitate unlawful or harmful activity — an outcome contrary to Congress's original intent.” Without meaningful liability, the incentive architecture for platforms skewed toward engagement and growth rather than investment in safety.
For those working in fraud prevention, this structural dynamic has had direct and measurable consequences. Romance fraud is among the most financially devastating fraud typologies affecting consumers, and it does not begin at the bank. It originates on social platforms and dating apps, where fake profiles are easy to create and maintain. In 2024, the FBI's Internet Crime Complaint Center received 7,626 reports of confidence and romance fraud from individuals over 60, with direct losses of nearly $390 million, an increase in both the number of victims and total losses. Among older victims, investment fraud linked to romance scams accounted for an additional $1.8 billion in losses.
For years, that environment went largely unchallenged. Now, for the first time, two significant institutions are responding to it.
On February 4, 2026, Senator Ruben Gallego (D-AZ) introduced S. 3774, the Safeguarding Consumers from Advertising Misconduct (SCAM) Act, with Senator Bernie Moreno (R-OH) co-sponsoring on the same day, giving the bill bipartisan support from the outset. Eight days later, on February 12, Representatives Dan Meuser (R-PA) and Lou Correa (D-CA) introduced a companion bill in the House, H.R. 7548. The legislation represents the first federal effort to directly target fraudulent and deceptive advertising on social media platforms, and to strip those platforms of the Section 230 immunity they have long relied upon to avoid accountability for paid content.
Five days after the Senate bill was introduced, on February 9, 2026, a civil jury trial opened at the First Judicial District Court in Santa Fe — the first standalone state trial to take Meta to a jury over the harms its platforms enable, built on evidence gathered through a months-long undercover investigation of the kind more commonly associated with organized crime task forces than consumer protection law.
The legal environment that shaped this landscape is now, meaningfully, beginning to change.
What the SCAM Act actually does
The SCAM Act is, at its core, a straightforward proposition: if a platform is paid to display an advertisement, it bears responsibility for ensuring that the advertisement is not fraudulent.
The bill's most consequential provision is its treatment of Section 230. Under the Act, the immunity that platforms have historically used to shield themselves from liability for content on their services explicitly would not apply to paid commercial advertisements. A platform that accepts money to run an ad and fails to take reasonable steps to verify that the ad is legitimate could no longer invoke a 1996 law intended to protect bulletin board operators. That carve-out alone represents a meaningful shift in the legal architecture of platform accountability.
What constitutes "reasonable steps" is defined with notable specificity. Platforms would be required to verify the legal name and physical location of every advertiser before a paid advertisement runs. They would need to collect valid, current government-issued identification for individual advertisers or legal documentation of existence and ownership for business entities. They would need to maintain contact information sufficient for follow-up by either the platform or the Federal Trade Commission. And they would be required to take active measures to prevent circumvention of those requirements through false, stolen, or synthetic identities — a provision that directly addresses the AI-enabled fraud techniques that have accelerated rapidly in recent years.
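For platform compliance teams, those verification requirements read almost like a schema. Here is a minimal sketch of what an advertiser verification record might look like under the bill's "reasonable steps" standard; the class, field names, and validation logic are illustrative assumptions on my part, not language from the bill:

```python
from dataclasses import dataclass

# Hypothetical sketch of an advertiser verification record reflecting the
# SCAM Act's "reasonable steps" requirements as summarized above. Field
# names and checks are illustrative assumptions, not statutory text.

@dataclass
class AdvertiserVerification:
    legal_name: str                       # verified legal name of the advertiser
    physical_location: str                # verified physical address
    contact_info: str                     # sufficient for platform or FTC follow-up
    government_id: str | None = None      # individuals: valid, current government ID
    entity_documents: str | None = None   # businesses: proof of existence and ownership
    identity_checks_passed: bool = False  # anti-circumvention screening result

    def is_complete(self) -> bool:
        """An ad should not run until every required element is on file."""
        has_identity_proof = bool(self.government_id or self.entity_documents)
        return all([
            self.legal_name,
            self.physical_location,
            self.contact_info,
            has_identity_proof,
            self.identity_checks_passed,  # screens false, stolen, or synthetic identities
        ])
```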
Beyond verification, the bill mandates active impersonation detection programs, automated and manual systems to detect fraudulent advertisements, and a clear, accessible tool for users to report suspected scam ads. When a report is filed by a user, a government entity, or the platform's own detection systems, the platform has 72 hours to investigate and 24 hours after that to notify the reporter of the outcome. If the advertisement is found to violate the Act, it must be removed within 24 hours of that determination.
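Implemented naively, that timeline is easy to get wrong, because the notification and removal clocks run from different events than the report itself. A minimal sketch of the deadline arithmetic, assuming each clock is anchored as described above (the function names and anchoring choices are illustrative assumptions, not statutory language):

```python
from datetime import datetime, timedelta

# Illustrative sketch of the SCAM Act's reporting timeline. The 72h/24h/24h
# windows come from the bill as summarized above; how a platform would
# anchor each clock in practice is an assumption here.

def report_deadlines(report_received: datetime) -> dict[str, datetime]:
    investigation_due = report_received + timedelta(hours=72)
    # The reporter must be notified within 24 hours of the investigation's
    # outcome; the latest permissible notification assumes it runs full term.
    notification_due = investigation_due + timedelta(hours=24)
    return {
        "investigation_due": investigation_due,
        "notification_due": notification_due,
    }

def removal_deadline(violation_determined: datetime) -> datetime:
    # The removal clock runs from the violation determination, not the report.
    return violation_determined + timedelta(hours=24)

# Example: a report filed at noon on March 1 must be investigated by noon on
# March 4; if a violation is determined then, removal is due by noon March 5.
deadlines = report_deadlines(datetime(2026, 3, 1, 12, 0))
```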
Enforcement is the responsibility of the Federal Trade Commission, with violations treated as unfair or deceptive acts or practices under the FTC Act. State attorneys general can bring civil actions on behalf of their residents. And in a provision that will matter enormously to individual victims, the bill creates a private right of action — meaning a person injured by a fraudulent advertisement can sue the platform directly, with the potential for treble damages in cases of willful or knowing violations.
The banking industry recognized immediately what this bill represents. Rob Nichols, President and CEO of the American Bankers Association, endorsed it on the day of introduction with language that cut to the heart of the accountability question: "Banks of all sizes invest significant resources to detect and stop fraud, and Americans appreciate those efforts, but we need to prevent scams before they ever reach a bank." It was a pointed acknowledgment, from the industry that has borne the operational and reputational costs of fraud originating elsewhere, that the SCAM Act is addressing the right problem at the right point in the chain.
What the SCAM Act still leaves open
The SCAM Act is a meaningful intervention. But it is worth being precise about what it does and does not cover.
The bill's scope is limited to paid commercial advertisements. A platform that accepts money to display a fraudulent ad and fails to verify the advertiser's identity would be exposed to FTC enforcement, state AG action, and private litigation. That is a genuine and significant change. But the fraud ecosystem the bill is designed to address does not originate only in paid advertising. Romance fraud, which the bill's own findings cite as a primary fraud typology, overwhelmingly originates through organic content: fake profiles, unsolicited connection requests, AI-generated personas seeded into community groups and dating platforms. None of that activity involves a paid advertisement. None of it falls within the scope of the bill.
The New Mexico investigation speaks to a dimension of platform accountability that legislation alone may not fully reach. Issa Bee's account grew to 5,000 friends and over 6,700 followers through the same recommendation systems that drive engagement across Meta's platforms, systems that, without adequate safeguards, created vulnerabilities for bad actors to exploit. What the trial addresses, through a product design liability theory that deliberately sidesteps Section 230, is precisely the organic platform behavior (the algorithms, the recommendations, the absence of sufficient safety mechanisms) that sits outside the scope of advertising regulation. In this sense, the SCAM Act and the New Mexico trial address adjacent problems that, together, begin to cover the landscape.
The SCAM Act's authors understood the scope within which they were working. The bill includes a provision (Section 4) that requires the FTC, within nine months of enactment, to report to Congress on regulatory gaps that allow online scams involving financial transactions to persist, and to assess whether improved information-sharing mechanisms between platforms, financial institutions, and regulators could reduce consumer losses. That provision is quietly significant. It is an acknowledgment, built into the legislation itself, that the bill is a first step rather than a complete solution.
The question of information sharing is where the gap between the SCAM Act and a truly comprehensive framework becomes most visible. In previous analysis of global regulatory approaches to payment fraud, the contrast between the United States and its peers has been consistent. Australia's proposed Scams Mandatory Industry Codes mandate information sharing across banks, telecommunications companies, and digital platforms as a coordinated ecosystem response. Singapore's Shared Responsibility Framework creates explicit obligations for both financial institutions and telecoms, with a tiered liability structure that reflects where in the chain a scam could have been disrupted. The United States, across multiple legislative efforts, has moved sector by sector rather than building a cross-industry architecture. The SCAM Act addresses platforms. The Protecting Consumers from Payment Scams Act, introduced in 2024, addressed banks. The telecommunications sector, through which a significant proportion of scam contact still occurs, remains largely outside both frameworks.
The American Bankers Association's endorsement of the SCAM Act captured this dynamic precisely. Banks have invested heavily in fraud detection and increasingly bear reimbursement liability for losses that originate far outside their walls and precede any banking interaction by weeks or months. The SCAM Act begins to rebalance that equation by placing obligations on platforms. But rebalancing is not the same as resolving. Until information flows freely between platforms, financial institutions, and regulators in real time, and until liability is apportioned across the full chain of actors through which a scam travels, the response will remain fragmented.
What the SCAM Act and the New Mexico trial together represent is a convergence: two institutions, one judicial and one legislative, arriving at the same conclusion simultaneously. As both the SCAM Act and the New Mexico complaint assert, platforms are not passive intermediaries. They are active participants in the environments where fraud originates, and they can be held accountable for what those environments produce. That convergence, in early 2026, is genuinely new. It is the foundation for a more complete framework.