ScamZero
2025-2026 Annual Report

The State of
Scam Prevention

A comprehensive analysis of fraud trends, victim demographics, and the emerging AI-powered threats reshaping the $196 billion scam economy.

ScamZero Research February 2026
$196B
Estimated Annual
Fraud Losses
93-98% of victims never report
558% Increase since 2019
700% Rise in deepfake fraud
4% Recovery rate
$17B Crypto scam losses

Fraud in America has reached unprecedented scale. While the Federal Trade Commission received reports of $12.5 billion in consumer fraud losses for 2024, their analysis suggests actual losses reach $196 billion annually—a staggering figure that accounts for the 93-98% of victims who never report.

This isn't just a number on a page. It represents life savings wiped out in a single phone call. Retirement accounts drained through fake investment platforms. Young people losing their first paychecks to job scams. The scam economy has evolved into a sophisticated, professionalized industry that targets everyone—regardless of age, education, or technical savvy.

For financial institutions, the crisis is existential. Members lose trust after scam losses—even when the institution isn't at fault. Complaints to NCUA and CFPB surge. Reputational damage spreads through communities. And new regulatory pressure looms as lawmakers respond to the $196 billion problem. Credit unions and banks increasingly find themselves on the front lines of a battle they didn't start but can't afford to lose.

In this report, we examine the latest data from the FBI's Internet Crime Complaint Center (IC3), the FTC's Consumer Sentinel Network, and leading cybersecurity research to paint a complete picture of the scam landscape in 2025-2026. The trends are alarming: losses continue to accelerate, AI is supercharging scammer capabilities, and the criminals are getting more sophisticated every day.

The scale of the problem

The FBI's Internet Crime Complaint Center recorded its highest-ever losses in 2024: $16.6 billion across 859,532 complaints. This represents a staggering increase from just a few years ago—FBI IC3 losses totaled $6.9 billion in 2021, meaning reported losses have more than doubled in just three years.

But even these record-breaking numbers vastly understate the true impact. The FTC estimates that somewhere between 2% and 6.7% of fraud victims actually file reports with any agency. Using this adjustment factor, the FTC calculates that actual consumer fraud losses could range from $186 billion to $830 billion annually. For this report, we use the conservative estimate of $196 billion in actual annual losses, near the low end of that range.
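The arithmetic behind this adjustment is straightforward and can be sketched in a few lines. This is a simplified back-of-envelope model (dividing reported losses by an assumed reporting rate); the function name and the single-division approach are illustrative, not the FTC's actual methodology.

```python
# Back-of-envelope check of the underreporting adjustment.
# Simplified model: actual losses = reported losses / reporting rate.

REPORTED_LOSSES_B = 12.5  # FTC-reported consumer fraud losses, 2024 ($B)

def scale_for_underreporting(reported_b: float, reporting_rate: float) -> float:
    """Estimate actual losses by dividing reported losses by the
    fraction of victims assumed to file a report."""
    return reported_b / reporting_rate

# If 6.7% of victims report, actual losses land near the low end:
high_rate_estimate = scale_for_underreporting(REPORTED_LOSSES_B, 0.067)
print(f"${high_rate_estimate:.0f}B")  # ~$187B, close to the $186B floor

# Lower assumed reporting rates push the estimate far higher,
# which is how the range stretches toward the upper bound.
low_rate_estimate = scale_for_underreporting(REPORTED_LOSSES_B, 0.02)
print(f"${low_rate_estimate:.0f}B")
```

The sensitivity to the assumed reporting rate is the key point: small changes in that single parameter swing the estimate by hundreds of billions of dollars, which is why the report anchors on a conservative figure.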

To put this in perspective: Americans lose more money to scammers each year than the entire GDP of countries like Ukraine or Hungary. The scam economy is now larger than many legitimate industries.

The acceleration has been remarkable. FTC-reported losses increased from $1.9 billion in 2019 to $12.5 billion in 2024—a 558% increase in just five years. This isn't gradual growth; it's an explosion. Every year sets a new record, and every year experts predict the trend will continue.

Reported Fraud Losses to FTC (2019-2024)
2019: $1.9B
2020: $3.4B
2021: $5.8B
2022: $8.8B
2023: $10.0B
2024: $12.5B
Source: FTC Consumer Sentinel Network. Represents a 558% increase since 2019.

Several factors are driving this acceleration:

Cryptocurrency enables extraction at scale. Digital currencies allow scammers to receive irreversible payments that can be laundered within hours. Unlike wire transfers, which require bank accounts that can be frozen, crypto wallets can be created anonymously and emptied instantly. Cryptocurrency-related scams alone exceeded $17 billion in 2025, according to Chainalysis research.

Artificial intelligence has lowered barriers to entry. Generative AI allows scammers to create convincing phishing sites, write grammatically perfect scam messages in any language, and even clone voices—all without technical expertise. Tools that once required specialized knowledge are now available to anyone with an internet connection.

The professionalization of fraud. Criminal organizations now operate like legitimate businesses, with specialized roles, customer support centers, and even money-back guarantees for their "services." Some scam call centers employ hundreds of workers and generate millions in monthly revenue. This isn't amateur hour; it's organized crime at industrial scale.

Social media as attack surface. The platforms where people spend most of their time have become the primary hunting ground for scammers. Social media is particularly effective for investment scams and romance fraud, where criminals can build trust over weeks or months before extracting funds.

Who falls for scams? The demographics of victimization

One of the most persistent myths about scams is that only elderly, technologically unsophisticated people fall victim. The data tells a far more nuanced story: young adults report more scams, but older adults lose dramatically more money.

According to FTC data, adults aged 20-29 file more fraud reports than any other age group. They're more likely to encounter scams through their heavy social media use and are particularly vulnerable to job scams and online shopping fraud. The median loss for this age group is relatively modest at $417—painful, but survivable for most.

As age increases, the pattern shifts dramatically:

Median Fraud Loss by Age Group
20-29: $417
30-39: $450
40-49: $500
50-59: $520
60-69: $700
70-79: $1,000
80+: $1,600+
Source: FTC Consumer Sentinel Network 2024. Adults 80+ lose nearly 4x what young adults lose.

But even these medians obscure the true catastrophe facing older Americans. The distribution of losses is highly skewed: 68% of all money lost by older adults comes from individual cases exceeding $100,000. These aren't small mistakes; they're life-altering events that wipe out entire retirement savings in a matter of days or weeks.

The older adult crisis

The FTC's 2025 report on older adults reveals a disturbing acceleration in elder fraud. Reported losses by adults 60 and older reached $2.4 billion in 2024—a 300% increase from $600 million just four years earlier in 2020. Accounting for underreporting, actual losses for this demographic alone may range from $10.1 billion to $81.5 billion annually.

What makes these losses particularly devastating is their concentration at the high end of the spectrum:

Reports from older adults losing $10,000 or more increased more than 4-fold from 2020 to 2024. Reports of losses exceeding $100,000 increased nearly 7-fold during the same period. These aren't abstract statistics—they represent retirement savings, home equity, and decades of careful financial planning erased in days or weeks by sophisticated criminal operations.

The contact methods have also shifted dramatically. Social media has become the #1 contact method by total dollars lost for adults 60 and older. Losses via social platforms increased nearly 9-fold since 2020, driven primarily by investment scams that build trust over weeks or months before extracting funds. The criminals have followed their victims to where they spend their time online.

Why everyone is vulnerable

The data makes clear that scam vulnerability isn't about intelligence or technical sophistication. Young, digitally native adults fall for scams at higher rates than their parents. Educated professionals get caught in business email compromise schemes. Financially savvy investors lose fortunes to pig butchering scams.

Scams work because they exploit universal human psychology: the desire for connection, the fear of missing out, the panic of an emergency, the trust in authority figures. Criminals have become experts at triggering these emotional responses and short-circuiting rational decision-making. When someone believes their grandchild is in jail, or that they've found true love, or that they're about to miss a once-in-a-lifetime investment opportunity, normal skepticism evaporates.

The scam types causing the most damage

Not all scams are created equal. While online shopping fraud generates the highest volume of complaints, investment scams cause by far the greatest financial damage. Understanding which scam types drive the most losses helps organizations and individuals prioritize their prevention efforts.

Investment and cryptocurrency scams: $6.5 billion and rising

Investment scams, particularly those involving cryptocurrency, dominated the fraud landscape in 2024 and show no signs of slowing in 2025. The FBI's IC3 recorded over $6.5 billion in losses from investment fraud in 2024 alone, making it the most damaging category by a wide margin.

The signature tactic of 2024-2025 is the "pig butchering" scam—a term derived from the practice of "fattening up" victims before the slaughter. These schemes typically begin on dating apps, social media platforms, or messaging services like WhatsApp and Telegram. Scammers pose as potential romantic partners, successful investors, or friendly acquaintances, building genuine-seeming relationships over weeks or months.

Once trust is established, they introduce victims to fake investment platforms. These platforms are remarkably sophisticated, featuring professional interfaces, real-time "price" updates, and even fake customer service representatives. Victims see their initial investments appear to grow dramatically—on paper, at least. They're encouraged to invest more and more, often taking out loans or liquidating retirement accounts to fund their "winning" positions.

TRM Labs reported that pig butchering scams alone extracted $2.1 billion in the first half of 2025. The average scam payment has increased 253%, from $782 in 2024 to $2,764 in 2025—suggesting scammers are becoming more effective at maximizing extraction from each victim.

$17B
Total cryptocurrency scam losses in 2025
Source: Chainalysis 2025 Crypto Crime Report

When victims try to withdraw their "profits," they discover the truth: the platform demands taxes, fees, or additional deposits before releasing funds. The money is already gone, usually converted to cryptocurrency and laundered through multiple wallets within hours of receipt.

Imposter scams: Highest volume, rapidly rising losses

Imposter scams—where criminals pose as government agencies, banks, tech companies, or other trusted entities—generated over 1 million reports in the first three quarters of 2025 alone, already exceeding 2024's full-year total by 7.8%.

These scams work because they exploit authority and urgency. A call from "the IRS" threatening arrest for unpaid taxes. An email from "Microsoft" warning of a security breach requiring immediate action. A text from "your bank" about suspicious activity on your account. A call from "Apple Support" about a problem with your iCloud account. Scammers have become adept at spoofing caller IDs, creating convincing official-looking communications, and manufacturing panic.

The financial impact is substantial and growing. In Q3 2025 alone, government imposter scams generated 114,062 reports with $280 million in losses, representing an $800 median loss per victim. Business imposter scams accounted for 128,996 reports with $306 million in losses, at a $500 median loss. Combined, imposter scams are on pace to exceed $2 billion in reported losses for 2025.

Microsoft remains the most impersonated brand, appearing in approximately 40% of tech support and imposter scam attempts. Google accounts for about 9% of brand impersonation attacks, followed by Apple at 6%. The criminals target the brands their victims are most likely to trust and respond to urgently.

Romance scams: The highest conversion rate

Romance scams occupy a particularly insidious niche in the fraud ecosystem. With a 59.5% victimization rate—meaning more than half of people who engage with romance scammers eventually lose money—they're the most effective category at converting targets into victims.

Reported losses exceeded $1.3 billion in 2024, with a median loss of $2,050—the highest of any imposter scam category. UK banking data shows romance scam payments increased 37% in 2025, with victims making an average of 11 separate payments before realizing they've been defrauded. The criminals are patient, sometimes building relationships for months before making their first financial request.

The platforms where these scams originate reveal the social media connection: 58% of romance scams start on social media, with 30% originating specifically on Facebook. Another 40% begin on dating apps. The criminals go where the lonely and hopeful are looking for connection.

Romance scams are particularly devastating because they involve dual betrayal: victims lose not only their money but also a relationship they believed was real. The emotional damage often exceeds the financial harm, leaving victims struggling with depression, shame, and difficulty trusting others.

Business Email Compromise: The corporate threat

While consumer scams dominate headlines, Business Email Compromise (BEC) attacks quietly extracted $2.8 billion in 2024—bringing the three-year total to $8.5 billion. These sophisticated attacks target organizations rather than individuals, typically by compromising email accounts or impersonating executives to authorize fraudulent wire transfers.

According to cybersecurity research, 63% of organizations experienced a BEC attack in 2024. The attacks have grown more sophisticated as criminals leverage AI tools. An estimated 40% of BEC emails are now AI-generated, making them grammatically perfect and increasingly difficult to detect. Conversation hijacking attacks—where scammers insert themselves into legitimate email threads—increased 70% in 2024.

The typical BEC scenario involves a scammer who has either compromised an executive's email account or created a convincing lookalike address. They send urgent requests to finance departments or accounting staff, demanding immediate wire transfers for acquisitions, vendor payments, or other business purposes. The requests appear legitimate because they seem to come from known, trusted sources.

Online shopping fraud: The highest volume

Online shopping scams affected 26% of all scam victims in recent surveys, making it the most common scam type by volume. In the UK alone, over 7 million people reported falling victim to online shopping fraud in 2024.

The holiday shopping season sees particularly intense activity. During Black Friday 2024, researchers documented 306 fake shopping domains being created daily. Security firms blocked approximately 4,760 malicious domains per day during the peak shopping period. These fake sites mimic legitimate retailers, offering deals that seem too good to be true—because they are.

While individual losses from online shopping scams tend to be smaller than investment or romance fraud, the sheer volume makes it a massive problem. Victims typically lose the cost of their purchase plus whatever payment information the criminals captured.

Tech support scams: Targeting the confused

Tech support scams generated 36,002 FBI reports in 2024 and caused $65 million in losses in Q3 2025 alone. The median loss is $1,362, though some victims lose far more when scammers gain remote access to their computers.

These scams typically begin with a popup warning that the victim's computer is infected or compromised. The warning includes a phone number for "Microsoft Support" or similar. When victims call, scammers convince them to install remote access software, then charge hundreds or thousands of dollars for fake "repairs." In worst cases, they use the access to drain bank accounts or install additional malware.

Job and employment scams: Targeting the desperate

Job scams have increased over 300% since 2020, preying on people looking for remote work opportunities. According to recent surveys, 25% of job seekers report being victimized by fake job postings.

These scams typically involve fake job offers that require upfront payment for training, equipment, or background checks. Some involve "check cashing" schemes where victims deposit fake checks and wire money before discovering the checks were fraudulent. Others lead to identity theft as victims provide Social Security numbers, banking information, and other personal data to what they believe are new employers.

The rise of remote work has created perfect conditions for these scams. With many legitimate companies conducting entire hiring processes online, job seekers have become accustomed to accepting offers from people they've never met in person.

The AI and deepfake revolution in fraud

If there's a single trend that defines the 2025-2026 fraud landscape, it's the weaponization of artificial intelligence. What began as experimental techniques by sophisticated criminal groups has become mainstream criminal infrastructure, fundamentally changing what's possible for scammers at all levels.

The numbers are stark: deepfake fraud increased 700% in Q1 2025 compared to Q1 2024, according to identity verification researchers. What was once a rare and technically demanding attack vector is now routine.

According to McAfee's "State of the Scamiverse" research, 1 in 10 Americans now report experiencing a voice-clone scam—typically a call from someone who sounds exactly like a family member claiming to be in an emergency situation. Parents have reported receiving calls from what sounds exactly like their child, sobbing and claiming to need bail money or emergency medical funds. The voice is their child's voice. The panic is real. The call is fake.

Perhaps most concerning is the detection gap:

1 in 3
Americans cannot confidently identify deepfake content
Source: McAfee State of the Scamiverse 2026

This detection gap is likely to widen as the technology continues to improve while public awareness lags behind. The capabilities now available to criminals are remarkable:

Voice cloning has become trivial. Using just seconds of audio from a social media video, voicemail greeting, or public speech, scammers can replicate anyone's voice with sufficient accuracy to fool family members. The technology that enables this is freely available online. A child's TikTok video can provide everything a scammer needs to call their parents demanding emergency funds.

Video deepfakes are increasingly convincing. In 2024, the CEO of WPP, one of the world's largest advertising companies, was targeted with a deepfake video call. While that particular attack failed because the target was suspicious, many don't—especially when targeting employees who may not regularly interact with executives or when exploiting time pressure and authority dynamics.

AI content generation has transformed phishing. Scammers use AI to create professional-looking phishing websites in minutes, write convincing scam messages in any language without grammatical errors, and generate fake job postings complete with fabricated company information and realistic correspondence. The "Nigerian prince" emails with obvious spelling errors are being replaced by flawless communications that pass casual inspection.

Automated targeting makes scams more efficient. AI tools help scammers identify and prioritize potential victims based on social media profiles and public data, personalize approaches based on interests and life circumstances, and scale operations that previously required significant manual effort. A romance scammer can now run dozens of simultaneous cons, with AI helping craft personalized messages for each victim.

The implications for traditional scam detection are profound. Red flags that used to work—poor grammar, generic greetings, implausible scenarios—are increasingly obsolete. AI allows even unsophisticated criminals to produce professional-quality scam materials. The playing field has been leveled, and not in favor of potential victims.

Why recovery is nearly impossible

One of the cruelest aspects of modern scams is how rarely victims recover their money. Scammers deliberately demand payment methods that are fast, irreversible, and difficult to trace—and they move funds through multiple accounts within hours of receipt. By the time most victims realize they've been scammed, their money has crossed borders multiple times and been converted through cryptocurrencies, making recovery essentially impossible.

The choice of payment method essentially determines whether recovery is possible:

Recovery Likelihood by Payment Method
Cryptocurrency (irreversible, quickly laundered): <1%
Gift cards (codes redeemed instantly): ~0%
Zelle / Venmo (authorized transfers rarely reversed): ~10-15%
Wire transfer (possible if caught within hours): ~26%
Credit card (chargeback rights apply): ~60%
Source: Global Anti-Scam Alliance, FTC data. Recovery rates vary by circumstances and timing of report.

Cryptocurrency: Less than 1% of cryptocurrency sent to scammers is ever recovered. Transactions are irreversible by design, funds can be laundered through mixing services within hours, and criminals often operate from jurisdictions with no cooperation agreements. Once you send crypto to a scammer, it's gone.

Gift cards: Recovery rate is essentially zero. Scammers redeem the card codes instantly, often while still on the phone with victims. The funds become untraceable within seconds. This is why scammers love gift cards—they're nearly perfect criminal infrastructure.

Peer-to-peer payment apps (Zelle, Venmo, Cash App): Recovery rate is approximately 10-15%. These services are designed for sending money to people you know and trust. Transactions are typically instant and authorized by the sender, meaning fraud protection is minimal. Banks and payment services often refuse to reverse transactions that victims technically authorized, even under manipulation.

Wire transfers: Recovery rate is approximately 26%. If caught within hours, banks can sometimes freeze or recall wire transfers. But the window is narrow—criminals move funds quickly specifically to defeat recall attempts. After 24-48 hours, recovery becomes extremely unlikely.

Credit cards: Recovery rate is approximately 60%. Chargeback rights provide significant consumer protection for credit card transactions. However, scammers increasingly push victims toward other payment methods specifically to avoid these protections. If a scammer is willing to accept credit card payment, that's often because they've found ways to defeat chargeback claims.
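The recovery rates above translate naturally into a simple risk lookup that a prevention tool or staff checklist could use. The following sketch encodes the report's approximate figures; the dictionary keys, threshold, and function name are illustrative assumptions, not a production rule set.

```python
# Approximate recovery rates by payment method, from the figures above
# (Global Anti-Scam Alliance / FTC). Values are rough midpoints.

APPROX_RECOVERY_RATE = {
    "credit_card": 0.60,
    "wire_transfer": 0.26,
    "p2p_app": 0.12,        # Zelle / Venmo / Cash App, midpoint of ~10-15%
    "cryptocurrency": 0.01,  # <1%
    "gift_card": 0.0,
}

def is_high_risk_payment(method: str, threshold: float = 0.25) -> bool:
    """Flag payment methods whose expected recovery rate falls below the
    threshold -- candidates for extra verification before money moves."""
    return APPROX_RECOVERY_RATE.get(method, 0.0) < threshold

# Unknown methods default to 0.0 recovery and are treated as high risk.
for method in APPROX_RECOVERY_RATE:
    if is_high_risk_payment(method):
        print(f"Warn before paying via {method}")
```

A lookup like this captures the report's core point in code: the moment a scammer steers payment toward gift cards, crypto, or a P2P app, the expected recovery drops to near zero, so that steering itself is the signal to intervene.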

The overall statistics are grim:

Only 4%
of scam victims ever recover any of their money. The remaining 96% of losses are permanent.
Source: Global Anti-Scam Alliance

Scammers have become experts at moving money quickly and laundering it effectively. Within hours of receipt, funds typically pass through multiple accounts, may be converted to cryptocurrency, split among numerous wallets, and potentially moved through privacy-focused coins or mixing services. By the time law enforcement can even begin an investigation, the money trail has gone cold.

This reality underscores why prevention is so critical. Once money is sent to a scammer, the overwhelming likelihood is that it's lost forever. No amount of reporting, investigation, or regret will bring it back. The only reliable protection is to stop the scam before the payment is made.

Recognizing the warning signs

Scams succeed because they manipulate emotions—fear, greed, loneliness, urgency. Understanding the common warning signs can help potential victims pause and verify before sending money. While sophisticated scams can be difficult to detect, most still share common characteristics that should trigger skepticism.

Universal red flags

Urgency and pressure: "Act now or lose access to your account." "This offer expires today." "Your grandson needs bail money immediately." "The IRS will issue a warrant for your arrest if you don't pay now." Legitimate organizations rarely demand instant action. Scammers create urgency specifically to prevent victims from thinking clearly or consulting with others.

Unsolicited contact: You didn't initiate the conversation. Someone reached out to you—via call, text, email, or social media—with an opportunity, warning, or romantic interest. While not all unsolicited contact is fraudulent, it should always prompt extra skepticism. Real opportunities rarely land in your inbox from strangers.

Unusual payment methods: Any request for payment via gift cards, cryptocurrency, wire transfer, or payment apps should be treated as a major red flag. Legitimate businesses and government agencies do not demand payment in these forms. If someone asks you to buy gift cards and read them the numbers, it's a scam. No exceptions.

Too good to be true: Investment returns that far exceed market rates. A job that pays extremely well for minimal work. A romantic partner who seems perfect in every way and falls for you quickly. A product at an impossibly low price. Scammers offer what victims want to hear. If it seems too good to be true, it almost certainly is.

Requests for secrecy: "Don't tell your family about this investment opportunity." "Don't mention this to the bank teller." "Keep this between us for now." Scammers isolate victims from the people who might recognize the fraud and intervene. Any request for secrecy around money should be treated as a warning sign.

Remote access requests: Legitimate tech support will never call you unsolicited and ask to remotely control your computer to "fix" a problem you didn't know existed. If someone asks you to install remote access software like AnyDesk or TeamViewer, end the conversation immediately.

Requests for personal information: Social Security numbers, banking credentials, passwords, and PINs should never be shared in response to incoming requests. Legitimate organizations already have the information they need and won't ask you to provide it over the phone or by email.

AI and deepfake-specific red flags

As AI-powered scams become more common, new warning signs are emerging that people need to learn to recognize:

Unnatural voice patterns: AI-cloned voices may have odd pauses, unusual cadence, subtle robotic qualities, or slightly unnatural intonation. If a call from a family member sounds slightly "off," it may be a deepfake. Even if it sounds perfect, unexpected urgent requests should be verified through other channels.

Video inconsistencies: Deepfake videos often have quality issues around the face, especially during movement. Watch for blur artifacts around the edges of faces, strange lighting that doesn't match the environment, lips that don't quite sync with audio, or unnatural eye movement. However, be aware that these artifacts are becoming less obvious as the technology improves.

Requests that bypass normal procedures: If someone claiming to be your CEO asks you to skip standard verification steps, transfer money through unusual channels, or keep a transaction secret from colleagues, be suspicious. Legitimate executives work within established procedures, especially for significant financial transactions.

Single-channel verification resistance: Scammers resist verification through alternative methods. If someone claims to be a family member but won't accept a callback on their regular phone number, or claims to be an executive but won't respond to an email at their known address, that's a critical warning sign.

Emotional manipulation in unexpected contexts: AI has made "grandparent scams" and family emergency scams far more convincing. Any urgent request for money from a family member—especially one you can't reach through normal channels—should be verified independently. Call them back on a number you know is theirs. Contact other family members. Take the time to verify, no matter how urgent the request seems.

The single most important defense remains simple: verify independently before sending money. Call back on a known number. Check with family members directly. Look up the organization's official contact information rather than using numbers or links provided in suspicious messages. Legitimate requests can wait for verification; scammers can't afford to let you check.

What financial institutions must do

With only 4% of victims recovering their money, prevention is the only reliable defense. For credit unions, banks, and other financial institutions, this means recognizing that member education at the point of transaction is often the last line of defense between a scam and a devastating loss.

The third case study above illustrates a critical reality: a trained teller asking the right question at the right moment stopped a $15,000 loss. That intervention succeeded because the institution had invested in fraud awareness training. But most scam attempts don't happen at teller windows anymore—they happen on mobile apps, through online banking, and via wire transfer requests that bypass human interaction entirely.

The regulatory pressure is intensifying

Financial institutions face growing pressure from multiple directions:

NCUA and CFPB scrutiny: Regulators increasingly expect credit unions to demonstrate proactive fraud prevention programs. Member complaints about scam losses—even when the institution bears no legal liability—damage examination ratings and can trigger compliance actions.

Nacha rules on payment fraud: The ACH network's evolving rules around transaction monitoring and fraud prevention create new compliance obligations. Institutions that lack documented fraud prevention programs face heightened scrutiny.

State-level consumer protection: Several states have enacted or are considering legislation requiring financial institutions to provide scam education to members, particularly for high-risk demographics like older adults.

Reputational stakes: In credit union communities, word travels fast. When a member loses their life savings to a scam, other members hear about it. The institution's reputation suffers even when it did nothing wrong—and especially when it could have done more to warn the victim.

Effective institutional defenses

Research and real-world experience point to several high-impact interventions:

Real-time member education: Providing members with easy access to scam verification at the moment of doubt—before they send money—has proven far more effective than generic awareness campaigns. When someone can check whether a contact, investment opportunity, or urgent request is legitimate, many scams are stopped before the transaction completes.

Transaction pattern monitoring: Unusual withdrawal patterns, especially from older members, can signal scam activity. Wire transfers to unfamiliar recipients, multiple large gift card purchases, and sudden liquidation of long-held accounts should trigger intervention protocols.
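One such monitoring rule can be sketched in a few lines: flag any withdrawal that is several times larger than the member's recent typical activity. The multiplier and the median-based baseline here are illustrative assumptions, not an industry standard; real systems layer many such rules with recipient, velocity, and channel signals.

```python
# Minimal sketch of one transaction-monitoring rule: flag a withdrawal
# far outside the member's recent pattern. Multiplier is an assumption.
from statistics import median

def flag_unusual_withdrawal(history: list[float], amount: float,
                            multiplier: float = 5.0) -> bool:
    """Return True when a withdrawal exceeds `multiplier` times the
    median of the member's recent withdrawals."""
    if not history:
        return False  # no baseline to compare against
    return amount > multiplier * median(history)

recent = [120.0, 80.0, 200.0, 95.0, 150.0]   # typical activity
print(flag_unusual_withdrawal(recent, 9500.0))  # prints True (wire-sized outlier)
print(flag_unusual_withdrawal(recent, 140.0))   # prints False (normal pattern)
```

The point of even a crude rule like this is to create the pause: a flagged transaction routes to a trained staff member who can ask the probing question before the money leaves.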

Staff training on scam recognition: Front-line staff who know the warning signs can intervene effectively. The grandmother in the voice-cloning case study was saved because a trained teller asked the right question. This training pays for itself many times over.

Clear intervention protocols: Staff need authorization and clear procedures to delay suspicious transactions, ask probing questions, and escalate concerns. Without institutional backing, even well-trained employees may hesitate to intervene.

Member communication programs: Regular, targeted communications about current scam trends—especially to high-risk demographics—help build awareness. These are most effective when they describe specific, current scam scenarios rather than generic warnings.

The institutions that successfully protect their members recognize that fraud prevention isn't just a compliance checkbox—it's a core part of the member relationship. When a credit union helps a member avoid a scam, that creates loyalty and trust that no marketing campaign can match.

What this means for 2026 and beyond

The scam landscape entering 2026 is more dangerous than it has ever been. Losses continue to accelerate year over year. Artificial intelligence is lowering barriers for criminals while making detection harder for victims. The professionalization of fraud means scammers are becoming more effective at extracting maximum value from each target. And the payment systems that enable instant, irreversible transfers make recovery nearly impossible.

Several trends warrant particular attention:

AI-powered scams will become the norm, not the exception. Voice cloning, deepfake video, and AI-generated content are no longer cutting-edge techniques available only to sophisticated criminal organizations—they're increasingly standard tools accessible to scammers at all levels. Organizations and individuals need to adapt their verification procedures to assume that any voice, video, or written communication could be artificially generated.

Older adults remain at existential financial risk. The 7-fold increase in $100,000+ losses among seniors represents a genuine crisis. Many victims are losing their entire retirement savings, home equity, and financial security in single incidents. The intersection of isolation, trust in authority, and available assets makes this demographic particularly vulnerable to devastating losses.

Social media has become ground zero for fraud. The 9-fold increase in social media-initiated losses for older adults reflects broader patterns affecting all age groups. Platforms designed for connection have become primary hunting grounds for criminals, and the platforms have been slow to implement effective countermeasures.

Recovery remains vanishingly unlikely. With only 4% of victims recovering any funds, the emphasis must be entirely on prevention. Once money is sent to scammers, it's almost always gone for good. This stark reality means that every scam stopped before payment represents a complete victory, while every scam completed represents a likely permanent loss.

The scale of the problem is vastly underestimated. The $196 billion estimated actual loss figure may itself be conservative. Many victims never recognize they've been scammed, are too embarrassed to report, or don't know where to report. The true scope of fraud's impact on American households is likely even larger than our estimates suggest.

The good news: awareness and education demonstrably reduce victimization rates. People who understand how scams work, who know the warning signs, and who have practiced healthy skepticism are far less likely to become victims. Scams rely on catching people off guard; informed targets are harder targets.

This is why ScamZero exists—to give everyone a place to check anything suspicious before they become part of the $196 billion problem. By providing easy access to scam verification and education, we help communities build the awareness and habits that protect against fraud.

Data sources and methodology

This report synthesizes data from official government sources, financial institutions, and leading cybersecurity research organizations. All statistics cited can be traced to primary sources, which we have listed below for reference and verification.

Key sources include:

  • FBI Internet Crime Complaint Center (IC3) — 2024 Annual Report documenting 859,532 complaints and $16.6 billion in losses, with historical data back to 2019 for trend analysis.
  • Federal Trade Commission Consumer Sentinel Network — Reported loss data, estimates that only 2%-6.7% of victims report, and the resulting $196 billion actual loss calculation. Also 2025 reports on older adult fraud trends.
  • National Consumers League (fraud.org) — Analysis of fraud trends including the 558% increase in reported losses since 2019.
  • TRM Labs — Cryptocurrency fraud analysis including pig butchering data ($2.1 billion in H1 2025) and a 253% increase in average payment size, to $2,764.
  • Chainalysis — 2025 Crypto Crime Report data including $17 billion in total cryptocurrency scam losses for 2025.
  • OmniWatch — 2025 Identity Theft Statistics including quarterly imposter scam reports and loss data.
  • McAfee — "State of the Scamiverse" 2026 research on AI/deepfake fraud prevalence and public awareness gaps.
  • Sumsub — Identity fraud research documenting the 700% increase in deepfake fraud in Q1 2025.
  • Global Anti-Scam Alliance (GASA) — Recovery rate statistics (4% overall) and victim behavior data.
  • TSB Bank (UK) — Romance scam payment trends, including a 37% increase in 2025 and an average of 11 payments per victim.
  • Barracuda Networks — BEC attack statistics, including 63% of organizations affected and 40% of attack emails AI-generated.
  • NordVPN research — Online shopping fraud prevalence (26% of victims, 7M+ UK victims).
  • Netcraft — Fake domain creation rates during Black Friday (306/day) and malicious domains blocked (4,760/day).

The $196 billion estimated actual loss figure uses the FTC's published methodology for adjusting reported losses to account for underreporting rates of 93-98%. This approach is conservative; some researchers estimate actual losses may be significantly higher.
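The arithmetic behind this adjustment can be illustrated with a back-of-envelope calculation. This is a simplified reconstruction, not the FTC's actual model: it assumes only the $12.5 billion reported figure and the 2%-6.7% reporting-rate range cited in this report, and scales reported losses up by the reporting rate.

```python
# Back-of-envelope version of the loss-adjustment arithmetic.
# Inputs come from this report; the FTC's published methodology is more involved.
reported_losses_usd = 12.5e9   # FTC reported consumer fraud losses, 2024

def estimate_actual(reported: float, reporting_rate: float) -> float:
    """If only `reporting_rate` of victims report, scale reported losses up."""
    return reported / reporting_rate

low = estimate_actual(reported_losses_usd, 0.067)   # 6.7% of victims report
high = estimate_actual(reported_losses_usd, 0.02)   # only 2% of victims report

print(f"${low/1e9:.0f}B - ${high/1e9:.0f}B")  # prints "$187B - $625B"
```

The $196 billion figure sits near the conservative end of this range, which is consistent with the report's note that actual losses may be significantly higher.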

Report last updated: February 2026