What Is AI Fraud?
AI fraud is what happens when the same generative artificial intelligence tools that help businesses write faster, code smarter, and serve customers better get weaponized by criminals to deceive faster, impersonate more convincingly, and steal at industrial scale. In 2026, AI fraud is no longer a futuristic threat category being debated in cybersecurity white papers — it is the defining characteristic of the modern fraud landscape, reshaping every sector from financial services and healthcare to corporate HR and everyday consumer banking. The core technologies enabling this wave — large language models that write hyper-personalized phishing emails, voice cloning tools that replicate any person’s voice from three seconds of audio, deepfake video generators that can put anyone on a live video call, and autonomous AI agents that conduct multi-step fraud operations without human input — have all become commercially accessible, astonishingly cheap, and staggeringly effective within the last two years.
The financial scale of the damage in 2026 is without precedent in the history of white-collar crime. The FBI’s Internet Crime Complaint Center (IC3) released its 2025 Annual Report — its most current edition — recording total cybercrime losses of $20.9 billion in 2025, a 26% increase from the $16.6 billion recorded in 2024, which itself was a 33% increase from 2023. For the first time in the IC3 report’s 26-year history, the FBI formally designated “AI-related” as a crime descriptor, logging over 22,000 complaints and nearly $900 million in AI-attributed losses in that initial accounting — a figure experts widely regard as a dramatic undercount of the true AI-driven share. Generative AI-enabled fraud surged 1,210% in 2025 per Vectra AI’s March 2026 analysis, and projected losses from AI-facilitated fraud in the United States alone are forecast to reach $40 billion by 2027 per Javelin Strategy & Research and AiPrise. This article compiles the most current, rigorously verified AI fraud statistics for 2026 — from deepfake scams and voice cloning to BEC, investment fraud, and the emerging frontiers of agentic AI crime.
📊 Key AI Fraud Facts in 2026 — At a Glance
| AI Fraud Fact | Data Point |
|---|---|
| US cybercrime losses (2025, FBI IC3) | $20.9 billion — record high |
| US cybercrime losses (2024, FBI IC3) | $16.6 billion (+33% from 2023) |
| AI-related complaints (FBI IC3, 2025, first year tracked) | 22,000+ complaints / ~$900 million in losses |
| AI-enabled fraud growth rate in 2025 | +1,210% vs. traditional fraud’s +195% |
| US projected AI fraud losses by 2027 | $40 billion (Javelin / AiPrise) |
| FTC consumer fraud losses (2024) | $12.5 billion — 25% increase YoY |
| US consumer identity fraud losses (2024) | $47 billion (Javelin Strategy & Research) |
| Global scam losses (2024) | $1 trillion (GASA / Sift) |
| AI-generated phishing emails — share of all phishing | Over 82% of all phishing emails |
| AI phishing vs. human-crafted — click-through rates | 4x higher click rate with AI-generated content |
| Deepfake-related US losses (2025) | $1.1 billion — tripled from $360M in 2024 |
| Deepfake-related US losses (Jan–Sep 2025, broader AI-driven scope) | Over $3 billion |
| Global deepfake fraud losses (total to date) | $2.19 billion (Surfshark, April 2026) |
| US share of global deepfake losses | $712 million — most targeted country globally |
| Deepfake vishing attacks surge (Q1 2025 vs. Q4 2024) | +1,600% in the United States |
| Voice clone scam victim loss rate | 77% of targeted victims lost money |
| Voice clone audio needed | Just 3 seconds at 85% voice accuracy |
| BEC losses (2022–2024, cumulative, IC3) | $8.5 billion |
| Investment fraud losses (2024, FBI IC3) | $6.57 billion — largest category |
| Deepfakes as share of global fraudulent activity (2026) | 11% of all global fraud (Sumsub) |
Source: FBI IC3 2025 Annual Report (April 2026), FBI IC3 2024 Annual Report, FTC Consumer Sentinel Network 2024, Javelin Strategy & Research Identity Fraud Study 2024, Surfshark Deepfake Fraud Study (April 2026), Vectra AI (March 2026), Experian Future of Fraud Forecast (January 2026), Sift Digital Trust Index Q2 2025, Sumsub Identity Fraud Report 2025, Keepnet Labs (March 2026)
The twenty facts above establish the most important headline of AI fraud in 2026 without ambiguity: this is not a trend that is arriving — it has arrived, at devastating scale, and it is accelerating. The jump from $16.6 billion to $20.9 billion in FBI-reported cybercrime losses in a single year means more money was stolen from Americans through cyber-enabled crime in 2025 than many individual Fortune 500 companies earn in annual revenue. The 1,210% surge in AI-enabled fraud, outpacing traditional fraud’s already-alarming 195% growth by a factor of more than six, confirms that AI is not merely incrementally improving the efficiency of existing fraud — it is creating entirely new attack categories and scaling existing ones beyond anything that human-only fraud operations could achieve. And the 77% victim loss rate among people targeted by voice cloning scams — meaning that nearly four out of every five people who receive a convincing AI voice clone call and engage with it end up losing money — is perhaps the single most consequential consumer protection statistic in this article.
US Cybercrime Losses & AI Fraud Growth in 2026
📊 US Cybercrime Losses — FBI IC3 Annual Reports (2020–2025)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2020 ████████ $4.1 billion
2021 ██████████████ $6.9 billion
2022 █████████████████████ $10.3 billion
2023 █████████████████████████ $12.5 billion
2024 █████████████████████████████████ $16.6 billion (+33% YoY)
2025 ██████████████████████████████████████████ $20.9 billion (+26% YoY)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
| Cybercrime Losses Metric | Value | Source |
|---|---|---|
| Total US cybercrime losses (2025) | $20.9 billion | FBI IC3 2025 Annual Report |
| Total US cybercrime losses (2024) | $16.6 billion (+33% from 2023) | FBI IC3 2024 Annual Report |
| Total IC3 complaints (2024) | 859,532 complaints | FBI IC3 2024 |
| Total IC3 complaints (2025) | Over 1 million (+17.3% from 2024) | FBI IC3 2025 / SpyCloud |
| Average reported loss per complaint (2024) | $19,372 | FBI IC3 2024 |
| AI-related complaints (FBI IC3, first year tracked, 2025) | 22,000+ complaints | FBI IC3 2025 |
| AI-related losses (FBI IC3 2025) | ~$900 million | FBI IC3 2025 |
| AI-related loss — largest subcategory | Investment fraud: $632 million | FBI IC3 2025 |
| AI-related loss — BEC | $30 million (FBI tracked) | FBI IC3 2025 |
| AI-enabled fraud growth rate (2025) | +1,210% vs. traditional fraud’s +195% | Vectra AI (March 2026) |
Source: FBI IC3 2025 Annual Report (released April 2026), FBI IC3 2024 Annual Report (released April 2025), SpyCloud 2026 Annual Identity Exposure Report, Vectra AI AI Scams Guide (March 2026)
The FBI’s 2025 Internet Crime Complaint Center Annual Report — the most authoritative benchmark for US cybercrime losses, released in April 2026 — marks a watershed moment in the history of cybercrime reporting. For the first time in the IC3’s 26-year history, the FBI formally introduced “AI-related” as a specific crime descriptor, acknowledging what the fraud industry has known for two years: that AI is no longer simply a tool enhancement for existing crimes but a category-defining force reshaping the entire fraud landscape. The initial count of 22,000+ AI-related complaints with nearly $900 million in attributed losses almost certainly reflects a dramatic undercount — because most victims who lose money to an AI-powered phishing email, voice clone call, or deepfake investment scam never identify the AI component when filing their report. Security researchers at SpyCloud and Vectra AI estimate that AI is now a significant contributing factor in the majority of all cybercrime losses reported to the IC3.
The macro trend is relentless. Total FBI IC3 cybercrime losses have climbed from $4.1 billion in 2020 to $20.9 billion in 2025 — a five-fold increase in five years — while complaints crossed one million for the first time ever in 2025, averaging nearly 2,800 per day compared to roughly 2,400 per day in 2024. The $20.9 billion in 2025 losses is roughly double the FBI’s own annual budget, highlighting the vast resource asymmetry between attackers and defenders. Vectra AI’s March 2026 analysis provides the most precise measure of AI’s specific contribution to this acceleration: AI-enabled fraud grew at 1,210% in 2025, compared to traditional fraud’s already elevated 195% growth rate — meaning AI-powered schemes are scaling approximately six times faster than non-AI fraud. The projected trajectory to $40 billion in US AI-facilitated fraud losses by 2027, per Javelin Strategy & Research and AiPrise’s March 2026 report, is not a worst-case scenario — it is the base case built from the current growth rate.
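The growth math behind the IC3 loss series can be reproduced directly from the figures cited in this article; a minimal sketch (the numbers are this article’s cited values, not an independent dataset):

```python
# Reproduce the year-over-year growth figures for the FBI IC3 loss series
# cited above. Values are in billions of USD, as reported in this article.
ic3_losses = {2020: 4.1, 2021: 6.9, 2022: 10.3, 2023: 12.5, 2024: 16.6, 2025: 20.9}

years = sorted(ic3_losses)
for prev, curr in zip(years, years[1:]):
    change = (ic3_losses[curr] / ic3_losses[prev] - 1) * 100
    print(f"{prev} -> {curr}: {change:+.0f}%")

# Overall multiple and compound annual growth rate (CAGR), 2020 to 2025
multiple = ic3_losses[years[-1]] / ic3_losses[years[0]]
cagr = multiple ** (1 / (years[-1] - years[0])) - 1
print(f"2020–2025 multiple: {multiple:.1f}x, CAGR: {cagr:.0%}")
```

Running this confirms the +33% (2024) and +26% (2025) jumps quoted above, and shows the "five-fold increase in five years" works out to a compound growth rate of roughly 39% per year.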
Deepfake Fraud Statistics in the US in 2026
📊 Deepfake Fraud — US & Global Financial Losses (2024–2026)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Global deepfake fraud losses (total, to Mar 2026) $2.19 billion
Global deepfake fraud losses — 2025 alone $1.65 billion
Global deepfake fraud losses — 2026 (to date) $96 million
US deepfake losses (2025 full year) $1.1 billion (tripled from $360M)
US deepfake losses (Jan–Sep 2025) Over $3 billion (broader AI deepfake)
US deepfake losses — H1 2025 $547.2 million
US deepfake losses — Q1 2025 only Over $200 million
US = Most targeted country globally $712 million total (Surfshark, 2019–2026)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
| Deepfake Fraud Metric | Statistic |
|---|---|
| Global deepfake fraud losses (total, Jan 2019–Mar 2026) | $2.19 billion |
| Global deepfake fraud losses (2025 alone) | $1.65 billion |
| Global deepfake fraud losses (2026 to date) | $96 million |
| US deepfake fraud losses (2025 full year) | $1.1 billion — tripled from $360M in 2024 |
| US deepfake fraud losses (H1 2025) | $547.2 million |
| US deepfake fraud losses (Q1 2025) | Over $200 million |
| US deepfake losses (broader AI-driven scope, Jan–Sep 2025) | Over $3 billion |
| US — most targeted country globally (2019–2026) | $712 million total deepfake losses |
| US deepfake losses — corporate sector share | 43% occurred in corporate sector |
| US deepfake losses — family impersonation | $124 million (17% of US total) — 99.9% of global family deepfake losses |
Source: Surfshark Deepfake Fraud Study — “Global deepfake fraud reaches $2.19B — US leads in losses” (April 2026), drawing on AI Incident Database, Resemble.AI, and OECD data (January 2019 to March 2026); Keepnet Labs Deepfake Statistics & Trends (March 2026)
The United States is the most targeted country in the world for deepfake-related fraud — and the data from Surfshark’s April 2026 study makes this quantifiably clear. Of the $2.19 billion in global deepfake fraud losses recorded between January 2019 and March 2026, the US accounts for $712 million — more than any other nation, and a figure that has accelerated dramatically: $1.65 billion of the total $2.19 billion was lost in 2025 alone, illustrating the exponential pace at which deepfake fraud is scaling globally. Within the US, 43% of deepfake losses hit the corporate sector — involving scams where deepfakes impersonated executives or fabricated video call participants to authorize fraudulent wire transfers. The Arup incident, where an engineering firm lost $25.6 million when a Hong Kong-based finance worker was fooled by a fully AI-generated video call featuring a fake CFO and multiple fake colleagues, has become the most closely studied case in corporate deepfake fraud history — an attack that exploited our hardwired trust in visual confirmation.
A particularly alarming US-specific pattern identified by Surfshark is the family impersonation deepfake scam, through which fraudsters create AI-generated audio or video of a victim’s family member in apparent distress — typically claiming to be arrested, injured, or stranded — and demand urgent payment. The US accounts for 99.9% of all globally reported deepfake family scam losses, with $124 million in documented losses within this category alone. This includes the documented case of a Florida mother who was conned out of $15,000 in July 2025 after receiving a call featuring a convincingly AI-cloned version of her daughter’s voice, accompanied by a fake “attorney” demanding bail money. The deepfake vishing attack surge of 1,600% in Q1 2025 compared to Q4 2024 in the US confirms that the scale-up of these scams is not linear — it is explosive, reflecting the rapid democratization of voice cloning tools that require as little as three seconds of source audio to produce an 85% voice accuracy match.
AI Phishing & Business Email Compromise Statistics in the US in 2026
📊 AI Phishing & BEC — Key US Statistics (2024–2026)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI-generated phishing share of all phishing Over 82%
AI phishing — click-through vs. human 4x HIGHER click rate
Phishing open rate (AI emails) 78% of people open them
People clicking malicious links 21% of those who open
Phishing losses (2024, IC3) $70 million (nearly 4x from prior yr)
Phishing losses (2025, IC3) $215.8 million (+208% from 2024)
BEC losses (2024, IC3) $2.77 billion
BEC losses (2022–2024 cumulative, IC3) $8.5 billion
Organizations that experienced BEC (2024)          63%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
| AI Phishing & BEC Metric | Statistic | Source |
|---|---|---|
| AI-generated phishing — share of all phishing emails | Over 82% | Sift Digital Trust Index Q2 2025 |
| AI phishing click-through rate vs. human-crafted | 4x higher | Vectra AI (March 2026) |
| People who open AI-generated phishing emails | 78% | Sift Digital Trust Index Q2 2025 |
| People who click malicious links in those emails | 21% | Sift Digital Trust Index Q2 2025 |
| Phishing email creation speed with AI | 40% faster to craft | Sift Digital Trust Index |
| Phishing losses (2024, FBI IC3) | $70 million (nearly 4x prior year) | FBI IC3 2024 |
| Phishing losses (2025, FBI IC3) | $215.8 million (+208% from 2024) | FBI IC3 2025 / SpyCloud |
| BEC losses (2024, FBI IC3) | $2.77 billion | FBI IC3 2024 |
| BEC losses (2022–2024, cumulative) | $8.5 billion | FBI IC3 / Nacha (April 2025) |
| Organizations experiencing BEC (2024) | 63% | AFP 2025 Fraud and Control Survey |
Source: FBI IC3 2024 Annual Report, FBI IC3 2025 Annual Report (April 2026), Sift Digital Trust Index Q2 2025, Vectra AI AI Scams in 2026 Guide (March 2026), Association for Financial Professionals (AFP) 2025 Fraud and Control Survey, SpyCloud 2026 Annual Identity Exposure Report, Nacha (April 2025)
AI-powered phishing and Business Email Compromise represent the highest-volume, highest-loss categories in the entire US cybercrime landscape — and the numbers confirm that AI has fundamentally broken the defenses that organizations spent years building against these attacks. The fact that over 82% of all phishing emails are now AI-generated per the Sift Digital Trust Index Q2 2025 means that the grammatical errors, generic salutations, and obvious formatting tells that security awareness training taught employees to spot have been largely eliminated. AI-generated phishing emails are personalized to the recipient’s role, reference real internal context scraped from LinkedIn and corporate websites, and are written in tone-perfect prose that reads indistinguishably from genuine internal communications. The consequence is unambiguous: AI-generated phishing achieves a click-through rate four times higher than human-crafted equivalents, and 78% of people open AI-generated phishing emails while 21% actually click malicious links — rates that would have seemed impossible with the clunky phishing emails of even five years ago.
The downstream financial impact is escalating sharply. Phishing losses reported to the FBI IC3 jumped from $70 million in 2024 to $215.8 million in 2025 — a 208% increase in a single year that SpyCloud’s analysis attributes directly to the mainstreaming of phishing-as-a-service (PhaaS) platforms that give technically unskilled criminals access to turnkey AI-enhanced phishing kits. Business Email Compromise losses totaled $2.77 billion in 2024 alone and an accumulated $8.5 billion over the 2022–2024 period per the FBI IC3 and Nacha analysis — making BEC the second-largest single fraud category in the US after investment fraud, and the primary delivery mechanism for the largest individual corporate losses. Per the AFP’s 2025 Fraud and Control Survey, 63% of organizations experienced BEC in 2024, confirming that this is not a threat for the unlucky few but a near-universal operational reality for any organization that processes financial transactions over email.
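The percent-change and cumulative figures above are easy to cross-check; a quick sketch using the article’s cited numbers (the `pct_change` helper is ours, for illustration only):

```python
# Cross-check the phishing and BEC figures cited above (values as reported here).
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new / old - 1) * 100

# IC3 phishing losses: $70M (2024) -> $215.8M (2025)
print(f"Phishing YoY: {pct_change(70, 215.8):+.0f}%")  # matches the cited +208%

# BEC: $8.5B cumulative over 2022-2024 vs. $2.77B in 2024 alone
avg_per_year = 8.5 / 3
print(f"BEC average per year 2022-2024: ${avg_per_year:.2f}B vs. $2.77B in 2024")
```

The second check shows the three-year BEC average (about $2.83 billion per year) sits right at the 2024 figure of $2.77 billion, so the cumulative and single-year numbers are internally consistent.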
Investment Fraud & Crypto AI Scam Statistics in the US in 2026
| Investment Fraud & Crypto Scam Metric | Statistic | Source |
|---|---|---|
| Investment fraud losses (2024, FBI IC3) | $6.57 billion — largest single category | FBI IC3 2024 |
| Cryptocurrency-related IC3 complaints (2024) | 149,686 complaints | FBI IC3 2024 |
| Cryptocurrency-related losses (2024) | $9.32 billion (+66% from 2023) | FBI IC3 2024 / Abnormal AI |
| Crypto investment fraud specifically (2024) | $5.8 billion (41,557 complaints) — 29% rise in cases, 47% jump in losses | FBI IC3 2024 |
| AI-enabled crypto scams: profitability vs. traditional | 4.5x more profitable | Chainalysis / Vectra AI |
| Global crypto scam losses (2025) | $14 billion | Chainalysis 2025 |
| AI Investment fraud losses (FBI IC3 2025, AI-attributed) | $632 million — largest AI subcategory | FBI IC3 2025 |
| “Pig butchering” scam global losses | $12.4 billion | ScamWatchHQ / Chainalysis |
| Global scam losses (GASA survey, 2024) | $442 billion (survey-based; $1 trillion broader) | GASA / Feedzai 2025 |
| AI agents in investment fraud: “Truman Show” operation | 90 AI-generated “experts” directing victims to install fraudulent apps | Check Point / Vectra AI |
Source: FBI IC3 2024 Annual Report, FBI IC3 2025 Annual Report, Chainalysis 2025 Crypto Crime Report, GASA & Feedzai Global State of Scams 2025 (survey of 46,000 adults, 42 countries), Check Point Research, Vectra AI (March 2026), ScamWatchHQ (2025)
Investment fraud powered by AI is the single largest financial loss category in American cybercrime — and the numbers dwarf every other category by a wide margin. The FBI IC3’s 2024 report recorded $6.57 billion in investment fraud losses — more than double the second-place BEC category at $2.77 billion — with $5.8 billion of that directly linked to cryptocurrency investment fraud. The mechanism behind the majority of these losses is the “pig butchering” (shā zhū pán) scam ecosystem, where fraudsters invest weeks or months in building emotional trust with victims through online relationships before steering them toward AI-fabricated investment platforms that show fictitious gains until the victim attempts to withdraw and is either extorted for “taxes” or simply blocked. Global pig-butchering losses have been estimated at $12.4 billion by ScamWatchHQ and Chainalysis, with the US among the most heavily targeted nations. The Chainalysis 2025 Crypto Crime Report found that AI-enabled crypto investment scams are 4.5 times more profitable than traditional non-AI fraud — the most economically significant differential between AI and traditional crime documented anywhere in the literature.
The sophistication ceiling on these scams keeps rising. Check Point Research documented the “Truman Show” operation, in which bad actors deployed 90 distinct AI-generated expert personas simultaneously across controlled messaging groups, each conducting individualized conversations with targets while directing them to install mobile apps seeded with server-controlled fraudulent trading data. This represents the full industrialization of investment fraud — no human scammers managing individual relationships, just AI agents sustaining dozens of convincing “expert” identities in parallel, 24 hours a day, across multiple languages. The FBI IC3’s 2025 report specifically counted AI investment fraud at $632 million in formally AI-attributed losses — the single largest AI crime subcategory — but this figure captures only the cases where victims specifically identified AI involvement, meaning the true AI-driven share of the $6–7 billion annual investment fraud total is almost certainly a majority of the entire category.
Deepfake Scam Types & Victim Demographics in the US in 2026
| Deepfake Scam Type / Victim Metric | Statistic | Source |
|---|---|---|
| Celebrity / government deepfake investment scams | $1.13 billion (52% of all deepfake fraud losses) | Surfshark (April 2026) |
| US deepfake scams using celebrity likenesses | 48% of US deepfake incidents in 2025 | Keepnet Labs (March 2026) |
| US deepfake — corporate sector losses | 43% of total US deepfake losses | Surfshark (April 2026) |
| US deepfake — family impersonation losses | $124 million (17% of US total) | Surfshark (April 2026) |
| Deepfakes as share of global fraudulent activity | 11% of all global fraud | Sumsub Identity Fraud Report 2025 |
| Deepfake files: 2023 vs. 2025 projection | 500,000 → 8 million deepfake files | Keepnet Labs / UK Government |
| Voice clone: audio required | Just 3 seconds at 85% accuracy | DeepStrike.io (September 2025) |
| Voice clone: cost to create | As low as $1 and under 20 minutes | ScamWatchHQ / Vectra AI |
| Voice clone scam victims who lost money | 77% — of targets who confirmed engaging with the clone | AARP / Capital One Shopping Research |
| Americans who believe someone has tried to scam them using AI/deepfakes | 1 in 3 (33%) | Sift Digital Trust Index Q2 2025 |
Source: Surfshark Deepfake Fraud Study (April 2026), Keepnet Labs Deepfake Statistics (March 2026), Sumsub Identity Fraud Report 2025, DeepStrike.io Deepfake Statistics 2025 (September 2025), Sift Digital Trust Index Q2 2025, Vectra AI (March 2026)
The taxonomy of deepfake fraud in 2026 has expanded well beyond the “CEO fraud” scenario that dominated early headlines, and understanding the full range of scam types is essential for both consumer protection and organizational security planning. The dominant financial loss type globally is still celebrity and government impersonation deepfakes used to endorse fraudulent investment opportunities — accounting for $1.13 billion or 52% of all deepfake fraud losses per Surfshark’s comprehensive study drawing on the AI Incident Database and OECD data through March 2026. These scams feature AI-generated video of recognizable public figures — politicians, billionaires, sports stars, actors — appearing to endorse a specific investment platform or cryptocurrency, distributed through social media advertising to reach the broadest possible victim pool. 48% of US deepfake scams in 2025 specifically used celebrity likenesses per Keepnet Labs, reflecting both the effectiveness of celebrity trust proxies and the ease of generating convincing celebrity deepfakes from publicly available video footage.
The family impersonation deepfake — where an AI-generated voice or video of a victim’s child, parent, or spouse claims to be in distress and in need of immediate financial help — is a particularly cruel and effective variant that appears to be concentrated almost entirely in the United States, which accounts for 99.9% of all globally reported losses in this category. The low cost of these attacks makes them especially accessible to opportunistic fraudsters: the Biden deepfake robocall that disrupted the 2024 New Hampshire primary cost just $1 to create and took under 20 minutes — and the same cost structure applies to personal family impersonation scams. The deepfake job candidate category is now a formally documented, institutionally acknowledged threat: the FBI, DOJ, and CISA have documented North Korean state-sponsored IT workers using deepfake technology to obtain employment at more than 136 US companies, with operatives earning over $300,000 per year before escalating to data theft and extortion — a fraud category that blends financial crime with national security compromise.
AI Fraud by Victim Demographics in the US in 2026
| Victim Demographics Metric | Statistic | Source |
|---|---|---|
| Adults 60+: total IC3 losses (2024) | $4.885 billion — single largest victim cohort | FBI IC3 2024 |
| Adults 60+: IC3 complaints (2024) | 147,127 complaints (+46% from 2023) | FBI IC3 2024 |
| Adults 60+: average loss per victim | $83,000 per affected individual | FBI IC3 2024 |
| Adults 60+ losing over $100,000 | 7,500 individuals in 2024 | FBI IC3 2024 |
| Gen Z / Millennials: scam victimization rate | Higher than Boomers despite higher AI awareness | Sift Digital Trust Index |
| Consumers saying scams harder to spot (2025) | 70% — vs. prior year | Sift Digital Trust Index Q2 2025 |
| Americans confident they could identify an AI scam | 33% (1 in 3) — yet 20% admitted falling for phishing | Sift Digital Trust Index |
| Successfully defrauded by AI scam (2025) | 27% of those targeted — up from 19% prior year | Sift Digital Trust Index Q2 2025 |
| Older adults (60+): online scam losses (2023) | Over $3.4 billion in 2023 alone (+11% from 2022) | Keepnet Labs / FTC |
| Identity theft reports to FTC (2024) | 1.1 million reports | FTC via AiPrise (March 2026) |
Source: FBI IC3 2024 Annual Report, Sift Digital Trust Index Q2 2025, FTC Consumer Sentinel Network 2024, Keepnet Labs (March 2026), AiPrise via Fintech Global (March 2026)
Older Americans bear the heaviest financial burden of AI-enabled cybercrime — a pattern that has intensified every year and reached a new extreme in 2024. Adults aged 60 and over filed 147,127 complaints with the FBI IC3 in 2024 — a 46% increase from 2023 — and suffered $4.885 billion in total losses, representing the largest loss amount of any demographic group. The average loss per affected individual aged 60 or older was $83,000, and 7,500 individuals in this cohort each lost over $100,000 in a single year. These numbers reflect a population that is disproportionately targeted because of the perceived wealth concentration in retirement assets, the relative unfamiliarity with AI-generated content manipulation, and the higher susceptibility to tech support fraud, government impersonation scams, and romance fraud — all of which are now being significantly enhanced by AI voice cloning and deepfake video. The FBI explicitly flagged people over 60 as the demographic suffering the most losses and submitting the highest number of complaints for the second consecutive year.
The generational paradox in AI fraud victimization is one of the most counterintuitive findings in the 2025–2026 research literature: Gen Z and Millennials — the most AI-literate, digitally native generations — are also falling victim to AI scams at higher rates than older generations per Sift’s Digital Trust Index. This appears to be a confidence effect: younger consumers are more likely to engage with AI-generated content generally, and their familiarity breeds a false sense of security about their ability to identify fraudulent AI content. 33% of Americans say they are confident they could spot an AI-generated scam — yet 20% of the same respondents admitted to falling for phishing attacks in the past year. The fraction of targeted individuals who were successfully defrauded rose from 19% to 27% in a single year (2024 to 2025 per Sift), confirming that the improvement in AI scam quality is outpacing improvements in consumer detection capability across all age groups.
Corporate AI Fraud, Identity Fraud & Emerging Threats in the US in 2026
| Corporate & Emerging Threat Metric | Statistic | Source |
|---|---|---|
| Companies reporting increased fraud losses (2024–2025) | ~60% | Experian / FTC (January 2026) |
| Companies that boosted fraud prevention budgets (2025) | More than 70% | AiPrise / Fintech Global (March 2026) |
| Business leaders: AI fraud is top 2026 challenge | 72% | Experian Future of Fraud Forecast (January 2026) |
| Consumers expecting stronger online safeguards | 80% | AiPrise / Fintech Global (March 2026) |
| Gartner: IDV solutions unreliable standalone by 2026 | 30% of enterprises no longer trust them alone | Gartner / DeepStrike.io |
| Gartner: fake candidate profiles by 2028 | 1 in 4 candidate profiles could be fake | Gartner / Vectra AI |
| GenAI-enabled scam growth (May 2024 – April 2025) | +456% | Sift Digital Trust Index Q2 2025 |
| Breached personal data growth (Q1 2025) | +186% YoY | Sift Digital Trust Index Q2 2025 |
| Phished identity records (2025, SpyCloud) | 28.6 million records recaptured from criminal underground | SpyCloud 2026 |
| Experian fraud prevention: losses avoided globally (2025) | $19 billion in client fraud losses avoided | Experian (January 2026) |
Source: Experian Future of Fraud Forecast (January 2026), AiPrise via Fintech Global (March 2026), Sift Digital Trust Index Q2 2025, Gartner (via DeepStrike.io and Vectra AI), SpyCloud 2026 Annual Identity Exposure Report
The corporate response to AI fraud in 2026 is being shaped by a combination of financial urgency and strategic uncertainty — financial urgency because roughly 60% of US companies reported increased fraud losses between 2024 and 2025, and strategic uncertainty because the speed of AI-powered attack innovation is outpacing the development of reliable defensive countermeasures. 72% of business leaders identified AI-enabled fraud and deepfakes as among their top operational challenges for 2026 per Experian’s January 2026 Future of Fraud Forecast — the most credible annual industry benchmark — and more than 70% responded by increasing their fraud prevention budgets. At the same time, 80% of consumers say they expect stronger safeguards from the businesses they interact with, creating mounting accountability pressure. The most significant structural vulnerability indicator comes from Gartner’s prediction that by 2026, 30% of enterprises will no longer consider standalone identity verification and authentication solutions to be reliable — because AI-generated synthetic identities, deepfakes that pass liveness checks, and AI-fabricated documentation can now defeat the majority of single-layer verification systems that most organizations currently rely on.
The emerging threat categories flagged by Experian’s 2026 forecast deserve specific attention because they represent the next wave of AI fraud that organizations are currently underprepared for. Agentic AI fraud — where autonomous AI systems conduct multi-step fraud operations including account creation, relationship building, phishing, and fund extraction without any human operator — is projected to be the defining fraud threat of the near-term future. AI-powered website cloning is already overwhelming fraud teams as AI tools make it trivially easy to replicate legitimate banking, retail, and government websites with pixel-perfect accuracy that defeats human visual inspection. AI romance fraud bots capable of sustaining emotionally intelligent, personalized long-term relationships with dozens of victims simultaneously — without any human scammer behind the keyboard — are at the frontier of the “bots that break hearts and bank accounts” threat category that Experian has specifically identified as a top 2026 emerging risk. Together, these emerging categories confirm that the AI fraud landscape of 2027 will make 2026 look manageable — unless the investment in AI-powered detection, behavioral analytics, and multi-layered verification infrastructure accelerates to match the pace of the threat.
Disclaimer: This research report is compiled from publicly available sources. While reasonable efforts have been made to ensure accuracy, no representation or warranty, express or implied, is given as to the completeness or reliability of the information. We accept no liability for any errors, omissions, losses, or damages of any kind arising from the use of this report.

