FBI: Why 85% of Cybercrime Losses in 2025 Point to a Human Problem
Key Takeaways
- 85% of losses reported in the FBI’s 2025 IC3 report came from fraud: Social engineering drives most cybercrime damage, not technical failures.
- AI is making cyberattacks harder to catch: Voice cloning and deepfakes give social engineering a credibility upgrade.
- The human layer is the largest gap: No technical control can stop every employee who’s been successfully manipulated.
Every year, the FBI's Internet Crime Complaint Center (IC3) publishes its annual report, and the total loss figure climbs. In 2025, total cybercrime losses amounted to $20.8 billion, but the drivers behind those losses tell a different story than most cybersecurity leaders are walking away with.
What Does the FBI’s 2025 Internet Crime Report Show?
Cyber-enabled fraud accounted for 85% of all IC3-reported losses in 2025. IC3 defines cyber-enabled fraud as criminal schemes that use the internet or digital technology to deceive victims into transferring money, data, or access. Roughly $17.7 billion of last year’s losses trace back to manipulation instead of technical system exploits.
- 452,868 complaints were filed under cyber-enabled fraud categories in 2025, accounting for 45% of all IC3 complaints that year
- 85% of total dollar losses traced back to those same fraud categories, despite representing less than half of all complaints
The FBI IC3 report shows that fraud dominates cybercrime in 2025 by both volume and financial damage. This data makes a strong case for cybersecurity awareness training, both at work and in general. Individuals who can recognize and resist manipulation are far more likely to stop a social engineering attack.
Which Types of Cybercrime Caused the Most Financial Damage in 2025?
The five highest-loss crime categories reported to FBI IC3 in 2025 were all social engineering attacks. These attacks are schemes designed to exploit human fear, urgency, or obedience instead of technical system vulnerabilities.
| Crime Type | Reported Losses in 2025 |
| --- | --- |
| Investment Fraud | $8.6 billion |
| Business Email Compromise (BEC) | $3.0 billion |
| Tech Support Scams | $2.1 billion |
| Confidence / Romance Fraud | $929 million |
| Government Impersonation | $797 million |
Table 1: Top 5 Cyber-Enabled Fraud Categories by Reported Loss (FBI IC3, 2025)
Every category in the top five requires a person to make a decision — to wire funds, click a link, grant access, or trust a stranger. None of the top five cyber-enabled fraud crimes hinged on an unpatched vulnerability or a misconfigured server.
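To put Table 1 in perspective, the five rounded figures can be summed and compared against the report's totals. This is a minimal Python sketch using only the numbers quoted in this article ($17.7 billion in fraud losses, $20.8 billion overall):

```python
# Top five cyber-enabled fraud categories (FBI IC3, 2025), in billions of USD,
# using the rounded figures from Table 1.
top_five = {
    "Investment Fraud": 8.6,
    "Business Email Compromise (BEC)": 3.0,
    "Tech Support Scams": 2.1,
    "Confidence / Romance Fraud": 0.929,
    "Government Impersonation": 0.797,
}

top_five_total = sum(top_five.values())  # ~$15.4 billion
fraud_total = 17.7                       # all cyber-enabled fraud losses
all_losses = 20.8                        # total IC3-reported losses

print(f"Top five combined:     ${top_five_total:.1f}B")
print(f"Share of fraud losses: {top_five_total / fraud_total:.0%}")   # 87%
print(f"Share of all losses:   {top_five_total / all_losses:.0%}")    # 74%
```

In other words, these five manipulation-driven categories alone account for roughly 87% of cyber-enabled fraud losses and about three-quarters of all reported cybercrime losses.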
What is Business Email Compromise (BEC)?
BEC is a fraud scheme in which attackers impersonate executives, vendors, or colleagues via spoofed or compromised email accounts to authorize fraudulent wire transfers or redirect payments.
How Did Social Engineering Losses Compare Between 2024 and 2025?
Social engineering losses rose across most major categories reported in 2025, with some nearly tripling compared to the year prior. Some notable increases include:
- BEC grew from $2.77 billion to $3.04 billion
- Tech support scams climbed from $1.46 billion to $2.13 billion
- Phishing and spoofing more than tripled, from $70 million to $215 million
- Government impersonation almost doubled, from $405 million to $797 million
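The year-over-year growth implied by those figures is easy to reproduce. The short Python sketch below uses the rounded dollar amounts listed above:

```python
# Year-over-year losses (2024 -> 2025) in billions of USD, rounded figures
# as reported in the FBI IC3 data cited above.
losses = {
    "Business Email Compromise": (2.77, 3.04),
    "Tech Support Scams": (1.46, 2.13),
    "Phishing / Spoofing": (0.070, 0.215),
    "Government Impersonation": (0.405, 0.797),
}

for category, (y2024, y2025) in losses.items():
    growth = (y2025 - y2024) / y2024
    print(f"{category}: {growth:+.0%}")
```

Running this shows BEC up about 10%, tech support scams up about 46%, phishing and spoofing up about 207%, and government impersonation up about 97% in a single year.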
The growth rate of these social engineering attacks signals that current cyber defenses are not keeping pace with the financial damage they inflict. It also raises the question of whether organizations are targeting the right layer of risk: the human layer.
Why Can’t Technical Cybersecurity Tools Stop These Attacks?
Security tools are built to detect and block technical threats, but social engineering attacks bypass that layer by targeting human emotions.
The Seven Emotional Triggers Behind Every Successful Social Engineering Attack
In NINJIO’s The Unhackable Workforce Report, we found that every social engineering attack exploits at least one of seven core human emotions: fear, urgency, obedience, curiosity, opportunity, greed, and sociableness.
These attacks are effective because an individual’s emotional responses are deeply instinctive. Cybercriminals may layer multiple emotional triggers in a single social engineering attempt to compound pressure and reduce the window for rational decision-making.
What a Multi-Trigger Attack Looks Like
A phishing email impersonating a CEO that threatens consequences unless a deadline is met immediately combines obedience with fear and urgency. Each emotion reinforces the others, making it significantly harder to pause and verify before acting.
Cybersecurity Tools Have No Visibility Into Psychological Vulnerability
A technical control such as an email gateway evaluates sender reputation, domain authenticity, and payload signatures. It has no mechanism, however, to assess whether the individual using the email client is highly susceptible to obedience-based pressure in that moment, or whether urgency cues cause them to skip verification steps that happen outside the digital channel.
Closing this behavioral gap requires understanding each person's specific emotional vulnerabilities, and training them to recognize the feeling of being manipulated before they act on it.
How Is Generative AI Making Social Engineering More Dangerous?
Generative AI is lowering the barrier to executing convincing social engineering attacks. In 2025, the FBI received more than 22,000 complaints with a reported AI connection, reflecting over $893 million in losses.
AI-enabled social engineering tactics emerging from this data include:
- AI-generated BEC emails: Cybercriminals use AI chat tools to draft polished, executive-sounding messages that mimic internal communication styles. This makes fraudulent wire transfer requests harder to distinguish from legitimate ones.
- Voice cloning in distress scams: Cybercriminals clone the voices of family members, colleagues, or executives to fabricate urgent, emotionally charged scenarios. Victims act quickly because the voice they hear sounds like someone they trust.
- AI-fabricated investment personas: Fraudulent investment clubs use AI-generated video and voice content to impersonate celebrities or CEOs. Fake endorsements like this create a false sense of credibility before victims are defrauded.
What This Means for Employees
Social engineering attempts are becoming more personalized, more polished, and more difficult to catch in real time. Someone who could previously spot a phishing email by its poor grammar or generic phrasing now faces attacks that are grammatically perfect, contextually relevant, and emotionally sophisticated.
What Should Organizations Do to Address Human Risk in Cybersecurity?
Cybersecurity awareness training is not a soft skill program. When 85% of reported fraud losses trace back to individuals being manipulated, the human layer of defense deserves the same category of investment as any other critical security infrastructure. The data from the FBI IC3 report translates to this: Most organizations would not operate without a firewall. Running without a structured, continuously updated awareness training program carries a comparable level of exposure.
Addressing this means moving beyond annual compliance training toward programs that adapt to current social engineering attacks. In practice, that looks like:
- Delivering cybersecurity awareness training at least once a month to keep individuals current on active attack vectors
- Running simulated phishing campaigns to establish how each individual responds to different emotional triggers
- Using those results to deliver personalized security coaching that targets each person’s specific vulnerabilities
With this structure, individuals learn both what an attack looks like and what manipulation feels like, building your organization’s cyber defense from the human layer up.
Want to see how organizations are building stronger human defenses against social engineering with NINJIO? Get a free demo today.
Frequently Asked Questions
Q: How often should simulated phishing campaigns run?
A: Simulated phishing should run once or twice a month to give your organization a more accurate and current picture of individual vulnerabilities than periodic or one-off testing.
Q: How do you measure whether cybersecurity awareness training is working?
A: Effective cybersecurity awareness programs track behavioral change, specifically whether individuals are reporting suspicious activity and responding differently to simulated phishing campaigns over time.
Q: How do you justify the budget for an awareness training program?
A: Frame the cybersecurity awareness program as critical infrastructure alongside your tech stack investment. The FBI's 2025 data shows the majority of cybercrime losses trace to human behavior, making monthly, updated cybersecurity awareness training a risk management priority instead of a cost to cut.
Q: Where should an organization start with human risk management?
A: Start with a phishing simulation baseline. Seeing how individuals respond to different emotional triggers gives you the data to provide personalized security coaching where it will have the most impact.
Q: Can awareness training eliminate social engineering risk entirely?
A: No program eliminates risk entirely, but organizations using continuous cybersecurity awareness training alongside personalized security coaching report measurable reductions in risky behaviors like simulated phishing click-throughs.
About NINJIO
NINJIO’s human risk management platform reduces cybersecurity risk through personalized security coaching, engaging awareness training, and adaptive testing. Our multi-pronged approach to risk mitigation focuses on the latest attack vectors to build employee knowledge and the behavioral science behind social engineering to sharpen users’ intuition. Our simulated phishing and coaching tools build a proprietary Emotional Susceptibility Profile for each user to identify their specific social engineering vulnerabilities and change behavior.