Why Human Risk Is the Real Cybersecurity Battleground
An interview with former FBI counterintelligence operative and cybersecurity expert Eric O’Neill
For most organizations, cybersecurity is still framed as a technology problem. Firewalls, endpoint protection, vulnerability scans, and AI-driven detection tools dominate the conversation in boardrooms and security operations centers alike. Yet some of the most damaging cyber incidents in recent years have had very little to do with software vulnerabilities, and everything to do with human behavior.
Few people understand that reality better than Eric O’Neill. A former FBI undercover operative who helped capture notorious Russian spy Robert Hanssen—the most damaging spy in FBI history—O’Neill has spent decades studying how espionage tactics evolve and how adversaries exploit trust inside organizations. His work investigating Hanssen became the basis for the film Breach and his book Gray Day. In his newest book, Spies, Lies and Cyber Crime, O’Neill explores how the tactics of traditional espionage have migrated into modern cybercrime.
Today, cybercriminals operate much like intelligence agencies. They conduct reconnaissance, identify human vulnerabilities, craft deceptive narratives, and infiltrate organizations through manipulation rather than brute-force technical attacks. In many cases, attackers don’t “hack” computers at all—they simply trick people.
For CISOs and security leaders, this shift has profound implications. If cybercrime increasingly mirrors espionage, defending organizations requires more than stronger technology stacks. It requires understanding how humans make decisions under pressure, how attackers exploit trust, and how leadership can build cultures that reduce human risk.
In this conversation with NINJIO, O’Neill explains why modern cyberattacks target people rather than machines, how AI-driven deception is changing the threat landscape, and why organizations must treat employees as the first line of cyber defense.
NINJIO: What parallels do you see between classic espionage and today’s human-driven cyber threats?
Eric O’Neill: This idea is the foundation of my new book, Spies, Lies and Cyber Crime. My theory is simple: there really aren’t hackers, there are spies.
Espionage has always been about accessing information. Today that information lives inside networked computer systems, which means the tactics spies used for decades have simply migrated into the digital world. Cybercriminals have essentially copied the espionage playbook. They conduct reconnaissance, learn about their target, and then deceive a person to gain access. Once inside, they move laterally and quietly across systems to steal information—or destroy it.
The biggest misconception people have about cyberattacks is that one computer is hacking another computer. In reality, the internet is often just the delivery system to reach a person and manipulate them.
NINJIO: During the Robert Hanssen investigation, failure wasn’t an option. What did that experience teach you about behavioral risk inside trusted organizations?
Eric O’Neill: The Hanssen case showed that trusted insiders will always exist. Hanssen spied for Russia for more than twenty years while working inside the FBI. My job was to go undercover inside headquarters, gain his trust, and find the evidence needed to arrest him.
That experience revealed something important: organizations must watch not only for insiders who intentionally betray trust, but also what I call “virtual trusted insiders.” These are people whose credentials have been compromised by attackers. Criminals steal usernames, passwords, and multi-factor authentication credentials, then operate inside networks using that person’s identity.
Eric O’Neill: Whether it’s a traitor on the inside or an attacker using stolen credentials, the solution is the same: understand your data. Know who is accessing it, when, from where, and why. Most importantly, context is everything.
NINJIO: Many executives still view cybersecurity as purely a technology problem. Why is that mindset increasingly dangerous?
Eric O’Neill: In reality, most breaches happen when a person is deceived—not when a computer is compromised. Take the MGM attack in Las Vegas. The attackers didn’t hack a system. They made a phone call to the help desk and convinced an employee to reset credentials for an engineer. That single act caused more than $100 million in damage. One human interaction and no code. That’s why cybersecurity has to be approached as both a technical and a people problem.
NINJIO: AI is accelerating social engineering. What should organizations be preparing for?
Eric O’Neill: There is no doubt that AI is changing everything. We’re already seeing deepfake video conferences where attackers impersonate executives using AI-generated avatars. In one case, a financial manager joined a video call with what appeared to be his CFO and colleagues. Over two weeks he wired $25 million—because every person in the meeting looked real. Every “colleague” on that call was AI-generated.
We’re entering a world where you can’t trust what you see or hear online. Organizations need verification processes and cultures where employees feel comfortable questioning unusual requests.
NINJIO: The Allianz Risk Barometer ranks cyber incidents as the top global business risk. What finally gets a board’s attention?
Eric O’Neill: It always comes down to cost. The cost to remediate a breach is often ten times higher than the cost of preventing it. When I talk to boards, I explain the ROI of vulnerability assessments and human risk training. Organizations usually discover they’re overspending on redundant technologies while ignoring the human side of cybersecurity. Once executives see how prevention lowers insurance costs and reduces reputational risk, the conversation becomes much easier.
NINJIO: You’ve seen firsthand how one individual can compromise an institution. How do you help leaders understand human behavior as an attack surface?
Eric O’Neill: I’m a “show, don’t tell” person. Stories are the most powerful teaching tool. People remember stories, not PowerPoint slides or policy documents. My approach to “showing” follows three rules, with stories as the foundation:
- Entertain – If you’re not entertaining, nobody listens.
- Inform – Stories help people absorb complex information.
- Inspire – The goal is to change how people think about security.
When leaders understand how real attacks unfold, they begin to see employees differently—not as liabilities, but as defenders.
NINJIO: Where do CISOs most often lose the C-suite when advocating for human risk management?
Eric O’Neill: When they drown executives in numbers. Executives respond to stories and consequences. Tell them what happened to organizations that ignored these risks. Explain the reputational damage, the lawsuits, the regulatory fallout. Then compare the cost of prevention with the cost of recovery. That’s when the lightbulb goes on.
NINJIO: What does accountability look like in human risk management without creating a culture of fear?
Eric O’Neill: Positive reinforcement is very important. I once visited a security operations center where employees proudly displayed individually wrapped Swedish Fish candies on their desks. Why? Because the CISO gave one to anyone who caught and reported a phishing email. That small reward became a badge of honor. It’s a brilliant example of reinforcing the behavior you want instead of punishing mistakes.
NINJIO: What language resonates most with CEOs and CFOs when discussing human risk?
Eric O’Neill: I describe cybersecurity as a mosaic: technology, training, and tools all come together to create the complete picture. But if you ignore human risk, you’re leaving a massive gap. Every employee is essentially manning the walls of your organization’s defenses.
Attackers know that—and that’s where they aim.
NINJIO: Looking ahead, what separates organizations that manage human risk effectively?
Eric O’Neill: The ones that ignore human risk will eventually be breached. Cybercriminals don’t care who you are—they only care whether you’re vulnerable. Ignoring human risk leads to reputational damage, lawsuits, regulatory penalties, massive remediation costs, and sometimes bankruptcy for smaller organizations. It’s simply not worth the risk.
About Eric O’Neill
Eric O’Neill is a former FBI counterintelligence operative who helped capture Robert Hanssen, the most damaging spy in FBI history. He is the founder of The Georgetown Group, a premier security services firm, and serves as a national security strategist to cybersecurity companies. O’Neill is also the founder of the advisory firm Nexasure, a celebrated keynote speaker, and the author of Gray Day and Spies, Lies and Cyber Crime. To learn more about Eric O’Neill, visit ericoneill.net.
Frequently Asked Questions
Q: Why do modern cyberattacks target people rather than technology?
A: Because attackers have realized it’s often easier to manipulate a person than to break through hardened technology. Social engineering tactics like phishing, vishing, and impersonation exploit trust, urgency, and human decision-making rather than technical vulnerabilities.
Q: What does O’Neill mean when he says “there aren’t hackers, there are spies”?
A: It reflects the idea that today’s cybercriminals operate like traditional intelligence agents. They gather information, study their targets, and use deception to gain access. The “hack” is often just the final step in a longer process of human manipulation.
Q: How is AI changing social engineering?
A: AI is making attacks more convincing and scalable. From deepfake video calls to highly personalized phishing messages, attackers can now impersonate trusted individuals with alarming realism. This makes verification processes and employee skepticism more critical than ever.
Q: What is a “virtual trusted insider”?
A: A virtual trusted insider is an attacker using stolen credentials to operate inside an organization as if they were a legitimate employee.
Q: How can organizations hold employees accountable without creating a culture of fear?
A: By focusing on positive reinforcement instead of punishment. Encouraging and rewarding behaviors like reporting phishing attempts helps build engagement and awareness. When employees feel empowered rather than blamed, they become active defenders of the organization.
About NINJIO
NINJIO’s human risk management platform reduces cybersecurity risk through personalized security coaching, engaging awareness training, and adaptive testing. Our multi-pronged approach to risk mitigation focuses on the latest attack vectors to build employee knowledge and the behavioral science behind social engineering to sharpen users’ intuition. Our simulated phishing and coaching tools build a proprietary Emotional Susceptibility Profile for each user to identify their specific social engineering vulnerabilities and change behavior.