IT Leaders Can’t Afford to Ignore ‘Cheapfakes’
Key Takeaways
- Cheapfakes can be as credible as AI-driven deepfakes: Simple media manipulations can be just as believable as sophisticated AI fabrications, making them a persistent threat.
- Emotional vulnerabilities drive cheapfake success: These low-tech attacks exploit confirmation bias, fear, and other emotions, making individuals susceptible regardless of the attack’s technical sophistication.
- Verify-before-trusting remains the cornerstone defense: Organizations must train employees to question sensational or divisive content, especially when it aligns with their biases or creates a sense of urgency.
Cybercriminals are embracing AI-powered tools like large language models (LLMs) and deepfakes, but they haven’t abandoned easier tactics that have already proven effective. Many attacks still rely on “cheapfakes”: media that has been edited or mislabeled to trick people into believing it’s legitimate.
Cheapfakes require less skill and fewer resources than deepfakes, yet they remain highly effective. While organizations chase sophisticated AI threats, attackers keep winning with simple manipulations.
This reminds us that sometimes ‘going back to basics’ is a valid strategy. IT and security leaders need to reinforce one core principle with their teams: Verify before you trust.
What’s the Difference Between Cheapfakes and Deepfakes?
Cheapfakes edit or mislabel real media using simple tools like Photoshop or video editors. You don’t need technical AI skills to make these. Deepfakes use AI to create entirely synthetic media from scratch, requiring more technical skill. Both are convincing to viewers, but cheapfakes are far more common because they’re easier and faster to produce.
What Makes Cheapfakes So Dangerous?
Cheapfakes are forms of media that have been mislabeled or subtly edited to convince people that fabricated content is real. Unlike deepfakes that use AI to create synthetic media from scratch or fundamentally change existing content, cheapfakes slightly alter or reframe authentic content, which may make them seem more credible to viewers.
5 Common Cheapfake Techniques:
These manipulations don’t require advanced skills. All you need are some basic editing tools and an understanding of how to exploit human psychology:
- Slowing down or speeding up video to distort behavior
- Using real images with false captions about time or location
- Editing authentic media with basic tools like Photoshop
- Cropping content to remove important context
- Mislabeling old footage as recent events
Because cheapfakes start with authentic content, many viewers are inclined to believe they’re seeing the whole truth. In 2024, selectively edited videos of then-President Biden at public events flooded social media. Some of these included clips taken out of context to create false impressions about his fitness for office. These simple manipulations fooled thousands of viewers despite requiring no sophisticated AI tools.
These methods get used in cyberattacks to fool employees into clicking malicious links, engaging with fraudulent content, or making security decisions based on false information.
Why Smart People Fall for Simple Tricks
Cheapfakes work because they exploit universal emotional vulnerabilities. They may also leverage confirmation bias to make their manipulation tactics more effective.
The Confirmation Bias Problem
Many cheapfakes exploit confirmation bias, which is our tendency to accept information that reinforces what we already believe. We tend to consume edited content less critically when it generates outrage or validates our assumptions. This is why false news spreads more quickly than accurate information on social media.
A 2024 study found that cheapfakes “can be at least as credible as more sophisticated forms of artificial intelligence-driven audiovisual fabrication.” The researchers discovered that simple manipulations work because they:
- Present sensational content that grabs attention
- Stoke social or political division
- Flatter our preconceptions and biases
- Create emotional urgency that bypasses critical thinking
IT and security leaders need to recognize that these emotional susceptibilities are universal. Technical expertise doesn’t make anyone immune to psychological manipulation. Every person needs training to identify when their biases, fears, or other traits are being used against them.
Building Defenses Against Low-Tech Threats
You can’t rely solely on technical cyber defenses to stop cheapfakes, because stopping them ultimately depends on each person’s ability to assess the authenticity of the media they consume. Defending against cheapfakes requires changing how people evaluate information, especially when it triggers strong emotional responses. One such defense is the ‘verify-before-you-trust’ mindset.
Train the ‘Verify-Before-You-Trust’ Mindset
Employees should question sensational or divisive content immediately. Focus training on three critical red flags:
- Content That Triggers Emotions First: When content makes you angry, fearful, or outraged before you’ve thought it through, pause. Do not share the media immediately. Cheapfakes work by hijacking emotional responses to bypass critical thinking.
- Perfect Alignment with Your Beliefs: Be most skeptical of content that confirms exactly what you already think. Confirmation bias makes us accept manipulated media without questioning its authenticity.
- Missing Context or Attribution: No clear source? No verifiable timestamp? No credible attribution? These gaps often hide manipulation. Legitimate content includes context; cheapfakes strip it away.
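For teams that handle media files directly, one piece of the verify-before-trust principle can even be automated: when a file is distributed through an official channel that also publishes a checksum, comparing hashes confirms the file hasn’t been altered, though it can’t catch mislabeled context. A minimal Python sketch, with hypothetical stand-in file contents:

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(received: bytes, official_digest: str) -> bool:
    """True only if the received bytes match a digest published
    through an official, trusted channel."""
    # compare_digest performs a constant-time string comparison
    return hmac.compare_digest(sha256_of(received), official_digest)

# Illustrative stand-ins for real media files
original = b"authentic video bytes"
tampered = b"authentic video bytes, subtly recut"

official = sha256_of(original)
print(is_unaltered(original, official))   # True
print(is_unaltered(tampered, official))   # False
```

Note the limitation: a hash check proves integrity, not truth. A perfectly intact video can still be a cheapfake if its caption lies about when or where it was filmed.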
See how organizations build resistance to manipulation tactics: Explore our case studies showing real-world results from human risk management programs.
Address Individual Vulnerabilities in Your Cybersecurity Awareness Training
Generic cybersecurity awareness training treats all individuals the same, but people have different psychological and emotional vulnerabilities.
Some tend to fall for authority-based deceptions; others are more easily manipulated through fear or curiosity. Personalized security coaching targets individual susceptibilities instead of teaching generic defenses against cheapfakes and other scams.
Human risk management programs identify which psychological triggers work on each employee through personalized security coaching, then deliver targeted training to build resistance to those specific manipulation tactics.
The Bigger Picture: Simple Tactics Still Work
Organizations have begun to invest in AI-powered cybersecurity and threat detection tools, but individuals remain susceptible to less sophisticated attacks like cheapfakes all the same.
The emotional and psychological vulnerabilities that make cheapfakes work also make individuals susceptible to phishing, pretexting, and other forms of social engineering attacks.
Organizations that address these human risks through cybersecurity awareness training and continuous assessment build stronger defenses than those chasing technical solutions alone. The technology will change, but human nature will not.
Ready to address the human vulnerabilities that cheapfakes exploit? Get a demo to see how NINJIO’s human risk management program builds critical thinking skills and resistance to manipulation tactics across your workforce.
Frequently Asked Questions
Q: What’s the difference between cheapfakes and deepfakes?
A: Deepfakes use AI to create entirely synthetic media (fake videos, voices, images). Cheapfakes edit or mislabel real media using simple tools. Both can be equally convincing, but cheapfakes require far less technical skill to create.
Q: How can employees tell if content has been manipulated?
A: Look for context clues, verify sources, check publication dates, search for the original media, and be suspicious of content that triggers strong emotions or confirms existing beliefs. When in doubt, verify through official channels before sharing or acting.
Q: Are cheapfakes used in targeted attacks against organizations?
A: Yes. Attackers use cheapfakes in spear phishing campaigns, business email compromise, and social engineering attacks. They might alter screenshots of legitimate communications or create false urgency around manipulated media to trick employees into harmful actions.
Q: Why do cheapfakes work better than people expect?
A: They exploit confirmation bias and emotional triggers. When content aligns with what we already believe or creates strong feelings, our critical thinking diminishes. Starting with authentic media also makes cheapfakes seem more credible than completely fabricated content.
Q: How should organizations train employees to recognize cheapfakes?
A: Focus on building critical thinking habits rather than teaching people to spot specific manipulation techniques. Train employees to pause when content triggers emotions, verify sensational claims, and question whether media aligns suspiciously well with their biases.
Q: Can technical controls detect cheapfakes before employees see them?
A: Some tools can flag manipulated media, but cheapfakes often evade automated detection because they use authentic source material. Employee training and verification protocols remain the most reliable defense against these low-tech manipulations.
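As a rough illustration of how such tools flag near-duplicates, perceptual hashing reduces an image to a coarse brightness fingerprint; a small Hamming distance between two fingerprints suggests one image is a lightly edited copy of the other. This is a toy pure-Python “average hash” sketch on tiny grayscale grids (real detectors work on full-size images with more robust hashes):

```python
def average_hash(pixels):
    """Map each pixel to 1 if it is brighter than the image's mean
    brightness, else 0. `pixels` is a 2D list of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits; small distances suggest the two images
    share the same source material."""
    return sum(x != y for x, y in zip(a, b))

original = [
    [10, 200, 30, 220],
    [15, 210, 25, 230],
    [12, 205, 35, 215],
    [18, 195, 28, 225],
]
# A crude "cheapfake": the same image with one region brightened
edited = [row[:] for row in original]
edited[0][0] = 250

d = hamming(average_hash(original), average_hash(edited))
print(d)  # 1 -- a small distance, so the edit is flagged as a near-duplicate
```

This also shows why cheapfakes that reuse authentic footage with a false caption slip past such checks entirely: the pixels are unchanged, so the fingerprints match perfectly.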
About NINJIO
NINJIO reduces human-based cybersecurity risk through engaging training, personalized testing, and insightful reporting. Our multi-pronged approach to training focuses on the latest attack vectors to build employee knowledge and the behavioral science behind human engineering to sharpen users’ intuition. The proprietary NINJIO Risk Algorithm™ identifies users’ social engineering vulnerabilities based on NINJIO Phish3D phishing simulation data and informs content delivery to provide a personalized experience that changes individual behavior.