Every edition hits your inbox like a surprise caffeinated squirrel in your car seat—fast, unpredictable, and strangely motivating.

Cybersecurity Sales Teams: We are your black-ops sales briefing capability — unauthorized, unfiltered, and over-caffeinated

Spread the sales insights (assuming your friends are deserving) — dailyraptor.com.

Tuesday, June 17th, 2025 Edition

Trust Is the New Attack Surface

Seeing is no longer believing

The Spy Penetrated the Screen: North Korea’s Employment Scam Meets Deepfakes

Over the past several months, we’ve chronicled a startling revelation: North Korea has infiltrated hundreds of firms (many of them Fortune 500 companies), government entities, and political organizations through a sophisticated remote-worker employment scam. Using stolen U.S. identities, AI‑enhanced résumés, fake LinkedIn profiles, and even face-changing software during video interviews, these operatives have secured legitimate IT positions at scale (Wired).

For more background, see our May 7, 2025 issue: The North Korean Spring Offensive.

Inside the U.S., facilitators operate so-called “laptop farms” — receiving company-issued hardware and granting access to DPRK agents. This scam is generating an estimated $250–600 million annually, funding North Korea’s weapons programs (Wired).

This operation is a textbook case of evolved social engineering: exploiting trust, technology, and the new norms of remote work. What’s more, the rise of deepfake and voice-cloning tools enables real-time manipulation of audio and video — allowing operatives to impersonate job candidates, managers, or executives with alarming realism.

From North Korea to Everywhere: The Broader Deepfake Threat

This is no longer a “North Korea-only” problem.

Any adversary — from criminal syndicates to rival nation-states — can now use deepfakes to infiltrate systems, launch attacks, or spread misinformation.

In today’s threat landscape:

  • Executive impersonation: Attackers use deepfakes in live video or voice calls to trick staff into authorizing wire transfers or sharing sensitive data.

  • Crisis confusion: Deepfake voices and videos can derail incident response efforts during live attacks.

  • Fake personas: AI-generated “consultants” on LinkedIn build trust and harvest intel.

  • Tailored phishing: AI-cloned writing styles help craft emails that match a company’s voice and culture.

  • Misinformation flooding: Synthetic media can overload alerts and bury real threats in noise.

Deepfakes magnify traditional social engineering tactics — boosting scale, realism, and reach.

A Deepfake Conference Call That Cost $25 Million

Nation-state threats are only one part of the picture.

Over a year ago, a finance worker unknowingly attended a deepfake video conference with multiple “colleagues” — including the company’s CFO. The problem? Every participant was fake. He was the only real person on the call.

The deepfake CFO instructed him to wire $25 million USD. Believing he was acting under legitimate authority — and seeing familiar (but fake) faces — the employee complied. It was a hard and costly lesson in the power and realism of deepfake tech.
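A single instruction over a video call — however convincing the face delivering it — should never be enough to move money. One common mitigation is a dual-control policy: large transfers require at least two approvals, each confirmed over an independent, pre-registered channel. Here is a minimal sketch of such a policy check; all names, the threshold, and the `Approval` structure are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass

# Illustrative limit, in USD: transfers above it need dual out-of-band approval.
APPROVAL_THRESHOLD = 10_000

@dataclass
class Approval:
    approver: str      # identity of the person approving
    channel: str       # channel used to confirm, e.g. "callback", "in_person"
    out_of_band: bool  # True only if confirmed OUTSIDE the requesting channel
                       # (a callback to a number on file, never one given
                       # by the requester on the call itself)

def transfer_allowed(amount: int, approvals: list[Approval]) -> bool:
    """Allow large transfers only with two distinct out-of-band approvals."""
    if amount <= APPROVAL_THRESHOLD:
        return True
    confirmed = {a.approver for a in approvals if a.out_of_band}
    return len(confirmed) >= 2

# A lone "CFO" instruction delivered in the video call itself does not count:
# transfer_allowed(25_000_000, [Approval("cfo", "video_call", False)]) -> False
```

Under a rule like this, the deepfake call above fails: the fake CFO’s on-call instruction is in-band, and no second approver ever confirms independently.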

How Did We Get Here?

As social engineering has evolved from crude deception to psychological precision, deepfakes have become its most dangerous incarnation.

Where spies once relied on disguises or forged documents, today’s attackers use AI-generated voices and faces to impersonate CEOs, officials, or even family members with chilling accuracy.

Deepfakes aren't just technical tricks — they're built on the same trust-exploiting principles that have worked for centuries:
Manipulate context. Exploit urgency. Bypass critical thinking.

What’s new?

  • Speed

  • Scale

  • Sophistication

A synthetic voice can now launch a phishing attack more convincingly than any human ever could.

The Long Game: Deepfakes & the History of Social Engineering

Before there were deepfakes, spoofed emails, or cloned voices, there were masks, forgeries, and charm. Social engineering — manipulating people to gain access — is as old as human conflict.

Here’s how it evolved:

Primitive & Early Uses

  • Disguises & mimicry: Early humans used body paint and costumes to infiltrate groups or lure prey.

  • False signals: Fire or drum patterns were faked to mislead rival tribes.

  • Oral deception: Myths and stories were used to manipulate alliances or intentionally incite conflict.

Wartime Deception (Pre-WWII & WWII)

  • Trojan Horse: A legendary infiltration tactic hidden in plain sight.

  • Operation Fortitude: Before D-Day, the Allies used fake armies, false radio traffic, and staged spycraft to divert and distract German intelligence.

  • Agent Zigzag: A double agent who misled Nazi intelligence through personal charm.

  • Equipment deception: Inflatable tank battalions and fake airfields misled German aerial reconnaissance.

Post-WWII & Cold War Espionage

  • Identity theft & seduction: Tools of the CIA and KGB.

  • Honey traps: Romantic manipulation to compromise targets.

  • Insider cultivation: Years-long relationship-building to elicit secrets.

  • Misinformation campaigns: Fake news seeded to sway public or political sentiment.

Modern Forms of Social Threats

  • Phishing: Spoofed emails that steal credentials or deliver malware.

  • Business Email Compromise (BEC): Executives impersonated to trigger fund transfers.

  • Deepfake media: Synthetic voices or faces used to trick systems or create confusion.

  • Social platform pretexting: Fake LinkedIn personas used to build trust and extract IT or physical access.

  • Vishing: Voice phishing using spoofed IDs or AI-generated calls.

Where Do We Go from Here? How Do We Reduce Organizational Risk?

Final Word

Deepfakes are not a future threat — they’re already here.
They undermine our oldest safeguard: human trust.

The challenge ahead isn’t just better detection — it’s educating teams, building critical skepticism, and designing authentication protocols that don’t rely on appearance or voice alone.
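What might an authentication protocol that doesn’t rely on appearance or voice look like? One classic pattern is challenge-response over a pre-shared secret: both parties enroll a secret in person (or through another trusted channel), and anyone claiming an identity must answer a fresh random challenge with a keyed hash of it. A deepfake can clone a face or a voice, but not a secret it never had. The sketch below uses Python’s standard `hmac` and `secrets` modules; the function names and flow are illustrative assumptions, not a specific product.

```python
import hashlib
import hmac
import secrets

def new_challenge() -> bytes:
    """Verifier issues a fresh random nonce; never reuse a challenge."""
    return secrets.token_bytes(16)

def respond(shared_secret: bytes, challenge: bytes) -> str:
    """What the real caller computes on their enrolled device."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

def verify(shared_secret: bytes, challenge: bytes, response: str) -> bool:
    """Verifier checks the answer with a constant-time comparison."""
    expected = respond(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

The same idea underlies hardware security keys and authenticator apps: identity is proven by possession of a secret, not by how someone looks or sounds on a call.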

What is your own experience with deepfakes — in your environment or at home?

Hit reply and let me know—I read and respond to every one.

The DR Team
/smb

PS: Ivy is all too familiar with “Deepfakes” in her “own world…”