Deepfake Detection Guide

Feb. 1, 2026

Deepfakes (a portmanteau of "deep learning" and "fake") are images, videos, or audio that have been edited or generated using AI tools or audio-video editing software. They may depict real or fictional people and are considered a form of synthetic media.

As deepfakes improve, businesses are increasingly relying on biometric verification powered by machine learning to confirm human presence, shifting impersonation defense into a continuous AI‑versus‑AI dynamic. This Deepfake Detection Guide provides techniques for detecting deepfakes through manual observation, automated tools, and organizational controls designed to prevent fraud even when detection is uncertain.

In 2024, scammers used AI-generated video to impersonate senior staff at Arup during a video conference. A finance employee in Hong Kong was convinced to transfer HK$200 million (about US$25.6 million) before the fraud was discovered.

If you want hands-on practice, Detect Fakes lets you test your ability to distinguish manipulated video from real video.

Executive leadership is increasingly exposed. A Deloitte poll focused on deepfake-enabled financial fraud reported that 25.9% of executives said their organizations experienced at least one deepfake incident targeting financial and accounting data in the prior 12 months.

Voice-based impersonation presents similar risk. According to Consumer Reports, some commercially available voice-cloning tools can produce convincing synthetic voices from very short audio samples, undermining voice familiarity as a reliable method of identity verification.

Manual Detection: What to Look For

Deepfake technology continues to improve, but synthetic media often still contains subtle artifacts. No single indicator is definitive, but multiple anomalies in combination are a stronger signal that content may be manipulated.

Eyes and Blinking

Human blinking varies by person and situation, but blink duration is commonly cited in the 0.1 to 0.4 second range. Deepfakes may show abnormal blinking patterns, including blinking that is too frequent, too infrequent, mechanically regular, or absent.
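
Where this check can be automated, blink counting is straightforward once you have a per-frame measure of eye openness. The sketch below assumes eye-aspect-ratio (EAR) values have already been extracted by a facial-landmark detector; the 0.2 threshold and the 2 to 30 blinks-per-minute band are illustrative, not calibrated.

```python
# Sketch: flag abnormal blink rates from per-frame eye-aspect-ratio (EAR)
# values. EAR is assumed to come from an upstream facial-landmark detector;
# the threshold and "normal" band below are illustrative, not calibrated.

def count_blinks(ear_values, threshold=0.2):
    """Count closed-eye events: frames where EAR drops below the threshold."""
    blinks, closed = 0, False
    for ear in ear_values:
        if ear < threshold and not closed:
            blinks += 1          # eye just closed: start of a blink
            closed = True
        elif ear >= threshold:
            closed = False       # eye reopened
    return blinks

def blink_rate_suspicious(ear_values, fps, low=2, high=30):
    """Flag clips whose blinks-per-minute fall outside a rough human range."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_values) / minutes
    return rate < low or rate > high

# Example: 10 seconds at 30 fps with no blinks at all is suspicious.
print(blink_rate_suspicious([0.3] * 300, fps=30))  # True
```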

Look for eyes that appear fixed or staring, with movements that seem disconnected from the scene. Examine eye reflections carefully. In authentic video, reflections in both eyes generally match the visible light sources in the environment. Inconsistent reflections between eyes, or reflections that do not correspond to the scene, can be indicators of manipulation.

Pupil behavior can also look off. Real pupils adjust based on lighting and focus. Deepfaked eyes may show pupils that remain static when they should dilate or constrict.

Facial Features and Skin

Pay attention to skin texture. Real skin contains natural variation: pores, wrinkles, freckles, uneven tone, and subtle shadows. Deepfakes may smooth these details into an overly uniform, waxy, or airbrushed appearance that can feel uncanny.
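
As a rough automated proxy for this cue, local texture can be measured with the variance of the Laplacian over a skin patch. This is a minimal sketch using OpenCV; the patch coordinates and synthetic demo are placeholders, and real use would need face detection and calibration against known-authentic footage.

```python
# Sketch: variance of the Laplacian over a skin patch as a texture measure.
# Unnaturally smooth ("waxy") skin tends to score low. Values here are
# illustrative; real use needs calibration against authentic footage.
import cv2
import numpy as np

def skin_texture_score(image_bgr, x, y, w, h):
    """Variance of the Laplacian over a patch; low values suggest over-smoothing."""
    patch = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Synthetic demo: noisy "skin" scores high, a blurred copy scores much lower.
rng = np.random.default_rng(0)
skin = rng.integers(90, 160, size=(200, 200, 3), dtype=np.uint8)
waxy = cv2.GaussianBlur(skin, (21, 21), 0)
print(skin_texture_score(skin, 50, 50, 100, 100))   # high: natural variation
print(skin_texture_score(waxy, 50, 50, 100, 100))   # low: "waxy" smoothness
```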

Watch for facial features that do not seem cohesive. The eyes might look sharper than the surrounding skin, or the teeth might appear distorted or unnaturally uniform. Facial hair can be a weak point. Mustaches, sideburns, and beards may appear inconsistent or fail to move naturally with facial expressions.

Look for expressions that shift abruptly rather than transitioning smoothly, or facial movements that do not feel coordinated across the whole face.

Lip Synchronization

Matching lip movements to speech requires coordinating dozens of tiny facial muscles and remains a common failure point. Watch for mouth shapes that do not match sounds, particularly “p,” “b,” and “m” where lips close, and “f” and “v” where the lower lip contacts the upper teeth. Slowing down playback can make errors easier to spot.
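
One crude way to quantify this is to correlate a per-frame mouth-opening signal with audio loudness. The sketch below assumes both signals come from upstream tooling (lip landmarks and frame-aligned audio RMS); the 0.3 cutoff is illustrative, and production systems typically learn audio-visual correspondence with trained models rather than using raw correlation.

```python
# Sketch: check audio/visual sync by correlating a per-frame mouth-opening
# signal (e.g., lip-landmark distance) with frame-aligned audio energy (RMS).
# Both inputs are assumed to come from upstream tooling; the cutoff is
# illustrative.
import numpy as np

def lip_sync_correlation(mouth_openness, audio_rms):
    """Pearson correlation between mouth aperture and audio loudness."""
    m = np.asarray(mouth_openness, dtype=float)
    a = np.asarray(audio_rms, dtype=float)
    assert len(m) == len(a), "signals must be frame-aligned"
    return float(np.corrcoef(m, a)[0, 1])

r = lip_sync_correlation([0.1, 0.8, 0.9, 0.2, 0.1],
                         [0.05, 0.7, 0.8, 0.1, 0.05])
print(round(r, 2))   # ~0.99: mouth and audio move together
print(r < 0.3)       # False here; True would suggest possible desync
```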

Lighting and Shadows

Realistic lighting requires the content to follow the physical rules of a scene. Examine whether shadows on the face match the apparent direction of light sources. Look for shadows that appear at the wrong angles, change inconsistently between frames, or do not align with other objects in the scene.

Check whether the subject’s overall lighting matches the environment. If a person appears brightly lit while the background is dark, or vice versa, the content may have been composited from different sources.
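
A simple automated version of this check compares the subject's average brightness to the background's. This sketch assumes a face bounding box from an upstream detector; the ratio threshold and synthetic demo are illustrative.

```python
# Sketch: flag frames where face brightness and background brightness
# diverge sharply, which can hint at compositing. The face box is assumed
# to come from a face detector; the ratio threshold is illustrative.
import cv2
import numpy as np

def lighting_mismatch(image_bgr, face_box, max_ratio=2.0):
    x, y, w, h = face_box
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    face_mean = gray[y:y + h, x:x + w].mean()
    mask = np.ones(gray.shape, dtype=bool)
    mask[y:y + h, x:x + w] = False          # everything except the face
    background_mean = gray[mask].mean()
    ratio = max(face_mean, background_mean) / (min(face_mean, background_mean) + 1e-6)
    return ratio > max_ratio, ratio

# Synthetic demo: a bright face pasted onto a dark scene gets flagged.
frame = np.full((240, 320, 3), 40, dtype=np.uint8)   # dark background
frame[80:180, 120:220] = 200                          # brightly lit "face"
flagged, ratio = lighting_mismatch(frame, face_box=(120, 80, 100, 100))
print(f"brightness ratio {ratio:.1f}, flagged={flagged}")  # ~5.0, True
```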

Edge Artifacts and Motion

Deepfakes often require blending a synthetic face onto an existing video. This can leave artifacts at boundaries: blurry edges where the face meets hair or neck, flickering outlines, or a subtle “halo” around the face. These are often easier to see when the subject turns their head or moves quickly.
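
One way to probe for blending is to compare sharpness just inside the face region with sharpness on the boundary ring where a swapped face would be composited. This sketch assumes a face box from an upstream detector; the ring width and synthetic demo are illustrative.

```python
# Sketch: compare Laplacian variance inside the face with the boundary ring.
# A ring that is much blurrier than the interior can indicate blending.
# The face box is assumed to come from an upstream detector.
import cv2
import numpy as np

def boundary_blur_ratio(image_bgr, face_box, ring=10):
    x, y, w, h = face_box
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    inner = lap[y + ring:y + h - ring, x + ring:x + w - ring].var()
    band = np.zeros(gray.shape, dtype=bool)
    band[y:y + h, x:x + w] = True
    band[y + ring:y + h - ring, x + ring:x + w - ring] = False  # keep only the ring
    edge = lap[band].var()
    return inner / (edge + 1e-6)

# Synthetic demo: blur everything, then paste back a sharp interior patch,
# mimicking a sharp face blended in with soft edges.
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(240, 320, 3), dtype=np.uint8)
x, y, w, h = 120, 80, 100, 100
blended = cv2.GaussianBlur(frame, (15, 15), 0)
blended[y + 10:y + h - 10, x + 10:x + w - 10] = frame[y + 10:y + h - 10, x + 10:x + w - 10]
print(boundary_blur_ratio(blended, (x, y, w, h)))  # well above 1: blurry boundary
```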

Watch for unnatural body movement: jerky gestures, heads that seem disconnected from bodies, or hair and accessories (glasses, earrings) that do not move naturally with the person.

Audio Indicators

Audio deepfakes can sound unusually clean, oddly paced, or emotionally flat. Listen for robotic undertones, awkward pauses, or rhythm that feels unnatural. Background noise can also be a tell. Real recordings typically contain some ambient sound and room echo consistent with the environment.
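
The background-noise cue can be checked by estimating the noise floor during the quietest stretches of a recording: an almost perfectly silent floor is unusual for a real environment. The sketch below works on a mono float sample array (for example, as loaded by the soundfile library); the frame length and thresholds are illustrative.

```python
# Sketch: estimate the noise floor from the quietest frames. Authentic
# recordings usually retain ambient sound; digital silence between words can
# be a tell for synthetic or heavily processed audio. Thresholds are
# illustrative.
import numpy as np

def noise_floor_db(samples, frame_len=1024, quiet_fraction=0.1):
    """RMS level (dBFS) of the quietest fraction of frames."""
    n = len(samples) // frame_len
    frames = np.asarray(samples[: n * frame_len]).reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    quietest = np.sort(rms)[: max(1, int(n * quiet_fraction))]
    return 20 * np.log10(quietest.mean() + 1e-12)

rng = np.random.default_rng(0)
noisy = rng.normal(0, 0.01, 48_000)            # simulated room tone
print(noise_floor_db(noisy))                    # around -40 dBFS: plausible
print(noise_floor_db(np.zeros(48_000)) < -90)   # digital silence: True, suspicious
```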

Automated Detection Tools

Automated detection tools analyze technical signals that humans struggle to evaluate consistently at scale. They typically return probabilistic confidence scores, not binary verdicts, and effectiveness varies with compression, recording quality, environmental noise, and the novelty of the generation technique.

Real-Time Communications Defense (Netarx)

Based in Farmington Hills, Mich., Netarx provides real‑time deepfake and impersonation defense across voice, video, email, SMS/messaging, and images, surfacing simple in‑workflow indicators (green/yellow/red) to help employees judge the trustworthiness of live interactions. It uses multiple AI models plus metadata/device context and can be deployed as software-as-a-service (SaaS) with agents and browser extensions.

Identity Proofing & Account Recovery (Okta + Partners)

Okta provides the identity platform; deepfake‑resistant ID verification is commonly handled via partners such as Nametag (Deepfake Defense) and Incode, which integrate with Okta to harden onboarding and recovery against AI impersonation.

Asynchronous Screening & Investigation (Reality Defender, Sensity)

Reality Defender offers multi‑model screening via API and browser UI for AI‑generated or manipulated media, suitable for trust & safety or moderation teams. Sensity AI focuses on investigation‑oriented detection and reporting across video, images, and audio for legal/compliance workflows.

How to Evaluate Tools

Evaluate tools against the channels you actually need to protect, and expect to layer several. A common stack pairs email security (e.g., Proofpoint or Abnormal Security) with identity proofing (Okta plus partners) and adds real-time communication defense (Netarx) for live interactions. For content you receive or host (user uploads, marketplace listings), consider Reality Defender or Sensity for asynchronous screening. Add voice fraud detection such as Pindrop if you run a high-volume call center or handle sensitive transactions by phone.

Tool Limitations

No detection tool is perfect. Common issues include false positives triggered by compression or low-quality recordings, new synthesis methods that temporarily evade detectors, and the computational cost of real-time analysis. Most tools provide probabilistic assessments rather than definitive answers.

The most effective approach combines automated screening with human review and contextual verification for high-stakes content.
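
In practice, that combination often looks like score-based routing: treat the detector's output as a probability and decide who looks next. The bands below are illustrative policy choices, not vendor defaults.

```python
# Sketch: route content by detector confidence instead of treating scores
# as verdicts. Band boundaries are illustrative policy choices.
from enum import Enum

class Action(Enum):
    PASS = "pass"
    HUMAN_REVIEW = "human review"
    BLOCK_AND_ESCALATE = "block and escalate"

def triage(fake_probability: float, high_stakes: bool) -> Action:
    """Map a detector's fake-probability score to a review action."""
    if fake_probability >= 0.9:
        return Action.BLOCK_AND_ESCALATE
    if fake_probability >= 0.4 or high_stakes:
        return Action.HUMAN_REVIEW      # uncertain, or stakes justify review
    return Action.PASS

print(triage(0.15, high_stakes=False))  # Action.PASS
print(triage(0.15, high_stakes=True))   # Action.HUMAN_REVIEW
print(triage(0.95, high_stakes=False))  # Action.BLOCK_AND_ESCALATE
```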

Organizational Controls

Detection alone cannot fully protect against deepfake fraud. Sophisticated attacks may evade both human observation and automated tools. Organizations need process controls that prevent fraud even when detection fails.

Multi-Person Authorization

Require multiple people to approve financial transactions above a defined threshold, regardless of who appears to be requesting it. Apply the “four eyes” principle to high-value payments, vendor banking detail changes, and unusually urgent requests so that no single person can complete a risky action alone.
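
Encoded as logic, the rule is simple: above a threshold, no payment executes without approvers who are not the requester. The threshold and approval counts in this sketch are illustrative; real controls belong in the payment or ERP system itself.

```python
# Sketch: a "four eyes" rule enforced in code. Threshold and counts are
# illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    requester: str
    approvers: set[str] = field(default_factory=set)

def can_execute(req: PaymentRequest, threshold: float = 10_000) -> bool:
    """Above the threshold, require two distinct approvers besides the requester."""
    independent = req.approvers - {req.requester}
    if req.amount <= threshold:
        return len(independent) >= 1
    return len(independent) >= 2

req = PaymentRequest(amount=250_000, requester="alice")
req.approvers.add("alice")   # self-approval does not count
req.approvers.add("bob")
print(can_execute(req))      # False: needs a second independent approver
req.approvers.add("carol")
print(can_execute(req))      # True
```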

Out-of-Band Verification

Establish protocols requiring verification through a separate, pre-established channel for unusual requests. If you receive a video call requesting an urgent transfer, verify through a different channel using known contact information from your directory, internal messaging, or in-person confirmation when possible.
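
The key implementation detail is that callback information must come from a trusted directory, never from anything supplied in the suspicious request itself. The directory structure and helper below are hypothetical placeholders.

```python
# Sketch: resolve the verification contact from an internal directory.
# TRUSTED_DIRECTORY and verification_contact are hypothetical placeholders.
TRUSTED_DIRECTORY = {
    "emp-1042": {"name": "J. Smith", "phone": "+1-555-0100"},
}

def verification_contact(employee_id: str) -> str:
    """Return the known-good callback number; never use numbers from the request."""
    record = TRUSTED_DIRECTORY.get(employee_id)
    if record is None:
        raise LookupError("No directory entry: escalate instead of calling back")
    return record["phone"]

# A callback number embedded in the suspicious request is deliberately ignored.
print(verification_contact("emp-1042"))  # +1-555-0100
```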

Bloomberg reporting describes a Ferrari impersonation attempt involving a convincing voice imitation of CEO Benedetto Vigna; the attempt failed after an executive asked a personal verification question (Bloomberg).

Communication Platform Controls

Deepfake fraud often begins on personal platforms. Set policies that financial and sensitive requests must use official corporate systems. Train employees to treat “use my personal app,” “my number changed,” or “keep this off email” as high-risk signals.

Employee Training

Training should cover both deepfake recognition and social engineering patterns, with practice spotting manipulated media. The MIT Media Lab’s Detect Fakes site was built to let people test their ability to recognize deepfakes (CyberIR@MIT).

Training should emphasize that seeing someone on video or hearing a familiar voice does not confirm identity. Build a culture where verification is expected and welcomed rather than perceived as distrust.

Incident Response Planning

Develop procedures for responding to suspected deepfake attacks, including steps to halt pending transactions, escalation paths that do not rely on potentially compromised channels, and rapid notification to financial institutions for suspected fraudulent transfers. Speed matters in fraud response.

Real-World Case Studies

Arup: HK$200 Million Video Conference Fraud (2024)

Reporting on the Arup case describes how scammers used deepfake video of colleagues during a video conference to convince a Hong Kong employee to transfer HK$200 million (The Guardian).

Lessons: video verification is not identity verification. Out-of-band verification and multi-person authorization reduce the chance of a single point of failure.

LastPass: Employee Recognizes Red Flags (2024)

LastPass reported an attempted voice-phishing incident in which an employee received WhatsApp calls, texts, and at least one voicemail featuring an audio deepfake of the company’s CEO (LastPass).

Lessons: platform choice, urgency, and off-hours contact are often stronger tells than media artifacts. Train people to report quickly instead of engaging.

WPP: Multi-Stage Impersonation Attempt (2024)

Reporting on the WPP incident describes scammers creating a fake WhatsApp account and setting up a Microsoft Teams call to impersonate CEO Mark Read using AI-based techniques (Entrepreneur).

Lessons: modern attacks can be multi-stage and multi-platform. Requests involving both money and personal information should raise the escalation level immediately.

Verification Checklist

When receiving requests for financial transactions, sensitive information, or unusual actions from apparent executives or colleagues, work through the following checks (a simple scoring sketch follows the list):

Channel verification: Did the request arrive through official corporate channels? Is this how this person normally communicates? Am I being asked to use an unusual platform or method?

Request characteristics: Is there unusual urgency or pressure? Is secrecy being emphasized? Does this request fall outside normal procedures? Would this person normally make this request directly to me?

Identity verification: Can I verify identity through a separate, pre-established channel? Can I call back using known contact info from a trusted directory? Can I ask a question only the real person would know?

Technical indicators: Does the video show anomalies (lighting, blinking, lip sync, edges, skin texture)? Does the audio sound natural for the environment? Can I run the content through a detection tool?

Process controls: Does this transaction require additional authorization? Have I documented this request appropriately? Should I escalate before proceeding?
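
As promised above, here is one way to encode the checklist as weighted risk signals. The signal names, weights, and escalation threshold are all illustrative; the point is that several weak signals together should trigger verification.

```python
# Sketch: the checklist as weighted risk signals. Names, weights, and the
# escalation threshold are illustrative placeholders.
RISK_SIGNALS = {
    "unofficial_channel": 2,
    "unusual_urgency": 2,
    "secrecy_requested": 3,
    "outside_normal_procedure": 2,
    "identity_not_verified_out_of_band": 3,
    "technical_anomalies_observed": 2,
}

def risk_score(observed: set[str]) -> int:
    """Sum the weights of the signals actually observed in a request."""
    return sum(weight for name, weight in RISK_SIGNALS.items() if name in observed)

observed = {"unofficial_channel", "unusual_urgency", "secrecy_requested"}
score = risk_score(observed)
print(score)  # 7
print("escalate" if score >= 5 else "proceed with standard checks")
```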

When in doubt, slow down. Legitimate requests can wait for verification. Fraudulent ones usually cannot withstand it.

Contact Us for customized security awareness training incorporating deepfake recognition, simulated social engineering exercises, and organizational controls assessment.