Attackers use AI-generated videos to impersonate executives, coworkers, public figures, and even family members. Before you send money, share sensitive information, or act on a video request, verify that the recording hasn’t been manipulated.
Supported formats: MP4, MOV, AVI, WebM
Maximum file size: 500MB
Understanding the Threat
Impersonation deepfakes exploit trust in familiar faces and voices. By recreating a person’s appearance or speech with AI, attackers can convince victims that a video message is real.
Attackers collect publicly available video and audio of a person they want to impersonate. This may include executives, employees, influencers, or family members. The material often comes from interviews, social media posts, or recorded meetings.
Using face-swap and voice-cloning technology, attackers generate AI videos that mimic the target’s appearance and speech patterns.
The impersonation deepfake is delivered through a video message or live call. The attacker pressures the victim to act quickly by sending money, sharing credentials, or providing sensitive information.
Funds or data are handed over before the victim verifies the sender’s identity through another trusted communication channel.
Detection Technology
Our deepfake detection system analyzes facial behavior, voice alignment, and video metadata to identify AI-generated impersonation videos.
Evaluates micro-expressions and natural facial movement patterns. Deepfake impersonation models often struggle to replicate these subtle signals accurately.
Checks whether speech audio matches lip movement. Voice-cloned deepfake videos often show timing mismatches between sound and mouth motion.
Analyzes frame-to-frame continuity, camera angles, and environmental consistency to detect artifacts introduced during AI video generation.
Inspects the underlying video encoding and metadata for patterns associated with synthetic video generation or post-processing manipulation.
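To illustrate the lip-sync alignment idea above, here is a minimal, simplified sketch. It cross-correlates an audio loudness envelope with a per-frame mouth-opening measurement: in genuine footage the two signals correlate strongly near zero lag, while voice-cloned video often shows a weak or shifted correlation. The function name, the synthetic demo signals, and the lag window are illustrative assumptions; a real pipeline would derive the mouth-opening signal from facial landmarks rather than from synthetic data.

```python
import math
import random

def _normalize(xs):
    """Zero-mean, unit-variance version of a signal."""
    mean = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs)) + 1e-9
    return [(x - mean) / sd for x in xs]

def lip_sync_score(audio_env, mouth_open, max_lag=5):
    """Best normalized cross-correlation between the audio envelope
    and the mouth-opening signal over a small window of frame lags.
    Values near 1.0 suggest well-aligned audio and lip motion;
    values near 0 are a red flag worth deeper analysis."""
    a, m = _normalize(audio_env), _normalize(mouth_open)
    n = len(a)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            pairs = zip(a[lag:], m[:n - lag])
        else:
            pairs = zip(a[:n + lag], m[-lag:])
        corr = sum(x * y for x, y in pairs) / (n - abs(lag))
        best = max(best, corr)
    return best

# Synthetic demo: a mouth signal that tracks the audio envelope
# versus one that is pure noise (hypothetical stand-ins for real
# per-frame measurements).
t = [i * 0.2 for i in range(60)]
audio = [abs(math.sin(3 * x)) for x in t]
noise = [random.Random(0).random() for _ in t]
aligned = lip_sync_score(audio, audio)  # high: signals move together
noisy = lip_sync_score(audio, noise)    # lower: no consistent alignment
```

Production systems combine a signal like this with many other cues, but the core intuition is the same: speech energy and mouth motion should rise and fall together.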
Why It Matters
Impersonation deepfakes are already causing significant financial losses and undermining trust in digital communication. Criminals now use AI-generated video and voice cloning to impersonate executives, coworkers, and family members in scams that can bypass traditional verification methods.
Step-by-Step Guide
Use this process when a video message or call appears to come from someone you know but asks for something unusual, urgent, or sensitive.
Save the clip or meeting recording that contains the suspicious request or message.
Most messaging and conferencing platforms allow you to save video or record calls. Preserve the highest quality version possible so visual and audio details remain intact.
Upload the recording to our detector to check for face-swap and voice-clone indicators.
We analyze facial regions, lip-sync alignment, temporal consistency, and audio patterns that reveal impersonation deepfakes.
Independently contact the person being impersonated through a known, verified method.
Call their verified phone number, corporate directory number, or meet in person. Never rely solely on the channel where the suspicious video arrived.
If the clip appears synthetic or suspicious, report the incident to the appropriate security or fraud team.
Provide the recording and details of the request so the situation can be reviewed through appropriate channels.
Signs of impersonation deepfakes can include unnatural facial movement, mismatched lip sync, inconsistent lighting, or unusual audio patterns. Detection tools can analyze these signals, but independent verification through trusted channels is always recommended.
We focus on analyzing recorded clips and call recordings. For real-time calls, we recommend recording suspicious sessions when allowed and uploading them immediately for analysis.
Accuracy depends on the quality of the recording and sophistication of the attacker. Our system performs well against many current face-swap and voice-cloning techniques and is continuously updated.
Common situations include fake executive requests, fraudulent vendor or client messages, and impersonated friends or relatives asking for emergency money or sensitive information.
No. We do not permanently store uploaded videos. If you are logged in, we only keep basic analysis information so you can view your results again.
A low score suggests we did not detect strong signs of AI generation, but it does not prove authenticity. Combine our results with common-sense checks and organizational verification policies.
AI-generated and deepfake videos spread quickly online. Before you trust a viral clip or breaking news video, run it through our AI video detector to check for manipulated footage, synthetic media, or signs of alteration. Get a clear signal on whether a video may be fake before you share it.
Dating Safety
Catfishers and scammers on dating apps and social media platforms like Hinge, Bumble, Tinder, and Instagram now use AI to send convincing videos of fake partners.