Face-swap videos, fake celebrity endorsements, and AI-generated clips flood social media. Verify what you see before you share or believe it.
Drag & drop a video here
or click to browse files
Supported formats: MP4, MOV, AVI, WebM
Maximum file size: 500MB
Understanding the Threat
Social media deepfakes exploit platform algorithms and human psychology to achieve maximum viral reach.
Creators use face-swap apps and AI video tools to place a celebrity's or public figure's face onto another body, or to generate entirely synthetic video clips designed to provoke engagement.
Videos are edited with trending audio, hashtags, and captions designed to trigger platform algorithms and appear on recommendation feeds.
Shocking or entertaining deepfakes get shared rapidly. Many viewers do not question authenticity, especially when the content confirms existing beliefs or biases.
Fake celebrity endorsements promote scams. Political deepfakes influence opinion. Personal deepfakes harass or defame individuals. The damage compounds with every share.
Detection Technology
Our AI is trained to catch the artifacts commonly present in social-media-grade deepfakes and face-swaps.
Identifies blending artifacts at the edges of swapped faces, including color mismatches, lighting inconsistencies, and resolution differences between the face and surrounding skin. A simplified version of this boundary check is sketched below.
Compares vocal characteristics against known voice profiles to detect cloned or mismatched audio paired with the face-swapped video.
Analyzes body movement, gesture timing, and head-pose transitions to detect the rigid or unnatural motion patterns typical of face-swap and puppet-based deepfakes.
Accounts for social media re-compression artifacts and distinguishes them from generation artifacts, maintaining detection accuracy even on heavily compressed platform videos.
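To make the boundary check above concrete, here is a minimal sketch that compares color and sharpness statistics in thin bands just inside and just outside a detected face boundary. It uses OpenCV's Haar-cascade face detector, an elliptical face mask, and hand-picked band widths and weights purely as assumptions for this example; it is an illustrative heuristic, not the production detector.

```python
import cv2
import numpy as np


def boundary_mismatch_score(frame_bgr):
    """Return a rough seam score for the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])

    # Elliptical mask approximating the face region; erode/dilate it to get
    # thin bands just inside and just outside the face boundary.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.ellipse(mask, (int(x + w // 2), int(y + h // 2)),
                (int(w // 2), int(h // 2)), 0, 0, 360, 255, -1)
    kernel = np.ones((9, 9), np.uint8)  # band width is an arbitrary choice
    inner_band = cv2.subtract(mask, cv2.erode(mask, kernel))
    outer_band = cv2.subtract(cv2.dilate(mask, kernel), mask)
    if inner_band.sum() == 0 or outer_band.sum() == 0:
        return None

    # Color mismatch: distance between the mean Lab colors of the two bands.
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    color_gap = float(np.linalg.norm(
        lab[inner_band > 0].mean(axis=0) - lab[outer_band > 0].mean(axis=0)))

    # Sharpness mismatch: Laplacian variance inside vs. outside the boundary,
    # a crude proxy for the resolution difference a pasted face can leave.
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    sharp_gap = abs(lap[inner_band > 0].var() - lap[outer_band > 0].var())

    # The weighting below is arbitrary; higher scores mean a more suspicious seam.
    return color_gap + 0.01 * float(sharp_gap)
```

A large gap between the two bands is the kind of seam a pasted face tends to leave. A production detector learns such cues from training data rather than relying on a fixed heuristic like this one.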
Why It Matters
Social media deepfakes have moved from a niche concern to a mainstream threat. Celebrity deepfakes are used to promote cryptocurrency scams, fake product endorsements, and misinformation campaigns. Political deepfakes have been used to influence elections in multiple countries. The personal toll is equally severe, with individuals facing harassment through non-consensual deepfake content.
Step-by-Step Guide
Follow these steps when a video on social media seems too outrageous, too convenient, or too perfectly timed.
Save the video from the social media platform before it is edited or removed.
Use the platform's built-in save feature or a screen recording. Downloading preserves the version you are investigating.
Submit the video to our detector for deepfake and face-swap analysis.
Our system is trained on social-media-grade content and accounts for the platform compression that degrades the accuracy of less specialized detection tools. A hypothetical submission sketch follows this guide.
Investigate the account that posted the video for signs of inauthenticity.
Look for newly created accounts, inconsistent posting history, engagement patterns that suggest bot amplification, and lack of verified identity.
If the video is flagged as a deepfake, report it to the platform and share the finding with others who may have seen it.
Platform reporting helps train moderation systems. Sharing fact-checks in the comments of viral deepfakes helps counter their spread.
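For readers who want to automate step 2, here is a purely hypothetical sketch of uploading a saved clip over HTTP. The endpoint URL, authentication header, form field names, and response keys are placeholders invented for this example, not the service's documented API; check the actual integration documentation before relying on any of them.

```python
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # placeholder URL


def submit_video(path, api_key):
    """Upload a video file and return the (assumed) JSON verdict."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"video": f},               # assumed multipart field name
            data={"source": "social_media"},  # assumed optional hint
            timeout=300,
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    result = submit_video("saved_clip.mp4", api_key="YOUR_KEY")
    print(result.get("verdict"), result.get("confidence"))
```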
Does the detector work on videos downloaded from social media platforms?
Yes. Our system is specifically trained to handle the compression and processing that social media platforms apply to uploaded videos, which can affect detection on less specialized tools.
How do I save a video from a social media platform for analysis?
Most platforms allow you to save videos within the app. For platforms that do not, you can use screen recording. Downloading through the app typically preserves better quality for analysis.
Are all face-swap videos harmful?
No. Many face-swap videos are created for entertainment and are clearly labeled as such. The concern is when deepfakes are used to deceive, scam, harass, or manipulate public opinion without disclosure.
What should I do if someone creates a deepfake of me?
Report the content to the platform immediately using their impersonation or manipulated media reporting tools. Document the content with screenshots and URLs. Depending on your jurisdiction, you may have legal recourse.
How do social media platforms handle deepfakes?
Major platforms use a combination of automated AI detection, user reports, and manual review. Policies vary, but most now have specific rules against deceptive deepfakes. However, detection is imperfect, which is why independent verification tools are valuable.
Can deepfakes be detected in heavily compressed videos?
Yes, although heavy compression makes detection harder. Our models are trained on compressed social media content specifically to maintain accuracy even with degraded video quality.
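As context for how compression-robust training can work in general, here is a minimal sketch of one common data-augmentation approach: putting clean frames through a lossy round trip so a model sees platform-style block artifacts. The JPEG codec choice and quality range are assumptions for illustration only; real pipelines often also simulate video codecs such as H.264 and use genuinely re-uploaded platform footage.

```python
import random

import cv2
import numpy as np


def simulate_platform_compression(frame_bgr, quality_range=(20, 60)):
    """Return the frame after a lossy JPEG round trip at a random quality."""
    quality = random.randint(*quality_range)
    ok, buf = cv2.imencode(".jpg", frame_bgr,
                           [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        return frame_bgr  # fall back to the original frame if encoding fails
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```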
Fraudsters use AI to impersonate executives, colleagues, and family members on live video calls. Verify recordings and clips before acting on urgent requests.
News Verification
AI-generated news broadcasts and fake press conferences are undermining public trust. Verify video footage before it spreads.