Real or Fake Game: How to Run an AI Detection Workshop

Ivan Jackson · May 9, 2026 · 13 min read

You're probably trying to teach video authenticity to people who don't have time for theory. A newsroom team needs a fast drill before election coverage. A classroom needs something more engaging than a slide deck on misinformation. A legal or security group wants practice spotting what looks wrong before a bad clip enters a workflow as if it were genuine.

That's where a real or fake game works better than a lecture.

A good workshop doesn't ask people to passively absorb a warning about AI-generated media. It puts them under light pressure, makes them commit to a judgment, and then shows them where their instincts hold up and where they fail. That moment matters. People remember the clip they got wrong far longer than the bullet point they skimmed on a slide.

Why a Real or Fake Game Is Your Best Training Tool

The biggest mistake I see in media literacy sessions is overexplaining before participants have skin in the game. If you spend the first half hour defining deepfakes, synthetic media, metadata, and manipulation tactics, people nod politely and keep checking their phones. They don't feel the problem yet.

A real or fake game fixes that because it turns detection into a live decision. Participants watch a clip, commit to a choice, defend it, and only then learn what they missed. That sequence creates attention you can't get from a presentation alone.

[Image: A teacher presenting images of two different women on a large screen to a classroom of students.]

Gamification changes how people look

Games like Factle show why this format works. Factle asks players to rank facts within five attempts, which trains probabilistic thinking rather than simple recall, as described by Front Office Sports on Factle. That same mindset is useful in video verification. Participants stop asking, “Can I instantly tell?” and start asking, “What evidence supports this judgment?”

That's a better habit for any professional setting.

Practical rule: Don't teach “spot the fake” as a magic trick. Teach evidence gathering under uncertainty.

When people play, they reveal their shortcuts. Some trust visual polish too much. Some assume low quality means fake. Some overvalue facial oddities and ignore context. Others get trapped by confidence. The game exposes those habits quickly.

It creates diagnostic value, not just engagement

A workshop should tell you something about the room. If most participants misclassify obviously authentic footage because it looks strange, you have a credibility problem. If they consistently trust polished synthetic clips, you have an overreliance on surface realism.

That's why I treat the game as both a teaching tool and a diagnostic tool. It shows where the audience is vulnerable before the stakes become real.

A classroom learns digital citizenship differently from a newsroom. A fraud team needs to discuss impersonation risk, chain of custody, and escalation. But all of them benefit from the same basic structure. Show the clip. Force a judgment. Ask for reasons. Then unpack the evidence.

Defining Your Game's Objectives and Materials

If you don't decide what success looks like before the workshop starts, the session turns into entertainment with no durable lesson. The right setup begins with a narrow objective, then builds materials around it.

[Image: A slide titled Game Objectives and Materials outlining four numbered steps for designing educational video games.]

Pick one learning outcome first

A general-awareness session looks very different from a forensic drill.

Use one of these workshop objectives as your lead:

  • Awareness building means participants should leave with a healthy skepticism and a basic review routine.
  • Technique training means they should learn to inspect lip sync, lighting continuity, motion behavior, audio quality, and context before making a decision.
  • Workflow testing means the session should mirror how a team handles incoming footage, who makes the call, and when to escalate.
  • Policy reinforcement means the game supports existing standards for verification, evidence handling, or publication review.

If you try to do all four, the workshop gets muddy.

Match the room to the exercise

Different audiences need different clip sets and different language.

Audience type  | Strong objective      | What to emphasize
Journalists    | Verification workflow | Source credibility, upload history, contextual checks
Educators      | Critical thinking     | Why intuition fails, how to slow down and inspect
Legal teams    | Evidentiary caution   | Documentation, provenance, explainable reasoning
Security teams | Fraud resistance      | Impersonation tactics, escalation, response steps

Gather the materials that actually matter

Most facilitators overfocus on slides and underprepare the clip pipeline. The clips matter more.

Use this checklist:

  • Display setup with a projector or large screen, plus speakers loud enough for everyone to hear dialogue and artifacts.
  • Playback control on a laptop that lets you pause, replay, and scrub frame by frame when needed.
  • Curated clip library with clearly labeled files, a hidden answer key, and notes on why each clip is in the set.
  • Voting method such as paper cards, a live poll, a chat channel, or hands raised by confidence level.
  • Facilitator notes that list the teaching point for each clip, not just whether it's real or fake.
  • Analysis tool access so you can reveal evidence after the vote rather than asking people to trust your answer.

Keep a printed backup of your running order. If playback software fails, you still need the sequence, timing, and debrief prompts.

The strongest workshops feel simple to participants because the logistics are tight. They don't see the prep. They just feel that the game moves, the clips make sense, and each reveal teaches something concrete.

Curating Your Collection of Real and Fake Videos

The clip library determines whether the workshop feels sharp or shallow. If every fake is obvious and every real clip is conventional, participants learn almost nothing. They leave thinking they're better at detection than they are in reality.

That's a bad outcome.

[Image: A laptop on a white desk displaying a web interface with video thumbnails labeled as synthetic or authentic.]

Build around contrast, not volume

You don't need a giant library. You need a balanced one.

A strong set usually includes:

  • Clearly authentic clips that still look suspicious at first glance because of lighting, compression, unusual motion, or camera angle.
  • Weak synthetic clips that participants should catch if they're paying attention.
  • Convincing synthetic clips that create real disagreement in the room.
  • Misleading edits that aren't fully AI-generated but still distort reality through cuts, timing, cropping, or audio manipulation.
  • Borderline cases where the best answer is “I don't have enough evidence yet.”

That last category matters. In real verification work, uncertainty is often the most honest conclusion.

Don't let the game become a style test

Novices often rely on vibe. They think fake means glossy, uncanny, or overly smooth. But human accuracy in fabrication spotting can fall below 70% for novices, as noted in the discussion of JetPunk-style “fake or real” statistics quizzes in Bored Panda's writeup. Your curation should challenge that instinct.

Mix clips so participants can't win by following one simple rule.

A real handheld phone video with bad audio may look less trustworthy than a polished synthetic talking head. A prerecorded trailer can be deceptive without being fully synthetic. A clip can be authentic footage with a manipulated soundtrack. Those distinctions produce better discussion than a stack of obvious face swaps.

Source ethically and document everything

Use material you can legally and responsibly show in a training context. Public archives, internally approved examples, creative commons footage, and clips created for educational use are safer than grabbing random social posts and hoping nobody asks where they came from.

Create a private spreadsheet with these fields (a small validation sketch follows the table):

Clip field          | Why it matters
File name           | Keeps playback orderly
Real or fake status | Prevents facilitator error
Source location     | Helps with provenance discussion
Manipulation type   | Guides the debrief
Teaching point      | Ensures every clip earns its place
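If you export that spreadsheet to CSV, a few lines of Python can sanity-check it before the session so a mislabeled or incomplete row never surfaces mid-game. This is a minimal sketch; the clips.csv file name and the column names are assumptions, so rename them to match your own sheet.

```python
import csv

# Hypothetical column names; align these with your own spreadsheet export.
REQUIRED_FIELDS = {"file_name", "status", "source", "manipulation_type", "teaching_point"}

def load_clip_manifest(path="clips.csv"):
    """Load the clip spreadsheet and fail fast on missing fields or odd labels."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for i, row in enumerate(rows, start=1):
        filled = {k for k, v in row.items() if v and v.strip()}
        missing = REQUIRED_FIELDS - filled
        if missing:
            raise ValueError(f"Row {i} is missing: {sorted(missing)}")
        if row["status"] not in {"real", "fake", "borderline"}:
            raise ValueError(f"Row {i} has an unexpected status: {row['status']!r}")
    return rows

if __name__ == "__main__":
    manifest = load_clip_manifest()
    print(f"{len(manifest)} clips ready")
```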

For inspiration on how a lightweight game format can sharpen discrimination before you move into video analysis, this real or AI game example is a useful reference point.

The best “gotcha” clip isn't the hardest one. It's the one that teaches a review habit the audience can reuse tomorrow.

Establishing Gameplay Flow Rules and Scoring

A workshop falls apart when participants don't know when to speak, when to vote, or what counts as a strong answer. Clear rules make the room more confident, not less.

I like a format that keeps each round short and repeatable. People should spend their energy on judgment, not on figuring out the mechanics.

A round structure that works

Run each clip through the same sequence (a printable sketch of this running order follows below):

  1. Play the clip once without commentary. Let people react instinctively.
  2. Give silent note time. Ask each person to write “real,” “fake,” or “uncertain.”
  3. Replay the clip. Let them focus on one thing they missed the first time.
  4. Open brief discussion. Ask for evidence, not guesses.
  5. Call the vote. Require a final answer before the reveal.
  6. Debrief after scoring. Explain what clues mattered and which ones didn't.

That rhythm keeps the session moving and reduces the urge to shout out answers too early.
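If you want the printed backup mentioned in the materials checklist, a tiny script can render this same six-step sequence for every clip in your running order. A sketch, with placeholder clip names:

```python
# Print a paper-friendly running order: one section per clip, the same
# six-step sequence every round. The clip file names are placeholders.
ROUND_STEPS = [
    "Play the clip once without commentary.",
    "Silent note time: each person writes real, fake, or uncertain.",
    "Replay the clip; focus on one thing missed the first time.",
    "Brief discussion: ask for evidence, not guesses.",
    "Call the final vote before the reveal.",
    "Debrief: which clues mattered and which didn't.",
]

running_order = ["clip_01_press_briefing.mp4", "clip_02_talking_head.mp4"]

for n, clip in enumerate(running_order, start=1):
    print(f"\nRound {n}: {clip}")
    for step, prompt in enumerate(ROUND_STEPS, start=1):
        print(f"  {step}. {prompt}")
```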

Use scoring to reward thinking, not bravado

A simple correct-or-incorrect score works for classrooms, but professional audiences benefit from a little more structure.

Try one of these models:

  • Binary scoring gives one point for a correct answer. Use it when the group is large and time is tight.
  • Confidence scoring asks participants to rate certainty on a small scale. Correct high-confidence answers earn more, but wrong high-confidence answers lose more. A minimal scoring sketch appears below.
  • Team justification scoring rewards the quality of the reasoning, not only the final label.
  • Escalation scoring gives points for choosing “uncertain” when the evidence is notably weak and explaining what further checks are needed.

That last one is especially useful in journalism, legal review, and corporate security. In those settings, a premature confident answer can be worse than a cautious pause.
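To make the confidence model concrete, here is a minimal scoring sketch. The 1-to-3 certainty scale, the symmetric penalty, and the single point for an honest "uncertain" are assumptions you can tune, not a standard.

```python
def score_answer(answer, truth, confidence):
    """Confidence-weighted scoring: being wrong at high confidence costs more.

    answer, truth: "real", "fake", or "uncertain"
    confidence: 1 (hesitant) to 3 (certain); ignored for "uncertain"
    """
    if answer == "uncertain":
        return 1  # small reward for an honest pause (escalation-style scoring)
    if answer == truth:
        return confidence   # +1 to +3
    return -confidence      # -1 to -3

# Example round: the clip is actually fake.
votes = [("fake", 3), ("real", 3), ("fake", 1), ("uncertain", 2)]
print([score_answer(a, "fake", c) for a, c in votes])  # [3, -3, 1, 1]
```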

Frustration is part of the lesson. What matters is where you direct it after the reveal.

Research on reactions to deceptive gameplay advertising shows that being misled creates strong frustration and negative response patterns, as discussed in Global Games Forum's coverage of fake gameplay ads. In a workshop, that emotional response can help if you handle it well. Don't shame people for missing a clip. Use the miss to ask what assumption led them there.

Ground rules that improve discussion

Set these at the start:

  • Evidence beats confidence. “It just feels fake” isn't enough.
  • One person talks at a time. Fast rooms can distort the vote.
  • No answer leaks. If someone already knows a clip, they stay quiet until debrief.
  • Uncertain is allowed. That option makes the game more realistic.

Those rules change the tone. The session stops being a guessing contest and starts becoming a verification drill.

Revealing the Truth with AI Video Detector

The reveal is where the workshop either becomes memorable or collapses into “trust me, I checked.” If you want participants to improve, they need to see why a clip was flagged, not just hear the answer.

Start the reveal by showing the tool in the same calm, procedural way a team would use it in practice.

[Image: A person holding a tablet displaying AI video detection software with thermal and object tracking features.]

Run the check live and narrate the process

After the vote, upload the file and tell participants what the system is evaluating. Keep the explanation concrete. Don't drift into abstract AI jargon.

A useful script sounds like this:

  • The system is checking the visual frames for generation artifacts.
  • It's reviewing audio for irregularities that people often miss.
  • It's looking at motion over time, not just still images.
  • It's inspecting metadata for inconsistencies.

That framing matters because people tend to think detection means “zoom in on the face.” Professional analysis is broader than that.

Academic work on fake game detection uses benchmarks such as GAN fingerprint analysis and temporal consistency checks, including motion vector variance, according to the ACM abstract on fake game detection. Those ideas help audiences understand that forensic review isn't guesswork. It relies on repeated signals.
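If you want to show the room what a temporal signal can look like, a rough probe is easy to build with OpenCV's dense optical flow. This is a classroom illustration under assumed settings, not how AI Video Detector or any published benchmark works; treat spikes or implausibly flat stretches purely as prompts for closer review.

```python
import cv2
import numpy as np

def motion_variance_profile(video_path, max_frames=300):
    """Rough temporal-consistency signal: per-frame variance of dense optical flow.

    Output is a list of variances over time, one value per frame pair.
    It is a discussion aid, not a detector.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"Could not read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    variances = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        variances.append(float(np.var(magnitude)))
        prev_gray = gray
    cap.release()
    return variances
```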

If your group needs a plain-language primer first, this practical artificial intelligence explanation helps explain how AI systems process patterns without turning the workshop into a computer science lecture.

Translate the signals into plain English

Participants don't need advanced engineering vocabulary, but they do need interpretable concepts.

Use language like this:

  • Frame-level analysis looks for visual fingerprints and artifact patterns that don't typically appear in natural footage.
  • Audio forensics checks for spectral anomalies, unnatural transitions, or voice inconsistencies.
  • Temporal consistency tests whether movement holds together from moment to moment.
  • Metadata inspection looks for encoding or file-level irregularities that deserve scrutiny (a small sketch follows this list).
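For the metadata point, you can demonstrate what container-level inspection surfaces using ffprobe (shipped with FFmpeg) wrapped in a little Python. The fields pulled out below and the clip_04.mp4 file name are illustrative choices, not a complete checklist.

```python
import json
import subprocess

def probe_metadata(video_path):
    """Dump container and stream metadata with ffprobe (requires FFmpeg installed).

    Missing creation times, odd encoder strings, or mismatched durations
    aren't proof of anything; they're cues to keep digging.
    """
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", video_path],
        capture_output=True, text=True, check=True)
    info = json.loads(result.stdout)
    tags = info.get("format", {}).get("tags", {})
    return {
        "container": info.get("format", {}).get("format_name"),
        "duration_s": info.get("format", {}).get("duration"),
        "encoder": tags.get("encoder"),
        "creation_time": tags.get("creation_time"),
    }

print(probe_metadata("clip_04.mp4"))
```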

For a deeper practitioner view of the evidence categories involved, this guide on what AI detectors look for in video is worth reviewing before you run the session.

A short product demo can help audiences visualize what the reveal stage should feel like in practice.

Make the output part of the lesson

Don't flash a result and move on.

Pause on the confidence score and ask two questions:

  1. What did the room notice that aligned with the result?
  2. What did the tool detect that nobody mentioned?

That comparison is the heart of the workshop. It shows where human judgment is strong and where machine assistance adds value. It also keeps the tool from feeling like a black box that overrules the room.

The best reveal teaches humility without removing agency. People should leave thinking, “I can review more carefully, and I also know when to use analysis support.”

Leading the Debrief and Adapting for Your Audience

The debrief is where participants convert a fun round into a repeatable habit. If you skip it or rush it, the workshop becomes trivia. If you handle it well, people leave with a working framework.

I keep the discussion anchored to three questions: What fooled you, what held up under inspection, and what should happen next when the stakes are real?

Turn findings into action steps

After several rounds, patterns usually emerge. Participants may overtrust fluent speech, overreact to visual weirdness, or ignore source context. Name those patterns directly and tie them to a simple review habit.

Use prompts like these:

  • What was the first clue you relied on, and was it dependable?
  • Which clips needed contextual verification more than visual inspection?
  • When was “uncertain” the strongest professional answer?

A strong debrief doesn't ask who won. It asks which judgment habits are safe to reuse outside the room.

There's also a legal and policy angle. A significant gap still exists in standards for deceptive video marketing across major jurisdictions, which makes internal review frameworks important for legal and enterprise teams, as discussed in this analysis of deceptive video marketing ambiguity.

That's why the workshop should end with a workflow, not just a score.

Adapting the Real or Fake Game for Different Audiences

Here's a practical way to tailor the same exercise.

Audience            | Focus                          | Example clip                                      | Key debrief point
Newsroom            | Vetting user-submitted footage | Breaking-news social clip with unclear origin     | Verification starts with provenance, not only visuals
Legal team          | Evidence handling              | Video offered as supporting proof in a dispute    | Document reasoning and preserve chain of review
Enterprise security | Impersonation and fraud        | Executive-style message requesting urgent action  | Escalate suspicious requests outside the video channel
Classroom           | Media literacy                 | Viral influencer-style clip                       | Confidence isn't evidence
Platform moderation | Triage decisions               | Short reposted clip with edits and unclear source | Fast review still needs a consistent rubric

If you want participants to continue practicing after the session, send them a follow-up resource on how to spot AI video so the workshop becomes part of a larger habit, not a one-off event.

The most useful version of a real or fake game doesn't end with “got it right” or “got it wrong.” It ends with a better decision process. That's what teams need.


If you want to run this workshop with a live forensic reveal, AI Video Detector gives teams a privacy-first way to analyze video authenticity using frame-level analysis, audio forensics, temporal consistency, and metadata inspection. It's a practical fit for newsrooms, legal review, fraud prevention, moderation, and education when you need evidence-based support instead of guesswork.