The faces you see online are not always real. AI can generate photorealistic human faces from scratch, swap faces onto existing video in real time, and produce audio that sounds exactly like a specific person saying things they never said.

This isn’t a future threat. It’s the current landscape. Here’s how to recognize it.

Why This Matters

AI-generated faces appear in:

  • Fake news articles with invented experts and quotes
  • Romance scam profiles on dating apps and social media
  • Astroturfing campaigns with armies of fake reviewers
  • Political disinformation using fabricated video statements
  • Business fraud (“CEO” video calls authorizing wire transfers)
  • Fake stock photos and testimonials on scam websites

Being able to spot the signs isn’t paranoia — it’s a basic media literacy skill in 2026.

Visual Tells in AI-Generated Still Images

AI image generation has improved dramatically, but patterns still emerge. Here’s what to look for:

Eyes and Pupils

Eyes are one of the hardest things for AI to get right consistently. Look for:

  • Pupils that aren’t round or are different sizes
  • Irises with inconsistent texture or unnatural coloring
  • Reflections in the eyes that don’t match each other or the environment
  • Eyes that look slightly unfocused or don’t quite align

Ears and Hair

Ears are complex — AI frequently distorts them. Look for asymmetry, missing cartilage detail, or ears that blend unnaturally into hair. Hair is also a common failure point: look for strands that merge strangely at the edges, hair that passes through clothing or accessories, or textures that look like painted fiber rather than individual strands.

Teeth and Lips

Many AI face generators struggle with teeth. Look for:

  • Too many or too few teeth
  • Teeth that blend into one another with no gaps or definition
  • Lips that don’t quite meet symmetrically

Background and Edges

Where the face meets the background is often where AI struggles most. Look for:

  • Blurring or smearing at the hair-background boundary
  • Accessories (glasses, earrings, jewelry) that have strange geometry or are half-clipped into the face
  • Background elements that look normal individually but don’t make spatial sense together (two windows that don’t line up, mismatched lighting angles)

Symmetry

Real faces are slightly asymmetric. AI faces are sometimes too symmetric, or have asymmetries that appear in unexpected places — like two different earring styles, or facial features that seem slightly off-axis.
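
A quick way to sanity-check this is to mirror the photo and compare it with the original: real faces diverge noticeably, while some generated faces come out eerily close to their own mirror image. Below is a minimal sketch using Pillow and NumPy; the file path is a placeholder, and it assumes the face is roughly centered and facing the camera, so treat the score as a rough hint rather than a verdict.

  # Rough symmetry heuristic: mirror the image and measure how much it
  # differs from the original. Unusually low scores can point to an
  # unnaturally symmetric (possibly generated) face.
  import numpy as np
  from PIL import Image, ImageOps

  def symmetry_score(path: str) -> float:
      img = Image.open(path).convert("L").resize((256, 256))  # grayscale, fixed size
      arr = np.asarray(img, dtype=np.float32)
      mirrored = np.asarray(ImageOps.mirror(img), dtype=np.float32)
      # Mean absolute pixel difference between the image and its mirror,
      # normalized to 0..1 (0 means perfectly symmetric).
      return float(np.abs(arr - mirrored).mean() / 255.0)

  print(symmetry_score("profile_photo.jpg"))  # placeholder path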

Visual Tells in Deepfake Video

Deepfake video detection is harder than still image detection because you’re watching motion, not scrutinizing a static image. But the tells are there:

Flickering and temporal inconsistency. The face may be perfectly stable in one frame and slightly distorted in the next. Pay attention to the edges of the face — the boundary between real hair and the swapped face often flickers.
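
Stepping through a clip frame by frame makes this kind of instability much easier to see than watching at full speed. A rough way to find the frames worth pausing on is to score how much each frame differs from the previous one, as in the sketch below; the file path is a placeholder, and the score covers the whole frame rather than isolating the face.

  # Frame-to-frame change score as a crude flicker indicator. Spikes mark
  # frames worth pausing on and inspecting closely; this does not isolate
  # the face region or prove anything on its own.
  import cv2

  cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder path
  ok, prev = cap.read()
  prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

  frame_idx = 1
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      score = cv2.absdiff(gray, prev_gray).mean()  # mean pixel change, 0-255
      print(f"frame {frame_idx}: change score {score:.1f}")
      prev_gray = gray
      frame_idx += 1

  cap.release()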

Unnatural blinking. Early deepfakes blinked rarely or unnaturally. Modern ones are better, but blink rate and patterns are still sometimes off.
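
If you want to put a number on blinking rather than judge it by eye, a common approach is the eye aspect ratio (EAR): eye height divided by eye width, computed from six landmark points around each eye, which drops sharply during a blink. The sketch below shows only the ratio itself; extracting the landmarks from video frames with a face-landmark library is assumed, and the example coordinates are made up.

  # Eye aspect ratio (EAR) from six (x, y) landmarks around one eye.
  # The value drops sharply when the eye closes; counting drops below a
  # threshold (commonly around 0.2) over time approximates blink rate.
  # Getting the landmarks from actual video frames is assumed, not shown.
  import math

  def eye_aspect_ratio(pts):
      # pts: [p1..p6]; p1 and p4 are the eye corners, p2/p3 the upper lid,
      # p5/p6 the lower lid.
      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])
      vertical = dist(pts[1], pts[5]) + dist(pts[2], pts[4])
      horizontal = dist(pts[0], pts[3])
      return vertical / (2.0 * horizontal)

  # Made-up landmark coordinates for an open eye (EAR around 0.3):
  print(eye_aspect_ratio([(0, 5), (3, 6.5), (7, 6.5), (10, 5), (7, 3.5), (3, 3.5)]))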

Lighting mismatch. The face and the body are often lit by different light sources when the original video and the face swap don’t match. The face may look slightly brighter, flatter, or more smoothly lit than the neck and shoulders below it.

Audio-visual sync. In audio deepfakes layered onto video, watch the lip movements against the spoken words. Subtle mismatches — especially on lip-closure consonants like P, B, and M, where the lips must visibly come together — are often visible.

Head movement. Large or fast head turns often degrade deepfake quality. If a face looks fine in a frontal shot but distorts when the person turns their head quickly, that’s a red flag.

Detection Tools

Visual inspection is a starting point, but dedicated tools go further:

Google Reverse Image Search — Right-click any image → Search image. If a “real person” photo appears across dozens of unrelated profiles, it’s likely fake or stock.

TinEye (tineye.com) — Similar to Google reverse image search, with useful date-sorting (oldest match first) that helps trace an image back to its original source.
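
When a reverse search turns up several copies of the same face, a perceptual hash can help confirm whether two files really are the same underlying photo despite resizing or recompression. A minimal sketch using the imagehash package (file paths are placeholders):

  # Perceptual hashing: visually similar images produce similar hashes, so
  # a small hash distance suggests two files are the same underlying photo
  # even after resizing or recompression. Requires: pip install imagehash
  from PIL import Image
  import imagehash

  h1 = imagehash.phash(Image.open("found_on_profile.jpg"))    # placeholder
  h2 = imagehash.phash(Image.open("candidate_original.jpg"))  # placeholder

  distance = h1 - h2  # Hamming distance between the two hashes
  # Small distances (roughly 0-5) usually mean the same photo; large
  # distances mean different images. Heavy crops can defeat this check.
  print(f"hash distance: {distance}")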

Hive Moderation (hivemoderation.com) — Free AI-generated image detector. Upload or paste an image URL; it returns a confidence score for AI generation.

Illuminarty (illuminarty.com) — Detects AI-generated images and sometimes identifies which model produced them (Midjourney, Stable Diffusion, DALL-E).

FakeCatcher (Intel) — Intel’s deepfake detection system, designed for real-time video analysis. Not consumer-facing yet, but worth knowing it exists.

Reality Defender — Enterprise-focused but increasingly accessible tool for video deepfake detection.

Note: No detection tool is 100% reliable. As generation quality improves, detection tools struggle to keep up. Use them as one signal, not the final word.
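
If you need to screen many images rather than one at a time, some of these services also expose HTTP APIs that can be scripted. The sketch below only illustrates that pattern: the endpoint URL, authentication header, and response field are hypothetical placeholders, so check the documentation of whichever service you actually use.

  # Sketch of batch-checking a folder of images against a hosted detector.
  # The URL, credential, and "ai_probability" field are hypothetical
  # placeholders; real services differ, so consult their API docs.
  from pathlib import Path
  import requests

  API_URL = "https://example-detector.invalid/v1/detect"  # hypothetical endpoint
  API_KEY = "your-api-key-here"                           # hypothetical credential

  for image_path in sorted(Path("images_to_check").glob("*.jpg")):
      with open(image_path, "rb") as f:
          resp = requests.post(
              API_URL,
              headers={"Authorization": f"Bearer {API_KEY}"},
              files={"image": f},
              timeout=30,
          )
      resp.raise_for_status()
      result = resp.json()
      # Treat any score as one signal among several, not a verdict.
      print(image_path.name, result.get("ai_probability"))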

Behavioral Red Flags Beyond Visual Tells

Sometimes you can’t see the manipulation. Here’s what else to watch for:

No other photos exist. A person with one profile photo and no tagged appearances elsewhere, no old posts, no variety of expressions or angles — suspect.

The profile is new. A LinkedIn account created last month with 500 connections and a suspiciously polished profile photo fits a familiar fake-account pattern.

The “expert” can’t be verified. If an article quotes Dr. Someone from the University of Somewhere, and that person doesn’t appear in faculty directories, academic papers, or any other verifiable source, the expert may not exist.

The video is conveniently low-resolution. Deepfake creators sometimes compress video heavily to obscure artifacts. High-stakes video (a politician saying something damaging, a CEO making announcements) that’s surprisingly low quality is a warning sign.
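
Checking a clip’s basic properties takes seconds and turns “suspiciously low quality” into a concrete number. A quick sketch with OpenCV (the file path is a placeholder):

  # Print basic properties of a video file so "low quality" becomes a
  # measurable fact rather than an impression.
  import cv2

  cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder path
  width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
  height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
  fps = cap.get(cv2.CAP_PROP_FPS)
  frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
  cap.release()

  print(f"resolution: {width}x{height}")
  print(f"frame rate: {fps:.2f} fps")
  if fps > 0:
      print(f"duration:   {frames / fps:.1f} seconds ({frames} frames)")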

The audio doesn’t match. Voice cloning is easier than video deepfakes. If a voice sounds right but the cadence, pacing, or word choice is subtly off, consider whether the audio might be synthetic.

What to Do If You Suspect Something Is Fake

Don’t share it without verifying. The spread of deepfakes depends on people sharing before thinking. If something looks suspicious:

  1. Run a reverse image search
  2. Check if the original source is verifiable
  3. Search for the same story or clip from another source
  4. Upload to a detection tool
  5. Look for reporting from established fact-checking organizations (Snopes, PolitiFact, AFP Fact Check)

You won’t be right 100% of the time. But slowing down before amplifying is the most effective defense we have right now.