How to Spot AI-Generated Text, Images, and Video (Before You Get Fooled)

AI fakes are everywhere. Here's how to spot AI-generated text, images, and deepfake videos using free tools and your own eyes.

Key Takeaways

  • AI-generated text can be detected by tools like GPTZero (99%+ accuracy) and Originality.ai, but no detector is perfect — false positives happen, and skilled editors can bypass most checks.
  • AI images have telltale signs: mangled hands, too-smooth skin, warped text, and inconsistent lighting. But these flaws are disappearing fast with each model update.
  • AI videos (deepfakes) are the hardest to spot. Look for unnatural blinking, mismatched lip sync, and inconsistent shadows. Detection tools like Sensity and Reality Defender are your best bet.
  • No single method is foolproof. The best approach combines automated tools with human judgment — look for patterns, check sources, and trust your instincts when something feels off.
  • The C2PA standard is emerging as a long-term solution: cryptographic signatures that prove content is authentic at the point of creation, rather than trying to detect fakes after the fact.

Why This Matters More Than You Think

Last month, I received an email from a friend asking if I'd seen a video of a politician making a shocking statement. The video looked real. The voice sounded right. The lip movements matched the words. I almost shared it before something nagged at me — a slight glossiness in the skin, a too-perfect backdrop.

It was a deepfake. And I almost fell for it.

Here's the uncomfortable reality: an estimated 8 million deepfakes were shared online in 2025, up from half a million just two years earlier. AI-generated text floods social media, academic submissions, and product reviews. AI images win photography contests. And the technology behind all of it gets better every single month.

Whether you're a teacher checking student papers, a hiring manager reading cover letters, a journalist verifying sources, or just someone scrolling through your social feed — you need to know how to tell what's real and what isn't. I've spent weeks testing every major detection tool and learning the visual, textual, and audio patterns that give AI content away. Here's everything I've found.

Spotting AI-Generated Text

What AI Writing Looks Like

AI-generated text has gotten remarkably good. ChatGPT, Claude, and Gemini can all produce fluent, grammatically correct prose that passes casual inspection. But patterns remain — subtle ones that your brain might notice even before you can articulate why.

Uniform sentence rhythm. Human writing has natural variation — short punchy sentences followed by long, winding ones. AI tends to produce text with suspiciously consistent sentence lengths. Read a paragraph aloud. If every sentence feels the same "weight," that's a signal.
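
This check is easy to rough out in code. The sketch below is a toy heuristic of my own, not any detector's actual method, and the 0.35 threshold is an arbitrary guess: it scores how evenly sentence lengths are distributed, which is the "uniform rhythm" signal described above.

```python
import re
import statistics

def sentence_rhythm(text: str) -> dict:
    """Rough 'burstiness' check: how much do sentence lengths vary?"""
    # Naive split on ., !, ? is good enough for a quick look.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 3:
        return {"sentences": len(lengths), "note": "too short to judge"}
    mean = statistics.mean(lengths)
    spread = statistics.stdev(lengths)
    return {
        "sentences": len(lengths),
        "mean_words": round(mean, 1),
        "stdev_words": round(spread, 1),
        # Low spread relative to the mean = suspiciously even rhythm.
        "uniform_rhythm": spread / mean < 0.35,  # threshold is a guess, tune it
    }

sample = (
    "It is important to note that AI offers many benefits. "
    "There are various perspectives worth considering here. "
    "This technology can be particularly useful in many cases. "
    "It is also worth noting that challenges certainly remain."
)
print(sentence_rhythm(sample))
```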

Hedging language everywhere. AI loves qualifiers: "It's important to note that," "While there are various perspectives," "This can be particularly useful." Humans make assertions. AI hedges. If a piece reads like it's trying not to offend anyone about anything, AI probably wrote it.

Perfect structure, no personality. AI produces well-organized text with clear topic sentences and logical transitions. Which sounds like a compliment — until you realize that real human writing is messier. People go on tangents, use unexpected metaphors, make jokes that don't quite land. Perfection is the tell.

Vocabulary choices. Certain words appear disproportionately in AI text: "delve," "crucial," "furthermore," "landscape," "notably," "comprehensive." If you see three of these in one article, your suspicion should spike. Our guide to writing with AI covers exactly how to avoid these patterns when using AI as a drafting tool.
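
If you check a lot of text, a crude word counter saves time. This is only a sketch built on the word list above; the list is far from exhaustive, and plenty of human writers use these words too.

```python
import re
from collections import Counter

# The words flagged above as disproportionately common in AI-generated prose.
AI_FAVORITES = {"delve", "crucial", "furthermore", "landscape", "notably", "comprehensive"}

def flag_ai_vocabulary(text: str) -> Counter:
    """Count hits on AI-favored words: a hint worth noting, never proof on its own."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in AI_FAVORITES)

article = "Notably, we must delve into the evolving landscape of comprehensive AI policy."
hits = flag_ai_vocabulary(article)
# Rule of thumb from above: three or more of these in one piece should raise suspicion.
print(hits, "-> suspicious" if len(hits) >= 3 else "-> nothing unusual")
```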

Manual Detection Techniques

  • The "surprise" test: Does anything in the text surprise you? A unique opinion, an unexpected analogy, a personal detail? If every sentence is exactly what you'd expect, it's likely AI.
  • Check the facts: AI confidently states things that aren't true. Verify any specific claims, statistics, or quotes. If a "study from Harvard" doesn't exist when you search for it, you're reading AI output.
  • Look for the thesis: Human writers usually have a point of view. AI writing often sounds balanced to the point of saying nothing. "There are advantages and disadvantages to both approaches" is classic AI fence-sitting.

[Image: digital matrix of text data]
AI text detection relies on statistical patterns — consistent rhythm, hedging language, and suspiciously perfect structure are the most reliable human-readable signals.

Spotting AI-Generated Images

The Classic Tells (Still Work in Late 2025)

Hands and fingers. This is still the single most reliable visual check. AI struggles with hand anatomy — extra fingers, impossible joint angles, fingers that merge into each other. Always zoom in on hands first.

Text in images. Any text that appears within an AI-generated image is usually garbled, misspelled, or nonsensical. Signs, book covers, T-shirt slogans, street names — if the text doesn't make sense, the image is likely AI-generated.

Background inconsistencies. Look at the edges of the main subject. Where the person or object meets the background, you'll often see smearing, warping, or impossible geometry — a fence that bends into a tree, a chair leg that merges with the floor.

Too-perfect skin. AI-generated faces often have unnaturally smooth skin — no pores, no subtle discoloration, no asymmetry. Real human skin has texture. If a face looks like it's been run through a beauty filter, be suspicious.

Symmetry in asymmetric things. Earrings that are slightly different, hair that falls differently on each side, wrinkles that aren't mirrored — these are signs of real photos. AI tends to over-symmetrize faces and objects.

Newer Tells (As Models Improve)

Lighting direction. In real photos, light comes from consistent sources. AI sometimes generates images where shadows point in different directions, or where the light on a face doesn't match the light on the background.

Reflections. Mirrors, glasses, windows, and water surfaces are hard for AI. The reflection might show something that doesn't match the scene, or the reflection angle might be physically impossible.

Repetitive patterns. Look at crowds, bookshelves, or backgrounds with repeated elements. AI often generates subtle duplicates — the same face appearing twice in a crowd, or the same book spine repeated.
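
Repetition is one tell you can partially automate. The sketch below is a rough heuristic, assuming the Pillow and imagehash packages; the tile size and distance threshold are arbitrary, and uniform regions like sky or walls will also match, so treat hits only as places to zoom in on.

```python
from itertools import combinations
from PIL import Image
import imagehash  # pip install pillow imagehash

def repeated_tiles(path: str, tile: int = 96, max_distance: int = 4):
    """Find near-identical patches: a crude proxy for AI duplicating faces, books, or textures."""
    img = Image.open(path).convert("RGB")
    patches = []
    for top in range(0, img.height - tile + 1, tile):
        for left in range(0, img.width - tile + 1, tile):
            crop = img.crop((left, top, left + tile, top + tile))
            patches.append(((left, top), imagehash.phash(crop)))
    # Subtracting two perceptual hashes gives their Hamming distance; small = very similar.
    return [
        (pos_a, pos_b)
        for (pos_a, ha), (pos_b, hb) in combinations(patches, 2)
        if ha - hb <= max_distance
    ]

for a, b in repeated_tiles("crowd_photo.jpg"):
    print(f"Suspiciously similar patches at {a} and {b}")
```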

If you've been exploring AI image creation yourself — perhaps through tools like those in our AI image generator comparison — you'll find that understanding how these tools work makes spotting their output much easier.

Spotting AI-Generated Video and Deepfakes

Audio Deepfakes

Voice cloning has become alarmingly accurate. A few seconds of sample audio can now produce a convincing voice clone. The tells are subtle:

  • Flat emotional range: Cloned voices often lack the micro-variations in tone that come with genuine emotion. The voice sounds "right" but feels empty.
  • Breathing patterns: Real speech includes natural pauses for breath. AI-generated speech sometimes skips these, producing unnaturally long sentences without audible inhales (a rough script for this check follows the list).
  • Background audio mismatch: If the voice sounds studio-clean but the video shows an outdoor setting, something's wrong.
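
The missing-breath tell can be roughed out with an audio library. This is a minimal sketch assuming librosa; the silence threshold and the 0.2 to 0.8 second "breath-length" window are guesses, so use it as a prompt to listen again, not as a verdict.

```python
import librosa  # pip install librosa

def pause_profile(path: str, top_db: int = 35) -> dict:
    """Measure silent gaps between speech segments; real speech usually includes breath pauses."""
    y, sr = librosa.load(path, sr=None, mono=True)
    segments = librosa.effects.split(y, top_db=top_db)  # non-silent [start, end] sample ranges
    gaps = [(segments[i + 1][0] - segments[i][1]) / sr for i in range(len(segments) - 1)]
    # Pauses of roughly 0.2-0.8 s often correspond to breaths; their near-total absence
    # across a long clip matches the tell described in the list above.
    breath_like = [g for g in gaps if 0.2 <= g <= 0.8]
    return {"speech_segments": len(segments), "breath_like_pauses": len(breath_like)}

print(pause_profile("suspicious_voicemail.wav"))
```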

Video Deepfakes

Video deepfakes — face-swapped or entirely generated videos — are the hardest AI content to detect with the naked eye. But patterns exist:

Unnatural blinking. Early deepfakes barely blinked at all. Current ones blink, but the rhythm is often off — too regular, too fast, or with eyelids that don't fully close.

Lip sync drift. Watch carefully where mouth movements meet speech. In deepfakes, there's often a subtle delay or misalignment between what you hear and what you see, especially on hard consonants like "b," "p," and "t."

Edge artifacts around the face. The boundary where a swapped face meets the original head often shows a faint line, color shift, or blurring. This is most visible near the jawline, hairline, and ears. Pausing the video and stepping frame-by-frame makes these artifacts much more obvious.

Temporal inconsistency. Fast-forward through the video. Do the person's accessories (glasses, earrings, collar) stay consistent? Does their skin tone shift? Real video maintains consistency; deepfakes sometimes "glitch" between frames.
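
Stepping through frames is much easier if you dump stills to disk first. Here's a minimal sketch assuming opencv-python; the file names are placeholders.

```python
import cv2  # pip install opencv-python

def dump_stills(video_path: str, every_n: int = 5, prefix: str = "still") -> int:
    """Save every Nth frame so jawlines, hairlines, and accessories can be compared side by side."""
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(f"{prefix}_{index:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Flip through the saved stills and watch for edge artifacts, shifting skin tone, and glitching accessories.
print(dump_stills("viral_clip.mp4"), "stills written")
```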

[Image: close-up of a screen showing frame-by-frame video analysis]
Frame-by-frame analysis reveals deepfake artifacts invisible at normal playback speed — look for edge blurring, inconsistent shadows, and lip sync drift.

The Best Detection Tools (Free and Paid)

Text Detection

Tool | Accuracy | Price | Best For
GPTZero | ~99% (RAID benchmark) | Free tier + $10/mo | Educators, journalists — lowest false positive rate (0.24%)
Originality.ai | ~99% | From $15/mo | Content marketers — includes plagiarism check
Copyleaks | ~95% | Free tier + paid | Academic institutions — multi-language support
Turnitin AI Detection | ~98% | Institutional license | Universities — integrated into existing workflow

My recommendation: Start with GPTZero's free tier for casual checks. Its false positive rate of 0.24% — roughly one in 400 documents — is the lowest I've found, which means it's less likely to wrongly accuse a human writer. For professional use, Originality.ai's combined AI detection + plagiarism check offers the most value.
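
GPTZero also exposes an API if you need to check documents in bulk. The endpoint, header, and field names below are assumptions from my reading of GPTZero's public API documentation and may change, so confirm them against the current docs before relying on this sketch.

```python
import requests  # pip install requests

# Assumed endpoint and payload shape; verify against GPTZero's current API reference.
API_URL = "https://api.gptzero.me/v2/predict/text"

def check_with_gptzero(text: str, api_key: str) -> dict:
    """Submit a document and return GPTZero's raw prediction payload."""
    resp = requests.post(
        API_URL,
        headers={"x-api-key": api_key},
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

result = check_with_gptzero("Paste the text you want to check here.", api_key="YOUR_API_KEY")
# Treat the returned probabilities as one piece of evidence, not a verdict.
print(result)
```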

Image Detection

Tool | What It Does | Price
AI or Not | Classifies images as AI or human-made | Free tier available
Hugging Face Detector | Open-source AI image detection models | Free
Sensity | Multi-layered analysis (visual + metadata) | Enterprise pricing
TruthScan | AI image detection with confidence scores | Free tier available
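
If you'd rather run image detection locally, the Hugging Face route is a few lines with the transformers pipeline. The model name below is just one example of the open-source detectors on the Hub; browse for a current, well-documented one and expect accuracy to vary by subject and generator.

```python
from transformers import pipeline  # pip install transformers torch pillow

# Example model only; swap in whichever open-source detector on the Hub is current and well reviewed.
detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

for result in detector("suspicious_photo.jpg"):
    print(f"{result['label']}: {result['score']:.1%}")
# As with text tools, treat the scores as evidence to weigh alongside the visual checks above.
```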

Video/Deepfake Detection

Tool | Approach | Price
Reality Defender | Multimodal detection (video + audio + image) | Enterprise
Sensity | Real-time monitoring + forensic analysis | Enterprise
MIT Detect Fakes | Educational tool to train your eye | Free

For everyday use, start with the free tools. AI or Not for images and GPTZero for text will handle 90% of what you encounter. Enterprise-grade tools like Sensity and Reality Defender are designed for newsrooms, financial institutions, and government agencies dealing with high-stakes verification.

Why Detection Will Always Be an Arms Race

Here's the uncomfortable truth that detection tool companies don't advertise: detection is fundamentally harder than generation.

Every time a detection tool learns to spot a pattern, the next model version is trained to avoid that pattern. DALL-E 2 produced obviously AI hands. DALL-E 3 mostly fixed them. Midjourney v6 produces hands that are indistinguishable from photos in many cases. The same pattern applies to text — GPT-4's writing is harder to detect than GPT-3.5's, and the gap will continue narrowing.

Detection tools currently work well because they're trained on the same models we're trying to detect. But as new models emerge, there's always a lag between release and reliable detection. GPTZero's 99% accuracy is measured against known models. Against a brand-new, unreleased model, no detector has been tested.

This doesn't mean detection is useless — it means you shouldn't rely on any single tool as absolute truth. Use detection tools as evidence, not verdicts.

The Real Solution: Content Provenance

The long-term answer to "is this real?" isn't better detection — it's better proof of authenticity.

The Coalition for Content Provenance and Authenticity (C2PA) is developing a standard where authentic content carries a cryptographic signature from the moment of creation. Think of it like a digital certificate of authenticity. When you take a photo with a C2PA-compatible camera, the image gets a tamper-proof seal showing when, where, and how it was captured.
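
The full C2PA manifest format is more involved, but the core idea is ordinary public-key cryptography. The sketch below, using the cryptography package, is only a conceptual illustration of that idea, not C2PA itself: the capture device signs a digest of the file, and anyone holding the public key can later confirm the bytes are untouched.

```python
from hashlib import sha256
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture time: the camera signs a digest of the image bytes with its private key.
device_key = Ed25519PrivateKey.generate()
image_bytes = open("photo.jpg", "rb").read()
signature = device_key.sign(sha256(image_bytes).digest())

# Later: anyone with the device's public key can confirm the file is exactly what was signed.
public_key = device_key.public_key()
try:
    public_key.verify(signature, sha256(image_bytes).digest())
    print("Signature valid: the bytes match what the device signed.")
except InvalidSignature:
    print("Signature invalid: the file was modified after signing.")
```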

Major players are already adopting this: Adobe ships Content Credentials, Google's SynthID watermarks AI-generated content, and camera manufacturers like Leica and Sony are building C2PA support directly into their hardware.

The shift in thinking is significant: instead of asking "is this fake?" we'll ask "can this prove it's real?" Content without provenance information won't necessarily be fake — but it will be unverified, and that distinction will increasingly matter.

[Image: network of connected digital verification nodes]
Content provenance standards like C2PA flip the question: instead of detecting fakes, authentic content proves itself real through cryptographic signatures.

Frequently Asked Questions

Can AI detection tools wrongly flag human writing?

Yes. False positives are a real problem. GPTZero's false positive rate of 0.24% sounds low, but across millions of checks, that's thousands of human writers wrongly flagged. Non-native English speakers and people who write in a formal, structured style are disproportionately affected. No AI detection result should be treated as definitive proof — always combine tool results with human judgment and context.

Can someone edit AI text to avoid detection?

Absolutely. Light editing — adding personal anecdotes, varying sentence length, injecting opinions — can make AI-generated text undetectable by current tools. This is why detection tools are better at catching lazy, unedited AI output than sophisticated human-AI collaboration. The more a human touches the text, the harder it is to detect the AI component.

Are AI-generated images always detectable?

Not anymore. Top-tier image generators like Midjourney v6 and DALL-E 3 produce images that can fool both humans and detection tools in many cases. Detection accuracy varies wildly depending on the subject (faces are easier to detect than landscapes), the model used, and the resolution. A casual photo of a sunset generated by AI might be truly undetectable. A portrait with visible hands is still usually catchable.

Is it illegal to create deepfakes?

It depends on jurisdiction and intent. Creating a deepfake for satire or entertainment is generally legal in most countries. Using a deepfake for fraud, non-consensual explicit content, election interference, or identity theft is illegal in an increasing number of jurisdictions. The EU's AI Act and various U.S. state laws are rapidly expanding legal restrictions on malicious deepfakes. The technology itself isn't illegal — the application can be.

How do I verify if a viral video is real?

Use a multi-step approach: (1) Check if established news organizations have reported on it — if a video is truly newsworthy but only appears on social media, that's a red flag. (2) Run a reverse image search on a still frame using Google Images or TinEye. (3) Check the account that posted it — how old is the account? What else have they posted? (4) Upload it to a free detection tool like AI or Not. (5) Watch it at 0.25x speed and look for the visual artifacts described in this article.
