I get pitched AI startups almost every week. Some founders come with code, clear metrics and a prototype you can try in 10 minutes. Others arrive with buzzwords, an impressive slide deck, and a demo that somehow never quite answers the simple question: what exactly does the product do, and how will it make money?
Over the years I’ve learned to spot a few consistent warning signs – red flags that investors and partners too often overlook when the word “AI” is involved. These aren’t about hyperbole or marketing polish; they’re about substance. If you’re an investor, operator, or simply someone trying to separate genuine innovation from often-costly hype, here are the three red flags I see most frequently, and the practical checks I use to test whether an AI pitch is real.
Red flag: vague problem definition wrapped in technical jargon
I can forgive a founder who oversells their vision. I can't forgive one who can’t clearly explain the problem they're solving in plain English. AI is being framed as a solution to everything, and the result is products that solve nothing well.
What to listen for:
My practical checks:
When founders can’t communicate the problem in plain language, they often rely on the glamour of AI to mask weak product-market fit. That’s where deals go south.
Red flag: proprietary claims without reproducible evidence
“Proprietary algorithm,” “patent pending,” and “we beat GPT-4 on XYZ benchmark” are phrases I hear a lot. Sometimes they’re true. Often, they’re not. The dangerous middle ground is when founders make technical claims that sound plausible but offer no way to verify them.
What to look for:
My practical checks:
Proprietary claims aren’t inherently suspicious, but they must be testable. If founders can’t or won’t let you verify, treat it as a major red flag.
Red flag: dependence on general-purpose models with no clear differentiation
Building on top of OpenAI, Anthropic, or Hugging Face models is sensible – those APIs are powerful and accelerate development. The problem arises when a startup’s “secret sauce” is merely how it strings together prompts or the UX layer without meaningful differentiation or defensibility.
Things I see go wrong:
My practical checks:
How I verify quickly — practical checklist I use during diligence
| Check | Why it matters | Quick pass/fail test |
| Customer story + measurable outcome | Confirms product-market fit and real impact | Founder gives one concrete customer example and metric |
| Reproducible evaluation | Verifies technical claims and prevents hype | Founder shares test data or allows an engineer test |
| Data ownership and cost model | Determines defensibility and profitability | Founder outlines data pipeline, margins under stress |
Beyond that checklist, I always seek one of three validating signs: a real paying customer, code I can review (even a small repo), or a technical founder who can answer detailed questions about failure modes and mitigation. If none of those are present, I become more cautious.
Red flags investors often miss
To close, here are the practical behaviors I see investors overlook time and again:
AI investing isn’t about being the most optimistic person in the room. It’s about asking disciplined, technical, and customer-focused questions. If you take away one practical habit from this piece, let it be this: demand testability. Ask for something you can try, measure or reproduce in a short period. If the founder pushes back, ask yourself why.