Why You Should Retry Every AI Tool You've Given Up On

  • Writer: Matt Pisoni
  • Apr 2
  • 2 min read

And just like that, it's dead. They cross it off the list. Tell the team. Everyone nods and goes back to doing it manually — forever. It happens constantly. And honestly, it might be the most expensive mistake founders are making with AI right now.

The Problem

Someone finds a task they want to automate. They try an AI tool. It underdelivers — maybe it's 70% there, maybe it fails completely. And they conclude: "AI can't do that." Here's the thing — that conclusion isn't wrong. It's just incomplete. The accurate version is: "AI can't do that yet."

Two words. Completely different implication. AI capability isn't a fixed ceiling. It's an exponential curve. What failed six months ago might work perfectly today. The agent that couldn't handle your workflow in Q1 might crush it in Q3. Treating a point-in-time failure as a permanent verdict is how you fall behind.

The Fix: Wait three to six months and try it again

Simple rule. I mean it literally.

If something was important enough to try, put a date on your calendar three to six months from now and try it again. That's it.

Tried an AI tool to automate part of your sales process and it wasn't reliable enough? Calendar it. Tested an agent for customer support and the accuracy wasn't there? Calendar it. Experimented with code generation and it kept producing wrong output? Calendar it.

Six months is the sweet spot. Long enough for meaningful capability jumps — models improve, tools get refined, reliability goes from 40% to 90%. Short enough that you still remember why the use case mattered. Don't just think "I'll revisit this someday." That's how someday becomes never.

My Suggestion

Go back through your "AI failed here" list.

If you tested something six months ago and wrote it off, that's your homework. Try it again. I'd bet at least one thing on that list works now. The models have improved. The tools are better. The integrations are more robust. The failure rate from six months ago tells you almost nothing about today.

The founders who win at this aren't the ones who get it right on the first try. They're the ones who keep testing, keep retrying, and understand that "doesn't work yet" is fundamentally different from "doesn't work."


Set a recurring three-to-six-month reminder: "Retry AI experiments that failed."

When it goes off — pick three, run the test.

On an exponential curve, persistence beats perfection.

 
 
bottom of page