The AI project failed. The vendor blames the data. The team blames the vendor. Leadership blames the timeline. Everyone's wrong.
The project failed because nobody asked the right questions before it started.
I've had this conversation dozens of times: with founders who spent six figures on an AI implementation that technically worked and delivered nothing, with operations leaders who watched a promising pilot die in rollout, with executives who are now more sceptical of AI than they were before they tried it. The pattern is consistent enough that I can tell within the first ten minutes of a conversation whether a project is going to deliver or not.
The indicator isn't budget. It isn't the technology stack. It isn't even the vendor.
It's whether anyone did the diagnosis before they started building.
The Mistake: Solution Before Problem
Here's what usually happens. A business decides it wants to "do AI." Leadership aligns. Budget gets approved. The vendor shortlist gets put together. Demos happen. A vendor gets selected. The project kicks off.
At no point in that sequence did anyone rigorously define the problem being solved.
The vendor shows you what they can build. You decide it looks good. You assume the fit is there because the demo is impressive. Six months later, the system is live, the team isn't using it, and you're trying to figure out what went wrong.
What went wrong was the first conversation.
A vendor who demos their product before understanding your business isn't offering you AI implementation. They're offering you a product and hoping it fits your problem. Those are different things. The distinction costs real money when you pick the wrong one.
What Diagnosis Actually Looks Like
Every AI project we take on at Mostly Human starts the same way: we spend the first engagement understanding the business, not the technology. That means asking questions that feel almost too basic:
What does your team actually do all day? Not the job title version: the real version. What are the repetitive tasks that consume hours every week? Where are the bottlenecks that people have just accepted as part of the job?
Where does work slow down or get dropped? Not the polished process diagram on the wall: the actual flow. Where does stuff pile up? Where does quality drop under pressure? Where do people do the same task manually because "the system doesn't handle it"?
What does a good outcome look like in six months, in measurable terms? Not "more efficient" or "better visibility." Actual numbers. Hours saved. Error rates reduced. Decisions made how much faster.
What has already been tried? Virtually every business we work with has attempted something before. Understanding what was tried and why it didn't land tells you more about the constraints than any technical assessment.
These questions feel slow. They are slow. And they're the reason the systems we build actually get used.
A Concrete Example
One client came to us after a failed reporting automation project. The previous vendor had built a dashboard that pulled data from three systems and generated weekly reports automatically. Technically, it worked perfectly.
The ops team wasn't using it.
When we spoke to the team, the answer was immediate: the reports were being generated in a format that didn't match how the business actually reviewed performance. The previous vendor had built what was specified in the brief. The brief had been written by someone who didn't sit in the weekly review meetings. Nobody had asked the end users what they actually needed to see.
We rebuilt the system. Same data sources, different logic, different output format. Took six weeks. The team adopted it in the first week and hasn't gone back.
The second project succeeded not because the technology was better. It succeeded because we started with the right questions.
The Questions Worth Asking Before You Sign Anything
If you're evaluating AI vendors right now, or trying to restart a project that stalled, here's a practical filter.
Ask them what they want to understand about your business before they propose anything. A good partner will have a list of questions. A bad one will want to show you a demo.
Ask what won't work. Every honest AI implementation has limits. If a vendor can't articulate the boundaries of what they're proposing, they're either overselling or haven't thought it through. Both are expensive.
Ask them to describe what changes for your team in the first 90 days. Not at launch: in the first 90 days of real usage. If they can't answer specifically, they're describing a deployment, not an implementation.
Ask if you can speak to a client in a similar business. Not a testimonial on the website. A real conversation with someone who's been through it.
The Real Reason This Matters
There's a version of this conversation where I could just say "ask better questions and you'll get better results." That's true, but it misses the point.
The reason most AI projects fail is that the entire industry has been set up to sell technology, not outcomes. Vendors are incentivised to get projects started, not to ensure they succeed. The demo closes the deal. The implementation is someone else's problem.
Business-first AI means the outcome is the point. The technology is how you get there. That inversion, starting with the business problem and working back to the tool, sounds obvious. It's still not standard practice.
If you've been through an AI implementation that didn't deliver, the technology probably wasn't the problem.
You deserved a better implementation. And the version that would have worked started with better questions.
If you're planning an AI implementation, or reviewing one that didn't land, the first step is understanding exactly where the bottleneck is before you build anything. That's the conversation we start with.