March 10, 2026 · 10 min read
How to Evaluate an AI Consultant: The 10-Point Checklist
A practical 10-point checklist for evaluating AI consultants: specific questions to ask, red flags to spot, and data on why 80% of AI projects fail.
Joseph Musembi · Founder, Raison Consult

More than 80% of AI projects fail. Not because the technology doesn't work -- because the people hired to implement it didn't do their job.
That stat comes from RAND Corporation research published in 2024, based on interviews with 65 experienced data scientists and engineers. A third of projects get abandoned before production. Another 28% ship but deliver zero value. And 18% produce some value but can't justify the cost.
I've seen this pattern up close. Companies hire consultants who talk a good game, deliver a strategy deck, and disappear. Six months later, the deck sits in a shared drive, nothing has been deployed, and the company concludes "AI doesn't work for us."
AI works fine. The consultant didn't.
So here's the checklist I wish I had before I started evaluating AI consulting firms -- and the one I'd hand to any business owner before they sign an engagement.
1. Do they implement, or just advise?
This is the single most important question. And it's where most companies get burned.
A lot of AI consulting firms sell strategy. They'll audit your processes, identify opportunities, build a roadmap, and hand you a PowerPoint. Then you need to find somebody else to build the thing. Or worse, your internal team (which doesn't have AI experience) tries to execute a plan written by people who never had to.
RAND's study found that one of the five root causes of AI project failure is "technology over utility" -- organizations focus on the latest technology rather than solving real problems for actual users. Strategy-only consultants contribute to this directly. They recommend tools and architectures without ever having to make them work in your environment.
What to ask: "Who does the implementation? Is it your team, or do we need to hire someone else after this engagement?"
If the answer is "implementation is a separate engagement" or "we recommend you hire a developer to execute the roadmap," that's your signal. You're paying for a plan, not a result.
What good looks like: The same team that assesses your needs also builds and deploys the system. They own the outcome from diagnosis through deployment and post-launch optimization.
2. Can they show results with specific numbers?
"We helped a company improve efficiency with AI" means nothing. Every consulting firm says this. It's the AI equivalent of "results may vary."
What actually matters: specific numbers, specific outcomes, specific timelines.
Here's the difference:
| Vague (red flag) | Specific (good sign) |
|---|---|
| "We helped improve customer response times" | "We deployed AI intake that responded to 94% of inquiries within 30 seconds, up from a 4-hour average" |
| "We reduced operational costs" | "We automated document categorization, cutting processing time from 12 hours/week to 45 minutes" |
| "Our AI solution increased revenue" | "AI cart recovery generated $23,000 in recovered revenue in the first 60 days" |
Gartner's 2025 survey found that 63% of high-maturity AI organizations run financial analysis and measure customer impact to track success. Your consultant should be doing this with every engagement.
What to ask: "Can you share a case study with specific metrics -- revenue impact, time saved, cost reduction, response time improvements?"
If they can't, they either don't measure results or don't have results worth measuring.
3. Do they know your industry?
A consultant who built AI for hospitals won't automatically know how to build AI for law firms. Different industries have different workflows, regulations, compliance requirements, tools, and terminology. This matters more than people realize.
A generalist AI consultant working with a CPA firm needs to learn about tax season workflows, multi-client document management, QBO and Thomson Reuters integrations, and IRS compliance before they can even scope the project. That learning curve? You're paying for it.
An industry-specific consultant walks in already knowing that 86% of small CPA firms are "observing" AI rather than using it, that the biggest bottleneck is document categorization during tax season, and that the real win is cutting 30-40% of staff time spent on manual data entry.
What to ask: "What do you know about [my industry]? How many clients in my space have you worked with?"
Listen for specifics. If they mention your industry's tools by name (Clio for legal, Shopify for e-commerce, QBO for accounting), talk about common workflows, and reference industry-specific pain points, they've done the work. If the answer is generic -- "AI is transforming every industry" -- they'll learn on your dime.
4. How fast do they move?
The AI consulting industry has a speed problem. Baker Tilly's research found mid-market companies spend an average of $600,000 on AI initiatives. A lot of that money goes toward assessing and strategizing before anything gets built.
Meanwhile, Pertama Partners data shows 95% of GenAI pilots fail to scale to production, with a median timeline of 14 months from pilot approval to shutdown. Fourteen months of work. Then the plug gets pulled.
Speed matters for a specific reason: the longer an AI project runs before producing results, the more likely it is to get killed. Budgets get cut. Sponsors change roles. Priorities shift. The project that was "strategic" in Q1 becomes "deprioritized" by Q3.
What to ask: "When will I have a working system? Not a prototype. Not a demo. A system running in my business, handling real work."
A reasonable answer for most mid-market implementations: 4-8 weeks for an initial deployment. If they're talking about 3-6 months before you see anything working, ask why.
5. Do they assess your data before promising outcomes?
This is where the good consultants separate themselves from the ones who just want the contract.
Data quality is the single biggest factor in whether your AI project succeeds or fails. Research compiled by WebProNews found that 42-85% of AI project failures are directly caused by poor data quality. Gartner estimates that poor data quality costs businesses $12.9 million annually.
A good consultant won't promise outcomes before they've looked at your data. They'll want to understand what data you have, how clean it is, where it lives, and whether it's actually usable for what you want to do. A bad consultant will promise the moon in the sales call and figure out the data problem after you've signed.
What to ask: "What's the first thing you'll do after we engage?"
If the answer is "assess your data and current processes" -- good. If the answer is "start building" -- they're skipping the most important step.
Red flag: Any consultant who guarantees specific outcomes before seeing your data is selling you, not advising you.
6. Do they publish their pricing?
I wrote about this in my AI consulting pricing guide, and I'll say it again: firms that hide pricing behind a sales call are usually charging a premium for the "discovery" process itself.
This is a personal bias, but I think it's a reasonable one. The AI consulting market is a $14 billion industry growing at 24% annually. There's money sloshing around. And a lot of firms exploit the ambiguity around AI pricing to charge whatever the client will bear.
Pricing transparency tells you two things about a consultant:
- They've productized their offering. They know what they deliver, how long it takes, and what it costs. This means they've done it enough times to have a process.
- They're not afraid of comparison. Firms that publish pricing are inviting you to shop around. That confidence usually comes from knowing their value proposition holds up.
What to ask: "Can you give me a pricing range right now, in this conversation?"
A good consultant can give you a range within 15 minutes. "For your situation, we're typically in the $X-$Y per month range, depending on [2-3 specific factors]." If they can't, they either don't have a repeatable process or they're sizing you up.
7. What happens after they leave?
Most AI consulting relationships have an end date. The consultant deploys a system, optimizes it, and eventually moves on. The question is: what do you have when they're gone?
This is where a lot of companies get stuck. The AI system works great while the consultant is managing it. Three months after the engagement ends, performance degrades, nobody knows how to fix it, and the system gets abandoned. Gartner's 2025 data shows only 45% of high-maturity organizations keep their AI projects operational for three or more years. For low-maturity organizations, it's 20%.
What to ask: "What does handoff look like? Will my team be able to maintain and optimize this system after you're done?"
What good looks like:
- Documentation of everything built
- Training for your staff on how to use and adjust the system
- A transition period where your team runs the system with the consultant available for support
- Clear ownership of the code, models, and data
- A maintenance option if you want ongoing support
Red flag: "You'll need to keep us on retainer to maintain the system." Some ongoing support is normal, but if the system can't function without the consultant's constant involvement, you don't own a system -- you're renting one.
8. Do they challenge your assumptions?
This one's counterintuitive. You want a consultant who will push back on you.
RAND's research identified "misunderstanding of the problem" as the number one root cause of AI project failure. Industry stakeholders often miscommunicate or fundamentally misunderstand what problem needs to be solved using AI.
A consultant who just says "yes, we can do that" to everything you ask is dangerous. They're optimizing for closing the deal, not for your success. The good ones will say things like:
- "That's technically possible, but I don't think it'll move the needle for your business. Here's what I'd focus on instead."
- "AI isn't the right solution for that problem. Here's a simpler approach."
- "We could do that, but your data isn't ready. We need to fix [specific issue] first."
Solutions Review's analysis puts it well: true consulting means challenging your assumptions rather than just executing your requests.
What to ask: "Have you ever told a client that AI wasn't the right solution for their problem?"
If the answer is no, they either haven't worked with enough clients or they're too afraid of losing business to be honest. Both are bad signs.
9. Who actually does the work?
This is the bait-and-switch problem. You meet senior people during the sales process. Impressive backgrounds, sharp insights, compelling vision for your AI future. You sign. And then the actual work gets handed to junior staff you've never met.
It's not unique to AI consulting -- it happens across professional services. But it's worse in AI because the field is young and the gap between a senior engineer's output and a junior one's is enormous.
What to ask: "Who will be working on my project day-to-day? Can I meet them before we sign?"
What to look for:
- A small, senior team beats a large, junior team. A 3-person team that knows your industry will outperform a 15-person team of generalists.
- Ask about the team's experience with production AI systems. Building a demo is easy. Building something that handles real traffic, real edge cases, and real data at 2 AM when nobody's watching is hard.
- Check if the people you meet in sales are the people who do the work. At smaller firms, they often are. At larger firms, they rarely are.
10. Do they align incentives with outcomes?
Most AI consultants charge by the hour or on a monthly retainer. Both models have a structural flaw: the consultant gets paid the same whether your AI project succeeds or fails.
Outcome-based or hybrid pricing fixes this. The consultant charges a lower base fee and earns a percentage of the value they create. If AI intake recovers $60,000/month in previously lost leads, the consultant earns a share of that recovery. If the system doesn't perform, they eat the downside.
Few firms offer this model because it requires genuine confidence in their ability to deliver. But when you find one that does, it tells you something about how they think about their work.
What to ask: "Are you willing to tie any portion of your fee to measurable outcomes?"
Even if they can't do pure outcome-based pricing, their response is informative. A consultant who says "we'd consider a performance bonus tied to [specific metric]" is signaling confidence. One who says "our rates are our rates" may have good reasons, but they're also telling you where their risk tolerance sits.
The quick-reference checklist
Print this. Bring it to your next AI consulting evaluation.
| # | Question | Good answer | Red flag |
|---|---|---|---|
| 1 | Do they implement? | Same team assesses and builds | "Implementation is a separate engagement" |
| 2 | Specific results? | Named metrics: revenue, time, cost | "We help companies improve efficiency" |
| 3 | Industry knowledge? | Names your tools, workflows, pain points | "AI is transforming every industry" |
| 4 | Speed to working system? | 4-8 weeks for initial deployment | 3-6 months before anything runs |
| 5 | Data assessment first? | "First we assess your data" | Guarantees outcomes before seeing data |
| 6 | Pricing transparency? | Range within 15 minutes | "We need a discovery process first" |
| 7 | Handoff plan? | Documentation, training, code ownership | "You'll need to keep us on retainer" |
| 8 | Challenge assumptions? | Has told clients AI was wrong fit | Says yes to everything |
| 9 | Who does the work? | Small, senior team you can meet | Senior sales team, junior delivery team |
| 10 | Incentive alignment? | Open to outcome-based components | Fixed fees regardless of results |
A consultant who checks all ten boxes is rare. But you don't need perfection -- you need someone who clears at least seven or eight, and who's honest about the ones they don't.
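If you want to turn the table into a scorecard you can fill in during vendor calls, here is one minimal way to tally it. The abbreviated question labels and the seven-of-ten threshold mirror the checklist above; everything else is an illustrative sketch, not a prescribed tool.

```python
# The ten checklist items, abbreviated from the table above.
QUESTIONS = [
    "Implements, not just advises",
    "Results with specific numbers",
    "Industry knowledge",
    "Speed to working system",
    "Data assessment first",
    "Published pricing",
    "Handoff plan",
    "Challenges assumptions",
    "Senior team does the work",
    "Outcome-aligned incentives",
]


def score(answers: dict[str, bool], threshold: int = 7) -> tuple[int, bool]:
    """Count cleared items; unanswered questions count as not cleared."""
    cleared = sum(answers.get(q, False) for q in QUESTIONS)
    return cleared, cleared >= threshold
```

For example, a consultant who clears the first eight items but has no outcome-based pricing and a junior delivery team scores 8 and passes the threshold, which is consistent with the "seven or eight, honestly disclosed" bar above.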
What this looks like in practice
I built Raison Consult around these principles because I kept seeing the same problems from the other side of the table. Companies were paying for strategy and getting decks. Paying for AI and getting demos that never made it to production.
We deploy AI systems for mid-market companies in e-commerce, legal, and accounting/CPA at $5,000-$15,000/month. We publish our pricing because we think hiding it wastes everyone's time. We deploy within 4-8 weeks because we've learned that speed-to-working-system is the strongest predictor of whether a project survives.
That doesn't make us the right fit for everyone. If you need a 30-person team for an enterprise-wide AI transformation, we're not your firm. If you need a working AI system in your business within a month, we probably are.
Frequently asked questions
What is the most important factor when choosing an AI consultant?
Implementation capability. A consultant who can assess, build, and deploy a working AI system is more valuable than one who produces strategy documents. RAND Corporation research found that more than 80% of AI projects fail, with "technology over utility" -- focusing on tools instead of outcomes -- as a root cause. The consultants who ship working systems address this directly.
How much should I expect to pay for AI consulting in 2026?
AI consulting rates range from $100/hour for freelancers to $700/hour for Big Four firms. Monthly retainers run $5,000-$50,000 depending on scope and firm size. For mid-market companies, $5,000-$15,000/month for implementation-focused consulting is the typical range. See our complete AI consulting pricing guide for detailed breakdowns by engagement type and company size.
How long should an AI implementation take?
For most mid-market use cases (AI customer support, intake automation, document processing), expect an initial deployment within 4-8 weeks. Pilots that drag past 3-6 months have a high failure rate -- Pertama Partners data shows a median 14-month timeline from pilot approval to shutdown, with 95% of GenAI pilots failing to scale.
What are the biggest red flags when evaluating an AI consultant?
Five warning signs: (1) they can't share case studies with specific metrics, (2) implementation is outsourced or "a separate engagement," (3) they guarantee outcomes before assessing your data, (4) they won't give you a pricing range in the first conversation, and (5) the senior people you meet in sales aren't the ones doing the work.
Should I hire a generalist or industry-specific AI consultant?
Industry-specific, if you can find one. Generalist consultants need to learn your workflows, compliance requirements, and tools on your time and budget. An industry-specific consultant already understands these, which means faster deployment and fewer expensive mistakes. The trade-off: industry specialists are rarer, especially for less common verticals.
Why do most AI projects fail?
According to RAND Corporation, the five root causes are: misunderstanding the problem, insufficient data, focusing on technology over utility, infrastructure gaps, and applying AI where it's not appropriate. Note that only one of these is a technical issue. The rest are business and leadership problems, which is exactly why choosing the right consultant matters so much.
Last updated: March 4, 2026.
Sources
Data and statistics cited in this checklist are drawn from the following research and publications:
- RAND Corporation: The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed (2024). Study of 65 data scientists and engineers on why 80%+ of AI projects fail.
- Gartner: AI Maturity and Project Longevity Survey (2025). Survey of 432 organizations on AI maturity, trust, and project sustainability.
- Gartner: Lack of AI-Ready Data Puts AI Projects at Risk (2025). Research on data quality costs and AI readiness barriers.
- Pertama Partners: AI Project Failure Statistics 2026 (2026). Comprehensive failure rate data including GenAI pilot statistics and cost of failed projects.
- Baker Tilly: Mid-Market AI Investment (2025). Mid-market AI spending averages and budget allocation patterns.
- SNS Insider: AI Consulting Services Market Report (2025). Market size ($14 billion) and growth projections (24% CAGR).
- WebProNews: Poor Data Quality and AI Project Failures (2025). Data quality impact analysis citing 42-85% failure attribution.
- Solutions Review: What to Look For in an AI Consultancy (2025). Framework for evaluating AI consulting partners.
About the author: Joseph Musembi is the founder of Raison Consult, an AI implementation consultancy that deploys AI for mid-market companies in 4-8 weeks. Book a free AI assessment to see where AI can save you time and money.
Related posts
Why AI Projects Fail at the Implementation Level (Not Strategy)
80% of AI projects fail, and most fail at implementation, not strategy. Data from RAND, MIT, and BCG on what actually goes wrong and how to avoid it.
Mar 4, 2026
AI Consulting Pricing in 2026: What It Actually Costs at Every Budget Level
Actual AI consulting rates in 2026: hourly, project, and retainer pricing from Big Four to boutique. Includes comparison tables and cost breakdowns by company size.
Feb 26, 2026