The 80% Fallacy

Or: Why building your AI use case might be harder than you think

Here's a typical mistake I see again and again in AI development:

When you start building your AI use case, you're likely to get pretty good results very quickly - in weeks, days, maybe even hours. "Wow, that worked pretty well! Almost there!" is what I often hear. I call this the 80% fallacy - the belief that 80% accuracy means you're at 80% of project completion.

In reality, you've only just begun.

The truth is, the journey from 80% to 100% is far more challenging than the journey from 0% to 80%. It's a reality that catches many beginners off guard, leading to wasted resources, missed deadlines, and failed projects.

So why does this happen? And more importantly, how can you avoid falling into the 80% Fallacy trap and set your AI projects up for success?

Let's find out!

The Problem: Underestimating AI Development Challenges

To see the 80% Fallacy live and in action, consider a recent blog post in which the LinkedIn Engineering team describes their efforts to scale Generative AI across their platform.

LinkedIn's experience is a classic example of the Early Success and Linear Progression Misconception.

When AI projects show promising initial results, it's easy to assume that the rest of the journey will follow a similar, linear path. Teams extrapolate from their early wins and make overly optimistic projections about when the project will be complete.

You can picture it as a curve that climbs steeply at first and then flattens out, with the last stretch to 100% taking far longer than everything that came before it.

The Exponential Difficulty of Completion means that as AI projects advance, challenges and complexities multiply. Minor issues in the early stages can suddenly become major roadblocks. This phenomenon applies to virtually all AI archetypes, industries, and maturity levels.

For example, Tesla is grappling with this challenge as autonomous driving proves harder than anticipated. Amazon also had to pull back on their AI-powered "grab-and-go" concept (watch me discussing this with Philipp Neuberger here). These examples illustrate the pervasiveness of the 80% Fallacy, affecting even prominent players in the AI space.

Underestimating these challenges can have severe consequences. Projects can drag on past deadlines, burning through budgets and resources. Teams can become demoralized as goalposts move further away. Some projects may even be abandoned altogether, resulting in wasted time and money.

To solve this problem, we must first understand why the 80% Fallacy exists and why the last 20% is so much harder than it seems.

Why The Fallacy Happens

The journey from 80% to 100% in AI development is far more challenging than the initial sprint to 80% due to two main factors: Technical Complexity (primarily for Classical AI) and Non-Deterministic Outputs (especially for Generative AI).

Technical Complexity (Classical AI)

For Classical AI, the main challenge is the Curse of Dimensionality. As the number of variables and edge cases increases, the data and computational requirements grow exponentially.

Examples:

  • Healthcare: Rare diseases and atypical patient profiles

  • Autonomous driving: Countless edge cases in the wild

  • Translation: Custom business glossaries and synonyms
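To make the exponential growth concrete, here is a minimal sketch (the value of 10 distinct values per variable is an illustrative assumption, not from the original) showing how quickly the space of input combinations a model must cover explodes as variables are added:

```python
# Illustration of the Curse of Dimensionality: if each input variable
# can take just 10 distinct values, the number of combinations a model
# would need to see grows exponentially with the number of variables.

def combinations(num_variables: int, values_per_variable: int = 10) -> int:
    """Total number of distinct input combinations."""
    return values_per_variable ** num_variables

for n in (2, 5, 10, 20):
    print(f"{n:>2} variables -> {combinations(n):,} combinations")
```

Covering the long tail of rare diseases or driving edge cases means covering ever more of that combinatorial space, which is why data and compute requirements balloon in the final stretch.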

Non-Deterministic Outputs (Generative AI)

Generative AI, on the other hand, has its own challenges. While classical AI produces consistent outputs for the same input, generative AI models can produce a wide range of responses based on subtle variations in the input and the model's inherent randomness.

Examples:

  • Chatbots: Users ask the same questions, but get different answers

  • Content creation: Inconsistent tone and factual inaccuracies

  • Software engineering: The LLM may return more than just plain code
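The chatbot case above can be mimicked in a few lines without calling a real LLM. In this toy sketch (the candidate answers and their weights are made up for the demo), the "model" samples one reply from a fixed probability distribution, the way an LLM samples tokens, so identical questions yield varying answers:

```python
import random

# Toy illustration of non-determinism: the same "prompt" is answered by
# sampling from a probability distribution over candidate replies,
# mimicking how an LLM samples tokens. Same input, different outputs.
# (The candidate answers and weights are invented for this demo.)

ANSWERS = [
    "Our refund window is 30 days.",
    "You can request a refund within a month.",
    "Refunds are available for 30 days after purchase.",
]
WEIGHTS = [0.5, 0.3, 0.2]

def answer(prompt: str, rng: random.Random) -> str:
    """Return one sampled answer; different rng states give different replies."""
    return rng.choices(ANSWERS, weights=WEIGHTS, k=1)[0]

# Twenty users ask the exact same question...
replies = {answer("What is your refund policy?", random.Random(seed))
           for seed in range(20)}
print(len(replies))  # ...and get more than one distinct reply
```

All three replies here happen to be correct; in production, the spread also includes wrong or off-brand answers, which is what makes the last 20% so hard.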

To address these issues, developers often use 'duct-tape' solutions and unsustainable quick fixes that don't scale well (hard-coding rules to catch bad LLM output, anyone?).
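A typical duct-tape fix for the "more than just plain code" problem looks something like this sketch: a hard-coded post-processing step that strips the chatty wrapper around a code block. The function name and sample reply are hypothetical; the point is that it handles one output format and silently breaks on the next.

```python
import re

# A classic "duct-tape" fix: hard-coded post-processing that strips the
# conversational wrapper an LLM adds around code. It handles one output
# format (a fenced code block) and quietly fails on anything else.

def extract_code(llm_output: str) -> str:
    """Pull the first fenced code block out of an LLM reply, if any."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", llm_output, re.DOTALL)
    return match.group(1).strip() if match else llm_output.strip()

reply = ("Sure! Here's the function:\n"
         "```python\n"
         "def add(a, b):\n"
         "    return a + b\n"
         "```\n"
         "Hope that helps!")
print(extract_code(reply))
```

Each such rule patches one failure mode while new ones keep appearing, which is exactly why these fixes don't scale.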

The bottom line is that AI development, whether classical or generative, is rarely a smooth, linear process. The last 20% is often a rocky road filled with unexpected obstacles. And there's really no shortcut.

So what should we do? Just keep pushing back AI project release dates and that's it?

There's a better way.

The Solution: Aligning Goals with Capabilities

If we can’t get an AI model past a certain accuracy threshold, we need to lower the requirements for putting it into production. That’s exactly where Augmented AI comes in. Augmented AI use cases pair AI systems with human experts and create significant business impact even when the AI isn't 100% accurate.

And because the initial investment to achieve an 80% solution is typically quite low (see my 20/20 rule), projects leveraging augmented AI can achieve positive ROI quickly - often in 3 months or less. When done right, the same "flawed" AI services that couldn't serve your highly automated, highly integrated use cases suddenly deliver significant performance improvements, such as 20% faster task completion when working with process or domain experts.

Make no mistake: 20% faster task completion may not transform your business, but for many industries, it's already a big deal.

Moreover, as you implement Augmented AI, you collect valuable data that can help train your systems to become more accurate and potentially more autonomous over time.

Tailoring use cases for augmentation, not automation

The key to success with Augmented AI is to redesign your use cases to scenarios where the "flaws" of AI become marginal, irrelevant, or even advantageous when paired with a human who knows their craft.

Consider these real-world examples:

  • Instead of building a chatbot that handles all customer queries automatically, build an internal support bot that allows support agents to respond to queries 2x faster.

  • Instead of having an AI that autonomously creates and distributes your marketing content, create a custom GPT that turns your existing long-form content (like podcasts and webinars) into short-form social media pieces, filling your 3-month content pipeline in less than a day.

  • Instead of expecting an AI to generate leads automatically, build an AI system that collects and prioritizes existing leads more effectively, so your sales team works with better focus.

The common theme is to use AI to give people leverage, an unfair advantage that they wouldn't otherwise have. Let them take the lead, don't put them in the loop. This people-centric approach not only drives better results, but also helps build trust and buy-in for your AI initiatives.

It's a win-win: you drive business value today while laying the foundation for more advanced AI capabilities in the future.

And needless to say, managing expectations and stakeholders is an underrated skill. Be sure to clearly communicate the goals, milestones, and constraints of your AI efforts, plus engage key stakeholders frequently.

Conclusion

The 80% fallacy shouldn't be a barrier to AI adoption. It should be an invitation to think differently about how we use AI in the real world.

Align (ambitious) goals with (existing) capabilities of AI, and focus on augmenting rather than replacing human workflows to start driving real business value from AI today.

Getting to 80% accuracy quickly and the last 20% slowly isn't a disadvantage - it's an opportunity to ship quickly and iterate in strong tandem with human experts.

So don't let the pursuit of perfection hold you back.

Stay augmented - stay ahead!

See you next Friday,

Tobias
