If you're reading this article, there's a high chance you've seen a recent talk of mine (welcome to all new subscribers). If so, this is your recap. If not, even better. This is your update.

The companies I talk to these days – whether from the EU or the US – all ask me the same question:

"How do we get business impact from AI?"

I’ve been working on this question for the past couple of years. And somewhere around project 50, I made an important realization that changed the way I think about AI implementation.

Today, I'd like to share that realization with you – what it means and how you can use it to shortcut your way to Profitable AI.

Let’s dive in.

My Realization

Profitable AI should be easy. GPT-5 is now matching the performance of industry professionals in 80% of all cases. The cost per token is going down steadily. And the vast majority of enterprises are now using AI in at least one business function.

But getting Profitable AI isn't easy.

Profitable AI is rare, fragile, and misunderstood.

Let’s unpack that.

Rare

Where to begin? Maybe at something right in front of your eyes every day.

Take AI article narration on news websites. You press a button, and the article gets read aloud by a surprisingly natural voice. Better accessibility. Higher engagement. The kind of feature that makes every product manager say: "We need this!"

But let's look at the economics. A system like this – text-to-speech pipeline, CMS integration, dev + production infrastructure – can easily cost $30–50K just to set up. Plus the running cost. If your site is publishing 150 articles a day at about 5 minutes of audio per article, you're looking at $70–100K per year in API costs, maintenance, monitoring, and licenses combined. That's the equivalent of about 1,000 paid subscribers – just to keep the feature alive.

Assumptions

150 articles per day × 5 minutes per article = 750 min / day

Running Cost

API: ~$30K / year

Infra + logs: ~$20K / year

Maintenance: ~$50K / year

Total annual run rate: ~$100K
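The numbers above can be checked with a quick back-of-envelope script. The cost figures come from the scenario; the average subscription price of ~$100/year is an assumption I've added to reproduce the "about 1,000 paid subscribers" comparison.

```python
# Back-of-envelope run-rate model for AI article narration.

ARTICLES_PER_DAY = 150
MINUTES_PER_ARTICLE = 5
audio_minutes_per_year = ARTICLES_PER_DAY * MINUTES_PER_ARTICLE * 365

annual_costs = {
    "api": 30_000,          # text-to-speech API calls
    "infra_logs": 20_000,   # hosting, monitoring, log storage
    "maintenance": 50_000,  # engineering time to keep it running
}
total_run_rate = sum(annual_costs.values())

SUBSCRIPTION_PER_YEAR = 100  # assumed average revenue per paid subscriber
subscribers_to_break_even = total_run_rate / SUBSCRIPTION_PER_YEAR

print(f"Audio minutes/year: {audio_minutes_per_year:,}")
print(f"Total run rate:     ${total_run_rate:,}/year")
print(f"Break-even:         ~{subscribers_to_break_even:,.0f} paid subscribers")
```

At that run rate, the feature has to pull in roughly a thousand subscribers' worth of revenue every year just to stand still.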

Now, if you're the Washington Post, these economics might still work out. But for a small or mid-sized regional publisher with high volume but few readers per article? Very different conversation.

And this is a relatively simple feature. No agents. No complex reasoning. "Just" calling an API.

So even the "easy" stuff gets expensive fast.

But maybe we have to zoom out and look at the bigger picture.

OpenAI's ChatGPT has 800 million users and is the fastest-growing app ever, yet the company is estimated not to be profitable until 2030. Their AI video generation app Sora was shut down after reportedly burning ~$1M per day.

But maybe we're just looking at it the wrong way. Uber wasn't profitable for 15 years. So let's look at "normal" businesses.

A recent NBER study found that around 70% of companies actively use AI, but over 80% report no impact on productivity. I wrote about this gap a few weeks ago, and it's not actually new. The economist Paul David showed in 1990 that factories replacing steam engines with electric motors saw no gains for almost 30 years because they didn't redesign the process.

We're at the same moment with AI. Bolting it onto existing workflows won't work.

The gains come from rethinking the process itself.

Fragile

Let's say we actually found a good use case and we were able to move it to production. Is success stable?

No.

In traditional IT projects, most costs are front-loaded. You build the system, deploy it, and the marginal cost of the next user is essentially zero. A CRM is expensive to set up, but cheap to add the 500th user.

AI works differently. Costs typically start relatively low, but then increase with usage. Someone has to pay the GPU bill eventually. Plus, ongoing effort for retraining and calibration — just to keep the system at the same level. We're not talking about adding features yet.

That's why an AI solution must continuously prove value to remain economically viable. Otherwise it becomes too expensive to keep alive and at some point people will ask:

What are we even paying this for?

The Swedish payment platform Klarna learned this publicly. In 2024, they announced their AI chatbot handled two-thirds of customer service, doing the work of 700 agents. This case study was a big deal. Finally, proof that AI could replace human work at scale. But about a year later, they quietly scaled back. Customers preferred humans — and a system doing the work of 700 people doesn't run for free.

Even "successful" AI is fragile.

Misunderstood

The third hurdle is that companies treat everything that has "AI" in it the same. Same expectations, same budgeting approach, same success criteria.

But there are two fundamentally different tracks – and they follow completely different rules.

Track 1: Productivity AI. These are Assistants and Copilots. AI that gives you answers when you ask and AI that helps you do the work inside a given tool — but it doesn’t do the work for you. If I don't ask ChatGPT, it doesn't answer. If I don't accept the Outlook suggestion, no email gets sent. The value is real but diffuse. Faster drafts, better research, quicker emails are useful but hard to put into a spreadsheet ROI column. Don’t even try it. The focus here is to measure adoption and behavior.

Track 2: Engineered AI. This is Autopilot and Agent territory – purpose-built systems for a specific outcome. A chatbot that resolves customer inquiries. A document classifier. A screening pipeline. It’s AI that does the work for you. In contrast to Productivity AI, having a spreadsheet with an ROI column is mandatory. Because the only question for these systems is: does the value justify the cost?

The misunderstanding happens when companies measure Track 1 like Track 2 (demanding hard ROI from Copilot), or budget Track 2 like Track 1 ("let’s just buy the best tool – how much is it per month?"). That’s a recipe for disaster and the reason so many good AI ideas eventually fail in production.

What to do about it

So – Profitable AI is rare, fragile, misunderstood.

Now what?

I've found that overcoming these three obstacles comes down to three things.

1) Get a metric before you build

If you're building something – or buying anything – beyond Assistant or Copilot territory, you're in Engineered AI land. For that, you need a number that makes the opportunity visible. Not "we could save time" but "we're spending 800 hours per year on screening incoming investment proposals." That number is what turns a vague idea into a business case worth discussing. I've walked through this approach in detail with The $100K Deal Screener, The $37K Email Summary, and The $135K Compliance Report.
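Turning "800 hours per year" into a dollar figure is simple arithmetic. A minimal sketch, where the fully loaded hourly rate and the share of work an AI system could absorb are both illustrative assumptions, not numbers from the case studies:

```python
# Converting a time metric into an annual opportunity figure.

HOURS_PER_YEAR = 800       # from the screening example above
LOADED_HOURLY_RATE = 125   # assumed fully loaded cost per analyst hour
AUTOMATION_SHARE = 0.8     # assumed share an Engineered AI system could absorb

opportunity = HOURS_PER_YEAR * LOADED_HOURLY_RATE * AUTOMATION_SHARE
print(f"Annual opportunity: ~${opportunity:,.0f}")
```

The exact inputs will vary per company; the point is that once you have a metric, the opportunity becomes a number you can defend in a budget discussion.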

2) A threshold to filter the noise 

Not every idea deserves a concept, or even a prototype. My rule of thumb is that if a use case can't at least theoretically generate $10K per year in impact, the effort of building (or buying) Engineered AI simply isn’t worth it. This single question alone eliminates 80% of the "cool ideas". My framework for this is the $10K Rule.
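In practice, the $10K Rule works as a filter over your idea backlog. A minimal sketch, where the candidate use cases and their impact estimates are entirely hypothetical:

```python
# Applying the $10K Rule: drop any idea whose theoretical annual
# impact can't clear the threshold.

THRESHOLD = 10_000  # minimum theoretical annual impact in USD

candidates = {
    "meeting-notes summarizer": 4_000,
    "invoice classification": 35_000,
    "support chatbot": 60_000,
    "slide-deck beautifier": 1_500,
}

worth_pursuing = {name: v for name, v in candidates.items() if v >= THRESHOLD}
print(worth_pursuing)  # only ideas clearing the bar survive
```

Two of four "cool ideas" survive the filter here, which tracks the 80% elimination rate surprisingly often in real backlogs.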

3) A roadmap to commit

AI initiatives don't work well as one-off projects with fixed deadlines. Too many unknowns. Instead, I use a stage-gate process – a funnel where each idea moves through defined stages, and at each gate you decide: continue, stop, or go back.

The goal is to get through this discovery phase as fast and cheap as possible. Because ROI only shows up in production. Everything before that is pure cost – and the more you spend before production, the more your J-curve starts looking like a U-curve.

That's also why AI roadmaps beat AI projects. A project is a one-shot bet. A roadmap is a structured path that lets you combine multiple initiatives across different stages. You can pivot, stop, or double down at every gate.

Conclusion

A metric, a threshold, and a roadmap.

These are the core ingredients for Profitable AI. The one that works, the one that pays for itself.

If you want to go deeper, the full frameworks and methodologies are in The Profitable AI Advantage.

I hope they'll be as useful for you as they have been for me.

See you next Saturday,
Tobias
