What AI Can and Cannot Do

The principle powering every AI application

Understanding what AI can and can't do seems complicated, until you find out that it's not.

Because at its core, every AI application follows the same principle.

In this newsletter, you'll learn what that principle is, how to identify practical use cases for AI, and how to develop a solid understanding of where AI is fundamentally strong and where it falls short.

Let’s go!

The Core Principle of AI: Predicting From Historical Data

At the heart of AI is a simple yet powerful concept: prediction. Modern machine learning-based AI systems analyze patterns in historical data to predict outcomes for new data coming in, whether it's predicting numbers, classifying information, generating text, or creating images and audio.

It all comes down to two questions:

  • What has the system learned from statistical patterns in its training data?

  • How can these patterns be used to make a "good" prediction?

Understanding this principle is critical because it explains both the strengths and limitations of current AI systems.

Let’s explore them in more detail.

What AI Can Do

AI's ability to recognize patterns in data allows it to make accurate predictions across various domains.

Here are some key areas where AI excels:

Predicting Numbers (Regression)

AI can predict continuous numerical values from input data, a task known as regression. Examples include time series forecasting, where AI predicts future values based on historical patterns, and recommender systems, where AI predicts the likelihood that a user will click on an item and ranks options based on that probability.

AI Archetype used: Supervised Machine Learning or Generative AI
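To make the regression idea concrete, here is a minimal sketch: fitting a straight line to a few made-up historical sales numbers and extrapolating one step ahead. The data and variable names are purely illustrative; real systems use far richer models and features, but the principle of learning a pattern from history and predicting forward is the same.

```python
# Minimal regression sketch: fit a line to (hypothetical) historical
# daily sales, then predict the next day from the learned pattern.
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

days = [1, 2, 3, 4, 5]        # historical inputs
sales = [10, 12, 13, 15, 16]  # historical outcomes (roughly +1.5/day)

slope, intercept = fit_line(days, sales)
prediction = slope * 6 + intercept  # predict day 6
print(round(prediction, 1))         # about 17.7, continuing the trend
```

Note that the model has no idea *why* sales grow; it only continues the statistical pattern it found in the historical data.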

Predicting Classes (Classification)

Classification tasks involve AI assigning data points to predefined categories. Examples include spam email detection, where AI predicts whether an email is spam or not, sentiment analysis, where AI predicts the sentiment of a given text (e.g., positive, negative, or neutral), and recommender systems, where AI recommends the next best action based on learned patterns.

AI Archetype used: Supervised Machine Learning or Generative AI
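The spam-detection example above can be sketched in a few lines. This toy filter simply counts which words appear in labeled training messages (all invented for illustration) and assigns the class whose words best match the new text; production spam filters are vastly more sophisticated, but the prediction-from-patterns principle is identical.

```python
# Toy classification sketch: a spam filter that "learns" word
# frequencies from labeled examples, then predicts a class for new text.
from collections import Counter

train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch on friday", "ham"),
]

# Count how often each word appears per class in the training data.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text):
    # Score each class by summed word counts: the prediction simply
    # follows the statistical patterns in the training data.
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("claim your free money"))  # words seen mostly in spam
```

A message full of words like "free" and "claim" lands in the spam class purely because those words co-occurred with the spam label during training.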

Predicting Text

AI can generate or translate text based on learned patterns, which is essentially a classification problem as well: language translation is an example of AI predicting the output text in a target language given the input text in a source language. Generative AI-based text generation, like ChatGPT, involves AI predicting the most likely next word or sequence of words based on the input prompt and learned patterns from vast amounts of text data.

AI Archetype used: NLP, Speech-to-Text, or Generative AI
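Next-word prediction can be illustrated with a tiny bigram model: count which word follows which in a corpus, then predict the most frequent follower. Models like GPT-4 use learned neural representations over enormous corpora rather than raw counts, but this sketch shows the underlying idea of predicting the statistically most likely continuation.

```python
# Toy next-word prediction: count bigrams in a tiny corpus and
# predict the most frequent follower of a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def next_word(word):
    # Return the statistically most likely next word seen in training.
    return followers[word].most_common(1)[0][0]

print(next_word("on"))  # "the" always followed "on" in the corpus
```

Crucially, the model doesn't "know" anything about cats or mats; it only reproduces the pattern of which word tended to follow which.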

Predicting Pixels

AI can generate or manipulate images based on learned patterns. Examples include deep fakes, where AI generates realistic images or videos of people saying or doing things they never actually did, image style transfer, where AI applies the style of one image to the content of another, and upsampling, where AI predicts a high-resolution version of a low-resolution input image.

AI Archetype used: Generative AI
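For contrast with AI upsampling, here is what a *fixed-rule* upscaler does: nearest-neighbor interpolation just repeats pixels and adds no new information. AI super-resolution instead predicts plausible missing detail from patterns learned on large image datasets. The 2x2 "image" below is a made-up toy input.

```python
# Fixed-rule upsampling (nearest neighbor): duplicate pixels 2x in each
# direction. No information is added; AI super-resolution, by contrast,
# predicts plausible new detail from learned patterns.
def upscale_2x(image):
    out = []
    for row in image:
        wide = [p for p in row for _ in (0, 1)]  # duplicate horizontally
        out.append(wide)
        out.append(wide[:])                      # duplicate vertically
    return out

low_res = [[1, 2],
           [3, 4]]
print(upscale_2x(low_res))
```

The rule-based version is perfectly predictable but blocky; the AI version looks sharper precisely because it is guessing, with all the uncertainty that implies.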

Predicting Audio

AI can process and generate audio based on learned patterns. Examples include noise cancellation, where AI predicts and removes background noise from audio recordings, speech synthesis, where AI generates human-like speech from text input, and music generation, where AI creates new music based on learned patterns from existing compositions.

AI Archetype used: Text-to-Speech or Generative AI

It's important to note that these predictions always come with uncertainty. A prediction will never be 100% accurate. If it were, a fixed rule set would be the better choice (e.g., given X, Y should happen). That is not AI; that is traditional automation. AI is inherently probabilistic. This makes AI a great technology for use cases where you can handle uncertainty, and a poor fit where absolute certainty is required.
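In practice, handling that uncertainty means wrapping the probabilistic prediction in business logic. Here's a sketch of how a spam filter's probability output might be routed; the threshold values are illustrative assumptions, not recommendations.

```python
# Sketch: an AI model outputs a probability, not a verdict. The
# application decides how to act on it (thresholds are made up).
def route(spam_probability, threshold=0.9):
    if spam_probability >= threshold:
        return "junk folder"      # confident enough to act automatically
    elif spam_probability >= 0.5:
        return "flag for review"  # uncertain: keep a human in the loop
    return "inbox"

print(route(0.95))  # junk folder
print(route(0.70))  # flag for review
print(route(0.10))  # inbox
```

The "flag for review" path is exactly the kind of uncertainty handling that makes a use case a good fit for AI.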

What AI Cannot Do

Now that we understand what AI can do, let's look at what it can't do so well:

Fixed-Rule Scenarios

As we've seen, AI thrives in environments where the decision patterns are somewhat ambiguous and some uncertainty in the final decision is acceptable. Conversely, AI struggles in scenarios that require 100% accurate answers or that can be described by simple, static rules.

Take a simple math equation like this, for example:

(3-2) * 5 = x

For humans, solving this straightforward math problem is easy. We know the basic rules of math, such as the order of operations, and can apply them: first solve the parentheses (3 - 2 = 1), then multiply the result by 5, giving a final result of 5.

AI might struggle with this simple problem because it uses pattern detection and prediction instead of following a defined rule set. For the math equation above, a generative AI model like GPT-4 would predict the text for x based on patterns it has seen in its training data. But at its core, it doesn't apply the order of operations; it simply returns the result that seems most probable.
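By contrast, a fixed rule set (here, Python's own arithmetic evaluator) applies the order of operations deterministically and is correct every single time, with no training data and no probabilities involved:

```python
# Traditional automation: a fixed rule set evaluates the expression
# deterministically. Always 5, on every run, on every machine.
x = (3 - 2) * 5
print(x)  # 5
```

This is exactly why rule-based automation, not AI, is the right tool when a single correct answer exists and must be guaranteed.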

You can see GPT-4o go crazy with numbers here:

This limitation extends beyond math. In business, many systems rely on static rules for decision-making, such as certain aspects of accounting or legal compliance, making them a poor fit for AI, but a good fit for fixed-rule automation.

Logical Reasoning

Another area where AI falls short is in tasks that require true "thinking" and reasoning in the way humans do. Today's AI systems are (luckily) not self-aware or conscious. They excel at pattern recognition and prediction in different domains, but lacking the general world knowledge and common-sense logic that we humans have, they struggle with tasks like temporal or spatial reasoning. Try giving ChatGPT a simple, well-known logic problem, modify it a bit, and check its answer:

You can see that ChatGPT didn't really "think" about this problem. Instead, it just repeated what it saw in its training data. (The correct answer would have been: "They can all cross the river in one boat because the boat should easily carry a farmer, a goat, and a cabbage if it can carry up to 5 people").

While there is a lot of progress and research being done in this area to teach AI models logic, it’s still a work in progress and you shouldn’t rely on it for your business use cases (yet).

Empathy and Ethical Decision-Making

AI also faces significant challenges in tasks that require empathy and ethical decision-making. AI systems have no feelings or moral principles. They simply repeat patterns they have seen in their training data. If an AI system has been exposed to a lot of hateful content, it is likely to produce similar results.

You might think, "No problem, let's just fix that so that the AI model becomes fair and unbiased." Well, it turns out that this is easier said than done, and has unintended consequences.

Google learned this the hard way when they pushed the diversification efforts on their AI a little too far:

Because of its inherently biased nature (all training data is biased in some way), any AI system lacks the ability to generate truly empathetic responses. AI systems are great at creating echo chambers and giving us the things we want to hear. That's why many people fall in love with it. But so far, AI systems can't replace real human interaction - whether it's therapy or customer support. Ultimately, ethical decisions and value judgments remain the domain of human leaders who can consider the broader social and cultural implications of their decisions.

The Key: Augmentation

The examples above, combined with the prediction principle, give you a good heuristic to quickly separate promising AI use cases from poor ones. When I say good and bad, I don't mean possible and impossible. In fact, every domain mentioned above can be improved with AI.

The key principle here is augmentation. AI augmentation blends AI capabilities with human intuition, experience, and skills. Augmentation allows you to leverage AI to solve problems faster and with higher quality—even in domains where AI alone is inherently flawed.

To give you some examples:

  • Mathematics: AI can help you think creatively about old problems and inform novel solutions.

  • Legal: AI finds relevant cases quickly, saving time, while lawyers build strong arguments.

  • Decision Making: AI can present opposing views, helping you make better-informed decisions.

In all of these cases, the human is not just in the loop, but in the lead - carefully reviewing and evaluating the AI's output. When applied correctly, this principle of augmentation can increase your productivity 10x and improve the quality of your work.

The key is to balance AI capabilities with human expertise to unlock the full potential of AI and human intelligence working together.

If you follow this principle, your AI journey is set up for a great start.

Good luck!

See you next Friday,

Tobias

PS: Found this newsletter useful? Why not leave a testimonial and make my day! Thank you!
