Why Having a Discussion with AI is (Mostly) Pointless
And how to extract value from it anyway
One of the most popular ways I see people using AI (yes, that's mainly ChatGPT) is to have a "discussion" with it. They ask AI everything - life advice, business advice, sometimes even medical advice. There isn't much data around it yet, but according to a study in the UK, about 20% of people using AI are using it for "advice".
I wrote about the phenomenon of AI, in particular LLMs, being a "Yes Man" and how to overcome that in my AI Consigliere piece a few months ago. But I didn't really explain why AI fundamentally fails as a discussion partner and how this limitation shapes the way we should interact with these systems.
Today, I'll explain where AI isn't very effective as a consultant, how to avoid common mistakes, and where it works well as an analytical tool. This way, you can use AI as a helpful thinking partner.
Let’s dive in!
Weekend Special
Grab the recording of The AI Agent Flow Workshop (lifetime access) and get 1 week of free access to the AI 10K Club on top, where you can watch ANY previous workshop I've hosted so far.
Previous workshops are normally not accessible, so this is a good chance to quickly dive deeper into any topic you'd like.
Offer valid until I take it offline on Monday morning.
Here’s the link:
Why AI Isn't A Good Discussion Partner
When we talk with a human consultant, we naturally expect them to challenge our assumptions and push back when our ideas don't make sense. We value their critical thinking – their ability to say "Actually, that's not quite right" or "Have you considered this alternative perspective?". Or simply asking: "Why?".
AI systems, however, fundamentally can't do this – even the most sophisticated ones like GPT-4. They're designed to be helpful, rewarded for generating plausible-sounding content, and tuned to avoid contradicting the user.
Most critically, modern AI systems don't know what they don't know. Unlike human experts who (ideally) recognize the boundaries of their knowledge, AI will continue to generate responses even when working with incorrect or incomplete information. They lack genuine critical thinking. While they can simulate reasoning, they don't have an independent perspective from which to evaluate ideas. They follow patterns in their training data rather than applying true judgment.
The limitations of AI become most apparent in specific types of interactions.
Let's look at some examples:
Pitfall #1: "Is this approach correct?"
When you ask an AI if your approach is correct, it almost always says yes.
For example, here's ChatGPT's response when I asked if I should pivot my AI consulting business:

Notice what happened? Instead of firmly saying "Wait, why the hell do you even consider that!??", ChatGPT called it a "sharp and forward-thinking pivot". It also gave me a step-by-step list for implementation.
In reality, following this advice would likely end in disaster (it's not even clear why a pivot would be needed – my consulting business is booked out).
Pitfall #2: "Why is this the case?"
This is a dangerous one. When you ask an AI why something is happening, it typically accepts your premise without question, even if that premise is flawed or completely incorrect. Technically, it's sampling tokens from a distribution that you yourself skewed by putting the assumption into the context.
Practical example:

The response continues with more "reasons", but see the fundamental problem: There is absolutely no evidence that blue logos cause conversion drops. The premise of the question is completely unfounded, yet ChatGPT happily generates elaborate explanations instead of questioning whether the premise itself is valid.
A human consultant would immediately ask, "What evidence do you have that the logo color is affecting conversion rates?" or "Have you controlled for other variables that might explain the drop?"
Pitfall #3: "Tell me reasons for X"
Ask an AI to provide reasons for almost anything, and it will – regardless of whether those reasons are valid, evidence-based, or even real.
Reality check:

Despite the mild disclaimer at the beginning, the AI proceeded to generate what sounds like reasonable justifications for a practice that research overwhelmingly shows is harmful. Why? Because I didn't ask the AI a question – I instructed it to give (= make up) reasons.
A real consultant would likely refuse to provide such a list and would instead present the extensive evidence showing why such a practice is harmful to both employees and the company itself.
These are all examples of using AI in a way that goes against critical thinking, as it mainly just strengthens your existing beliefs and biases.
It turns out that what works well with human advisors (sharing your beliefs so they can challenge them) is the opposite of how AI systems are wired.
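The three pitfalls all smuggle a conclusion into the prompt for the model to confirm. As a rough illustration, here is each biased phrasing next to a premise-free rewrite. The exact wording is mine, and the "Tell me reasons" topic is a generic placeholder since the original example isn't quoted in the text:

```python
# Each pitfall prompt paired with a premise-free rewrite.
# Illustrative wording only, not the article's exact prompts.
PITFALL_REWRITES = {
    "Is this approach correct?": (
        # invites a reflexive "yes"
        "Should I pivot my AI consulting business?",
        "What evidence would justify pivoting my AI consulting business, "
        "and what evidence would argue against it?",
    ),
    "Why is this the case?": (
        # unproven premise baked into the question
        "Why is my blue logo hurting conversion rates?",
        "Conversions dropped recently. What variables should I examine "
        "before attributing the drop to any single cause?",
    ),
    "Tell me reasons for X": (
        # instructs the model to fabricate support
        "Give me reasons why <practice> is beneficial.",
        "What does the evidence actually say about the effects of <practice>?",
    ),
}

for pitfall, (biased, neutral) in PITFALL_REWRITES.items():
    print(f"{pitfall}\n  biased:  {biased}\n  neutral: {neutral}\n")
```

The neutral rewrites don't hand the model a conclusion, so it has nothing to confirm – it can only sample from the evidence-shaped part of the distribution.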
TL;DR: Don't Treat AI Like a Real Person
We can't help but anthropomorphize AI systems. Our brains are hardwired to perceive intention, agency, and consciousness in entities that display even minimally human-like behavior (as anyone with pets can attest).
When an AI produces fluent text that sounds thoughtful, our brains automatically fill in the blanks, attributing human-like reasoning to what is fundamentally a sophisticated pattern-matching system.
This cognitive bias makes it difficult for us to remember that:
AI has no mental model of us
AI doesn't "know" what it's saying
AI has no stake in the outcome
The last point is the most critical. While human consultants typically have something to lose by giving bad advice (their reputation at least), AI doesn’t have any skin in the game.
AI is just a crazy math formula that craves the "Thumbs up" from you, the human.
3 Ways AI Actually Shines as a Consultant
So should we ditch modern AI systems as consultants and discussion partners? Not at all. The key is understanding what AI is good at and structuring your interactions to leverage these strengths.
Here’s how I approach it:
1. Have AI Develop and Apply Frameworks
AI models are great at organizing information. If you need a clear plan to tackle a complicated issue, AI can assist you in finding or creating frameworks that use proven methods to help solve the problem.
For example, instead of asking for the answer to “Should I pivot my business”, I could ask AI to suggest a suitable framework to make that decision:

By using AI to structure your thinking rather than provide direct answers, you get the benefit of organized analysis without relying on the AI to make critical judgments.
2. Have AI Ask You the Right Questions
One of the most underrated uses of AI is having it generate questions rather than answers. AI can help you expand your thinking by suggesting questions you might not have considered.
For instance:

The AI will generate a comprehensive list of considerations – from market validation to competitive landscape to strategic fit – that can help ensure you're approaching the problem holistically. You're not asking the AI to make the decision for you. You're using it to make sure you're considering all relevant factors in your own analysis.
3. Have AI Identify Blind Spots
We are usually the best supporters of our own ideas, so it's important to consider other viewpoints or arguments against your position to avoid missing anything.
See here:

By explicitly asking the AI to challenge your thinking rather than validate it, you flip the typical agreement bias on its head and get more valuable input.
The key difference in all these examples is that you're not treating the AI as an oracle with answers, but as a tool to enhance your own critical thinking. You remain in the driver's seat, making the judgments and evaluations while the AI serves as an amplifier for your analytical capabilities.
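The three patterns above can be captured as reusable prompt builders. This is a sketch with my own function names and wording, not specific prompts from the piece; the resulting strings can be sent to any chat model:

```python
def framework_prompt(decision: str) -> str:
    """Pattern 1: ask for a decision framework instead of a direct answer."""
    return (
        f"I'm facing this decision: {decision}\n"
        "Don't tell me what to do. Instead, suggest a proven decision-making "
        "framework I could apply, and walk me through its steps."
    )

def questions_prompt(idea: str) -> str:
    """Pattern 2: ask for questions, not answers."""
    return (
        f"Here's an idea I'm evaluating: {idea}\n"
        "List the questions I should answer before committing, covering "
        "areas like market validation, competition, and strategic fit."
    )

def blind_spot_prompt(position: str) -> str:
    """Pattern 3: explicitly invite pushback to counter agreement bias."""
    return (
        f"My current position: {position}\n"
        "Act as a skeptical consultant. List the strongest arguments "
        "against this position and the assumptions I might be missing."
    )
```

Any of these could then be passed as the user message to a chat API of your choice; the point is that the prompt asks the model to structure, question, or attack your thinking – never to bless it.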
Conclusion
AI isn't a discussion partner – it's a co-thinking tool. Understanding this fundamental distinction changes how you should approach AI as a consultant.
When you ask if your ideas are correct, why something is happening, or for reasons supporting a position, AI will give you plausible-sounding responses regardless of their validity. It won't push back, challenge your assumptions, or identify flawed premises the way a human consultant would.
However, AI shines when used to develop frameworks, generate questions, and identify potential blind spots in your thinking. By treating AI as an amplifier for your own critical thinking rather than a source of answers, you can extract tremendous value while avoiding its limitations.
The next time you're tempted to ask an AI, "Is this a good idea?" try asking instead, "What questions should I consider to evaluate if this is a good idea?"
The difference in value you'll receive is remarkable.
See you next Friday!
Tobias