Your AI Agent Probably Should Be a Workflow

And how to know when you're overcomplicating things

Hi there,

AI agents are the new shiny object in tech: VCs are pouring in millions, and your LinkedIn feed is probably full of them.

More than that: frameworks and agent libraries are popping up everywhere, creating huge FOMO. But let me be blunt: you probably don't need any of that stuff (yet).

Today, I'll show you why most 'agents' being built today should really just be workflows, and how to know when you're overcomplicating your AI implementation.

Let's dive in!

Check out today’s sponsor:

What are the limitations of existing security tools in managing AI-related risk? Join this webinar to learn practical approaches to identify blind spots and protect against emerging threats across your AI lifecycle.

Key Topics:

  • Traditional application security vs AI security

  • AI security use cases in the modern enterprise

  • Analysis of AI-related risks and vulnerabilities

  • Strategic recommendations for 2025

The Agentic Hype Trap

"Agentic AI", "autonomous agents", "multi-agent systems" - the tech industry has once again fallen in love with another buzzword. And yes, it's quite tempting. An AI agent that can autonomously tackle complex tasks, chain together multiple steps, and figure things out on its own? The demos look fantastic. Sign me up!

But here's what those shiny demos don't show you:

  • Every new "agentic" capability adds a new layer of abstraction that obscures what's actually happening under the hood (aka hidden development cost). When things break (and they will), debugging becomes a nightmare.

  • When your agent needs to connect to existing systems, handle edge cases, scale beyond the demo scenario, and meet enterprise security requirements - you'll pay a big integration tax. Suddenly that "simple" agent implementation turns into a complex engineering project.

  • Right now, there are dozens of frameworks promising to make agent development easier. LangGraph, AutoGen, CrewAI, you name it! But most of them will either change dramatically or disappear altogether within the next 12 months, given the pace of change in AI. The risk of framework lock-in is real.

So I was really glad when I saw Anthropic's blog post that literally said:

When building applications with LLMs, we recommend finding the simplest solution possible […]. This might mean not building agentic systems at all.

Anthropic

So no, you don't need complex agent frameworks to build effective AI solutions. In fact, choosing the simplest possible approach is often the best strategy.

Let's look at what that means in practice...

When is a Workflow Good Enough?

Simple workflow patterns solve 90% of real-world AI use cases. The types I've used are largely compatible with what Anthropic presented - but there are some nuances.

Most of these workflow types can be implemented in a few dozen lines of code - without any framework.

Type 1: Prompt Chaining

Think of this as breaking down a complex task into smaller, more manageable steps. Each step feeds into the next, pre-defined step.

Example: Content Creation

  1. Define topic and target audience

  2. Generate detailed outline

  3. Write the article following the outline

  4. Optimize for SEO

  5. Adjust tone & style

This is a battle-tested workflow that's predictable, easy to debug, and allows for human checkpoints wherever needed.
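Here's a minimal sketch of this chain in plain Python. The call_llm helper and the model name are placeholders for whichever provider you use (this assumes the OpenAI SDK and an OPENAI_API_KEY in your environment):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; swap for any provider you prefer

def call_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def content_pipeline(topic: str, audience: str) -> str:
    # Each step feeds its output into the next, pre-defined step
    outline = call_llm(f"Create a detailed outline on '{topic}' for {audience}.")
    draft = call_llm(f"Write an article that follows this outline:\n{outline}")
    seo_draft = call_llm(f"Optimize this article for SEO without changing its meaning:\n{draft}")
    return call_llm(f"Adjust the tone to be friendly and concise:\n{seo_draft}")
```

No framework, no hidden state - just four calls you can log, test, and debug individually. The later sketches reuse this call_llm helper.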

Type 2: Multi-Expert Analysis

In this pattern, the same input is analyzed simultaneously by multiple LLMs, each acting as a specialized expert.

Example: Business Proposal Review

  • LLM 1: Financial validity check

  • LLM 2: Market analysis check

  • LLM 3: Risk assessment

  • Combine insights programmatically into a single review

This is a great workflow when you need to look at the same problem from multiple perspectives that don't really depend on each other.
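A rough sketch of how this could look, reusing the call_llm helper from the prompt-chaining example (the expert prompts are only illustrative):

```python
import concurrent.futures

# call_llm(prompt) -> str: the provider helper from the prompt-chaining sketch

EXPERTS = {
    "Financials": "You are a CFO. Assess the financial validity of this proposal:\n",
    "Market": "You are a market analyst. Assess the market fit of this proposal:\n",
    "Risks": "You are a risk officer. List the main risks in this proposal:\n",
}

def review_proposal(proposal: str) -> str:
    # The same input goes to every 'expert' in parallel
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(call_llm, prompt + proposal)
                   for name, prompt in EXPERTS.items()}
        findings = {name: f.result() for name, f in futures.items()}
    # Combine the insights programmatically into a single review
    return "\n\n".join(f"{name}:\n{text}" for name, text in findings.items())
```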

Type 3: Ensemble Voting

Run the same task multiple times to get higher confidence in the output.

Example: Policy Compliance Check

  • Input: New Data Protection Policy

  • LLM 1: Legal compliance check

  • LLM 2: Legal compliance check

  • LLM 3: Legal compliance check

  • Take majority vote or require unanimous agreement to pass.

This workflow trades cost for accuracy: you pay for multiple runs of the same check to gain confidence in the result. It's particularly useful in higher-stakes classification scenarios.
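In code, this is little more than a loop and a vote counter (again reusing the call_llm helper; the GDPR prompt is only an example):

```python
from collections import Counter

def compliance_check(policy: str, n_votes: int = 3) -> bool:
    prompt = ("Does the following data protection policy comply with GDPR? "
              "Answer with exactly COMPLIANT or NON_COMPLIANT.\n\n" + policy)
    votes = [call_llm(prompt).strip().upper() for _ in range(n_votes)]
    # Majority vote; swap for `all(v == "COMPLIANT" for v in votes)` if you need unanimity
    winner, count = Counter(votes).most_common(1)[0]
    return winner == "COMPLIANT" and count > n_votes // 2
```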

Type 4: Parallel Processing

Break large tasks into smaller chunks that can be processed simultaneously for faster results (or to overcome context window limitations).

Example: Document Analysis

  • Large contract document

  • LLM 1: Analyze sections 1-3

  • LLM 2: Analyze sections 4-6

  • LLM 3: Analyze sections 7-9

  • Combine findings into comprehensive analysis

This workflow is useful in real-time scenarios where you want to give the user immediate feedback, or when the input document is too long for the model to follow the prompt reliably.
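A sketch of the fan-out/fan-in structure, assuming the contract has already been split into a list of sections:

```python
import concurrent.futures

def analyze_contract(sections: list[str], chunk_size: int = 3) -> str:
    # Fan out: group sections into chunks and analyze them in parallel
    chunks = ["\n".join(sections[i:i + chunk_size])
              for i in range(0, len(sections), chunk_size)]
    with concurrent.futures.ThreadPoolExecutor() as pool:
        partials = list(pool.map(
            lambda chunk: call_llm("Summarize risks and obligations in these sections:\n" + chunk),
            chunks,
        ))
    # Fan in: combine the partial findings into one comprehensive analysis
    return call_llm("Merge these partial analyses into a single report:\n" + "\n---\n".join(partials))
```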

Type 5: Iterative Refinement

Two LLMs working as a team - one creates while the other reviews and provides feedback.

Example: Technical Documentation

  • Generator LLM: Writes documentation for a software artifact

  • Evaluator LLM: Checks against criteria (e.g., structure adherence)

  • If criteria are not met, the feedback is passed back to the Generator LLM

  • Process continues until all criteria are met

This process works best for scenarios where there's a hard evaluation criterion. Otherwise, the Evaluator LLM will always find something to "improve", leading to worse results in the end.
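A sketch of the generator/evaluator loop - note the explicit criteria and the cap on iterations, both of which keep the loop from "improving" forever:

```python
def write_docs(artifact: str, criteria: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Write documentation for this code:\n{artifact}")
    for _ in range(max_rounds):
        # Evaluator checks against hard, explicit criteria - not vague 'make it better' instructions
        verdict = call_llm(
            f"Check this documentation against these criteria:\n{criteria}\n\n"
            f"Documentation:\n{draft}\n\n"
            "Reply with PASS if every criterion is met, otherwise list exactly what to fix."
        )
        if verdict.strip().upper().startswith("PASS"):
            break
        # Feed the evaluator's feedback back to the generator
        draft = call_llm(f"Revise the documentation to address this feedback:\n{verdict}\n\n{draft}")
    return draft
```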

Type 6: Dynamic Planning

More complex but still not fully agentic. The first LLM plans the steps dynamically, but execution follows a structured workflow.

Example: Market Research

  • Input: Research question

  • Planning LLM: Define research areas

  • Multiple LLMs analyze different aspects (competitors, trends, opportunities)

  • Final LLM synthesizes findings
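Here's roughly what that looks like: the planner's output is dynamic, but the surrounding structure stays fixed (the JSON-array prompt is illustrative, and in production you'd validate the parsed plan):

```python
import json

def market_research(question: str) -> str:
    # Planning LLM defines the research areas - this part is dynamic
    plan = call_llm("Return 3-5 research areas for this question as a JSON array of strings, "
                    "and nothing else:\n" + question)
    areas = json.loads(plan)  # assumes well-formed JSON; add validation/repair in practice
    # Execution still follows a fixed structure: one analysis per area, then a synthesis step
    findings = [call_llm(f"Research question: {question}\nAnalyze this aspect: {area}")
                for area in areas]
    return call_llm("Synthesize these findings into a concise report:\n" + "\n---\n".join(findings))
```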

Most "agent-like" behavior can be achieved using one (or a combination) of these simple patterns. You get 80% of the benefit with 20% of the complexity.

Agentic Workflows

But then again, what are truly agentic workflows anyway?

If you need a quick primer on Agents in general, check out the last post on Understanding AI Agents. In short, an agent is an entity that can:

  • Exist in an environment

  • Perceive this environment

  • Take actions to manipulate it

In terms of LLMs, I like the framework from Hugging Face that describes different levels of LLM autonomy.

Our patterns from above - prompt chaining, multi-expert analysis, etc. - all operate at the first two levels. The LLM either processes information or makes basic routing decisions. There's no real "agency" involved.

In my opinion, it really only makes sense to talk about agents when you have at least level three - where the LLM actively selects and uses tools based on its own "reasoning". Instead of following predefined steps (like our workflow patterns), these agents decide for themselves which tools to use and when. Beyond that, we get into territory where LLMs control entire program flows or even trigger other agents.
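To make the contrast concrete, here's a bare-bones sketch of a level-three loop. The two tools are hypothetical stubs; the point is that the model, not your code, picks the next action:

```python
import json

# Hypothetical tool stubs - in reality these would hit a search API, a CRM, etc.
TOOLS = {
    "search_web": lambda query: f"(search results for '{query}')",
    "query_crm": lambda customer: f"(CRM record for '{customer}')",
}

def mini_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_steps):
        # The LLM decides which tool to call next, or whether to finish - that's the 'agency'
        decision = json.loads(call_llm(
            'Decide the next step. Reply only with JSON like '
            '{"action": "search_web" | "query_crm" | "finish", "input": "..."}\n' + history
        ))
        if decision["action"] == "finish":
            return decision["input"]
        result = TOOLS[decision["action"]](decision["input"])
        history += f"\n{decision['action']}({decision['input']}) -> {result}"
    return history  # ran out of steps - another failure mode you now have to handle
```

Notice how much new failure surface even this toy version adds: malformed JSON, wrong tool choices, loops that never finish.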

While these higher levels of agency sound exciting, they're often overkill for what most businesses are trying to achieve right now.

Signs You're Overcomplicating Your AI Solution

Here are some clear warning signs that your "agent" should probably be a workflow instead:

⚠️ Your Task Flow is Actually Static

  • You find yourself hardcoding most of the "agent decisions"

  • The same steps happen in the same order every time

    ➜ A simple prompt chain would accomplish the same thing

⚠️ No Real Tool Decisions

  • Your "agent" is just calling the same tools in sequence

  • Tool selection could be handled by basic if/then logic (see the sketch below)

    ➜ You're building complex reasoning for simple routing decisions
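For comparison, a sketch of what "basic if/then logic" means here - keyword routing into different prompts, no agent required (the categories are just examples):

```python
def route_request(user_message: str) -> str:
    # Plain if/then routing instead of letting an 'agent' reason about which tool to use
    text = user_message.lower()
    if "invoice" in text or "refund" in text:
        prompt = "You are a billing support assistant. Answer:\n"
    elif "password" in text or "login" in text:
        prompt = "You are an account support assistant. Answer:\n"
    else:
        prompt = "You are a general support assistant. Answer:\n"
    return call_llm(prompt + user_message)
```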

⚠️ Forced Complexity

  • You're adding tools just to make it more "agent-like"

  • Simple tasks are being broken into unnecessary sub-steps

    ➜ A single LLM call could handle what you've split into five tools

⚠️ Framework Overload

  • You're spending more time learning agent frameworks than solving problems

  • Simple integrations require mountains of boilerplate code

    ➜ You've added three dependencies to do one basic task

Remember: True agency makes sense when you need dynamic tool selection and complex reasoning. For everything else, stick to simple workflows. You'll get better results with less headache.

Conclusions

While Agentic AI is fascinating, we shouldn't get too carried away by it right now. The question isn't which agent framework is best, but whether you need one at all.

Start simple, stay practical, and only add complexity when you have no other choice. After all, removing complexity is much harder than adding it.

The best AI implementations aren't the most sophisticated - they're the ones that actually drive profit.

Speaking of which, I've got 2 spots left for the AI 10K, where I'll help you add $10K profit to your business with the help of AI. If you're interested, reply with "Workflow" and I'll get you the details. (Doors close on Jan 15.)

See you next Friday,
Tobias
