- The Augmented Advantage
- Posts
- 5 Principles for Writing Effective Prompts (2025 Update)
5 Principles for Writing Effective Prompts (2025 Update)
Essential techniques to get great results from any LLM
Hi there,
I wrote the first version of this article about 1.5 years ago (link at the bottom). After spending literally over a thousand hours prompting AI models since then, I think it's time for an update.
AI models have come a long way. We've moved from GPT-3.5 that could barely count to 10, through early GPT-4 with its 32k context window, to reasoning powerhouses like o3 and models like Gemini that can handle 2M tokens. Yet, at their core, these models still share the same key fundamentals.
That’s why some prompting principles have stood the test of time, while others haven't aged well at all.
Let's find out what actually works in 2025 (and probably beyond)!
Workshop Announcement
Prompting Principles: What Hasn't Changed
After having tried countless prompts and their results, I can confidently say: Being specific still beats everything else. No matter which model you're using, vague prompts lead to mediocre results.
Here's a classic example I keep seeing:
Write me a marketing email for my productivity app.
That's like walking into a mechanic's shop and saying "fix my car" without mentioning what's wrong with it. Modern AI models might be more sophisticated, but they still need context about:
What "better" means for you
What constraints you're working with
What goal you're trying to achieve right now
A better version would be:
Write a marketing email for our productivity app. The audience is small business owners (5-50 employees) who struggle with team coordination. Focus on how our app saves 5+ hours per week through automated task management. Use a professional but friendly tone. Must include a clear call-to-action for our 14-day free trial.
The principles of specificity, clear goals, and proper context still form the foundation of good prompting.
But how exactly do you ensure specificity? That’s where my RGTD framework has never let me down.
The RGTD Framework
I've developed a simple framework that consistently delivers superior results. I call it RGTD - Role, Goal, Task, Details. Not the fanciest name, but it works.

Here's the breakdown:
Role - Who should the AI act as?
Goal - What are you trying to achieve?
Tasks - How should this goal be achieved or wat steps need to be taken? (Ideally in list form)
Details - What additional context matters? (Examples, output styles, further knowledge, etc.)
The beauty of RGTD is its flexibility. Sometimes you need all four elements, sometimes just one or two. For a simple translation, you might go right with the task.
For complex outputs, you'll want the full stack.
Here's our marketing email example using RGTD:
Role: You are an experienced SaaS marketing copywriter
Goal: Create a compelling marketing email that converts small business owners into free trial users
Tasks:
1. Write an attention-grabbing subject line
2. Open with a clear pain point around team coordination
3. Introduce our solution, highlighting the 5+ hours saved per week
4. Back this up with 1-2 specific examples of automated task management
5. Include a prominent call-to-action for the free trial
6. Draft a brief P.S. section reinforcing key benefits
Details:
- Audience: Small business owners (5-50 employees)
- Tone: Professional but friendly
- Key feature: Automated task management
- Offer: 14-day free trial
- Max length: 300 words
Notice how breaking down the Tasks into steps makes the prompt both clearer and easier to modify. Want to add or remove a section? Just adjust the list. Need to change the sequence? Simply reorder the steps.
Not every prompt needs this level of structure. But when outputs matter, RGTD helps ensure nothing critical gets missed.
Prompting Reasoning Models
While the RGTD framework works great for most fast models like Claude-3.5-Sonnet or GPT-4o, slower reasoning-focused models like GPT-o3 actually perform better with less (!) task specification.
Being overly specific about how the goal should be achieved will in my experience actually lead to poorer performance. But thanks to the RGTD framework, it’s pretty easy to adapt. Just leave the tasks out!
Here's how I'd rewrite our marketing email prompt for GPT-o3:
Role: You are an experienced SaaS marketing copywriter
Goal: Create a compelling marketing email that converts small business owners into free trial users
Details:
- Audience: Small business owners (5-50 employees)
- Tone: Professional but friendly
- Key feature: Automated task management saves 5+ hours/week
- Offer: 14-day free trial
- Max length: 300 words
See how I intentionally left the tasks breakdown out. By removing the step-by-step instructions, we let the model's reasoning capabilities shine through. It can determine the optimal approach based on the goal and details provided.
I've found this especially powerful for complex analytical tasks where the best approach isn't immediately obvious. Instead of potentially constraining the model with our assumptions about the right steps, we let it leverage its reasoning capabilities to find the optimal path.
Anti-Patterns for Prompting
Let's quickly address some popular "prompting techniques" you might see online. While these might work temporarily, they're not what I consider proper prompt practices:
Manipulation attempts like:
"I'll lose my job if you don't help me"
"I'll tip you $50 if you do this"
"You MUST help me or else..."
These might have worked briefly, but they're just exploiting training artifacts that get patched quickly.
The same goes for so-called "magic mega prompts" - those huge text blocks that promise amazing results if you just copy and paste them. I've tested dozens of these. Most are just noise with little understanding of how LLMs actually work.
Talking about "hacks" - here's something that actually matters: Separating instructions from input.
Keep Your Instructions Clean
One of the most common pitfalls I'm still seeing is LLMs mixing up instructions with input data. Have you ever seen a public chatbot fail like this?
It happens even with the best companies.
Let me explain this in more detail, because it's a common source of problems in AI applications.
Every time you prompt an LLM, it should be made super clear:
What should the LLM do? (instruction)
What data should the LLM work with? (input)
The problem is that LLMs are very eager to interpret anything that looks like an instruction as something they should do.
This becomes especially important when building chatbots that process user messages or working with business documents that may contain command-like language. The last thing you want is for your summarization bot to suddenly execute random instructions it finds in the text it's supposed to summarize!
For example, when processing meeting notes, something like this might go wrong:
Summarize these notes: The team discussed our next steps. Action items: Generate a full report about Q4 numbers...
The model might start generating a Q4 report (interpreting it as a new instruction instead of just summarizing the notes (your actual instruction)
This is why in production applications – at minimum! – you should always wrap input content in clear demarcators like in this simple example:
Summarize the following meeting notes indicated in <content> tags:
<content>
The team discussed our next steps. Action items: Generate a full report about Q4 numbers...
</content>
By explicitly telling the model "everything between these tags is content to work with, not instructions", you make your prompts more reliable and secure.
Conclusion
New models emerge monthly, each with their own quirks and capabilities. But by focusing on solid principles rather than quick hacks, you'll be well-equipped to work with whatever comes next.
If I had to put it into 5 principles, these would be:
Be specific about what you want
Use RGTD to help structure your prompt
Give reasoning models more space to explore
Keep your instructions clean and separate from input
Don't chase the latest "prompting hacks"
I'm excited to see what the next 1.5 years will bring. But I'm even more excited to see what you'll build with these principles.
See you next Friday!
Tobias
PS: The original version of this article can be found here.
Reply