What to Expect from AI in 2024
Revisiting the past, embracing the present, and (trying) to anticipate the future in AI
Hi there,
Hope you're all enjoying the holidays – perhaps still feeling the aftermath of Christmas.
Looking back, it's clear that this year has been another rollercoaster in the world of AI. (Armand Ruiz wrote a whole ebook on it – highly recommended!) And looking forward, it's interesting how often people predict the future, but rarely come back to check whether their previous predictions were actually right.
So in today's edition, let's review my predictions from last year and see what might happen in 2024 for AI.
Let’s dive in!
Revisiting Last Year's Predictions
Last year, I made a prediction particularly close to my heart: Large Language Models (LLMs) will bring NLP in data analytics to a whole new level. Now, a year later, let's see how this prediction has turned out.
Here's a summary of the progress we've seen:
The Rise of 'Chat with my Data' Apps: Several startups like Akkio launched 'Generative BI' products that allow users to have conversational interactions with their data.
Major BI Players Jumped on the Bandwagon: Microsoft and Tableau haven't been idle either. They've been busy integrating LLMs into their platforms, offering 'copilot' features that are supposed to redefine interaction with data.
ChatGPT Boosted the Data Analytics Workflow: As I’ve shown recently, LLMs indeed supercharged various facets of data analytics, demonstrating their transformative potential.
However, it's not all a success story just yet. Mainstream adoption remains elusive. Trust issues and the quest for accuracy continue to be significant hurdles in taming these sophisticated LLM 'beasts'.
Reflecting on the rationale behind my prediction:
LLMs Becoming More Efficient and Affordable: This hit the mark. The advent of GPT-4 was a game-changer, and the drop in GPT-3.5 pricing was a pleasant surprise, making these models more accessible.
Adaptation to Custom Domains: Here, the reality slightly diverged from my forecast. While organizations have been customizing LLMs for their specific needs, it's not fine-tuning that’s leading the way. Instead, Retrieval-Augmented Generation (RAG) has become the preferred approach in many cases.
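To make the RAG point concrete, here's a minimal sketch of the pattern: retrieve the documents most relevant to a question, then let the LLM answer grounded in that context. The retrieval here is naive keyword overlap purely for illustration (real systems use embeddings), and `llm_complete` is a hypothetical stand-in for whatever completion API you use.

```python
# Sketch of the RAG pattern: retrieve context, then answer from it.
# Retrieval via word overlap is a toy stand-in for embedding search.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query, keep the top_k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_rag(query: str, documents: list[str], llm_complete) -> str:
    """Stuff the retrieved context into the prompt and ask the model."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)  # llm_complete is a placeholder callable
```

The appeal over fine-tuning is visible even in this toy: to "teach" the system new facts, you only swap out `documents` – no training run required.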
It's also clear that most business users still struggle to use BI and traditional dashboards for everyday analytics. I remain convinced that once business users can use a reliable chatbot, most BI dashboards will die instantly. If you're interested in this topic, check out the discussion I had in a recent podcast episode:
In retrospect, I think my prediction was not too bad, especially considering it was made before GPT-4 burst onto the scene.
Now, with the past year's prediction revisited, let’s look forward and explore what AI might have in store for us in 2024.
AI Predictions for 2024
Before we explore what might happen in AI over the next 12 months, let’s start with an anti-prediction:
I don’t expect dramatically better AI models in 2024.
Make no mistake. I would be absolutely thrilled to see a GPT-5 model in 2024 that leaves GPT-4 in the dust.
However, two things keep me skeptical:
Tech Giants Struggle: Despite almost unlimited resources, no major tech company has yet released an LLM that significantly outperforms GPT-4. Google, Amazon, Meta, X, and co. have developed models comparable to GPT-3.5 with relative ease, but none has beaten GPT-4. Even Google's upcoming Gemini Ultra model seems to be in the same league as GPT-4.
Regulatory and Legal Hurdles: The environment for building LLMs got tougher. The Wild West of two years ago is now a regulatory and legal minefield. I believe that the EU's AI Act and high-profile copyright disputes are just the tip of the iceberg, indicating increased costs and complexities in developing and refining next-generation models.
As a result, our focus should shift. Instead of waiting for the next big thing, we should harness the tools we currently have. This means identifying the most impactful use cases and learning how to reliably deploy today's LLMs in real-world scenarios.
With that pragmatic approach in mind, let's take a look at what I expect to happen:
Prediction 1: Augmentation Beats Automation
The focus of AI systems in 2024 will be on assisting humans, not replacing them. While GenAI produces brilliant results in 99% of cases, it fails miserably in the remaining 1%. The solution? Keeping a human in the loop remains the most feasible option.
As a result, AI systems will be more interactive and collaborative, enhancing decision-making and efficiency without displacing human workers on a large scale.
Every major piece of enterprise software will have "copilot" features built in.
How to Prepare
Balancing AI and Human Input: Use AI to take over tedious tasks, so human workers can focus on more complex issues.
Designing AI to Support Humans: AI should blend in seamlessly – not as another distraction, but as an assistant that's there when needed.
Contextual Application: AI systems need to understand the current user persona and their workflow (what they are trying to achieve).
Train People: Ensure that your employees understand the fundamentals of generative AI, its capabilities, and its limitations.
Finding the Augmentation Sweet Spot: It's vital to strike a balance between AI's capabilities and human intuition. Minimal AI input (like spell checks) may seem insufficient, while excessive AI intervention (like auto-generating articles from a keyword) can lead to over-reliance and underutilization of human skills.
This 'sweet spot' varies across different tasks and industries. Finding it will be one of the key challenges for your business.
Prediction 2: Open Source Guarantees (Much Needed) Flexibility
The AI industry is in constant flux. OpenAI almost stumbled over a weekend, and established leaders like Google and Apple are still looking for their place. It's too early to call a winner yet.
Meanwhile, the open source AI community is booming. With over 400,000 models on Hugging Face and significant traction for frameworks like LangChain, I expect this trend to accelerate.
Tools like Hugging Face AutoTrain and frameworks building on research like the ChipNeMo paper will make advanced LLM customization even more achievable.
This underscores the need for adaptability over commitment to rigid solutions, and building on an open-source stack is one way to ensure it.
How to Prepare
Avoid Vendor Lock-In: Opt for flexible agreements with AI vendors to stay adaptable.
Embrace Open-Source Tools: If you have the talent and strategic alignment, embrace open source to adapt quickly in a volatile market.
Experiment with Customization: Understand the fine-tuning process, required data, and use cases so you're ready when you need it.
Develop Tracking Systems: What gets measured gets managed. You need a system to track how well changes to your models or system designs are performing. Develop reliable performance metrics to make sure you're moving in the right direction.
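A tracking system doesn't have to be elaborate to be useful. As a sketch: keep a fixed set of test questions, run them through the current system, and record the score per version so any change to the model or prompt can be compared against the last one. The exact-match metric and the `(question, expected)` pairs here are placeholders; real setups often need fuzzier matching or human grading.

```python
# Minimal evaluation harness: score a system on fixed test cases and
# keep a per-version history so regressions are visible immediately.

def evaluate(system, test_cases: list[tuple[str, str]]) -> float:
    """Return the fraction of test cases the system answers correctly
    (case-insensitive exact match, for illustration only)."""
    correct = sum(
        1 for question, expected in test_cases
        if system(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(test_cases)

# Track accuracy per model/prompt version.
history: dict[str, float] = {}
history["v1"] = evaluate(lambda q: "42", [("meaning of life?", "42")])
```

Even this toy version gives you what matters: a single number per version, so "did the last change help?" has an answer instead of a hunch.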
Instead of locking yourself into a single solution, take a flexible, customizable approach that can adapt to the rapid changes in the AI space.
More predictions?
As we look further into the AI horizon of 2024, other developments loom on the landscape:
AI Commoditization: The pace at which AI technologies are becoming mainstream is staggering. What used to be super hard to do – like building production-scale RAG systems – is now much easier thanks to tools like LlamaIndex that have significantly lowered the entry barrier.
RAG Workflows: Speaking of RAG, I expect a rise in commercial services and startups that specifically target these types of architectures. OpenAI's no-code assistant playground is a prime example of how these technologies are becoming more user-friendly and widely applicable.
If you want to read more about what could be coming in 2024, I recommend the report published by Prashanth Southekal, to which I contributed along with other industry leaders. Follow Prashanth on LinkedIn so you don’t miss the release!
Conclusion
The only thing I can confidently predict about AI is that it won't get boring!
Don’t wait for the next ‘big thing’. The key to successful AI adoption lies in leveraging current technologies while being agile enough to embrace future advancements.
For example, if you're running an AI-powered customer support chatbot, advances in LLMs will naturally enhance its capabilities.
Conversely, if your focus is on maintaining a static, rules-based chatbot, the road ahead will be challenging as you compete with AI-powered alternatives.
You don't want to compete with AI. You want to embrace it.
Stay curious about what AI has to offer. And if you haven't already, take the plunge and launch your first Atomic AI use case.
See you next Friday - in 2024!
Tobias
PS: If you found this newsletter useful, please leave a testimonial! It means a lot to me!