ChatGPT API Released - Here's what you need to know

About the 90% price cut and more use case opportunities

Read time: 5 minutes

Hi everyone,

OK, so originally I had planned to discuss the 3 different options you have to build AI capabilities into your products, services or dashboards.

But then OpenAI released their ChatGPT API to the public on Wednesday.

So I guess my original post will have to wait for another week.

Because I believe the release of the ChatGPT API will have massive impact on various applications and use cases in your businesses, and I just couldn't wait to share it with you all.

Let's find out what all the hype is about and what you need to know!

What's going on?

Until now, ChatGPT has only been available as a web interface.

(In case you've been living under a rock, ChatGPT is an advanced chatbot created by OpenAI that can have human-like conversations on any topic.)

And now, the underlying AI model is available as a service through a public API for anyone to use.

Why is that important?

Previously, if you wanted to use ChatGPT, the only way was to log in on OpenAI's website, type in your prompt, get the answer, and then copy/paste it into another application or process.

There was no easy way to integrate ChatGPT into 3rd-party applications.

(You could either hack the web interface with an - ahem - unsupported solution. Or you could just use one of the other GPT-3 APIs and hope no one noticed the difference.)

But as of Wednesday, that's changed. You can now officially integrate ChatGPT into any application, website or process you like, as long as there is an online connection (and your company allows it - more on enterprise use cases below).

How does the API work?

The ChatGPT API uses the same model that powers the ChatGPT product. The API takes the messages you provide and generates a response message in return.

In particular, the API accepts inputs, along with some additional information, provided in a format called Chat Markup Language ("ChatML").

Have a look at the example API call below:
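The model name and message format below follow the official documentation; the payload is sketched here as the raw JSON you'd POST to the chat completions endpoint, with the conversation content itself just serving as an illustration:

```python
import json

# Raw JSON payload for POST https://api.openai.com/v1/chat/completions.
# Model name and message format follow the official docs; the conversation
# content is only an example.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the World Series in 2020?"},
        {"role": "assistant",
         "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
}

print(json.dumps(payload, indent=2))
```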

Here are the main components of this API request, as explained in the official ChatGPT API documentation:

The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either “system”, “user”, or “assistant”) and content [...].

Typically, a conversation is formatted with a system message first [...]. The system message helps set the behavior of the assistant. In the example above, the assistant was instructed with “You are a helpful assistant.”

The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.

The assistant messages help store prior responses. They can also be written by a developer to help give examples of desired behavior.

Bear in mind that the model has no memory of past requests. This means that all relevant information must be provided in this conversation (= API request).

In the example above, the user's final question "Where was it played?" only makes sense in the context of the previous messages about the 2020 World Series.

If a conversation doesn't fit within the model's prompt limit of 4,096 tokens (~3,000 words), it will have to be shortened in some way. This is a major limitation at the moment, as it's not (yet) possible to customize (fine-tune) ChatGPT with your own data.
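Until then, apps have to manage the context window themselves. Here's a minimal sketch of one common workaround, dropping the oldest turns first (the helper is hypothetical, not an official SDK function, and the 4-characters-per-token rule is only a rough approximation; a real app would count tokens exactly, e.g. with OpenAI's tiktoken library):

```python
def trim_history(messages, max_tokens=4096):
    # Rough heuristic: ~4 characters per token. Hypothetical helper, not an
    # official SDK function; use a real tokenizer like tiktoken in production.
    def approx_tokens(msgs):
        return sum(len(m["content"]) // 4 for m in msgs)

    trimmed = list(messages)
    # Keep the system message (index 0) and drop the oldest turns first.
    while approx_tokens(trimmed) > max_tokens and len(trimmed) > 2:
        trimmed.pop(1)
    return trimmed
```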

So how can you then build custom chatbots that answer questions about your own documents? Currently, the best way to give ChatGPT access to large amounts of custom data is to use a technique called embeddings: you create an index over your documents and retrieve only the most relevant passages to include in the prompt. Explaining this concept in detail is beyond the scope of this newsletter. Check out tools like LlamaIndex (GPT Index) if you want to learn more about it.
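To give just a flavor of the retrieval idea, here's a minimal sketch (everything here is illustrative: embed() stands in for a call to an embeddings endpoint that turns a text into a vector, and real tools like LlamaIndex handle chunking, storage, and prompt assembly for you):

```python
import math

# Illustrative retrieval sketch: embed() stands in for an embeddings call
# that maps a text to a vector of floats.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_chunk(question, doc_chunks, embed):
    # Retrieve the passage whose embedding is most similar to the question,
    # then paste that passage into the ChatGPT prompt as context.
    q_vec = embed(question)
    return max(doc_chunks, key=lambda chunk: cosine(embed(chunk), q_vec))
```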

Back to the API!

When you submit a request as shown above, you will get a response that will look like this:
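The field names below follow the official API reference; the IDs, timestamps, token counts, and content are made up for illustration:

```python
# Illustrative response shape (field names from the official API reference;
# the ids, timestamps, and content here are invented).
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo-0301",
    "usage": {"prompt_tokens": 56, "completion_tokens": 17, "total_tokens": 73},
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The 2020 World Series was played at Globe Life "
                           "Field in Arlington, Texas.",
            },
            "finish_reason": "stop",
            "index": 0,
        }
    ],
}

# The generated reply lives in choices[0].message.content:
print(response["choices"][0]["message"]["content"])
```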

What does it cost?

What’s a bit surprising is that the ChatGPT API is significantly cheaper than the GPT-3 davinci model, with a cost of $0.002 per 1k tokens compared to $0.02 per 1k tokens.

That's a 90% cost reduction!

OpenAI says they achieved this "through a series of system-wide optimizations".

The other part of the story might be that the chat-style interface quickly consumes many more tokens than the text completion interface we know from GPT-3, and that this was one reason OpenAI needed to work on the pricing.

Because what most people forget is that both input and output tokens count toward the billed quantity. For example, if your API call uses 100 tokens in the message input and you receive 500 tokens in the message output, you will be charged for 600 tokens.
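That billing rule is easy to sketch in code (a toy helper, not an official SDK function):

```python
# Toy helper (not an official SDK function): both input and output tokens
# are billed at the same $0.002 per 1,000 tokens rate.
def chatgpt_cost(prompt_tokens, completion_tokens, price_per_1k=0.002):
    return (prompt_tokens + completion_tokens) / 1000 * price_per_1k

# 100 input tokens + 500 output tokens = 600 billed tokens
print(f"${chatgpt_cost(100, 500):.4f}")
```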

Still, it's a massive price drop.

In fact, OpenAI now recommends using ChatGPT instead of davinci for most use cases:

Since gpt-3.5-turbo has similar performance to text-davinci-003, but at 10% of the price per token, we recommend gpt-3.5-turbo for most use cases.

So, bottom line:

The ChatGPT API is currently OpenAI's go-to model for almost any text processing use case.

What new use cases are now possible?

Here are some examples:

Integrate ChatGPT into other applications

If you've used ChatGPT before, you can now integrate it directly into your favorite software.

This will happen in some of the most popular software out there, especially those with a thriving ecosystem like WordPress.

Here's what that could look like:

Make existing chatbots more powerful

If you've already built a chatbot with GPT-3 under the hood, switching that connection to the new ChatGPT API will most likely produce more natural conversations, be faster, and way cheaper!

Build new apps & businesses

The possibilities for new applications built on this API are endless.

Because most people don't want to do prompt engineering.

This means that you can build ANY application layer around the ChatGPT API that takes user input, translates it into a prompt, and then returns ChatGPT's results to the user.
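As a sketch of that wrapper pattern (the template and product example are made up):

```python
# Hypothetical example of the wrapper pattern: the app owns the prompt
# engineering, the end user only types plain input.
PROMPT_TEMPLATE = (
    "You are an experienced copywriter. Write three catchy taglines "
    "for the following product:\n\n{product}"
)

def build_messages(user_input):
    # Wrap the user's raw input in the app's own prompt before sending it
    # to the ChatGPT API.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": PROMPT_TEMPLATE.format(product=user_input)},
    ]
```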

Want some inspiration to get started?


However, be aware: This segment is ultra competitive - literally hundreds (thousands?) of startups are trying to do this right now, and your individual competitive advantage is rather small - in the end, you're still relying on the underlying API. You need either a very good niche or a very good marketing / distribution strategy to survive.

Enterprise Use Cases

A big bottleneck for many enterprise use cases (especially in Europe) is that you still have to send your data to OpenAI's servers in the US, which basically kills any GDPR-sensitive use case.

Even though the data usage policies are stricter with the new API (deletion after 30 days!), the data is still thrown over the fence and you don't want to put anything critical there.

I'm sure this will be addressed eventually. But when? Who knows! The best option for companies based in Europe is to try European alternatives to OpenAI like Aleph Alpha from Germany.

Otherwise, you're limited to use cases that don't involve personal or critical data.

While we're all trying to figure out what those might be, I'm off developing my text-to-SQL plugin for Power BI:

This is just the beginning. I don't really expect a big bang, but countless micro-relevant use cases for every business niche.

The general idea of providing natural language interfaces to enterprise software alone is huge!

Anything else I need to know?

OpenAI has also released their Whisper API, which converts speech to text with excellent performance and at a competitive price ($0.006 / minute).

Interestingly, the ChatGPT and Whisper APIs complement each other quite well.

With both services, you could easily convert voice (speech) to text, pass that text to chatGPT, get the results, and either display it as text to the end user or have it read by another AI service using any voice.
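Here's a hedged sketch of that pipeline with stand-in functions for the two API calls (in the openai Python package, these would be openai.Audio.transcribe("whisper-1", audio_file) and openai.ChatCompletion.create(model="gpt-3.5-turbo", ...)):

```python
# Stand-in functions for the two API calls (in the openai Python package:
# openai.Audio.transcribe("whisper-1", audio_file) for speech-to-text and
# openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=...) for chat).
def voice_assistant(transcribe, chat, audio_file):
    user_text = transcribe(audio_file)  # speech -> text via Whisper
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_text},
    ]
    return chat(messages)  # text -> reply via ChatGPT
```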

Voice interfaces via consumer devices such as smart speakers from Amazon or Google, or Apple's AirPods, have become increasingly popular, so it's only a matter of time until some developers tie it all together.

Get ChatGPT-powered recipe advice from your kitchen smart speaker, anyone?

What's next?

While everyone is busy playing around with the new API and upgrading their existing GPT-3 workflows (including me!), I'm really looking forward to the fine-tuning API that will likely be released at some point, and perhaps even a larger token limit.

It will also be interesting to see how this release affects other OpenAI APIs.

Will they eventually consolidate everything into one product?

Exciting times ahead!

Thanks for reading today's (nerdy) newsletter.

Next week, I'll be back to explain the 3 different ways you can build AI capabilities into your products, services, or dashboards. And guess what - using AI service APIs will be one of them! :)

See you next Friday!




Want to learn more? Here are 3 ways I could help:

  1. Read my book: If you want to further improve your AI/ML skills and apply them to real-world use cases, check out my book AI-Powered Business Intelligence (O'Reilly).

  2. Book a meeting: If you want to pick my brain, book a coffee chat with me so we can discuss more details.

  3. Follow me: I regularly share free content on LinkedIn and Twitter.


If you liked this content then check out my book AI-Powered Business Intelligence (O’Reilly).