Azure OpenAI Studio: When ChatGPT Meets The Mothership

A quick tour of Microsoft's enterprise platform for OpenAI models

Read time: 6 minutes

Hey there,

I recently gained access to Microsoft’s Azure OpenAI Studio.

If you’re unfamiliar, Azure OpenAI Studio is an integrated no-code platform for OpenAI in Microsoft's Azure cloud.

So what’s it all about, and why should you care?

Today, I want to provide you with a brief overview and highlight the main benefits it offers.

Let's dive in!

Want to level up your analytics skills? Join my upcoming Business Analytics with Python Bootcamp - LIVE with O'Reilly!

Participation requires an O'Reilly subscription. Use this promo link to try it free for 30 days!

OpenAI on Azure - Same, same but different!

First things first - OpenAI on Azure does not introduce any new technology. These are the same APIs you know from OpenAI directly - in fact, slightly fewer, since you can't access GPT-4 (yet).

So what’s the big deal?

You might think it doesn't matter whether you access OpenAI services like ChatGPT from OpenAI directly or via the mothership Microsoft (OpenAI's main investor).

In my opinion, it's nothing less than a game-changer for enterprises, because it "allows" them to use this technology in the first place.

Let’s unpack the 4 main benefits that come with it.

Benefit 1: Data Control

One of the advantages of using Azure is that you can deploy models such as ChatGPT under your own account in a dedicated region.

This means you have more control and governance over where your data is processed.

Currently you can choose between three different regions to deploy OpenAI models:

  • South Central US

  • East US

  • Western Europe

So yes - you can now actually host OpenAI models within Europe!

The Western Europe region is especially important because it enables GDPR-related use cases.

Setting up a new OpenAI resource on Azure is very easy:

  1. Create a new resource.

  2. Choose a region.

  3. Go!

Some models are not available in certain regions. For instance, the ChatGPT model (gpt-35-turbo) is currently only available in US regions. You can find an overview of the models and regions available at this link.

At the time of this writing, the pricing is the same across all regions.

Benefit 2: Governance

An OpenAI resource on Azure is deployed just like any other Azure resource - with all the benefits that come with it:

  • Access management: You can control who has access to this resource using Azure IAM (Identity and Access Management).

  • API key management: Every Azure OpenAI resource comes with two API keys that you can regenerate as often as you like. You do not need to maintain a separate OpenAI account - everything is handled on Azure. (No longer fiddling around with OpenAI API keys!)

  • Cost control: You can monitor the cost of this OpenAI resource by using your budget explorer in Azure. This way, you can keep track of your expenses on one platform.

These are key components of managing OpenAI services at scale - and for some companies, their absence would be a red flag.
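To make the API-key point concrete, here's a minimal sketch of how a request against an Azure OpenAI deployment is addressed. The resource name, deployment name, and key are placeholders you'd replace with your own, and the request is only constructed here, never sent - the URL scheme and "api-key" header follow Azure's REST convention (note that Azure uses an "api-key" header rather than OpenAI's "Authorization: Bearer ..." style):

```python
import json
import urllib.request

# Hypothetical names -- replace with your own resource, deployment, and key.
RESOURCE = "my-openai-resource"
DEPLOYMENT = "my-gpt35-deployment"
API_KEY = "one-of-the-two-keys-from-the-azure-portal"


def build_completion_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a completion request for an Azure OpenAI deployment."""
    # On Azure you address your *deployment* name, not the model name directly.
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/completions?api-version=2023-03-15-preview"
    )
    body = json.dumps({"prompt": prompt, "max_tokens": 50}).encode("utf-8")
    # Azure authenticates with an "api-key" header instead of a Bearer token.
    headers = {"Content-Type": "application/json", "api-key": API_KEY}
    return urllib.request.Request(url, data=body, headers=headers, method="POST")


req = build_completion_request("Say hello")
```

Because the key lives in the Azure resource (and can be regenerated there at any time), rotating credentials never touches an OpenAI account.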

Benefit 3: No-code playground

One of the main features of the OpenAI Studio itself is the so-called playgrounds. Currently, there are two of them - one for GPT-3 models and one for ChatGPT.

GPT-3 Playground

The GPT-3 playground is a simple text completion interface. You can input a prompt, and it will return the completion. A nice feature is the selection of pre-defined templates for different use cases. For instance, if you're building a program that converts natural language queries into SQL code, there's a prompt template for that purpose.

There are many pre-written templates available, as you can see from the screenshot above. You will also find a panel that allows you to customize model parameters such as temperature, maximum tokens, stop sequences, and more.

When you're done, you can simply export the prompt and parameters for a given model using the "View code" option. This returns the equivalent Python, curl, or JSON code.

Having these code snippets available isn't a game changer per se, but it's definitely helpful to get started quickly.

Chat playground

In contrast to the GPT-3 playground, the Chat playground offers a more advanced interface for chat conversations.

Concretely, you have three panels:

  • The Assistant setup panel helps you set up the system message, which defines the desired behavior of your bot. Additionally, you can customize how your chatbot responds by adding some "few-shot" examples.

  • The Chat session panel lets you enter new chat messages, interact with your bot, and observe its current behavior.

  • The Parameters section allows you to adjust the model with settings such as the maximum number of response tokens, the model's randomness (temperature), and the maximum number of past messages to consider for the conversation.
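These three panels map fairly directly onto the request payload the playground assembles behind the scenes. Here's a minimal sketch of that mapping - the bakery bot, its messages, and the default parameter values are all made up for illustration:

```python
# A system message defines the bot's behavior (the "Assistant setup" panel).
system_message = "You are a friendly assistant for a bakery. Answer briefly."

# Optional few-shot examples steer the response style.
few_shot = [
    {"role": "user", "content": "Do you sell rye bread?"},
    {"role": "assistant", "content": "Yes, we bake rye bread fresh every morning."},
]


def build_chat_payload(user_message, history=None, max_tokens=200, temperature=0.7):
    """Combine system message, few-shot examples, past messages, and parameters."""
    messages = [{"role": "system", "content": system_message}]
    messages += few_shot
    messages += history or []  # "past messages to consider" (the Parameters panel)
    messages.append({"role": "user", "content": user_message})
    return {
        "messages": messages,
        "max_tokens": max_tokens,    # maximum number of response tokens
        "temperature": temperature,  # the model's randomness
    }


payload = build_chat_payload("When do you open?")
```

Trimming the `history` list is how the "maximum number of past messages" setting keeps the payload within the model's context window.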

Unlike the GPT-3 playground, the Chat playground does not have a "View code" option. You can switch to show the "raw JSON," but that only shows the prompt messages you give to the chatbot, not the code.

The Chat playground is still in preview mode, so we can expect more refinements in the future.

Benefit 4: No-code fine-tuning and deployment

This feature is really cool.

Fine-tuning OpenAI models like Davinci used to be a pretty technical task.

But in Azure OpenAI Studio, it's abstracted to a simple, step-by-step, no-code interface.

Quick primer - What’s fine-tuning?

Fine-tuning is the process of adjusting a pre-trained model using your own data. This allows the model to become more specialized for specific tasks or industries. Fine-tuning should be considered when the base model does not quite meet your needs, or when you need it to pick up specific domain knowledge or perform specific tasks.

To fine-tune a model in Azure OpenAI Studio, simply follow these steps:

  1. Select a base model you want to fine-tune (again, check your region, as not all models are available in every region).

  2. Provide the training data for fine-tuning, ideally in JSONL format. Alternatively, you can upload a CSV file with two columns: "prompt" and "completion."

  3. Provide a validation set, which should contain new examples not part of the training data. The validation data is used to check the performance of the fine-tuning process.

  4. Once the fine-tuning is complete, you can deploy your fine-tuned model as a new endpoint.

  5. Access your fine-tuned model in the playground and via the API, just like any other deployed model.
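As a sketch of step 2, here's how a two-column CSV could be turned into the JSONL format - one JSON object per line, with "prompt" and "completion" keys. The example rows are made up:

```python
import csv
import io
import json

# A tiny made-up training set in the two-column CSV form the wizard accepts.
csv_text = """prompt,completion
Translate to French: Hello,Bonjour
Translate to French: Goodbye,Au revoir
"""


def csv_to_jsonl(text: str) -> str:
    """Convert prompt/completion CSV rows into JSONL (one JSON object per line)."""
    rows = csv.DictReader(io.StringIO(text))
    return "\n".join(
        json.dumps({"prompt": r["prompt"], "completion": r["completion"]})
        for r in rows
    )


jsonl = csv_to_jsonl(csv_text)
```

The same format works for the validation set from step 3 - just make sure those examples don't overlap with the training data.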

Next week, I'll share some concrete examples of how to use this fine-tuning process.

While the fine-tuning process in OpenAI Studio is simple and straightforward, it is still quite basic. For instance, unlike in Azure ML Studio, you cannot organize your fine-tuning experiments. There is a file management tab, but its usefulness is not immediately apparent, though it may become more relevant in the future.

You can sense that it is still early days, but it is already very promising.

Some notes about pricing

The pricing for OpenAI services on Azure generally aligns with OpenAI's direct pricing. For example, ChatGPT (gpt-3.5-turbo) costs $0.002 per 1K tokens on both platforms, and Text-Davinci costs $0.02 per 1K tokens, just as it does from OpenAI.

The only difference in pricing that I found is related to fine-tuning. OpenAI charges per request for both training and inference, while Azure bills hourly for both training and deploying fine-tuned models.

And that hosting doesn't come cheap.

For example, hosting your own fine-tuned Davinci model 24/7 would cost a whopping $2,160 per month.

To put that into context, for the same budget you could process around 13 million words per month when you use a fine-tuned model directly from OpenAI.

Which pricing model is the better deal really depends on your use case.

Just be aware - fine-tuning can be expensive.

Fine-tuning pricing on OpenAI

Fine-tuning pricing on Azure
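To sanity-check the numbers above: assuming a $3/hour hosting rate for a fine-tuned Davinci deployment on Azure and $0.12 per 1K tokens for fine-tuned Davinci usage from OpenAI (both based on published pricing at the time of writing - treat them as assumptions), the back-of-the-envelope math looks like this:

```python
# Assumed rates -- check current pricing pages before relying on these.
AZURE_HOSTING_PER_HOUR = 3.00            # fine-tuned Davinci deployment, USD
OPENAI_FT_DAVINCI_PER_1K_TOKENS = 0.12   # fine-tuned Davinci usage, USD
WORDS_PER_TOKEN = 0.75                   # rough rule of thumb for English text

# Hosting 24/7 on Azure for a 30-day month:
monthly_hosting = AZURE_HOSTING_PER_HOUR * 24 * 30  # = 2160.0

# How many tokens the same budget buys as pay-per-use from OpenAI:
tokens_for_same_budget = monthly_hosting / OPENAI_FT_DAVINCI_PER_1K_TOKENS * 1000
words_for_same_budget = tokens_for_same_budget * WORDS_PER_TOKEN
```

So the break-even point sits in the tens of millions of words per month - below that volume, pay-per-use from OpenAI is cheaper; above it, flat-rate hosting on Azure starts to pay off.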


There are a few current limitations to be aware of:

  • The Studio does not yet support embeddings*. If you want to use embeddings, you can still use the Python OpenAI package, but there's no no-code support.

  • GPT-4 is not yet publicly available in Azure OpenAI Studio (there's currently a preview waitlist for it).

  • The interface still has some quirks and bugs. For example, model deployments sometimes fail for no apparent reason, so you might need to try a few times. However, once the model is deployed, the performance is quite stable. The response time seems on par with OpenAI, although I haven't run any formal benchmarks.

*Embeddings are vector representations of text that encode the meaning of words, phrases, or sentences. They are important building blocks for various natural language tasks such as chat bots or document retrieval.
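To make that footnote concrete, here's a toy example of why embeddings matter for document retrieval: similar texts map to nearby vectors, which you can rank with cosine similarity. The three-dimensional vectors below are made up (real embeddings have hundreds or thousands of dimensions):

```python
import math


def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Made-up embeddings: two bakery-related texts and one unrelated one.
doc_croissant = [0.9, 0.1, 0.0]
doc_baguette = [0.8, 0.2, 0.1]
doc_invoice = [0.0, 0.1, 0.9]

# "croissant" should score closer to "baguette" than to "invoice".
sim_bakery = cosine(doc_croissant, doc_baguette)
sim_other = cosine(doc_croissant, doc_invoice)
```

A document-retrieval chatbot applies exactly this idea at scale: embed the query, embed the documents, and return the documents with the highest similarity.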


If you're curious about Azure OpenAI Studio, I suggest giving it a shot.

For big companies wanting to add ChatGPT & co. to their apps, Azure is likely the best option. Microsoft made a good move there!

Tip: pick just a few low-risk use cases, such as internal chatbots, to increase your chances of getting access quickly.

I'm sure we will see a lot of changes in this tool, and I'll come back to it for sure!

Stay tuned for more updates and use cases in the coming weeks!

See you again next Friday!


Want to learn more? Here are 3 ways I could help:

  1. Read my book: If you want to further improve your AI/ML skills and apply them to real-world use cases, check out my book AI-Powered Business Intelligence (O'Reilly).

  2. Book a meeting: If you want to pick my brain, book a coffee chat with me so we can discuss more details.

  3. Follow me: I'm regularly sharing free content on LinkedIn and Twitter.
