Here is the guide for using Azure OpenAI models in 16x Prompt:
- Login to Azure portal
- Click on "Create a resource"
- In the create resource page, search for "OpenAI" and click on it
- Follow the instructions to create a new Azure OpenAI resource
- Go to Azure home page and click on the "OpenAI" resource you just created, or click on "Azure OpenAI" in Azure services
- Inside the OpenAI resource, click on "Go to Azure OpenAI Foundry Portal", or use the large button at the bottom of the page
- Inside the Azure AI Foundry Portal, you can find your API keys and Azure OpenAI Service endpoint on the home page. Note them down for later use.
- Scroll down on the left nav bar and click on "Deployments"
- You can create a new deployment by clicking on the "Deploy model" button and following the instructions to deploy a new model
- Keep the deployment name as the default (same as the model name)
- Open 16x Prompt and click on the API model settings button in the bottom right corner
- Go to the "Azure OpenAI API" tab
- Enter your Azure OpenAI endpoint, for example `https://xyz.openai.azure.com/`
- Paste the API key into the "API Key" field
- Enter the deployment name, for example `gpt-4o`, into the "Deployment Name" field
- Optionally, you can set a second deployment name for comparison, for example `gpt-35-turbo`
- Keep "API Version" as `2024-06-01` unless you have a specific reason to change it
- Click on "Close"
Here is an example of using 16x Prompt to compare the results between two deployments (`gpt-4o` vs `gpt-35-turbo`):