Model selection and temperature settings

When you create a custom prompt in prompt builder, the panel on the right includes a Settings section. This section allows you to set these parameters:

  • Version of the generative AI model
  • Temperature

This article explores the impact of these parameters.

Generative AI model versions

The dropdown menu lets you select the generative AI model that generates the answer to your custom prompt.

The default model is a GA (generally available) GPT model. As of May 2024, it's GPT 3.5, and the exact version is gpt-3.5-turbo-0125. Prompts previously created in prompt builder rely on this default model.

The GPT 4 model is also generally available. The specific version in use is GPT 4o ("o" for "omni").

These exact versions are subject to change.

When applied in Power Apps or Power Automate, the GPT 3.5 and GPT 4o models consume AI Builder credits.

Choose a model

Choose between the models based on status, licensing rules, and functionalities.

GPT 3.5
  • Status: GA (default model)
  • Version: gpt-3.5-turbo-0125
  • Licensing rules: Consumes credits in Power Apps and Power Automate. More information: Power Platform Licensing Guide
  • Functionalities: Trained on data up to September 2021. Context of up to 16k tokens.
  • Region availability: Feature availability by regions for prompts

GPT 4
  • Status: GA
  • Version: GPT 4o ("o" for "omni")
  • Licensing rules: Consumes credits in Power Apps and Power Automate. More information: Power Platform Licensing Guide
  • Functionalities: Knowledge up to October 2023. Context of up to 128k tokens. Enhanced multilingual proficiency. Better than GPT 3.5 at technical writing and creativity.
  • Region availability: Feature availability by regions for prompts
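
Although prompt builder handles this choice entirely through the Settings panel, the trade-off can be sketched in code. The following Python snippet is purely illustrative: the model identifiers and token limits are the ones cited above and are subject to change.

```python
# Illustration only: in prompt builder you pick the model from the Settings
# dropdown, not in code. This sketch encodes the comparison above to show
# how the context-length difference can drive the choice. The identifiers
# and limits reflect the versions cited in this article and may change.
MODELS = {
    "GPT 3.5": {"version": "gpt-3.5-turbo-0125", "context_tokens": 16_000},
    "GPT 4": {"version": "gpt-4o", "context_tokens": 128_000},
}

def pick_model(estimated_prompt_tokens: int) -> str:
    """Prefer the default GPT 3.5 model unless the prompt needs a larger context."""
    if estimated_prompt_tokens <= MODELS["GPT 3.5"]["context_tokens"]:
        return MODELS["GPT 3.5"]["version"]
    return MODELS["GPT 4"]["version"]

print(pick_model(4_000))   # -> gpt-3.5-turbo-0125
print(pick_model(60_000))  # -> gpt-4o
```

In practice, the default GPT 3.5 model is often sufficient; GPT 4o is the better fit when the prompt and its context exceed 16k tokens, or when stronger multilingual, technical writing, or creative output is needed.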

Use of AI prompts in context of Microsoft Copilot Studio

AI prompts don't consume AI Builder credits when they're used in the context of Copilot Studio; instead, they consume messages when they're built on a GA GPT model.

Learn more about message consumption in the Power Platform Licensing Guide.

Temperature

The slider lets you select the temperature of the generative AI model. The value ranges from 0 to 1 and guides the model toward more deterministic answers (0) or more creative ones (1).

Temperature is a parameter that controls the randomness of the output generated by the AI model. A lower temperature results in more predictable and conservative outputs, while a higher temperature allows for more creativity and diversity in the responses. It's a way to fine-tune the balance between randomness and determinism in the model's output.

By default, the temperature is 0, which is also the value used by previously created prompts.

Temperature 0
  • Functionality: More predictable and conservative outputs. Responses are more consistent.
  • Use for: Prompts that require high accuracy and less variability.

Temperature 1
  • Functionality: More creativity and diversity in the responses. Responses are more varied and sometimes more innovative.
  • Use for: Prompts that create new, out-of-the-box content.
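
Prompt builder exposes temperature as a slider, but conceptually it's the same temperature parameter that generative AI model APIs accept directly. The following sketch is illustrative only and isn't part of prompt builder itself; it assumes the OpenAI Python SDK with an OPENAI_API_KEY set in the environment.

```python
# Illustration only: prompt builder sets temperature with a slider, not code.
# This sketch shows the equivalent parameter on a direct API call, assuming
# the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def draft_reply(prompt: str, temperature: float = 0.0) -> str:
    """Generate a reply; 0 favors consistent, predictable wording,
    while values closer to 1 allow more varied, creative output."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

# Low temperature for extraction-style prompts, higher for creative drafting.
consistent = draft_reply("Summarize this customer email: ...", temperature=0.0)
creative = draft_reply("Write a catchy product tagline.", temperature=1.0)
```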

While adjusting the temperature can influence the model’s output, it doesn’t guarantee a specific result. The AI’s responses are inherently probabilistic and can vary even with the same temperature setting.