Understanding LLM Temperature Parameter Basics


In LLM APIs (such as those from OpenAI, GitHub Copilot, and Anthropic), temperature is a parameter that controls how random or deterministic the model’s output is.


What Temperature Does

Temperature rescales the model’s token probabilities before sampling. Low values (near 0) make the model almost always pick the most likely next token, so output is focused and repeatable. High values (above 1) flatten the distribution, so less likely tokens are chosen more often and the output becomes more varied and creative, but also less predictable.
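
The effect is easy to see numerically: the logits are divided by the temperature before the softmax, which sharpens (low temperature) or flattens (high temperature) the resulting distribution. The sketch below uses made-up logit values for three candidate tokens, purely for illustration.

import math

def softmax_with_temperature(logits, temperature):
    # Divide each logit by the temperature, then apply a numerically stable softmax.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for three candidate tokens (illustration only).
# Note: real APIs typically special-case temperature 0 as (near-)greedy decoding
# rather than dividing by zero as this sketch would.
logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # sharp: almost always the top token
print(softmax_with_temperature(logits, 0.7))  # balanced
print(softmax_with_temperature(logits, 1.5))  # flat: noticeably more randomness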


What “0.7” Means

A temperature of 0.7 is a common middle-ground default: noticeably more varied than 0, but still mostly coherent and on-topic. It suits general chat, writing help, and brainstorming, where some variety in the output is welcome.


How to Use It

In most APIs, you pass it as a parameter in the request. For example:

OpenAI API (chat completion):

{
  "model": "gpt-4.1",
  "messages": [
    {"role": "user", "content": "Write a short story about a dragon and a robot."}
  ],
  "temperature": 0.7
}
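
If you call the API from Python rather than sending raw JSON, the equivalent request with the official OpenAI Python SDK (v1+) looks like the minimal sketch below; it assumes OPENAI_API_KEY is set in your environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "user", "content": "Write a short story about a dragon and a robot."}
    ],
    temperature=0.7,  # same setting as the JSON body above
)
print(response.choices[0].message.content)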

GitHub Copilot API: The parameter works similarly. Copilot uses temperature internally to decide between more literal completions vs. more exploratory suggestions. Most IDE integrations set it around 0.7, but some contexts (like code completion) may use lower values for stability.


Rule of thumb:

- 0.0–0.3: factual answers, code generation, and extraction, where consistency matters most
- 0.5–0.8: general chat and writing; 0.7 is a common balanced default
- 0.9–1.5: brainstorming and creative writing, where variety matters more than precision
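
As a concrete sketch of these ranges, the snippet below maps task types to temperature values and builds a request body; the task names and the pick_temperature helper are hypothetical, chosen only to illustrate picking a value per task.

# Hypothetical mapping from task type to a reasonable temperature.
TEMPERATURE_BY_TASK = {
    "code_generation": 0.2,   # favor deterministic, repeatable output
    "factual_qa": 0.3,        # mostly stable answers
    "general_chat": 0.7,      # balanced default
    "creative_writing": 1.0,  # more varied, exploratory output
}

def pick_temperature(task: str) -> float:
    # Fall back to the balanced default when the task is unknown.
    return TEMPERATURE_BY_TASK.get(task, 0.7)

request_body = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Summarize the changelog."}],
    "temperature": pick_temperature("factual_qa"),
}
print(request_body)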


Do you want me to also show you how temperature interacts with top_p (nucleus sampling), since both are often available in Copilot/LLM APIs?

