fix: remove hardcoded temperature=0.0 from eval LLM calls#1071

Merged
ishanjainn merged 1 commit into openlit:main from saivedant169:fix/eval-hardcoded-temperature on Mar 24, 2026

Conversation

@saivedant169
Contributor

@saivedant169 saivedant169 commented Mar 23, 2026

Issue number: #1068

Change description:

Removed the hardcoded temperature=0.0 from both llm_response_openai() and llm_response_anthropic() in evals/utils.py. The gpt-5 family rejects temperature=0.0 with a 400 error, and the structured output / JSON mode already constrains the response format without needing explicit temperature control.
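A minimal sketch of the resulting request shape (the helper names come from the PR, but the kwargs-building code below is illustrative, not the actual body of evals/utils.py):

```python
# Illustrative sketch only: builds request kwargs the way the fixed
# OpenAI eval helper does, with the temperature key intentionally
# omitted. The real helper lives in sdk/python/src/openlit/evals/utils.py.
def openai_eval_request_kwargs(prompt: str, model: str) -> dict:
    # No "temperature" key: gpt-5 family models reject temperature=0.0
    # with HTTP 400, and JSON mode already constrains the output shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }
```

Omitting the key entirely (rather than passing `temperature=None`) lets each provider apply its own default sampling behavior.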

Checklist

  • PR name follows conventional commit format: fix: ....
  • I have reviewed the contributing guidelines
  • I have checked that there are no other open Pull Requests for the same update/change
  • I have performed a self-review of this change
  • Changes have been tested
  • Changes are documented

Acknowledgment

By submitting this pull request, I confirm that this contribution may be used, modified, copied, and redistributed under the terms of the project license.

Summary by Sourcery

Remove hardcoded temperature settings from evaluation LLM helper functions to rely on provider defaults and avoid incompatibilities with newer models.

Bug Fixes:

  • Prevent 400 errors from gpt-5 family models by no longer forcing temperature=0.0 in OpenAI evaluation calls.

Enhancements:

  • Align Anthropic evaluation helper with provider defaults by omitting an explicit temperature parameter in LLM requests.

gpt-5 family models reject temperature=0.0 with HTTP 400.
The structured output / JSON mode already constrains responses,
so the explicit temperature isn't needed.

Fixes openlit#1068
@saivedant169 saivedant169 requested review from a team and ishanjainn as code owners March 23, 2026 22:12
@sourcery-ai
Contributor

sourcery-ai Bot commented Mar 23, 2026

Reviewer's Guide

This PR updates the OpenAI and Anthropic evaluation helper functions to stop forcing temperature=0.0, allowing each model (including gpt-5 family models that reject zero temperature) to use its default or caller-configured sampling behavior while still relying on structured JSON output constraints.

Sequence diagram for OpenAI eval LLM call without hardcoded temperature

sequenceDiagram
    actor Evaluator
    participant Utils as evals_utils
    participant OpenAIClient as OpenAI_client

    Evaluator->>Utils: llm_response_openai(prompt, model, base_url)
    Utils->>OpenAIClient: client.chat.completions.create(model=model, messages=[{role: user, content: prompt}], response_format={type: json_object})
    OpenAIClient-->>Utils: ChatCompletion(choices[0].message.content)
    Utils-->>Evaluator: JSON_string_content

Sequence diagram for Anthropic eval LLM call without hardcoded temperature

sequenceDiagram
    actor Evaluator
    participant Utils as evals_utils
    participant AnthropicClient as Anthropic_client

    Evaluator->>Utils: llm_response_anthropic(prompt, model)
    Utils->>AnthropicClient: client.messages.create(model=model, messages=[{role: user, content: prompt}], max_tokens=2000, tools=tools, stream=False)
    AnthropicClient-->>Utils: Message(content)
    Utils-->>Evaluator: JSON_string_content

File-Level Changes

Change: Stop forcing temperature=0.0 in OpenAI eval helper to avoid 400 errors and rely on model defaults.
  • Remove explicit temperature=0.0 argument from the OpenAI chat completions call in the eval utility helper.
  • Keep JSON response formatting via response_format={"type": "json_object"} unchanged to preserve structured output behavior.
Files: sdk/python/src/openlit/evals/utils.py

Change: Stop forcing temperature=0.0 in Anthropic eval helper while preserving tool invocation behavior.
  • Remove explicit temperature=0.0 argument from the Anthropic client call in the eval utility helper.
  • Retain existing max_tokens, tools, and stream parameters so runtime behavior is otherwise unchanged.
Files: sdk/python/src/openlit/evals/utils.py
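The Anthropic side of the change can be sketched the same way (parameter names mirror the reviewer's guide above — max_tokens, tools, stream — but the actual helper body in evals/utils.py may differ):

```python
# Illustrative sketch of the Anthropic eval request after the fix:
# temperature is omitted so the provider default applies, while the
# existing max_tokens, tools, and stream parameters are retained.
def anthropic_eval_request_kwargs(prompt: str, model: str, tools: list) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2000,
        "tools": tools,
        "stream": False,
    }
```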

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • Since evals often rely on determinism, consider whether you want to explicitly pass a supported temperature (e.g., default or model-specific) or make temperature a function parameter so callers can control it instead of relying on the client library default.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Since evals often rely on determinism, consider whether you want to explicitly pass a supported temperature (e.g., default or model-specific) or make `temperature` a function parameter so callers can control it instead of relying on the client library default.

## Individual Comments

### Comment 1
<location path="sdk/python/src/openlit/evals/utils.py" line_range="154" />
<code_context>
         messages=[
             {"role": "user", "content": prompt},
         ],
-        temperature=0.0,
         response_format={"type": "json_object"},
     )
</code_context>
<issue_to_address>
**question (testing):** Removing explicit temperature may reduce determinism and affect eval stability.

Since this helper is used in evaluations, it should likely remain deterministic. Removing `temperature=0.0` makes behavior depend on the SDK default, which may be nonzero and can change over time, causing flaky eval results. If you want to support nonzero temperature, consider adding a `temperature` parameter (with a default of 0.0) so callers can explicitly control this instead of relying on the client default.
</issue_to_address>
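The reviewer's suggestion could be implemented along these lines (a hypothetical signature, not part of this PR): make `temperature` an optional parameter that is only forwarded when the caller sets it, so determinism stays opt-in without reintroducing the gpt-5 400 error.

```python
from typing import Optional

# Hypothetical variant implementing the review suggestion: callers that
# need deterministic evals pass temperature explicitly; otherwise the
# key is omitted entirely and the provider default applies.
def eval_request_kwargs(
    prompt: str, model: str, temperature: Optional[float] = None
) -> dict:
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }
    if temperature is not None:
        kwargs["temperature"] = temperature
    return kwargs
```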


@ishanjainn ishanjainn merged commit 8ced8ea into openlit:main Mar 24, 2026
4 checks passed