fix: remove hardcoded temperature=0.0 from eval LLM calls #1071
Merged
ishanjainn merged 1 commit on Mar 24, 2026
Conversation
gpt-5 family models reject temperature=0.0 with HTTP 400. The structured output / JSON mode already constrains responses, so the explicit temperature isn't needed. Fixes openlit#1068
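For context, a minimal sketch of the failing call shape, assuming the OpenAI Python SDK; the model name and prompt are illustrative and not taken from the repo:

```python
from openai import OpenAI, BadRequestError

client = OpenAI()

try:
    # A hardcoded temperature=0.0 is what gpt-5 family models reject with HTTP 400.
    client.chat.completions.create(
        model="gpt-5-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Return your verdict as JSON."}],
        response_format={"type": "json_object"},
        temperature=0.0,
    )
except BadRequestError as err:
    # The SDK surfaces the HTTP 400 described above as a BadRequestError.
    print(err)
```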
Contributor
Reviewer's Guide
This PR updates the OpenAI and Anthropic evaluation helper functions to stop forcing temperature=0.0 in their LLM calls.
Sequence diagram for OpenAI eval LLM call without hardcoded temperature
sequenceDiagram
actor Evaluator
participant Utils as evals_utils
participant OpenAIClient as OpenAI_client
Evaluator->>Utils: llm_response_openai(prompt, model, base_url)
Utils->>OpenAIClient: client.chat.completions.create(model=model, messages=[{role: user, content: prompt}], response_format={type: json_object})
OpenAIClient-->>Utils: ChatCompletion(choices[0].message.content)
Utils-->>Evaluator: JSON_string_content
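A minimal sketch of the OpenAI helper's call shape after this change, assuming the OpenAI Python SDK; the signature follows the diagram above, but the defaults and return handling are illustrative rather than copied from evals/utils.py:

```python
from typing import Optional

from openai import OpenAI

def llm_response_openai(prompt: str, model: str, base_url: Optional[str] = None) -> str:
    """Eval helper sketch: JSON mode constrains the output, no explicit temperature."""
    client = OpenAI(base_url=base_url)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        # temperature is intentionally omitted; gpt-5 family models reject temperature=0.0.
        response_format={"type": "json_object"},
    )
    return response.choices[0].message.content
```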
Sequence diagram for Anthropic eval LLM call without hardcoded temperature
sequenceDiagram
actor Evaluator
participant Utils as evals_utils
participant AnthropicClient as Anthropic_client
Evaluator->>Utils: llm_response_anthropic(prompt, model)
Utils->>AnthropicClient: client.messages.create(model=model, messages=[{role: user, content: prompt}], max_tokens=2000, tools=tools, stream=False)
AnthropicClient-->>Utils: Message(content)
Utils-->>Evaluator: JSON_string_content
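Likewise, a sketch of the Anthropic helper's call shape per the diagram above, assuming the anthropic Python SDK; the tool definition and verdict extraction are placeholders, not the actual schema in evals/utils.py:

```python
import json

import anthropic

def llm_response_anthropic(prompt: str, model: str) -> str:
    """Eval helper sketch: a tool schema constrains the output, no explicit temperature."""
    client = anthropic.Anthropic()
    tools = [
        {
            "name": "record_verdict",  # placeholder tool name
            "description": "Return the evaluation verdict as structured JSON.",
            "input_schema": {"type": "object", "properties": {"verdict": {"type": "string"}}},
        }
    ]
    response = client.messages.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=2000,
        # temperature is intentionally omitted, matching the OpenAI helper.
        tools=tools,
        stream=False,
    )
    # Extract either the tool call's JSON input or plain text from the first content block.
    block = response.content[0]
    return json.dumps(block.input) if block.type == "tool_use" else block.text
```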
Contributor
Hey - I've found 1 issue, and left some high-level feedback:
- Since evals often rely on determinism, consider whether you want to explicitly pass a supported temperature (e.g., default or model-specific) or make `temperature` a function parameter so callers can control it instead of relying on the client library default (see the sketch below).
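One way to do that, sketched as a suggestion rather than the merged change and assuming the OpenAI Python SDK: accept an optional temperature and only forward it when the caller sets one, so provider defaults apply otherwise.

```python
from typing import Optional

from openai import OpenAI

def llm_response_openai(prompt: str, model: str, base_url: Optional[str] = None,
                        temperature: Optional[float] = None) -> str:
    """Sketch: forward temperature only when explicitly requested by the caller."""
    client = OpenAI(base_url=base_url)
    kwargs = {}
    if temperature is not None:
        kwargs["temperature"] = temperature  # callers that need determinism can pass 0.0
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        **kwargs,
    )
    return response.choices[0].message.content
```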
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Since evals often rely on determinism, consider whether you want to explicitly pass a supported temperature (e.g., default or model-specific) or make `temperature` a function parameter so callers can control it instead of relying on the client library default.
## Individual Comments
### Comment 1
<location path="sdk/python/src/openlit/evals/utils.py" line_range="154" />
<code_context>
messages=[
{"role": "user", "content": prompt},
],
- temperature=0.0,
response_format={"type": "json_object"},
)
</code_context>
<issue_to_address>
**question (testing):** Removing explicit temperature may reduce determinism and affect eval stability.
Since this helper is used in evaluations, it should likely remain deterministic. Removing `temperature=0.0` makes behavior depend on the SDK default, which may be nonzero and can change over time, causing flaky eval results. If you want to support nonzero temperature, consider adding a `temperature` parameter (with a default of 0.0) so callers can explicitly control this instead of relying on the client default.
</issue_to_address>
ishanjainn
approved these changes
Mar 24, 2026
Issue number: #1068
Change description:
Removed the hardcoded temperature=0.0 from both llm_response_openai() and llm_response_anthropic() in evals/utils.py. The gpt-5 family rejects temperature=0.0 with a 400 error, and the structured output / JSON mode already constrains the response format without needing explicit temperature control.
Checklist
- fix: ....
Acknowledgment
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of the project license.
Summary by Sourcery
Remove hardcoded temperature settings from evaluation LLM helper functions to rely on provider defaults and avoid incompatibilities with newer models.
Bug Fixes:
- Remove the hardcoded temperature=0.0 from the OpenAI and Anthropic eval LLM helper calls so gpt-5 family models no longer fail with HTTP 400.