Conversation

@Chesars Chesars commented Dec 11, 2025

Relevant issues

Fixes #17762

Pre-Submission checklist

  • I have Added testing in the tests/litellm/ directory
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

When using litellm.completion() with model="openai/responses/...", images in tool message content were not being transformed from Chat Completion format to Responses API format.

Example Request

import litellm

messages = [
    {"role": "user", "content": "Fetch the image"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",
                "type": "function",
                "function": {
                    "name": "fetch_image",
                    "arguments": '{"url": "https://example.com/image.png"}'
                }
            }
        ]
    },
    {
        "role": "tool",
        "tool_call_id": "call_abc123",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "data:image/png;base64,iVBORw0KGgo..."
                }
            }
        ]
    },
    {"role": "user", "content": "What color is the image?"}
]

response = litellm.completion(
    model="openai/responses/gpt-4.1",
    messages=messages,
    tools=[{
        "type": "function",
        "function": {
            "name": "fetch_image",
            "parameters": {"type": "object", "properties": {"url": {"type": "string"}}}
        }
    }]
)

The tool message content was being passed directly without transformation:

Chat Completion format (what user sends):

{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}

Responses API format (what OpenAI expects):

{"type": "input_image", "image_url": "data:image/png;base64,..."}

OpenAI rejects the request with a 400 error:

Invalid value: 'image_url'. Supported values are: 'input_text', 'input_image', 'output_text', ...

Fix

Transform tool message content using _convert_content_to_responses_format() when the content is a list (multimodal content with images).
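
For illustration, here is a minimal sketch of that transformation. The function name convert_tool_content_to_responses_format below is hypothetical; the actual fix reuses LiteLLM's existing _convert_content_to_responses_format() helper in the litellm_responses_transformation module.

# Illustrative sketch only: names here are hypothetical and shown to
# clarify the shape of the transformation, not LiteLLM's internal code.
from typing import Any, Dict, List, Union


def convert_tool_content_to_responses_format(
    content: Union[str, List[Dict[str, Any]]],
) -> Union[str, List[Dict[str, Any]]]:
    """Map Chat Completion tool-message content parts to Responses API parts."""
    if not isinstance(content, list):
        # Plain string content needs no conversion.
        return content

    converted: List[Dict[str, Any]] = []
    for part in content:
        if part.get("type") == "image_url":
            image_url = part.get("image_url", {})
            url = image_url.get("url") if isinstance(image_url, dict) else image_url
            # Responses API expects a flat string, not a nested object.
            new_part: Dict[str, Any] = {"type": "input_image", "image_url": url}
            if isinstance(image_url, dict) and "detail" in image_url:
                # Preserve the optional detail field if the caller set it.
                new_part["detail"] = image_url["detail"]
            converted.append(new_part)
        elif part.get("type") == "text":
            converted.append({"type": "input_text", "text": part.get("text", "")})
        else:
            converted.append(part)
    return converted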

Tests Added

File: tests/test_litellm/completion_extras/litellm_responses_transformation/test_completion_extras_litellm_responses_transformation_transformation.py

Added test_convert_chat_completion_messages_to_responses_api_tool_result_with_image() which verifies:

  • Tool messages with image_url content are correctly transformed
  • Output type is input_image (not image_url)
  • Output image_url is a flat string (not nested object)
  • The detail field is preserved
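
As a rough sketch of those assertions, written against the hypothetical helper from the snippet above rather than LiteLLM's internal handler:

def test_tool_result_with_image_is_converted():
    content = [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/png;base64,AAAA", "detail": "high"},
        }
    ]
    out = convert_tool_content_to_responses_format(content)

    assert out[0]["type"] == "input_image"                      # not "image_url"
    assert out[0]["image_url"] == "data:image/png;base64,AAAA"  # flat string
    assert out[0]["detail"] == "high"                           # detail preserved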

@vercel

vercel bot commented Dec 11, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project    Deployment    Preview    Comments    Updated (UTC)
litellm    Error         Error                  Dec 11, 2025 0:56am

@krrishdholakia krrishdholakia merged commit 6a3e646 into BerriAI:main Dec 11, 2025
4 of 7 checks passed
@Chesars Chesars deleted the fix/completion-tool-image-transformation branch December 11, 2025 16:22


Development

Successfully merging this pull request may close these issues.

[Bug]: LiteLLM completion() breaks image tool-output when calling the Responses API
