Conversation

@kczimm (Contributor) commented Aug 15, 2025

The new 3.1.0 spec added variants to the `ServiceTier` enum:

```yaml
ServiceTier:
    type: string
    description: |
        Specifies the processing type used for serving the request.
          - If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
          - If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
          - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or '[priority](https://openai.com/api-priority-processing/)', then the request will be processed with the corresponding service tier.
          - When not set, the default behavior is 'auto'.

          When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
    enum:
        - auto
        - default
        - flex
        - scale
        - priority
    nullable: true
    default: auto
```
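On the Rust side, the spec's lowercase wire values map naturally onto an enum. The sketch below is an illustration of that mapping, not the crate's actual definition (async-openai derives serde traits; here plain `Display`/`FromStr` impls are used to keep the example dependency-free):

```rust
use std::fmt;
use std::str::FromStr;

// Hypothetical sketch of the expanded enum with the new
// Scale and Priority variants from the 3.1.0 spec.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ServiceTier {
    Auto,
    Default,
    Flex,
    Scale,
    Priority,
}

impl fmt::Display for ServiceTier {
    // Render the lowercase wire value the API expects.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let s = match self {
            ServiceTier::Auto => "auto",
            ServiceTier::Default => "default",
            ServiceTier::Flex => "flex",
            ServiceTier::Scale => "scale",
            ServiceTier::Priority => "priority",
        };
        f.write_str(s)
    }
}

impl FromStr for ServiceTier {
    type Err = String;

    // Parse the wire value returned in the response body, which may
    // differ from the tier requested in the parameter.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "auto" => Ok(ServiceTier::Auto),
            "default" => Ok(ServiceTier::Default),
            "flex" => Ok(ServiceTier::Flex),
            "scale" => Ok(ServiceTier::Scale),
            "priority" => Ok(ServiceTier::Priority),
            other => Err(format!("unknown service tier: {other}")),
        }
    }
}

fn main() {
    assert_eq!(ServiceTier::Priority.to_string(), "priority");
    assert_eq!("scale".parse::<ServiceTier>(), Ok(ServiceTier::Scale));
    println!("round-trip ok");
}
```

In the real crate the same effect is achieved with `#[serde(rename_all = "lowercase")]` on the derive, so the wire format stays in one attribute rather than two hand-written impls.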

@64bit (Owner) left a comment


Thank you for the update.

@64bit merged commit 5a6b60e into 64bit:main Aug 16, 2025
harsh-98 pushed a commit to harsh-98/async-openai that referenced this pull request Sep 3, 2025
gilljon pushed a commit to gilljon/async-openai that referenced this pull request Sep 22, 2025
* fix(types)!: change AssistantStreamEvent field name (64bit#400)

BREAKING CHANGES: changed AssistantStreamEvent field name

* Fix typo in `ChatCompletionToolChoiceOption` docs (64bit#401)

* feat: update image generation API to match latest OpenAI specs (64bit#402)

* feat: update image generation API to match latest OpenAI specs

- Add ImageModeration enum with 'auto' (default) and 'low' values
- Add moderation parameter to CreateImageRequest for gpt-image-1
- Extend ImageQuality enum to support 'high', 'medium', 'low' for gpt-image-1

These changes align with the latest OpenAI API documentation for image generation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

* add Auto to ImageQuality

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>

* fix(deps): bump to [email protected] (64bit#409)

Signed-off-by: Nick Mitchell <[email protected]>

* feat: Add `minimal` reasoning effort for gpt-5 (64bit#411)

* add Scale and Priority to ServiceTier (64bit#416)

* Add streaming support for Responses API (64bit#405)

* Add streaming support for Responses API.

* Update examples/responses-stream/src/main.rs

* Update examples/responses-stream/src/main.rs

* Update examples/responses-stream/src/main.rs

* Delete async-openai/tests/responses.rs

---------

Co-authored-by: Himanshu Neema <[email protected]>

* chore: Release

* Add skip_serializing_if to more option types (64bit#412)

Signed-off-by: John Howard <[email protected]>

* Add Scale and Priority to the `ServiceTier` enum for the Responses API (64bit#419)

* Fix schema of code interpreter call output (64bit#420)

* chore: Release

* fix: CompoundFilter should use CompoundType instead (64bit#429)

* fix: Update `OutputItem` to align with OpenAI's Specification (64bit#426)

* Update `OutputItem` to align with OpenAI's Specification

* Update

* chore: Release

---------

Signed-off-by: Nick Mitchell <[email protected]>
Signed-off-by: John Howard <[email protected]>
Co-authored-by: posky <[email protected]>
Co-authored-by: lazymio <[email protected]>
Co-authored-by: Siyuan Yan <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Nick Mitchell <[email protected]>
Co-authored-by: Timon Vonk <[email protected]>
Co-authored-by: Kevin Zimmerman <[email protected]>
Co-authored-by: Kazzix <[email protected]>
Co-authored-by: John Howard <[email protected]>
Co-authored-by: Advayp <[email protected]>
Co-authored-by: the-spice-must-flow <[email protected]>
Co-authored-by: Ben Levin <[email protected]>
ifsheldon pushed a commit to ifsheldon/async-openai-wasm that referenced this pull request Sep 26, 2025