2 changes: 1 addition & 1 deletion .github/workflows/e2e-master.yaml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
python-version: ["3.8", "3.9", "3.10"]
steps:
- uses: actions/checkout@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/test.yaml
@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.10", "3.11"]
python-version: ["3.8", "3.10", "3.11", "3.12"]
include:
- python-version: "3.9"
use_coverage: 'coverage'
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,4 +1,4 @@
# v32.0.0+snapshot
# v32.0.0a1

Kubernetes API Version: v1.32.0

2 changes: 2 additions & 0 deletions README.md
@@ -101,6 +101,7 @@ supported versions of Kubernetes clusters.
- [client 29.y.z](https://pypi.org/project/kubernetes/29.0.0/): Kubernetes 1.28 or below (+-), Kubernetes 1.29 (✓), Kubernetes 1.30 or above (+-)
- [client 30.y.z](https://pypi.org/project/kubernetes/30.1.0/): Kubernetes 1.29 or below (+-), Kubernetes 1.30 (✓), Kubernetes 1.31 or above (+-)
- [client 31.y.z](https://pypi.org/project/kubernetes/31.0.0/): Kubernetes 1.30 or below (+-), Kubernetes 1.31 (✓), Kubernetes 1.32 or above (+-)
- [client 32.y.z](https://pypi.org/project/kubernetes/32.0.0a1/): Kubernetes 1.31 or below (+-), Kubernetes 1.32 (✓), Kubernetes 1.33 or above (+-)


> See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.
@@ -168,6 +169,7 @@ between client-python versions.
| 30.0 | Kubernetes main repo, 1.30 branch | ✓ |
| 31.0 Alpha/Beta | Kubernetes main repo, 1.31 branch | ✗ |
| 31.0 | Kubernetes main repo, 1.31 branch | ✓ |
| 32.0 Alpha/Beta | Kubernetes main repo, 1.32 branch | ✓ |

> See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.

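Illustration only, not part of this diff: the row added above follows the usual pairing rule (client 32.y.z fully targets Kubernetes 1.32 and is best effort one minor version either side). A minimal runtime check of that pairing, using the existing `kubernetes.__version__` attribute and `VersionApi.get_code()`, assuming a reachable cluster and a local kubeconfig:

```python
import kubernetes
from kubernetes import client, config

config.load_kube_config()                     # or config.load_incluster_config()
server = client.VersionApi().get_code()       # VersionInfo with .major / .minor

client_major = int(kubernetes.__version__.split(".")[0])  # 32 for 32.0.0a1
server_minor = int(server.minor.rstrip("+"))              # server.minor is e.g. "32" or "32+"

if client_major != server_minor:
    print(f"client {kubernetes.__version__} against server 1.{server.minor}: "
          "outside the fully-supported pairing, expect best-effort behaviour")
```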
2 changes: 1 addition & 1 deletion kubernetes/.openapi-generator/swagger.json.sha256
@@ -1 +1 @@
5f773c685cb5e7b97c1b3be4a7cff387a8077a4789c738dac715ba91b1c50eda
b20f45563c8a98d4f6f0ccc8fea807c5a646510c40aee9183d87b0be22104194
2 changes: 1 addition & 1 deletion kubernetes/README.md
@@ -4,7 +4,7 @@ No description provided (generated by Openapi Generator https://github.com/opena
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: release-1.32
- Package version: 32.0.0+snapshot
- Package version: 32.0.0a1
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

## Requirements.
2 changes: 1 addition & 1 deletion kubernetes/__init__.py
@@ -14,7 +14,7 @@

__project__ = 'kubernetes'
# The version is auto-updated. Please do not edit.
__version__ = "32.0.0+snapshot"
__version__ = "32.0.0a1"

from . import client
from . import config
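Because `__version__` is now a PEP 440 pre-release, pip will not pick it up by default; consumers have to opt in with `pip install kubernetes==32.0.0a1` (or `pip install --pre kubernetes`). A quick, hypothetical check of what is installed (`packaging` is a common third-party helper, not something this PR adds):

```python
import kubernetes
from packaging.version import Version  # third-party helper, assumed installed

v = Version(kubernetes.__version__)    # "32.0.0a1" parses as an alpha pre-release
print(v, v.is_prerelease)              # -> 32.0.0a1 True
```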
2 changes: 1 addition & 1 deletion kubernetes/client/__init__.py
@@ -14,7 +14,7 @@

from __future__ import absolute_import

__version__ = "32.0.0+snapshot"
__version__ = "32.0.0a1"

# import apis into sdk package
from kubernetes.client.api.well_known_api import WellKnownApi
2 changes: 1 addition & 1 deletion kubernetes/client/api_client.py
@@ -78,7 +78,7 @@ def __init__(self, configuration=None, header_name=None, header_value=None,
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/32.0.0+snapshot/python'
self.user_agent = 'OpenAPI-Generator/32.0.0a1/python'
self.client_side_validation = configuration.client_side_validation

def __enter__(self):
2 changes: 1 addition & 1 deletion kubernetes/client/configuration.py
@@ -354,7 +354,7 @@ def to_debug_report(self):
"OS: {env}\n"\
"Python Version: {pyversion}\n"\
"Version of the API: release-1.32\n"\
"SDK Package Version: 32.0.0+snapshot".\
"SDK Package Version: 32.0.0a1".\
format(env=sys.platform, pyversion=sys.version)

def get_host_settings(self):
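For reference, the string above surfaces through the existing `Configuration.to_debug_report()` helper, e.g.:

```python
from kubernetes import client

# After this change, the report's "SDK Package Version" line reads 32.0.0a1
# instead of 32.0.0+snapshot.
print(client.Configuration().to_debug_report())
```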
4 changes: 2 additions & 2 deletions kubernetes/client/models/v1alpha3_resource_claim_status.py
@@ -110,7 +110,7 @@ def devices(self, devices):
def reserved_for(self):
"""Gets the reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501

ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

:return: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
:rtype: list[V1alpha3ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ def reserved_for(self):
def reserved_for(self, reserved_for):
"""Sets the reserved_for of this V1alpha3ResourceClaimStatus.

ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

:param reserved_for: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
:type: list[V1alpha3ResourceClaimConsumerReference]
4 changes: 2 additions & 2 deletions kubernetes/client/models/v1beta1_resource_claim_status.py
@@ -110,7 +110,7 @@ def devices(self, devices):
def reserved_for(self):
"""Gets the reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501

ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

:return: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
:rtype: list[V1beta1ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ def reserved_for(self):
def reserved_for(self, reserved_for):
"""Sets the reserved_for of this V1beta1ResourceClaimStatus.

ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

:param reserved_for: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
:type: list[V1beta1ResourceClaimConsumerReference]
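The only change in these two generated models is the documented ceiling on `status.reservedFor`, raised from 32 to 256 consumers. A hypothetical helper (not part of this PR) that reads the field on an already-fetched claim object, using the attribute names from the models above:

```python
RESERVED_FOR_LIMIT = 256  # documented maximum as of this release (was 32)

def reservation_headroom(claim):
    """Return how many more consumers claim.status.reservedFor could still record."""
    reserved = (claim.status.reserved_for or []) if claim.status else []
    return RESERVED_FOR_LIMIT - len(reserved)
```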
2 changes: 1 addition & 1 deletion kubernetes/docs/V1alpha3ResourceClaimStatus.md
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**allocation** | [**V1alpha3AllocationResult**](V1alpha3AllocationResult.md) | | [optional]
**devices** | [**list[V1alpha3AllocatedDeviceStatus]**](V1alpha3AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

2 changes: 1 addition & 1 deletion kubernetes/docs/V1beta1ResourceClaimStatus.md
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**allocation** | [**V1beta1AllocationResult**](V1beta1AllocationResult.md) | | [optional]
**devices** | [**list[V1beta1AllocatedDeviceStatus]**](V1beta1AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
