Bug 1769717: Add support to EnsureServicePorts #272
Conversation
@enxebre: This pull request references Bugzilla bug 1769717, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.
Force-pushed from 0c20f1d to a05dd54.
The services have a lot of defaulting from the API server; can we make sure we have tests that show this works in the presence of defaults, like TCP/UDP, http/https, etc.?
/cherrypick release-4.3
@wking: once the present PR merges, I will cherry-pick it on top of release-4.3 in a new PR and assign it to you.
for i, existingServicePort := range *existing {
	var existingCurr *corev1.ServicePort
	for _, requiredServicePort := range required {
		if existingServicePort.Name == requiredServicePort.Name {
Matching by name and then taking the union of in-cluster and in-manifest ports means there's no way to remove an entry. Also, name is optional, although I'm not clear on how that squares with:
All ports within a ServiceSpec must have unique names.
Maybe it's "If you set a name, it must not duplicate an existing ports entry in the same Service"?
Do we even need to support folks adding additional ports values to Service manifests managed by the CVO? Can we just clobber the existing value with the required value and set modified if that changed anything?
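A minimal sketch of the clobber approach being floated here, following the library's modified/existing/required convention; ensureServicePortsClobber is a hypothetical helper, not code from this PR:

package resourcemerge

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// ensureServicePortsClobber replaces the in-cluster ports wholesale with the
// manifest's ports and records whether anything changed, instead of merging
// entry by entry.
func ensureServicePortsClobber(modified *bool, existing *[]corev1.ServicePort, required []corev1.ServicePort) {
	if !equality.Semantic.DeepEqual(*existing, required) {
		*existing = required
		*modified = true
	}
}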
Notice that:
// Optional if only one ServicePort is defined on this service.
Matching by name and then taking the union of in-cluster and in-manifest ports means there's no way to remove an entry
There's no union; required overwrites existing via ensureServicePort. Entries are removed via https://github.com/openshift/cluster-version-operator/pull/272/files#diff-f7675c7b39553e431cdc6c3ba8f5c482R215-R218.
See test cases for each scenario (with/without name): https://github.com/openshift/cluster-version-operator/pull/272/files#diff-415c13f11ffc32696c5d69b900b3fe58R283.
Instead of a nested loop, this could get better performance by hashing the port name, but I just wanted to replicate the pattern already present for similar funcs, e.g. ensureContainers:
func ensureContainers(modified *bool, existing *[]corev1.Container, required []corev1.Container) {
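For readers without the diff open, the shared pattern looks roughly like this; a simplified sketch of the name-matching loop, not the exact code in the PR:

package resourcemerge

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// ensureServicePortsSketch: match in-cluster ports to manifest ports by name,
// overwrite matches with the required values, drop in-cluster ports the
// manifest no longer lists, and append manifest ports missing from the cluster.
func ensureServicePortsSketch(modified *bool, existing *[]corev1.ServicePort, required []corev1.ServicePort) {
	// Walk backwards so removals don't disturb indexes we still have to visit.
	for i := len(*existing) - 1; i >= 0; i-- {
		found := false
		for _, req := range required {
			if (*existing)[i].Name == req.Name {
				found = true
				if !equality.Semantic.DeepEqual((*existing)[i], req) {
					(*existing)[i] = req
					*modified = true
				}
			}
		}
		if !found {
			// The manifest no longer lists this port; remove it.
			*existing = append((*existing)[:i], (*existing)[i+1:]...)
			*modified = true
		}
	}
	// Append required ports not present in the cluster yet.
	for _, req := range required {
		found := false
		for _, cur := range *existing {
			if cur.Name == req.Name {
				found = true
				break
			}
		}
		if !found {
			*existing = append(*existing, req)
			*modified = true
		}
	}
}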
Also, I'm wondering why we need the merge library rather than just unconditionally overwriting/applying whatever is required, but that's another discussion.
@abhinavdahiya not sure how you mean to test-cover that? the cvo does a regular
/approve
},
},
},
},
can you add a unit test wrt:
- input service has Protocol set due to defaulting and required doesn't (https://github.com/kubernetes/api/blob/11707872ac1cab52aefea0325f4698eaffeda3c8/core/v1/types.go#L3988-L3991); the API defaults to Port if unset, also it might be a string defaulted if the port is a known IANA one, etc.
- also NodePort (https://github.com/kubernetes/api/blob/11707872ac1cab52aefea0325f4698eaffeda3c8/core/v1/types.go#L4013), as this value is assigned some value if none is already provided.
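As a rough Go illustration of the defaulting being described (the values are illustrative, not the PR's test fixtures):

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// What a manifest might declare: only Port is set.
var requiredPort = corev1.ServicePort{
	Port: 9099,
}

// Roughly what comes back from the API server after defaulting: Protocol
// defaults to TCP, TargetPort defaults to Port, and for NodePort/LoadBalancer
// services a NodePort is allocated.
var existingPort = corev1.ServicePort{
	Protocol:   corev1.ProtocolTCP,
	Port:       9099,
	TargetPort: intstr.FromInt(9099),
	NodePort:   31999, // allocated by the API server; this value is made up
}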
@abhinavdahiya sorry, still not clear what we want to cover. EnsureServicePorts, like any other ensuring function in this library, does not know about defaults. EnsureServicePorts just mutates a struct when !equality.Semantic.DeepEqual, and it will clobber whatever existing has, regardless of whether that was applied explicitly by a client or by API machinery defaulting. The clobbered struct will get defaults back once applied, as it goes through API machinery.
The only thing I can think of to increase coverage would be to apply the manifest after running EnsureServicePorts, then client.Get the object back and validate that defaults are applied back properly. But the fake client does not apply any defaults unless we build a custom Reactor, so we'd really just be covering our custom Reactor logic, which is meaningless and seems out of scope for unit-testing EnsureServicePorts.
Let me know if this does not make sense to you and you still want me to add anything.
EnsureServicePorts, like any other ensuring function in this library, does not know about defaults. EnsureServicePorts just mutates a struct when !equality.Semantic.DeepEqual, and it will clobber whatever existing has, regardless of whether that was applied explicitly by a client or by API machinery defaulting.
I think we should prevent unnecessary updates for the API machinery defaulting, because that is known, right?
Client changes are more than welcome to be overwritten.
The only thing I can think of to increase coverage would be to apply the manifest after running EnsureServicePorts, then client.Get the object back and validate that defaults are applied back properly,
The unit tests already have an option for what exists in the cluster currently and what we want.
For example, if Protocol is empty (in required), let's not run an update call, because we know that it was defaulted. Or when TargetPort is empty (in required), make sure we only run an update when TargetPort and Port differ, because that is a client change and not defaulting.
Etc.
Even if we don't want to change these, unit-test cases that show we run update in the presence of defaults are useful for future work when we can be smarter.
Hopefully that provides more context.
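A minimal sketch of the kind of guard being asked for, assuming a per-port helper (hypothetical name, not the exact change that later landed):

package resourcemerge

import corev1 "k8s.io/api/core/v1"

// defaultServicePort fills in the API-server default for Protocol so that a
// defaulted in-cluster value alone does not register as drift and trigger an
// update.
func defaultServicePort(required *corev1.ServicePort) {
	if required.Protocol == "" {
		// The API server defaults Protocol to TCP.
		required.Protocol = corev1.ProtocolTCP
	}
}

Running this over required before the DeepEqual comparison means a cluster-defaulted Protocol: TCP no longer counts as a difference, while genuine client edits are still overwritten.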
@abhinavdahiya thanks! OK, I see. I added defaulting for Protocol (https://github.com/openshift/cluster-version-operator/pull/272/files#diff-f7675c7b39553e431cdc6c3ba8f5c482R244) and test coverage (https://github.com/openshift/cluster-version-operator/pull/272/files#diff-415c13f11ffc32696c5d69b900b3fe58R551-R638).
Other values might depend on the serviceType; I don't think we should redo API machinery defaulting logic here any further.
Please let me know if it looks good now or if you still want any changes.
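A sketch of the kind of unit test this refers to, focused only on Protocol (hypothetical test name and simplified comparison, not the PR's table-driven cases):

package resourcemerge

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// TestDefaultedProtocolIsNotDrift: the in-cluster port carries Protocol: TCP
// from API-server defaulting, the manifest omits Protocol, and no update
// should be triggered.
func TestDefaultedProtocolIsNotDrift(t *testing.T) {
	existing := corev1.ServicePort{Name: "metrics", Port: 9099, Protocol: corev1.ProtocolTCP}
	required := corev1.ServicePort{Name: "metrics", Port: 9099} // Protocol left empty, as in a manifest

	// Apply the same defaulting the API server would before comparing.
	if required.Protocol == "" {
		required.Protocol = corev1.ProtocolTCP
	}
	if !equality.Semantic.DeepEqual(existing, required) {
		t.Errorf("a defaulted Protocol alone should not count as drift: %+v vs %+v", existing, required)
	}
}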
@enxebre approved the PR as it seems okay in terms of direction, but I would make sure we have unit tests for #272 (comment) before we merge.
/bugzilla refresh
@sdodson: This pull request references Bugzilla bug 1769717, which is valid.
/test e2e-aws
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: abhinavdahiya, enxebre.
@enxebre: All pull requests linked via external trackers have merged. Bugzilla bug 1769717 has been moved to the MODIFIED state.
@wking: new pull request created: #275
Avoid:
$ go test ./lib/resourcemerge/
panic: runtime error: index out of range [recovered]
panic: runtime error: index out of range
goroutine 38 [running]:
testing.tRunner.func1(0xc0001ab000)
.../sdk/go1.12.9/src/testing/testing.go:830 +0x392
panic(0xccb520, 0x163f880)
.../sdk/go1.12.9/src/runtime/panic.go:522 +0x1b5
github.com/openshift/cluster-version-operator/lib/resourcemerge.ensureContainers(0xc0000bbd57, 0xc0001d4040, 0xc0001cd760, 0x1, 0x1)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core.go:69 +0x840
github.com/openshift/cluster-version-operator/lib/resourcemerge.ensurePodSpec(0xc0001c5d57, 0xc0001d4010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0001cd760, 0x1, ...)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core.go:28 +0xc6
github.com/openshift/cluster-version-operator/lib/resourcemerge.TestEnsurePodSpec.func1(0xc0001ab000)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core_test.go:276 +0xc7
testing.tRunner(0xc0001ab000, 0xc0001d8770)
.../sdk/go1.12.9/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
.../sdk/go1.12.9/src/testing/testing.go:916 +0x35a
FAIL github.com/openshift/cluster-version-operator/lib/resourcemerge 0.010s
(with the core_test.go but the old core.go) when removing an entry
mutated the existing slice without re-entering the:
for i, whatever := range *existing
With this commit, we iterate from the back of the existing slice, so
any removals affect indexes that we've already covered. For both
containers and service ports, any appends happen later in the
function, so we don't need to worry about slice expansion at this
point.
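A minimal, self-contained illustration of the back-to-front removal described here (generic strings rather than corev1 types):

package main

import "fmt"

// removeMissing drops elements not in keep. It iterates from the back so the
// indexes still to be visited are unaffected by removals; indexing the slice
// with a stale forward index after a removal is what produced the panic above.
func removeMissing(existing *[]string, keep map[string]bool) {
	for i := len(*existing) - 1; i >= 0; i-- {
		if !keep[(*existing)[i]] {
			*existing = append((*existing)[:i], (*existing)[i+1:]...)
		}
	}
}

func main() {
	ports := []string{"https", "metrics", "debug"}
	removeMissing(&ports, map[string]bool{"https": true})
	fmt.Println(ports) // [https]
}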
The buggy logic was originally from 3d1ad76 (Remove containers if
requested in Update, 2019-04-26, openshift#178).
This commit cherry-picks e8e80c2 (lib/resourcemerge/core: Fix panic
on container/port removal, 2019-12-16, openshift#282) back to the 4.2 branch.
But since f268b6d (Add support to EnsureServicePorts, 2019-11-25, openshift#272)
and its EnsureServicePorts was never backported to the 4.2 branch,
I've only kept the container portion of e8e80c2 in this commit.