Bug 1798049: lib/resourcemerge/core: Fix panic on container/port removal #313
Conversation
Avoid:
$ go test ./lib/resourcemerge/
panic: runtime error: index out of range [recovered]
panic: runtime error: index out of range
goroutine 38 [running]:
testing.tRunner.func1(0xc0001ab000)
.../sdk/go1.12.9/src/testing/testing.go:830 +0x392
panic(0xccb520, 0x163f880)
.../sdk/go1.12.9/src/runtime/panic.go:522 +0x1b5
github.com/openshift/cluster-version-operator/lib/resourcemerge.ensureContainers(0xc0000bbd57, 0xc0001d4040, 0xc0001cd760, 0x1, 0x1)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core.go:69 +0x840
github.com/openshift/cluster-version-operator/lib/resourcemerge.ensurePodSpec(0xc0001c5d57, 0xc0001d4010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0001cd760, 0x1, ...)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core.go:28 +0xc6
github.com/openshift/cluster-version-operator/lib/resourcemerge.TestEnsurePodSpec.func1(0xc0001ab000)
.../lib/go/src/github.com/openshift/cluster-version-operator/lib/resourcemerge/core_test.go:276 +0xc7
testing.tRunner(0xc0001ab000, 0xc0001d8770)
.../sdk/go1.12.9/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
.../sdk/go1.12.9/src/testing/testing.go:916 +0x35a
FAIL github.com/openshift/cluster-version-operator/lib/resourcemerge 0.010s
(with the new core_test.go but the old core.go). The panic happened because removing an entry shrank the existing slice while the:
for i, whatever := range *existing
loop kept iterating over the original length, eventually indexing past the end of the shortened slice.
With this commit, we iterate from the back of the existing slice, so
any removals affect indexes that we've already covered. For both
containers and service ports, any appends happen later in the
function, so we don't need to worry about slice expansion at this
point.
The buggy logic was originally from 3d1ad76 (Remove containers if
requested in Update, 2019-04-26, openshift#178).
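The fix described above can be sketched as a standalone Go snippet. This is an illustration, not the actual code: Container is a minimal stand-in for corev1.Container, and removeUnrequested is a hypothetical helper, not the real ensureContainers signature.

```go
package main

import "fmt"

// Container is a minimal stand-in for corev1.Container; only the Name
// field matters for this sketch.
type Container struct {
	Name string
}

// removeUnrequested deletes entries of *existing whose names are absent
// from required. Iterating from the back means each removal only shifts
// elements at indexes we have already visited, so the loop never reads
// past the end of the shrunken slice (the bug the forward range loop had).
func removeUnrequested(existing *[]Container, required []Container) {
	for i := len(*existing) - 1; i >= 0; i-- {
		found := false
		for _, c := range required {
			if c.Name == (*existing)[i].Name {
				found = true
				break
			}
		}
		if !found {
			// Splice out element i; only already-visited indexes shift.
			*existing = append((*existing)[:i], (*existing)[i+1:]...)
		}
	}
}

func main() {
	existing := []Container{{Name: "a"}, {Name: "b"}, {Name: "c"}}
	removeUnrequested(&existing, []Container{{Name: "b"}})
	fmt.Println(existing) // only "b" remains
}
```

A forward `for i, c := range *existing` loop would evaluate the range expression once and keep serving the original indexes after a removal, which is exactly the out-of-range panic in the trace above.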
@openshift-cherrypick-robot: This pull request references Bugzilla bug 1783221, which is invalid.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retitle Bug 1798049: lib/resourcemerge/core: Fix panic on container/port removal
@openshift-cherrypick-robot: This pull request references Bugzilla bug 1798049, which is invalid.
/bugzilla refresh
@wking: This pull request references Bugzilla bug 1798049, which is valid.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: openshift-cherrypick-robot, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.
/retest Please review the full test history for this PR and help us cut down flakes.
5 similar comments

Previous upgrade job failed with: which is rhbz#1792002.

/retest Please review the full test history for this PR and help us cut down flakes.

Previous upgrade job failed with: That's rhbz#1801885.

/retest Please review the full test history for this PR and help us cut down flakes.
Previous upgrade job failed with:

/retest Please review the full test history for this PR and help us cut down flakes.
3 similar comments

And now AWS is falling over: Eventually we'll bust through these flakes. I hope... :p

/retest Please review the full test history for this PR and help us cut down flakes.
12 similar comments
/hold

/hold cancel We're green?!
@openshift-cherrypick-robot: All pull requests linked via external trackers have merged. Bugzilla bug 1798049 has been moved to the MODIFIED state.
This is an automated cherry-pick of #282
/assign wking