Bug 2117033: pkg/cvo/sync_worker: Trigger new sync round on ClusterOperator versions[name=operator] changes #818
Conversation
@wking: This pull request references Bugzilla bug 2117033, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug. Requesting review from QA contact.
func clusterOperatorInterfaceVersionOrDie(obj interface{}, name string) (*configv1.ClusterOperator, string) {
	co, ok := obj.(*configv1.ClusterOperator)
	if !ok {
		panic(fmt.Sprintf("%v is %T, not a ClusterOperator", obj, obj))
This feels like a hack, but I wasn't able to find a cleaner way to get at the structured information I want. Improvements welcome :)
I am not sure we need a panic here. If clusterOperatorEventHandler() can only be called on an operator, why do we need to check whether the obj is not an operator?
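For context on the pattern under discussion: client-go informer event handlers receive untyped interface{} objects, so the handler has to assert the concrete type before it can read status.versions. Below is a minimal, hypothetical sketch of that wiring, assuming a buffered notification channel; the function names and the channel are illustrative, not the merged CVO code.

```go
package cvo

import (
	configv1 "github.com/openshift/api/config/v1"
	"k8s.io/client-go/tools/cache"
)

// registerClusterOperatorHandler is a hypothetical sketch: it wires an update
// handler that nudges the sync worker when versions[name=operator] changes.
// The informer API hands us interface{}, hence the type assertions debated above.
func registerClusterOperatorHandler(informer cache.SharedInformer, notify chan<- string) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldCO, ok := oldObj.(*configv1.ClusterOperator)
			if !ok {
				return // or panic, as the PR's helper does, to surface a programming error
			}
			newCO, ok := newObj.(*configv1.ClusterOperator)
			if !ok {
				return
			}
			if operatorVersion(oldCO) != operatorVersion(newCO) {
				select { // non-blocking send: one pending notification is enough
				case notify <- "ClusterOperator " + newCO.Name + " bumped versions[name=operator]":
				default:
				}
			}
		},
	})
}

// operatorVersion returns status.versions[name=operator], or "" if unset.
func operatorVersion(co *configv1.ClusterOperator) string {
	for _, v := range co.Status.Versions {
		if v.Name == "operator" {
			return v.Version
		}
	}
	return ""
}
```

The panic-vs-return choice on a failed assertion is exactly the trade-off in this thread: panicking turns a wiring mistake into a loud failure, while silently returning keeps the handler resilient to unexpected object types.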
Force-pushed from 8c3faf4 to 7e0bcca (compare)
@wking: This pull request references Bugzilla bug 2117033, which is valid. 3 validations were run on this bug. Requesting review from QA contact.
Force-pushed from 2be972c to 482d1bf (compare)
pkg/cvo/sync_worker: Trigger new sync round on ClusterOperator versions[name=operator] changes

David and Stephen identified an unnecessary delay [1]:

* 9:42:00, CVO gives up on the Kube API server ClusterOperator [2]
* 9:42:47, Kube API server operator achieves 4.12 [3]
* 9:46:22, after a cool-off sleep, the CVO starts in on a new manifest graph-walk attempt [4]
* 9:46:34, CVO notices that the Kube API server ClusterOperator is happy [5]

The 3+ minute delay from 9:42:47 to 9:46:22 is not helpful, and we've probably had delays like this since my old e02d148 (pkg/cvo/internal/operatorstatus: Replace wait-for with single-shot "is it alive now?", 2021-05-13, openshift#560), which landed in 4.6.

This commit introduces a "ClusterOperator bumped versions[name=operator]" trigger to break out of the cool-off sleep.

There's plenty of room to be more precise here. For example, you could currently have a versions[name=operator] bump during the sync loop that the CVO did notice, and that queued notification will break from the sleep and trigger a possibly useless reconciliation round while we wait on some other resource. You could drain the notification queue before the sleep to avoid that, but you wouldn't want to drain new-work notifications, and I haven't done the work required to be able to split those apart.

I'm only looking at ClusterOperator at the moment because, of the many types the CVO manages, ClusterOperator is the one we most frequently wait on, as large cluster components take their time updating. It's possible but less likely that we'd want similar triggers for additional types in the future (Deployment, etc.), if/when those types develop more elaborate "is the in-cluster resource sufficiently happy?" checks.

The panic-backed type casting in clusterOperatorInterfaceVersionOrDie also feels like a hack, but I wasn't able to find a cleaner way to get at the structured information I want. Improvements welcome :)

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2117033#c1
[2]: From Loki: E0808 09:42:00.022500 1 task.go:117] error running apply for clusteroperator "kube-apiserver" (107 of 806): Cluster operator kube-apiserver is updating versions
[3]: $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade/1556564581915037696/artifacts/e2e-gcp-sdn-upgrade/openshift-e2e-test/build-log.txt | grep 'clusteroperator/kube-apiserver versions:'
     Aug 08 09:33:48.603 I clusteroperator/kube-apiserver versions: raw-internal 4.11.0-rc.7 -> 4.12.0-0.ci-2022-08-07-192220
     Aug 08 09:42:47.917 I clusteroperator/kube-apiserver versions: operator 4.11.0-rc.7 -> 4.12.0-0.ci-2022-08-07-192220
[4]: From Loki: I0808 09:46:22.998344 1 sync_worker.go:850] apply: 4.12.0-0.ci-2022-08-07-192220 on generation 3 in state Updating at attempt 5
[5]: From Loki: I0808 09:46:34.556374 1 sync_worker.go:973] Done syncing for clusteroperator "kube-apiserver" (107 of 806)
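To make the "break out of the cool-off sleep" mechanism concrete, here is a minimal, hypothetical Go sketch (not the CVO's actual sync_worker code; the function name, channel, and durations are illustrative) of a sleep that a change notification can cut short:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitOrNotified sleeps for the cool-off duration, but returns early if a
// change notification arrives or the context is cancelled.
func waitOrNotified(ctx context.Context, coolOff time.Duration, notify <-chan string) {
	timer := time.NewTimer(coolOff)
	defer timer.Stop()
	select {
	case <-timer.C:
		fmt.Println("cool-off elapsed, starting next sync attempt")
	case reason := <-notify:
		fmt.Printf("woken early: %s\n", reason)
	case <-ctx.Done():
	}
}

func main() {
	notify := make(chan string, 1)
	// An informer event handler (see the ClusterOperator sketch above) would
	// send on this channel when versions[name=operator] changes.
	go func() {
		time.Sleep(2 * time.Second)
		notify <- `ClusterOperator kube-apiserver bumped versions[name=operator]`
	}()
	waitOrNotified(context.Background(), 5*time.Minute, notify)
}
```

The design point is that a notification only shortens the wait; if no events arrive, the periodic sync attempt still happens once the cool-off elapses.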
Force-pushed from 482d1bf to 9222fa9 (compare)
@wking: Update: the presubmit failed on a likely orthogonal issue. Comparing what Loki turns up with the test-suite monitor:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/818/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic-upgrade/1557150272407474176/artifacts/e2e-agnostic-upgrade/openshift-e2e-test/build-log.txt | grep 'clusteroperator/kube-apiserver versions:'
Aug 10 00:58:05.747 I clusteroperator/kube-apiserver versions: raw-internal 4.12.0-0.ci.test-2022-08-09-234236-ci-op-dmywi2n7-initial -> 4.12.0-0.ci.test-2022-08-09-235304-ci-op-dmywi2n7-latest
Aug 10 01:07:09.155 I clusteroperator/kube-apiserver versions: operator 4.12.0-0.ci.test-2022-08-09-234236-ci-op-dmywi2n7-initial -> 4.12.0-0.ci.test-2022-08-09-235304-ci-op-dmywi2n7-latest
Aug 10 02:08:47.822 I clusteroperator/kube-apiserver versions: raw-internal 4.12.0-0.ci.test-2022-08-09-235304-ci-op-dmywi2n7-latest -> 4.12.0-0.ci.test-2022-08-09-234236-ci-op-dmywi2n7-initial
Aug 10 02:17:51.529 I clusteroperator/kube-apiserver versions: operator 4.12.0-0.ci.test-2022-08-09-235304-ci-op-dmywi2n7-latest -> 4.12.0-0.ci.test-2022-08-09-234236-ci-op-dmywi2n7-initial

So the CVO tests are doing A->B->A rollbacks, and I'm looking at the A->B leg in Loki (which is the leg managed by the incoming CVO that has my patch). The test-suite monitor sees the transition at 01:07:09.155, the new sync round kicks off at 01:07:09.150539, and the CVO gets deep enough into the manifest graph to notice the kube-apiserver's completion at 01:07:16.712837. So down from possibly-minutes to seconds :tada:
LalatenduMohanty left a comment:
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.
/override ci/prow/e2e-agnostic-upgrade
@wking: Overrode contexts on behalf of wking: ci/prow/e2e-agnostic-upgrade
@wking: all tests passed! Full PR test history. Your PR dashboard.
@wking: All pull requests linked via external trackers have merged: Bugzilla bug 2117033 has been moved to the MODIFIED state.
/cherrypick release-4.11
@wking: new pull request created: #826
…ication" formatting Cleaning up a log line I'd fumbled in 9222fa9 (pkg/cvo/sync_worker: Trigger new sync round on ClusterOperator versions[name=operator] changes, 2022-08-09, openshift#818). Changes: * Remove redundant "so do not notify about", because "no need to inform about" covers that with more clarity (we're not informing, and even if we did, it would be useless, because there's already a pending notification in place). * Use "Infof" to get the %s formatted in, avoiding: ...no need to inform about: %sCluster operator... `