
Conversation

@LalatenduMohanty
Member

For creating the operator bundle, we need to know which version of the
SDK should be used to generate the manifests.

Signed-off-by: Lalatendu Mohanty <[email protected]>
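
For reference, a minimal sketch of what such a version package could look like, assuming the identifiers that appear in the diff below (version.VersionOperator, version.VersionSDK); the file path and values here are placeholders, not necessarily what this PR adds:

    // version/version.go (hypothetical path)
    package version

    var (
        // VersionOperator is the version of the operator itself.
        VersionOperator = "0.0.1" // placeholder value
        // VersionSDK is the operator-sdk version that should be used to
        // generate the bundle manifests.
        VersionSDK = "1.3.0" // placeholder value
    )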

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label on Dec 9, 2020
@LalatenduMohanty changed the title from "Adding SDK version to the binary" to "Adding SDK version to the operator" on Dec 9, 2020
log.Info(fmt.Sprintf("Operator Version: %s", version.Operator))
log.Info(fmt.Sprintf("Go Version: %s", runtime.Version()))
log.Info(fmt.Sprintf("Go OS/Arch: %s/%s", runtime.GOOS, runtime.GOARCH))
log.Info(fmt.Sprintf("version: v%s", version.VersionOperator))
Contributor

Why the change from Operator Version?

Member Author
@LalatenduMohanty Dec 9, 2020

You mean from version.Operator to version.VersionOperator? That is to keep it consistent with version.VersionSDK.

Contributor

Why did you change the label string from "Operator Version:", which seems clearer?

Member Author

Will change it to operator version.

PratikMahajan pushed a commit to PratikMahajan/cincinnati-operator that referenced this pull request Mar 17, 2021
Baked in edges:

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.0-rc.0-x86_64 | grep Upgrades
    Upgrades: 4.2.13
  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.0-rc.3-x86_64 | grep Upgrades
    Upgrades: 4.2.16, 4.3.0-rc.0, 4.3.0-rc.1, 4.3.0-rc.2

The wide 'from' regexp was appropriate for 4.3.0-rc.0, which had no
4.3 update sources.  But rc.3 does have update sources, and we want to
allow 4.3.0-rc.0 -> 4.3.0-rc.3, because it is not impacted by the
4.2->4.3 GCP update bug.  The overly-strict regexp was from 6d3db09
(Blocking edges to candidate 4.3.0-rc.3, 2020-01-23, openshift#34).
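
To make the regexp difference concrete, here is a small, self-contained Go sketch; the real blocked-edges entries live in the graph-data repository with their own format, and these 'from' patterns are illustrative only:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical 'from' patterns for the block on edges into 4.3.0-rc.3.
        wide := regexp.MustCompile(`.*`)           // matches every source, including 4.3.0-rc.0
        narrow := regexp.MustCompile(`^4\.2\..*$`) // matches only 4.2.* sources

        for _, from := range []string{"4.2.16", "4.3.0-rc.0"} {
            fmt.Printf("%s wide=%v narrow=%v\n",
                from, wide.MatchString(from), narrow.MatchString(from))
        }
        // The wide pattern matches both 4.2.16 and 4.3.0-rc.0; the narrow
        // pattern matches 4.2.16 but not 4.3.0-rc.0, which is what lets the
        // 4.3.0-rc.0 -> 4.3.0-rc.3 edge through.
    }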

Also expand the referenced bugs for the blocked 4.2 -> 4.3 edges:

* Update hangs with [1]:

    Working towards 4.3.0...: 13% complete

  and machine-config going Degraded=True with RequiredPoolsFailed:

    Unable to apply 4.3.0-...: timed out waiting for the condition
    during syncRequiredMachineConfigPools: pool master has not
    progressed to latest configuration: controller version mismatch
    for rendered-master-6c22... expected 23a6... has d780... retrying

  Fixed in 4.2 with MCO 31fed93 [2] and in 4.3 with MCO 25bb6ae [3].

    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.2.14 | grep machine-config
      machine-config-operator                       https://github.com/openshift/machine-config-operator                       d780d197a9c5848ba786982c0c4aaa7487297046
    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.2.16 | grep machine-config
      machine-config-operator                       https://github.com/openshift/machine-config-operator                       31fed93186c9f84708f5cdfd0227ffe4f79b31cd

  So the 4.2 fix was in 4.2.16.

    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.3.0-rc.0 | grep machine-config
      machine-config-operator                       https://github.com/openshift/machine-config-operator                       23a6e6fb37e73501bc3216183ef5e6ebb15efc7a
    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.3.0-rc.3 | grep machine-config
      machine-config-operator                       https://github.com/openshift/machine-config-operator                       25bb6aeb58135c38a667e849edf5244871be4992

  So the 4.3 fix was new in rc.3.

* Updates hang with FailedCreatePodSandBox events in the
  openshift-ingress namespace like [4]:

    pod/router-default-...: Failed create pod sandbox: rpc error: code
    = Unknown desc = failed to create pod network sandbox
    k8s_router-default-..._openshift-ingress_...(...): Multus: error
    adding pod to network "openshift-sdn": delegateAdd: error invoking
    DelegateAdd - "openshift-sdn": error in getting result from
    AddNetwork: CNI request failed with status 400: 'failed to run
    IPAM for ...: failed to run CNI IPAM ADD: failed to allocate for
    range 0: no IP addresses available in range set: <ip1>-<ip2>

  Fixed in 4.2 with MCO 9366460 [5] and in 4.3 with MCO 311a01e [6].

    $ git --no-pager log --first-parent --oneline -4 origin/release-4.2
    6e0df82c (origin/release-4.2) Merge pull request #1347 from openshift-cherrypick-robot/cherry-pick-1285-to-release-4.2
    93664600 Merge pull request #1362 from rphillips/fixes/1787581_4.2
    bd358bb7 Merge pull request #1323 from openshift-cherrypick-robot/cherry-pick-1320-to-release-4.2
    31fed931 Merge pull request #1358 from runcom/osimageurl-race-42

  So the 4.2 fix was after 4.2.16's 31fed93186.

    $ git --no-pager log --first-parent --oneline -8 origin/release-4.3
    3ad3a836 (origin/release-4.3) Merge pull request #1399 from celebdor/haproxy-v4v6
    25503eee Merge pull request #1353 from russellb/1211-4.3-backport
    67ab306b Merge pull request #1426 from mandre/ssc43
    d74f56fe Merge pull request #1410 from retroflexer/manual-cherry-pick-from-master
    207cc171 Merge pull request #1406 from openshift-cherrypick-robot/cherry-pick-1396-to-release-4.3
    25bb6aeb Merge pull request #1359 from runcom/osimageurl-race-43
    311a01e8 Merge pull request #1361 from rphillips/fixes/1787581_4.3
    23a6e6fb Merge pull request #1348 from openshift-cherrypick-robot/cherry-pick-1285-to-release-4.3

  So the 4.3 fix was between rc.0's 23a6e6fb37 and rc.3's 25bb6aeb58
  (see 'release info' calls in the previous list entry for those
  commit hashes).

* Update CI fails with [7,8]:

    Could not reach HTTP service through <ip>:80 after 2m0s

  and authentication going Degraded=True with RouteHealthDegradedFailedGet:

    RouteHealthDegraded: failed to GET route: dial tcp <ip>:443:
    connect: connection refused

  Fixed in 4.2 with SDN 677b3a8 [9] and in 4.3 with SDN 74a8aee [10].

    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.2.16 | grep ' node '
      node                                          https://github.com/openshift/sdn                                           770cb7bf922a721bc6c62af5490439d6174036fe
    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.2.14 | grep ' node '
      node                                          https://github.com/openshift/sdn                                           770cb7bf922a721bc6c62af5490439d6174036fe
    $ git --no-pager log --first-parent --oneline -4 origin/release-4.2
    098a6410 (origin/release-4.2) Merge pull request openshift#95 from danwinship/fork-k8s-client-go-4.2
    9955a65b Merge pull request openshift#72 from juanluisvaladas/too_many_dns_queries_42
    677b3a80 Merge pull request openshift#90 from openshift-cherrypick-robot/cherry-pick-81-to-release-4.2
    770cb7bf Merge pull request openshift#73 from danwinship/egressip-cleanup-4.2

  So the fix landed after 4.2.16's 770cb7bf.

    $ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.3.0-rc.0 | grep ' sdn '
      sdn                                           https://github.com/openshift/sdn                                           d4e36d5019ef0e130e0d246581508821a7322753
    $ git --no-pager log --first-parent --oneline -5 origin/release-4.3
    490a574e (origin/release-4.3) Merge pull request openshift#98 from openshift-cherrypick-robot/cherry-pick-96-to-release-4.3
    85ab1033 Merge pull request openshift#78 from openshift-cherrypick-robot/cherry-pick-57-to-release-4.3
    d4e36d50 Merge pull request openshift#85 from openshift-cherrypick-robot/cherry-pick-84-to-release-4.3
    dabc4ef5 Merge pull request openshift#83 from dougbtv/backport-build-use-host-local
    74a8aee3 Merge pull request openshift#81 from openshift-cherrypick-robot/cherry-pick-79-to-release-4.3

  So the fix landed before rc.0's d4e36d50.

* GCP update CI fails with [11]:

    Could not reach HTTP service through <ip>:80 after 2m0s

  in 4.2.16 -> 4.3.0-rc.0 [12], 4.2.16 -> 4.3.0-rc.3 [13,14,15], and
  4.2.18 -> 4.3.1 [16].  This doesn't happen every time though; at
  least one 4.2.16 -> 4.3.0-rc.3 run has passed on GCP [17].  We don't
  have a root cause yet, but the final failure matches [8], discussed
  above.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1786993
[2]: openshift/machine-config-operator#1358 (comment)
[3]: openshift/machine-config-operator#1359 (comment)
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1787635
[5]: openshift/machine-config-operator#1362 (comment)
[6]: openshift/machine-config-operator#1361 (comment)
[7]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/214#1:build-log.txt%3A414
[8]: https://bugzilla.redhat.com/show_bug.cgi?id=1781763
[9]: openshift/sdn#90 (comment)
[10]: openshift/sdn#81 (comment)
[11]: https://bugzilla.redhat.com/show_bug.cgi?id=1785457
[12]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/216
[13]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/232
[14]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/233
[15]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/234
[16]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/286
[17]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/230
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Mar 17, 2021
log.Info(fmt.Sprintf("Go Version: %s", runtime.Version()))
log.Info(fmt.Sprintf("Go OS/Arch: %s/%s", runtime.GOOS, runtime.GOARCH))
log.Info(fmt.Sprintf("version: v%s", version.VersionOperator))
log.Info(fmt.Sprintf("operator-sdk version: v%s", version.VersionSDK))
Member

I am in favor of the Makefile change, but I would rather keep this out of Go; see here and that commit message. Is there a benefit to storing this information in Go?
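
One possible middle ground (a sketch only, not necessarily what this PR does): keep the canonical value in the Makefile and inject it at build time via -ldflags, so the Go side only declares an overridable variable:

    // version/version.go (hypothetical path)
    package version

    // VersionSDK is intended to be overridden at build time, e.g. from the Makefile:
    //   go build -ldflags "-X <module-path>/version.VersionSDK=$(OPERATOR_SDK_VERSION)" ...
    // where OPERATOR_SDK_VERSION is a hypothetical Makefile variable.
    var VersionSDK = "unknown"

That keeps the Makefile as the single source of truth while still letting the operator log the value at startup.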

log.Info(fmt.Sprintf("version: v%s", version.VersionOperator))
log.Info(fmt.Sprintf("operator-sdk version: v%s", version.VersionSDK))
log.Info(fmt.Sprintf("go Version: %s", runtime.Version()))
log.Info(fmt.Sprintf("go OS/Arch: %s/%s", runtime.GOOS, runtime.GOARCH))
Member

Why the downcasing? The Go project name is canonically title case, e.g. see examples here.

@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 23, 2021
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this on May 24, 2021
@openshift-ci
Contributor

openshift-ci bot commented May 24, 2021

@openshift-bot: Closed this PR.


In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@LalatenduMohanty removed the lifecycle/rotten label on Jun 9, 2021
@openshift-ci
Contributor

openshift-ci bot commented Jun 9, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@LalatenduMohanty
Member Author

/lifecycle frozen

@openshift-ci
Contributor

openshift-ci bot commented Jun 9, 2021

@LalatenduMohanty: The lifecycle/frozen label cannot be applied to Pull Requests.


In response to this:

/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Sep 7, 2021
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 8, 2021
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this on Nov 7, 2021
@openshift-ci
Contributor

openshift-ci bot commented Nov 7, 2021

@openshift-bot: Closed this PR.


In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
