
Conversation

@clobrano
Contributor

Add three new test cases to validate etcd cluster recovery from cold boot scenarios reached through different graceful/ungraceful shutdown combinations (GNS: graceful node shutdown; UGNS: ungraceful node shutdown):

  • Cold boot from double GNS: both nodes gracefully shut down simultaneously, then both restart (full cluster cold boot)
  • Cold boot from sequential GNS: first node gracefully shut down, then second node gracefully shut down, then both restart
  • Cold boot from mixed GNS/UGNS: first node gracefully shut down, surviving node then ungracefully shut down, then both restart

Note: The inverse case (UGNS first node, then GNS second) is not tested because in TNF clusters, an ungracefully shut down node is quickly recovered, preventing the ability to wait and gracefully shut down the second node later. The double UGNS scenario is already covered by existing tests.
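
For orientation, here is a minimal sketch of how the double-GNS case could be structured as a Ginkgo spec. It is illustrative only: the package name and the gracefulShutdownNode, powerOnNode, and waitForEtcdTwoVotingMembers helpers are placeholders, not the utilities actually added by this PR.

// Sketch only. Helper functions are hypothetical stand-ins for the suite's
// real node power-control and etcd health utilities.
package twonode

import (
	"context"
	"time"

	g "github.com/onsi/ginkgo/v2"
)

// gracefulShutdownNode would issue an OS-level shutdown on the node (e.g. via a debug pod).
func gracefulShutdownNode(ctx context.Context, name string) { /* placeholder */ }

// powerOnNode would power the machine back on (e.g. via the BMC).
func powerOnNode(ctx context.Context, name string) { /* placeholder */ }

// waitForEtcdTwoVotingMembers would poll until etcd reports two healthy voting members.
func waitForEtcdTwoVotingMembers(ctx context.Context, timeout time.Duration) error { return nil }

var _ = g.Describe("[sig-etcd] TNF cold boot recovery", func() {
	g.It("recovers etcd after both nodes are gracefully shut down and restarted", func(ctx context.Context) {
		nodes := []string{"master-0", "master-1"}
		// Double GNS: shut both nodes down gracefully (full cluster cold boot).
		for _, n := range nodes {
			gracefulShutdownNode(ctx, n)
		}
		// Restart both nodes and wait for the etcd cluster to re-form with two voting members.
		for _, n := range nodes {
			powerOnNode(ctx, n)
		}
		if err := waitForEtcdTwoVotingMembers(ctx, 30*time.Minute); err != nil {
			g.Fail("etcd did not recover after cold boot: " + err.Error())
		}
	})
})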

@openshift-ci openshift-ci bot requested review from eggfoobar and qJkee October 21, 2025 16:04
@jaypoulz
Contributor

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery

@openshift-ci
Contributor

openshift-ci bot commented Oct 21, 2025

@jaypoulz: trigger 0 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

@jaypoulz
Contributor

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 21, 2025

@jaypoulz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0c4ec1d0-ae9a-11f0-95ed-ad8d5e8a115f-0

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/df063eb0-b31c-11f0-9588-ce4096893980-0

@clobrano
Contributor Author

Rebasing this to get #30385

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 27, 2025
@clobrano clobrano force-pushed the tnf-e2e-cold-boot-from-mixed-gns-ungns branch from c653990 to 9083969 Compare October 29, 2025 08:38
Add three new test cases to validate etcd cluster recovery from cold
boot scenarios reached through different graceful/ungraceful shutdown
combinations:

- Cold boot from double GNS: both nodes gracefully shut down
  simultaneously, then both restart (full cluster cold boot)
- Cold boot from sequential GNS: first node gracefully shut down, then
  second node gracefully shut down, then both restart
- Cold boot from mixed GNS/UGNS: first node gracefully shut down,
  surviving node then ungracefully shut down, then both restart

Note: The inverse case (UGNS first node, then GNS second) is not tested
because in TNF clusters, an ungracefully shut down node is quickly
recovered, preventing the ability to wait and gracefully shut down the
second node later. The double UGNS scenario is already covered by
existing tests.
@clobrano clobrano force-pushed the tnf-e2e-cold-boot-from-mixed-gns-ungns branch from 9083969 to b6e1384 Compare October 29, 2025 08:41
@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 29, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/92aaaa10-b4d4-11f0-8ae3-147c7322b463-0

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 29, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/f41854d0-b503-11f0-9c7a-52e63142ca96-0

Change BeforeEach health checks to skip tests instead of failing them
when the cluster is not in a healthy state at the start of the test.

Previously, the etcd recovery tests would fail if the cluster was not
healthy before the test started. This is problematic because these
tests are designed to validate recovery from intentional disruptions,
not to debug pre-existing cluster issues.

Changes:
- Extract health validation functions to common.go for reusability
- Add skipIfClusterIsNotHealthy() to consolidate all health checks
- Implement internal retry logic in health check functions with timeouts
- Add ensureEtcdHasTwoVotingMembers() to validate membership state
- Skip tests early if cluster is degraded, pods aren't running, or
  members are unhealthy

This ensures tests only run when the cluster is in a known-good state,
reducing false failures due to pre-existing issues while maintaining
test coverage for actual recovery scenarios.
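
A rough sketch of the skip-based gate described above, assuming the skipIfClusterIsNotHealthy and ensureEtcdHasTwoVotingMembers names from the commit message; the polling interval, the isEtcdOperatorHealthy probe, and the package layout are illustrative assumptions rather than the PR's actual code.

// Sketch only, not the code from this PR: illustrates skipping (rather than
// failing) when the cluster is unhealthy before a recovery test starts.
package twonode

import (
	"context"
	"time"

	"github.com/onsi/ginkgo/v2"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/kubernetes/test/e2e/framework"
)

// Hypothetical probes standing in for the health checks extracted to common.go.
func isEtcdOperatorHealthy(ctx context.Context) error          { return nil }
func ensureEtcdHasTwoVotingMembers(ctx context.Context) error  { return nil }

// skipIfClusterIsNotHealthy polls the cluster for a while and skips the test,
// rather than failing it, if the cluster never reaches a known-good state.
func skipIfClusterIsNotHealthy(ctx context.Context, timeout time.Duration) {
	err := wait.PollUntilContextTimeout(ctx, 10*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			if err := isEtcdOperatorHealthy(ctx); err != nil {
				framework.Logf("etcd operator not healthy yet: %v", err)
				return false, nil // keep retrying until the timeout expires
			}
			if err := ensureEtcdHasTwoVotingMembers(ctx); err != nil {
				framework.Logf("etcd membership not ready: %v", err)
				return false, nil
			}
			return true, nil
		})
	if err != nil {
		// Pre-existing issue, not a recovery failure: skip instead of failing.
		ginkgo.Skip("cluster is not in a known-good state before the test: " + err.Error())
	}
}
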
@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 30, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b0181f70-b5be-11f0-9b3d-143ce616f56d-0

@clobrano
Contributor Author

payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Oct 31, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/878d1590-b629-11f0-97f1-f0b4b63689fd-0

@openshift-trt

openshift-trt bot commented Oct 31, 2025

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: f918765

  • pull-ci-openshift-origin-main-e2e-aws-ovn-microshift-serial: High risk. "[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]" is a new test that failed 1 time(s) against the current commit
  • pull-ci-openshift-origin-main-e2e-aws-ovn-microshift-serial: High risk. "[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]" is a new test that failed 1 time(s) against the current commit
  • pull-ci-openshift-origin-main-e2e-aws-ovn-serial-2of2: High risk. "[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]" is a new test that failed 1 time(s) against the current commit

New tests seen in this PR at sha: f918765

  • "[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Serial] [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Serial] [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Serial] [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]" [Total: 2, Pass: 1, Fail: 1, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]" [Total: 2, Pass: 0, Fail: 2, Flake: 0]
  • "[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • "[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]" [Total: 2, Pass: 2, Fail: 0, Flake: 0]
  • (...showing 20 of 29 tests)

@clobrano clobrano changed the title NO JIRA: TNF add etcd cold boot recovery tests from graceful node shutdown OCPEDGE-1788: TNF add etcd cold boot recovery tests from graceful node shutdown Oct 31, 2025
@openshift-ci-robot

openshift-ci-robot commented Oct 31, 2025

@clobrano: This pull request references OCPEDGE-1788 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

In response to this:

Add three new test cases to validate etcd cluster recovery from cold boot scenarios reached through different graceful/ungraceful shutdown combinations:

  • Cold boot from double GNS: both nodes gracefully shut down simultaneously, then both restart (full cluster cold boot)
  • Cold boot from sequential GNS: first node gracefully shut down, then second node gracefully shut down, then both restart
  • Cold boot from mixed GNS/UGNS: first node gracefully shut down, surviving node then ungracefully shut down, then both restart

Note: The inverse case (UGNS first node, then GNS second) is not tested because in TNF clusters, an ungracefully shut down node is quickly recovered, preventing the ability to wait and gracefully shut down the second node later. The double UGNS scenario is already covered by existing tests.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Oct 31, 2025
@clobrano
Contributor Author

clobrano commented Nov 3, 2025

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 3, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/5a1c1ae0-b893-11f0-8047-50d9133ea1bb-0

@clobrano
Contributor Author

clobrano commented Nov 7, 2025

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 7, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4a74ab30-bbda-11f0-80bd-b8a7622b18f9-0

@clobrano
Contributor Author

clobrano commented Nov 7, 2025

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 7, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8e70d3c0-bbe1-11f0-9f1c-e6c898636467-0

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 10, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d9d0c600-be29-11f0-8d69-5a1123152b7a-0

@clobrano
Contributor Author

clobrano commented Nov 10, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4a74ab30-bbda-11f0-80bd-b8a7622b18f9-0

failed for OCPEDGE-2213

Nov 10 13:52:11.036208 master-1 etcd[4665]: {"level":"warn","ts":"2025-11-10T13:52:11.033139Z","caller":"etcdmain/etcd.go:146","msg":"failed to start etcd","error":"error validating peerURLs {ClusterID:388387a7db949890 Members:[&{ID:732be3217e0b6915 RaftAttributes:{PeerURLs:[https://192.168.111.20:2380] IsLearner:false} Attributes:{Name:master-0 ClientURLs:[https://192.168.111.20:2379]}}] RemovedMemberIDs:[]}: member count is unequal"}

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 10, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/83eac3a0-be4f-11f0-94b8-fb61590621b1-0

@clobrano
Contributor Author

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/83eac3a0-be4f-11f0-94b8-fb61590621b1-0

The cluster failed to deploy

@clobrano
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Nov 11, 2025

@clobrano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/dd9fada0-bf00-11f0-8514-620ca41f9ab2-0

@clobrano clobrano force-pushed the tnf-e2e-cold-boot-from-mixed-gns-ungns branch from cfa18ac to 90b1da6 Compare November 14, 2025 09:12
@openshift-ci
Contributor

openshift-ci bot commented Nov 14, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: clobrano
Once this PR has been reviewed and has the lgtm label, please assign jeff-roche for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


I'm not sure if this is the type of comment you're looking for at this point in the life of the PR, but this block seems suspect to me. Available and Degraded are independent conditions, and we need both checks to hold at the same time to consider the operator healthy. This way of writing the logic seems clearer and also reports Degraded alongside the Not Available status:

// Check if etcd operator is healthy (Available and not Degraded)
available := findClusterOperatorCondition(co.Status.Conditions, v1.OperatorAvailable)
degraded := findClusterOperatorCondition(co.Status.Conditions, v1.OperatorDegraded)

if (available != nil && available.Status == v1.ConditionTrue) &&
	(degraded == nil || degraded.Status != v1.ConditionTrue) {
	framework.Logf("SUCCESS: Cluster operator is healthy")
	return nil
}

// Not healthy - report why
var reasons []string
if available == nil {
	reasons = append(reasons, "Available condition not found")
} else if available.Status != v1.ConditionTrue {
	reasons = append(reasons, fmt.Sprintf("not Available: %s", available.Message))
}
if degraded != nil && degraded.Status == v1.ConditionTrue {
	reasons = append(reasons, fmt.Sprintf("Degraded: %s", degraded.Message))
}
return fmt.Errorf("ClusterOperator is unhealthy: %s", strings.Join(reasons, "; "))

Contributor Author

I don't see them as independent 🤔

available := findClusterOperatorCondition(co.Status.Conditions, v1.OperatorAvailable)
if available == nil {
	err = fmt.Errorf("ClusterOperator Available condition not found") // available == nil ==> err is set, exit from if/else branch
} else if available.Status != v1.ConditionTrue { // here we are sure that (available != nil)
	err = fmt.Errorf("ClusterOperator is not Available: %s", available.Message) // available.Status != v1.ConditionTrue  ==> err is set, exit from if/else branch
} else { // here we are sure that (available != nil && available.Status == v1.ConditionTrue)
	// Check if etcd operator is not Degraded
	degraded := findClusterOperatorCondition(co.Status.Conditions, v1.OperatorDegraded)
	if degraded != nil && degraded.Status == v1.ConditionTrue {
		err = fmt.Errorf("ClusterOperator is Degraded: %s", degraded.Message) // degraded here, err is set, exit from if/else branch
	} else {
		framework.Logf("SUCCESS: Cluster operator is healthy")  // here we are sure that (available.Status == v1.ConditionTrue && degraded.Status != v1.ConditionTrue)
		return nil
	}
}

Contributor Author

And we return the error message when the context expires

select {
case <-ctx.Done():
	return err
default:
}
time.Sleep(pollInterval)


Maybe I've misunderstood the possibilities matrix. Can't the cluster be Available and Degraded at the same time? Doesn't that happen, for example, when we have 1/2 etcd members but kept quorum? (a new cluster where the second member hasn't joined, for example)

Contributor Author

We discussed this in chat. I understand that the nested if/else might seem complex to read, but complex boolean conditions can be hard to read as well. We agreed to keep the code as is.


Yeah, I agree. We might make it marginally more readable, but it's not worth re-doing it when we know it works. The change is not as simple as I thought 😓

@openshift-ci
Contributor

openshift-ci bot commented Nov 18, 2025

@clobrano: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • ci/prow/e2e-vsphere-ovn-upi (commit 90b1da6, required): /test e2e-vsphere-ovn-upi
  • ci/prow/e2e-vsphere-ovn (commit 90b1da6, required): /test e2e-vsphere-ovn
  • ci/prow/e2e-metal-ipi-ovn-ipv6 (commit 90b1da6, required): /test e2e-metal-ipi-ovn-ipv6

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@clobrano
Contributor Author

Replaced by #30519

@clobrano clobrano closed this Nov 24, 2025
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 24, 2025
@openshift-merge-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
