
WIP: fix(upsert): add last-applied-configuration annotation to detect manifest removals#7713

Open
muraee wants to merge 2 commits into openshift:main from muraee:feat-cpov2-last-applied-annotation

Conversation

@muraee
Contributor

@muraee muraee commented Feb 12, 2026

Summary

  • Adds a hypershift.openshift.io/last-applied-configuration annotation to ApplyManifest that stores the JSON representation of the desired manifest state after each successful apply
  • On subsequent reconciles, compares the new desired state against this annotation to detect changes that DeepDerivative cannot detect (trailing slice removals, zero-value field clearing)
  • Retains DeepDerivative as a fallback for drift detection (external modifications) and migration (objects without the annotation)

Details

DeepDerivative has two fundamental flaws:

  1. Trailing slice items ignored: For slices, it only compares up to len(desired) items, so removing items from the end (e.g., --image-overrides args) goes undetected
  2. Zero-value heuristics: It treats nil/empty values as "don't care," so intentionally clearing a field to its zero value is also undetected
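
The trailing-slice behavior can be sketched with a simplified pure-Go model of the DeepDerivative slice rule (an illustrative reimplementation, not the actual k8s.io/apimachinery code):

```go
package main

import "fmt"

// deriveEqualSlices mimics DeepDerivative's slice semantics: a nil or
// empty desired slice always matches ("don't care"), and only the first
// len(desired) elements of actual are compared, so removing items from
// the end of desired leaves the comparison reporting "equal".
func deriveEqualSlices(desired, actual []string) bool {
	if len(desired) == 0 {
		return true // zero value is treated as "don't care"
	}
	if len(desired) > len(actual) {
		return false
	}
	for i := range desired {
		if desired[i] != actual[i] {
			return false
		}
	}
	return true
}

func main() {
	existing := []string{"--flag-a", "--image-overrides=x"}
	desired := []string{"--flag-a"} // the trailing arg was removed from desired

	// A DeepDerivative-style comparison reports "no change":
	fmt.Println(deriveEqualSlices(desired, existing)) // true: removal undetected
}
```

This is exactly the shape of the OCPBUGS-65879 failure mode: the live object keeps the stale trailing argument because the derivative comparison never looks past the end of the shorter desired slice.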

The annotation-based approach stores the full desired state as JSON on each apply. On the next reconcile, a simple string comparison of desiredJSON != lastAppliedJSON catches any change, including removals. Objects exceeding 128KB skip the annotation and fall back to DeepDerivative-only comparison.

For migration, objects created before this change get the annotation force-stamped on their first reconcile.
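
Putting the pieces above together, the reconcile-time decision could look roughly like this sketch (hypothetical helper and constant names, not the PR's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

const (
	lastAppliedAnnotation = "hypershift.openshift.io/last-applied-configuration"
	maxAnnotationSize     = 128 * 1024 // objects above this skip the annotation
)

// needsUpdate decides whether an apply is required. annotations is the
// live object's metadata.annotations; desired is the desired state.
// It returns the JSON to stamp back onto the object after a successful apply.
func needsUpdate(annotations map[string]string, desired any) (bool, string, error) {
	raw, err := json.Marshal(desired)
	if err != nil {
		return false, "", err
	}
	desiredJSON := string(raw)
	if len(desiredJSON) > maxAnnotationSize {
		// Too large to stamp: the caller falls back to DeepDerivative only.
		return false, "", nil
	}
	last, ok := annotations[lastAppliedAnnotation]
	if !ok {
		// Migration path: object created before this change,
		// force-stamp the annotation on the first reconcile.
		return true, desiredJSON, nil
	}
	// A plain string comparison catches removals DeepDerivative misses.
	return desiredJSON != last, desiredJSON, nil
}

func main() {
	desired := map[string]any{"args": []string{"--flag-a"}}
	live := map[string]string{
		lastAppliedAnnotation: `{"args":["--flag-a","--image-overrides=x"]}`,
	}
	changed, _, _ := needsUpdate(live, desired)
	fmt.Println(changed) // true: the trailing-arg removal is detected
}
```

In this sketch the drift-detection fallback (DeepDerivative against the live object) would still run alongside the annotation comparison, since the annotation only captures the last applied desired state, not external modifications.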

Test plan

  • Unit tests for annotation stamping on create
  • Unit tests for trailing slice removal detection
  • Unit tests for nil/empty slice removal detection
  • Unit tests for idempotency (no-op on identical state)
  • Unit tests for migration force-stamp and subsequent no-op
  • Unit tests for large object annotation skip
  • Unit tests for non-self-referential annotation content
  • Unit tests for deterministic JSON output
  • Unit tests for loop detector compatibility
  • Unit tests for toUnstructured field stripping
  • Fixture regeneration for CPO and hypershift-operator testdata
  • make lint passes
  • make vet passes
  • make test passes with race detection

🤖 Generated with Claude Code

…annotation

Regenerate all testdata fixture files to include the new
hypershift.openshift.io/last-applied-configuration annotation
that will be stamped by ApplyManifest.

Signed-off-by: Mulham Raee <mulham.raee@gmail.com>
Commit-Message-Assisted-by: Claude (via Claude Code)
…fest removals

DeepDerivative has fundamental flaws: it ignores trailing slice
removals and treats zero-value fields as "don't care". This causes
bugs like OCPBUGS-65879 where removing --image-overrides args goes
undetected.

Add a last-applied-configuration annotation pattern to ApplyManifest.
After each successful apply, the desired manifest state is stored as
JSON on the object. On subsequent reconciles, the new desired state
is compared against this annotation to detect any change, including
removals that DeepDerivative would miss.

DeepDerivative is retained as a fallback for drift detection (external
modifications to cluster objects) and migration (objects created before
this change). On migration, the annotation is force-stamped on the
first reconcile.

Signed-off-by: Mulham Raee <mulham.raee@gmail.com>
Commit-Message-Assisted-by: Claude (via Claude Code)
@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: LGTM mode

@muraee muraee changed the title fix(upsert): add last-applied-configuration annotation to detect manifest removals NO-JIRA: fix(upsert): add last-applied-configuration annotation to detect manifest removals Feb 12, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 12, 2026

Walkthrough

This pull request updates test fixture YAML manifests across the control-plane-operator testdata directory. Changes primarily add hypershift.openshift.io/last-applied-configuration annotations and expand metadata fields (namespace, ownerReferences, resourceVersion) across hundreds of Kubernetes resource manifests. Some files include additional substantive changes such as new initContainer logic, expanded container command scripts, and volume configuration updates.

Changes

Cohort / File(s) | Summary

  • ConfigMap & Secret Annotations
    Files: aws-cloud-controller-manager/*/...configmap.yaml, aws-cloud-controller-manager/*/...secret.yaml, azure-cloud-controller-manager/*/...configmap.yaml, azure-cloud-controller-manager/*/...secret.yaml, cluster-policy-controller/*/...configmap.yaml, cluster-storage-operator/*/...secret.yaml
    Summary: Added metadata.annotations.hypershift.openshift.io/last-applied-configuration containing serialized configuration snapshots. Some Secrets expanded with metadata.ownerReferences and resourceVersion fields.
  • Deployment Metadata Enrichment
    Files: aws-cloud-controller-manager/*/...deployment.yaml, aws-node-termination-handler/*/...deployment.yaml, azure-cloud-controller-manager/*/...deployment.yaml, catalog-operator/*/...deployment.yaml, certified-operators-catalog/*/...deployment.yaml, cloud-credential-operator/*/...deployment.yaml, cluster-autoscaler/*/...deployment.yaml, cluster-image-registry-operator/*/...deployment.yaml, cluster-network-operator/*/...deployment.yaml, cluster-node-tuning-operator/*/...deployment.yaml, cluster-storage-operator/*/...deployment.yaml, cluster-version-operator/*/...deployment.yaml, community-operators-catalog/*/...deployment.yaml, control-plane-pki-operator/*/...deployment.yaml, csi-snapshot-controller-operator/*/...deployment.yaml, dns-operator/*/...deployment.yaml
    Summary: Added last-applied-configuration annotations; expanded metadata with namespace, ownerReferences, resourceVersion. Some deployments include new or expanded container command scripts, additional initContainers (e.g., token-minters, DNS resolution, member reset logic), and volume configurations.
  • Service & ServiceAccount Annotations
    Files: catalog-operator/*/...service.yaml, catalog-operator/*/...servicemonitor.yaml, certified-operators-catalog/*/...service.yaml, cluster-node-tuning-operator/*/...service*.yaml, cluster-version-operator/*/...service*.yaml, community-operators-catalog/*/...service.yaml, control-plane-pki-operator/*/...serviceaccount.yaml, csi-snapshot-controller-operator/*/...serviceaccount.yaml, dns-operator/*/...service*.yaml, etcd/*/...service.yaml, etcd/*/...servicemonitor.yaml
    Summary: Added metadata annotations for last-applied-configuration; some ServiceMonitors expanded with explicit ownerReferences, resourceVersion, and full spec structures.
  • RBAC Resources
    Files: cluster-autoscaler/*/...role.yaml, cluster-autoscaler/*/...rolebinding.yaml, cluster-network-operator/*/...role.yaml, cluster-network-operator/*/...rolebinding.yaml, cluster-node-tuning-operator/*/...role.yaml, cluster-node-tuning-operator/*/...rolebinding.yaml, cluster-storage-operator/*/...role.yaml, cluster-storage-operator/*/...rolebinding.yaml, cluster-version-operator/*/...role*.yaml, control-plane-pki-operator/*/...role.yaml, control-plane-pki-operator/*/...rolebinding.yaml, csi-snapshot-controller-operator/*/...role*.yaml, dns-operator/*/...role.yaml
    Summary: Added last-applied-configuration annotations and expanded metadata with ownerReferences and resourceVersion fields.
  • Pod Disruption & Snapshot Controllers
    Files: etcd/*/...poddisruptionbudget.yaml, cluster-image-registry-operator/*/...podmonitor.yaml, cluster-network-operator/*/...servicemonitor.yaml, cluster-policy-controller/*/...deployment.yaml, featuregate-generator/*/...job.yaml
    Summary: Added last-applied-configuration annotations; some resources expanded with full ownerReferences and spec structures. featuregate-generator jobs include substantially reworked initContainer scripts for feature gate rendering and manifest generation.
  • Complex Script & Logic Changes
    Files: aws-node-termination-handler/*/...deployment.yaml, cluster-image-registry-operator/*/...deployment.yaml, cluster-network-operator/*/...deployment.yaml, cluster-version-operator/*/...deployment.yaml, etcd/*/...statefulset.yaml
    Summary: Added or expanded initContainers with multi-line bash scripts for token minting, DNS resolution, cluster membership reset, kubeconfig rewriting, payload preparation, and manifest generation. Updated main container commands and environment variables to support new initialization flows.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 12, 2026
@openshift-ci-robot

@muraee: This pull request explicitly references no jira issue.

Details

In response to this:

(The PR description is quoted here in full, identical to the description above except for one additional Summary bullet:)

  • Fixes OCPBUGS-65879

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from csrwng and sjenning February 12, 2026 17:13
@openshift-ci openshift-ci bot added the area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release label Feb 12, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: muraee

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added area/hypershift-operator Indicates the PR includes changes for the hypershift operator and API - outside an OCP release approved Indicates a PR has been approved by an approver from all required OWNERS files. area/platform/aws PR/issue for AWS (AWSPlatform) platform area/platform/azure PR/issue for Azure (AzurePlatform) platform area/platform/kubevirt PR/issue for KubeVirt (KubevirtPlatform) platform area/platform/openstack PR/issue for OpenStack (OpenStackPlatform) platform area/platform/powervs PR/issue for PowerVS (PowerVSPlatform) platform and removed do-not-merge/needs-area labels Feb 12, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

@muraee: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/unit | edede18 | link | true | /test unit


Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@control-plane-operator/controllers/hostedcontrolplane/testdata/azure-cloud-controller-manager/AROSwift/zz_fixture_TestControlPlaneComponents_azure_cloud_config_secret.yaml`:
- Around line 6-7: The last-applied-configuration annotation on the Secret
(annotation key hypershift.openshift.io/last-applied-configuration for Secret
name azure-cloud-config) contains embedded sensitive Secret data; remove that
annotation or replace its value with a non-sensitive placeholder in the fixture
so secret material is not stored in metadata. Locate the Secret YAML containing
the hypershift.openshift.io/last-applied-configuration annotation and either
delete the entire annotation entry or replace the embedded "cloud.conf" blob
with a sanitized placeholder (e.g., "<redacted>" or an empty object) and ensure
ownerReferences/metadata remain intact; for future fixtures prefer using
Server-Side Apply/managedFields or explicit, non-sensitive test values instead
of embedding real secret contents.

Comment on lines +6 to +7
annotations:
hypershift.openshift.io/last-applied-configuration: '{"apiVersion":"v1","data":{"cloud.conf":"ewogICJjbG91ZCI6ICIiLAogICJ0ZW5hbnRJZCI6ICIiLAogICJ1c2VNYW5hZ2VkSWRlbnRpdHlFeHRlbnNpb24iOiBmYWxzZSwKICAidXNlRmVkZXJhdGVkV29ya2xvYWRJZGVudGl0eUV4dGVuc2lvbiI6IGZhbHNlLAogICJzdWJzY3JpcHRpb25JZCI6ICIiLAogICJhYWRDbGllbnRJZCI6ICIiLAogICJhYWRDbGllbnRTZWNyZXQiOiAiIiwKICAiYWFkQ2xpZW50Q2VydFBhdGgiOiAiIiwKICAiYWFkRmVkZXJhdGVkVG9rZW5GaWxlIjogIiIsCiAgImFhZE1TSURhdGFQbGFuZUlkZW50aXR5UGF0aCI6ICIvbW50L2NlcnRzL2Nsb3VkLXByb3ZpZGVyLWNyZWRzIiwKICAicmVzb3VyY2VHcm91cCI6ICIiLAogICJsb2NhdGlvbiI6ICIiLAogICJ2bmV0TmFtZSI6ICJteVZuZXROYW1lIiwKICAidm5ldFJlc291cmNlR3JvdXAiOiAibXlSZXNvdXJjZUdyb3VwTmFtZSIsCiAgInN1Ym5ldE5hbWUiOiAibXlTdWJuZXROYW1lIiwKICAic2VjdXJpdHlHcm91cE5hbWUiOiAibXlOU0dOYW1lIiwKICAic2VjdXJpdHlHcm91cFJlc291cmNlR3JvdXAiOiAibXlSZXNvdXJjZUdyb3VwTmFtZSIsCiAgInJvdXRlVGFibGVOYW1lIjogIiIsCiAgImNsb3VkUHJvdmlkZXJCYWNrb2ZmIjogdHJ1ZSwKICAiY2xvdWRQcm92aWRlckJhY2tvZmZEdXJhdGlvbiI6IDYsCiAgInVzZUluc3RhbmNlTWV0YWRhdGEiOiBmYWxzZSwKICAibG9hZEJhbGFuY2VyU2t1IjogInN0YW5kYXJkIiwKICAiZGlzYWJsZU91dGJvdW5kU05BVCI6IHRydWUsCiAgImxvYWRCYWxhbmNlck5hbWUiOiAiIiwKICAiY2x1c3RlclNlcnZpY2VMb2FkQmFsYW5jZXJIZWFsdGhQcm9iZU1vZGUiOiAic2hhcmVkIgp9"},"kind":"Secret","metadata":{"name":"azure-cloud-config","namespace":"hcp-namespace","ownerReferences":[{"apiVersion":"hypershift.openshift.io/v1beta1","blockOwnerDeletion":true,"controller":true,"kind":"HostedControlPlane","name":"hcp","uid":""}]}}'

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (output elided):

# Find files that reference this annotation or handle last-applied-configuration
rg "last-applied-configuration" --type yaml --type go -A 2 -B 2 | head -100

# Search for annotation redaction or sanitization logic
rg "redact|sanitize" --type go -i | head -50

# Check if this fixture file has any special handling
fd "zz_fixture.*azure_cloud_config" --type f

# Look for test code that uses or validates this fixture
rg "azure_cloud_config_secret|AROSwift" --type go | head -20


🌐 Web query:

Kubernetes last-applied-configuration annotation security sensitive data best practices

💡 Result:

Why the last-applied-configuration annotation is sensitive

kubectl client-side apply stores (a copy of) what you applied in the kubectl.kubernetes.io/last-applied-configuration annotation. If the manifest contains sensitive values (commonly Secrets, but also tokens in CRDs, ConfigMaps, etc.), those values can end up persisted in object metadata and become readable to anyone/tools that can read that object. This pattern has caused real-world leakage (for example, Argo CD cluster Secrets where the full secret body was present in the annotation). [3]

Best practices (security-focused)

  1. Prefer Server-Side Apply (SSA) over client-side apply
    Kubernetes SSA is designed as the replacement for legacy client-side apply and does not rely on the legacy “last-applied” annotation for its core mechanism (it uses field management / managedFields instead). Use:

    • kubectl apply --server-side
      Also consider using a non-default field manager (example in docs) if you want to opt out of kubectl behaviors tied to the legacy annotation. [1]
  2. Do not manage Secrets by “applying” Secret manifests that contain the secret material
    Kubernetes explicitly warns against sharing/checking in Secret manifests (base64 is not encryption). If you put secret material in YAML and apply it, you’re increasing the number of places that secret can leak (git, CI logs, the annotation, etc.). [2]

  3. Remove the annotation from existing sensitive objects (and rotate if needed)
    If a Secret (or any sensitive resource) already has the annotation containing sensitive content, delete it from the live object (and consider rotating the secret, depending on exposure). Advisories for real incidents note that existing Secrets may require manual annotation removal. [3]

  4. Lock down RBAC around Secrets and other sensitive resources
    Minimize who can get/list/watch Secrets. Kubernetes notes that granting list access effectively enables retrieving Secret contents, and that broader permissions (like being able to create Pods in a namespace) can also lead to Secret exposure. [2]

  5. Use an external secret source when possible
    Prefer integrating a dedicated secret store (for example via CSI driver / external secrets patterns) so you don’t ship secret values inside Kubernetes manifests at all. (Kubernetes docs recommend avoiding sharing Secret manifests and discuss provider-based approaches.) [2]

Practical rule of thumb

If a manifest contains credentials, keys, tokens, or private endpoints: assume last-applied-configuration would be a data leak if present, and use SSA / external secrets + strict RBAC.

Sources: Kubernetes Server-Side Apply docs [1], Kubernetes Secret security best practices [2], CVE/incident documentation showing full secret bodies stored in last-applied-configuration [3].


🌐 Web query:

Kubernetes Secret metadata annotation logging security exposure risks

💡 Result:

Why Secret metadata.annotations can increase logging-related exposure risk

  1. Annotations are intended to be broadly retrievable metadata

    • Kubernetes annotations are “arbitrary … metadata” that clients can retrieve from objects. If you put credentials, tokens, or private URLs in annotations, you’re treating a widely-consumed metadata channel as a secret store. [1]
  2. Audit logging level can accidentally capture Secret bodies (including annotations)

    • Kubernetes audit policies have levels:
      • Metadata: logs request metadata but not request/response bodies.
      • Request / RequestResponse: include the request body (and possibly response body). [2]
    • If Secrets are logged at Request or RequestResponse, the audit log can contain the full Secret object payload from the API request/response (which includes metadata.annotations, and for Secrets also includes data). Many example policies explicitly log Secrets only at Metadata to reduce sensitive-data leakage risk. [2][3]
  3. “Secret data in logs” is a known class of risk

    • Kubernetes Secret guidance explicitly warns applications to avoid logging secret data in clear text—because logs are a common accidental exfiltration path. If you store sensitive values in annotations, they’re more likely to end up in log pipelines that capture object YAML/JSON for debugging or auditing. [4]

Practical risk scenarios

  • A cluster admin (or managed Kubernetes defaults) enables audit at Request for core resources → Secret create/update payloads (including annotations) can land in central logging. [2][3]
  • Troubleshooting / GitOps / admission controllers / operators log or export object manifests → annotations get copied into tickets, chat, SIEM, or traces (often with broader access than Secrets should have). [1][4]

Mitigations / best practices

  • Never put sensitive values in annotations (treat them as non-secret metadata). [1]
  • Ensure audit policy logs Secrets at Metadata, not Request/RequestResponse (unless you have strong redaction controls). [2][3]
  • Follow Secret handling guidance: restrict access, encrypt at rest, and ensure apps/operators avoid logging sensitive material. [4][5]

Sources: Kubernetes Annotations [1], Kubernetes Auditing (levels + example policy logging Secrets at Metadata) [2], GKE audit policy levels + Secrets at Metadata example [3], Kubernetes Secret good practices (avoid logging secrets) [4], Kubernetes Secrets cautions (sensitivity/access implications) [5].
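
The audit-policy mitigation above (log Secrets at the Metadata level only) corresponds to a minimal policy like the following sketch; the second rule's level is an illustrative choice, not a recommendation:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Never record Secret request/response bodies; request metadata only,
  # so annotations and data fields stay out of the audit log.
  - level: Metadata
    resources:
      - group: ""            # core API group
        resources: ["secrets"]
  # Other resources may be logged more verbosely if needed.
  - level: RequestResponse
```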


Remove or exclude sensitive data from the last-applied-configuration annotation.

The hypershift.openshift.io/last-applied-configuration annotation embeds Secret data, persisting secret material in non-secret metadata and risking exposure if the object is logged, audited, or exported. Per Kubernetes best practices, prefer Server-Side Apply (which uses managedFields instead of the legacy annotation) or omit Secret material from the annotation. Even in test fixtures, this pattern should not be replicated in production deployments.
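
One way to address this review finding (a hypothetical sketch, not code from this PR) is to redact Secret payloads from the decoded manifest before stamping the annotation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// redactSecretData returns a copy of a decoded manifest in which every
// value under "data" and "stringData" is replaced by a placeholder when
// the object is a Secret, so the last-applied annotation never embeds
// secret material. Non-Secret objects are returned unchanged.
func redactSecretData(obj map[string]any) map[string]any {
	if kind, _ := obj["kind"].(string); kind != "Secret" {
		return obj
	}
	out := make(map[string]any, len(obj))
	for k, v := range obj {
		out[k] = v
	}
	for _, field := range []string{"data", "stringData"} {
		if data, ok := out[field].(map[string]any); ok {
			redacted := make(map[string]any, len(data))
			for k := range data {
				redacted[k] = "<redacted>" // keep keys, drop values
			}
			out[field] = redacted
		}
	}
	return out
}

func main() {
	secret := map[string]any{
		"kind": "Secret",
		"data": map[string]any{"cloud.conf": "c2VjcmV0"},
	}
	b, _ := json.Marshal(redactSecretData(secret))
	fmt.Println(string(b))
}
```

Keeping the keys but dropping the values preserves enough of the shape for change detection on key additions and removals, at the cost of missing in-place value edits; a content hash per key would be an alternative design under the same constraint.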


Contributor

This appears to be an extremely costly update to a large fleet that is already being stressed with etcd size explosion caused by HyperShift node pools. I don't think IBM Cloud can afford having hypershift.openshift.io/last-applied-configuration added. I would like to discuss with you all at our interlock meeting next week.

@muraee
Contributor Author

muraee commented Feb 13, 2026

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 13, 2026
@muraee muraee changed the title NO-JIRA: fix(upsert): add last-applied-configuration annotation to detect manifest removals WIP: fix(upsert): add last-applied-configuration annotation to detect manifest removals Feb 13, 2026
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 13, 2026
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 21, 2026
@openshift-merge-robot
Contributor

PR needs rebase.

Details



Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/control-plane-operator: Indicates the PR includes changes for the control plane operator - in an OCP release
  • area/hypershift-operator: Indicates the PR includes changes for the hypershift operator and API - outside an OCP release
  • area/platform/aws: PR/issue for AWS (AWSPlatform) platform
  • area/platform/azure: PR/issue for Azure (AzurePlatform) platform
  • area/platform/kubevirt: PR/issue for KubeVirt (KubevirtPlatform) platform
  • area/platform/openstack: PR/issue for OpenStack (OpenStackPlatform) platform
  • area/platform/powervs: PR/issue for PowerVS (PowerVSPlatform) platform
  • do-not-merge/hold: Indicates that a PR should not merge because someone has issued a /hold command.
  • do-not-merge/work-in-progress: Indicates that a PR should not merge because it is a work in progress.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • needs-rebase: Indicates a PR cannot be merged because it has merge conflicts with HEAD.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants