@mkowalski

We have observed that the current way of rendering cluster and service networks as arrays of values generates the following bootstrap config:

```
service-cluster-ip-range:
- 172.30.0.0/16
- fd65:172:16::/112
```

which subsequently creates the following kube-controller-manager pod manifest:

```
args:
- --service-cluster-ip-range=172.30.0.0/16
- --service-cluster-ip-range=fd65:172:16::/112
```

which the running process interprets as:

```
FLAG: --service-cluster-ip-range="fd65:172:16::/112"
```

i.e. with the IPv4 range missing. The reason for this is that the process takes into account only the last instance of the parameter and does not concatenate the values provided.
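
A minimal stand-alone sketch of this last-instance-wins behaviour, using Go's standard `flag` package (the real binary parses its flags through pflag, which treats plain string flags the same way):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	fs := flag.NewFlagSet("kube-controller-manager", flag.ExitOnError)
	cidrs := fs.String("service-cluster-ip-range", "", "CIDR range for services")

	// The broken pod manifest effectively passed the flag twice:
	fs.Parse([]string{
		"--service-cluster-ip-range=172.30.0.0/16",
		"--service-cluster-ip-range=fd65:172:16::/112",
	})

	// Each occurrence overwrites the previous one, so only the
	// last value survives.
	fmt.Println(*cidrs) // prints: fd65:172:16::/112
}
```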

To avoid this, we now render the networks as a single comma-separated string, so that the initial config file contains:

```
service-cluster-ip-range:
- "172.30.0.0/16,fd65:172:16::/112"
```

Exactly the same pattern applies to the cluster networks, which live under the `cluster-cidr` key.

Thanks to this, the kube-controller-manager process will run with the correct set of networks.
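
For illustration, a hypothetical sketch of the fix (the helper name is made up, not the operator's actual API): the renderer emits a single array entry carrying every network, so the pod manifest ends up with exactly one instance of the flag.

```go
package main

import (
	"fmt"
	"strings"
)

// renderServiceClusterIPRange joins all configured service networks
// into one comma-separated entry; the config rendering then produces
// a single --service-cluster-ip-range flag instead of one per network.
func renderServiceClusterIPRange(serviceCIDRs []string) []string {
	return []string{strings.Join(serviceCIDRs, ",")}
}

func main() {
	fmt.Println(renderServiceClusterIPRange([]string{
		"172.30.0.0/16",
		"fd65:172:16::/112",
	}))
	// Output: [172.30.0.0/16,fd65:172:16::/112]
}
```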

Contributes-to: OCPBUGS-18641

@openshift-ci-robot openshift-ci-robot added jira/severity-critical Referenced Jira bug's severity is critical for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. labels Sep 15, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 15, 2023
@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Sep 15, 2023
@openshift-ci-robot

@mkowalski: This pull request references Jira Issue OCPBUGS-18641, which is invalid:

  • expected the bug to target the "4.15.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci
Contributor

openshift-ci bot commented Sep 15, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@mkowalski
Author

After a manual test (via clusterbot `launch 4.14,openshift/cloud-provider-vsphere#47,openshift/cluster-kube-controller-manager-operator#745 vsphere,dualstack`) I can see in the kube-controller-manager log that the networks are passed correctly, i.e.

```
flags.go:64] FLAG: --cluster-cidr="10.128.0.0/14,fd65:10:128::/56"
[...]
flags.go:64] FLAG: --service-cluster-ip-range="172.30.0.0/16,fd65:172:16::/112"
```

and the KubeControllerManagerConfig CR is rendered as

```
kind: KubeControllerManagerConfig
[...]
  cluster-cidr:
  - 10.128.0.0/14,fd65:10:128::/56
  service-cluster-ip-range:
  - 172.30.0.0/16,fd65:172:16::/112
```

/cc @rbbratta
/cc @JoelSpeed
/cc @cybertron

@mkowalski mkowalski marked this pull request as ready for review September 15, 2023 13:37
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 15, 2023
@openshift-ci openshift-ci bot requested review from mfojtik and soltysh September 15, 2023 13:38
@mkowalski
Author

/retitle NO-ISSUE: Render networks as comma-separated value

@openshift-ci openshift-ci bot changed the title OCPBUGS-18641: Render networks as comma-separated value NO-ISSUE: Render networks as comma-separated value Sep 27, 2023
@openshift-ci-robot openshift-ci-robot removed the jira/severity-critical Referenced Jira bug's severity is critical for the branch this PR is targeting. label Sep 27, 2023
@openshift-ci-robot

@mkowalski: This pull request explicitly references no jira issue.


@openshift-ci-robot openshift-ci-robot removed the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Sep 27, 2023
@cybertron
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 27, 2023
@openshift-ci
Contributor

openshift-ci bot commented Sep 27, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: cybertron, mkowalski
Once this PR has been reviewed and has the lgtm label, please assign deads2k for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 27, 2023
@mkowalski
Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 3, 2024
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2024
@mkowalski
Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2024
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2024
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 2, 2024
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this Sep 2, 2024
@openshift-ci
Contributor

openshift-ci bot commented Sep 2, 2024

@openshift-bot: Closed this PR.


@mkowalski
Author

/reopen

@openshift-ci openshift-ci bot reopened this Sep 9, 2024
@openshift-ci
Contributor

openshift-ci bot commented Sep 9, 2024

@mkowalski: Reopened this PR.


@openshift-ci
Contributor

openshift-ci bot commented Sep 9, 2024

@mkowalski: all tests passed!

Full PR test history. Your PR dashboard.


@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this Oct 10, 2024
@openshift-ci
Contributor

openshift-ci bot commented Oct 10, 2024

@openshift-bot: Closed this PR.

