✨ Add resource requests and limits to kube-rbac-proxy #2429
Conversation
@camilamacedo86, when you have some time, can you have a look at this PR?
Signed-off-by: Frederic Giloux <[email protected]>
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/hold @fgiloux we might not want to go forward with this one. Why?
In that case, it seems like having it set only for the operator is enough. Otherwise, we will probably be requesting double. I think we need to re-check.
@camilamacedo86 It is true that for scheduling the aggregated value at the Pod level is taken into consideration. On the node it depends on whether the --cgroups-per-qos flag has been set for the kubelet; that is the case with OpenShift, but I am not sure about other distributions. This determines whether a cgroup level (for aggregated accounting) gets created for Pods or not. That said, adding a process, the kube-rbac-proxy in this case, is not neutral in terms of resource consumption. I would agree that the CPU consumption may well be below the approximation level of the controller-manager, but that is not the case for memory. In the end it is not that we would be requesting "double"; it is that we would account for the overhead that the kube-rbac-proxy brings. Last point: you motivated this enhancement first by the rejection of Pods when QoS and LimitRanges are configured without default values. LimitRanges can be defined for Pods but also for Containers. If a constraint is set at the Container level without default values, the Pod may still get rejected if it does not provide them.
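For reference, a minimal sketch of the kubelet setting mentioned above, using the file-based KubeletConfiguration rather than the equivalent command-line flag; the value shown is the upstream default and is only illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# When true (the default), the kubelet creates per-Pod cgroups under the
# QoS-class hierarchy, so resource accounting is aggregated at the Pod level.
cgroupsPerQOS: true
```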
I need to correct my last sentence: that scenario is not possible. If a LimitRange is created with a constraint at the Container level, defaults are always created automatically.
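For illustration, a minimal sketch of that defaulting behavior, assuming a Container-level constraint with only max set; the name and values are placeholders:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits  # placeholder name
spec:
  limits:
    - type: Container
      max:
        cpu: "1"
        memory: 512Mi
      # Even without an explicit `default` or `defaultRequest`, the API server
      # derives them from `max`, so containers that omit resources still get
      # values applied rather than having the Pod rejected.
```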
I signed it
/ok-to-test
/hold cancel
@camilamacedo86 just missing lgtm now. Let me know if this is good to go or if anything needs to be changed.
varshaprasad96 left a comment
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: camilamacedo86, fgiloux, varshaprasad96. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Signed-off-by: Frederic Giloux [email protected]
Description
This change introduces default resource requests and limits for the kube-rbac-proxy container scaffolded by KubeBuilder. It also hints to users that these values should be amended to match their specific requirements.
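For illustration, a sketch of what the scaffolded auth-proxy patch could look like with this change; the file path, image tag, and resource values are assumptions for the example, not necessarily what the PR merges:

```yaml
# config/default/manager_auth_proxy_patch.yaml (path assumed for illustration)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
        - name: kube-rbac-proxy
          image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0  # assumed tag
          args:
            - "--secure-listen-address=0.0.0.0:8443"
            - "--upstream=http://127.0.0.1:8080/"
            - "--logtostderr=true"
            - "--v=0"
          # Default requests/limits; amend these to match your own workload.
          resources:
            limits:
              cpu: 500m
              memory: 128Mi
            requests:
              cpu: 5m
              memory: 64Mi
```

A small CPU request with a modest memory limit keeps the proxy's overhead visible to the scheduler without significantly inflating the overall Pod request.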
Motivation
This is similar to what is already done for the controller-manager.
Fixes #2428