KIM Snatch

Overview

KIM Snatch is part of Kyma Infrastructure Manager (KIM) and functions within the worker pool feature. It is deployed on all Kyma runtime instances and manages the assignment of Kyma workloads to the mandatory Kyma worker pool present in all Kyma clusters.

In this way, KIM Snatch ensures that Kyma-related workloads, such as operators for Kyma modules, use only the Kyma worker pool. This leaves the full capacity of additional customized worker pools entirely available for user workloads. KIM Snatch reduces the risk of incompatibility issues by keeping Kyma container images isolated from customized worker pools.

Technical Approach

KIM Snatch introduces a Kubernetes mutating admission webhook.

It intercepts all Pods created in a Kyma-managed namespace. Kyma Lifecycle Manager (KLM) always labels a managed namespace with operator.kyma-project.io/managed-by: kyma, and KIM Snatch reacts only to Pods created in one of these labeled namespaces. Typical Kyma-managed namespaces are kyma-system or, if the Kyma Istio module is used, istio.
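
For example, a managed namespace carries the label mentioned above; the webhook only acts on Pods in namespaces with this label (the kyma-system namespace is shown here, and the selector mechanism itself is sketched later in the webhook configuration):

```yaml
# A Kyma-managed namespace as labeled by Kyma Lifecycle Manager (KLM).
# The KIM Snatch webhook only intercepts Pods created in namespaces
# that carry this label.
apiVersion: v1
kind: Namespace
metadata:
  name: kyma-system
  labels:
    operator.kyma-project.io/managed-by: kyma
```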

Figure: KIM Snatch Webhook

Before the Pod is handed over to the Kubernetes scheduler, KIM Snatch adds nodeAffinity to the Pod's manifest. This informs the Kubernetes scheduler to prefer nodes within the Kyma worker pool for this Pod.
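
As a rough illustration, the mutated Pod spec could look like the following sketch. The node label identifying the Kyma worker pool (worker.gardener.cloud/pool: kyma) is an assumption used only for illustration; the actual key and value applied by KIM Snatch may differ.

```yaml
# Sketch of a Pod after mutation by the KIM Snatch webhook.
# The node label identifying the Kyma worker pool is an assumption
# (worker.gardener.cloud/pool: kyma); the real key/value may differ.
apiVersion: v1
kind: Pod
metadata:
  name: example-kyma-operator        # illustrative name
  namespace: kyma-system
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: worker.gardener.cloud/pool
                operator: In
                values:
                  - kyma
  containers:
    - name: manager
      image: example.registry/operator:latest   # illustrative image
```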

Limitations

Using the Kyma Worker Pool Is Not Enforced

Assigning a Pod to a specific worker pool can have the following drawbacks:

  • Resources of the preferred worker pool are exhausted, while other worker pools still have free capacity.
  • If no suitable worker pool can be found and the node affinity is set as a "hard" rule, the Pod is not scheduled.

To overcome these limitations, we use preferredDuringSchedulingIgnoredDuringExecution so that the node affinity configured on Kyma workloads is a "soft" rule; for more details, see the Kubernetes documentation. The Kubernetes scheduler prefers the Kyma worker pool but, if the Pod cannot be scheduled there, also considers other worker pools.
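
For contrast, a "hard" rule would use requiredDuringSchedulingIgnoredDuringExecution, as in the following sketch (same assumed node label as above); with it, the Pod stays Pending when no node of the Kyma worker pool fits, which is exactly what KIM Snatch avoids:

```yaml
# A "hard" node affinity rule, shown only for contrast; KIM Snatch
# deliberately does not use this form. Node label key/value assumed.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: worker.gardener.cloud/pool
              operator: In
              values:
                - kyma
```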

Kyma Workloads Are Not Intercepted

Unavailable Webhook Is Ignored by Kubernetes

Kubernetes API calls can be heavily impacted if a mandatory admission webhook is not responsive enough, which can lead to timeouts and severe performance degradation.

To prevent such side effects, the KIM Snatch webhook is configured with a failure-tolerant policy that allows Kubernetes to continue in case of errors. This implies that downtimes or failures of the webhook are tolerated, and Pods are then scheduled without nodeAffinity.
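
In Kubernetes terms, this behavior corresponds to failurePolicy: Ignore on the webhook registration. The following is a minimal sketch; the webhook, service, and path names are assumptions, and only the failure policy and namespace selector reflect the behavior described above.

```yaml
# Sketch of a failure-tolerant webhook registration.
# Object, webhook, and service names are illustrative assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: kim-snatch
webhooks:
  - name: pods.kim-snatch.kyma-project.io
    failurePolicy: Ignore        # errors or timeouts do not block Pod creation
    timeoutSeconds: 10
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        operator.kyma-project.io/managed-by: kyma
    clientConfig:
      service:
        name: kim-snatch         # illustrative service name
        namespace: kyma-system
        path: /mutate            # illustrative path
```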

Already Scheduled Pods Are Ignored by Webhook

Additionally, Pods that are already scheduled and running on a worker node do not receive nodeAffinity, because the webhook only intercepts Pods that have not yet been scheduled. Such Pods would have to be restarted to receive nodeAffinity, and the webhook does not restart running Pods to avoid service interruptions or a degraded user experience.

Contributing

See the Contributing Rules.

Code of Conduct

See the Code of Conduct document.

Licensing

See the license file.
