Version: 0.9.0

Kubernetes Shim Design

GitHub repo: https://github.com/apache/incubator-yunikorn-k8shim

Please read the architecture doc before reading this one. You will need to understand YuniKorn's three-layer design in order to understand the role of the Kubernetes shim.

The Kubernetes shim

The YuniKorn Kubernetes shim is responsible for talking to Kubernetes. It translates Kubernetes cluster resources and resource requests via the scheduler interface and sends them to the scheduler core. When a scheduling decision is made, it is responsible for binding the pod to the chosen node. All communication between the shim and the scheduler core goes through the scheduler-interface.
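
To make the binding step concrete, the sketch below shows one way a pod could be bound to the node chosen by the scheduler core, using the Kubernetes Bind API via client-go. This is a hedged illustration rather than the shim's actual code: the helper name bindPod and the use of the in-cluster config are assumptions.

// Minimal sketch (not the shim's actual implementation) of binding a pod to
// the node chosen by the scheduler core, using the Kubernetes Bind API.
package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// bindPod is a hypothetical helper: it issues a Binding for the given pod
// against the node name returned by the scheduler core.
func bindPod(client kubernetes.Interface, pod *v1.Pod, nodeName string) error {
    binding := &v1.Binding{
        ObjectMeta: metav1.ObjectMeta{
            Name:      pod.Name,
            Namespace: pod.Namespace,
            UID:       pod.UID,
        },
        Target: v1.ObjectReference{Kind: "Node", Name: nodeName},
    }
    return client.CoreV1().Pods(pod.Namespace).Bind(context.TODO(), binding, metav1.CreateOptions{})
}

func main() {
    // In-cluster config is assumed here; the shim would use whatever
    // kubeconfig its deployment provides.
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)
    _ = client
    _ = bindPod // called once an allocation arrives over the scheduler-interface
}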

The admission controller

The admission controller runs in a separate pod. It runs a mutation webhook and a validation webhook, where:

  1. The mutation webhook mutates the pod spec by:
    • adding schedulerName: yunikorn
      • by explicitly specifying the scheduler name, the pod will be scheduled by the YuniKorn scheduler
    • adding applicationId label
      • when a label applicationId exists, reuse the given applicationId
      • when a label spark-app-selector exists, reuse the given spark app ID
      • otherwise, assign a generated application ID for this pod, using the convention yunikorn-<namespace>-autogen. This generated ID is unique per namespace (see the label-resolution sketch after this list)
    • adding queue label
      • when a label queue exists, reuse the given queue name. Note: if a placement rule is enabled, the value set in the label is ignored
      • otherwise, add queue: root.default
  2. The validation webhook validates the configuration set in the configmap
    • this is used to prevent writing malformed configuration into the configmap
    • the validation webhook calls the scheduler's validation REST API to validate configmap updates (see the validation sketch after this list)
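
The label-resolution rules above can be summarized in a short sketch. This is illustrative only, assuming the webhook works from the pod's namespace and labels; the function names resolveApplicationID and resolveQueue are not the shim's actual identifiers.

package main

import "fmt"

// resolveApplicationID mirrors the applicationId rules described above.
// Names and structure are assumptions, not the shim's actual code.
func resolveApplicationID(namespace string, labels map[string]string) string {
    if id, ok := labels["applicationId"]; ok {
        return id // reuse an explicitly set applicationId label
    }
    if id, ok := labels["spark-app-selector"]; ok {
        return id // reuse the given Spark app ID
    }
    return "yunikorn-" + namespace + "-autogen" // generated, unique per namespace
}

// resolveQueue mirrors the queue rules described above; note the label value
// is ignored by the core when a placement rule is enabled.
func resolveQueue(labels map[string]string) string {
    if queue, ok := labels["queue"]; ok {
        return queue // reuse the given queue name
    }
    return "root.default" // default queue
}

func main() {
    labels := map[string]string{"queue": "root.sandbox"}
    fmt.Println(resolveApplicationID("spark-test", labels)) // yunikorn-spark-test-autogen
    fmt.Println(resolveQueue(labels))                       // root.sandbox
}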

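The validation path is essentially an HTTP round trip from the webhook to the scheduler's configuration-validation REST endpoint. The sketch below illustrates that flow; the endpoint path /ws/v1/validate-conf, the port in the example URL, and the error handling are assumptions for illustration, not a description of the exact API.

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

// validateConf forwards the proposed configmap content to the scheduler's
// configuration validation REST API. The endpoint path is an assumption for
// illustration; check the scheduler REST API documentation for the real one.
func validateConf(schedulerURL string, newConfig []byte) error {
    resp, err := http.Post(schedulerURL+"/ws/v1/validate-conf", "application/json",
        bytes.NewReader(newConfig))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        body, _ := io.ReadAll(resp.Body)
        return fmt.Errorf("configuration rejected: %s", body)
    }
    return nil
}

func main() {
    // The webhook would deny the configmap update when validateConf returns an error.
    // The service URL and port here are placeholders, not the real defaults.
    err := validateConf("http://yunikorn-service:9080", []byte("partitions: []"))
    fmt.Println("validation error:", err)
}
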
Admission controller deployment

Currently, the admission controller is deployed by a post-start hook in the scheduler deployment; similarly, it is uninstalled by a pre-stop hook. See the related code here. During installation, the admission controller is expected to always be co-located with the scheduler pod. This is achieved by adding pod affinity to the admission-controller pod, like:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: component
        operator: In
        values:
        - yunikorn-scheduler
    topologyKey: "kubernetes.io/hostname"

It also tolerates all taints, in case the scheduler pod has tolerations set:

tolerations:
- operator: "Exists"