Mastering Taints, Untaints, Node Affinity, and Anti-Affinity in Kubernetes: A Complete Guide to Smarter Scheduling

Harvy
4 min read · Feb 3, 2025


Kubernetes provides powerful mechanisms to control pod scheduling and manage how pods are distributed across nodes in a cluster. Two essential concepts in this process are taints and tolerations and affinity/anti-affinity. These features allow you to control where pods are placed, making sure they run on the right nodes or avoid co-locating with specific pods.

In this article, we’ll dive into how taints, untaints, node affinity, and anti-affinity work in Kubernetes and how you can use them to manage your pod scheduling more effectively.

What Are Taints and Tolerations in Kubernetes?

Taints are applied to nodes and allow you to prevent pods from being scheduled on those nodes unless they explicitly tolerate the taint. By using taints, you can control the scheduling of pods based on the needs of your workloads.

How to Apply a Taint to a Node:

To apply a taint to a node, use the following command:

kubectl taint nodes <node-name> key=value:effect

Where:

  • <node-name> is the name of the node to which you want to add the taint.
  • key=value defines the taint key and value.
  • effect can be one of:
      • NoSchedule: pods without a matching toleration cannot be scheduled on this node.
      • PreferNoSchedule: the scheduler tries to avoid placing pods without a matching toleration on this node, but may still do so if necessary.
      • NoExecute: pods without a matching toleration are evicted from the node if already running, and new ones are not scheduled.

Example:

kubectl taint nodes node1 key=value:NoSchedule

This taint prevents any pod from being scheduled on node1 unless the pod has a corresponding toleration.

How to Add a Toleration to a Pod:

To make sure a pod can be scheduled on a node with a taint, you must add a matching toleration to the pod’s configuration.

Here’s an example of a pod with a toleration for the taint we applied earlier:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

In this example, the pod will be scheduled on node1 because it has the matching toleration for the taint key=value:NoSchedule.
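A toleration can also match a taint key regardless of its value by using the Exists operator, and for NoExecute taints the optional tolerationSeconds field bounds how long the pod may keep running after the taint appears. A minimal sketch (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod          # hypothetical name for illustration
spec:
  tolerations:
  - key: "key"
    operator: "Exists"        # matches any value of the taint key "key"
    effect: "NoExecute"
    tolerationSeconds: 3600   # pod is evicted 1 hour after the taint is applied
  containers:
  - name: nginx
    image: nginx
```

With operator: Exists, no value field is specified; the toleration matches the taint by key and effect alone.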

How to Remove a Taint from a Node (Untaint a Node):

To remove a taint from a node, run the taint command again with a trailing hyphen (-) after the effect:

kubectl taint nodes <node-name> key=value:effect-

Example:

kubectl taint nodes node1 key=value:NoSchedule-

This removes the taint from node1, and pods without the toleration can once again be scheduled on that node.

Node Affinity: Scheduling Pods Based on Node Labels

Node affinity allows you to constrain which nodes a pod can be scheduled on based on labels attached to the nodes. This feature helps ensure that certain workloads are run on specific types of hardware or resources.

Example of Node Affinity:

Let’s say you want to ensure that your pod is scheduled only on nodes that have a specific label, such as disktype=ssd. You can specify this in the pod's configuration like this:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx

This pod will only be scheduled on nodes that have the label disktype=ssd. If no such node exists, the pod remains in the Pending state.
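Node affinity matches labels that you (or your cloud provider) have attached to nodes. Assuming the label is not already present, you could add it yourself (node1 is an assumed node name; this requires access to a running cluster):

```shell
# Label the node so the affinity rule above can match it
kubectl label nodes node1 disktype=ssd

# Verify the label was applied
kubectl get nodes --show-labels
```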

Preferred Node Affinity:

If you don’t want to enforce strict requirements and prefer to schedule the pod on nodes with certain labels, but are okay with alternatives, you can use preferred node affinity:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx

This preference allows the pod to be scheduled on nodes that match the disktype=ssd label, but it will still be scheduled on other nodes if necessary.
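Several preferences can be combined, each with its own weight (1–100); the scheduler sums the weights of the terms a node satisfies and favors the node with the highest total. A sketch (the region label and its value are illustrative assumptions):

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      - weight: 20
        preference:
          matchExpressions:
          - key: region          # assumed label for illustration
            operator: In
            values:
            - us-east-1
```

Here a node matching both labels scores 100, one matching only disktype=ssd scores 80, and so on.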

Pod Anti-Affinity: Preventing Pods from Being Co-located

Pod anti-affinity ensures that pods are not scheduled on the same node as other pods that match a certain label. This is useful for spreading workloads across multiple nodes to avoid single points of failure.

Example of Pod Anti-Affinity:

Let’s say you want to avoid running two pods with the label app=frontend on the same node. You can set up pod anti-affinity like this:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - frontend
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: nginx
    image: nginx

In this example:

  • labelSelector specifies that the pod should not be scheduled on a node that has another pod with the label app=frontend.
  • topologyKey: kubernetes.io/hostname means the anti-affinity rule applies per node. This prevents multiple pods with the same label from being scheduled on the same node.
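Pod anti-affinity also comes in a preferred (soft) form: the scheduler tries to spread matching pods apart but will still co-locate them if no other node fits. Note the slightly different shape, where each entry pairs a weight with a podAffinityTerm. A sketch of the spec fragment:

```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - frontend
          topologyKey: "kubernetes.io/hostname"
```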

Using Anti-Affinity with Other Topology Keys:

You can also use other topology keys, such as topology.kubernetes.io/zone (which replaces the deprecated failure-domain.beta.kubernetes.io/zone), to spread your pods across different availability zones.
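For example, the anti-affinity rule above scoped per zone rather than per node; only the topologyKey changes, using the well-known zone label topology.kubernetes.io/zone:

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - frontend
        topologyKey: "topology.kubernetes.io/zone"
```

With this rule, two pods labeled app=frontend never land in the same availability zone.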

Combining Affinity and Anti-Affinity

In Kubernetes, you can combine node affinity and pod anti-affinity in the same pod specification to achieve more advanced scheduling requirements.

Example of Combined Affinity and Anti-Affinity:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - frontend
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: nginx
    image: nginx

In this case:

  • The pod will be scheduled on nodes that have the label disktype=ssd.
  • The pod will not be scheduled on a node that already has another pod with the label app=frontend.

Final Thoughts

By using taints, tolerations, node affinity, and pod anti-affinity, you can gain fine-grained control over pod placement in your Kubernetes cluster. These features allow you to:

  • Prevent pods from being scheduled on specific nodes (via taints).
  • Ensure pods are scheduled on specific types of nodes (via node affinity).
  • Avoid co-locating certain pods (via pod anti-affinity).

With the right combination of these features, you can optimize your cluster’s resource utilization, reliability, and performance.

Written by Harvy

Extensive experience working on various platforms, specializing in Linux, DevOps, and DevSecOps, along with expertise in AWS and GCP cloud services
