No node lives forever – which is why the ability to drain nodes in Kubernetes is so important. Draining allows you to remove pods from the node that is hosting them, which in turn makes it possible to move them to another node without disrupting your applications and services. If you need to shut down a node permanently or perform maintenance on it, you’ll want to know how to drain Kubernetes nodes.
And we’re here to tell you exactly how to do it. This article breaks down everything you’ve ever wanted to know about draining nodes in Kubernetes using kubectl.
What is kubectl drain node?
In Kubernetes, the kubectl drain command removes workloads from a given node. Running it tells Kubernetes that you no longer want any pods to run on the specified node.

The ability to drain nodes via kubectl is valuable because it provides a graceful way of removing workloads from a server within your Kubernetes cluster prior to shutting the server down or performing maintenance. If you simply turned the node off without draining it first, one or more pods running on it would crash. Major maintenance operations such as updating the operating system or kernel could also cause any pods running on the node to crash if they are not drained first.
If pods stop running unexpectedly due to issues like these, Kubernetes will attempt to reschedule them on other nodes. But that process would take some time, resulting in downtime for your applications – and a negative experience for your users.
With kubectl drain node, you can effectively tell Kubernetes, “Hey, we need this node to stop hosting pods; please take everything off of it.”
Draining mirror pods
Importantly, kubectl drain doesn’t drain mirror pods. Mirror pods are a special type of pod that represents static pods on a node within the Kubernetes API server. Mirror pods offer a way to track the state of static pods (which are managed directly by kubelet) in the Kubernetes API. You can’t remove mirror pods with the drain command because you can’t delete them using the API server.
In most cases, this doesn’t pose a challenge because you wouldn’t typically need to delete a mirror pod. But it’s helpful to know nonetheless that when you remove pods from a node, mirror pods are an exception.
Why should you drain nodes in Kubernetes?
Strictly speaking, you don’t need to drain nodes in Kubernetes. There is nothing stopping you from logging into a node and running a command like shutdown -h now, which tells most Linux-based servers to shut down immediately. Or, if you’re really brazen, you could simply pull the plug out of the wall (if the node is a physical machine) to shut the node down.
But if you shut down nodes or perform major maintenance work without taking the time to drain nodes first, you risk some serious problems. Any pods hosted on the node will stop running suddenly. Even if they are rescheduled quickly on other nodes, there will be some amount of downtime, and user requests might be permanently dropped during the rescheduling process. There is also a risk of data loss or file system corruption if a node shuts down or crashes suddenly without allowing pods to turn off gracefully first.
When you drain nodes, you avoid these risks and help provide benefits such as:
- Low-risk cluster downscaling: Draining helps you remove nodes from a cluster, which you may choose to do to save money if you no longer need as many nodes as your cluster originally included.
- Ensuring application availability during maintenance: Node draining mitigates the risk of application failures during maintenance operations like operating system updates. Even in cases where maintenance work shouldn’t cause a pod to crash, tasks like software updates can be unpredictable. It’s a best practice to drain nodes before undertaking work that could result in application problems.
- Reducing the risk of data loss: As we mentioned, nodes that shut down suddenly without giving applications the chance to turn themselves off gracefully could cause data loss events. This can happen when applications are actively writing data at the time the server shuts down.
Step-by-step tutorial for draining nodes
The process of draining nodes is fairly straightforward, although it involves multiple steps. Here’s a detailed tutorial.
Step 1: Cordon the node
The first step is to cordon the node by marking it as unschedulable. This tells Kubernetes that no new pods should be scheduled to run on the node (it doesn’t affect any pods that are already running or scheduled to run on the node; that’s where the next step – draining the node – comes in).
To cordon a node, first identify the name of the node you want to cordon. You can get a list of nodes using:
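```bash
kubectl get nodes
```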
Then, cordon the node using the kubectl cordon command:
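```bash
kubectl cordon worker-node-1
```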
To verify that the node was successfully cordoned, run kubectl get nodes again. The output should include a line similar to the one below for the node named worker-node-1.
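```
NAME            STATUS                     ROLES    AGE   VERSION
worker-node-1   Ready,SchedulingDisabled   <none>   12d   v1.29.0
```
(The ROLES, AGE, and VERSION values shown here are illustrative and will differ in your cluster.)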
Here, we can tell that the node has been cordoned because its status includes SchedulingDisabled. This means cordoning is in effect for that node.
Step 2: Drain the node
Again, cordoning a node prevents Kubernetes from scheduling new pods on it, but it doesn’t do anything to reschedule pods that are already hosted on the node.
To do that, we use the kubectl drain command. This command tells Kubernetes to remove (or “evict,” to use Kubernetes jargon) any pods currently scheduled on the node. You can drain the node with:
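```bash
kubectl drain worker-node-1
```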
Note that if desired, you can run this command with the flag --ignore-daemonsets. Doing so tells Kubernetes to avoid evicting any DaemonSet-managed pods (meaning ones that have been scheduled on the node using a DaemonSet). Typically, if you created a DaemonSet, it’s because you want to run copies of the same pod on multiple nodes. DaemonSet-managed pods are often used for use cases like deploying Kubernetes metrics and monitoring software. You may want to keep such software running until the node completely shuts down or during maintenance work.
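For example:

```bash
kubectl drain worker-node-1 --ignore-daemonsets
```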

Step 3: Confirm pod eviction
After you start the pod eviction process using kubectl drain node, it may take a little time to finish, since you need to wait for the pods to shut down. You’ll know the process is complete when kubectl drain returns successfully.
You can also confirm that it has completed successfully by running the command:
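```bash
kubectl get pods --all-namespaces -o wide
```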
The output lists all of the pods in your cluster and shows the node each one is running on. Check to make sure no pods are running on the node you have drained (unless you excluded DaemonSet-managed pods from eviction using the --ignore-daemonsets flag).
Step 4: Perform maintenance work or delete node
Once you’ve confirmed that eviction was successful, you can perform maintenance work on the node.
Alternatively, if your goal is to remove the node from your cluster entirely, run the following command to delete it:
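```bash
kubectl delete node worker-node-1
```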
After this, you can shut the node down if desired (you could also keep it running if you want, but it will no longer be part of your Kubernetes cluster).
Step 5: Re-enable scheduling
If you’ve completed maintenance work and want to restore the node to a normal state, you can uncordon it with:
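```bash
kubectl uncordon worker-node-1
```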
This tells Kubernetes that the node can now be used to schedule pods as normal. Whether or not Kubernetes actually places any pods on the node right away will depend on whether there are any pods currently in need of scheduling, and/or whether you’ve configured a DaemonSet or similar resource to force pods to run on the node.
To verify that the node has been successfully uncordoned, run kubectl get nodes and check the node’s status. It should indicate that the node is Ready but with no mention of SchedulingDisabled.
How to use the kubectl drain command: Examples
We just walked through the basic steps for using kubectl drain and talked about some of its options, but let’s dive a bit deeper by looking at additional examples.
Syntax and basic usage
As we’ve noted, the basic syntax for kubectl drain is pretty simple. You just have to specify the name of the node you want to drain:
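```bash
kubectl drain <node-name>
```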
Command options and flags
You can run the kubectl drain command with no options or arguments other than the name of the node you want to drain. However, the command supports several options and flags, which help to manage the way pod eviction takes place:
- --ignore-daemonsets: Ignores DaemonSet-managed pods (since they should not be evicted). (Recommended)
- --delete-emptydir-data: Forces eviction of pods using emptyDir volumes, which will cause data loss. (Use with caution!)
- --force: Removes pods even if they are not managed by a controller (e.g., standalone pods).
- --grace-period=<seconds>: Overrides pod termination grace period. Defaults to respecting the pod’s terminationGracePeriodSeconds.
- --timeout=<duration>: Specifies the maximum time to wait before failing the drain command. Example: --timeout=60s.
- --disable-eviction: Uses pod deletion instead of eviction API (not recommended for production).
- --skip-wait-for-delete-timeout=<seconds>: Skips waiting if a pod is stuck terminating for the specified duration.
- --pod-selector="<label>": Only evicts pods that match the specified label selector.
- --dry-run=client: Prints information about the drain operation instead of actually performing it. (Recent kubectl versions require a value, such as client, for this flag.)
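For example, a typical maintenance drain might combine several of these flags (the node name here is illustrative):

```bash
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data --grace-period=60 --timeout=120s
```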
Challenges and solutions in node draining
In the process of draining nodes, you may face the following common challenges.
Handling StatefulSets

A StatefulSet is a group of pods with unique, persistent identities. StatefulSets are often used when pods need to start or stop in a specific order. Draining a node that hosts StatefulSet pods can be a bit challenging because those pods have persistent identities and ordered termination, so Kubernetes can’t replace them as freely as it does pods from a regular Deployment.
To check whether a node includes any StatefulSets, run:
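One way to do this (a sketch, assuming the node is named worker-node-1) is to list the pods on the node along with the kind of controller that owns each one:

```bash
kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-node-1 \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind
```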
In the output, any pod whose OWNER column shows StatefulSet belongs to a StatefulSet running on that node.
After this, cordon the node so that no new pods are scheduled on it. Then, you can delete each pod within the affected StatefulSets manually using:
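```bash
kubectl delete pod <pod-name> -n <namespace>
```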
This will cause Kubernetes to redeploy the evicted pods as new pods on a different node (since you cordoned the node currently hosting them).
After this, you can drain the node. Note that the --ignore-daemonsets option applies only to DaemonSet-managed pods, not to StatefulSets; include it if the node also runs DaemonSet pods:
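```bash
kubectl drain worker-node-1 --ignore-daemonsets
```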
An alternative approach to handling StatefulSets is to use PodDisruptionBudgets (PDBs). PDBs are a way of defining how many pods for a given application can be unavailable. You can define a PDB using YAML like the following:
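```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # illustrative name
spec:
  minAvailable: 1         # keep at least one pod running at all times
  selector:
    matchLabels:
      app: my-app         # assumes your pods carry this label
```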
Note the minAvailable field, which states that at least one pod should run at all times. With this configuration in place, you can evict pods from a node safely because the PDB will ensure that at least one pod instance keeps running during the draining process.
Pods that fail to terminate
In some cases, pods don’t terminate in response to the drain command. Here are the reasons why this can happen, along with solutions:
- PodDisruptionBudget (PDB) blocks eviction: Edit PDB or use --disable-eviction
- Pod finalizers prevent deletion: Remove finalizers manually
- emptyDir volumes prevent draining: Use --delete-emptydir-data
- DaemonSets block draining: Use --ignore-daemonsets
- StatefulSet pods don't evict automatically: Manually delete StatefulSet pods
- Long terminationGracePeriodSeconds: Force delete pod (--grace-period=0 --force)
- CNI/Network policy issues: Restart CNI plugin or manually delete pod
- Draining timeout: Specify a longer timeout using --timeout=<duration>
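For instance, as a last resort for a pod that is stuck terminating, you can force-delete it (the pod and namespace names are placeholders):

```bash
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
```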
Cleaning up orphaned resources
After draining a node, you may end up with “orphaned” resources, such as pods that are stuck in the unknown state or persistent volume claims (PVCs) that remain attached to the node.
Unfortunately, there is no single command that can report all orphaned node resources. However, the command kubectl get nodes -o wide provides some details about the status of a node, and may clue you into orphaned resources associated with it. You can also use kubectl get pods to check whether any pods associated with the node are stuck, and kubectl get pvc (along with kubectl get pv for the underlying persistent volumes) to look up information about PVCs.
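For example, to look for stuck pods on a drained node and review volume claims (the node name is illustrative):

```bash
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=worker-node-1
kubectl get pvc --all-namespaces
```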
Best practices for draining Kubernetes nodes
To drain nodes reliably and efficiently, consider the following best practices:
- Communicate with teams before draining: If you share your Kubernetes cluster with other users, notify them before draining a node. It’s possible they have workloads assigned to the node that need to run on that specific node.
- Test draining: Prior to initiating a drain, you can use the --dry-run flag to collect information about the intended drain operation, as shown in the example after this list. This allows you to test the drain before it actually begins.
- Validate status post-drain: Rather than blindly trusting the status reported by kubectl following a drain operation, it’s a best practice to double-check that the node was successfully drained using kubectl get nodes. You may also want to use kubectl get pods to check on the status of any pods that were evicted to be sure they were successfully rescheduled.
- Use grace periods: Optionally, you can use the --grace-period flag to configure a grace period (in seconds) for pod termination. This tells Kubernetes to give the pod a set period of time to shut itself down. If it’s still running after the time expires, Kubernetes will shut it down forcefully. Graceful terminations help avoid situations where an application is forced to shut down before completing all pending operations.
- Update load balancer configuration: If you’ve deployed a load balancer to manage network traffic between nodes, update the load balancer configuration to indicate that a node is being drained. This tells the load balancer to stop directing traffic to the node, further helping to ensure a graceful shutdown.
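As an example of the dry-run approach mentioned above (the node name is illustrative):

```bash
kubectl drain worker-node-1 --ignore-daemonsets --dry-run=client
```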
Node draining and groundcover
While draining nodes is a straightforward process, many things can go wrong – such as unexpected pod crashes or application failures. And unfortunately, Kubernetes itself doesn’t go out of its way to tell you when something goes awry. It won’t automatically notify you that a pod has failed to terminate and reschedule, for example, or explain why the drain node command is hanging.

This is where groundcover comes in. By providing continuous Kubernetes monitoring across all layers of your Kubernetes cluster and reporting anomalies, groundcover provides the insights you need not just to discover performance issues stemming from node draining, but also to fix them quickly.
Draining nodes without draining morale
Being able to shift pods between nodes flexibly using the drain command is part of what makes Kubernetes so powerful. But it’s also challenging in the sense that draining nodes is complicated and many things can go wrong. By knowing how best to approach the draining process and how to troubleshoot issues if they arise, you can take advantage of kubectl drain node to manage your nodes as needed, while simultaneously keeping your risks in check.