There comes a time in the life of most Kubernetes Pods when a restart is necessary. Pod restarts can help with a variety of issues – such as updating a Pod configuration, collecting information about the root cause of a problem, and attempting to fix a failed Pod. This is why knowing how to restart Pods in Kubernetes – both manually via commands like kubectl restart pod and automatically using container restart policies – is an important skill for Kubernetes admins.

This article dives deep into these topics by explaining how Kubernetes Pod restarts work, why you might want to restart Pods, and how to restart a Pod in Kubernetes using several different approaches.

Understanding the Pod lifecycle

| State | Meaning | Common causes |
|-----------|------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
| Pending | Pod is waiting to be scheduled on an available node. | Pod was just launched and Kubernetes is still looking for a node. Not enough nodes are available. |
| Running | Containers in the Pod are running normally. | The Pod started as expected. |
| Succeeded | All containers in the Pod have shut down successfully. | The Pod terminated normally. |
| Failed | One or more containers in the Pod terminated in a failed state. | Buggy container code triggered a failure. A container image pull failed. |
| Unknown | Kubernetes can't determine the Pod state. | Problems connecting to the Pod. |

To understand why you might want to say "Kubernetes, restart Pod," you must first understand the Kubernetes Pod lifecycle – meaning the set of states in which Pods can exist over the course of their operations.

There are five basic states that a Pod can enter over the course of its lifecycle:

  • Pending, which happens when the Pod is waiting to be scheduled onto a node. Pods always spend some time in the Pending state as part of their normal startup routine, since Kubernetes needs to find a node to host each Pod before the Pod can start. However, if Pods are stuck in Pending for an extended period, it typically reflects a problem like a lack of available nodes.
  • Running, which means the Pod has been deployed to a node and has running containers within it. This is the normal operating state for Pods.
  • Succeeded, which happens when all containers in a Pod have terminated successfully. This state is also a part of normal operations.
  • Failed, which signifies that at least one container inside a Pod has terminated in a state of failure. This usually means something is wrong – such as a misconfigured Pod or buggy code inside a container.
  • Unknown, a message that indicates that Kubernetes isn't sure what's happening with a given Pod. Often, this happens due to networking errors that make the Pod unreachable, but it can also result from software bugs or insufficient resources within a cluster (which can leave Kubernetes unable to process requests to Pods).

Thus, a healthy Pod starts in the Pending phase, transitions to Running, and (if it's meant to run to completion) ends in Succeeded. If this happens, all is good, and there's typically no reason to restart the Pod.

But if you have a Pod that is stuck in the Pending phase and can't progress, or one that ends up in the Failed or Unknown states, you have a problem, which is one of the reasons why you may want to try restarting your Pod.
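To see where a Pod currently sits in this lifecycle, you can query its phase directly with kubectl. A minimal sketch, using my-pod as a placeholder name:

# Print the Pod's lifecycle phase (Pending, Running, Succeeded, Failed, or Unknown)
kubectl get pod my-pod -o jsonpath='{.status.phase}'

# Show recent events for the Pod, which often explain why it is stuck in a given phase
kubectl describe pod my-pod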

Why you might want to restart a Pod

Problems that occur during the Pod lifecycle are just one of the many potential reasons for restarting a Pod. Here are the most common events that might trigger a Pod restart.

Applying configuration changes

To change a Pod's configuration, you have to update its manifest and then redeploy it, which requires a restart. Kubernetes treats most fields of a running Pod's spec as immutable, so you can't change them on the fly.

It's worth noting that you could always deploy an updated version of your Pod while leaving the original one running – in which case you wouldn't need to restart the Pod – but this is typically not desirable, since in most cases you wouldn't want to have multiple versions of the same Pod active at once.
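For example, if the Pod is managed by a Deployment, applying the updated manifest causes Kubernetes to replace the running Pods with new ones. A minimal sketch, assuming a Deployment named web defined in web-deployment.yaml (placeholder names):

# Apply the updated manifest; Kubernetes rolls out new Pods with the new configuration
kubectl apply -f web-deployment.yaml

# Follow the rollout until the replacement Pods are ready
kubectl rollout status deployment/web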

Debugging applications

Restarting a Pod and monitoring the events that transpire during the restart is a good way to collect debugging information. This includes the events and status data that Kubernetes itself records as the Pod transitions between lifecycle phases.

Debugging can also involve external tools that monitor additional Kubernetes metrics, like resource consumption, and run traces while the Pod is starting. This data can lead to insights that help identify the root cause of issues like a Pod whose containers keep crashing.
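A few kubectl commands that are commonly used to gather this kind of information around a restart (my-pod is a placeholder name):

# Lifecycle events, container state transitions, and scheduling details
kubectl describe pod my-pod

# Logs from the previous (crashed) instance of the container
kubectl logs my-pod --previous

# Cluster events related to the Pod, sorted by time
kubectl get events --field-selector involvedObject.name=my-pod --sort-by='.lastTimestamp'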

Pod stuck in a terminating state

Restarting Pods can help resolve situations where Pods crash and end up stuck in the terminating state. Sometimes, this issue stems from one-off failures, like the inability of a container to pull its image due to a temporary network failure. In that case, simply restarting the Pod will fix the issue.

If a restart doesn't resolve the issue, it usually means there's a deeper problem, like bugs in your container code. Still, attempting a Pod restart to see what happens is a good initial step toward resolving Pods that are stuck in the terminating state.
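If a Pod remains stuck in Terminating, you can force-delete it and let its controller (if it has one) recreate it. A minimal sketch, with my-pod as a placeholder name; note that --force skips graceful shutdown, so use it with care:

# Check why the Pod is stuck (finalizers, failing preStop hooks, an unreachable node, etc.)
kubectl describe pod my-pod

# Force-delete the Pod without waiting for graceful termination
kubectl delete pod my-pod --grace-period=0 --force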

OOM errors

In a similar vein, restarting Pods can help to resolve OOM (out-of-memory) errors, which occur when a container exceeds its memory limit or its node runs out of available memory. If you're lucky, you'll simply be able to restart a Pod and have it run normally after an OOM event without further issue. This is likely to be the outcome if the Pod was OOM-killed due to a temporary memory availability problem, such as a node that went down but then comes back up.

However, if your Pod keeps being OOM-killed, it's likely that you have a bigger issue, like a bug inside one of your containers that is causing a memory leak.
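Before restarting, it's worth confirming that the container really was OOM-killed. A minimal sketch, using my-pod as a placeholder name:

# Prints OOMKilled if the container's last termination was caused by the out-of-memory killer
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# The describe output also shows Last State, Exit Code (137 for OOM kills), and the restart count
kubectl describe pod my-pod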

Forcing a new image pull

Although Kubernetes caches container images after it has pulled them, you can force it to download a new image every time a container starts by defining the imagePullPolicy: Always value within a container spec.

If you do this, restarting the Pod will trigger a new image pull. This is desirable in situations where a container image has been updated (without a change in its tag or version) and you want to force Kubernetes to start a new container based on the updated image.
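A minimal sketch of what this looks like in a container spec, using placeholder names (my-pod, registry.example.com/app:latest) and applied here via a heredoc so the manifest and the command stay together:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      imagePullPolicy: Always   # re-pull the image every time the container starts
EOF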

Mitigating resource contention

Resource contention occurs when multiple Pods are competing for limited resources. This can happen when there's simply not enough CPU or memory on a given node to support all the Pods running on it, or when misconfigured resource requests and limits lead to an inefficient distribution of resources.

Either way, since a Pod restart will force Kubernetes to reschedule the Pod, it may end up choosing a different node where resource contention is not an issue.

Ideally, you'd also make sure that you don't have deeper resource availability issues; you don't want to be constantly restarting Pods and crossing your fingers that Kubernetes places them on nodes where there are sufficient resources available. But in a pinch, a Pod restart is a simple way to try to mitigate resource contention problems.
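For example, you can check whether the Pod's current node is overcommitted before falling back on a restart (my-pod and node-1 are placeholder names; kubectl top requires the metrics-server add-on):

# See which node the Pod is running on
kubectl get pod my-pod -o wide

# Check resource pressure on that node
kubectl top node node-1
kubectl describe node node-1   # the "Allocated resources" section compares requests to capacity

# Delete the Pod; its controller recreates it, and the scheduler may place it on a less busy node
kubectl delete pod my-pod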

State cleanup

Sometimes, restarting a Pod is the best way to restore it to a "clean" state by resetting its containers, ephemeral storage, networking configuration, and so on. Restarting Pods that have been running for a long time may also reduce their overall resource consumption, especially if they have issues like memory leaks that cause them to consume increasing amounts of resources over time.

Here again, it's best to optimize your Pods and containers so that they can run indefinitely without getting stuck in "dirty" or inefficient states. But when you just want to restart everything from scratch, Pod restarts are a handy generic mitigation.

How to restart a Pod in Kubernetes using kubectl (5 examples)

| Restart method | How it works |
|-------------------------|------------------------------------------------------------------------------------------------------|
| kubectl rollout restart | Tells Kubernetes to restart the Pod gracefully, without downtime. |
| kubectl delete | Manually deletes the Pod. |
| kubectl scale | Can be used to scale Pod replicas to 0, then scale them back up, effectively causing a Pod restart. |
| kubectl replace | Restarts a Pod based on an updated configuration file. |
| kubectl set env | Changes environment variables, which forces a Pod restart. |

Once you've decided you want to trigger a Kubernetes restart Pod event, actually restarting it is relatively simple – and there are a variety of ways to do it using kubectl, the Kubernetes CLI tool.

Here's a look at five common kubectl restart Pod methods.

#1. Rollout restart

The command kubectl rollout restart is the most straightforward way to restart a Pod, provided the Pod is managed by a workload resource such as a Deployment, StatefulSet, or DaemonSet (the command operates on that workload resource rather than on a bare Pod).

It tells Kubernetes to restart an existing resource by keeping the original version of the resource in place until the replacement is running – hence, this is a graceful kubectl restart Pod strategy that avoids downtime.
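A minimal example, assuming the Pod is managed by a Deployment named web (a placeholder name):

# Gracefully replace all Pods managed by the Deployment, rolling in new Pods before removing old ones
kubectl rollout restart deployment/web

# Follow the rollout until the new Pods are ready
kubectl rollout status deployment/web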

#2. Deleting and restarting Pods

You can use the command kubectl delete pod to shut down a Pod manually. Then, you can restart the Pod manually using kubectl apply -f or a similar command.

Using kubectl delete pod and then manually restarting the Pod is a less graceful approach because it requires you to run multiple commands. In addition, downtime will occur in the interim between when you shut down the Pod and restart it. That said, this approach gives you more control over exactly when the new Pod starts, since you trigger the restart explicitly instead of leaving it to Kubernetes to decide when to restart.
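A minimal sketch, assuming the Pod is named my-pod and defined in pod.yaml (placeholder names):

# Shut down the Pod manually
kubectl delete pod my-pod

# Recreate it from its manifest whenever you're ready
kubectl apply -f pod.yaml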

#3. Scaling Pods

The kubectl scale command tells Kubernetes to increase (or decrease) the number of replicas of a workload resource such as a Deployment, ReplicaSet, or StatefulSet. Restarting Pods isn't the main purpose of this command, but you can use it to do so by scaling the number of replicas of your Pod down to 0. Then, you can scale it back up to launch one or more new instances of the Pod.

In most respects, this is simply a more complex way of stopping a Pod manually with kubectl delete pod. However, if you have multiple replicas of the Pod running and want to scale down to 0, and then scale back up to multiple replicas, it makes sense to use the scale command.
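A minimal example, assuming the Pods are managed by a Deployment named web (a placeholder name):

# Scale down to zero, terminating all replicas
kubectl scale deployment/web --replicas=0

# Scale back up to relaunch fresh Pods
kubectl scale deployment/web --replicas=3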

#4. Replacing a Pod

The kubectl replace command tells Kubernetes to redeploy a resource based on a new configuration. It essentially means "kubectl restart deployment using an updated configuration."

This is useful if you've updated a Pod's manifest and want to launch a new version based on the new manifest. This approach can also be handy because it allows you to copy, paste, and apply an updated configuration directly from the CLI using a command such as:

cat pod.json | kubectl replace -f -

You could do something similar if you manually deleted a Pod, then restarted it using a new configuration that you feed in through the command line. But that would require two steps, where kubectl replace can do it all in one.
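Note that if the updated manifest changes fields that are immutable on a running Pod, a plain kubectl replace will be rejected; the --force flag deletes and recreates the resource in one step (pod.yaml is a placeholder filename):

# Delete the existing Pod and recreate it from the updated manifest in a single command
kubectl replace --force -f pod.yaml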

#5. Modifying environment variables

Using the command kubectl set env, you can change the environment variables defined on a workload resource such as a Deployment. Because this modifies the Pod template, it triggers a rollout that restarts the Pods.

You should use kubectl to restart a deployment this way if you actually want to modify an environment variable. If you just want to restart a Pod without changing environment variables, it's simpler to use one of the restart methods described above.
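A minimal example, assuming a Deployment named web and a variable you actually want to change (both placeholders):

# Update an environment variable on the Deployment's Pod template, triggering a rollout
kubectl set env deployment/web LOG_LEVEL=debug

# Remove a variable by suffixing its name with a minus sign
kubectl set env deployment/web LOG_LEVEL-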

Automated restarts through container restart policy

Above, we explained kubectl restart Pod strategies that require manual commands. It's also possible to trigger restarts within Pods automatically by setting the desired container restart policy when creating a Pod's manifest.

There are three container restart policy options:

  • Always, which tells Kubernetes to restart containers automatically whenever they terminate, regardless of the reason.
  • OnFailure, which forces automated container restarts only if containers exit with an error.
  • Never, which tells Kubernetes never to restart containers.

Importantly, these policies apply to containers within a Pod, not the Pod itself – so they won't force an entire Pod to restart from scratch. But if your goal is simply to restart troublesome containers as part of your K8s restart Pod strategy, the container restart policy feature is a handy way to do so automatically.
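A minimal sketch of where the policy is set in a Pod manifest, using placeholder names (batch-pod, registry.example.com/job:1.0) and applied via a heredoc:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: batch-pod
spec:
  restartPolicy: OnFailure   # restart containers only if they exit with an error
  containers:
    - name: worker
      image: registry.example.com/job:1.0
EOF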

Note, too, that Kubernetes imposes a backoff delay when attempting to restart containers, meaning that if the containers keep failing, it will progressively increase the interval between restart attempts. By default, the delay is capped at five minutes, and it resets once a container has run for 10 minutes without problems.

Kubernetes troubleshooting with groundcover

As we mentioned, restarting Pods can be a quick and simple way to work around various types of issues, such as OOMKilled events and resource contention.

However, restarting Pods is not a substitute for effective Kubernetes monitoring and troubleshooting. To do that, you need a tool like groundcover, which continuously observes your Pods, containers, and clusters, and lets you dive deep into performance data.

With groundcover, you gain the deep visibility you need to get to the root of complex Kubernetes issues so that you don't have to keep restarting Pods.

K8s restart Pod: A key Kubernetes admin skill

The bottom line: understanding how to restart a Pod with kubectl is an essential skill for Kubernetes admins. But simply restarting Pods every time something goes wrong is not a good way to keep workloads and clusters running smoothly. You must also understand how to troubleshoot Kubernetes issues and optimize configurations so that Pod restarts are rarely necessary.
