Fix CreateContainerConfigError & CreateContainerError in Kubernetes
Discover what CreateContainerConfigError and CreateContainerError are and why understanding these error messages is critical for effective Kubernetes monitoring and troubleshooting.
If your immediate reply to “What are CreateContainerConfigError and CreateContainerError?” is that "they're terms that are hard to read because someone forgot to insert whitespaces," we can't say you're wrong. But we can say that you're not grasping the real importance of these error messages, which are critical for effective Kubernetes monitoring and troubleshooting.
So, let's take a look at what CreateContainerConfigError and CreateContainerError mean, what causes these events in Kubernetes, and how you can fix them. We'll also cover other common Kubernetes errors that you might encounter when you experience CreateContainerConfigError and CreateContainerError issues.
What is Kubernetes CreateContainerConfigError?
In Kubernetes, CreateContainerConfigError is an error condition that occurs when Kubernetes fails to generate the configuration for a container.
To understand fully what this means, let's step back and discuss how Kubernetes produces container configurations. When Kubernetes starts a new container, it uses a method called generateContainerConfig to read the configuration data or pod metadata associated with the container.
That data, which usually appears in the form of YAML code, may include configuration details such as:
- Commands that run when the container starts.
- References to ConfigMaps, which contain non-sensitive data that should be injected into the container when it runs.
- References to Secrets, meaning sensitive data that the container needs to access.
- Definitions of storage resources that the container connects to.
Under normal conditions, Kubernetes locates any resources defined in the configuration and connects the container to them. But if it can't find the resources, a CreateContainerConfigError event will result.
Common Causes for CreateContainerConfigError
The most common cause for CreateContainerConfigError is a failure by Kubernetes to locate resources that are part of a container's configuration. Specifically, this event usually happens when Kubernetes can't locate either a ConfigMap or a secret.
Missing ConfigMap
A ConfigMap is an API object that stores configuration data or pod metadata using a key-value model. ConfigMaps are a handy way of storing configuration information (such as environment-specific variables) that a container needs to access at runtime but that shouldn't be hard-coded into the container.
To use a ConfigMap, you first create it using a command like:
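kubectl create configmap game-demo --from-literal=player_initial_lives=3 --from-literal=ui_properties_file_name=user-interface.properties

(The name game-demo and the --from-literal key-value pairs above are just illustrative; you can also build a ConfigMap from files with --from-file or apply a YAML manifest with kubectl apply -f.)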
Then, when creating a Pod, you reference data from the ConfigMap that the Pod needs to access. The references appear in the spec section of the Pod's configuration.
For example, consider the following spec (adapted from the ConfigMap example in the Kubernetes documentation):
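apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        - name: PLAYER_INITIAL_LIVES
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: player_initial_lives
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name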
This references values defined in a ConfigMap named game-demo. That's all well and good if the ConfigMap actually exists and the Pod is able to access it. But if not, you'll get a CreateContainerConfigError when Kubernetes attempts to start the Pod.
Missing Secrets
The same error will result if you configure a container to use a secret that doesn't exist.
Like ConfigMaps, any secrets that you want a Pod to use must be set up before you launch the Pod. You can generate secrets using a command like:
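kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password='S3cr3t!'

(The secret name db-credentials and its keys are placeholders; you can also load values from files with --from-file.)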
Then, you reference the secret when configuring a Pod. Here's an example (adapted from the Kubernetes documentation) of a Pod that points to a secret named secret-dockercfg:
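apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
    - name: demo
      image: nginx
  imagePullSecrets:
    - name: secret-dockercfg

(The Pod name and container image above are placeholders. Because secret-dockercfg holds registry credentials, it's referenced under imagePullSecrets; a secret that holds application data would instead be referenced through an env entry with secretKeyRef or mounted as a volume.)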
Here again, everything will be peachy so long as your secret exists. But if the secret does not exist or is not accessible to the Pod, you'll encounter a CreateContainerConfigError event.
How to Troubleshoot CreateContainerConfigError
When troubleshooting CreateContainerConfigError events, start by looking at relevant logs and events to confirm that a CreateContainerConfigError has indeed occurred. Then, compare your Pod's configuration to the resources that actually exist in your cluster to determine what triggered the error.
Here's what the troubleshooting process looks like in detail.
Step 1: View Pod Status and Logs
To view Pod logs, run:
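kubectl logs pod-name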
You can parse through the logs looking for information related to CreateContainerConfigError. Or, to save time, just grep for that string:
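kubectl logs pod-name 2>&1 | grep CreateContainerConfigError

(The 2>&1 redirect ensures that error output from kubectl itself, which is where this message often appears, gets searched as well.)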
If a CreateContainerConfigError event has occurred, you'll typically see a log message to the effect of "[container name] in pod [pod name] is waiting to start: CreateContainerConfigError." You may also see related information (such as kubelet container image pull events) that will help you figure out what happened prior to the error.
Step 2: Check Kubectl Events
You can also check for events related to CreateContainerConfigError errors by running:
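kubectl get events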
Here again, feel free to pipe the output into grep if you want to zero in on CreateContainerConfigError lines specifically.
Step 3: Inspect Pods to Identify Failures
Checking logs and events is a good way to verify whether a CreateContainerConfigError event has occurred, but you typically won't find details on why it occurred. For that information, you need to dig a little deeper by inspecting the configuration for the Pod that has experienced the issue.
Do this by running:
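kubectl describe pod pod-name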
The output will display detailed information about your Pod, including any ConfigMaps or secrets that it depends on. You can compare this data to the results of the following commands:
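kubectl describe configmaps
kubectl describe secrets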
Those commands list the ConfigMaps and secrets that are actually configured. If your Pod references any ConfigMaps or secrets that don't appear when you ask kubectl to describe ConfigMaps and secrets, you've found the source of your CreateContainerConfigError.
Step 4: Verify Permissions and Namespace Settings
You may sometimes encounter a CreateContainerConfigError even though all of the ConfigMaps and secrets referenced in a Pod's configuration exist. If this is the case, the problem is most likely that the resource in question is not accessible to the Pod, either because its permissions are misconfigured or because it lives in a different namespace than the Pod.
If you suspect this might be the issue, inspect the output of the following commands again, being sure to check that all of your Pod's ConfigMaps and secrets are properly configured:
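kubectl describe configmaps --namespace=pod-namespace
kubectl describe secrets --namespace=pod-namespace

(Replace pod-namespace with the namespace your Pod runs in; resources that exist only in a different namespace won't be visible to the Pod.)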
How to Fix CreateContainerConfigError
Once you've pinpointed the cause of a CreateContainerConfigError event, fixing it is usually easy enough: You simply need to create whichever resource your container configuration points to but which does not actually exist. Or, if the resource does exist but is not accessible to a Pod that needs it due to a misconfiguration, you can redeploy the resource with the proper settings.
When creating the resources to fix a CreateContainerConfigError issue, be sure to pay close attention to the following details:
- Ensure that the resource exists in the same namespace as the Pod that needs to access it.
- Configure permissions properly for the resource.
- Avoid typos, which will cause a Pod to look in the wrong place for your resource.
What is Kubernetes CreateContainerError?
Now that you know all about CreateContainerConfigError, let's talk about CreateContainerError, a seemingly similar but actually quite distinct type of Kubernetes error event.
In Kubernetes, CreateContainerError is an error that happens when Kubernetes fails to create a container successfully. Unlike CreateContainerConfigError, CreateContainerError problems stem from issues related to the creation of a container itself, not the container's configuration.
Common Causes for CreateContainerError
In most cases, CreateContainerError events result from one of the following issues.
Duplicate Container Names
The name of every container running in Kubernetes must be unique. Normally, the container runtime automatically avoids naming conflicts. But if it assigns the same name to more than one container for some reason, Kubernetes will fail to start the container and will generate a CreateContainerError event.
Missing Entrypoint
Containers whose images lack an entrypoint or other startup command will fail with a CreateContainerError unless you explicitly configure a startup command in the manifest that defines the Pod's deployment configuration.
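For example, a Pod manifest can supply a startup command explicitly; the image, command, and arguments below are placeholders:

spec:
  containers:
    - name: demo
      image: my-registry/my-app:1.0
      command: ["/bin/sh", "-c"]
      args: ["./run-app.sh"]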
Storage Volume Problems
If your container is configured to use a storage volume that doesn't exist or is not accessible to the Pod, you'll see a CreateContainerError.
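For example, a container spec like the following only works if the referenced PersistentVolumeClaim exists and is accessible; the claim name data-pvc is a placeholder:

spec:
  containers:
    - name: demo
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc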
Container Runtime Problems
In rare cases, problems with the container runtime can trigger CreateContainerError. If your runtime is the culprit, it's typically either because the runtime code contains a bug, or because the node that hosts the runtime is so short on resources that the runtime lacks enough CPU and memory to function normally.
How to Troubleshoot CreateContainerError
The steps for troubleshooting a CreateContainerError issue are similar to those for investigating CreateContainerConfigError.
Step 1: Check the Pod Status
First, run the following command to check the status of your Pods:
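kubectl get pods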
If any Pods have failed due to CreateContainerError, you'll see CreateContainerError in the STATUS column of the output from this command.
You can also get more details about a specific Pod by running:
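kubectl describe pod pod-name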
Step 2: Check Events
You can also check for events that reference a CreateContainerError problem or other failures by running:
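kubectl get events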
Step 3: Examine Pod Configuration
In most cases, the previous two steps suffice to confirm that a CreateContainerError has occurred. Once you have that information, your next step is to identify the root cause of the error.
Start by examining the manifests for the failed Pod, which detail the image and Pod configuration for your application. The manifest files are typically located in the directory /etc/kubernetes/manifests, although this can vary depending on which Kubernetes distribution you are using. If you're unsure where the manifest is located, you can try to search for it using a command like:
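find / -name "*.yaml" 2>/dev/null | grep manifests

(This searches the filesystem of the machine you run it on, so execute it on the node that hosts the Pod. Alternatively, you can dump the live configuration of any Pod with kubectl get pod pod-name -o yaml.)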
Read the YAML code and determine whether you've defined any storage resources for the Pod. If you have, make sure those resources exist and are accessible.
You can also read the YAML to check which container images the Pod uses. Then, you can download the container images directly and use a command like docker inspect to verify that they have a proper entrypoint.
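For example, assuming the image has been pulled locally (the image name below is a placeholder):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' my-registry/my-app:1.0

If both fields come back empty, the image has no startup command of its own, and the Pod manifest needs to supply one.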
If the storage configuration and entrypoint appear valid, your issue is most likely related to the container runtime. To confirm this, try running the containers directly from the CLI using a different runtime or on a different node. If they work properly there, it's likely that the runtime you have set up in Kubernetes is buggy. Running the containers directly from the CLI will also reveal any issues related to the lack of a proper entrypoint.
How to Fix CreateContainerError
Resolving a CreateContainerError varies depending on what the root cause of the problem is:
- For duplicate container names: Restarting the container runtime may fix this issue. In an extreme case, you may need to restart your entire Kubernetes cluster.
- Missing entrypoint: You can fix this problem either by modifying the container image to include a proper entrypoint or by adding an explicit startup command to the Pod's manifest.
- Storage volume problems: If storage volumes are the issue, make sure that the volumes you've configured exist, and that the manifest properly references them.
- Buggy runtime: If strange runtime behavior causes CreateContainerError, consider updating your runtime or switching to a different one. Make sure as well that your nodes are not starved of resources, since resource exhaustion could also cause strange runtime behavior.
Related Kubernetes Errors
In addition to CreateContainerConfigError and CreateContainerError, you may experience the following errors when trying to start an application in Kubernetes:
- RunContainerError: This occurs when Kubernetes successfully creates the container but fails to run it. This typically happens when your container is configured to use a ConfigMap or secret that exists, but the container references specific values within those resources that do not exist.
- CrashLoopBackOff: This happens when the container repeatedly crashes. Each time it crashes, Kubernetes will attempt to restart it, with increasing intervals between restart attempts. There are several potential underlying causes of this issue – so many that we wrote an entire article devoted to troubleshooting CrashLoopBackOff.
Kubernetes Troubleshooting with groundcover
Sometimes, problems like CreateContainerConfigError and CreateContainerError stem from simple issues that have simple fixes, such as creating a missing ConfigMap or secret.
Other times, they are symptomatic of deeper, more complex issues with Kubernetes – which is why you need a solid Kubernetes observability solution on your side. Groundcover helps teams to identify and investigate even the most complicated performance issues in Kubernetes, allowing you to identify root causes quickly. In addition, groundcover makes it easy to understand how a failed container impacts your overall cluster health and performance.
Many things can go wrong when creating containers in Kubernetes. But with a little digging – and with help from basic commands like kubectl describe pod – it's typically easy enough to figure out whether a CreateContainerConfigError or CreateContainerError is the root cause of an issue you're facing. And once you determine the source of the issue, you'll know what needs to happen to fix it.
That, at least, is the case with relatively straightforward container setup failures. For more Kubernetes troubleshooting scenarios, consider a tool like groundcover.
FAQs
Here are answers to common questions about CrashLoopBackOff.
How do I delete a CrashLoopBackOff Pod?
To delete a Pod that is stuck in a CrashLoopBackOff, run:
kubectl delete pods pod-name
If the Pod won't delete – which can happen for various reasons, such as the Pod being bound to a persistent storage volume – you can run this command with the --force flag to force deletion. This tells Kubernetes to ignore errors and warnings when deleting the Pod.
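For example:

kubectl delete pods pod-name --grace-period=0 --force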
How do I fix CrashLoopBackOff without logs?
If you don't have Pod or container logs, you can troubleshoot CrashLoopBackOff using the command:
kubectl describe pod pod-name
The output will include information that allows you to confirm that a CrashLoopBackOff error has occurred. In addition, the output may provide clues about why the error occurred – such as a failure to pull the container image or connect to a certain resource.
If you're still not sure what's causing the error, you can use other troubleshooting methods – such as checking DNS settings and environment variables – to troubleshoot CrashLoopBackOff without logs.
Once you determine the cause of the error, fixing it is a matter of addressing the underlying issue. For example, if a configuration file is misconfigured, simply update the file.
How do I fix CrashLoopBackOff containers with unready status?
If a container experiences a CrashLoopBackOff and is in the unready state, it means that it failed a readiness probe – a type of health check Kubernetes uses to determine whether a container is ready to receive traffic.
In some cases, the cause of this issue is simply that the health check is misconfigured, and Kubernetes therefore deems the container unready even if there is not actually a problem. To determine whether this might be the root cause of your issue, check which command (or commands) are run as part of the readiness check. This is defined in the container spec of the YAML file for the Pod. Make sure the readiness checks are not attempting to connect to resources that don't actually exist.
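For reference, a readiness probe in a container spec typically looks something like this (the path and port here are placeholders):

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10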
If your readiness probe is properly configured, you can investigate further by running:
kubectl get events
This will show events related to the Pod, including information about changes to its status. You can use this data to figure out how far the Pod progressed before getting stuck in the unready status. For example, if its container images were pulled successfully, you'll see that.
You can also run the following command to get further information about the Pod's configuration:
kubectl describe pod pod-name
Checking Pod logs, too, may provide insights related to why it's unready.
For further guidance, check out our guide to Kubernetes readiness probes.