The error message “0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.” means the Kubernetes scheduler could not place your pod: no node has enough unreserved memory to satisfy the pod’s memory request, and there are no lower-priority pods the scheduler could evict (preempt) to make room. Here are the steps to address this issue:
Investigate Memory Usage
First, check the memory situation on the OpenShift node that rejected your pod with the “Insufficient memory” error:
kubectl describe node <node-name>
This command displays detailed information about a specific node, including its resources. Look at the “Allocatable” section to see how much memory the scheduler can hand out in total, and at the “Allocated resources” section further down to see how much of it is already reserved by pod requests. Keep in mind that scheduling is based on memory requests, not actual usage: a node can look idle in your monitoring and still reject pods if its allocatable memory is fully requested.
Name:               crc-ksq4m-master-0
...
Allocatable:
  cpu:                3800m
  ephemeral-storage:  29045851293
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15519456Ki
  pods:               250
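If you only need the raw numbers, you can query the node object directly instead of scanning the full describe output. A small sketch, with <node-name> as a placeholder:

# Total memory the scheduler may allocate on the node
kubectl get node <node-name> -o jsonpath='{.status.allocatable.memory}'

# The node's physical capacity, for comparison
kubectl get node <node-name> -o jsonpath='{.status.capacity.memory}'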
Check Pod Consumption
Use the following command to check the memory consumption of your pods:
kubectl top pods -A
This command lists all pods in the cluster along with their CPU and memory usage. Identify pods with high memory consumption that could be optimized or scaled down. Note that this command requires metrics to be available on your cluster: the Metrics Server on plain Kubernetes, or the built-in monitoring stack on OpenShift.
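To surface the heaviest consumers first, kubectl top can sort its output; for example, to list the top ten pods by memory (the head count includes the header row):

# Sort all pods by memory usage, highest first, and keep the top ten
kubectl top pods -A --sort-by=memory | head -n 11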
You can also get a view of the configured resource limits with the following command:
oc get pods -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[*].resources.limits.cpu,MEMORY:.spec.containers[*].resources.limits.memory
This command displays a custom view of pods, including their names and the CPU and memory limits of their containers (on a Pod, resources live under .spec.containers, not at the top level of the object). You can adjust the output columns as needed.
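Because scheduling is driven by memory requests rather than limits or live usage, it is also worth listing what each pod has actually requested; a variant of the same command:

# Show the memory requested by every container of every pod
oc get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,MEM_REQUESTS:.spec.containers[*].resources.requests.memory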
Explore Optimization Options
There are several optimization options you can apply to reduce the amount of memory your pods require. For example:
Remove Stuck Pods: Check for any stuck or failed pods that may be holding resources without releasing them. You can use the oc get pods command to list all pods in the cluster and look for any pods with a status of Error, CrashLoopBackOff, or Pending (see the cleanup commands after this list).
Scale Down Existing Pods: If possible, scale down existing workloads or terminate unused pods to free up resources for the new pod; a scale-down command is sketched after this list.
Reduce Pod Memory Requirements: If possible, analyze the application’s memory requirements and optimize the code or container image to reduce memory consumption, for example through code optimization, removing unnecessary dependencies, or using memory-efficient libraries. Also make sure the pod’s memory request reflects what the application actually needs rather than an inflated default; a right-sizing command follows the list.
VerticalPodAutoscaler: If applicable, consider using a VerticalPodAutoscaler (VPA) to automatically adjust resource requests and limits for the pod based on its actual usage. This can help optimize resource allocation dynamically; a minimal manifest is sketched below.
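The following commands sketch the first three options. The deployment name my-app is a placeholder for illustration:

# List pods that look stuck or failed
oc get pods -A | grep -E 'Error|CrashLoopBackOff|Pending'

# Delete pods in the Failed phase so their resources are released
oc delete pods -A --field-selector=status.phase=Failed

# Scale down a deployment to free memory for the incoming pod
oc scale deployment my-app --replicas=1

# Right-size the memory request and limit of a deployment
oc set resources deployment my-app --requests=memory=256Mi --limits=memory=512Mi

And a minimal VPA sketch, assuming the VerticalPodAutoscaler operator and its CRDs are installed on your cluster; my-app is again a placeholder:

cat <<'EOF' | oc apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # VPA evicts pods and recreates them with updated requests
EOF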
Running OpenShift / Kubernetes Locally?
Finally, let’s mention a common scenario where this error happens frequently. When running OpenShift Local or Minikube, your cluster starts with limited default memory. You can check how much memory OpenShift Local is configured to use as follows:
crc config view
- consent-telemetry                     : no
- memory                                : 8192
With Minikube, you can use the following command to check the amount of free memory inside the cluster VM:
minikube ssh "free -mh"
In most cases, the defaults are not enough to run a complex application. You can increase the memory of OpenShift Local (the value is in MiB) as follows:
crc config set memory 16000
With Minikube, you can run the following command:
minikube config set memory 16000
Then, restart your cluster for the changes to take effect.
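For example (note that Minikube only applies a new memory setting when the cluster VM is recreated, so a delete is required):

# OpenShift Local: stop and start to pick up the new memory setting
crc stop
crc start

# Minikube: the memory setting only applies to a newly created cluster
minikube delete
minikube start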
Conclusion
By following these steps and carefully evaluating the situation, you can effectively troubleshoot and resolve the “Insufficient memory” and “No preemption victims” errors in your OpenShift environment.