Unraveling the OOM Killer in Kubernetes: Navigating Memory Challenges
Today we'll deconstruct the OOM killer, examine its inner workings, and learn how to recognize and address the memory problems that trigger it.
Have you ever worked with Kubernetes and encountered the dreaded OOM (Out-of-Memory) killer? It is an important mechanism that helps your cluster manage its memory resources efficiently: if a pod uses too much memory and exceeds its allotted limit, the OOM killer steps in and terminates it, freeing up memory for other crucial workloads.
The Crucial Role of the OOM Killer in Kubernetes
One of Kubernetes' most important mechanisms, the OOM killer prevents memory exhaustion and maintains system stability. It serves as the last line of defense when memory resources are critically low. In these cases, the OOM killer locates the particular pod or process that is causing the memory overrun and kills it, freeing up memory for the remaining processes on the node. By sacrificing one process, the OOM killer averts a complete system crash and preserves the cluster's overall stability.
Initiating OOM Kills: Triggers and Mechanisms
In Kubernetes, an Out-of-Memory (OOM) event occurs when a pod uses more memory than it has been allotted. Memory limits are enforced at the container level through Linux cgroups: the container runtime (such as containerd or Docker) applies the limit configured for each container, and the kubelet tracks each pod's memory consumption against those limits. When a container exceeds its limit, the kernel's OOM killer terminates the offending process, freeing memory for other important workloads, and the kubelet reports the container as OOMKilled.
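If you suspect a container was OOM killed, one quick way to confirm it is to inspect the container's last termination state, which will show OOMKilled as the reason. The command below is illustrative; the pod name is a placeholder, and the jsonpath targets the first container in the pod:
<pre class="codeWrap"><code>kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'</code></pre>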
Understanding OOM Decision and Memory Metrics
Kubernetes relies on memory metrics reported by cAdvisor (Container Advisor) to reason about OOM kills. The primary metric here is container_memory_working_set_bytes: it approximates non-evictable memory by counting the memory pages actively used within the container, and it closely tracks the value the OOM killer acts on when deciding whether to terminate a pod. This metric therefore forms the basis of OOM analysis.
Navigating Memory Metrics: Distinguishing Key Differences
Although container_memory_usage_bytes may appear to be a straightforward option for tracking memory utilization, it includes cached items, such as the filesystem cache, which can be evicted under memory pressure. It therefore does not precisely represent the memory that the OOM killer observes and responds to. In contrast, container_memory_working_set_bytes concentrates on memory that cannot easily be reclaimed, offering a more dependable view of memory usage and aligning with what the OOM killer reacts to.
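To make the distinction concrete, here is a minimal Node.js sketch, assuming a cgroup v1 container and the same cgroup files used later in this post, that approximates the working set by subtracting the inactive file cache from the total usage (the function name is just for illustration):
<pre class="codeWrap"><code>
const fs = require('fs');

// Approximate the working set on cgroup v1:
// working set ≈ total usage - inactive file cache (total_inactive_file)
function readWorkingSetBytes() {
  const usage = parseInt(
    fs.readFileSync('/sys/fs/cgroup/memory/memory.usage_in_bytes', 'utf8'),
    10
  );
  // memory.stat is a list of "<key> <value>" lines
  const stat = fs.readFileSync('/sys/fs/cgroup/memory/memory.stat', 'utf8');
  const match = stat.match(/^total_inactive_file (\d+)$/m);
  const inactiveFile = match ? parseInt(match[1], 10) : 0;
  return usage - inactiveFile;
}

console.log(`Approximate working set: ${readWorkingSetBytes()} bytes`);
</code></pre>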
Debugging Memory Consumption
To keep an eye on memory growth inside your application, you can read the cgroup file that exposes the container's memory utilization. Add the code snippet below to your program and run it in DEBUG mode so that the memory usage can be seen.
<pre class="codeWrap"><code>
const fs = require('fs');

// Function to read memory usage
function readMemoryUsage() {
  try {
    const memoryUsage = fs.readFileSync('/sys/fs/cgroup/memory/memory.usage_in_bytes', 'utf8');
    console.log(`Memory Usage: ${memoryUsage}`);
  } catch (error) {
    console.error('Error reading memory usage:', error);
  }
}

// Call the function to read memory usage
readMemoryUsage();
</code></pre>
In the code above, the fs.readFileSync method reads the contents of the /sys/fs/cgroup/memory/memory.usage_in_bytes file synchronously, using utf8 encoding so the file is interpreted as a string. The readMemoryUsage function reads the file and logs the memory usage to the console; if an error occurs while reading, it is also logged to the console.
Accessing these system files may require elevated privileges, so the Node.js script should only be run by a user with the appropriate permissions.
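Note that the path above applies to cgroup v1 hosts; on cgroup v2 the equivalent file is typically /sys/fs/cgroup/memory.current. Here is a small sketch that tries both locations (the function name is just for illustration):
<pre class="codeWrap"><code>
const fs = require('fs');

// Candidate locations for the current memory usage of the container's cgroup
const CANDIDATE_PATHS = [
  '/sys/fs/cgroup/memory/memory.usage_in_bytes', // cgroup v1
  '/sys/fs/cgroup/memory.current'                // cgroup v2
];

function readMemoryUsagePortable() {
  for (const path of CANDIDATE_PATHS) {
    try {
      return parseInt(fs.readFileSync(path, 'utf8'), 10);
    } catch (error) {
      // File not present under this cgroup version; try the next candidate
    }
  }
  throw new Error('No readable cgroup memory usage file found');
}

console.log(`Memory Usage: ${readMemoryUsagePortable()} bytes`);
</code></pre>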
Code for Node.js heap usage:
<pre class="codeWrap"><code>
// Function to track memory growth using Node.js process metrics
function trackMemoryGrowth() {
  const memoryUsage = process.memoryUsage();
  console.log(`Memory Usage (RSS): ${memoryUsage.rss}`);
  console.log(`Memory Usage (Heap Total): ${memoryUsage.heapTotal}`);
  console.log(`Memory Usage (Heap Used): ${memoryUsage.heapUsed}`);
}

// Log a snapshot now, then sample periodically to observe growth over time
trackMemoryGrowth();
setInterval(trackMemoryGrowth, 5000);
</code></pre>
Utilizing kubectl top to Monitor Memory and CPU in Kubernetes (K8s)
To monitor the resource usage of nodes and pods in a cluster, you can use the powerful kubectl top command (it relies on the Metrics Server being installed in the cluster). This command offers real-time insight into CPU and memory utilization, which in turn helps you detect potential bottlenecks, resolve performance issues, and make informed decisions about resource distribution. Let us explore how to use the kubectl top command for monitoring node and pod resource usage.
Monitoring Pod Resource Utilization:
To track the memory and CPU usage of pods, employ the following command:
<pre class="codeWrap"><code>kubectl top pod</code></pre>
This command furnishes a summary of resource utilization for all pods within the current namespace, listing each pod's name alongside its current CPU and memory consumption.
For a more focused examination of a specific pod, utilize the pod name as an argument:<pre class="codeWrap"><code>kubectl top pod <pod-name></code></pre>
This command offers comprehensive resource usage details for the designated pod.
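If you also want a per-container breakdown within a pod, recent kubectl versions accept a --containers flag (the pod name below is a placeholder):
<pre class="codeWrap"><code>kubectl top pod <pod-name> --containers</code></pre>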
Monitoring Node Resource Utilization:
For tracking the memory and CPU usage of nodes within your cluster, use the following command:
<pre class="codeWrap"><code>kubectl top node </code></pre>
This command furnishes an outline of resource usage for all nodes in the cluster, presenting key details such as node name, CPU usage, memory usage, and the associated percentage of resource utilization.
Much like monitoring pods, you can pinpoint a specific node to obtain in-depth resource usage information:
<pre class="codeWrap"><code>kubectl top node <node-name></code></pre>
To conclude, the kubectl top command is a useful tool for monitoring the CPU and memory usage of nodes and pods in a Kubernetes cluster. It helps you check utilization, optimize the allocation of resources, and identify performance bottlenecks, ensuring the smooth operation of your applications.
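As a quick way to surface the heaviest memory consumers across the whole cluster, recent kubectl versions also let you sort the output, for example:
<pre class="codeWrap"><code>kubectl top pod --all-namespaces --sort-by=memory</code></pre>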
Considerations for Using the Same Setup in a Docker Environment
On Linux, the Docker CLI calculates memory use by taking the total memory utilization and subtracting the cache usage. This computation is not carried out by the API, however; it reports the total memory use and the cache size separately, letting clients work with the data as needed. On cgroup v1 hosts, the cache usage is given by the total_inactive_file value in the memory.stat file.
On Docker 19.03 and earlier, the cache usage was instead taken from the cache field. On cgroup v2 hosts, the cache usage is given by the value of the inactive_file field.
The memory_stats.usage value reported by the API comes from:
<pre class="codeWrap"><code>/sys/fs/cgroup/memory/memory.usage_in_bytes</code></pre>
Node.js code to display memory usage for a Docker container:
In Node.js, you can use the docker-stats-api library to print the container usage. Just follow the steps below.
In your Node.js project, install the docker-stats-api library as a dependency by executing this command:
<pre class="codeWrap"><code>
npm install docker-stats-api
</code></pre>
Next, define a function to print the container usage:
<pre class="codeWrap"><code>
// Print CPU and memory usage for a container
// (dockerStats is the DockerStats instance created in the complete example below)
function printContainerUsage(containerId) {
  dockerStats.getStats(containerId)
    .then(stats => {
      console.log(`Container Usage (CPU): ${stats.cpu_percent}`);
      console.log(`Container Usage (Memory): ${stats.mem_usage}`);
    })
    .catch(error => {
      console.error('Error retrieving container stats:', error);
    });
}
</code></pre>
Take a look at the complete example:
<pre class="codeWrap"><code>
const DockerStats = require('docker-stats-api');
const dockerStats = new DockerStats();

function printContainerUsage(containerId) {
  dockerStats.getStats(containerId)
    .then(stats => {
      console.log(`Container Usage (CPU): ${stats.cpu_percent}`);
      console.log(`Container Usage (Memory): ${stats.mem_usage}`);
    })
    .catch(error => {
      console.error('Error retrieving container stats:', error);
    });
}

const containerId = 'YOUR_CONTAINER_ID';
printContainerUsage(containerId);
</code></pre>
Executing this code fetches and displays the CPU and memory usage of a particular container using the docker-stats-api library in Node.js. Be sure to replace 'YOUR_CONTAINER_ID' with the actual ID of the container you intend to monitor.
Conclusion
Gaining an understanding of the role and inner workings of the OOM killer in Kubernetes offers important insights into memory management in cluster environments. In addition to exploring the processes behind OOM kills, this blog highlighted the importance of memory metrics, specifically container_memory_working_set_bytes, in OOM decisions. Armed with this knowledge, you can effectively monitor and fix the memory-related issues that might lead to OOM kills.