
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 21:18:20 GMT</lastBuildDate>
        <item>
            <title><![CDATA[A deep dive into BPF LPM trie performance and optimization]]></title>
            <link>https://blog.cloudflare.com/a-deep-dive-into-bpf-lpm-trie-performance-and-optimization/</link>
            <pubDate>Tue, 21 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ This post explores the performance of BPF LPM tries, a critical data structure used for IP matching.  ]]></description>
            <content:encoded><![CDATA[ <p>It started with a mysterious soft lockup message in production. A single, cryptic line that led us down a rabbit hole into the performance of one of the most fundamental data structures we use: the BPF LPM trie.</p><p>BPF trie maps (<a href="https://docs.ebpf.io/linux/map-type/BPF_MAP_TYPE_LPM_TRIE/">BPF_MAP_TYPE_LPM_TRIE</a>) are heavily used for things like IP and IP+Port matching when routing network packets, ensuring your request passes through the right services before returning a result. The performance of this data structure is critical for serving our customers, but the speed of the current implementation leaves a lot to be desired. We’ve run into several bottlenecks when storing millions of entries in BPF LPM trie maps, such as entry lookup times taking hundreds of milliseconds to complete and freeing maps locking up a CPU for over 10 seconds. For instance, BPF maps are used when evaluating Cloudflare’s <a href="https://www.cloudflare.com/network-services/products/magic-firewall/"><u>Magic Firewall</u></a> rules and these bottlenecks have even led to traffic packet loss for some customers.</p><p>This post gives a refresher of how tries and prefix matching work, benchmark results, and a list of the shortcomings of the current BPF LPM trie implementation.</p>
    <div>
      <h2>A brief recap of tries</h2>
      <a href="#a-brief-recap-of-tries">
        
      </a>
    </div>
    <p>If it’s been a while since you last looked at the trie data structure (or if you’ve never seen it before), a trie is a tree data structure (similar to a binary tree) that allows you to store and search for data for a given key and where each node stores some number of key bits.</p><p>Searches are performed by traversing a path, which essentially reconstructs the key from the traversal path, meaning nodes do not need to store their full key. This differs from a traditional binary search tree (BST) where the primary invariant is that the left child node has a key that is less than the current node and the right child has a key that is greater. BSTs require that each node store the full key so that a comparison can be made at each search step.</p><p>Here’s an example that shows how a BST might store values for the keys:</p><ul><li><p>ABC</p></li><li><p>ABCD</p></li><li><p>ABCDEFGH</p></li><li><p>DEF</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1uXt5qwpyq7VzrqxXlHFLj/99677afd73a98b9ce04d30209065499f/image4.png" />
          </figure><p>In comparison, a trie for storing the same set of keys might look like this.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3TfFZmwekNAF18yWlOIVWh/58396a19e053bd1c02734a6a54eea18e/image8.png" />
          </figure><p>This way of splitting out bits is really memory-efficient when you have redundancy in your data, e.g. prefixes are common in your keys, because that shared data only requires a single set of nodes. It’s for this reason that tries are often used to efficiently store strings, e.g. dictionaries of words – storing the strings “ABC” and “ABCD” doesn’t require 3 bytes + 4 bytes (assuming ASCII), it only requires 3 bytes + 1 byte because “ABC” is shared by both (the exact number of bits required in the trie is implementation dependent).</p><p>Tries also allow more efficient searching. For instance, if you wanted to know whether the key “CAR” existed in the BST you are required to go to the right child of the root (the node with key “DEF”) and check its left child because this is where it would live if it existed. A trie is more efficient because it searches in prefix order. In this particular example, a trie knows at the root whether that key is in the trie or not.</p><p>This design makes tries perfectly suited for performing longest prefix matches and for working with IP routing using CIDR. CIDR was introduced to make more efficient use of the IP address space (no longer requiring that classes fall into 4 buckets of 8 bits) but comes with added complexity because now the network portion of an IP address can fall anywhere. Handling the CIDR scheme in IP routing tables requires matching on the longest (most specific) prefix in the table rather than performing a search for an exact match.</p><p>If searching a trie does a single-bit comparison at each node, that’s a binary trie. If searching compares more bits we call that a <b><i>multibit trie</i></b>. 
You can store anything you like in a trie, including IP and subnet addresses – it’s all just ones and zeroes.</p><p>Nodes in multibit tries use more memory than in binary tries, but since computers operate on multibit words anyhow, it’s more efficient from a microarchitecture perspective to use multibit tries because you can traverse through the bits faster, reducing the number of comparisons you need to make to search for your data. It’s a classic space vs time tradeoff.</p><p>There are other optimisations we can use with tries. The distribution of data that you store in a trie might not be uniform and there could be sparsely populated areas. For example, if you store the strings “A” and “BCDEFGHI” in a multibit trie, how many nodes do you expect to use? If you’re using ASCII, you could construct the binary trie with a root node and branch left for “A” or right for “B”. With 8-bit nodes, you’d need another 7 nodes to store “C”, “D”, “E”, “F”, “G”, “H", “I”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/LO6izFC5e06dRf9ra2roC/167ba5c4128fcebacc7b7a8eab199ea5/image5.png" />
          </figure><p>Since there are no other strings in the trie, that’s pretty suboptimal. Once you hit the first level after matching on “B” you know there’s only one string in the trie with that prefix, and you can avoid creating all the other nodes by using <b><i>path compression</i></b>. Path compression replaces nodes “C”, “D”, “E” etc. with a single one such as “I”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ADY3lNtF7NIgfUX7bX9vY/828a14e155d6530a4dc8cf3286ce8cc3/image13.png" />
          </figure><p>If you traverse the tree and hit “I”, you still need to compare the search key with the bits you skipped (“CDEFGH”) to make sure your search key matches the string. Exactly how and where you store the skipped bits is implementation dependent – BPF LPM tries simply store the entire key in the leaf node. As your data becomes denser, path compression is less effective.</p><p>What if your data distribution is dense and, say, all the first 3 levels in a trie are fully populated? In that case you can use <b><i>level compression</i></b><i> </i>and replace all the nodes in those levels with a single node that has 2**3 children. This is how Level-Compressed Tries work which are used for <a href="https://vincent.bernat.ch/en/blog/2017-ipv4-route-lookup-linux">IP route lookup</a> in the Linux kernel (see <a href="https://elixir.bootlin.com/linux/v6.12.43/source/net/ipv4/fib_trie.c"><u>net/ipv4/fib_trie.c</u></a>).</p><p>There are other optimisations too, but this brief detour is sufficient for this post because the BPF LPM trie implementation in the kernel doesn’t fully use the three we just discussed.</p>
    <div>
      <h2>How fast are BPF LPM trie maps?</h2>
      <a href="#how-fast-are-bpf-lpm-trie-maps">
        
      </a>
    </div>
    <p>Here are some numbers from running <a href="https://lore.kernel.org/bpf/20250827140149.1001557-1-matt@readmodwrite.com/"><u>BPF selftests benchmark</u></a> on AMD EPYC 9684X 96-Core machines. Here the trie has 10K entries, a 32-bit prefix length, and an entry for every key in the range [0, 10K).</p><table><tr><td><p>Operation</p></td><td><p>Throughput</p></td><td><p>Stddev</p></td><td><p>Latency</p></td></tr><tr><td><p>lookup</p></td><td><p>7.423M ops/s</p></td><td><p>0.023M ops/s</p></td><td><p>134.710 ns/op</p></td></tr><tr><td><p>update</p></td><td><p>2.643M ops/s</p></td><td><p>0.015M ops/s</p></td><td><p>378.310 ns/op</p></td></tr><tr><td><p>delete</p></td><td><p>0.712M ops/s</p></td><td><p>0.008M ops/s</p></td><td><p>1405.152 ns/op</p></td></tr><tr><td><p>free</p></td><td><p>0.573K ops/s</p></td><td><p>0.574K ops/s</p></td><td><p>1.743 ms/op</p></td></tr></table><p>The time to free a BPF LPM trie with 10K entries is noticeably large. We recently ran into an issue where this took so long that it caused <a href="https://lore.kernel.org/lkml/20250616095532.47020-1-matt@readmodwrite.com/"><u>soft lockup messages</u></a> to spew in production.</p><p>This benchmark gives some idea of worst case behaviour. Since the keys are so densely populated, path compression is completely ineffective. In the next section, we explore the lookup operation to understand the bottlenecks involved.</p>
    <div>
      <h2>Why are BPF LPM tries slow?</h2>
      <a href="#why-are-bpf-lpm-tries-slow">
        
      </a>
    </div>
    <p>The LPM trie implementation in <a href="https://elixir.bootlin.com/linux/v6.12.43/source/kernel/bpf/lpm_trie.c"><u>kernel/bpf/lpm_trie.c</u></a> has a couple of the optimisations we discussed in the introduction. It is capable of multibit comparisons at leaf nodes, but since there are only two child pointers in each internal node, if your tree is densely populated with a lot of data that only differs by one bit, these multibit comparisons degrade into single bit comparisons.</p><p>Here’s an example. Suppose you store the numbers 0, 1, and 3 in a BPF LPM trie. You might hope that since these values fit in a single 32 or 64-bit machine word, you could use a single comparison to decide which next node to visit in the trie. But that’s only possible if your trie implementation has 3 child pointers in the current node (which, to be fair, most trie implementations do). In other words, you want to make a 3-way branching decision but since BPF LPM tries only have two children, you’re limited to a 2-way branch.</p><p>A diagram for this 2-child trie is given below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ciL2t6aMyJHR2FfX41rNk/365abe47cf384729408cf9b98c65c0be/image9.png" />
          </figure><p>The leaf nodes are shown in green with the key, as a binary string, in the center. Even though a single 8-bit comparison is more than capable of figuring out which node has that key, the BPF LPM trie implementation resorts to inserting intermediate nodes (blue) to inject 2-way branching decisions into your path traversal because its parent (the orange root node in this case) only has 2 children. Once you reach a leaf node, BPF LPM tries can perform a multibit comparison to check the key. If a node supported pointers to more children, the above trie could instead look like this, allowing a 3-way branch and reducing the lookup time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17VoWl8OY6tzcARKDKuSjS/b9200dbeddf13f101b7085a549742f95/image3.png" />
          </figure><p>This 2-child design impacts the height of the trie. In the worst case, a completely full trie essentially becomes a binary search tree with height log2(nr_entries) and the height of the trie impacts how many comparisons are required to search for a key.</p><p>The above trie also shows how BPF LPM tries implement a form of path compression – you only need to insert an intermediate node where you have two nodes whose keys differ by a single bit. If instead of 3, you insert a key of 15 (0b1111), this won’t change the layout of the trie; you still only need a single node at the right child of the root.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ecfKSeoqN3bfBXmC9KHw5/3be952edea34d6b2cc867ba31ce14805/image12.png" />
          </figure><p>And finally, BPF LPM tries do not implement level compression. Again, this stems from the fact that nodes in the trie can only have 2 children. IP route tables tend to have many prefixes in common and you typically see densely packed tries at the upper levels which makes level compression very effective for tries containing IP routes.</p><p>Here’s a graph showing how the lookup throughput for LPM tries (measured in million ops/sec) degrades as the number of entries increases, from 1 entry up to 100K entries.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33I92exrEZTcUWOjxaBOqY/fb1de551b06e3272c8670d0117d738fa/image2.png" />
          </figure><p>Once you reach 1 million entries, throughput is around 1.5 million ops/sec, and continues to fall as the number of entries increases.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OhaAaI5Y2XJCofI9V39z/567a01b3335f29ef3b46ccdd74dc27e5/image1.png" />
          </figure><p>Why is this? Initially, the bottleneck is the L1 dcache miss rate: every node that must be traversed in the trie is a potential cache miss.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Gx4fOLKmhUKHegybQU7sl/4936239213f0061d5cbc2f5d6b63fde6/image11.png" />
          </figure><p>As you can see from the graph, L1 dcache miss rate remains relatively steady and yet the throughput continues to decline. At around 80K entries, dTLB miss rate becomes the bottleneck.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Jy7aTN3Nyo2EsbSzw313n/d26871fa417ffe293adb47fe7f7dc56b/image7.png" />
          </figure><p>Because BPF LPM tries allocate individual nodes dynamically from a freelist of kernel memory, these nodes can live at arbitrary addresses, which means traversing a path through a trie will almost certainly incur cache misses and potentially dTLB misses. This gets worse as the number of entries, and the height of the trie, increases.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CB3MvSvSgH1T2eY7Xlei8/81ebe572592ca71529d79564a88993f0/image10.png" />
          </figure>
    <div>
      <h2>Where do we go from here?</h2>
      <a href="#where-do-we-go-from-here">
        
      </a>
    </div>
    <p>By understanding the current limitations of the BPF LPM trie, we can now work towards building a more performant and efficient solution for the future of the Internet.</p><p>We’ve already contributed these benchmarks to the upstream Linux kernel — but that’s only the start. We have plans to improve the performance of BPF LPM tries, particularly the lookup function, which is heavily used for our workloads. This post covered a number of optimisations that are already used by the <a href="https://elixir.bootlin.com/linux/v6.12.43/source/net/ipv4/fib_trie.c"><u>net/ipv4/fib_trie.c</u></a> code, so a natural first step is to refactor that code so that a common Level Compressed trie implementation can be used. Expect future blog posts to explore this work in depth.</p><p>If you’re interested in looking at more performance numbers, <a href="https://wiki.cfdata.org/display/~jesper">Jesper Brouer</a> has recorded some here: <a href="https://github.com/xdp-project/xdp-project/blob/main/areas/bench/bench02_lpm-trie-lookup.org">https://github.com/xdp-project/xdp-project/blob/main/areas/bench/bench02_lpm-trie-lookup.org</a>.</p><h6><i>If the Linux kernel, performance, or optimising data structures excites you, </i><a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Engineering&amp;location=default"><i>our engineering teams are hiring</i></a><i>.</i></h6><p></p> ]]></content:encoded>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[eBPF]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">2A4WHjTqyxprwUMPaZ6tfj</guid>
            <dc:creator>Matt Fleming</dc:creator>
            <dc:creator>Jesper Brouer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Searching for the cause of hung tasks in the Linux kernel]]></title>
            <link>https://blog.cloudflare.com/searching-for-the-cause-of-hung-tasks-in-the-linux-kernel/</link>
            <pubDate>Fri, 14 Feb 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Linux kernel can produce a hung task warning. Searching the Internet and the kernel docs, you can find a brief explanation that the process is stuck in the uninterruptible state. ]]></description>
            <content:encoded><![CDATA[ <p>Depending on your configuration, the Linux kernel can produce a hung task warning message in its log. Searching the Internet and the kernel documentation, you can find a brief explanation that the kernel process is stuck in the uninterruptible state and hasn’t been scheduled on the CPU for an unexpectedly long period of time. That explains the warning’s meaning, but doesn’t provide the reason it occurred. In this blog post we’re going to explore how the hung task warning works, why it happens, whether it is a bug in the Linux kernel or the application itself, and whether it is worth monitoring at all.</p>
    <div>
      <h3>INFO: task XXX:1495882 blocked for more than YYY seconds.</h3>
      <a href="#info-task-xxx-1495882-blocked-for-more-than-yyy-seconds">
        
      </a>
    </div>
    <p>The hung task message in the kernel log looks like this:</p>
            <pre><code>INFO: task XXX:1495882 blocked for more than YYY seconds.
     Tainted: G          O       6.6.39-cloudflare-2024.7.3 #1
"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:XXX         state:D stack:0     pid:1495882 ppid:1      flags:0x00004002
. . .</code></pre>
            <p>Processes in Linux can be in different states. Some of them are running or ready to run on the CPU — they are in the <a href="https://elixir.bootlin.com/linux/v6.12.6/source/include/linux/sched.h#L99"><code><u>TASK_RUNNING</u></code></a> state. Others are waiting for some signal or event to happen, e.g. network packets to arrive or terminal input from a user. They are in a <code>TASK_INTERRUPTIBLE</code> state and can spend an arbitrary length of time in this state until being woken up by a signal. The most important thing about these states is that they can still receive signals, and be terminated by a signal. In contrast, a process in the <code>TASK_UNINTERRUPTIBLE</code> state is waiting only for certain special classes of events to wake it up, and can’t be interrupted by a signal. Signals are not delivered until the process emerges from this state, and if that never happens, only a system reboot can clear it. It’s marked with the letter <code>D</code> in the log shown above.</p><p>What if this wake-up event doesn’t happen or happens with a significant delay? (A “significant delay” may be on the order of seconds or minutes, depending on the system.) Then our dependent process is hung in this state. What if this dependent process holds some lock and prevents other processes from acquiring it? Or if we see many processes in the D state? Then it might tell us that some of the system resources are overwhelmed or are not working correctly. At the same time, this state is very valuable, especially if we want to preserve the process memory. It might be useful if part of the data is written to disk and another part is still in the process memory — we don’t want inconsistent data on a disk. Or maybe we want a snapshot of the process memory when the bug is hit. 
To preserve this behaviour, but make it more controlled, a new state was introduced in the kernel: <a href="https://lwn.net/Articles/288056/"><code><u>TASK_KILLABLE</u></code></a> — it still protects a process, but allows termination with a fatal signal. </p>
    <div>
      <h3>How Linux identifies the hung process</h3>
      <a href="#how-linux-identifies-the-hung-process">
        
      </a>
    </div>
    <p>The Linux kernel has a special thread called <code>khungtaskd</code>. It runs regularly depending on the settings, iterating over all processes in the <code>D</code> state. If a process is in this state for more than YYY seconds, we’ll see a message in the kernel log. There are settings for this daemon that can be changed according to your wishes:</p>
            <pre><code>$ sudo sysctl -a --pattern hung
kernel.hung_task_all_cpu_backtrace = 0
kernel.hung_task_check_count = 4194304
kernel.hung_task_check_interval_secs = 0
kernel.hung_task_panic = 0
kernel.hung_task_timeout_secs = 10
kernel.hung_task_warnings = 200</code></pre>
            <p>At Cloudflare, we changed the notification threshold <code>kernel.hung_task_timeout_secs</code> from the default 120 seconds to 10 seconds. You can adjust the value for your system depending on configuration and how critical this delay is for you. If the process spends more than <code>hung_task_timeout_secs</code> seconds in the D state, a log entry is written, and our internal monitoring system emits an alert based on this log. Another important setting here is <code>kernel.hung_task_warnings</code> — the total number of messages that will be sent to the log. We limit it to 200 messages and reset it every 15 minutes. It allows us not to be overwhelmed by the same issue, and at the same time doesn’t stop our monitoring for too long. You can make it unlimited by <a href="https://docs.kernel.org/admin-guide/sysctl/kernel.html#hung-task-warnings"><u>setting the value to "-1"</u></a>.</p><p>To better understand the root causes of the hung tasks and how a system can be affected, we’re going to review more detailed examples. </p>
    <div>
      <h3>Example #1 or XFS</h3>
      <a href="#example-1-or-xfs">
        
      </a>
    </div>
    <p>Typically, there is a meaningful process or application name in the log, but sometimes you might see something like this:</p>
            <pre><code>INFO: task kworker/13:0:834409 blocked for more than 11 seconds.
 	Tainted: G      	O   	6.6.39-cloudflare-2024.7.3 #1
"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/13:0	state:D stack:0 	pid:834409 ppid:2   flags:0x00004000
Workqueue: xfs-sync/dm-6 xfs_log_worker</code></pre>
            <p>In this log, <code>kworker</code> is the kernel thread. It’s used as a deferring mechanism, meaning a piece of work will be scheduled to be executed in the future. Under <code>kworker</code>, the work is aggregated from different tasks, which makes it difficult to tell which application is experiencing a delay. Luckily, the <code>kworker</code> is accompanied by the <a href="https://docs.kernel.org/core-api/workqueue.html"><code><u>Workqueue</u></code></a> line. <code>Workqueue</code> is a linked list, usually predefined in the kernel, where these pieces of work are added and performed by the <code>kworker</code> in the order they were added to the queue. The <code>Workqueue</code> name <code>xfs-sync</code> and <a href="https://elixir.bootlin.com/linux/v6.12.6/source/kernel/workqueue.c#L6096"><u>the function which it points to</u></a>, <code>xfs_log_worker</code>, might give a good clue where to look. Here we can make an assumption that the <a href="https://en.wikipedia.org/wiki/XFS"><u>XFS</u></a> is under pressure and check the relevant metrics. It helped us to discover that due to some configuration changes, we forgot <code>no_read_workqueue</code> / <code>no_write_workqueue</code> flags that were introduced some time ago to <a href="https://blog.cloudflare.com/speeding-up-linux-disk-encryption/"><u>speed up Linux disk encryption</u></a>.</p><p><i>Summary</i>: In this case, nothing critical happened to the system, but the hung tasks warnings gave us an alert that our file system had slowed down.</p>
    <div>
      <h3>Example #2 or Coredump</h3>
      <a href="#example-2-or-coredump">
        
      </a>
    </div>
    <p>Let’s take a look at the next hung task log and its decoded stack trace:</p>
            <pre><code>INFO: task test:964 blocked for more than 5 seconds.
      Not tainted 6.6.72-cloudflare-2025.1.7 #1
"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:test            state:D stack:0     pid:964   ppid:916    flags:0x00004000
Call Trace:
&lt;TASK&gt;
__schedule (linux/kernel/sched/core.c:5378 linux/kernel/sched/core.c:6697) 
schedule (linux/arch/x86/include/asm/preempt.h:85 (discriminator 13) linux/kernel/sched/core.c:6772 (discriminator 13)) 
do_exit (linux/kernel/exit.c:433 (discriminator 4) linux/kernel/exit.c:825 (discriminator 4)) 
? finish_task_switch.isra.0 (linux/arch/x86/include/asm/irqflags.h:42 linux/arch/x86/include/asm/irqflags.h:77 linux/kernel/sched/sched.h:1385 linux/kernel/sched/core.c:5132 linux/kernel/sched/core.c:5250) 
do_group_exit (linux/kernel/exit.c:1005) 
get_signal (linux/kernel/signal.c:2869) 
? srso_return_thunk (linux/arch/x86/lib/retpoline.S:217) 
? hrtimer_try_to_cancel.part.0 (linux/kernel/time/hrtimer.c:1347) 
arch_do_signal_or_restart (linux/arch/x86/kernel/signal.c:310) 
? srso_return_thunk (linux/arch/x86/lib/retpoline.S:217) 
? hrtimer_nanosleep (linux/kernel/time/hrtimer.c:2105) 
exit_to_user_mode_prepare (linux/kernel/entry/common.c:176 linux/kernel/entry/common.c:210) 
syscall_exit_to_user_mode (linux/arch/x86/include/asm/entry-common.h:91 linux/kernel/entry/common.c:141 linux/kernel/entry/common.c:304) 
? srso_return_thunk (linux/arch/x86/lib/retpoline.S:217) 
do_syscall_64 (linux/arch/x86/entry/common.c:88) 
entry_SYSCALL_64_after_hwframe (linux/arch/x86/entry/entry_64.S:121) 
&lt;/TASK&gt;</code></pre>
            <p>The stack trace says that the process or application <code>test</code> was blocked <code>for more than 5 seconds</code>. We might recognise this user space application by the name, but why is it blocked? It’s always helpful to check the stack trace when looking for a cause. The most interesting line here is <code>do_exit (linux/kernel/exit.c:433 (discriminator 4) linux/kernel/exit.c:825 (discriminator 4))</code>. The <a href="https://elixir.bootlin.com/linux/v6.6.67/source/kernel/exit.c#L825"><u>source code</u></a> points to the <code>coredump_task_exit</code> function. Additionally, checking the process metrics revealed that the application crashed during the time when the warning message appeared in the log. When a process is terminated abnormally by certain signals, <a href="https://man7.org/linux/man-pages/man5/core.5.html"><u>the Linux kernel can provide a core dump file</u></a>, if enabled. The mechanism is as follows: when a process terminates, the kernel makes a snapshot of the process memory before exiting and either writes it to a file or sends it through a socket to a handler — such as <a href="https://systemd.io/COREDUMP/"><u>systemd-coredump</u></a> or your custom one. When it happens, the kernel moves the process to the <code>D</code> state to preserve its memory and prevent early termination. The higher the process memory usage, the longer it takes to get a core dump file, and the higher the chance of getting a hung task warning.</p><p>Let’s check our hypothesis by triggering it with a small Go program. We’ll use the default Linux coredump handler and decrease the hung task threshold to 1 second.</p><p>Coredump settings:</p>
            <pre><code>$ sudo sysctl -a --pattern kernel.core
kernel.core_pattern = core
kernel.core_pipe_limit = 16
kernel.core_uses_pid = 1</code></pre>
            <p>You can make changes with <a href="https://man7.org/linux/man-pages/man8/sysctl.8.html"><u>sysctl</u></a>:</p>
            <pre><code>$ sudo sysctl -w kernel.core_uses_pid=1</code></pre>
            <p>Hung task settings:</p>
            <pre><code>$ sudo sysctl -a --pattern hung
kernel.hung_task_all_cpu_backtrace = 0
kernel.hung_task_check_count = 4194304
kernel.hung_task_check_interval_secs = 0
kernel.hung_task_panic = 0
kernel.hung_task_timeout_secs = 1
kernel.hung_task_warnings = -1</code></pre>
            <p>Go program:</p>
            <pre><code>$ cat main.go
package main

import (
	"os"
	"time"
)

func main() {
	_, err := os.ReadFile("test.file")
	if err != nil {
		panic(err)
	}
	time.Sleep(8 * time.Minute) 
}</code></pre>
            <p>This program reads a 10 GB file into process memory. Let’s create the file:</p>
            <pre><code>$ yes this is 10GB file | head -c 10GB &gt; test.file</code></pre>
            <p>The last step is to build the Go program, crash it, and watch our kernel log:</p>
            <pre><code>$ go mod init test
$ go build .
$ GOTRACEBACK=crash ./test
$ (Ctrl+\)</code></pre>
            <p>Hooray! We can see our hung task warning:</p>
            <pre><code>$ sudo dmesg -T | tail -n 31
INFO: task test:8734 blocked for more than 22 seconds.
      Not tainted 6.6.72-cloudflare-2025.1.7 #1
      Blocked by coredump.
"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:test            state:D stack:0     pid:8734  ppid:8406   task_flags:0x400448 flags:0x00004000</code></pre>
            <p>By the way, have you noticed the <code>Blocked by coredump.</code> line in the log? It was recently added to the <a href="https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-nonmm-stable&amp;id=23f3f7625cfb55f92e950950e70899312f54afb7"><u>upstream</u></a> code to improve visibility and remove the blame from the process itself. The patch also added the <code>task_flags</code> information, as <code>Blocked by coredump</code> is detected via the flag <a href="https://elixir.bootlin.com/linux/v6.13.1/source/include/linux/sched.h#L1675"><code><u>PF_POSTCOREDUMP</u></code></a>, and knowing all the task flags is useful for further root-cause analysis.</p><p><i>Summary</i>: This example showed that even if everything suggests that the application is the problem, the real root cause can be something else — in this case, <code>coredump</code>.</p>
    <div>
      <h3>Example #3 or rtnl_mutex</h3>
      <a href="#example-3-or-rtnl_mutex">
        
      </a>
    </div>
    <p>This one was tricky to debug. Usually, the alerts are limited by one or two different processes, meaning only a certain application or subsystem experiences an issue. In this case, we saw dozens of unrelated tasks hanging for minutes with no improvements over time. Nothing else was in the log, most of the system metrics were fine, and existing traffic was being served, but it was not possible to ssh to the server. New Kubernetes container creations were also stalling. Analyzing the stack traces of different tasks initially revealed that all the traces were limited to just three functions:</p>
            <pre><code>rtnetlink_rcv_msg+0x9/0x3c0
dev_ethtool+0xc6/0x2db0 
bonding_show_bonds+0x20/0xb0</code></pre>
            <p>Further investigation showed that all of these functions were waiting for <a href="https://elixir.bootlin.com/linux/v6.6.74/source/net/core/rtnetlink.c#L76"><code><u>rtnl_lock</u></code></a> to be acquired. It looked like some application had acquired the <code>rtnl_mutex</code> and never released it, and all other processes were stuck in the <code>D</code> state waiting for this lock.</p><p>The RTNL lock is primarily used by the kernel networking subsystem for any network-related configuration, for both reads and writes. The RTNL is a global <b>mutex</b> lock, although <a href="https://lpc.events/event/18/contributions/1959/"><u>upstream efforts</u></a> are under way to split the RTNL up per network namespace (netns).</p><p>From the hung task reports, we can observe the “victims” that are stalled waiting for the lock, but how do we identify the task that has been holding this lock for too long? For troubleshooting this, we leveraged <code>BPF</code> via a <code>bpftrace</code> script, as this allows us to inspect the running kernel state. The <a href="https://elixir.bootlin.com/linux/v6.6.75/source/include/linux/mutex.h#L67"><u>kernel's mutex implementation</u></a> has a struct member called <code>owner</code>. It contains a pointer to the <a href="https://elixir.bootlin.com/linux/v6.6.75/source/include/linux/sched.h#L746"><code><u>task_struct</u></code></a> of the mutex-owning process, except that it is encoded as type <code>atomic_long_t</code>: the mutex implementation stores some state information in the lower 3 bits (mask <code>0x7</code>) of this pointer. Thus, to read and dereference this <code>task_struct</code> pointer, we must first mask off those lower bits.</p><p>Our <code>bpftrace</code> script to determine who holds the mutex is as follows:</p>
            <pre><code>#!/usr/bin/env bpftrace
interval:s:10 {
  $rtnl_mutex = (struct mutex *) kaddr("rtnl_mutex");
  $owner = (struct task_struct *) ($rtnl_mutex-&gt;owner.counter &amp; ~0x07);
  if ($owner != 0) {
    printf("rtnl_mutex-&gt;owner = %u %s\n", $owner-&gt;pid, $owner-&gt;comm);
  }
}</code></pre>
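<p>The pointer-masking step in the script can be sanity-checked outside the kernel. Here is a tiny Python sketch with a made-up, 8-byte-aligned pointer value (in the real case, the value comes from <code>rtnl_mutex-&gt;owner</code>):</p>

```python
# The mutex owner field packs state flags into the low 3 bits of the
# task_struct pointer (mask 0x7), which is safe because task_struct
# allocations are at least 8-byte aligned. The pointer below is made up.
MUTEX_FLAGS_MASK = 0x7

task_struct_ptr = 0xFFFF888123456740   # hypothetical, 8-byte-aligned address
owner = task_struct_ptr | 0x3          # kernel has set two low flag bits

recovered = owner & ~MUTEX_FLAGS_MASK  # same masking as in the bpftrace script
print(recovered == task_struct_ptr)    # → True
```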
            <p>In this script, the address of the global <code>rtnl_mutex</code> lock is exposed via <code>/proc/kallsyms</code> – using the <code>bpftrace</code> helper function <code>kaddr()</code>, we obtain a pointer to the <code>struct mutex</code> from that symbol. We can then periodically (via <code>interval:s:10</code>) check whether someone is holding the lock.</p><p>In the output we had this:</p>
            <pre><code>rtnl_mutex-&gt;owner = 3895365 calico-node</code></pre>
            <p>This allowed us to quickly identify <code>calico-node</code> as the process holding the RTNL lock for too long. To see where this process itself was stalled, we read its call stack from <code>/proc/3895365/stack</code>. It showed that the root cause was a WireGuard config change: the function <code>wg_set_device()</code> was holding the RTNL lock while <code>peer_remove_after_dead()</code> waited too long for a <code>napi_disable()</code> call. We continued debugging with a tool called <a href="https://drgn.readthedocs.io/en/latest/user_guide.html#stack-traces"><code><u>drgn</u></code></a>, a programmable debugger that can inspect a running kernel via a Python-like interactive shell. We still haven’t discovered the root cause of the WireGuard issue and have <a href="https://lore.kernel.org/lkml/CALrw=nGoSW=M-SApcvkP4cfYwWRj=z7WonKi6fEksWjMZTs81A@mail.gmail.com/"><u>asked upstream</u></a> for help, but that is another story.</p><p><i>Summary</i>: The hung task messages were the only ones we had in the kernel log. Each stack trace was unique, but by carefully analyzing them, we could spot similarities and continue debugging with other instruments.</p>
    <div>
      <h3>Epilogue</h3>
      <a href="#epilogue">
        
      </a>
    </div>
    <p>Your system might produce different hung task warnings, and we have seen many others not mentioned here. Each case is unique, and there is no standard approach to debugging them. But hopefully this blog post helps you better understand why it’s good to have these warnings enabled, how they work, and what the meaning is behind them. We tried to provide some navigation guidance for the debugging process as well:</p><ul><li><p>analyzing the stack traces is a good starting point, even if all the messages look unrelated, as we saw in example #3</p></li><li><p>keep in mind that the alert might be misleading, pointing to the victim and not the offender, as we saw in example #2 and example #3</p></li><li><p>even if the kernel doesn’t schedule your application on the CPU, puts it in the <code>D</code> state, and emits the warning, the real problem might still be in the application code</p></li></ul><p>Good luck with your debugging, and hopefully this material will help you on this journey!</p>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Kernel]]></category>
            <category><![CDATA[Monitoring]]></category>
            <guid isPermaLink="false">3UHNgpNPKn2IAwDUzD4m3a</guid>
            <dc:creator>Oxana Kharitonova</dc:creator>
            <dc:creator>Jesper Brouer</dc:creator>
        </item>
    </channel>
</rss>