@bnomei
Created March 23, 2026 11:30
PR #15741 Hot Path Summary

This note summarizes the "hotpath" discussed in netty/netty#15741, based on the PR discussion and a Frigg inspection of the merged code in this checkout.

Merged commit in this tree: accd981104dfe23dbe6208a16d197b7b3f5b8c94

What "hotpath" Means Here

The PR discussion points to the adaptive allocator's thread-local direct-allocation fast path, not a general allocator slow path.

Why that interpretation is the right one:

  • The PR description says it reduces costly atomic operations in the "thread local allocation's fast path."
  • The added benchmark focuses on direct allocation throughput.
  • The review thread discusses keeping owner-thread segments "hot" in the local free list instead of mixing them with the external queue.

Relevant discussion links:

Minimal Hot Path

Using Frigg on the current tree, the minimal direct-allocation path is:

  1. microbench/src/main/java/io/netty/microbench/buffer/ByteBufAllocatorAllocPatternBenchmark.java#L214 directAllocation(...) calls state.performDirectAllocation().
  2. microbench/src/main/java/io/netty/microbench/buffer/ByteBufAllocatorAllocPatternBenchmark.java#L133 performDirectAllocation() picks a size, releases the previous buffer, then calls allocateDirect(allocator, size).
  3. microbench/src/main/java/io/netty/microbench/buffer/ByteBufAllocatorAllocPatternBenchmark.java#L129 allocateDirect(...) jumps into ByteBufAllocator.directBuffer(size).
  4. buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java#L393 The allocator takes the thread-local magazine path: tlMag.newBuffer() and then tlMag.tryAllocate(...).
  5. buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java#L844 Magazine.tryAllocate(...) immediately takes the no-lock branch when allocationLock == null, which is the thread-local case.
  6. buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java#L881 Allocation succeeds when the current chunk can readInitInto(...).
  7. buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java#L1306 SizeClassedChunk.readInitInto(...) calls nextAvailableSegmentOffset() to pick the backing segment.

The most important hot instruction path is the owner-thread segment selection and release path inside SizeClassedChunk.
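As a minimal model of the branch structure in step 5 (illustrative only; the class and field names loosely mirror the PR discussion but are not Netty's actual code), the no-lock owner-thread case can be sketched as:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: models the shape of a tryAllocate() that skips
// locking entirely for the thread-local ("owner thread") magazine.
final class ToyMagazine {
    // null for a thread-local magazine, non-null for a shared one.
    private final ReentrantLock allocationLock;
    private int remaining; // bytes left in the current toy "chunk"

    ToyMagazine(boolean shared, int chunkSize) {
        this.allocationLock = shared ? new ReentrantLock() : null;
        this.remaining = chunkSize;
    }

    /** Returns true if {@code size} bytes were carved off the current chunk. */
    boolean tryAllocate(int size) {
        if (allocationLock == null) {
            // Thread-local fast path: no atomics, no locking.
            return allocateHere(size);
        }
        if (allocationLock.tryLock()) {
            try {
                return allocateHere(size);
            } finally {
                allocationLock.unlock();
            }
        }
        return false; // contended; caller falls back to a slower path
    }

    private boolean allocateHere(int size) {
        if (remaining >= size) {
            remaining -= size;
            return true;
        }
        return false;
    }
}
```

The point of the sketch is that the owner-thread case is decided by a single null check, so the common allocation incurs neither lock acquisition nor atomic read-modify-write operations.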

What PR #15741 Changed In That Path

Why This Helps

The merged optimization is mostly about keeping owner-thread reuse local to the owner thread.

  • The thread-local magazine avoids lock acquisition on the fast path.
  • Recently freed owner-thread segments are reused from a local LIFO stack.
  • Cross-thread coordination is deferred until the local free list is empty or a different thread performs the release.
  • The design favors cache and core locality over aggressively draining the external queue into the local list.

That matches the author's comment in the review thread: the local segments are more likely to still be "hot" on the owner thread's core, so they are preferred over mixing in external segments.
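A generic sketch of that reuse policy (hypothetical names; Netty's real structures differ) pairs a plain owner-thread LIFO stack with a concurrent external queue that is only consulted when the local stack runs dry:

```java
import java.util.ArrayDeque;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of "keep owner-thread reuse local":
// owner-thread frees go to a cheap local LIFO; frees from other threads
// go to a concurrent external queue that is only drained on a local miss.
final class SegmentPool<T> {
    private final Thread owner = Thread.currentThread();
    private final ArrayDeque<T> localFree = new ArrayDeque<>(); // no atomics
    private final ConcurrentLinkedQueue<T> external = new ConcurrentLinkedQueue<>();

    void release(T segment) {
        if (Thread.currentThread() == owner) {
            localFree.push(segment);   // LIFO keeps recently used segments "hot"
        } else {
            external.offer(segment);   // cross-thread release is deferred work
        }
    }

    T acquire() {
        T s = localFree.poll();        // fast path: owner-thread stack
        if (s != null) {
            return s;
        }
        return external.poll();        // slow path: pull from the external queue
    }
}
```

Pulling from the external queue only on a local miss, rather than eagerly draining it into the local list, matches the locality preference described above.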

Benchmark Context Added By The PR

The PR added microbench/src/main/java/io/netty/microbench/buffer/ByteBufAllocatorAllocPatternBenchmark.java, which exercises this path directly.
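The allocate/release cycle it measures can be approximated outside JMH with a plain loop (this sketch uses ByteBuffer.allocateDirect as a stand-in; the real benchmark goes through ByteBufAllocator.directBuffer and explicitly releases the previous buffer):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

// Rough stand-in for the benchmark's direct-allocation pattern:
// pick a size, drop the previous buffer, allocate a new direct buffer.
final class AllocPatternSketch {
    static long run(int rounds) {
        ByteBuffer previous = null;
        long allocated = 0;
        for (int i = 0; i < rounds; i++) {
            int size = 64 << ThreadLocalRandom.current().nextInt(4); // 64..512 bytes
            previous = ByteBuffer.allocateDirect(size); // old buffer just becomes garbage here
            allocated += size;
        }
        return allocated;
    }

    public static void main(String[] args) {
        System.out.println("allocated " + run(10_000) + " bytes in 10000 rounds");
    }
}
```

In the pooled allocator, each round of this pattern hits the owner-thread allocate/release cycle described above, which is exactly the path the PR makes cheaper.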

Two details from the benchmark are relevant:

Remaining Follow-Ups Mentioned In The PR Thread

The PR did not claim to solve everything. The follow-up items explicitly mentioned in the thread were:

  • size-class chunk queue sizing and better use of available queue capacity
  • removing extra reference-count operations on size-class chunks
  • possibly more improvements in later PRs rather than in this merge

In short: PR #15741 optimized the adaptive allocator by making the common owner-thread allocation/release cycle stay local, simpler, and less atomic-heavy.
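As a small generic illustration of why "less atomic-heavy" matters (not the PR's code): batching reference-count updates replaces several contended atomic read-modify-write operations with one.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Generic illustration: n separate retains cost n atomic RMW operations,
// while a batched retain(n) costs one. The PR thread lists removing extra
// reference-count operations on size-class chunks as follow-up work.
final class RefCounted {
    private final AtomicInteger refCnt = new AtomicInteger(1);

    void retain() { refCnt.incrementAndGet(); }  // one atomic per call
    void retain(int n) { refCnt.addAndGet(n); }  // one atomic for n refs

    int refCnt() { return refCnt.get(); }
}
```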
