mempool/stack: add lock-free stack mempool handler
author	Gage Eads <gage.eads@intel.com>	Wed, 3 Apr 2019 23:20:20 +0000 (18:20 -0500)
committer	Thomas Monjalon <thomas@monjalon.net>	Thu, 4 Apr 2019 20:06:16 +0000 (22:06 +0200)
commit	e75bc77f98bcf1e9772022b6f833da588b59c8e1
tree	eb244b77aae6f2d1243b2303fd4fefab0cb7c420
parent	0420378bbfc4ff14261f5fa84f16ffe142048061

This commit adds support for a lock-free (linked-list based) stack mempool
handler.
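
For context, a minimal sketch of the linked-list (Treiber) stack technique
the handler builds on is shown below. This is illustrative only, not the
in-tree implementation (which lives in the rte_stack library); the names
lf_head, lf_elem, lf_push, and lf_pop are hypothetical. The head pointer is
paired with a modification counter, and both are swapped in a single
16-byte compare-and-swap (GCC's generic __atomic builtins; on x86-64 this
needs -mcx16 or libatomic), which defeats the ABA problem:

#include <stddef.h>
#include <stdint.h>

struct lf_elem {
	struct lf_elem *next;
};

struct lf_head {
	struct lf_elem *top; /* top of the stack */
	uint64_t tag;        /* bumped on every pop to defeat ABA */
} __attribute__((aligned(16)));

static void
lf_push(struct lf_head *h, struct lf_elem *e)
{
	struct lf_head old, new;

	__atomic_load(h, &old, __ATOMIC_RELAXED);
	do {
		e->next = old.top;
		new.top = e;
		new.tag = old.tag;
		/* On failure, 'old' is refreshed with the current head. */
	} while (!__atomic_compare_exchange(h, &old, &new, 1,
			__ATOMIC_RELEASE, __ATOMIC_RELAXED));
}

static struct lf_elem *
lf_pop(struct lf_head *h)
{
	struct lf_head old, new;

	__atomic_load(h, &old, __ATOMIC_ACQUIRE);
	do {
		if (old.top == NULL)
			return NULL;
		/*
		 * Reading old.top->next is safe even if another thread
		 * pops old.top concurrently: mempool memory is never
		 * unmapped while the pool exists, and a stale 'next' is
		 * rejected by the tag comparison in the CAS below.
		 */
		new.top = old.top->next;
		new.tag = old.tag + 1;
	} while (!__atomic_compare_exchange(h, &old, &new, 1,
			__ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE));
	return old.top;
}

Note that only pop increments the tag: push has no ABA hazard, since its
CAS fails whenever the head it observed has changed.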

In mempool_perf_autotest, the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*; however:
- For applications with preemptible pthreads, a standard (lock-based)
  stack's worst-case performance (i.e. one thread being preempted while
  holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
  difference (see the usage sketch below).

*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
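
As a usage sketch, an application selects the handler by name and sizes a
per-lcore cache so that most alloc/free pairs never touch the shared
stack. The application-side names here (create_lf_pool and the size
macros) are hypothetical; "lf_stack" is the ops name this commit
registers:

#include <rte_lcore.h>
#include <rte_mempool.h>

#define NUM_ELTS   8191
#define ELT_SIZE   2048
#define CACHE_SIZE 256  /* per-lcore cache */

static struct rte_mempool *
create_lf_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("lf_pool", NUM_ELTS, ELT_SIZE,
				      CACHE_SIZE, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Select the lock-free stack handler added by this commit. */
	if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

With a cache in place, the shared stack is touched only in bulk when a
cache overflows or runs dry, which is why per-thread caches largely hide
the handler-level performance difference noted above.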

Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
doc/guides/prog_guide/env_abstraction_layer.rst
doc/guides/rel_notes/release_19_05.rst
drivers/mempool/stack/rte_mempool_stack.c