of virtual memory being preallocated at startup by editing the following config
variables:
-* ``CONFIG_RTE_MAX_MEMSEG_LISTS`` controls how many segment lists can DPDK have
-* ``CONFIG_RTE_MAX_MEM_MB_PER_LIST`` controls how much megabytes of memory each
+* ``RTE_MAX_MEMSEG_LISTS`` controls how many segment lists DPDK can have
+* ``RTE_MAX_MEM_MB_PER_LIST`` controls how many megabytes of memory each
segment list can address
-* ``CONFIG_RTE_MAX_MEMSEG_PER_LIST`` controls how many segments each segment can
+* ``RTE_MAX_MEMSEG_PER_LIST`` controls how many segments each segment list can
have
-* ``CONFIG_RTE_MAX_MEMSEG_PER_TYPE`` controls how many segments each memory type
+* ``RTE_MAX_MEMSEG_PER_TYPE`` controls how many segments each memory type
can have (where "type" is defined as "page size + NUMA node" combination)
-* ``CONFIG_RTE_MAX_MEM_MB_PER_TYPE`` controls how much megabytes of memory each
+* ``RTE_MAX_MEM_MB_PER_TYPE`` controls how many megabytes of memory each
memory type can address
-* ``CONFIG_RTE_MAX_MEM_MB`` places a global maximum on the amount of memory
+* ``RTE_MAX_MEM_MB`` places a global maximum on the amount of memory
DPDK can reserve
Normally, these options do not need to be changed.
Refer to the rte_malloc() function description in the *DPDK API Reference*
manual for more information.
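
For reference, a minimal sketch of allocating and freeing memory from the DPDK
heap with ``rte_malloc()`` might look like the following (the tag string and
sizes are arbitrary; EAL is assumed to be initialized already):

.. code-block:: c

   #include <stdint.h>
   #include <rte_malloc.h>

   /* Minimal sketch: allocate 1 KB from the DPDK heap with 64-byte
    * alignment. rte_malloc() may only be called after rte_eal_init()
    * has completed successfully. */
   static int
   allocate_example(void)
   {
       uint8_t *buf = rte_malloc("example_buf", 1024, 64);

       if (buf == NULL)
           return -1;

       /* ... use the buffer ... */

       rte_free(buf);
       return 0;
   }
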
-Cookies
-~~~~~~~
-
-When CONFIG_RTE_MALLOC_DEBUG is enabled, the allocated memory contains
-overwrite protection fields to help identify buffer overflows.
Alignment and NUMA Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-----------------------------
- Test with various burst size values (256, 128, 64, 32) using
- CONFIG_RTE_GRAPH_BURST_SIZE config option.
+ RTE_GRAPH_BURST_SIZE config option.
Testing shows that, on x86 and arm64 servers, the sweet spot is a burst
size of 256, while on arm64 embedded SoCs it is either 64 or 128.
-- Disable node statistics (using ``CONFIG_RTE_LIBRTE_GRAPH_STATS`` config option)
+- Disable node statistics (using ``RTE_LIBRTE_GRAPH_STATS`` config option)
if not needed.
-- Use arm64 optimized memory copy for arm64 architecture by
- selecting ``CONFIG_RTE_ARCH_ARM64_MEMCPY``.
Programming model
-----------------
The RTE_LIBRTE_IP_FRAG_TBL_STAT config macro controls statistics collection for the Fragment Table.
This macro is not enabled by default.
-
-The RTE_LIBRTE_IP_FRAG_DEBUG controls debug logging of IP fragments processing and reassembling.
-This macro is disabled by default.
-Note that while logging contains a lot of detailed information,
-it slows down packet processing and might cause the loss of a lot of packets.
.. code-block:: console
- # insmod kmod/rte_kni.ko
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko
.. _kni_loopback_mode:
.. code-block:: console
- # insmod kmod/rte_kni.ko lo_mode=lo_mode_fifo
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo
The ``lo_mode_fifo`` loopback option will loop back ring enqueue/dequeue
operations in kernel space.
.. code-block:: console
- # insmod kmod/rte_kni.ko lo_mode=lo_mode_fifo_skb
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
The ``lo_mode_fifo_skb`` loopback option will loop back ring enqueue/dequeue
operations and sk buffer copies in kernel space.
.. code-block:: console
- # insmod kmod/rte_kni.ko kthread_mode=single
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=single
This mode will create only one kernel thread for all KNI interfaces to
receive data on the kernel side. By default, this kernel thread is not
.. code-block:: console
- # insmod kmod/rte_kni.ko kthread_mode=multiple
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=multiple
This mode will create a separate kernel thread for each KNI interface to
receive data on the kernel side. The core affinity of each ``kni_thread``
.. code-block:: console
- # insmod kmod/rte_kni.ko carrier=on
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=on
To set the default carrier state to *off*:
.. code-block:: console
- # insmod kmod/rte_kni.ko carrier=off
+ # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=off
If the ``carrier`` parameter is not specified, the default carrier state
of KNI interfaces will be set to *off*.
.. note::
The Link Bonding PMD Library is enabled by default in the build
- configuration files, the library can be disabled by setting
- ``CONFIG_RTE_LIBRTE_PMD_BOND=n`` and recompiling the DPDK.
+ configuration; the library can be disabled using the meson option
+ "-Ddisable_drivers=net/bond".
+
Link Bonding Modes Overview
---------------------------
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,bond_opt0=..,bond opt1=..'--vdev 'net_bonding1,bond _opt0=..,bond_opt1=..'
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,bond_opt0=..,bond_opt1=..' --vdev 'net_bonding1,bond_opt0=..,bond_opt1=..'
Link Bonding EAL Options
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- $RTE_TARGET/app/testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
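
Bonded devices can also be created at run time from within an application.
A minimal sketch using the bonding C API is shown below; port ids 0 and 1 are
assumed to be valid, already-probed ethdev ports:

.. code-block:: c

   #include <rte_eth_bond.h>

   /* Sketch only: create an active-backup bonded device on NUMA socket 0
    * and attach two already-probed ethdev ports (port ids 0 and 1 are
    * assumed to exist and be stopped). */
   static int
   setup_bonded_port(void)
   {
       int bond_port = rte_eth_bond_create("net_bonding0",
                                           BONDING_MODE_ACTIVE_BACKUP, 0);
       if (bond_port < 0)
           return bond_port;

       if (rte_eth_bond_slave_add(bond_port, 0) != 0 ||
           rte_eth_bond_slave_add(bond_port, 1) != 0)
           return -1;

       /* Select the primary slave used while the bond is active. */
       return rte_eth_bond_primary_set(bond_port, 0);
   }
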
Please note that turning on LTO considerably extends the build time.
-When using make based build, link time optimization can be enabled for
-the whole DPDK by setting:
-
-.. code-block:: console
-
- CONFIG_RTE_ENABLE_LTO=y
-
-in config file.
-
-For the meson based build it can be enabled by setting meson built-in
-'b_lto' option:
+Link time optimization can be enabled by setting the meson built-in 'b_lto' option:
.. code-block:: console
Debug
-----
-In debug mode (CONFIG_RTE_MBUF_DEBUG is enabled),
-the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption, bad type, and so on).
+In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as buffer corruption,
+bad type, and so on).
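
The same checks can also be triggered explicitly. As a rough sketch, an
application can call ``rte_mbuf_sanity_check()`` on a suspect mbuf:

.. code-block:: c

   #include <rte_mbuf.h>

   /* Sketch: explicitly verify an mbuf; the second argument is non-zero
    * when 'm' is the first segment (header) of a packet. The function
    * panics if the mbuf is found to be corrupted. */
   static void
   check_mbuf(const struct rte_mbuf *m)
   {
       rte_mbuf_sanity_check(m, 1);
   }
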
Use Cases
---------
Cookies
-------
-In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), cookies are added at the beginning and end of allocated blocks.
+In debug mode, cookies are added at the beginning and end of allocated blocks.
The allocated objects then contain overwrite protection fields to help debugging buffer overflows.
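
As an illustrative sketch, ``rte_mempool_audit()`` can be used to walk a pool
and verify its consistency, including the cookies when debug mode is on:

.. code-block:: c

   #include <rte_mempool.h>

   /* Sketch: verify the consistency of a mempool. With debug enabled this
    * also checks the cookies of free objects and panics on corruption. */
   static void
   audit_pool(struct rte_mempool *mp)
   {
       rte_mempool_audit(mp);
   }
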
Stats
-----
-In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled),
-statistics about get from/put in the pool are stored in the mempool structure.
+In debug mode, statistics about gets from and puts to the pool are stored in the mempool structure.
Statistics are per-lcore to avoid concurrent access to statistics counters.
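
As a rough sketch, the collected counters can be inspected with
``rte_mempool_dump()``:

.. code-block:: c

   #include <stdio.h>
   #include <rte_mempool.h>

   /* Sketch: print the state of a mempool, including the per-lcore
    * statistics when the mempool debug mode is enabled. */
   static void
   show_pool_stats(struct rte_mempool *mp)
   {
       rte_mempool_dump(stdout, mp);
   }
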
Memory Alignment Constraints on x86 architecture
The cache is composed of a small, per-core table of pointers and its length (used as a stack).
This internal cache can be enabled or disabled at creation of the pool.
-The maximum size of the cache is static and is defined at compilation time (CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE).
+The maximum size of the cache is static and is defined at compilation time (RTE_MEMPOOL_CACHE_MAX_SIZE).
:numref:`figure_mempool` shows a cache in operation.
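
As an illustration, a per-lcore cache is requested through the ``cache_size``
argument of ``rte_mempool_create()``; the pool name, element count and sizes
below are arbitrary:

.. code-block:: c

   #include <rte_memory.h>
   #include <rte_mempool.h>

   /* Sketch: create a pool of 8192 objects of 2 KB each with a per-lcore
    * cache of up to 256 objects. The cache size must not exceed
    * RTE_MEMPOOL_CACHE_MAX_SIZE; passing 0 disables the cache. */
   static struct rte_mempool *
   create_example_pool(void)
   {
       return rte_mempool_create("example_pool", 8192, 2048,
                                 256, 0,          /* cache size, priv size */
                                 NULL, NULL,      /* pool constructor */
                                 NULL, NULL,      /* object constructor */
                                 SOCKET_ID_ANY, 0);
   }
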
for details about application profiling.
-Profiling with VTune
-~~~~~~~~~~~~~~~~~~~~
-
-To allow VTune attaching to the DPDK application, reconfigure and recompile
-the DPDK with ``CONFIG_RTE_ETHDEV_RXTX_CALLBACKS`` and
-``CONFIG_RTE_ETHDEV_PROFILE_WITH_VTUNE`` enabled.
-
-
Profiling on ARM64
------------------
mode (kernel space).
By default the ``rte_rdtsc()`` implementation uses a portable ``cntvct_el0``
-scheme. Application can choose the PMU based implementation with
-``CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU``.
+scheme.
The example below shows the steps to configure the PMU based cycle counter on
an ARMv8 machine.
cd armv8_pmu_cycle_counter_el0
make
sudo insmod pmu_el0_cycle_counter.ko
- cd $DPDK_DIR
- make config T=arm64-armv8a-linux-gcc
- echo "CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU=y" >> build/.config
- make
+
+Please refer to :doc:`../linux_gsg/build_dpdk` for details on compiling DPDK with meson.
.. warning::
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RED functionality in the DPDK QoS scheduler is disabled by default.
-To enable it, use the DPDK configuration parameter:
-
-::
-
- CONFIG_RTE_SCHED_RED=y
-
-This parameter must be set to y.
-The parameter is found in the build configuration files in the DPDK/config directory,
-for example, DPDK/config/common_linux.
+To enable it, the ``RTE_SCHED_RED`` parameter must be defined in the build
+configuration files in the DPDK/config directory.
RED configuration parameters are specified in the rte_red_params structure within the rte_sched_port_params structure
that is passed to the scheduler on initialization.
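
As a rough sketch (the field values are illustrative only), one set of RED
thresholds could be filled in as follows before being placed into the
scheduler port parameters:

.. code-block:: c

   #include <rte_red.h>

   /* Sketch with illustrative values: one set of WRED thresholds.
    * In the real configuration, one such structure is provided per
    * traffic class and per packet color inside rte_sched_port_params. */
   static const struct rte_red_params red_cfg = {
       .min_th = 32,     /* queue length at which dropping may start */
       .max_th = 64,     /* queue length at which all packets are dropped */
       .maxp_inv = 10,   /* inverse of the maximum drop probability (1/10) */
       .wq_log2 = 9,     /* log2 of the EWMA filter weight */
   };
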
RED parameters are specified separately for four traffic classes and three packet colors (green, yellow and red)
quiescent state query and update the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
-However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these APIs aid
-in debugging issues. One can mark the access to shared data structures on the
-reader side using these APIs. The ``rte_rcu_qsbr_quiescent()`` will check if
-all the locks are unlocked.
+However, these APIs can aid in debugging issues. One can mark the access to
+shared data structures on the reader side using these APIs. The
+``rte_rcu_qsbr_quiescent()`` will check if all the locks are unlocked.
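
A minimal reader-side sketch is shown below; the QSBR variable and thread id
are placeholders for an initialized variable and a registered, online reader:

.. code-block:: c

   #include <rte_rcu_qsbr.h>

   /* Sketch: reader-side critical section. rte_rcu_qsbr_lock()/unlock()
    * are empty unless the RCU debug mode is enabled, in which case they
    * track outstanding critical sections for this thread. */
   static void
   reader_sketch(struct rte_rcu_qsbr *v, unsigned int thread_id)
   {
       rte_rcu_qsbr_lock(v, thread_id);
       /* ... access the shared data structure ... */
       rte_rcu_qsbr_unlock(v, thread_id);

       /* Report the quiescent state; with debug enabled this checks that
        * all locks taken by this thread have been released. */
       rte_rcu_qsbr_quiescent(v, thread_id);
   }
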
Resource reclamation framework for DPDK
---------------------------------------
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
-``CONFIG_RTE_ENABLE_TRACE_FP`` configuration parameter.
-The ``enable_trace_fp`` option shall be used for the same for meson build.
+the ``enable_trace_fp`` meson build option.
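
For illustration, a fast path tracepoint is declared with the same syntax as a
regular one; the names below are hypothetical:

.. code-block:: c

   #include <stdint.h>
   #include <rte_trace_point.h>

   /* Sketch: a fast path tracepoint taking one argument. Its body is
    * compiled out unless enable_trace_fp is set at build time. The
    * tracepoint must still be registered with RTE_TRACE_POINT_REGISTER
    * in a C file of the application. */
   RTE_TRACE_POINT_FP(
       app_trace_fp_burst,
       RTE_TRACE_POINT_ARGS(uint32_t nb_pkts),
       rte_trace_point_emit_u32(nb_pkts);
   )
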
Event record mode
-----------------
Setting the Target CPU Type
---------------------------
-The DPDK supports CPU microarchitecture-specific optimizations by means of CONFIG_RTE_MACHINE option
-in the DPDK configuration file.
+The DPDK supports CPU microarchitecture-specific optimizations by means of the RTE_MACHINE option.
The degree of optimization depends on the compiler's ability to optimize for a specific microarchitecture,
therefore it is preferable to use the latest compiler versions whenever possible.