The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
-ConnectX-5** and **Mellanox Bluefield** families of 10/25/40/50/100 Gb/s
-adapters as well as their virtual functions (VF) in SR-IOV context.
+ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox Bluefield** families
+of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
+in SR-IOV context.
Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
RTE_ETH_FDIR_REJECT).
-- Flow API.
+- Flow API, including :ref:`flow_isolated_mode`.
- Multiple process.
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
Toggle compilation of librte_pmd_mlx5 itself.
-- ``CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS`` (default **n**)
+- ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)
Build PMD with additional code to make it loadable without hard
dependencies on **libibverbs** nor **libmlx5**, which may not be installed
This option has no performance impact.
+- ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)
+
+ Embed the static flavour of the dependencies **libibverbs** and **libmlx5**
+ in the PMD shared library or the executable static binary.
+
- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)
Toggle debugging code and stricter compilation flags. Enabling this option
adds additional run-time checks and debugging messages at the cost of
lower performance.
+.. note::
+
+ For Bluefield, the target should be set to ``arm64-bluefield-linuxapp-gcc``.
+ This enables ``CONFIG_RTE_LIBRTE_MLX5_PMD`` and sets ``RTE_CACHE_LINE_SIZE``
+ to 64. The default armv8a configuration of both the make and meson builds
+ sets it to 128, which degrades performance.
+
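The note above can be sketched as a configure step. The target name is taken from the note; the command is only printed here, since actually running it requires a DPDK source tree and an AArch64 cross toolchain:

```shell
# Sketch: configure a make-based DPDK build for Bluefield (to be run from
# the DPDK source root). The target is the one named in the note above.
TARGET=arm64-bluefield-linuxapp-gcc
# Print the configure command instead of executing it.
echo "make config T=${TARGET}"
```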
Environment variables
~~~~~~~~~~~~~~~~~~~~~
A list of directories in which to search for the rdma-core "glue" plug-in,
separated by colons or semi-colons.
- Only matters when compiled with ``CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS``
+ Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
since ``LD_LIBRARY_PATH`` has no effect in this case.
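As a minimal sketch, the glue search path can be exported before launching a DPDK application. The variable name ``MLX5_GLUE_PATH`` and the directories shown are assumptions to adapt to where the rdma-core glue libraries are actually installed:

```shell
# Hypothetical glue plug-in directories; entries may be separated by
# colons or semi-colons, as described above.
export MLX5_GLUE_PATH="/usr/lib/dpdk-glue:/opt/dpdk/glue"
echo "$MLX5_GLUE_PATH"
```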
-- ``MLX5_PMD_ENABLE_PADDING``
-
- Enables HW packet padding in PCI bus transactions.
-
- When packet size is cache aligned and CRC stripping is enabled, 4 fewer
- bytes are written to the PCI bus. Enabling padding makes such packets
- aligned again.
-
- In cases where PCI bandwidth is the bottleneck, padding can improve
- performance by 10%.
-
- This is disabled by default since this can also decrease performance for
- unaligned packet sizes.
-
- ``MLX5_SHUT_UP_BF``
Configures HW Tx doorbell register as IO-mapped.
Supported on:
- - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5 and Bluefield.
- - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5 and Bluefield.
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+ - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
- ``rxq_cqe_pad_en`` parameter [int]
- CPU having 128B cacheline with ConnectX-5 and Bluefield.
+- ``rxq_pkt_pad_en`` parameter [int]
+
+ A nonzero value enables padding Rx packets to the cacheline size on PCI
+ transactions. This wastes some PCI bandwidth but can improve performance by
+ avoiding partial cacheline writes, which may cause a costly read-modify-write
+ memory transaction on some architectures. Disabled by default.
+
+ Supported on:
+
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+ - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+
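As a sketch, such run-time parameters are passed per device through the application's whitelist device arguments; the PCI address below is a placeholder to replace with your device's:

```shell
# Hypothetical PCI address; find yours with: lspci | grep Mellanox
PCI_ADDR="0000:03:00.0"
# Enable Rx packet padding for this device only.
DEVARGS="${PCI_ADDR},rxq_pkt_pad_en=1"
# Print the resulting testpmd invocation (not executed here).
echo "testpmd -w ${DEVARGS} -- -i"
```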
- ``mprq_en`` parameter [int]
A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
This option should be used in combination with ``txq_inline`` above.
- On ConnectX-4, ConnectX-4 LX, ConnectX-5 and Bluefield without
+ On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield without
Enhanced MPW:
- Disabled by default.
- In case ``txq_inline`` is set recommendation is 4.
- On ConnectX-5 and Bluefield with Enhanced MPW:
+ On ConnectX-5, ConnectX-6 and Bluefield with Enhanced MPW:
- Set to 8 by default.
- ``txq_mpw_en`` parameter [int]
A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
- enhanced multi-packet send (Enhanced MPS) for ConnectX-5 and Bluefield.
+ enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and Bluefield.
MPS allows the TX burst function to pack up multiple packets in a
single descriptor session in order to save PCI bandwidth and improve
performance at the cost of a slightly higher CPU usage. When
DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
When those offloads are requested the MPS send function will not be used.
- It is currently only supported on the ConnectX-4 Lx, ConnectX-5 and Bluefield
+ It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and Bluefield
families of adapters.
  On ConnectX-4 Lx the MPW is considered unsafe and is therefore disabled by
  default. Users who enable MPW should be aware that an application providing
  incorrect mbuf descriptors in the Tx burst can cause serious errors on the
  host, including, in some cases, getting the NIC stuck.
- On ConnectX-5 and Bluefield the MPW is secure and enabled by default.
+ On ConnectX-5, ConnectX-6 and Bluefield the MPW is secure and enabled by default.
- ``txq_mpw_hdr_dseg_en`` parameter [int]
- ``tx_vec_en`` parameter [int]
- A nonzero value enables Tx vector on ConnectX-5 and Bluefield NICs if the number of
+ A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and Bluefield NICs if the number of
global Tx queues on the port is less than ``txqs_max_vec``.
This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
When those offloads are requested the MPS send function will not be used.
- Enabled by default on ConnectX-5 and Bluefield.
+ Enabled by default on ConnectX-5, ConnectX-6 and Bluefield.
- ``rx_vec_en`` parameter [int]
- **libmlx5**
Low-level user space driver library for Mellanox
- ConnectX-4/ConnectX-5/Bluefield devices, it is automatically loaded
+ ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices, it is automatically loaded
by libibverbs.
This library basically implements send/receive calls to the hardware
their devices:
- mlx5_core: hardware driver managing Mellanox
- ConnectX-4/ConnectX-5/Bluefield devices and related Ethernet kernel
+ ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices and related Ethernet kernel
network devices.
- mlx5_ib: InfiniBand device driver.
- ib_uverbs: user space driver for Verbs (entry point for libibverbs).
- **Firmware update**
- Mellanox OFED releases include firmware updates for
- ConnectX-4/ConnectX-5/Bluefield adapters.
+ Mellanox OFED/EN releases include firmware updates for
+ ConnectX-4/ConnectX-5/ConnectX-6/Bluefield adapters.
Because each release provides new features, these updates must be applied to
match the kernel modules and libraries they come with.
~~~~~~~~~~~~
Either RDMA Core library with a recent enough Linux kernel release
-(recommended) or Mellanox OFED, which provides compatibility with older
+(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.
RDMA Core with Linux Kernel
- rdma-core version 18.0 or above built with 32bit support.
- Kernel version 4.14.41 or above.
+- Starting with rdma-core v21, static libraries can be built::
+
+ cd build
+ CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
+ ninja
+
.. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
.. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
-Mellanox OFED
-^^^^^^^^^^^^^
+If rdma-core libraries are built but not installed, the DPDK makefile can
+link against them via these environment variables:
+
+ - ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
+ - ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
+ - ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``
+
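A sketch of exporting those variables before a build; the rdma-core path is the same placeholder as in the list above, and the final make command is only printed, since running it needs a DPDK tree:

```shell
# Placeholder path from the list above; point it at your rdma-core build.
RDMA_BUILD=/path/to/rdma-core/build
export EXTRA_CFLAGS="-I${RDMA_BUILD}/include"
export EXTRA_LDFLAGS="-L${RDMA_BUILD}/lib"
export PKG_CONFIG_PATH="${RDMA_BUILD}/lib/pkgconfig"
# Print the build command instead of executing it.
echo "make EXTRA_CFLAGS=${EXTRA_CFLAGS} EXTRA_LDFLAGS=${EXTRA_LDFLAGS}"
```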
+Mellanox OFED/EN
+^^^^^^^^^^^^^^^^
-- Mellanox OFED version: **4.4, 4.5**.
+- Mellanox OFED version: **4.4, 4.5** / Mellanox EN version: **4.5**
- firmware version:
- ConnectX-4: **12.21.1000** and above.
- ConnectX-4 Lx: **14.21.1000** and above.
- ConnectX-5: **16.21.1000** and above.
- ConnectX-5 Ex: **16.21.1000** and above.
+ - ConnectX-6: **20.99.5374** and above.
- Bluefield: **18.99.3950** and above.
While these libraries and kernel modules are available on OpenFabrics
may not be supported at the moment (this is a work in progress).
`Mellanox OFED
-<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
-includes the necessary support and should be used in the meantime. For DPDK,
+<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__ and
+`Mellanox EN
+<http://www.mellanox.com/page/products_dyn?product_family=27&mtag=linux>`__
+include the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.
.. note::
- Several versions of Mellanox OFED are available. Installing the version
+ Several versions of Mellanox OFED/EN are available. Installing the version
this DPDK release was developed and tested against is strongly
recommended. Please check the `prerequisites`_.
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)
-Quick Start Guide on OFED
--------------------------
+Quick Start Guide on OFED/EN
+----------------------------
-1. Download latest Mellanox OFED. For more info check the `prerequisites`_.
+1. Download latest Mellanox OFED/EN. For more info check the `prerequisites`_.
2. Install the required libraries and kernel modules either by installing
- only the required set, or by installing the entire Mellanox OFED:
+ only the required set, or by installing the entire Mellanox OFED/EN:
.. code-block:: console
-------------
This section demonstrates how to launch **testpmd** with Mellanox
-ConnectX-4/ConnectX-5/Bluefield devices managed by librte_pmd_mlx5.
+ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices managed by librte_pmd_mlx5.
#. Load the kernel modules:
modprobe -a ib_uverbs mlx5_core mlx5_ib
- Alternatively if MLNX_OFED is fully installed, the following script can
- be run:
+ Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
+ can be run:
.. code-block:: console