The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
-ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox Bluefield** families
+ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox BlueField** families
of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.
- RX VLAN stripping.
- TX VLAN insertion.
- RX CRC stripping configuration.
-- Promiscuous mode.
-- Multicast promiscuous mode.
+- Promiscuous mode on PF and VF.
+- Multicast promiscuous mode on PF and VF.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
RTE_ETH_FDIR_REJECT).
- RX interrupts.
- Statistics query including Basic, Extended and per queue.
- Rx HW timestamp.
-- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP.
+- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP.
- Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
+- NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
+ increment/decrement, count, drop, mark. For details please see :ref:`Supported hardware offloads using rte_flow API`.
+- Flow insertion rate of more than one million flows per second, when using Direct Rules.
+- Support for multiple rte_flow groups.
+- Hardware LRO.
Limitations
-----------
is set to multi-packet send or Enhanced multi-packet send. Otherwise it must have
less than 50 segments.
-- Count action for RTE flow is **only supported in Mellanox OFED**.
-
- Flows with a VXLAN Network Identifier equal (or ending up being equal)
to 0 are not supported.
To receive IPv6 Multicast messages on a VM, explicitly set the relevant
MAC address using the rte_eth_dev_mac_addr_add() API, as shown in the example
after this list.
-- E-Switch VXLAN tunnel is not supported together with outer VLAN.
-
-- E-Switch Flows with VNI pattern must include the VXLAN decapsulation action.
-
-- E-Switch VXLAN decapsulation Flow:
+- E-Switch decapsulation Flow:
- can be applied to PF port only.
- must specify VF port action (packet redirection from PF to VF).
- - must specify tunnel outer UDP local (destination) port, wildcards not allowed.
- - must specify tunnel outer VNI, wildcards not allowed.
- - must specify tunnel outer local (destination) IPv4 or IPv6 address, wildcards not allowed.
- - optionally may specify tunnel outer remote (source) IPv4 or IPv6, wildcards or group IPs allowed.
- optionally may specify tunnel inner source and destination MAC addresses.
-- E-Switch VXLAN encapsulation Flow:
+- E-Switch encapsulation Flow:
- can be applied to VF ports only.
- must specify PF port action (packet redirection from VF to PF).
- - must specify the VXLAN item with tunnel outer parameters.
- - must specify the tunnel outer VNI in the VXLAN item.
- - must specify the tunnel outer remote (destination) UDP port in the VXLAN item.
- - must specify the tunnel outer local (source) IPv4 or IPv6 in the , this address will locally (with scope link) assigned to the outer network interace, wildcards not allowed.
- - must specify the tunnel outer remote (destination) IPv4 or IPv6 in the VXLAN item, group IPs allowed.
- - must specify the tunnel outer destination MAC address in the VXLAN item, this address will be used to create neigh rule.
+
+- ICMP/ICMP6 code/type matching cannot be supported together with IP-in-IP tunnel.
+
+- LRO:
+
+ - No mbuf headroom space is created for RX packets when LRO is configured.
+ - ``scatter_fcs`` is disabled when LRO is configured.
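+
+ For example, regarding the IPv6 Multicast limitation above, the all-nodes
+ multicast address ff02::1 maps to the MAC address 33:33:00:00:00:01, which
+ can be added from the testpmd prompt (the port number and address below are
+ placeholders for illustration only):
+
+ .. code-block:: console
+
+ testpmd> mac_addr add 0 33:33:00:00:00:01
+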
Statistics
----------
- ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)
- Embed static flavour of the dependencies **libibverbs** and **libmlx5**
+ Embed static flavor of the dependencies **libibverbs** and **libmlx5**
in the PMD shared library or the executable static binary.
- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)
.. note::
- For Bluefield, target should be set to ``arm64-bluefield-linux-gcc``. This
+ For BlueField, target should be set to ``arm64-bluefield-linux-gcc``. This
will enable ``CONFIG_RTE_LIBRTE_MLX5_PMD`` and set ``RTE_CACHE_LINE_SIZE`` to
64. The default armv8a configuration of the make and meson builds sets it to
128, which degrades performance.
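+
+ As a minimal sketch of a legacy make-based build for this target, assuming a
+ cross toolchain prefixed ``aarch64-linux-gnu-`` is installed (the prefix is
+ an assumption about the build host, adjust as needed):
+
+ .. code-block:: console
+
+ make config T=arm64-bluefield-linux-gcc
+ make -j CROSS=aarch64-linux-gnu-
+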
Supported on:
- - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
- - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+ - POWER9 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
- ``rxq_cqe_pad_en`` parameter [int]
Supported on:
- - CPU having 128B cacheline with ConnectX-5 and Bluefield.
+ - CPU having 128B cacheline with ConnectX-5 and BlueField.
- ``rxq_pkt_pad_en`` parameter [int]
Supported on:
- - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
- - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+ - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
- ``mprq_en`` parameter [int]
- ``txq_inline`` parameter [int]
- Amount of data to be inlined during TX operations. Improves latency.
- Can improve PPS performance when PCI back pressure is detected and may be
- useful for scenarios involving heavy traffic on many queues.
-
- Because additional software logic is necessary to handle this mode, this
- option should be used with care, as it can lower performance when back
- pressure is not expected.
+ Amount of data to be inlined during TX operations. This parameter is
+ deprecated and converted to the new parameter ``txq_inline_max`` providing
+ partial compatibility.
- ``txqs_min_inline`` parameter [int]
- Enable inline send only when the number of TX queues is greater or equal
+ Enable inline data send only when the number of TX queues is greater than or equal
to this value.
- This option should be used in combination with ``txq_inline`` above.
+ This option should be used in combination with ``txq_inline_max`` and
+ ``txq_inline_mpw`` below and does not affect the ``txq_inline_min`` setting below.
- On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield without
- Enhanced MPW:
+ If this option is not specified, the default value 16 is used for BlueField
+ and 8 for other platforms.
- - Disabled by default.
- - In case ``txq_inline`` is set recommendation is 4.
+ Data inlining consumes CPU cycles, so this option is intended to enable
+ inline data automatically only when there are enough Tx queues, which means
+ there are enough CPU cores, PCI bandwidth is becoming the more critical
+ resource, and the CPU is no longer expected to be the bottleneck.
- On ConnectX-5, ConnectX-6 and Bluefield with Enhanced MPW:
+ Copying data into the WQE improves latency and can improve PPS performance
+ when PCI back pressure is detected; it may be useful for scenarios involving
+ heavy traffic on many queues.
- - Set to 8 by default.
+ Because additional software logic is necessary to handle this mode, this
+ option should be used with care, as it may lower performance when back
+ pressure is not expected.
+
+- ``txq_inline_min`` parameter [int]
+
+ Minimal amount of data to be inlined into WQE during Tx operations. NICs
+ may require this minimal data amount to operate correctly. The exact value
+ may depend on NIC operation mode, requested offloads, etc.
+
+ If the ``txq_inline_min`` key is present, the specified value (possibly aligned
+ by the driver in order not to exceed the limits and to provide better descriptor
+ space utilization) is used by the driver, and it is guaranteed that the
+ requested amount of data bytes is inlined into the WQE beside other inline settings.
+ This key may also update the ``txq_inline_max`` value (default or specified
+ explicitly in devargs) to reserve space for the inline data.
+
+ If the ``txq_inline_min`` key is not present, the value may be queried by the
+ driver from the NIC via DevX if this feature is available. If there is no DevX
+ enabled/supported, the value 18 (assuming an L2 header including VLAN) is set
+ for ConnectX-4, the value 58 (assuming L2-L4 headers, required by configurations
+ over E-Switch) is set for ConnectX-4 Lx, and 0 is set by default for ConnectX-5
+ and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the
+ entire packet is inlined.
+
+ For ConnectX-4 and ConnectX-4 Lx NICs, the driver does not allow setting
+ this value below 18 (minimal L2 header, including VLAN).
+
+ Please note that this minimal data inlining disengages the eMPW feature (Enhanced
+ Multi-Packet Write), because the latter does not support partial packet inlining.
+ This is not very critical, since minimal data inlining is mostly required by
+ ConnectX-4 and ConnectX-4 Lx, which do not support the eMPW feature.
+
+- ``txq_inline_max`` parameter [int]
+
+ Specifies the maximal packet length to be completely inlined into the WQE
+ Ethernet Segment for the ordinary SEND method. If a packet is larger than the
+ specified value, the packet data is not copied by the driver at all and the
+ data buffer is addressed with a pointer. If the packet length is less than or
+ equal to this value, all packet data is copied into the WQE. This may improve
+ PCI bandwidth utilization for short packets significantly but requires extra
+ CPU cycles.
+
+ The data inline feature is controlled by the number of Tx queues: if the number
+ of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
+ feature is engaged; if there are not enough Tx queues (which means not enough
+ CPU cores and CPU resources are scarce), data inlining is not performed by the
+ driver. Setting ``txqs_min_inline`` to zero always enables data inlining.
+
+ The default ``txq_inline_max`` value is 290. The specified value may be adjusted
+ by the driver in order not to exceed the limit (930 bytes) and to provide better
+ WQE space filling without gaps; the adjustment is reflected in the debug log.
+
+- ``txq_inline_mpw`` parameter [int]
+
+ Specifies the maximal packet length to be completely inlined into the WQE for
+ the Enhanced MPW method. If a packet is larger than the specified value, the
+ packet data is not copied and the data buffer is addressed with a pointer. If
+ the packet length is less than or equal to this value, all packet data is copied
+ into the WQE. This may improve PCI bandwidth utilization for short packets
+ significantly but requires extra CPU cycles.
+
+ The data inline feature is controlled by the number of Tx queues: if the number
+ of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
+ feature is engaged; if there are not enough Tx queues (which means not enough
+ CPU cores and CPU resources are scarce), data inlining is not performed by the
+ driver. Setting ``txqs_min_inline`` to zero always enables data inlining.
+
+ The default ``txq_inline_mpw`` value is 188. The specified value may be adjusted
+ by the driver in order not to exceed the limit (930 bytes) and to provide better
+ WQE space filling without gaps; the adjustment is reflected in the debug log.
+ Since multiple packets may be included in the same WQE with the Enhanced Multi-
+ Packet Write method and the overall WQE size is limited, it is not recommended
+ to specify large values for ``txq_inline_mpw``.
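+
+ For illustration, the inline parameters can be passed as device arguments on
+ the testpmd command line; the PCI address and the values below are
+ placeholders, not recommended settings:
+
+ .. code-block:: console
+
+ testpmd -w 0000:05:00.0,txqs_min_inline=4,txq_inline_max=128,txq_inline_mpw=128 -- --txq=8 --rxq=8 -i
+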
- ``txqs_max_vec`` parameter [int]
Enable vectorized Tx only when the number of TX queues is less than or
- equal to this value. Effective only when ``tx_vec_en`` is enabled.
-
- On ConnectX-5:
-
- - Set to 8 by default on ARMv8.
- - Set to 4 by default otherwise.
-
- On Bluefield
-
- - Set to 16 by default.
-
-- ``txq_mpw_en`` parameter [int]
-
- A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
- enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and Bluefield.
- MPS allows the TX burst function to pack up multiple packets in a
- single descriptor session in order to save PCI bandwidth and improve
- performance at the cost of a slightly higher CPU usage. When
- ``txq_inline`` is set along with ``txq_mpw_en``, TX burst function tries
- to copy entire packet data on to TX descriptor instead of including
- pointer of packet only if there is enough room remained in the
- descriptor. ``txq_inline`` sets per-descriptor space for either pointers
- or inlined packets. In addition, Enhanced MPS supports hybrid mode -
- mixing inlined packets and pointers in the same descriptor.
-
- This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
- When those offloads are requested the MPS send function will not be used.
-
- It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and Bluefield
- families of adapters.
- On ConnectX-4 Lx the MPW is considered un-secure hence disabled by default.
- Users which enable the MPW should be aware that application which provides incorrect
- mbuf descriptors in the Tx burst can lead to serious errors in the host including, on some cases,
- NIC to get stuck.
- On ConnectX-5, ConnectX-6 and Bluefield the MPW is secure and enabled by default.
+ equal to this value. This parameter is deprecated and ignored; it is kept
+ only for compatibility, so as not to prevent the driver from probing.
- ``txq_mpw_hdr_dseg_en`` parameter [int]
A nonzero value enables including two pointers in the first block of TX
- descriptor. This can be used to lessen CPU load for memory copy.
-
- Effective only when Enhanced MPS is supported. Disabled by default.
+ descriptor. This parameter is deprecated and ignored; it is kept only for
+ compatibility.
- ``txq_max_inline_len`` parameter [int]
Maximum size of packet to be inlined. This limits the size of packet to
be inlined. If the size of a packet is larger than configured value, the
packet isn't inlined even though there's enough space remained in the
- descriptor. Instead, the packet is included with pointer.
+ descriptor. Instead, the packet is included by pointer only. This parameter
+ is deprecated and converted directly to ``txq_inline_mpw``, providing full
+ compatibility. It is valid only if the eMPW feature is engaged.
- Effective only when Enhanced MPS is supported. The default value is 256.
+- ``txq_mpw_en`` parameter [int]
-- ``tx_vec_en`` parameter [int]
+ A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
+ ConnectX-6 and BlueField. eMPW allows the TX burst function to pack multiple
+ packets into a single descriptor session in order to save PCI bandwidth and
+ improve performance at the cost of slightly higher CPU usage. When
+ ``txq_inline_mpw`` is set along with ``txq_mpw_en``, the TX burst function
+ copies the entire packet data into the TX descriptor instead of including
+ only a pointer to the packet.
- A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and Bluefield NICs if the number of
- global Tx queues on the port is less than ``txqs_max_vec``.
+ The Enhanced Multi-Packet Write feature is enabled by default if the NIC
+ supports it and can be disabled by explicitly specifying 0 for the
+ ``txq_mpw_en`` option, as shown in the example below. Also, if minimal data
+ inlining is requested by a non-zero ``txq_inline_min`` option or reported by
+ the NIC, the eMPW feature is disengaged.
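+
+ For example, eMPW could be disabled via device arguments (the PCI address
+ below is a placeholder):
+
+ .. code-block:: console
+
+ testpmd -w 0000:05:00.0,txq_mpw_en=0 -- -i
+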
- This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
- When those offloads are requested the MPS send function will not be used.
+- ``tx_vec_en`` parameter [int]
- Enabled by default on ConnectX-5, ConnectX-6 and Bluefield.
+ A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and BlueField
+ NICs if the number of global Tx queues on the port is less than
+ ``txqs_max_vec``. The parameter is deprecated and ignored.
- ``rx_vec_en`` parameter [int]
A nonzero value enables the DV flow steering assuming it is supported
by the driver.
- The DV flow steering is not supported on switchdev mode.
Disabled by default.
+- ``dv_esw_en`` parameter [int]
+
+ A nonzero value enables E-Switch using Direct Rules.
+
+ Enabled by default if supported.
+
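+ As an illustration, DV flow steering and E-Switch Direct Rules could be
+ requested explicitly via device arguments; the PCI address below is a
+ placeholder:
+
+ .. code-block:: console
+
+ testpmd -w 0000:05:00.0,dv_flow_en=1,dv_esw_en=1 -- -i
+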
- ``mr_ext_memseg_en`` parameter [int]
A nonzero value enables extending memseg when registering DMA memory. If
representor=[0-2]
+- ``max_dump_files_num`` parameter [int]
+
+ The maximum number of files per PMD entity that may be created for debug
+ information. The files will be created in the /var/log directory or in the
+ current directory.
+
+ Set to 128 by default.
+
+- ``lro_timeout_usec`` parameter [int]
+
+ The maximum allowed duration of an LRO session, in microseconds.
+ The PMD will set the nearest value supported by the HW that is not bigger
+ than the input ``lro_timeout_usec`` value.
+ If this parameter is not specified, by default the PMD will set
+ the smallest value supported by the HW.
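+
+ For illustration only, the debug-dump and LRO timeout parameters could be
+ combined as device arguments; the PCI address and values below are
+ placeholders:
+
+ .. code-block:: console
+
+ testpmd -w 0000:05:00.0,max_dump_files_num=64,lro_timeout_usec=50 -- -i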
+
Firmware configuration
~~~~~~~~~~~~~~~~~~~~~~
IP_OVER_VXLAN_EN True(1)
IP_OVER_VXLAN_PORT <udp dport>
+- enable ICMP/ICMP6's code/type field matching
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> set FLEX_PARSER_PROFILE_ENABLE=2
+
+ Verify configurations are set:
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> query | grep FLEX_PARSER_PROFILE_ENABLE
+ FLEX_PARSER_PROFILE_ENABLE 2
+
+- IP-in-IP tunnel enable
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> set FLEX_PARSER_PROFILE_ENABLE=0
+
+ Verify configurations are set:
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> query | grep FLEX_PARSER_PROFILE_ENABLE
+ FLEX_PARSER_PROFILE_ENABLE 0
+
Prerequisites
-------------
- **libmlx5**
Low-level user space driver library for Mellanox
- ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices, it is automatically loaded
+ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
by libibverbs.
This library basically implements send/receive calls to the hardware
queues.
-- **libmnl**
-
- Minimalistic Netlink library mainly relied on to manage E-Switch flow
- rules (i.e. those with the "transfer" attribute and typically involving
- port representors).
-
- **Kernel modules**
They provide the kernel-side Verbs API and low level device drivers that
their devices:
- mlx5_core: hardware driver managing Mellanox
- ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices and related Ethernet kernel
+ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
network devices.
- mlx5_ib: InfiniBand device driver.
- ib_uverbs: user space driver for Verbs (entry point for libibverbs).
- **Firmware update**
Mellanox OFED/EN releases include firmware updates for
- ConnectX-4/ConnectX-5/ConnectX-6/Bluefield adapters.
+ ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.
Because each release provides new features, these updates must be applied to
match the kernel modules and libraries they come with.
(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.
-RMDA Core with Linux Kernel
+RDMA Core with Linux Kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Minimal kernel version: v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
Mellanox OFED/EN
^^^^^^^^^^^^^^^^
-- Mellanox OFED version: **4.4, 4.5** / Mellanox EN version: **4.5**
+- Mellanox OFED version: **4.5, 4.6** /
+ Mellanox EN version: **4.5, 4.6**
- firmware version:
- ConnectX-4: **12.21.1000** and above.
- ConnectX-5: **16.21.1000** and above.
- ConnectX-5 Ex: **16.21.1000** and above.
- ConnectX-6: **20.99.5374** and above.
- - Bluefield: **18.99.3950** and above.
+ - BlueField: **18.25.1010** and above.
While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
this DPDK release was developed and tested against is strongly
recommended. Please check the `prerequisites`_.
-Libmnl
-^^^^^^
-
-Minimal version for libmnl is **1.0.3**.
-
-As a dependency of the **iproute2** suite, this library is often installed
-by default. It is otherwise readily available through standard system
-packages.
-
-Its development headers must be installed in order to compile this PMD.
-These packages are usually named **libmnl-dev** or **libmnl-devel**
-depending on the Linux distribution.
-
Supported NICs
--------------
6. Compile DPDK and you are ready to go. See instructions on
:ref:`Development Kit Build System <Development_Kit_Build_System>`
+Enable switchdev mode
+---------------------
+
+Switchdev mode is an E-Switch mode that binds a representor to a VF.
+A representor is a port in DPDK that is connected to a VF in such a way
+that, assuming there are no offload flows, each packet that is sent from the VF
+will be received by the corresponding representor, while each packet that is
+sent to a representor will be received by the VF.
+This is very useful in case of SR-IOV mode, where the first packet that is sent
+by the VF will be received by the DPDK application, which will decide whether
+this flow should be offloaded to the E-Switch. After the flow is offloaded,
+packets sent by the VF that match the flow will no longer be received by
+the DPDK application.
+
+1. Enable SR-IOV mode:
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> set SRIOV_EN=true
+
+2. Configure the max number of VFs:
+
+ .. code-block:: console
+
+ mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>
+
+3. Reset the FW:
+
+ .. code-block:: console
+
+ mlxfwreset -d <mst device> reset
+
+4. Configure the actual number of VFs:
+
+ .. code-block:: console
+
+ echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs
+
+5. Unbind the device (it can be rebound after switchdev mode is configured):
+
+ .. code-block:: console
+
+ echo -n "<device pci address" > /sys/bus/pci/drivers/mlx5_core/unbind
+
+6. Enable switchdev mode:
+
+ .. code-block:: console
+
+ echo switchdev > /sys/class/net/<net device>/compat/devlink/mode
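+
+ To confirm the configuration, the same sysfs attribute can be read back; it
+ should report ``switchdev`` (illustrative check, not required by the PMD):
+
+ .. code-block:: console
+
+ cat /sys/class/net/<net device>/compat/devlink/mode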
+
Performance tuning
------------------
- Configure per-lcore cache when creating Mempools for packet buffer.
- Refrain from dynamically allocating/freeing memory in run-time.
+Supported hardware offloads using rte_flow API
+----------------------------------------------
+
+.. _Supported hardware offloads using rte_flow API:
+
+.. table:: Supported hardware offloads using rte_flow API
+
+ +-----------------------+-----------------+-----------------+
+ | Offload               | E-Switch        | NIC             |
+ |                       |                 |                 |
+ +=======================+=================+=================+
+ | Count                 | | DPDK 19.05    | | DPDK 19.02    |
+ |                       | | OFED 4.6      | | OFED 4.6      |
+ |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+ |                       | | ConnectX-5    | | ConnectX-5    |
+ +-----------------------+-----------------+-----------------+
+ | Drop / Queue / RSS    | | DPDK 19.05    | | DPDK 18.11    |
+ |                       | | OFED 4.6      | | OFED 4.5      |
+ |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+ |                       | | ConnectX-5    | | ConnectX-4    |
+ +-----------------------+-----------------+-----------------+
+ | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
+ | (VXLAN / NVGRE / RAW) | | OFED 4.6.2    | | OFED 4.6      |
+ |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+ |                       | | ConnectX-5    | | ConnectX-5    |
+ +-----------------------+-----------------+-----------------+
+ | Header rewrite        | | DPDK 19.05    | | DPDK 19.02    |
+ | (set_ipv4_src /       | | OFED 4.6.2    | | OFED 4.6.2    |
+ | set_ipv4_dst /        | | RDMA-CORE V24 | | RDMA-CORE V23 |
+ | set_ipv6_src /        | | ConnectX-5    | | ConnectX-5    |
+ | set_ipv6_dst /        | |               | |               |
+ | set_tp_src /          | |               | |               |
+ | set_tp_dst /          | |               | |               |
+ | dec_ttl /             | |               | |               |
+ | set_ttl /             | |               | |               |
+ | set_mac_src /         | |               | |               |
+ | set_mac_dst)          | |               | |               |
+ +-----------------------+-----------------+-----------------+
+ | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
+ |                       | | OFED 4.6.2    | | OFED 4.6.2    |
+ |                       | | RDMA-CORE V24 | | N/A           |
+ |                       | | ConnectX-5    | | ConnectX-5    |
+ +-----------------------+-----------------+-----------------+
+ | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
+ |                       | | OFED 4.6      | | OFED 4.5      |
+ |                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
+ |                       | | ConnectX-5    | | ConnectX-4    |
+ +-----------------------+-----------------+-----------------+
+ | Port ID               | | DPDK 19.05    | | N/A           |
+ |                       | | OFED 4.6      | | N/A           |
+ |                       | | RDMA-CORE V24 | | N/A           |
+ |                       | | ConnectX-5    | | N/A           |
+ +-----------------------+-----------------+-----------------+
+
+* Minimum version for each component and NIC.
+
Notes for testpmd
-----------------
-------------
This section demonstrates how to launch **testpmd** with Mellanox
-ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices managed by librte_pmd_mlx5.
+ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Load the kernel modules: