..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2015 6WIND S.A.
    Copyright 2015 Mellanox Technologies, Ltd

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox Bluefield** families
of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.

Due to external dependencies, this driver is disabled by default. It must
be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
recompiling it.

Implementation details
----------------------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow the device to handle
virtual memory addresses directly, ensures that DPDK applications cannot
access random physical memory (or memory that does not belong to the
current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.

Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------

- Multi arch support: x86_64, POWER8, ARMv8, i686.
- Multiple TX and RX queues.
- Support for scattered TX and RX frames.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Configurable RETA table.
- Support for multiple MAC addresses.
- RX CRC stripping configuration.
- Multicast promiscuous mode.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
- Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
- Statistics query including Basic, Extended and per queue.
- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP.
- Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.

Limitations
-----------

- For secondary process:

  - Forked secondary process not supported.
  - All mempools must be initialized before rte_eth_dev_start().

- A flow pattern without any specific VLAN will match VLAN packets as well:

  When no VLAN spec is given in the pattern, the matching rule is created
  with VLAN as a wildcard. In other words, the flow rule::

        flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with vid=3, while the flow rules::

        flow create 0 ingress pattern eth / ipv4 / end ...

  or::

        flow create 0 ingress pattern eth / vlan / ipv4 / end ...

  will match any IPv4 packet (VLAN included).
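
  As an illustration, a minimal C sketch of the first rule above (rte_flow
  structure layouts as of DPDK 18.11; port and queue numbers are arbitrary),
  with an explicit VLAN spec and mask so the rule does not act as a VLAN
  wildcard:

  .. code-block:: c

     #include <rte_byteorder.h>
     #include <rte_flow.h>

     /* Match eth / vlan vid 3 / ipv4 and send to Rx queue 0. */
     static struct rte_flow *
     create_vid3_rule(uint16_t port_id, struct rte_flow_error *error)
     {
         struct rte_flow_attr attr = { .ingress = 1 };
         struct rte_flow_item_vlan vlan_spec = { .tci = RTE_BE16(3) };
         struct rte_flow_item_vlan vlan_mask = { .tci = RTE_BE16(0x0fff) };
         struct rte_flow_item pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_VLAN,
               .spec = &vlan_spec, .mask = &vlan_mask },
             { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_action_queue queue = { .index = 0 };
         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };

         return rte_flow_create(port_id, &attr, pattern, actions, error);
     }
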
- A multi-segment packet must have fewer than 6 segments when the Tx burst
  function is set to multi-packet send or Enhanced multi-packet send;
  otherwise it must have fewer than 50 segments.

- Count action for RTE flow is **only supported in Mellanox OFED**.

- Flows with a VXLAN Network Identifier equal (or ending up equal) to 0 are
  not supported.

- VXLAN TSO and checksum offloads are not supported on VM.

- L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.

- VF: flow rules created on VF devices can only match traffic targeted at the
  configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).

  .. note::

     MAC addresses not already present in the bridge table of the associated
     kernel network device will be added and cleaned up by the PMD when closing
     the device. In case of ungraceful program termination, some entries may
     remain present and should be removed manually by other means.

- When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
  externally attached to a user-provided mbuf with EXT_ATTACHED_MBUF set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all
  the Rx mbufs must be freed before the device is closed. Otherwise, the
  mempool of the external buffers will be freed by the PMD and the application
  which still holds the external buffers may be corrupted. A sketch of a
  conforming Rx release path is shown below.
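
  A hedged sketch of such a release path (``RTE_MBUF_HAS_EXTBUF()`` is the
  real DPDK macro; the counter and helper are illustrative):

  .. code-block:: c

     #include <rte_mbuf.h>

     /* Rx mbufs still holding a PMD-managed external buffer. */
     static unsigned int ext_mbufs_in_flight;

     static void
     release_rx_pkt(struct rte_mbuf *m)
     {
         /* EXT_ATTACHED_MBUF must still be intact in m->ol_flags. */
         if (RTE_MBUF_HAS_EXTBUF(m))
             ext_mbufs_in_flight--;
         rte_pktmbuf_free(m); /* detaches and releases the ext. buffer */
     }

     /* ext_mbufs_in_flight (incremented on the Rx path) must reach zero
      * before rte_eth_dev_close() is called on the port. */
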
- If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression
  is enabled (``rxq_cqe_comp_en``) at the same time, the RSS hash result is
  not fully supported. Some Rx packets may not have PKT_RX_RSS_HASH.

- IPv6 multicast messages are not supported on VMs while promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on a VM, explicitly set the relevant
  MAC address using the ``rte_eth_dev_mac_addr_add()`` API, for example as
  sketched below.
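
  A minimal sketch (the all-nodes address below is an illustrative choice;
  API as of DPDK 18.11):

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Register the Ethernet mapping of the IPv6 all-nodes group
      * (ff02::1 -> 33:33:00:00:00:01) so matching traffic is received
      * without enabling promiscuous or allmulticast mode. */
     static int
     enable_ipv6_all_nodes(uint16_t port_id)
     {
         struct ether_addr mc = {
             .addr_bytes = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 },
         };

         return rte_eth_dev_mac_addr_add(port_id, &mc, 0 /* pool */);
     }
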
- E-Switch VXLAN tunnel is not supported together with outer VLAN.

- E-Switch Flows with VNI pattern must include the VXLAN decapsulation action.

- E-Switch VXLAN decapsulation Flow:

  - can be applied to PF port only.
  - must specify VF port action (packet redirection from PF to VF).
  - must specify tunnel outer UDP local (destination) port, wildcards not allowed.
  - must specify tunnel outer VNI, wildcards not allowed.
  - must specify tunnel outer local (destination) IPv4 or IPv6 address, wildcards not allowed.
  - optionally may specify tunnel outer remote (source) IPv4 or IPv6, wildcards or group IPs allowed.
  - optionally may specify tunnel inner source and destination MAC addresses.

- E-Switch VXLAN encapsulation Flow:

  - can be applied to VF ports only.
  - must specify PF port action (packet redirection from VF to PF).
  - must specify the VXLAN item with tunnel outer parameters.
  - must specify the tunnel outer VNI in the VXLAN item.
  - must specify the tunnel outer remote (destination) UDP port in the VXLAN item.
  - must specify the tunnel outer local (source) IPv4 or IPv6 in the VXLAN
    item; this address will be locally (with scope link) assigned to the outer
    network interface, wildcards not allowed.
  - must specify the tunnel outer remote (destination) IPv4 or IPv6 in the
    VXLAN item, group IPs allowed.
  - must specify the tunnel outer destination MAC address in the VXLAN item;
    this address will be used to create a neigh rule.

Statistics
----------

MLX5 supports various methods to report statistics:

Port statistics can be queried using ``rte_eth_stats_get()``. The received
and sent statistics are counted in software only and reflect the number of
packets received or sent successfully by the PMD. The imissed counter is the
number of packets that could not be delivered to SW because a queue was full.
Packets not received due to congestion in the bus or on the NIC can be
queried via the rx_discards_phy xstats counter.

Extended statistics can be queried using ``rte_eth_xstats_get()``. The
extended statistics expose a wider set of counters counted by the device.
The extended port statistics count the number of packets received or sent
successfully by the port. As Mellanox NICs are using the
:ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also
count packets received or sent by the Linux kernel. The counters with the
``_phy`` suffix count the total events on the physical port and are therefore
not valid for VF.

Finally, per-flow statistics can be queried using ``rte_flow_query()`` after
attaching a count action to a specific flow. The flow counter counts the
number of packets received successfully by the port that match the specific
flow.
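
For illustration only, a hedged sketch querying all three levels (API
signatures as of DPDK 18.11; ``flow`` is assumed to have been created with a
``COUNT`` action):

.. code-block:: c

   #include <inttypes.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <rte_ethdev.h>
   #include <rte_flow.h>

   static void
   dump_stats(uint16_t port_id, struct rte_flow *flow)
   {
       struct rte_eth_stats stats;
       struct rte_flow_query_count count = { .reset = 0 };
       const struct rte_flow_action action = {
           .type = RTE_FLOW_ACTION_TYPE_COUNT,
       };
       struct rte_flow_error error;
       int i, n;

       /* Basic (SW) port statistics, including imissed. */
       if (rte_eth_stats_get(port_id, &stats) == 0)
           printf("ipackets=%" PRIu64 " imissed=%" PRIu64 "\n",
                  stats.ipackets, stats.imissed);

       /* Extended statistics: the first call only sizes the array. */
       n = rte_eth_xstats_get(port_id, NULL, 0);
       if (n > 0) {
           struct rte_eth_xstat *xs = calloc(n, sizeof(*xs));

           if (xs != NULL && rte_eth_xstats_get(port_id, xs, n) == n)
               for (i = 0; i < n; i++)
                   printf("xstat id %" PRIu64 " value %" PRIu64 "\n",
                          xs[i].id, xs[i].value);
           free(xs);
       }

       /* Per-flow counter attached through a COUNT action. */
       if (rte_flow_query(port_id, flow, &action, &count, &error) == 0)
           printf("flow hits=%" PRIu64 " bytes=%" PRIu64 "\n",
                  count.hits, count.bytes);
   }
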
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx5 itself.

- ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)

  Build PMD with additional code to make it loadable without hard
  dependencies on **libibverbs** or **libmlx5**, which may not be installed
  on the target system.

  In this mode, their presence is still required for it to run properly,
  however their absence won't prevent a DPDK application from starting (with
  ``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
  missing with ``ldd(1)``.

  It works by moving these dependencies to a purpose-built rdma-core "glue"
  plug-in which must either be installed in a directory whose name is based
  on ``CONFIG_RTE_EAL_PMD_PATH`` suffixed with ``-glue`` if set, or in a
  standard location for the dynamic linker (e.g. ``/lib``) if left to the
  default empty string (``""``).

  This option has no performance impact.

- ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)

  Embed static flavour of the dependencies **libibverbs** and **libmlx5**
  in the PMD shared library or the executable static binary.

- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.

  Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
  enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
  since ``LD_LIBRARY_PATH`` has no effect in this case.

- ``MLX5_PMD_ENABLE_PADDING``

  Enables HW packet padding in PCI bus transactions.

  When packet size is cache aligned and CRC stripping is enabled, 4 fewer
  bytes are written to the PCI bus. Enabling padding makes such packets
  aligned again.

  In cases where PCI bandwidth is the bottleneck, padding can improve
  performance.

  This is disabled by default since this can also decrease performance for
  unaligned packet sizes.

- ``MLX5_SHUT_UP_BF``

  Configures HW Tx doorbell register as IO-mapped.

  By default, the HW Tx doorbell is configured as a write-combining register.
  The register would be flushed to HW usually when the write-combining buffer
  becomes full, but it depends on CPU design.

  Except for vectorized Tx burst routines, a write memory barrier is enforced
  after updating the register so that the update can be immediately visible
  to HW.

  When vectorized Tx burst is called, the barrier is set only if the burst
  size is not aligned to MLX5_VPMD_TX_MAX_BURST. However, setting this
  environment variable will bring better latency even though the maximum
  throughput can slightly decline.

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  from being received.

- **ethtool** operations on related kernel interfaces also affect the PMD.
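
All parameters below are device arguments. As a hedged illustration (the PCI
address and values are made up), they can be passed on the command line,
e.g. ``-w 0000:05:00.0,rxq_cqe_comp_en=0,mprq_en=1``, or programmatically:

.. code-block:: c

   #include <rte_eal.h>

   int
   main(void)
   {
       /* mlx5 run-time parameters ride on the whitelist (-w) argument. */
       char *eal_argv[] = {
           "app", "-w", "0000:05:00.0,rxq_cqe_comp_en=0,mprq_en=1",
       };

       if (rte_eal_init(3, eal_argv) < 0)
           return -1;
       return 0;
   }
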
- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  saves PCI bandwidth and improves performance. Enabled by default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.
  - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield.

- ``rxq_cqe_pad_en`` parameter [int]

  A nonzero value enables 128B padding of CQE on RX side. The size of CQE
  is aligned with the size of a cacheline of the core. If cacheline size is
  128B, the CQE size is configured to be 128B even though the device writes
  only 64B data on the cacheline. This is to avoid unnecessary cache
  invalidation by the device's two consecutive writes onto one cacheline.
  However, on some architectures it is more beneficial to update the entire
  cacheline by padding the remaining 64B rather than striding, because
  read-modify-write could drop performance a lot. On the other hand,
  writing extra data will consume more PCIe bandwidth and could also drop
  the maximum throughput. It is recommended to empirically set this
  parameter. Disabled by default.

  Supported on:

  - CPU having 128B cacheline with ConnectX-5 and Bluefield.

- ``mprq_en`` parameter [int]

  A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
  configured as Multi-Packet RQ if the total number of Rx queues is
  ``rxqs_min_mprq`` or more and Rx scatter isn't configured. Disabled by
  default.

  Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe
  bandwidth by posting a single large buffer for multiple packets. Instead of
  posting one buffer per packet, one large buffer is posted in order to
  receive multiple packets on the buffer. A MPRQ buffer consists of multiple
  fixed-size strides and each stride receives one packet. MPRQ can improve
  throughput for small-packet traffic.

  When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
  configure a stride size large enough to accommodate max_rx_pkt_len as long
  as the device allows. Note that this can waste system memory compared to
  enabling Rx scatter and multi-segment packet.

- ``mprq_log_stride_num`` parameter [int]

  Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
  strides can reduce PCIe traffic further. If the configured value is not in
  the range of device capability, the default value will be set with a
  warning message. The default value is 4 which is 16 strides per buffer,
  valid only if ``mprq_en`` is set.

  The size of Rx queue should be bigger than the number of strides.

- ``mprq_max_memcpy_len`` parameter [int]

  The maximum length of packet to memcpy in case of Multi-Packet Rx queue. An
  Rx packet is mem-copied to a user-provided mbuf if the size of the Rx packet
  is less than or equal to this parameter. Otherwise, PMD will attach the Rx
  packet to the mbuf by external buffer attachment -
  ``rte_pktmbuf_attach_extbuf()``. A mempool for external buffers will be
  allocated and managed by PMD. If an Rx packet is externally attached, the
  ol_flags field of the mbuf will have EXT_ATTACHED_MBUF and this flag must be
  preserved. ``RTE_MBUF_HAS_EXTBUF()`` checks the flag. The default value is
  128, valid only if ``mprq_en`` is set.

- ``rxqs_min_mprq`` parameter [int]

  Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
  greater than or equal to this value. The default value is 12, valid only if
  ``mprq_en`` is set.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. Improves latency.
  Can improve PPS performance when PCI back pressure is detected and may be
  useful for scenarios involving heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it can lower performance when back
  pressure is not expected.

- ``txqs_min_inline`` parameter [int]

  Enable inline send only when the number of TX queues is greater than or
  equal to this value.

  This option should be used in combination with ``txq_inline`` above.

  On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and Bluefield without
  Enhanced MPW:

        - Disabled by default.
        - In case ``txq_inline`` is set recommendation is 4.

  On ConnectX-5, ConnectX-6 and Bluefield with Enhanced MPW:

        - Set to 8 by default.

- ``txqs_max_vec`` parameter [int]

  Enable vectorized Tx only when the number of TX queues is less than or
  equal to this value. Effective only when ``tx_vec_en`` is enabled.

  On ConnectX-5:

        - Set to 8 by default on ARMv8.
        - Set to 4 by default otherwise.

  On Bluefield:

        - Set to 16 by default.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
  enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and
  Bluefield. MPS allows the TX burst function to pack up multiple packets in
  a single descriptor session in order to save PCI bandwidth and improve
  performance at the cost of a slightly higher CPU usage. When ``txq_inline``
  is set along with ``txq_mpw_en``, the TX burst function copies entire
  packet data onto the TX descriptor instead of including only a pointer to
  the packet, provided there is enough room remaining in the descriptor.
  ``txq_inline`` sets per-descriptor space for either pointers or inlined
  packets. In addition, Enhanced MPS supports hybrid mode - mixing inlined
  packets and pointers in the same descriptor.

  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
  When those offloads are requested the MPS send function will not be used.

  It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and Bluefield
  families of adapters.
  On ConnectX-4 Lx the MPW is considered insecure and hence disabled by
  default. Users who enable MPW should be aware that an application which
  provides incorrect mbuf descriptors in the Tx burst can lead to serious
  errors in the host, including, in some cases, the NIC getting stuck.
  On ConnectX-5, ConnectX-6 and Bluefield the MPW is secure and enabled by
  default.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of TX
  descriptor. This can be used to lessen CPU load for memory copy.

  Effective only when Enhanced MPS is supported. Disabled by default.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of packet to be inlined. If the size of a packet is larger
  than the configured value, the packet isn't inlined even though there is
  enough space remaining in the descriptor. Instead, the packet is included
  by pointer.

  Effective only when Enhanced MPS is supported. The default value is 256.

- ``tx_vec_en`` parameter [int]

  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and Bluefield
  NICs if the number of global Tx queues on the port is less than
  ``txqs_max_vec``.

  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
  When those offloads are requested the MPS send function will not be used.

  Enabled by default on ConnectX-5, ConnectX-6 and Bluefield.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables Rx vector if the port is not configured in
  multi-segment mode; otherwise this parameter is ignored.

  Enabled by default.

- ``vf_nl_en`` parameter [int]

  A nonzero value enables Netlink requests from the VF to add/remove MAC
  addresses and/or enable/disable promiscuous/all multicast on the Netdevice.
  Otherwise the relevant configuration must be run with Linux iproute2 tools.
  This is a prerequisite to receive this kind of traffic.

  Enabled by default, valid only on VF devices, ignored otherwise.

- ``l3_vxlan_en`` parameter [int]

  A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
  L3 VXLAN or VXLAN-GPE, users have to configure firmware and enable this
  parameter. This is a prerequisite to receive this kind of traffic.

  Disabled by default.

- ``dv_flow_en`` parameter [int]

  A nonzero value enables the DV flow steering assuming it is supported
  by the driver.
  The DV flow steering is not supported on switchdev mode.

  Disabled by default.

- ``representor`` parameter [list]

  This parameter can be used to instantiate DPDK Ethernet devices from
  existing port (or VF) representors configured on the device.

  It is a standard parameter whose format is described in
  :ref:`ethernet_device_standard_device_arguments`.

  For instance, to probe port representors 0 through 2::

    representor=[0-2]

Firmware configuration
~~~~~~~~~~~~~~~~~~~~~~

- L3 VXLAN and VXLAN-GPE destination UDP port

  .. code-block:: console

     mlxconfig -d <mst device> set IP_OVER_VXLAN_EN=1
     mlxconfig -d <mst device> set IP_OVER_VXLAN_PORT=<udp dport>

  Verify configurations are set:

  .. code-block:: console

     mlxconfig -d <mst device> query | grep IP_OVER_VXLAN
     IP_OVER_VXLAN_EN                    True(1)
     IP_OVER_VXLAN_PORT                  <udp dport>

Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resources
allocations and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox
  ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices, it is automatically
  loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **libmnl**

  Minimalistic Netlink library mainly relied on to manage E-Switch flow
  rules (i.e. those with the "transfer" attribute and typically involving
  port representors).

- **Kernel modules**

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox
    ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices and related Ethernet
    kernel network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED releases include firmware updates for
  ConnectX-4/ConnectX-5/ConnectX-6/Bluefield adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Installation
~~~~~~~~~~~~

Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED, which provides compatibility with older
releases.

RDMA Core with Linux Kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Minimal kernel version : v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
- Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request #227 from yishaih/tm")
  (see `RDMA Core installation documentation`_)
- When building for i686 use:

  - rdma-core version 18.0 or above built with 32bit support.
  - Kernel version 4.14.41 or above.

- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja

.. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
.. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md

If rdma-core libraries are built but not installed, the DPDK makefile can
link them thanks to these environment variables:

- ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
- ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
- ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``

Mellanox OFED
^^^^^^^^^^^^^

- Mellanox OFED version: **4.4, 4.5**.
- firmware version:

  - ConnectX-4: **12.21.1000** and above.
  - ConnectX-4 Lx: **14.21.1000** and above.
  - ConnectX-5: **16.21.1000** and above.
  - ConnectX-5 Ex: **16.21.1000** and above.
  - ConnectX-6: **20.99.5374** and above.
  - Bluefield: **18.99.3950** and above.

While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
includes the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.

Libmnl
^^^^^^

Minimal version for libmnl is **1.0.3**.

As a dependency of the **iproute2** suite, this library is often installed
by default. It is otherwise readily available through standard system
packages.

Its development headers must be installed in order to compile this PMD.
These packages are usually named **libmnl-dev** or **libmnl-devel**
depending on the Linux distribution.

Supported NICs
--------------

* Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
* Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
* Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX413A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX413A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
* Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
* Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

Quick Start Guide on OFED
-------------------------

1. Download latest Mellanox OFED. For more info check the `prerequisites`_.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   .. code-block:: console

        ./mlnxofedinstall --upstream-libs --dpdk

3. Verify the firmware is the correct one:

   .. code-block:: console

        ibv_devinfo

4. Verify all ports links are set to Ethernet:

   .. code-block:: console

        mlxconfig -d <mst device> query | grep LINK_TYPE

   Link types may have to be configured to Ethernet:

   .. code-block:: console

        mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3

   * LINK_TYPE_P1=<1|2|3>, 1=Infiniband 2=Ethernet 3=VPI(auto-sense)

   For hypervisors, verify SR-IOV is enabled on the NIC:

   .. code-block:: console

        mlxconfig -d <mst device> query | grep SRIOV_EN

   If needed, enable the relevant fields:

   .. code-block:: console

        mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
        mlxfwreset -d <mst device> reset

5. Restart the driver:

   .. code-block:: console

        /etc/init.d/openibd restart

   or:

   .. code-block:: console

        service openibd restart

   If link type was changed, firmware must be reset as well:

   .. code-block:: console

        mlxfwreset -d <mst device> reset

   For hypervisors, after reset write the sysfs number of virtual functions
   needed for the PF.

   To dynamically instantiate a given number of virtual functions (VFs):

   .. code-block:: console

        echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs

6. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.

Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance:

   .. code-block:: console

        mlxconfig -d <mst device> s CQE_COMPRESSION=1

   To set it back to the default CQE Zipping mode use:

   .. code-block:: console

        mlxconfig -d <mst device> s CQE_COMPRESSION=0

2. In case of virtualization:

   - Make sure that hypervisor kernel is 3.16 or newer.
   - Configure boot with ``iommu=pt``.
   - Make sure to allocate a VM on huge pages.
   - Make sure to set CPU pinning.

3. Use the CPU near the local NUMA node to which the PCIe adapter is
   connected, for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run:

   .. code-block:: console

        lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.

4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node. This is in order to forward packets from one to the other without
   a NUMA performance penalty.

5. Disable pause frames:

   .. code-block:: console

        ethtool -A <netdev> rx off tx off

6. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

.. note::

   On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be
   done in the following way:

   To query the read request size use:

   .. code-block:: console

      setpci -s <NIC PCI address> 68.w

   If the output is different from 3XXX, set it by:

   .. code-block:: console

      setpci -s <NIC PCI address> 68.w=3XXX

   The XXX can be different on different systems. Make sure to configure
   according to the setpci output.

7. To minimize overhead of searching Memory Regions:

   - ``--socket-mem`` is recommended to reserve a predictable amount of memory.
   - Configure per-lcore cache when creating Mempools for packet buffers, as
     sketched below.
   - Refrain from dynamically allocating/freeing memory at run-time.
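
   A hedged sketch of the second point (pool sizes are illustrative):

   .. code-block:: c

      #include <rte_lcore.h>
      #include <rte_mbuf.h>

      /* A per-lcore cache keeps most mbuf alloc/free lcore-local, which
       * also keeps Memory Region lookups on the data path cheap. */
      static struct rte_mempool *
      create_pkt_pool(void)
      {
          return rte_pktmbuf_pool_create("mbuf_pool", 8191 /* mbufs */,
                                         256 /* per-lcore cache */,
                                         0 /* private data size */,
                                         RTE_MBUF_DEFAULT_BUF_SIZE,
                                         rte_socket_id());
      }
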
Notes: testpmd
--------------

Compared to librte_pmd_mlx4 that implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_pmd_mlx4:

.. code-block:: console

   > port stop all
   > port config all rss all
   > port start all

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/Bluefield devices managed by librte_pmd_mlx5.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth30
      eth31
      eth32
      eth33

#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:05:00.0
      -w 0000:05:00.1
      -w 0000:06:00.0
      -w 0000:06:00.1

#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex