1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2015 6WIND S.A.
3 Copyright 2015 Mellanox Technologies, Ltd
MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
9 for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** , **Mellanox
10 ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox BlueField** families
of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.
14 Information and documentation about these adapters can be found on the
15 `Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
16 `Mellanox community <http://community.mellanox.com/welcome>`__.
18 There is also a `section dedicated to this poll mode driver
19 <http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.
.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
   recompiling DPDK.
27 Implementation details
28 ----------------------
30 Besides its dependency on libibverbs (that implies libmlx5 and associated
31 kernel support), librte_pmd_mlx5 relies heavily on system calls for control
32 operations such as querying/updating the MTU and flow control parameters.
34 For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel
36 combined with hardware specifications that allow it to handle virtual memory
37 addresses directly ensure that DPDK applications cannot access random
38 physical memory (or memory that does not belong to the current process).
40 This capability allows the PMD to coexist with kernel network interfaces
41 which remain functional, although they stop receiving unicast packets as
42 long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.
Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------
53 - Multi arch support: x86_64, POWER8, ARMv8, i686.
54 - Multiple TX and RX queues.
55 - Support for scattered TX and RX frames.
56 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
57 - Several RSS hash keys, one for each flow type.
58 - Default RSS operation with no hash key specification.
59 - Configurable RETA table.
60 - Support for multiple MAC addresses.
64 - RX CRC stripping configuration.
65 - Promiscuous mode on PF and VF.
66 - Multicast promiscuous mode on PF and VF.
67 - Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_FDIR_MODE_PERFECT_TUNNEL).
70 - Flow API, including :ref:`flow_isolated_mode`.
72 - KVM and VMware ESX SR-IOV modes are supported.
73 - RSS hash result is supported.
74 - Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
75 - Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
77 - Statistics query including Basic, Extended and per queue.
79 - Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP.
80 - Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
81 - NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
82 increment/decrement, count, drop, mark. For details please see :ref:`Supported hardware offloads using rte_flow API`.
- Flow insertion rate of more than one million flows per second, when using Direct Rules.
- Support for multiple rte_flow groups.

Limitations
-----------
90 - For secondary process:
92 - Forked secondary process not supported.
93 - External memory unregistered in EAL memseg list cannot be used for DMA
94 unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
95 primary process and remapped to the same virtual address in secondary
96 process. If the external memory is registered by primary process but has
97 different virtual address in secondary process, unexpected error may happen.
99 - Flow pattern without any specific vlan will match for vlan packets as well:
101 When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wild card.
102 Meaning, the flow rule::
104 flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...
Will only match vlan packets with vid=3, and the flow rules::
flow create 0 ingress pattern eth / ipv4 / end ...

Or::
112 flow create 0 ingress pattern eth / vlan / ipv4 / end ...
114 Will match any ipv4 packet (VLAN included).
- A multi-segment packet must have fewer than 6 segments when the Tx burst function
  is set to multi-packet send or Enhanced multi-packet send. Otherwise it must have
  fewer than 50 segments.
- Flows with a VXLAN Network Identifier equal (or that ends up being equal)
  to 0 are not supported.
123 - VXLAN TSO and checksum offloads are not supported on VM.
125 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
127 - VF: flow rules created on VF devices can only match traffic targeted at the
128 configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
132 MAC addresses not already present in the bridge table of the associated
133 kernel network device will be added and cleaned up by the PMD when closing
134 the device. In case of ungraceful program termination, some entries may
135 remain present and should be removed manually by other means.
- When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
  externally attached to a user-provided mbuf with EXT_ATTACHED_MBUF set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all the
  Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
  the external buffers will be freed by the PMD and the application which still
  holds the external buffers may be corrupted.
144 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
145 enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
146 supported. Some Rx packets may not have PKT_RX_RSS_HASH.
- IPv6 Multicast messages are not supported on VM, when promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 Multicast messages on VM, explicitly set the relevant
  MAC address using the rte_eth_dev_mac_addr_add() API, as in the example below.
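For example, with ``testpmd``, whose ``mac_addr`` command wraps this API
(the port number and multicast address below are illustrative):

.. code-block:: console

   testpmd> mac_addr add 0 33:33:00:00:00:01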
153 - E-Switch decapsulation Flow:
155 - can be applied to PF port only.
156 - must specify VF port action (packet redirection from PF to VF).
157 - optionally may specify tunnel inner source and destination MAC addresses.
159 - E-Switch encapsulation Flow:
161 - can be applied to VF ports only.
162 - must specify PF port action (packet redirection from VF to PF).
- ICMP/ICMP6 code/type matching cannot be supported together with IP-in-IP tunnel.
- LRO:

  - KEEP_CRC offload cannot be supported with LRO.
  - The first mbuf length, without head-room, must be big enough to include the
    TCP header (122B).
Statistics
----------

MLX5 supports various methods to report statistics:
Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are counted by SW only, i.e. the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion in the bus or on the NIC can be queried via the rx_discards_phy xstats counter.
Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As Mellanox NICs use the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, these counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port and are therefore not valid for VF.
Finally, per-flow statistics can be queried using ``rte_flow_query`` when attaching a count action to a specific flow. The flow counter counts the number of packets received successfully by the port that match the specific flow.
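As an illustration, all three levels can be exercised from the ``testpmd`` CLI,
which wraps the APIs above (the port number 0 and flow rule 0 are illustrative):

.. code-block:: console

   testpmd> show port stats 0
   testpmd> show port xstats 0
   testpmd> flow query 0 0 count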
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.
191 - ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)
193 Toggle compilation of librte_pmd_mlx5 itself.
195 - ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)
197 Build PMD with additional code to make it loadable without hard
dependencies on **libibverbs** or **libmlx5**, which may not be installed
199 on the target system.
201 In this mode, their presence is still required for it to run properly,
202 however their absence won't prevent a DPDK application from starting (with
203 ``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
204 missing with ``ldd(1)``.
206 It works by moving these dependencies to a purpose-built rdma-core "glue"
207 plug-in which must either be installed in a directory whose name is based
208 on ``CONFIG_RTE_EAL_PMD_PATH`` suffixed with ``-glue`` if set, or in a
209 standard location for the dynamic linker (e.g. ``/lib``) if left to the
210 default empty string (``""``).
212 This option has no performance impact.
214 - ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)
216 Embed static flavor of the dependencies **libibverbs** and **libmlx5**
217 in the PMD shared library or the executable static binary.
219 - ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)
Toggle debugging code and stricter compilation flags. Enabling this option
adds additional run-time checks and debugging messages at the cost of
lower performance.
For BlueField, the target should be set to ``arm64-bluefield-linux-gcc``. This
will enable ``CONFIG_RTE_LIBRTE_MLX5_PMD`` and set ``RTE_CACHE_LINE_SIZE`` to
64. The default armv8a configuration of both the make and meson builds sets it
to 128, which brings performance degradation.
232 Environment variables
233 ~~~~~~~~~~~~~~~~~~~~~
- ``MLX5_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.
240 Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
241 enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
242 since ``LD_LIBRARY_PATH`` has no effect in this case.
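For example, assuming the glue plug-in was installed under a custom prefix
(the path below is illustrative):

.. code-block:: console

   export MLX5_GLUE_PATH=/opt/dpdk/glue
   testpmd -w 05:00.0 -- -i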
244 - ``MLX5_SHUT_UP_BF``
246 Configures HW Tx doorbell register as IO-mapped.
248 By default, the HW Tx doorbell is configured as a write-combining register.
249 The register would be flushed to HW usually when the write-combining buffer
250 becomes full, but it depends on CPU design.
252 Except for vectorized Tx burst routines, a write memory barrier is enforced
after updating the register so that the update can be immediately visible to
the HW.
256 When vectorized Tx burst is called, the barrier is set only if the burst size
is not aligned to MLX5_VPMD_TX_MAX_BURST. However, setting this environment
variable will bring better latency even though the maximum throughput can
be slightly lower.
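For example, to trade some throughput for latency by forcing IO-mapped
doorbells (the command line below is illustrative):

.. code-block:: console

   MLX5_SHUT_UP_BF=1 testpmd -l 8-15 -n 4 -w 05:00.0 -- -i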
261 Run-time configuration
262 ~~~~~~~~~~~~~~~~~~~~~~
- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packet
  reception.
268 - **ethtool** operations on related kernel interfaces also affect the PMD.
270 - ``rxq_cqe_comp_en`` parameter [int]
A nonzero value enables the compression of CQE on RX side. This feature
saves PCI bandwidth and improves performance. Enabled by default. A usage
example follows the list of supported platforms below.

Supported on:
277 - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
278 - POWER9 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
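For example, to disable CQE compression on a given device (the PCI address is
illustrative; the same ``key=value`` device argument syntax applies to every
parameter in this section):

.. code-block:: console

   testpmd -w 05:00.0,rxq_cqe_comp_en=0 -- -i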
280 - ``rxq_cqe_pad_en`` parameter [int]
282 A nonzero value enables 128B padding of CQE on RX side. The size of CQE
283 is aligned with the size of a cacheline of the core. If cacheline size is
284 128B, the CQE size is configured to be 128B even though the device writes
285 only 64B data on the cacheline. This is to avoid unnecessary cache
286 invalidation by device's two consecutive writes on to one cacheline.
287 However in some architecture, it is more beneficial to update entire
288 cacheline with padding the rest 64B rather than striding because
289 read-modify-write could drop performance a lot. On the other hand,
290 writing extra data will consume more PCIe bandwidth and could also drop
291 the maximum throughput. It is recommended to empirically set this
parameter. Disabled by default.

Supported on:
296 - CPU having 128B cacheline with ConnectX-5 and BlueField.
298 - ``rxq_pkt_pad_en`` parameter [int]
300 A nonzero value enables padding Rx packet to the size of cacheline on PCI
301 transaction. This feature would waste PCI bandwidth but could improve
302 performance by avoiding partial cacheline write which may cause costly
read-modify-copy in memory transaction on some architectures. Disabled by
default.

Supported on:
308 - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
309 - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
311 - ``mprq_en`` parameter [int]
313 A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
314 configured as Multi-Packet RQ if the total number of Rx queues is
``rxqs_min_mprq`` or more and Rx scatter isn't configured. Disabled by
default.
318 Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe bandwidth
by posting a single large buffer for multiple packets. Instead of posting a
buffer per packet, one large buffer is posted in order to receive multiple
packets on the buffer. A MPRQ buffer consists of multiple fixed-size strides
322 and each stride receives one packet. MPRQ can improve throughput for
323 small-packet traffic.
When MPRQ is enabled, max_rx_pkt_len can be larger than the size of the
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure a stride size large enough to accommodate max_rx_pkt_len as long as
the device allows. Note that this can waste system memory compared to enabling
Rx scatter and multi-segment packet.
331 - ``mprq_log_stride_num`` parameter [int]
333 Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
334 strides can reduce PCIe traffic further. If configured value is not in the
335 range of device capability, the default value will be set with a warning
message. The default value is 4 which is 16 strides per buffer, valid only
337 if ``mprq_en`` is set.
339 The size of Rx queue should be bigger than the number of strides.
341 - ``mprq_max_memcpy_len`` parameter [int]
343 The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
344 packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
345 than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
346 the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
347 A mempool for external buffers will be allocated and managed by PMD. If Rx
348 packet is externally attached, ol_flags field of the mbuf will have
349 EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
350 checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
352 - ``rxqs_min_mprq`` parameter [int]
354 Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
greater or equal to this value. The default value is 12, valid only if
``mprq_en`` is set. A combined MPRQ example follows.
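For example, a sketch that forces MPRQ on regardless of the Rx queue count
(all values are illustrative, not recommendations):

.. code-block:: console

   testpmd -w 05:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=6,mprq_max_memcpy_len=128 -- --rxq=2 --txq=2 -i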
358 - ``txq_inline`` parameter [int]
360 Amount of data to be inlined during TX operations. This parameter is
361 deprecated and converted to the new parameter ``txq_inline_max`` providing
362 partial compatibility.
364 - ``txqs_min_inline`` parameter [int]
Enable inline data send only when the number of TX queues is greater or equal
to this value.
369 This option should be used in combination with ``txq_inline_max`` and
370 ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.
If this option is not specified, the default value 16 is used for BlueField
and 8 for other platforms.
Data inlining consumes CPU cycles, so this option is intended to enable
inlining automatically when there are enough Tx queues, which means enough CPU
cores are available, PCI bandwidth is becoming the critical resource, and the
CPU is no longer expected to be the bottleneck.
Copying data into the WQE improves latency and can improve PPS performance
when PCI back pressure is detected; it may be useful for scenarios involving
heavy traffic on many queues.
384 Because additional software logic is necessary to handle this mode, this
385 option should be used with care, as it may lower performance when back
386 pressure is not expected.
388 - ``txq_inline_min`` parameter [int]
390 Minimal amount of data to be inlined into WQE during Tx operations. NICs
391 may require this minimal data amount to operate correctly. The exact value
392 may depend on NIC operation mode, requested offloads, etc.
If the ``txq_inline_min`` key is present, the specified value (which may be
aligned by the driver in order not to exceed the limits and provide better
descriptor space utilization) will be used by the driver and it is guaranteed
that the requested data bytes are inlined into the WQE beside other inline
settings. This key also may update the ``txq_inline_max`` value (default or
specified explicitly in devargs) to reserve the space for inline data.
If the ``txq_inline_min`` key is not present, the value may be queried by the
driver from the NIC via DevX if this feature is available. If DevX is not
enabled/supported, the value 18 (supposing L2 header including VLAN) is set
for ConnectX-4, value 58 (supposing L2-L4 headers, required by configurations
over E-Switch) is set for ConnectX-4 Lx, and 0 is set by default for ConnectX-5
and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the
entire packet is inlined.
For the ConnectX-4 and ConnectX-4 Lx NICs, the driver does not allow setting
this value below 18 (minimal L2 header, including VLAN).
Please note that this minimal data inlining disengages the eMPW feature
(Enhanced Multi-Packet Write), because the latter does not support partial
packet inlining. This is not very critical, since minimal data inlining is
mostly required by ConnectX-4 and ConnectX-4 Lx, which do not support the eMPW
feature anyway.
417 - ``txq_inline_max`` parameter [int]
Specifies the maximal packet length to be completely inlined into WQE
Ethernet Segment for ordinary SEND method. If a packet is larger than the
specified value, the packet data won't be copied by the driver at all and the
data buffer is addressed with a pointer. If the packet length is less or equal,
all packet data will be copied into the WQE. This may improve PCI bandwidth
utilization for short packets significantly but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the
number of Tx queues is larger than the ``txqs_min_inline`` key parameter, the
inline feature is engaged; if there are not enough Tx queues (which means not
enough CPU cores and CPU resources are scarce), data inline is not performed
by the driver. Assigning ``txqs_min_inline`` with zero always enables the data
inline.
432 The default ``txq_inline_max`` value is 290. The specified value may be adjusted
433 by the driver in order not to exceed the limit (930 bytes) and to provide better
434 WQE space filling without gaps, the adjustment is reflected in the debug log.
436 - ``txq_inline_mpw`` parameter [int]
Specifies the maximal packet length to be completely inlined into WQE for
Enhanced MPW method. If a packet is larger than the specified value, the packet
data won't be copied and the data buffer is addressed with a pointer. If the
packet length is less or equal, all packet data will be copied into the WQE.
This may improve PCI bandwidth utilization for short packets significantly but
requires extra CPU cycles.
The data inline feature is controlled by the number of TX queues: if the
number of Tx queues is larger than the ``txqs_min_inline`` key parameter, the
inline feature is engaged; if there are not enough Tx queues (which means not
enough CPU cores and CPU resources are scarce), data inline is not performed
by the driver. Assigning ``txqs_min_inline`` with zero always enables the data
inline.
451 The default ``txq_inline_mpw`` value is 188. The specified value may be adjusted
452 by the driver in order not to exceed the limit (930 bytes) and to provide better
453 WQE space filling without gaps, the adjustment is reflected in the debug log.
Since multiple packets may be included in the same WQE with Enhanced Multi
Packet Write method and the overall WQE size is limited, it is not recommended
to specify large values for ``txq_inline_mpw``. A combined inline tuning
example follows.
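For example, a sketch that engages inlining on any queue count and caps both
inline lengths (values are illustrative, not recommendations):

.. code-block:: console

   testpmd -w 05:00.0,txqs_min_inline=0,txq_inline_max=128,txq_inline_mpw=128 -- -i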
458 - ``txqs_max_vec`` parameter [int]
460 Enable vectorized Tx only when the number of TX queues is less than or
equal to this value. This parameter is deprecated and ignored, kept
for compatibility to not prevent the driver from probing.
464 - ``txq_mpw_hdr_dseg_en`` parameter [int]
A nonzero value enables including two pointers in the first block of TX
descriptor. The parameter is deprecated and ignored, kept for compatibility.
470 - ``txq_max_inline_len`` parameter [int]
472 Maximum size of packet to be inlined. This limits the size of packet to
be inlined. If the size of a packet is larger than the configured value, the
packet isn't inlined even though there's enough space remaining in the
descriptor. Instead, the packet is included with a pointer. This parameter
476 is deprecated and converted directly to ``txq_inline_mpw`` providing full
477 compatibility. Valid only if eMPW feature is engaged.
479 - ``txq_mpw_en`` parameter [int]
481 A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
482 ConnectX-6 and BlueField. eMPW allows the TX burst function to pack up multiple
483 packets in a single descriptor session in order to save PCI bandwidth and improve
484 performance at the cost of a slightly higher CPU usage. When ``txq_inline_mpw``
is set along with ``txq_mpw_en``, TX burst function copies the entire packet
data onto the TX descriptor instead of including a pointer to the packet.
The Enhanced Multi-Packet Write feature is enabled by default if the NIC
supports it and can be disabled by explicitly specifying 0 for the
``txq_mpw_en`` option, as shown in the example below. Also, if minimal data
inlining is requested by non-zero ``txq_inline_min`` option or reported by the
NIC, the eMPW feature is disengaged.
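For example, to explicitly disable eMPW on a device (the PCI address is
illustrative):

.. code-block:: console

   testpmd -w 05:00.0,txq_mpw_en=0 -- -i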
493 - ``tx_vec_en`` parameter [int]
495 A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and BlueField
496 NICs if the number of global Tx queues on the port is less than
497 ``txqs_max_vec``. The parameter is deprecated and ignored.
499 - ``rx_vec_en`` parameter [int]
A nonzero value enables Rx vector if the port is not configured in
multi-segment mode, otherwise this parameter is ignored.

Enabled by default.
506 - ``vf_nl_en`` parameter [int]
508 A nonzero value enables Netlink requests from the VF to add/remove MAC
509 addresses or/and enable/disable promiscuous/all multicast on the Netdevice.
510 Otherwise the relevant configuration must be run with Linux iproute2 tools.
511 This is a prerequisite to receive this kind of traffic.
Enabled by default, valid only on VF devices, ignored otherwise.
515 - ``l3_vxlan_en`` parameter [int]
A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
L3 VXLAN or VXLAN-GPE, users have to configure firmware and enable this
parameter. This is a prerequisite to receive this kind of traffic; see the
example below.

Disabled by default.
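For example, once the firmware is configured as described in `Firmware
configuration`_ below, the parameter can be passed as a device argument
(the PCI address is illustrative):

.. code-block:: console

   testpmd -w 05:00.0,l3_vxlan_en=1 -- -i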
523 - ``dv_flow_en`` parameter [int]
A nonzero value enables the DV flow steering assuming it is supported
by the driver.

Disabled by default.
530 - ``dv_esw_en`` parameter [int]
532 A nonzero value enables E-Switch using Direct Rules.
534 Enabled by default if supported.
536 - ``mr_ext_memseg_en`` parameter [int]
538 A nonzero value enables extending memseg when registering DMA memory. If
539 enabled, the number of entries in MR (Memory Region) lookup table on datapath
540 is minimized and it benefits performance. On the other hand, it worsens memory
541 utilization because registered memory is pinned by kernel driver. Even if a
542 page in the extended chunk is freed, that doesn't become reusable until the
entire memory is freed.

Enabled by default.
547 - ``representor`` parameter [list]
549 This parameter can be used to instantiate DPDK Ethernet devices from
550 existing port (or VF) representors configured on the device.
552 It is a standard parameter whose format is described in
553 :ref:`ethernet_device_standard_device_arguments`.
For instance, to probe port representors 0 through 2::

   representor=[0-2]
559 - ``max_dump_files_num`` parameter [int]
561 The maximum number of files per PMD entity that may be created for debug information.
The files will be created in the /var/log directory or in the current directory.

Set to 128 by default.
566 - ``lro_timeout_usec`` parameter [int]
568 The maximum allowed duration of an LRO session, in micro-seconds.
569 PMD will set the nearest value supported by HW, which is not bigger than
570 the input ``lro_timeout_usec`` value.
571 If this parameter is not specified, by default PMD will set
572 the smallest value supported by HW.
574 Firmware configuration
575 ~~~~~~~~~~~~~~~~~~~~~~
577 - L3 VXLAN and VXLAN-GPE destination UDP port
579 .. code-block:: console
581 mlxconfig -d <mst device> set IP_OVER_VXLAN_EN=1
582 mlxconfig -d <mst device> set IP_OVER_VXLAN_PORT=<udp dport>
584 Verify configurations are set:
586 .. code-block:: console
588 mlxconfig -d <mst device> query | grep IP_OVER_VXLAN
589 IP_OVER_VXLAN_EN True(1)
590 IP_OVER_VXLAN_PORT <udp dport>
592 - enable ICMP/ICMP6's code/type field matching
594 .. code-block:: console
596 mlxconfig -d <mst device> set FLEX_PARSER_PROFILE_ENABLE=2
598 Verify configurations are set:
600 .. code-block:: console
602 mlxconfig -d <mst device> query | grep FLEX_PARSER_PROFILE_ENABLE
603 FLEX_PARSER_PROFILE_ENABLE 2
605 - IP-in-IP tunnel enable
607 .. code-block:: console
609 mlxconfig -d <mst device> set FLEX_PARSER_PROFILE_ENABLE=0
611 Verify configurations are set:
613 .. code-block:: console
615 mlxconfig -d <mst device> query | grep FLEX_PARSER_PROFILE_ENABLE
616 FLEX_PARSER_PROFILE_ENABLE 0
621 This driver relies on external libraries and kernel drivers for resources
622 allocations and initialization. The following dependencies are not part of
623 DPDK and must be installed separately:
- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.
631 It allows slow and privileged operations (context initialization, hardware
632 resources allocations) to be managed by the kernel and fast operations to
633 never leave user space.
- **libmlx5**

  Low-level user space driver library for Mellanox
  ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
  by libibverbs.

  This library basically implements send/receive calls to the hardware
  (queues).
- **Kernel modules**

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:
653 - mlx5_core: hardware driver managing Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
network devices.
- mlx5_ib: InfiniBand device driver.
657 - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
659 - **Firmware update**
661 Mellanox OFED/EN releases include firmware updates for
662 ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.
664 Because each release provides new features, these updates must be applied to
665 match the kernel modules and libraries they come with.
Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
licensed.

Installation
~~~~~~~~~~~~
675 Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.
679 RDMA Core with Linux Kernel
680 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
682 - Minimal kernel version : v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
683 - Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request #227 from yishaih/tm")
684 (see `RDMA Core installation documentation`_)
685 - When building for i686 use:
687 - rdma-core version 18.0 or above built with 32bit support.
688 - Kernel version 4.14.41 or above.
690 - Starting with rdma-core v21, static libraries can be built::
cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja
696 .. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
697 .. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
699 If rdma-core libraries are built but not installed, DPDK makefile can link them,
700 thanks to these environment variables:
702 - ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
703 - ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
704 - ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``
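For example, assuming rdma-core was built in ``/opt/rdma-core/build`` (the
path and build target below are illustrative):

.. code-block:: console

   export EXTRA_CFLAGS=-I/opt/rdma-core/build/include
   export EXTRA_LDFLAGS=-L/opt/rdma-core/build/lib
   export PKG_CONFIG_PATH=/opt/rdma-core/build/lib/pkgconfig
   make config T=x86_64-native-linux-gcc
   make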
Mellanox OFED/EN
^^^^^^^^^^^^^^^^

- Mellanox OFED version: **4.5, 4.6** /
  Mellanox EN version: **4.5, 4.6**

- firmware version:
713 - ConnectX-4: **12.21.1000** and above.
714 - ConnectX-4 Lx: **14.21.1000** and above.
715 - ConnectX-5: **16.21.1000** and above.
716 - ConnectX-5 Ex: **16.21.1000** and above.
717 - ConnectX-6: **20.99.5374** and above.
718 - BlueField: **18.25.1010** and above.
720 While these libraries and kernel modules are available on OpenFabrics
721 Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
722 managers on most distributions, this PMD requires Ethernet extensions that
723 may not be supported at the moment (this is a work in progress).
Both `Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__ and
`Mellanox EN
<http://www.mellanox.com/page/products_dyn?product_family=27&mtag=linux>`__
729 include the necessary support and should be used in the meantime. For DPDK,
730 only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
731 required from that distribution.
.. note::

   Several versions of Mellanox OFED/EN are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.
Supported NICs
--------------

* Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
743 * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
744 * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
745 * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
746 * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT (1x40G)
747 * Mellanox(R) ConnectX(R)-4 40G MCX413A-BCAT (1x40G)
748 * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
749 * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT (1x50G)
750 * Mellanox(R) ConnectX(R)-4 50G MCX413A-GCAT (1x50G)
751 * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
752 * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT (2x50G)
753 * Mellanox(R) ConnectX(R)-4 50G MCX416A-BCAT (2x50G)
754 * Mellanox(R) ConnectX(R)-4 50G MCX416A-GCAT (2x50G)
755 * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
756 * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
757 * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
758 * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
759 * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
760 * Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)
762 Quick Start Guide on OFED/EN
763 ----------------------------
765 1. Download latest Mellanox OFED/EN. For more info check the `prerequisites`_.
768 2. Install the required libraries and kernel modules either by installing
769 only the required set, or by installing the entire Mellanox OFED/EN:
771 .. code-block:: console
773 ./mlnxofedinstall --upstream-libs --dpdk
775 3. Verify the firmware is the correct one:
.. code-block:: console

   ibv_devinfo
4. Verify that all port links are set to Ethernet:
783 .. code-block:: console
mlxconfig -d <mst device> query | grep LINK_TYPE
LINK_TYPE_P1                        ETH(2)
LINK_TYPE_P2                        ETH(2)
789 Link types may have to be configured to Ethernet:
791 .. code-block:: console
793 mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3
* LINK_TYPE_P1=<1|2|3>, 1=InfiniBand 2=Ethernet 3=VPI(auto-sense)
For hypervisors, verify SR-IOV is enabled on the NIC:
799 .. code-block:: console
mlxconfig -d <mst device> query | grep SRIOV_EN
SRIOV_EN                            True(1)
If needed, set the relevant fields:
806 .. code-block:: console
808 mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
809 mlxfwreset -d <mst device> reset
811 5. Restart the driver:
813 .. code-block:: console
815 /etc/init.d/openibd restart
or:

.. code-block:: console
821 service openibd restart
823 If link type was changed, firmware must be reset as well:
825 .. code-block:: console
827 mlxfwreset -d <mst device> reset
For hypervisors, after reset write the sysfs number of virtual functions
needed for the PF.
832 To dynamically instantiate a given number of virtual functions (VFs):
834 .. code-block:: console
836 echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
838 6. Compile DPDK and you are ready to go. See instructions on
839 :ref:`Development Kit Build System <Development_Kit_Build_System>`
841 Enable switchdev mode
842 ---------------------
Switchdev mode is a mode in E-Switch that binds between a representor and a VF.
A representor is a port in DPDK that is connected to a VF in such a way that,
assuming there are no offload flows, each packet that is sent from the VF will
be received by the corresponding representor, while each packet that is sent to
a representor will be received by the VF.
This is very useful in case of SR-IOV mode, where the first packet that is sent
by the VF will be received by the DPDK application, which will decide if this
flow should be offloaded to the E-Switch. After the flow is offloaded, packets
from the VF that match the flow will not be received any more by the DPDK
application.
855 1. Enable SRIOV mode:
857 .. code-block:: console
859 mlxconfig -d <mst device> set SRIOV_EN=true
861 2. Configure the max number of VFs:
863 .. code-block:: console
865 mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>
Then reset the firmware so the change takes effect:

.. code-block:: console
871 mlxfwreset -d <mst device> reset
873 3. Configure the actual number of VFs:
875 .. code-block:: console
echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs
4. Unbind the device (it can be rebound after enabling switchdev mode):
881 .. code-block:: console
echo -n "<device pci address>" > /sys/bus/pci/drivers/mlx5_core/unbind
5. Enable switchdev mode:
887 .. code-block:: console
echo switchdev > /sys/class/net/<net device>/compat/devlink/mode

Performance tuning
------------------
894 1. Configure aggressive CQE Zipping for maximum performance:
896 .. code-block:: console
898 mlxconfig -d <mst device> s CQE_COMPRESSION=1
900 To set it back to the default CQE Zipping mode use:
902 .. code-block:: console
904 mlxconfig -d <mst device> s CQE_COMPRESSION=0
906 2. In case of virtualization:
908 - Make sure that hypervisor kernel is 3.16 or newer.
909 - Configure boot with ``iommu=pt``.
911 - Make sure to allocate a VM on huge pages.
912 - Make sure to set CPU pinning.
3. Use CPUs near the local NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
916 and NUMA node are pinned according to the above. Run:
918 .. code-block:: console
lstopo-no-graphics

to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without the NUMA
   performance penalty.
930 5. Disable pause frames:
932 .. code-block:: console
934 ethtool -A <netdev> rx off tx off
936 6. Verify IO non-posted prefetch is disabled by default. This can be checked
via the BIOS configuration. Please contact your server provider for more
938 information about the settings.
On some machines, depending on the machine integrator, it is beneficial
to set the PCI max read request parameter to 1K. This can be
943 to set the PCI max read request parameter to 1K. This can be
944 done in the following way:
946 To query the read request size use:
948 .. code-block:: console
950 setpci -s <NIC PCI address> 68.w
952 If the output is different than 3XXX, set it by:
954 .. code-block:: console
956 setpci -s <NIC PCI address> 68.w=3XXX
958 The XXX can be different on different systems. Make sure to configure
959 according to the setpci output.
7. To minimize overhead of searching Memory Regions (see the example after
   this list):
- ``--socket-mem`` is recommended to pin memory by a predictable amount.
964 - Configure per-lcore cache when creating Mempools for packet buffer.
965 - Refrain from dynamically allocating/freeing memory in run-time.
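For example, a sketch combining these recommendations in a ``testpmd``
invocation (the core list, memory amounts and cache size are illustrative;
``--mbcache`` sets the per-lcore mbuf pool cache in testpmd):

.. code-block:: console

   testpmd -l 8-15 -n 4 --socket-mem=2048,0 -w 05:00.0 -- --mbcache=512 -i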
967 Supported hardware offloads using rte_flow API
968 ----------------------------------------------
970 .. _Supported hardware offloads using rte_flow API:
972 .. table:: Supported hardware offloads using rte_flow API
+-----------------------+-----------------+-----------------+
| Offload               | E-Switch        | NIC             |
+=======================+=================+=================+
| Count                 | | DPDK 19.05    | | DPDK 19.02    |
|                       | | OFED 4.6      | | OFED 4.6      |
|                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Drop / Queue / RSS    | | DPDK 19.05    | | DPDK 18.11    |
|                       | | OFED 4.6      | | OFED 4.5      |
|                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
|                       | | ConnectX-5    | | ConnectX-4    |
+-----------------------+-----------------+-----------------+
| Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
| (VXLAN / NVGRE / RAW) | | OFED 4.6.2    | | OFED 4.6      |
|                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Header rewrite        | | DPDK 19.05    | | DPDK 19.02    |
| (set_ipv4_src /       | | OFED 4.6.2    | | OFED 4.6.2    |
| set_ipv4_dst /        | | RDMA-CORE V24 | | RDMA-CORE V23 |
| set_ipv6_src /        | | ConnectX-5    | | ConnectX-5    |
| set_ipv6_dst /        | |               | |               |
| set_tp_src /          | |               | |               |
| set_tp_dst /          | |               | |               |
| dec_ttl /             | |               | |               |
| set_ttl /             | |               | |               |
| set_mac_src /         | |               | |               |
| set_mac_dst)          | |               | |               |
+-----------------------+-----------------+-----------------+
| Jump                  | | DPDK 19.05    | | DPDK 19.02    |
|                       | | OFED 4.6.2    | | OFED 4.6.2    |
|                       | | RDMA-CORE V24 | | N/A           |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
|                       | | OFED 4.6      | | OFED 4.5      |
|                       | | RDMA-CORE V24 | | RDMA-CORE V23 |
|                       | | ConnectX-5    | | ConnectX-4    |
+-----------------------+-----------------+-----------------+
| Port ID               | | DPDK 19.05    | | N/A           |
|                       | | OFED 4.6      | | N/A           |
|                       | | RDMA-CORE V24 | | N/A           |
|                       | | ConnectX-5    | | N/A           |
+-----------------------+-----------------+-----------------+
* Minimum version for each component and NIC.
Notes for testpmd
-----------------

Compared to librte_pmd_mlx4 that implements a single RSS configuration per
1027 port, librte_pmd_mlx5 supports per-protocol RSS configuration.
1029 Since ``testpmd`` defaults to IP RSS mode and there is currently no
1030 command-line parameter to enable additional protocols (UDP and TCP as well
1031 as IP), the following commands must be entered from its CLI to get the same
1032 behavior as librte_pmd_mlx4:
1034 .. code-block:: console
> port stop all
> port config all rss all
> port start all

Usage example
-------------
1043 This section demonstrates how to launch **testpmd** with Mellanox
1044 ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
1046 #. Load the kernel modules:
1048 .. code-block:: console
1050 modprobe -a ib_uverbs mlx5_core mlx5_ib
Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
can be run:
1055 .. code-block:: console
1057 /etc/init.d/openibd restart
.. note::

   User space I/O kernel modules (uio and igb_uio) are not used and do
   not have to be loaded.
1064 #. Make sure Ethernet interfaces are in working order and linked to kernel
1065 verbs. Related sysfs entries should be present:
1067 .. code-block:: console
1069 ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
Example output:

.. code-block:: console

   eth30
   eth31
   eth32
   eth33
1080 #. Optionally, retrieve their PCI bus addresses for whitelisting:
1082 .. code-block:: console
{
    for intf in eth2 eth3 eth4 eth5;
    do
        (cd "/sys/class/net/${intf}/device/" && pwd -P);
    done;
} |
sed -n 's,.*/\(.*\),-w \1,p'
Example output:

.. code-block:: console

   -w 0000:05:00.1
   -w 0000:06:00.0
   -w 0000:06:00.1
   -w 0000:05:00.0
1101 #. Request huge pages:
1103 .. code-block:: console
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
1107 #. Start testpmd with basic parameters:
1109 .. code-block:: console
1111 testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
Example output:

.. code-block:: console
1118 EAL: PCI device 0000:05:00.0 on NUMA socket 0
1119 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1120 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
1121 PMD: librte_pmd_mlx5: 1 port(s) detected
1122 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
1123 EAL: PCI device 0000:05:00.1 on NUMA socket 0
1124 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1125 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
1126 PMD: librte_pmd_mlx5: 1 port(s) detected
1127 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
1128 EAL: PCI device 0000:06:00.0 on NUMA socket 0
1129 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1130 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
1131 PMD: librte_pmd_mlx5: 1 port(s) detected
1132 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
1133 EAL: PCI device 0000:06:00.1 on NUMA socket 0
1134 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1135 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
1136 PMD: librte_pmd_mlx5: 1 port(s) detected
1137 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
1138 Interactive-mode selected
1139 Configuring Port 0 (socket 0)
1140 PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
1141 PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
1142 Port 0: E4:1D:2D:E7:0C:FE
1143 Configuring Port 1 (socket 0)
1144 PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
1145 PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
1146 Port 1: E4:1D:2D:E7:0C:FF
1147 Configuring Port 2 (socket 0)
1148 PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
1149 PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
1150 Port 2: E4:1D:2D:E7:0C:FA
1151 Configuring Port 3 (socket 0)
1152 PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
1153 PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
1154 Port 3: E4:1D:2D:E7:0C:FB
1155 Checking link statuses...
1156 Port 0 Link Up - speed 40000 Mbps - full-duplex
1157 Port 1 Link Up - speed 40000 Mbps - full-duplex
1158 Port 2 Link Up - speed 10000 Mbps - full-duplex
1159 Port 3 Link Up - speed 10000 Mbps - full-duplex