1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2015 6WIND S.A.
3 Copyright 2015 Mellanox Technologies, Ltd
5 .. include:: <isonum.txt>
MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
ConnectX-5**, **Mellanox ConnectX-6**, **Mellanox ConnectX-6 Dx** and
**Mellanox BlueField** families of 10/25/40/50/100/200 Gb/s adapters
as well as their virtual functions (VF) in SR-IOV context.
16 Information and documentation about these adapters can be found on the
17 `Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
18 `Mellanox community <http://community.mellanox.com/welcome>`__.
20 There is also a `section dedicated to this poll mode driver
21 <http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.
.. note::

   Due to external dependencies, this driver is disabled in the default
   configuration of the "make" build. It can be enabled with
   ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` or by using the "meson" build system,
   which detects the dependencies automatically.
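For illustration, the two build flavors could be driven as follows (the make
target and directory names are examples only)::

   # "make" build: enable the PMD in the generated configuration
   make config T=x86_64-native-linux-gcc O=build
   sed -i 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' build/.config
   make O=build

   # "meson" build: dependencies are detected automatically
   meson build
   ninja -C build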
Design
------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.
36 For security reasons and robustness, this driver only deals with virtual
37 memory addresses. The way resources allocations are handled by the kernel,
38 combined with hardware specifications that allow to handle virtual memory
39 addresses directly, ensure that DPDK applications cannot access random
40 physical memory (or memory that does not belong to the current process).
42 This capability allows the PMD to coexist with kernel network interfaces
43 which remain functional, although they stop receiving unicast packets as
44 long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.
49 The PMD can use libibverbs and libmlx5 to access the device firmware
50 or directly the hardware components.
There are different levels of objects and bypassing abilities
to get the best performance:

- Verbs is a complete high-level generic API
- Direct Verbs is a device-specific API
- DevX allows access to firmware objects
- Direct Rules manages flow steering at the low-level hardware layer
Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------
65 - Multi arch support: x86_64, POWER8, ARMv8, i686.
66 - Multiple TX and RX queues.
67 - Support for scattered TX and RX frames.
68 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
69 - RSS using different combinations of fields: L3 only, L4 only or both,
70 and source only, destination only or both.
71 - Several RSS hash keys, one for each flow type.
72 - Default RSS operation with no hash key specification.
73 - Configurable RETA table.
74 - Link flow control (pause frame).
75 - Support for multiple MAC addresses.
79 - RX CRC stripping configuration.
80 - Promiscuous mode on PF and VF.
81 - Multicast promiscuous mode on PF and VF.
82 - Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
85 - Flow API, including :ref:`flow_isolated_mode`.
87 - KVM and VMware ESX SR-IOV modes are supported.
88 - RSS hash result is supported.
89 - Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
90 - Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
92 - Statistics query including Basic, Extended and per queue.
94 - Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
95 - Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
96 - NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
97 increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
99 - Support for multiple rte_flow groups.
- Per packet no-inline hint flag to disable packet data copying into Tx descriptors.

Limitations
-----------
106 - For secondary process:
108 - Forked secondary process not supported.
109 - External memory unregistered in EAL memseg list cannot be used for DMA
110 unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
111 primary process and remapped to the same virtual address in secondary
112 process. If the external memory is registered by primary process but has
113 different virtual address in secondary process, unexpected error may happen.
115 - When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
116 specific VLAN will match for VLAN packets as well:
118 When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wild card.
119 Meaning, the flow rule::
121 flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...
will only match VLAN packets with vid=3, and the flow rule::
125 flow create 0 ingress pattern eth / ipv4 / end ...
will match any IPv4 packet (VLAN included).
129 - VLAN pop offload command:
- Flow rules that have a VLAN pop offload command as one of their actions and
  lack a match on VLAN as one of their items are not supported.
133 - The command is not supported on egress traffic.
135 - VLAN push offload is not supported on ingress traffic.
137 - VLAN set PCP offload is not supported on existing headers.
- A multi-segment packet must have no more segments than reported by dev_infos_get()
  in the tx_desc_lim.nb_seg_max field. This value depends on the maximal supported Tx
  descriptor size and the ``txq_inline_min`` settings, and may be from 2 (worst case
  forced by maximal inline settings) to 58.
- Flows with a VXLAN Network Identifier equal (or that ends up being equal)
  to 0 are not supported.
147 - VXLAN TSO and checksum offloads are not supported on VM.
149 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
- Match on Geneve header supports the following fields only:

  - VNI
  - OAM
  - protocol type
  - options length
    Currently, the only supported options length value is 0.
159 - VF: flow rules created on VF devices can only match traffic targeted at the
160 configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
- Match on GTP tunnel header item supports the following fields only:

  - msg_type
  - teid
- No Tx metadata goes to the E-Switch steering domain for the Flow group 0.
  Flows within group 0 with a set metadata action are rejected by hardware.
.. note::

   MAC addresses not already present in the bridge table of the associated
   kernel network device will be added and cleaned up by the PMD when closing
   the device. In case of ungraceful program termination, some entries may
   remain present and should be removed manually by other means.
- When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
  externally attached to a user-provided mbuf, with EXT_ATTACHED_MBUF set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all the
  Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
  the external buffers will be freed by the PMD and the application which still
  holds the external buffers may be corrupted.
184 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
185 enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
186 supported. Some Rx packets may not have PKT_RX_RSS_HASH.
- IPv6 multicast messages are not supported on VMs while promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on a VM, explicitly set the relevant
  MAC address using the rte_eth_dev_mac_addr_add() API.
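  For illustration, a minimal C sketch of adding such a multicast MAC address
  (the address bytes and function name are hypothetical):

  .. code-block:: c

     #include <rte_ethdev.h>
     #include <rte_ether.h>

     /* Hypothetical solicited-node multicast MAC (33:33:ff:xx:xx:xx). */
     static int
     add_ipv6_mcast_mac(uint16_t port_id)
     {
         struct rte_ether_addr mcast = {
             .addr_bytes = { 0x33, 0x33, 0xff, 0x00, 0x00, 0x01 },
         };

         /* Pool index 0 is the default pool. */
         return rte_eth_dev_mac_addr_add(port_id, &mcast, 0);
     }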
193 - To support a mixed traffic pattern (some buffers from local host memory, some
194 buffers from other devices) with high bandwidth, a mbuf flag is used.
An application hints the PMD whether or not it should try to inline the
given mbuf data buffer. The PMD makes a best effort to act upon this request.

The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
registered by the application with rte_mbuf_dynflag_register(). This flag is
purely driver-specific and declared in the PMD specific header ``rte_pmd_mlx5.h``,
which is intended to be used by the application.
To query the supported specific flags at runtime,
the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
currently (over present hardware and configuration) supported specific flags.
The operating flow of the "no-inline hint" feature is as follows:
- probe the devices, ports are created
- query the port capabilities
- if a port supporting the feature is found
- register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
- application starts the ports
- on ``dev_start()`` PMD checks whether the feature flag is registered and
  enables the feature support in datapath
- application might set the registered flag bit in the ``ol_flags`` field
  of the mbuf being sent and the PMD will handle it appropriately.
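  A minimal sketch of the registration and per-mbuf usage in C (error handling
  trimmed, helper function names are illustrative):

  .. code-block:: c

     #include <rte_mbuf.h>
     #include <rte_mbuf_dyn.h>
     #include <rte_pmd_mlx5.h>

     static uint64_t no_inline_mask; /* bit mask of the registered flag */

     static int
     setup_no_inline_hint(void)
     {
         const struct rte_mbuf_dynflag flag_desc = {
             .name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
         };
         int bitnum = rte_mbuf_dynflag_register(&flag_desc);

         if (bitnum < 0)
             return -1; /* feature not available */
         no_inline_mask = 1ULL << bitnum;
         return 0;
     }

     /* In the Tx path: hint the PMD not to inline this mbuf's data. */
     static inline void
     mark_no_inline(struct rte_mbuf *m)
     {
         m->ol_flags |= no_inline_mask;
     }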
- The number of descriptors in a Tx queue may be limited by data inline settings.
  Inline data require more descriptor building blocks and the overall block
  amount may exceed the hardware supported limits. The application should
  reduce the requested Tx size or adjust the data inline settings with the
  ``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.
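  For example, the inline settings can be lowered per device via devargs
  (PCI address and values are examples only)::

     testpmd -w 0000:05:00.0,txq_inline_max=128,txq_inline_mpw=128 -- --txd=1024 -i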
226 - E-Switch decapsulation Flow:
228 - can be applied to PF port only.
229 - must specify VF port action (packet redirection from PF to VF).
230 - optionally may specify tunnel inner source and destination MAC addresses.
232 - E-Switch encapsulation Flow:
234 - can be applied to VF ports only.
235 - must specify PF port action (packet redirection from VF to PF).
- Raw encapsulation:

  - The input buffer, used as outer header, is not validated.
- Raw decapsulation:

  - The decapsulation is always done up to the outermost tunnel detected by the HW.
  - The input buffer, providing the removal size, is not validated.
  - The buffer size must match the length of the headers to be removed.
247 - ICMP/ICMP6 code/type matching, IP-in-IP and MPLS flow matching are all
248 mutually exclusive features which cannot be supported together
249 (see :ref:`mlx5_firmware_config`).
- LRO:

  - Requires DevX and DV flow to be enabled.
  - KEEP_CRC offload cannot be supported with LRO.
  - The first mbuf length, without head-room, must be big enough to include the
    TCP header (122B).
  - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
    it with size limited to max LRO size, not to max RX packet length.
Statistics
----------

MLX5 supports various methods to report statistics:
Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are counted by software only, reflecting the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion in the bus or on the NIC can be queried via the rx_discards_phy xstats counter.
Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As Mellanox NICs are using the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port and are therefore not valid for VF.
Finally, per-flow statistics can be queried using ``rte_flow_query`` when attaching a count action to a specific flow. The flow counter counts the number of packets received successfully by the port that match the specific flow.
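A minimal sketch of the per-flow query in C, assuming a rule previously
created with a ``COUNT`` action (``port_id`` and ``flow`` come from that
creation):

.. code-block:: c

   #include <inttypes.h>
   #include <stdio.h>
   #include <rte_flow.h>

   static void
   print_flow_hits(uint16_t port_id, struct rte_flow *flow)
   {
       struct rte_flow_query_count count = { .reset = 0 };
       const struct rte_flow_action action = {
           .type = RTE_FLOW_ACTION_TYPE_COUNT,
       };
       struct rte_flow_error error;

       if (rte_flow_query(port_id, flow, &action, &count, &error) == 0 &&
           count.hits_set)
           printf("flow hits: %" PRIu64 "\n", count.hits);
   }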
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.
279 - ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)
281 Toggle compilation of librte_pmd_mlx5 itself.
283 - ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)
Build PMD with additional code to make it loadable without hard
dependencies on **libibverbs** or **libmlx5**, which may not be installed
on the target system.
In this mode, their presence is still required for it to run properly;
however, their absence won't prevent a DPDK application from starting (with
``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
missing with ``ldd(1)``.
294 It works by moving these dependencies to a purpose-built rdma-core "glue"
295 plug-in which must either be installed in a directory whose name is based
296 on ``CONFIG_RTE_EAL_PMD_PATH`` suffixed with ``-glue`` if set, or in a
297 standard location for the dynamic linker (e.g. ``/lib``) if left to the
298 default empty string (``""``).
300 This option has no performance impact.
302 - ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)
304 Embed static flavor of the dependencies **libibverbs** and **libmlx5**
305 in the PMD shared library or the executable static binary.
307 - ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)
Toggle debugging code and stricter compilation flags. Enabling this option
adds additional run-time checks and debugging messages at the cost of
lower performance.

.. note::

   For BlueField, the target should be set to ``arm64-bluefield-linux-gcc``. This
   will enable ``CONFIG_RTE_LIBRTE_MLX5_PMD`` and set ``RTE_CACHE_LINE_SIZE`` to
   64. The default armv8a configuration of the make and meson builds sets it to
   128, which brings performance degradation.
320 This option is available in meson:
322 - ``ibverbs_link`` can be ``static``, ``shared``, or ``dlopen``.
324 Environment variables
325 ~~~~~~~~~~~~~~~~~~~~~
- ``MLX5_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.
332 Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
333 enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
334 since ``LD_LIBRARY_PATH`` has no effect in this case.
336 - ``MLX5_SHUT_UP_BF``
338 Configures HW Tx doorbell register as IO-mapped.
340 By default, the HW Tx doorbell is configured as a write-combining register.
341 The register would be flushed to HW usually when the write-combining buffer
342 becomes full, but it depends on CPU design.
Except for vectorized Tx burst routines, a write memory barrier is enforced
after updating the register so that the update can be immediately visible to
HW.
When vectorized Tx burst is called, the barrier is set only if the burst size
is not aligned to MLX5_VPMD_TX_MAX_BURST. However, setting this environment
variable will bring better latency even though the maximum throughput can
slightly decline.
353 Run-time configuration
354 ~~~~~~~~~~~~~~~~~~~~~~
- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  reception.
360 - **ethtool** operations on related kernel interfaces also affect the PMD.
362 - ``rxq_cqe_comp_en`` parameter [int]
A nonzero value enables the compression of CQE on RX side. This feature
saves PCI bandwidth and can improve performance. Enabled by default.

Supported on:

- x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx
  and BlueField.
- POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx
  and BlueField.
374 - ``rxq_cqe_pad_en`` parameter [int]
376 A nonzero value enables 128B padding of CQE on RX side. The size of CQE
377 is aligned with the size of a cacheline of the core. If cacheline size is
378 128B, the CQE size is configured to be 128B even though the device writes
379 only 64B data on the cacheline. This is to avoid unnecessary cache
invalidation by the device's two consecutive writes onto one cacheline.
However, on some architectures, it is more beneficial to update the entire
cacheline by padding the remaining 64B rather than striding because
read-modify-write could drop performance a lot. On the other hand,
writing extra data will consume more PCIe bandwidth and could also drop
the maximum throughput. It is recommended to empirically set this
parameter. Disabled by default.

Supported on:
390 - CPU having 128B cacheline with ConnectX-5 and BlueField.
392 - ``rxq_pkt_pad_en`` parameter [int]
394 A nonzero value enables padding Rx packet to the size of cacheline on PCI
395 transaction. This feature would waste PCI bandwidth but could improve
396 performance by avoiding partial cacheline write which may cause costly
397 read-modify-copy in memory transaction on some architectures. Disabled by
- x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx
  and BlueField.
- POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx
  and BlueField.
407 - ``mprq_en`` parameter [int]
A nonzero value enables configuring Multi-Packet Rx queues. A Rx queue is
configured as Multi-Packet RQ if the total number of Rx queues is
``rxqs_min_mprq`` or more and Rx scatter isn't configured. Disabled by
default.
Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe bandwidth
by posting a single large buffer for multiple packets. Instead of posting a
buffer per packet, one large buffer is posted in order to receive multiple
417 packets on the buffer. A MPRQ buffer consists of multiple fixed-size strides
418 and each stride receives one packet. MPRQ can improve throughput for
419 small-packet traffic.
When MPRQ is enabled, max_rx_pkt_len can be larger than the size of the
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. The PMD will
configure a stride size large enough to accommodate max_rx_pkt_len as long as
the device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packets.
427 - ``mprq_log_stride_num`` parameter [int]
Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
strides can reduce PCIe traffic further. If the configured value is not in the
range of the device capability, the default value will be set with a warning
message. The default value is 4, which is 16 strides per buffer, valid only
if ``mprq_en`` is set.
435 The size of Rx queue should be bigger than the number of strides.
437 - ``mprq_max_memcpy_len`` parameter [int]
439 The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
440 packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
441 than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
442 the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
443 A mempool for external buffers will be allocated and managed by PMD. If Rx
444 packet is externally attached, ol_flags field of the mbuf will have
445 EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
446 checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
448 - ``rxqs_min_mprq`` parameter [int]
Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
greater or equal to this value. The default value is 12, valid only if
``mprq_en`` is set.
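Putting the MPRQ parameters together, a hypothetical configuration could look
like this (PCI address and values are examples only)::

   testpmd -w 0000:05:00.0,mprq_en=1,mprq_log_stride_num=6,mprq_max_memcpy_len=128,rxqs_min_mprq=2 -- --rxq=2 --txq=2 -i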
454 - ``txq_inline`` parameter [int]
456 Amount of data to be inlined during TX operations. This parameter is
457 deprecated and converted to the new parameter ``txq_inline_max`` providing
458 partial compatibility.
460 - ``txqs_min_inline`` parameter [int]
Enable inline data send only when the number of TX queues is greater or equal
to this value.
465 This option should be used in combination with ``txq_inline_max`` and
466 ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.
If this option is not specified, the default value 16 is used for BlueField
and 8 for other platforms.
Data inlining consumes CPU cycles, so this option is intended to
auto-enable inline data if there are enough Tx queues, which means there are
enough CPU cores while PCI bandwidth is getting more critical and the CPU
is not supposed to be the bottleneck anymore.
Copying data into the WQE improves latency and can improve PPS performance
when PCI back pressure is detected, and may be useful for scenarios involving
heavy traffic on many queues.
480 Because additional software logic is necessary to handle this mode, this
481 option should be used with care, as it may lower performance when back
482 pressure is not expected.
If inline data are enabled, it may affect the maximal Tx queue size in
descriptors, because inline data increase the descriptor size and
queue size limits supported by hardware may be exceeded.
488 - ``txq_inline_min`` parameter [int]
490 Minimal amount of data to be inlined into WQE during Tx operations. NICs
491 may require this minimal data amount to operate correctly. The exact value
492 may depend on NIC operation mode, requested offloads, etc. It is strongly
493 recommended to omit this parameter and use the default values. Anyway,
494 applications using this parameter should take into consideration that
495 specifying an inconsistent value may prevent the NIC from sending packets.
If the ``txq_inline_min`` key is present, the specified value (which may be aligned
by the driver in order not to exceed the limits and provide better descriptor
space utilization) will be used by the driver and it is guaranteed that the
requested amount of data bytes is inlined into the WQE beside other inline
settings. This key also may update the ``txq_inline_max`` value (default
or specified explicitly in devargs) to reserve the space for inline data.
If the ``txq_inline_min`` key is not present, the value may be queried by the
driver from the NIC via DevX if this feature is available. If there is no DevX
enabled/supported, the value 18 (supposing L2 header including VLAN) is set
for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the
entire packet is inlined.
For the ConnectX-4 NIC, the driver does not allow specifying a value below 18
(minimal L2 header, including VLAN); an error will be raised.
For the ConnectX-4 Lx NIC, it is allowed to specify values below 18, but
it is not recommended and may prevent the NIC from sending packets over
some configurations.
Please note that this minimal data inlining disengages the eMPW feature (Enhanced
Multi-Packet Write), because the latter does not support partial packet inlining.
This is not very critical, because minimal data inlining is mostly required
by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support the eMPW feature.
523 - ``txq_inline_max`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE
Ethernet Segment for the ordinary SEND method. If a packet is larger than the
specified value, the packet data won't be copied by the driver at all, the data
buffer is addressed with a pointer. If the packet length is less or equal, all
packet data will be copied into the WQE. This may improve PCI bandwidth
utilization for short packets significantly, but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the number
of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
feature is engaged; if there are not enough Tx queues (which means not enough
CPU cores and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_max`` value is 290. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Also, the default value (290) may be decreased in run-time if a large transmit
queue size is requested and hardware does not support enough descriptor
amount; in this case a warning is emitted. If the ``txq_inline_max`` key is
specified and the requested inline settings can not be satisfied, then an error
will be raised.
547 - ``txq_inline_mpw`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE for
the Enhanced MPW method. If a packet is larger than the specified value, the
packet data won't be copied, and the data buffer is addressed with a pointer.
If the packet length is less or equal, all packet data will be copied into the
WQE. This may improve PCI bandwidth utilization for short packets significantly,
but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the number
of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
feature is engaged; if there are not enough Tx queues (which means not enough
CPU cores and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Because multiple packets may be included in the same WQE with the Enhanced Multi
Packet Write method and the overall WQE size is limited, it is not recommended to
specify large values for ``txq_inline_mpw``. Also, the default value (268)
may be decreased in run-time if a large transmit queue size is requested
and hardware does not support enough descriptor amount; in this case a warning
is emitted. If the ``txq_inline_mpw`` key is specified and the requested inline
settings can not be satisfied, then an error will be raised.
573 - ``txqs_max_vec`` parameter [int]
Enable vectorized Tx only when the number of TX queues is less than or
equal to this value. This parameter is deprecated and ignored, kept
for compatibility, to not prevent the driver from probing.
579 - ``txq_mpw_hdr_dseg_en`` parameter [int]
A nonzero value enables including two pointers in the first block of the TX
descriptor. The parameter is deprecated and ignored, kept for compatibility.
585 - ``txq_max_inline_len`` parameter [int]
Maximum size of packet to be inlined. This limits the size of packets to
be inlined. If the size of a packet is larger than the configured value, the
packet isn't inlined even though there's enough space remaining in the
descriptor. Instead, the packet is included with a pointer. This parameter
is deprecated and converted directly to ``txq_inline_mpw`` providing full
compatibility. Valid only if the eMPW feature is engaged.
594 - ``txq_mpw_en`` parameter [int]
A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
ConnectX-6, ConnectX-6 Dx and BlueField. eMPW allows the TX burst function to pack
up multiple packets in a single descriptor session in order to save PCI bandwidth
and improve performance at the cost of a slightly higher CPU usage. When
``txq_inline_mpw`` is set along with ``txq_mpw_en``, the TX burst function copies
entire packet data into the TX descriptor instead of including a pointer to the packet.
The Enhanced Multi-Packet Write feature is enabled by default if the NIC supports
it, and can be disabled by explicitly specifying the 0 value for the ``txq_mpw_en``
option. Also, if minimal data inlining is requested by a non-zero ``txq_inline_min``
option or reported by the NIC, the eMPW feature is disengaged.
608 - ``tx_db_nc`` parameter [int]
610 The rdma core library can map doorbell register in two ways, depending on the
611 environment variable "MLX5_SHUT_UP_BF":
613 - As regular cached memory (usually with write combining attribute), if the
614 variable is either missing or set to zero.
- As non-cached memory, if the variable is present and set to a non-"0" value.
The type of mapping may slightly affect the Tx performance; the optimal choice
strongly depends on the host architecture and should be deduced practically.
If ``tx_db_nc`` is set to zero, the doorbell is forced to be mapped to regular
memory (with write combining), and the PMD will perform the extra write memory
barrier after writing to the doorbell; it might increase the needed CPU clocks
per packet to send, but latency might be improved.
If ``tx_db_nc`` is set to one, the doorbell is forced to be mapped to
non-cached memory, and the PMD will not perform the extra write memory barrier
after writing to the doorbell; on some architectures it might improve the
packet transmission performance.
If ``tx_db_nc`` is set to two, the doorbell is forced to be mapped to regular
memory, and the PMD will use heuristics to decide whether a write memory barrier
should be performed. For bursts with a size multiple of the recommended one (64 pkts)
it is supposed the next burst is coming and there is no need to issue the extra
memory barrier (it is supposed to be issued in the next coming burst, at least after
descriptor writing). It might increase latency (on some hosts until the next
packets are transmitted) and should be used with care.
638 If ``tx_db_nc`` is omitted or set to zero, the preset (if any) environment
639 variable "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF",
640 the default ``tx_db_nc`` value is zero for ARM64 hosts and one for others.
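For instance, both control paths could be exercised as follows (the PCI
address is an example)::

   MLX5_SHUT_UP_BF=1 testpmd -w 0000:05:00.0 -- -i   # via environment variable
   testpmd -w 0000:05:00.0,tx_db_nc=1 -- -i          # via devargs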
642 - ``tx_vec_en`` parameter [int]
644 A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx
645 and BlueField NICs if the number of global Tx queues on the port is less than
646 ``txqs_max_vec``. The parameter is deprecated and ignored.
648 - ``rx_vec_en`` parameter [int]
A nonzero value enables Rx vector if the port is not configured in
multi-segment mode, otherwise this parameter is ignored.

Enabled by default.
655 - ``vf_nl_en`` parameter [int]
A nonzero value enables Netlink requests from the VF to add/remove MAC
addresses and/or enable/disable promiscuous/all multicast on the Netdevice.
Otherwise the relevant configuration must be run with Linux iproute2 tools.
This is a prerequisite to receive this kind of traffic.

Enabled by default, valid only on VF devices, ignored otherwise.
664 - ``l3_vxlan_en`` parameter [int]
A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
L3 VXLAN or VXLAN-GPE, the user has to configure the firmware and enable this
parameter. This is a prerequisite to receive this kind of traffic.

Disabled by default.
672 - ``dv_xmeta_en`` parameter [int]
674 A nonzero value enables extensive flow metadata support if device is
675 capable and driver supports it. This can enable extensive support of
676 ``MARK`` and ``META`` item of ``rte_flow``. The newly introduced
677 ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
679 There are some possible configurations, depending on parameter value:
681 - 0, this is default value, defines the legacy mode, the ``MARK`` and
682 ``META`` related actions and items operate only within NIC Tx and
683 NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
item is 32 bits wide and match is supported on egress only.
687 - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
688 related actions and items operate within all supported steering domains,
689 including FDB, ``MARK`` and ``META`` information may cross the domain
690 boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
691 depends on kernel and firmware configurations and might be 0, 16 or
692 32 bits. Within NIC Tx domain ``META`` data width is 32 bits for
compatibility, the actual width of data transferred to the FDB domain
depends on the kernel configuration and may vary. The actual supported
width can be retrieved at runtime by a series of rte_flow_validate()
trials.
698 - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
699 related actions and items operate within all supported steering domains,
700 including FDB, ``MARK`` and ``META`` information may cross the domain
701 boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
702 depends on kernel and firmware configurations and might be 0, 16 or
24 bits. The actual supported width can be retrieved at runtime by a
series of rte_flow_validate() trials.
+------+-----------+-----------+-------------+-------------+
| Mode | ``MARK``  | ``META``  | ``META`` Tx | FDB/Through |
+======+===========+===========+=============+=============+
| 0    | 24 bits   | 32 bits   | 32 bits     | no          |
+------+-----------+-----------+-------------+-------------+
| 1    | 24 bits   | vary 0-32 | 32 bits     | yes         |
+------+-----------+-----------+-------------+-------------+
| 2    | vary 0-24 | 32 bits   | 32 bits     | yes         |
+------+-----------+-----------+-------------+-------------+
716 If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
717 ignored and the device is configured to operate in legacy mode (0).
719 Disabled by default (set to 0).
The Direct Verbs/Rules engine (engaged with ``dv_flow_en`` = 1) supports all
of the extensive metadata features. The legacy Verbs engine supports FLAG and
MARK metadata actions over the NIC Rx steering domain only.
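For example, enabling DV flow with extensive metadata mode 1 via devargs could
look like this (the PCI address is an example)::

   testpmd -w 0000:05:00.0,dv_flow_en=1,dv_xmeta_en=1 -- -i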
725 - ``dv_flow_en`` parameter [int]
727 A nonzero value enables the DV flow steering assuming it is supported
728 by the driver (RDMA Core library version is rdma-core-24.0 or higher).
730 Enabled by default if supported.
732 - ``dv_esw_en`` parameter [int]
734 A nonzero value enables E-Switch using Direct Rules.
736 Enabled by default if supported.
738 - ``mr_ext_memseg_en`` parameter [int]
740 A nonzero value enables extending memseg when registering DMA memory. If
741 enabled, the number of entries in MR (Memory Region) lookup table on datapath
742 is minimized and it benefits performance. On the other hand, it worsens memory
743 utilization because registered memory is pinned by kernel driver. Even if a
page in the extended chunk is freed, that doesn't become reusable until the
entire memory is freed.

Enabled by default.
749 - ``representor`` parameter [list]
751 This parameter can be used to instantiate DPDK Ethernet devices from
752 existing port (or VF) representors configured on the device.
754 It is a standard parameter whose format is described in
755 :ref:`ethernet_device_standard_device_arguments`.
For instance, to probe port representors 0 through 2::

  representor=[0-2]
761 - ``max_dump_files_num`` parameter [int]
763 The maximum number of files per PMD entity that may be created for debug information.
764 The files will be created in /var/log directory or in current directory.
766 set to 128 by default.
768 - ``lro_timeout_usec`` parameter [int]
The maximum allowed duration of an LRO session, in microseconds.
The PMD will set the nearest value supported by HW, which is not bigger than
the input ``lro_timeout_usec`` value.
If this parameter is not specified, by default the PMD will set
the smallest value supported by HW.
776 .. _mlx5_firmware_config:
778 Firmware configuration
779 ~~~~~~~~~~~~~~~~~~~~~~
781 Firmware features can be configured as key/value pairs.
783 The command to set a value is::
785 mlxconfig -d <device> set <key>=<value>
787 The command to query a value is::
789 mlxconfig -d <device> query | grep <key>
The device name for the command ``mlxconfig`` can be either the PCI address,
or the mst device name found with::

  mst status
Some relevant firmware configurations are listed below.

- link type::

    LINK_TYPE_P1
    LINK_TYPE_P2
    value: 1=Infiniband 2=Ethernet 3=VPI(auto-sense)
- maximum number of SR-IOV virtual functions::

    NUM_OF_VFS=<num of vfs>
- enable DevX (required by Direct Rules and other features)::

    UCTX_EN=1
- aggressive CQE zipping::

    CQE_COMPRESSION=1
- L3 VXLAN and VXLAN-GPE destination UDP port::

    IP_OVER_VXLAN_EN=1
    IP_OVER_VXLAN_PORT=<udp dport>
825 - enable IP-in-IP tunnel flow matching::
827 FLEX_PARSER_PROFILE_ENABLE=0
829 - enable MPLS flow matching::
831 FLEX_PARSER_PROFILE_ENABLE=1
833 - enable ICMP/ICMP6 code/type fields matching::
835 FLEX_PARSER_PROFILE_ENABLE=2
837 - enable Geneve flow matching::
839 FLEX_PARSER_PROFILE_ENABLE=0
841 - enable GTP flow matching::
843 FLEX_PARSER_PROFILE_ENABLE=3
Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:
- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.
858 It allows slow and privileged operations (context initialization, hardware
859 resources allocations) to be managed by the kernel and fast operations to
860 never leave user space.
- **libmlx5**

  Low-level user space driver library for Mellanox
  ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
  by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**
  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:
  - mlx5_core: hardware driver managing Mellanox
    ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
    network devices.
  - mlx5_ib: InfiniBand device driver.
884 - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
886 - **Firmware update**
888 Mellanox OFED/EN releases include firmware updates for
889 ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.
891 Because each release provides new features, these updates must be applied to
892 match the kernel modules and libraries they come with.
.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Installation
------------
Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.
906 RDMA Core with Linux Kernel
907 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Minimal kernel version: v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
910 - Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request #227 from yishaih/tm")
911 (see `RDMA Core installation documentation`_)
912 - When building for i686 use:
914 - rdma-core version 18.0 or above built with 32bit support.
915 - Kernel version 4.14.41 or above.
- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja
923 .. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
924 .. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
926 If rdma-core libraries are built but not installed, DPDK makefile can link them,
927 thanks to these environment variables:
929 - ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
930 - ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
931 - ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``
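For example, assuming rdma-core was built in ``/path/to/rdma-core/build``, a
DPDK build against it could be driven as follows::

   export EXTRA_CFLAGS=-I/path/to/rdma-core/build/include
   export EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib
   export PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig
   make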
Mellanox OFED/EN
^^^^^^^^^^^^^^^^

- Mellanox OFED version: **4.5, 4.6** /
  Mellanox EN version: **4.5, 4.6**
- firmware version:

  - ConnectX-4: **12.21.1000** and above.
941 - ConnectX-4 Lx: **14.21.1000** and above.
942 - ConnectX-5: **16.21.1000** and above.
943 - ConnectX-5 Ex: **16.21.1000** and above.
944 - ConnectX-6: **20.99.5374** and above.
945 - ConnectX-6 Dx: **22.27.0090** and above.
946 - BlueField: **18.25.1010** and above.
948 While these libraries and kernel modules are available on OpenFabrics
949 Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
950 managers on most distributions, this PMD requires Ethernet extensions that
951 may not be supported at the moment (this is a work in progress).
`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__ and
`Mellanox EN
<http://www.mellanox.com/page/products_dyn?product_family=27&mtag=linux>`__
957 include the necessary support and should be used in the meantime. For DPDK,
958 only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
959 required from that distribution.
.. note::

   Several versions of Mellanox OFED/EN are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.
Supported NICs
--------------

The following Mellanox device families are supported by the same mlx5 driver:
ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-5 Ex, ConnectX-6,
ConnectX-6 Dx and BlueField.

Below are detailed device names:
982 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
983 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
984 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
985 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
986 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
987 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
988 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
989 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
990 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
991 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
992 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
993 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
994 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
995 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-CCAT (1x100G)
996 * Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
997 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
998 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
999 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
1000 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
1001 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
1002 * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
1003 * Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
1004 * Mellanox\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
1005 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
1006 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
1008 Quick Start Guide on OFED/EN
1009 ----------------------------
1011 1. Download latest Mellanox OFED/EN. For more info check the `prerequisites`_.
1014 2. Install the required libraries and kernel modules either by installing
1015 only the required set, or by installing the entire Mellanox OFED/EN::
1017 ./mlnxofedinstall --upstream-libs --dpdk
3. Verify the firmware is the correct one::

     ibv_devinfo
1023 4. Verify all ports links are set to Ethernet::
1025 mlxconfig -d <mst device> query | grep LINK_TYPE
1029 Link types may have to be configured to Ethernet::
1031 mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3
* LINK_TYPE_P1=<1|2|3>, 1=Infiniband 2=Ethernet 3=VPI(auto-sense)
1035 For hypervisors, verify SR-IOV is enabled on the NIC::
1037 mlxconfig -d <mst device> query | grep SRIOV_EN
1040 If needed, configure SR-IOV::
1042 mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
1043 mlxfwreset -d <mst device> reset
1045 5. Restart the driver::
     /etc/init.d/openibd restart

   or::

     service openibd restart
1053 If link type was changed, firmware must be reset as well::
1055 mlxfwreset -d <mst device> reset
For hypervisors, after reset write the sysfs number of virtual functions
needed for the PF.
1060 To dynamically instantiate a given number of virtual functions (VFs)::
1062 echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
1064 6. Compile DPDK and you are ready to go. See instructions on
1065 :ref:`Development Kit Build System <Development_Kit_Build_System>`
1067 Enable switchdev mode
1068 ---------------------
Switchdev mode is a mode in E-Switch that binds a representor to a VF.
A representor is a port in DPDK that is connected to a VF in such a way
that, assuming there are no offload flows, each packet that is sent from the VF
will be received by the corresponding representor, while each packet that is
sent to a representor will be received by the VF.
This is very useful in case of SRIOV mode, where the first packet that is sent
by the VF will be received by the DPDK application, which will decide if this
flow should be offloaded to the E-Switch. After offloading the flow, packets
sent by the VF that match the flow will not be received any more by
the DPDK application.
1081 1. Enable SRIOV mode::
1083 mlxconfig -d <mst device> set SRIOV_EN=true
1085 2. Configure the max number of VFs::
     mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>

3. Reset the FW::

     mlxfwreset -d <mst device> reset

4. Configure the actual number of VFs::

     echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs

5. Unbind the device (it can be rebound after enabling switchdev mode)::

     echo -n "<device pci address>" > /sys/bus/pci/drivers/mlx5_core/unbind

6. Enable switchdev mode::

     echo switchdev > /sys/class/net/<net device>/compat/devlink/mode
Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance::
1110 mlxconfig -d <mst device> s CQE_COMPRESSION=1
1112 To set it back to the default CQE Zipping mode use::
1114 mlxconfig -d <mst device> s CQE_COMPRESSION=0
1116 2. In case of virtualization:
1118 - Make sure that hypervisor kernel is 3.16 or newer.
1119 - Configure boot with ``iommu=pt``.
1120 - Use 1G huge pages.
1121 - Make sure to allocate a VM on huge pages.
1122 - Make sure to set CPU pinning.
1124 3. Use the CPU near local NUMA node to which the PCIe adapter is connected,
1125 for better performance. For VMs, verify that the right CPU
and NUMA node are pinned according to the above. Run::

     lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth degradation,
   it is recommended to locate both adapters on the same NUMA node.
   This is in order to forward packets from one to the other without
   the NUMA performance penalty.
1138 5. Disable pause frames::
1140 ethtool -A <netdev> rx off tx off
6. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

.. note::

   On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be
   done in the following way:
1152 To query the read request size use::
1154 setpci -s <NIC PCI address> 68.w
1156 If the output is different than 3XXX, set it by::
1158 setpci -s <NIC PCI address> 68.w=3XXX
1160 The XXX can be different on different systems. Make sure to configure
1161 according to the setpci output.
1163 7. To minimize overhead of searching Memory Regions:
- ``--socket-mem`` is recommended to pin memory by a predictable amount.
- Configure a per-lcore cache when creating Mempools for packet buffers
  (see the sketch after this list).
- Refrain from dynamically allocating/freeing memory in run-time.
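A minimal sketch of a Mempool created with a per-lcore cache (all sizes are
illustrative only):

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mbuf.h>

   static struct rte_mempool *
   create_pktmbuf_pool(void)
   {
       /* Most alloc/free operations are then served from the per-lcore
        * cache and rarely touch the shared pool. */
       return rte_pktmbuf_pool_create(
               "mbuf_pool",               /* name */
               8192,                      /* number of mbufs */
               256,                       /* per-lcore cache size */
               0,                         /* private data size */
               RTE_MBUF_DEFAULT_BUF_SIZE, /* data room size */
               rte_socket_id());          /* local NUMA node */
   }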
1169 .. _mlx5_offloads_support:
1171 Supported hardware offloads
1172 ---------------------------
.. table:: Minimal SW/HW versions for queue offloads

   ============== ===== ===== ========= ===== ========== ==========
   Offload        DPDK  Linux rdma-core OFED  firmware   hardware
   ============== ===== ===== ========= ===== ========== ==========
   common base    17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   checksums      17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   Rx timestamp   17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   TSO            17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   LRO            19.08 N/A   N/A       4.6-4 16.25.6406 ConnectX-5
   ============== ===== ===== ========= ===== ========== ==========
.. table:: Minimal SW/HW versions for rte_flow offloads

   +-----------------------+-----------------+-----------------+
   | Offload               | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | Count                 | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Drop                  | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Queue / RSS           | |               | | DPDK 18.11    |
   |                       | | N/A           | | OFED 4.5      |
   |                       | |               | | rdma-core 23  |
   |                       | |               | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
   | (VXLAN / NVGRE / RAW) | | OFED 4.7-1    | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.11    | | DPDK 19.11    |
   | GENEVE                | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 27  | | rdma-core 27  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite     | | DPDK 19.05    | | DPDK 19.02    |
   | | (set_ipv4_src /    | | OFED 4.7-1    | | OFED 4.7-1    |
   | | set_ipv4_dst /     | | rdma-core 24  | | rdma-core 24  |
   | | set_ipv6_src /     | | ConnectX-5    | | ConnectX-5    |
   | | set_ipv6_dst /     | |               | |               |
   | | set_tp_src /       | |               | |               |
   | | set_tp_dst /       | |               | |               |
   | | dec_ttl /          | |               | |               |
   | | set_ttl /          | |               | |               |
   | | set_mac_src /      | |               | |               |
   | | set_mac_dst)       | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite     | | DPDK 20.02    | | DPDK 20.02    |
   | | (set_dscp)         | | OFED 5.0      | | OFED 5.0      |
   | |                    | | rdma-core 24  | | rdma-core 24  |
   | |                    | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.7-1    | | OFED 4.7-1    |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Port ID               | | DPDK 19.05    | | N/A           |
   |                       | | OFED 4.7-1    | | N/A           |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | N/A           |
   +-----------------------+-----------------+-----------------+
   | | VLAN               | | DPDK 19.11    | | DPDK 19.11    |
   | | (of_pop_vlan /     | | OFED 4.7-1    | | OFED 4.7-1    |
   | | of_push_vlan /     | | ConnectX-5    | | ConnectX-5    |
   | | of_set_vlan_pcp /  | |               | |               |
   | | of_set_vlan_vid)   | |               | |               |
   +-----------------------+-----------------+-----------------+
   | Hairpin               | |               | | DPDK 19.11    |
   |                       | | N/A           | | OFED 4.7-3    |
   |                       | |               | | rdma-core 26  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Meta data             | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Metering              | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
Notes for testpmd
-----------------

Compared to librte_pmd_mlx4 that implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.
1276 Since ``testpmd`` defaults to IP RSS mode and there is currently no
1277 command-line parameter to enable additional protocols (UDP and TCP as well
1278 as IP), the following commands must be entered from its CLI to get the same
1279 behavior as librte_pmd_mlx4::
   > port stop all
   > port config all rss all
   > port start all
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
1291 #. Load the kernel modules::
1293 modprobe -a ib_uverbs mlx5_core mlx5_ib
   Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
   can be run::

      /etc/init.d/openibd restart
.. note::

   User space I/O kernel modules (uio and igb_uio) are not used and do
   not have to be loaded.
1305 #. Make sure Ethernet interfaces are in working order and linked to kernel
1306 verbs. Related sysfs entries should be present::
1308 ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
1317 #. Optionally, retrieve their PCI bus addresses for whitelisting::
      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'
1334 #. Request huge pages::
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
1338 #. Start testpmd with basic parameters::
      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output::
1345 EAL: PCI device 0000:05:00.0 on NUMA socket 0
1346 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1347 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
1348 PMD: librte_pmd_mlx5: 1 port(s) detected
1349 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
1350 EAL: PCI device 0000:05:00.1 on NUMA socket 0
1351 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1352 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
1353 PMD: librte_pmd_mlx5: 1 port(s) detected
1354 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
1355 EAL: PCI device 0000:06:00.0 on NUMA socket 0
1356 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1357 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
1358 PMD: librte_pmd_mlx5: 1 port(s) detected
1359 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
1360 EAL: PCI device 0000:06:00.1 on NUMA socket 0
1361 EAL: probe driver: 15b3:1013 librte_pmd_mlx5
1362 PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
1363 PMD: librte_pmd_mlx5: 1 port(s) detected
1364 PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
1365 Interactive-mode selected
1366 Configuring Port 0 (socket 0)
1367 PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
1368 PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
1369 Port 0: E4:1D:2D:E7:0C:FE
1370 Configuring Port 1 (socket 0)
1371 PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
1372 PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
1373 Port 1: E4:1D:2D:E7:0C:FF
1374 Configuring Port 2 (socket 0)
1375 PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
1376 PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
1377 Port 2: E4:1D:2D:E7:0C:FA
1378 Configuring Port 3 (socket 0)
1379 PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
1380 PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
1381 Port 3: E4:1D:2D:E7:0C:FB
1382 Checking link statuses...
1383 Port 0 Link Up - speed 40000 Mbps - full-duplex
1384 Port 1 Link Up - speed 40000 Mbps - full-duplex
1385 Port 2 Link Up - speed 10000 Mbps - full-duplex
1386 Port 3 Link Up - speed 10000 Mbps - full-duplex
How to dump flows
-----------------

This section demonstrates how to dump flows. Currently, it's possible to dump
all flows with the assistance of external tools.
#. There are two ways to get the flow raw file:
1398 - Using testpmd CLI:
1400 .. code-block:: console
1402 testpmd> flow dump <port> <output_file>
   - Calling the ``rte_flow_dev_dump()`` API:
1406 .. code-block:: console
1408 rte_flow_dev_dump(port, file, NULL);
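     A slightly fuller sketch of the same call in C (the output path is an
     example):

     .. code-block:: c

        #include <stdio.h>
        #include <rte_flow.h>

        static int
        dump_port_flows(uint16_t port_id)
        {
            FILE *f = fopen("/tmp/mlx5_flows.bin", "w"); /* example path */
            int ret;

            if (f == NULL)
                return -1;
            ret = rte_flow_dev_dump(port_id, f, NULL);
            fclose(f);
            return ret;
        }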
1410 #. Dump human-readable flows from raw file:
1412 Get flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump
1414 .. code-block:: console
1416 mlx_steering_dump.py -f <output_file>