.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2015 6WIND S.A.
   Copyright 2015 Mellanox Technologies, Ltd

.. include:: <isonum.txt>

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_net_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
ConnectX-5**, **Mellanox ConnectX-6**, **Mellanox ConnectX-6 Dx**, **Mellanox
ConnectX-6 Lx**, **Mellanox BlueField** and **Mellanox BlueField-2** families
of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.
Design
------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_net_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow handling virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces as those owned by the
DPDK application.

The PMD can use libibverbs and libmlx5 to access the device firmware
or directly the hardware components.
There are different levels of objects and bypassing abilities
to get the best performance:

- Verbs is a complete high-level generic API
- Direct Verbs is a device-specific API
- DevX allows access to firmware objects
- Direct Rules manages flow steering at the low-level hardware layer

Enabling librte_net_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------
- Multi arch support: x86_64, POWER8, ARMv8, i686.
- Multiple TX and RX queues.
- Support for scattered TX frames.
- Advanced support for scattered Rx frames with tunable buffer attributes.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- RSS using different combinations of fields: L3 only, L4 only or both,
  and source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Configurable RETA table.
- Link flow control (pause frame).
- Support for multiple MAC addresses.
- RX CRC stripping configuration.
- Promiscuous mode on PF and VF.
- Multicast promiscuous mode on PF and VF.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- Flow API, including :ref:`flow_isolated_mode`.
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
- Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
- Statistics query including Basic, Extended and per queue.
- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
- Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
- NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
  increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
- Support for multiple rte_flow groups.
- Per packet no-inline hint flag to disable packet data copying into Tx descriptors.
- Multiple-thread flow insertion.

Limitations
-----------
On Windows, the features are limited:

- Promiscuous mode is not supported
- The following rules are supported:

  - IPv4/UDP with CVLAN filtering
  - Unicast MAC filtering

- For secondary process:

  - Forked secondary process not supported.
  - External memory unregistered in EAL memseg list cannot be used for DMA
    unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
    the primary process and remapped to the same virtual address in the secondary
    process. If the external memory is registered by the primary process but has
    a different virtual address in the secondary process, unexpected errors may happen.
- When using Verbs flow engine (``dv_flow_en`` = 0), flow patterns without any
  specific VLAN will match VLAN packets as well:

  When a VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wildcard.
  Meaning, the flow rule::

        flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with vid=3, and the flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  will match any IPv4 packet (VLAN included).

- When using Verbs flow engine (``dv_flow_en`` = 0), multi-tagged (QinQ) match is not supported.

- When using DV flow engine (``dv_flow_en`` = 1), flow patterns with any VLAN specification will match only single-tagged packets unless the ETH item ``type`` field is 0x88A8 or the VLAN item ``has_more_vlan`` field is 1.
  The flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  will match any IPv4 packet.
  The flow rules::

        flow create 0 ingress pattern eth / vlan / end ...
        flow create 0 ingress pattern eth has_vlan is 1 / end ...
        flow create 0 ingress pattern eth type is 0x8100 / end ...

  will match single-tagged packets only, with any VLAN ID value.
  The flow rules::

        flow create 0 ingress pattern eth type is 0x88A8 / end ...
        flow create 0 ingress pattern eth / vlan has_more_vlan is 1 / end ...

  will match multi-tagged packets only, with any VLAN ID value.
- A flow pattern with 2 sequential VLAN items is not supported.

- VLAN pop offload command:

  - Flow rules having a VLAN pop offload command as one of their actions and
    lacking a match on VLAN as one of their items are not supported.
  - The command is not supported on egress traffic.

- VLAN push offload is not supported on ingress traffic.

- VLAN set PCP offload is not supported on existing headers.

- A multi-segment packet must have no more segments than reported by dev_infos_get()
  in the tx_desc_lim.nb_seg_max field. This value depends on the maximal supported Tx
  descriptor size and ``txq_inline_min`` settings and may range from 2 (worst case
  forced by maximal inline settings) to 58.

- Flows with a VXLAN Network Identifier equal (or that ends up being equal)
  to 0 are not supported.

- L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.

- Match on Geneve header supports the following fields only:

  - VNI
  - OAM
  - protocol type
  - options length

  Currently, the only supported options length value is 0.

- VF: flow rules created on VF devices can only match traffic targeted at the
  configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
- Match on GTP tunnel header item supports the following fields only:

  - v_pt_rsv_flags: E flag, S flag, PN flag
  - msg_type
  - teid

- No Tx metadata goes to the E-Switch steering domain for the Flow group 0.
  Flows within group 0 that use the set metadata action are rejected by hardware.

.. note::

   MAC addresses not already present in the bridge table of the associated
   kernel network device will be added and cleaned up by the PMD when closing
   the device. In case of ungraceful program termination, some entries may
   remain present and should be removed manually by other means.

- Buffer split offload is supported with regular Rx burst routine only,
  no MPRQ feature or vectorized code can be engaged.

- When Multi-Packet Rx queue is configured (``mprq_en``), an Rx packet can be
  externally attached to a user-provided mbuf with EXT_ATTACHED_MBUF set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all the
  Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
  the external buffers will be freed by the PMD and the application which still
  holds the external buffers may be corrupted.

- If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
  enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
  supported. Some Rx packets may not have PKT_RX_RSS_HASH.

- IPv6 multicast messages are not supported on VMs while promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on a VM, explicitly set the relevant
  MAC address using the rte_eth_dev_mac_addr_add() API, as in the sketch below.
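  A minimal sketch (the helper name and the chosen group, ff02::1 "all nodes",
  are illustrative) registering the derived L2 multicast address::

        #include <rte_ethdev.h>

        /* Register the L2 address derived from the IPv6 multicast group
         * ff02::1 (33:33:00:00:00:01) so a VM can receive that traffic.
         */
        static int
        enable_ipv6_all_nodes_mcast(uint16_t port_id)
        {
                struct rte_ether_addr mcast = {
                        .addr_bytes = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 },
                };

                return rte_eth_dev_mac_addr_add(port_id, &mcast, 0);
        }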
- To support a mixed traffic pattern (some buffers from the local host memory, some
  buffers from other devices) with high bandwidth, an mbuf flag is used.

  An application hints the PMD whether or not it should try to inline the
  given mbuf data buffer. The PMD should make the best effort to act upon this request.

  The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
  registered by the application with rte_mbuf_dynflag_register(). This flag is
  purely driver-specific and declared in the PMD specific header ``rte_pmd_mlx5.h``,
  which is intended to be used by the application.

  To query the supported specific flags in runtime,
  the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
  currently (over present hardware and configuration) supported specific flags.
  The "not inline hint" feature operates as follows:

  - probe the devices, ports are created
  - query the port capabilities
  - if a port supporting the feature is found
  - register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
  - application starts the ports
  - on ``dev_start()`` PMD checks whether the feature flag is registered and
    enables the feature support in datapath
  - application might set the registered flag bit in the ``ol_flags`` field
    of the mbuf being sent and the PMD will handle it appropriately.
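  A minimal sketch of the registration and per-mbuf marking (the helper names
  are illustrative; registration must happen before the ports are started)::

        #include <rte_mbuf.h>
        #include <rte_mbuf_dyn.h>
        #include <rte_pmd_mlx5.h>

        static uint64_t no_inline_mask; /* stays 0 if registration fails */

        /* Register the mlx5 "no inline" hint flag (call before dev_start()). */
        static void
        register_no_inline_hint(void)
        {
                const struct rte_mbuf_dynflag desc = {
                        .name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
                };
                int bit = rte_mbuf_dynflag_register(&desc);

                if (bit >= 0)
                        no_inline_mask = UINT64_C(1) << bit;
        }

        /* Hint the PMD not to inline the data of this mbuf. */
        static void
        hint_no_inline(struct rte_mbuf *m)
        {
                m->ol_flags |= no_inline_mask;
        }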
- The number of descriptors in a Tx queue may be limited by data inline settings.
  Inline data require more descriptor building blocks and the overall block
  amount may exceed the hardware supported limits. The application should
  reduce the requested Tx size or adjust the data inline settings with the
  ``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.

- To provide packet send scheduling on mbuf timestamps, the ``tx_pp``
  parameter should be specified.
  When the PMD sees RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME set on a packet
  being sent, it tries to synchronize the time the packet appears on
  the wire with the specified packet timestamp. If the specified timestamp
  is in the past, it is ignored; if it is in the distant future, it is
  capped to some reasonable value (in the range of seconds).
  These specific cases ("too late" and "distant future") can be optionally
  reported via device xstats to assist applications in detecting
  time-related problems.

  The timestamp upper "too-distant-future" limit
  at the moment of invoking the Tx burst routine
  can be estimated as the ``tx_pp`` option (in nanoseconds) multiplied by 2^23.
  Please note, for the testpmd txonly mode,
  the limit is deduced from the expression::

        (n_tx_descriptors / burst_size + 1) * inter_burst_gap

  No packet reordering according to timestamps is performed, neither within
  a packet burst, nor between packets; it is entirely the application's
  responsibility to generate packets and their timestamps
  in the desired order. A timestamp can be put only in the first packet
  of a burst, thereby scheduling the entire burst.
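  A minimal sketch of setting a scheduling timestamp on the first packet of a
  burst (the helper names are illustrative; the dynamic field and flag are
  registered when ``tx_pp`` is enabled)::

        #include <rte_mbuf.h>
        #include <rte_mbuf_dyn.h>

        static int ts_offset;    /* offset of the timestamp dynamic field */
        static uint64_t ts_flag; /* Tx scheduling dynamic flag mask */

        static int
        init_tx_timestamping(void)
        {
                int off = rte_mbuf_dynfield_lookup(
                        RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
                int bit = rte_mbuf_dynflag_lookup(
                        RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);

                if (off < 0 || bit < 0)
                        return -1; /* scheduling is not available */
                ts_offset = off;
                ts_flag = UINT64_C(1) << bit;
                return 0;
        }

        /* Request the packet to appear on the wire at time "when". */
        static void
        schedule_packet(struct rte_mbuf *m, uint64_t when)
        {
                *RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = when;
                m->ol_flags |= ts_flag;
        }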
- E-Switch decapsulation Flow:

  - can be applied to PF port only.
  - must specify VF port action (packet redirection from PF to VF).
  - optionally may specify tunnel inner source and destination MAC addresses.

- E-Switch encapsulation Flow:

  - can be applied to VF ports only.
  - must specify PF port action (packet redirection from VF to PF).

- Raw encapsulation:

  - The input buffer, used as outer header, is not validated.

- Raw decapsulation:

  - The decapsulation is always done up to the outermost tunnel detected by the HW.
  - The input buffer, providing the removal size, is not validated.
  - The buffer size must match the length of the headers to be removed.

- ICMP (code/type/identifier/sequence number) / ICMP6 (code/type) matching, IP-in-IP and MPLS flow matching are all
  mutually exclusive features which cannot be supported together
  (see :ref:`mlx5_firmware_config`).

- LRO:

  - Requires DevX and DV flow to be enabled.
  - KEEP_CRC offload cannot be supported with LRO.
  - The first mbuf length, without head-room, must be big enough to include the
    TCP header (122B).
  - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
    it with size limited to max LRO size, not to max RX packet length.
  - LRO can be used with outer header of TCP packets of the standard format::

        eth (with or without vlan) / ipv4 or ipv6 / tcp / payload

    Other TCP packets (e.g. with MPLS label) received on an Rx queue with LRO enabled will be received with bad checksum.
  - LRO packet aggregation is performed by HW only for packet sizes larger than
    ``lro_min_mss_size``. This value is reported on device start, when debug
    mode is enabled.

- CRC:

  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
    for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
    The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.

- Sample flow:

  - Supports ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action only within NIC Rx and E-Switch steering domains.
  - The E-Switch Sample flow must have the eswitch_manager VPORT destination (PF or ECPF) and no additional actions.
  - For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` is typically used as the first action in the E-Switch egress flow if combined with header modify or encapsulation actions.

- IPv6 header item 'proto' field, indicating the next header protocol, should
  not be set as extension header.
  In case the next header is an extension header, it should not be specified in
  the IPv6 header item 'proto' field.
  The last extension header item 'next header' field can specify the following
  header protocol type.

- Hairpin:

  - Hairpin between two ports supports only manual binding and explicit Tx flow mode.
    For single-port hairpin, all the combinations of auto/manual binding and
    explicit/implicit Tx flow mode are supported.
  - Hairpin in switchdev SR-IOV mode is not supported yet.
Statistics
----------

MLX5 supports various methods to report statistics:

Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are counted by software only and reflect the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion in the bus or on the NIC can be queried via the rx_discards_phy xstats counter.

Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As Mellanox NICs are using the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port, and are therefore not valid for VF.

Finally, per-flow statistics can be queried using ``rte_flow_query`` when attaching a count action to a specific flow. The flow counter counts the number of packets received successfully by the port that match the specific flow, as in the sketch below.
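A minimal sketch (the helper name is illustrative) querying the counters of a
flow that was created with a ``RTE_FLOW_ACTION_TYPE_COUNT`` action::

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_flow.h>

    static int
    query_flow_count(uint16_t port_id, struct rte_flow *flow)
    {
            const struct rte_flow_action count_action = {
                    .type = RTE_FLOW_ACTION_TYPE_COUNT,
            };
            struct rte_flow_query_count count = { .reset = 0 };
            struct rte_flow_error error;

            if (rte_flow_query(port_id, flow, &count_action, &count, &error) != 0)
                    return -1;
            printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n", count.hits, count.bytes);
            return 0;
    }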
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

The ibverbs libraries can be linked with this PMD in a number of ways,
configured by the ``ibverbs_link`` build option:

- ``shared`` (default): the PMD depends on some .so files.

- ``dlopen``: Split the dependencies glue in a separate library
  loaded when needed by dlopen.
  It makes dependencies on libibverbs and libmlx5 optional,
  and has no performance impact.

- ``static``: Embed static flavor of the dependencies libibverbs and libmlx5
  in the PMD shared library or the executable static binary.
Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.

- ``MLX5_SHUT_UP_BF``

  Configures HW Tx doorbell register as IO-mapped.

  By default, the HW Tx doorbell is configured as a write-combining register.
  The register would be flushed to HW usually when the write-combining buffer
  becomes full, but it depends on CPU design.

  Except for vectorized Tx burst routines, a write memory barrier is enforced
  after updating the register so that the update can be immediately visible to
  HW.

  When vectorized Tx burst is called, the barrier is set only if the burst size
  is not aligned to MLX5_VPMD_TX_MAX_BURST. However, setting this environment
  variable will bring better latency even though the maximum throughput can
  be slightly decreased.
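  For example, a possible way to force IO-mapped doorbells for a testpmd run
  (the PCI address is illustrative)::

        MLX5_SHUT_UP_BF=1 dpdk-testpmd -a 0000:03:00.0 -- -i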
Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_net_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packet
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

In order to run as a non-root user,
some capabilities must be granted to the application::

   setcap cap_sys_admin,cap_net_admin,cap_net_raw,cap_ipc_lock+ep <dpdk-app>

Below are the reasons each capability is needed:

``cap_sys_admin``
   When using physical addresses (PA mode), with Linux >= 4.0,
   for access to ``/proc/self/pagemap``.

``cap_net_admin``
   For device configuration.

``cap_net_raw``
   For raw ethernet queue allocation through kernel driver.

``cap_ipc_lock``
   For DMA memory pinning.
Driver options
~~~~~~~~~~~~~~

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  saves PCI bandwidth and improves performance. Enabled by default.
  Different compression formats are supported in order to achieve the best
  performance for different traffic patterns. Hash RSS format is the default.

  Specifying 2 as a ``rxq_cqe_comp_en`` value selects Flow Tag format for
  better compression rate in case of RTE Flow Mark traffic.
  Specifying 3 as a ``rxq_cqe_comp_en`` value selects Checksum format.
  Specifying 4 as a ``rxq_cqe_comp_en`` value selects L3/L4 Header format for
  better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
  - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
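  For example, a possible way to select the L3/L4 Header format from the
  application command line (the PCI address is illustrative)::

        dpdk-testpmd -a 0000:03:00.0,rxq_cqe_comp_en=4 -- -i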
- ``rxq_pkt_pad_en`` parameter [int]

  A nonzero value enables padding Rx packets to the size of a cacheline on PCI
  transactions. This feature would waste PCI bandwidth but could improve
  performance by avoiding partial cacheline writes which may cause costly
  read-modify-copy in memory transactions on some architectures. Disabled by
  default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
  - POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
- ``mprq_en`` parameter [int]

  A nonzero value enables configuring Multi-Packet Rx queues. An Rx queue is
  configured as Multi-Packet RQ if the total number of Rx queues is
  ``rxqs_min_mprq`` or more. Disabled by default.

  Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe bandwidth
  by posting a single large buffer for multiple packets. Instead of posting one
  buffer per packet, one large buffer is posted in order to receive multiple
  packets on the buffer. An MPRQ buffer consists of multiple fixed-size strides
  and each stride receives one packet. MPRQ can improve throughput for
  small-packet traffic (see the example below).

  When MPRQ is enabled, max_rx_pkt_len can be larger than the size of the
  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. The PMD will
  configure a stride size large enough to accommodate max_rx_pkt_len as long as the
  device allows. Note that this can waste system memory compared to enabling Rx
  scatter and multi-segment packet.
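  A possible sketch enabling MPRQ from the application command line (the PCI
  address and queue counts are illustrative)::

        dpdk-testpmd -a 0000:03:00.0,mprq_en=1,rxqs_min_mprq=4 -- -i --rxq=4 --txq=4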
- ``mprq_log_stride_num`` parameter [int]

  Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
  strides can reduce PCIe traffic further. If the configured value is not in the
  range of device capability, the default value will be set with a warning
  message. The default value is 4 which is 16 strides per buffer, valid only
  if ``mprq_en`` is set.

  The size of Rx queue should be bigger than the number of strides.

- ``mprq_log_stride_size`` parameter [int]

  Log 2 of the size of a stride for Multi-Packet Rx queue. Configuring a smaller
  stride size can save some memory and reduce the probability of a depletion of all
  available strides due to unreleased packets by an application. If the configured
  value is not in the range of device capability, the default value will be set
  with a warning message. The default value is 11 which is 2048 bytes per
  stride, valid only if ``mprq_en`` is set. With ``mprq_log_stride_size`` set
  it is possible for a packet to span across multiple strides. This mode allows
  support of jumbo frames (9K) with MPRQ. The memcopy of some packets (or part
  of a packet if Rx scatter is configured) may be required in case there is no
  space left for a head room at the end of a stride, which incurs some
  overhead.

- ``mprq_max_memcpy_len`` parameter [int]

  The maximum length of packet to memcpy in case of Multi-Packet Rx queue. An Rx
  packet is mem-copied to a user-provided mbuf if the size of the Rx packet is less
  than or equal to this parameter. Otherwise, the PMD will attach the Rx packet to
  the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
  A mempool for external buffers will be allocated and managed by the PMD. If an Rx
  packet is externally attached, the ol_flags field of the mbuf will have
  EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
  checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
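  A minimal sketch (the helper name and burst size are illustrative) draining
  and freeing all Rx mbufs before device close, as required when external
  buffers may be attached::

        #include <rte_ethdev.h>
        #include <rte_mbuf.h>

        static void
        drain_rx_queue(uint16_t port_id, uint16_t queue_id)
        {
                struct rte_mbuf *pkts[32];
                uint16_t nb, i;

                while ((nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32)) > 0) {
                        for (i = 0; i < nb; i++) {
                                /* RTE_MBUF_HAS_EXTBUF() tells whether the data
                                 * sits in a PMD-managed external buffer; the
                                 * flag must stay set until the free below.
                                 */
                                rte_pktmbuf_free(pkts[i]);
                        }
                }
        }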
- ``rxqs_min_mprq`` parameter [int]

  Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
  greater or equal to this value. The default value is 12, valid only if
  ``mprq_en`` is set.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. This parameter is
  deprecated and converted to the new parameter ``txq_inline_max`` providing
  partial compatibility.
- ``txqs_min_inline`` parameter [int]

  Enable inline data send only when the number of TX queues is greater or equal
  to this value.

  This option should be used in combination with ``txq_inline_max`` and
  ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.

  If this option is not specified the default value 16 is used for BlueField
  and 8 for other platforms.

  Data inlining consumes CPU cycles, so this option is intended to
  enable inline data automatically when there are enough Tx queues, which means there are
  enough CPU cores while PCI bandwidth is getting more critical and the CPU
  is not supposed to be the bottleneck anymore.

  Copying data into the WQE improves latency and can improve PPS performance
  when PCI back pressure is detected, and may be useful for scenarios involving
  heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it may lower performance when back
  pressure is not expected.

  If inline data are enabled, it may affect the maximal size of the Tx queue in
  descriptors, because the inline data increase the descriptor size and the
  queue size limits supported by the hardware may be exceeded.
- ``txq_inline_min`` parameter [int]

  Minimal amount of data to be inlined into the WQE during Tx operations. NICs
  may require this minimal data amount to operate correctly. The exact value
  may depend on the NIC operation mode, requested offloads, etc. It is strongly
  recommended to omit this parameter and use the default values. Anyway,
  applications using this parameter should take into consideration that
  specifying an inconsistent value may prevent the NIC from sending packets.

  If the ``txq_inline_min`` key is present, the specified value (which may be aligned
  by the driver in order not to exceed the limits and provide better descriptor
  space utilization) will be used by the driver and it is guaranteed that the
  requested amount of data bytes is inlined into the WQE beside other inline
  settings. This key also may update the ``txq_inline_max`` value (default
  or specified explicitly in devargs) to reserve the space for inline data.

  If the ``txq_inline_min`` key is not present, the value may be queried by the
  driver from the NIC via DevX if this feature is available. If there is no DevX
  enabled/supported, the value 18 (supposing L2 header including VLAN) is set
  for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
  and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the
  entire packet is inlined.

  For ConnectX-4 NICs, the driver does not allow specifying a value below 18
  (minimal L2 header, including VLAN), an error will be raised.

  For ConnectX-4 Lx NICs, it is allowed to specify values below 18, but
  it is not recommended and may prevent the NIC from sending packets over
  some configurations.

  Please note, this minimal data inlining disengages the eMPW feature (Enhanced
  Multi-Packet Write), because the latter does not support partial packet inlining.
  This is not very critical, since minimal data inlining is mostly required
  by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support the eMPW feature.
- ``txq_inline_max`` parameter [int]

  Specifies the maximal packet length to be completely inlined into the WQE
  Ethernet Segment for the ordinary SEND method. If a packet is larger than the
  specified value, the packet data won't be copied by the driver at all; the data buffer
  is addressed with a pointer. If the packet length is less or equal, all packet
  data will be copied into the WQE. This may improve PCI bandwidth utilization for
  short packets significantly but requires extra CPU cycles.

  The data inline feature is controlled by the number of Tx queues: if the number of Tx
  queues is larger than the ``txqs_min_inline`` key parameter, the inline feature
  is engaged; if there are not enough Tx queues (which means not enough CPU cores
  and CPU resources are scarce), data inline is not performed by the driver.
  Assigning ``txqs_min_inline`` with zero always enables the data inline.

  The default ``txq_inline_max`` value is 290. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps; the adjustment is reflected in the debug log.
  Also, the default value (290) may be decreased in run-time if a large transmit
  queue size is requested and the hardware does not support enough descriptors;
  in this case a warning is emitted. If the ``txq_inline_max`` key is
  specified and the requested inline settings can not be satisfied, then an error
  will be raised.
- ``txq_inline_mpw`` parameter [int]

  Specifies the maximal packet length to be completely inlined into the WQE for
  the Enhanced MPW method. If a packet is larger than the specified value, the packet data
  won't be copied, and the data buffer is addressed with a pointer. If the packet length
  is less or equal, all packet data will be copied into the WQE. This may improve PCI
  bandwidth utilization for short packets significantly but requires extra
  CPU cycles.

  The data inline feature is controlled by the number of TX queues: if the number of Tx
  queues is larger than the ``txqs_min_inline`` key parameter, the inline feature
  is engaged; if there are not enough Tx queues (which means not enough CPU cores
  and CPU resources are scarce), data inline is not performed by the driver.
  Assigning ``txqs_min_inline`` with zero always enables the data inline.

  The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps; the adjustment is reflected in the debug log.
  Since multiple packets may be included in the same WQE with the Enhanced Multi
  Packet Write method and the overall WQE size is limited, it is not recommended to
  specify large values for ``txq_inline_mpw``. Also, the default value (268)
  may be decreased in run-time if a large transmit queue size is requested
  and the hardware does not support enough descriptors; in this case a warning
  is emitted. If the ``txq_inline_mpw`` key is specified and the requested inline
  settings can not be satisfied, then an error will be raised.
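  As an illustration, a combined inline setup passed as devargs (the PCI
  address and values are examples, not recommendations)::

        dpdk-testpmd -a 0000:03:00.0,txqs_min_inline=0,txq_inline_max=128,txq_inline_mpw=64 -- -i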
- ``txqs_max_vec`` parameter [int]

  Enable vectorized Tx only when the number of TX queues is less than or
  equal to this value. This parameter is deprecated and ignored, kept
  for compatibility reasons to not prevent the driver from probing.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of TX
  descriptor. The parameter is deprecated and ignored, kept for compatibility
  reasons.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of packet to be inlined. This limits the size of packet to
  be inlined. If the size of a packet is larger than the configured value, the
  packet isn't inlined even though there's enough space remaining in the
  descriptor. Instead, the packet is referenced with a pointer. This parameter
  is deprecated and converted directly to ``txq_inline_mpw`` providing full
  compatibility. Valid only if the eMPW feature is engaged.
- ``txq_mpw_en`` parameter [int]

  A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
  ConnectX-6, ConnectX-6 Dx, ConnectX-6 Lx, BlueField and BlueField-2.
  eMPW allows the Tx burst function to pack up multiple packets
  in a single descriptor session in order to save PCI bandwidth
  and improve performance at the cost of a slightly higher CPU usage.
  When ``txq_inline_mpw`` is set along with ``txq_mpw_en``,
  the Tx burst function copies entire packet data on to the Tx descriptor
  instead of including a pointer to the packet.

  The Enhanced Multi-Packet Write feature is enabled by default if the NIC supports
  it and can be disabled by explicitly specifying 0 for the ``txq_mpw_en`` option.
  Also, if minimal data inlining is requested by the non-zero ``txq_inline_min``
  option or reported by the NIC, the eMPW feature is disengaged.
- ``tx_db_nc`` parameter [int]

  The rdma core library can map the doorbell register in two ways, depending on the
  environment variable "MLX5_SHUT_UP_BF":

  - As regular cached memory (usually with write combining attribute), if the
    variable is either missing or set to zero.
  - As non-cached memory, if the variable is present and set to a non-"0" value.

  The type of mapping may slightly affect the Tx performance, and the optimal choice
  strongly depends on the host architecture and should be deduced practically.

  If ``tx_db_nc`` is set to zero, the doorbell is forced to be mapped to regular
  memory (with write combining), the PMD will perform the extra write memory barrier
  after writing to the doorbell, which might increase the needed CPU clocks per packet
  to send, but latency might be improved.

  If ``tx_db_nc`` is set to one, the doorbell is forced to be mapped to
  non-cached memory, the PMD will not perform the extra write memory barrier
  after writing to the doorbell, and on some architectures it might improve the
  performance.

  If ``tx_db_nc`` is set to two, the doorbell is forced to be mapped to regular
  memory, and the PMD will use heuristics to decide whether a write memory barrier
  should be performed. For bursts with a size multiple of the recommended one (64 pkts)
  it is supposed the next burst is coming and there is no need to issue the extra memory
  barrier (it is supposed to be issued in the next coming burst, at least after
  descriptor writing). It might increase latency (on some hosts, until the next
  packets are transmitted) and should be used with care.

  If ``tx_db_nc`` is omitted or set to zero, the preset (if any) environment
  variable "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF",
  the default ``tx_db_nc`` value is zero for ARM64 hosts and one for others.
- ``tx_pp`` parameter [int]

  If a nonzero value is specified, the driver creates all the necessary internal
  objects to provide accurate packet send scheduling on mbuf timestamps.
  A positive value specifies the scheduling granularity in nanoseconds;
  the packet send will be accurate up to the specified granularity. The allowed range is
  from 500 to 1 million nanoseconds. A negative value specifies the modulus
  of the granularity and engages a special test mode to check the scheduling rate.
  By default (if ``tx_pp`` is not specified), send scheduling on timestamps
  is disabled.

- ``tx_skew`` parameter [int]

  The parameter adjusts the send packet scheduling on timestamps and represents
  the average delay between the beginning of the transmit descriptor processing
  by the hardware and the appearance of the actual packet data on the wire. The value
  should be provided in nanoseconds and is valid only if the ``tx_pp`` parameter is
  specified. The default value is zero.
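  For instance, a possible way to enable send scheduling with 500 ns
  granularity and 1 us wire-delay compensation (values are illustrative)::

        dpdk-testpmd -a 0000:03:00.0,tx_pp=500,tx_skew=1000 -- -i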
- ``tx_vec_en`` parameter [int]

  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx,
  ConnectX-6 Lx, BlueField and BlueField-2 NICs
  if the number of global Tx queues on the port is less than ``txqs_max_vec``.
  The parameter is deprecated and ignored.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables Rx vector if the port is not configured in
  multi-segment mode; otherwise this parameter is ignored.

  Enabled by default.

- ``vf_nl_en`` parameter [int]

  A nonzero value enables Netlink requests from the VF to add/remove MAC
  addresses or/and enable/disable promiscuous/all multicast on the Netdevice.
  Otherwise the relevant configuration must be run with Linux iproute2 tools.
  This is a prerequisite to receive this kind of traffic.

  Enabled by default, valid only on VF devices and ignored otherwise.
- ``l3_vxlan_en`` parameter [int]

  A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
  L3 VXLAN or VXLAN-GPE, users have to configure the firmware and enable this
  parameter. This is a prerequisite to receive this kind of traffic.

  Disabled by default.
- ``dv_xmeta_en`` parameter [int]

  A nonzero value enables extensive flow metadata support if the device is
  capable and the driver supports it. This can enable extensive support of
  the ``MARK`` and ``META`` items of ``rte_flow``. The newly introduced
  ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.

  There are some possible configurations, depending on the parameter value:

  - 0, this is the default value, it defines the legacy mode, the ``MARK`` and
    ``META`` related actions and items operate only within NIC Tx and
    NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
    the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
    item is 32 bits wide and match is supported on egress only.

  - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
    related actions and items operate within all supported steering domains,
    including FDB, ``MARK`` and ``META`` information may cross the domain
    boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
    depends on the kernel and firmware configurations and might be 0, 16 or
    32 bits. Within the NIC Tx domain the ``META`` data width is 32 bits for
    compatibility, the actual width of data transferred to the FDB domain
    depends on the kernel configuration and may vary. The actual supported
    width can be retrieved in runtime by a series of rte_flow_validate()
    trials.

  - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
    related actions and items operate within all supported steering domains,
    including FDB, ``MARK`` and ``META`` information may cross the domain
    boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
    depends on the kernel and firmware configurations and might be 0, 16 or
    24 bits. The actual supported width can be retrieved in runtime by a
    series of rte_flow_validate() trials.

  - 3, this engages tunnel offload mode. In E-Switch configuration, that
    mode implicitly activates ``dv_xmeta_en=1``.

  +------+-----------+-----------+-------------+-------------+
  | Mode | ``MARK``  | ``META``  | ``META`` Tx | FDB/Through |
  +======+===========+===========+=============+=============+
  | 0    | 24 bits   | 32 bits   | 32 bits     | no          |
  +------+-----------+-----------+-------------+-------------+
  | 1    | 24 bits   | vary 0-32 | 32 bits     | yes         |
  +------+-----------+-----------+-------------+-------------+
  | 2    | vary 0-32 | 32 bits   | 32 bits     | yes         |
  +------+-----------+-----------+-------------+-------------+

  If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
  ignored and the device is configured to operate in legacy mode (0).

  Disabled by default (set to 0).

  The Direct Verbs/Rules (engaged with ``dv_flow_en`` = 1) supports all
  of the extensive metadata features. The legacy Verbs supports FLAG and
  MARK metadata actions over the NIC Rx steering domain only.
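  For example, a testpmd flow rule that sets a 24-bit ``MARK`` on matching
  packets (the mark value and queue index are illustrative)::

        flow create 0 ingress pattern eth / end actions mark id 42 / queue index 0 / end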
- ``dv_flow_en`` parameter [int]

  A nonzero value enables the DV flow steering assuming it is supported
  by the driver (RDMA Core library version is rdma-core-24.0 or higher).

  Enabled by default if supported.

- ``dv_esw_en`` parameter [int]

  A nonzero value enables E-Switch using Direct Rules.

  Enabled by default if supported.

- ``lacp_by_user`` parameter [int]

  A nonzero value enables the control of LACP traffic by the user application.
  When a bond exists in the driver, by default it should be managed by the
  kernel and therefore LACP traffic should be steered to the kernel.
  If this devarg is set to 1, it will allow the user to manage the bond
  and not steer LACP traffic to the kernel.

  Disabled by default (set to 0).
- ``mr_ext_memseg_en`` parameter [int]

  A nonzero value enables extending memseg when registering DMA memory. If
  enabled, the number of entries in the MR (Memory Region) lookup table on the
  datapath is minimized and it benefits performance. On the other hand, it worsens
  memory utilization because the registered memory is pinned by the kernel driver.
  Even if a page in the extended chunk is freed, that doesn't become reusable until
  the entire memory is freed.

  Enabled by default.
- ``representor`` parameter [list]

  This parameter can be used to instantiate DPDK Ethernet devices from
  existing port (or VF) representors configured on the device.

  It is a standard parameter whose format is described in
  :ref:`ethernet_device_standard_device_arguments`.

  For instance, to probe port representors 0 through 2::

    representor=[0-2]

- ``max_dump_files_num`` parameter [int]

  The maximum number of files per PMD entity that may be created for debug information.
  The files will be created in the /var/log directory or in the current directory.

  Set to 128 by default.
- ``lro_timeout_usec`` parameter [int]

  The maximum allowed duration of an LRO session, in micro-seconds.
  The PMD will set the nearest value supported by the HW, which is not bigger than
  the input ``lro_timeout_usec`` value.
  If this parameter is not specified, by default the PMD will set
  the smallest value supported by the HW.

- ``hp_buf_log_sz`` parameter [int]

  The total data buffer size of a hairpin queue (logarithmic form), in bytes.
  The PMD will set the data buffer size to 2 ** ``hp_buf_log_sz``, both for RX & TX.
  The capacity of the value is specified by the firmware and the initialization
  will fail if it is out of scope.
  The range of the value is from 11 to 19 right now, and the supported frame
  size of a single packet for hairpin is from 512B to 128KB. It might change if
  a different firmware release is being used. Using a small value could
  reduce memory consumption but will not work with a large frame. If the value is
  too large, the memory consumption will be high and some potential performance
  degradation will be introduced.
  By default, the PMD will set this value to 16, which means that 9KB jumbo
  frames will be supported.
- ``reclaim_mem_mode`` parameter [int]

  Caching some resources on flow destroy makes flow recreation more efficient,
  while some systems may require that all the resources can be reclaimed after
  the flows are destroyed.
  The parameter ``reclaim_mem_mode`` provides the option for the user to configure
  whether the resource cache is needed or not.

  There are three options to choose from:

  - 0. It means the flow resources will be cached as usual. The resources will
    be cached, helpful with flow insertion rate.

  - 1. It will only enable the DPDK PMD level resources reclaim.

  - 2. Both DPDK PMD level and rdma-core low level will be configured as
    reclaim mode.

  By default, the PMD will set this value to 0.

- ``sys_mem_en`` parameter [int]

  A non-zero value enables the PMD memory management allocating memory
  from the system by default, without the explicit rte memory flag.

  By default, the PMD will set this value to 0.
- ``decap_en`` parameter [int]

  Some devices do not support FCS (frame checksum) scattering for
  tunnel-decapsulated packets.
  If set to 0, this option forces the FCS feature and rejects tunnel
  decapsulation in the flow engine for such devices.

  By default, the PMD will set this value to 1.
.. _mlx5_firmware_config:

Firmware configuration
~~~~~~~~~~~~~~~~~~~~~~

Firmware features can be configured as key/value pairs.

The command to set a value is::

  mlxconfig -d <device> set <key>=<value>

The command to query a value is::

  mlxconfig -d <device> query | grep <key>

The device name for the command ``mlxconfig`` can be either the PCI address,
or the mst device name found with::

  mst status

Below are some firmware configurations listed.

- link type::

    LINK_TYPE_P1
    LINK_TYPE_P2
    value: 1=Infiniband 2=Ethernet 3=VPI(auto-sense)

- enable SR-IOV::

    SRIOV_EN=1

- maximum number of SR-IOV virtual functions::

    NUM_OF_VFS=<N>

- enable DevX (required by Direct Rules and other features)::

    UCTX_EN=1

- aggressive CQE zipping::

    CQE_COMPRESSION=1

- L3 VXLAN and VXLAN-GPE destination UDP port::

    IP_OVER_VXLAN_EN=1
    IP_OVER_VXLAN_PORT=<udp dport>

- enable VXLAN-GPE tunnel flow matching::

    FLEX_PARSER_PROFILE_ENABLE=0
    or
    FLEX_PARSER_PROFILE_ENABLE=2

- enable IP-in-IP tunnel flow matching::

    FLEX_PARSER_PROFILE_ENABLE=0

- enable MPLS flow matching::

    FLEX_PARSER_PROFILE_ENABLE=1

- enable ICMP (code/type/identifier/sequence number) / ICMP6 (code/type) fields matching::

    FLEX_PARSER_PROFILE_ENABLE=2

- enable Geneve flow matching::

    FLEX_PARSER_PROFILE_ENABLE=0
    or
    FLEX_PARSER_PROFILE_ENABLE=1

- enable GTP flow matching::

    FLEX_PARSER_PROFILE_ENABLE=3

- enable eCPRI flow matching::

    FLEX_PARSER_PROFILE_ENABLE=4
.. _linux prerequisites:

Linux Prerequisites
-------------------

This driver relies on external libraries and kernel drivers for resources
allocations and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs**

  User space Verbs framework used by librte_net_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox
  ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
  by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox
    ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
    network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED/EN releases include firmware updates for
  ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.
Installation
~~~~~~~~~~~~

Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.

RDMA Core with Linux Kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Minimal kernel version : v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
- Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request #227 from yishaih/tm")
  (see `RDMA Core installation documentation`_)
- When building for i686 use:

  - rdma-core version 18.0 or above built with 32bit support.
  - Kernel version 4.14.41 or above.

- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja

.. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
.. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
Mellanox OFED/EN
^^^^^^^^^^^^^^^^

- Mellanox OFED version: **4.5** and above /
  Mellanox EN version: **4.5** and above
- firmware version:

  - ConnectX-4: **12.21.1000** and above.
  - ConnectX-4 Lx: **14.21.1000** and above.
  - ConnectX-5: **16.21.1000** and above.
  - ConnectX-5 Ex: **16.21.1000** and above.
  - ConnectX-6: **20.27.0090** and above.
  - ConnectX-6 Dx: **22.27.0090** and above.
  - BlueField: **18.25.1010** and above.

While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__ and
`Mellanox EN
<http://www.mellanox.com/page/products_dyn?product_family=27&mtag=linux>`__
include the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.

.. note::

   Several versions of Mellanox OFED/EN are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `linux prerequisites`_.
Windows Prerequisites
---------------------

This driver relies on external libraries and kernel drivers for resources
allocations and initialization. The dependencies in the following sub-sections
are not part of DPDK, and must be installed separately.

Compilation Prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~

DevX SDK installation
^^^^^^^^^^^^^^^^^^^^^

The DevX SDK must be installed on the machine building the Windows PMD.
Additional information can be found at
`How to Integrate Windows DevX in Your Development Environment
<https://docs.mellanox.com/display/winof2v250/RShim+Drivers+and+Usage#RShimDriversandUsage-DevXInterface>`__.

Runtime Prerequisites
~~~~~~~~~~~~~~~~~~~~~

WinOF2 version 2.60 or higher must be installed on the machine.

The driver can be downloaded from the following site: `WINOF-2
<https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2>`__.

DevX for Windows must be enabled in the Windows registry.
The keys ``DevxEnabled`` and ``DevxFsRules`` must be set.
Additional information can be found in the WinOF2 user manual.
Supported NICs
--------------

The following Mellanox device families are supported by the same mlx5 driver:
ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-5 Ex, ConnectX-6,
ConnectX-6 Dx, ConnectX-6 Lx, BlueField and BlueField-2.

Below are detailed device names:

* Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX415A-CCAT (1x100G)
* Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
Quick Start Guide on OFED/EN
----------------------------

1. Download latest Mellanox OFED/EN. For more info check the `linux prerequisites`_.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED/EN::

      ./mlnxofedinstall --upstream-libs --dpdk

3. Verify the firmware is the correct one::

      ibv_devinfo

4. Verify all ports links are set to Ethernet::

      mlxconfig -d <mst device> query | grep LINK_TYPE

   Link types may have to be configured to Ethernet::

      mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3

   * LINK_TYPE_P1=<1|2|3> , 1=Infiniband 2=Ethernet 3=VPI(auto-sense)

   For hypervisors, verify SR-IOV is enabled on the NIC::

      mlxconfig -d <mst device> query | grep SRIOV_EN

   If needed, configure SR-IOV::

      mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
      mlxfwreset -d <mst device> reset

5. Restart the driver::

      /etc/init.d/openibd restart

   or::

      service openibd restart

   If the link type was changed, the firmware must be reset as well::

      mlxfwreset -d <mst device> reset

   For hypervisors, after the reset, write the sysfs number of virtual functions
   needed for the PF.

   To dynamically instantiate a given number of virtual functions (VFs)::

      echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs

6. Install DPDK and you are ready to go.
   See :doc:`compilation instructions <../linux_gsg/build_dpdk>`.
Enable switchdev mode
---------------------

Switchdev mode is a mode in E-Switch that binds between a representor and a VF.
A representor is a port in DPDK that is connected to a VF in such a way
that, assuming there are no offload flows, each packet that is sent from the VF
will be received by the corresponding representor, while each packet that is
sent to a representor will be received by the VF.
This is very useful in case of SRIOV mode, where the first packet that is sent
by the VF will be received by the DPDK application, which will decide if this
flow should be offloaded to the E-Switch. After offloading the flow, packets
from the VF that match the flow will not be received any more by
the DPDK application.

1. Enable SRIOV mode::

      mlxconfig -d <mst device> set SRIOV_EN=true

2. Configure the max number of VFs::

      mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>

3. Reset the FW::

      mlxfwreset -d <mst device> reset

4. Configure the actual number of VFs::

      echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs

5. Unbind the device (it can be rebound after the switchdev mode)::

      echo -n "<device pci address>" > /sys/bus/pci/drivers/mlx5_core/unbind

6. Enable switchdev mode::

      echo switchdev > /sys/class/net/<net device>/compat/devlink/mode
Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance::

      mlxconfig -d <mst device> s CQE_COMPRESSION=1

   To set it back to the default CQE Zipping mode use::

      mlxconfig -d <mst device> s CQE_COMPRESSION=0

2. In case of virtualization:

   - Make sure that hypervisor kernel is 3.16 or newer.
   - Configure boot with ``iommu=pt``.
   - Use 1G huge pages.
   - Make sure to allocate a VM on huge pages.
   - Make sure to set CPU pinning.

3. Use the CPU near the local NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run::

      lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.

4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth degradation,
   it is recommended to locate both adapters on the same NUMA node.
   This is in order to forward packets from one to the other without
   a NUMA performance penalty.

5. Disable pause frames::

      ethtool -A <netdev> rx off tx off

6. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

   .. note::

      On some machines, depending on the machine integrator, it is beneficial
      to set the PCI max read request parameter to 1K. This can be
      done in the following way:

      To query the read request size use::

         setpci -s <NIC PCI address> 68.w

      If the output is different than 3XXX, set it by::

         setpci -s <NIC PCI address> 68.w=3XXX

      The XXX can be different on different systems. Make sure to configure
      according to the setpci output.

7. To minimize the overhead of searching Memory Regions:

   - ``--socket-mem`` is recommended to pin memory by a predictable amount.
   - Configure per-lcore cache when creating Mempools for packet buffer
     (see the sketch below).
   - Refrain from dynamically allocating/freeing memory in run-time.
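   For instance, started with ``--socket-mem 2048,2048``, a packet mbuf pool
   with a per-lcore cache can be created as in this sketch (the pool name and
   sizes are illustrative)::

      #include <rte_mbuf.h>

      static struct rte_mempool *
      create_pktmbuf_pool(int socket_id)
      {
              /* A 256-entry per-lcore cache keeps most alloc/free local. */
              return rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
                                             RTE_MBUF_DEFAULT_BUF_SIZE,
                                             socket_id);
      }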

There are multiple Rx burst functions with different advantages and limitations.

.. table:: Rx burst functions

   +-------------------+------------------------+---------+-----------------+------+-------+
   || Function Name    || Enabler               || Scatter|| Error Recovery || CQE || Large|
   |                   |                        |         |                 || comp|| MTU  |
   +===================+========================+=========+=================+======+=======+
   | rx_burst          | rx_vec_en=0            |   Yes   | Yes             |  Yes |  Yes  |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_vec      | rx_vec_en=1 (default)  |   No    | if CQE comp off |  Yes |  No   |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq     || mprq_en=1             |   No    | Yes             |  Yes |  Yes  |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq_vec || rx_vec_en=1 (default) |   No    | if CQE comp off |  Yes |  Yes  |
   |                   || mprq_en=1             |         |                 |      |       |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+
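
For example, to make the PMD select ``rx_burst_mprq``, the corresponding
devargs from the table can be passed on the command line (the PCI address
and queue counts are illustrative)::

   testpmd -a 05:00.0,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=0 -- --rxq=4 --txq=4 -i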

.. _mlx5_offloads_support:

Supported hardware offloads
---------------------------

.. table:: Minimal SW/HW versions for queue offloads

   ============== ===== ===== ========= ===== ========== =============
   Offload        DPDK  Linux rdma-core OFED  firmware   hardware
   ============== ===== ===== ========= ===== ========== =============
   common base    17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   checksums      17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   Rx timestamp   17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   TSO            17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   LRO            19.08 N/A   N/A       4.6-4 16.25.6406 ConnectX-5
   Buffer Split   20.11 N/A   N/A       5.1-2 22.28.2006 ConnectX-6 Dx
   ============== ===== ===== ========= ===== ========== =============
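
As an illustration, two of the queue offloads above can be exercised from the
testpmd CLI (a sketch; the commands assume the port supports these offloads
and require the port to be stopped first)::

   testpmd> port stop all
   testpmd> csum set ip hw 0
   testpmd> tso set 1460 0
   testpmd> port start all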

.. table:: Minimal SW/HW versions for rte_flow offloads

   +-----------------------+-----------------+-----------------+
   | Offload               | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | Count                 | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Drop                  | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Queue / RSS           | |               | | DPDK 18.11    |
   |                       | | N/A           | | OFED 4.5      |
   |                       | |               | | rdma-core 23  |
   |                       | |               | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | RSS shared action     | |               | | DPDK 20.11    |
   |                       | | N/A           | | OFED 5.2      |
   |                       | |               | | rdma-core 33  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 19.11    | | DPDK 19.11    |
   | | (of_pop_vlan /      | | OFED 4.7-1    | | OFED 4.7-1    |
   | | of_push_vlan /      | | ConnectX-5    | | ConnectX-5    |
   | | of_set_vlan_pcp /   |                 |                 |
   | | of_set_vlan_vid)    |                 |                 |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
   | (VXLAN / NVGRE / RAW) | | OFED 4.7-1    | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.11    | | DPDK 19.11    |
   | GENEVE                | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 27  | | rdma-core 27  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Tunnel Offload        | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.1-2    | | OFED 5.1-2    |
   |                       | | rdma-core 32  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 19.05    | | DPDK 19.02    |
   | | (set_ipv4_src /     | | OFED 4.7-1    | | OFED 4.7-1    |
   | | set_ipv4_dst /      | | rdma-core 24  | | rdma-core 24  |
   | | set_ipv6_src /      | | ConnectX-5    | | ConnectX-5    |
   | | set_ipv6_dst /      |                 |                 |
   | | set_tp_src /        |                 |                 |
   | | set_tp_dst /        |                 |                 |
   | | dec_ttl /           |                 |                 |
   | | set_ttl /           |                 |                 |
   | | set_mac_src /       |                 |                 |
   | | set_mac_dst)        |                 |                 |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 20.02    | | DPDK 20.02    |
   | | (set_dscp)          | | OFED 5.0      | | OFED 5.0      |
   | |                     | | rdma-core 24  | | rdma-core 24  |
   | |                     | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.7-1    | | OFED 4.7-1    |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Meta data             | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Port ID               | | DPDK 19.05    | | N/A           |
   |                       | | OFED 4.7-1    | | N/A           |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | N/A           |
   +-----------------------+-----------------+-----------------+
   | Hairpin               | |               | | DPDK 19.11    |
   |                       | | N/A           | | OFED 4.7-3    |
   |                       | |               | | rdma-core 26  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | 2-port Hairpin        | |               | | DPDK 20.11    |
   |                       | | N/A           | | OFED 5.1-2    |
   |                       | |               | | N/A           |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Metering              | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Sampling              | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.1-2    | | OFED 5.1-2    |
   |                       | | rdma-core 32  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Age shared action     | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 32  | | rdma-core 32  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
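
For instance, the Mark and Queue / RSS offloads from the table can be
combined in a single rule via the testpmd flow syntax (all values are
illustrative)::

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions mark id 42 / queue index 1 / end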

Notes for metadata
------------------

MARK and META items are interrelated with the datapath - they might move
from/to the applications in mbuf fields. Hence, the zero value for these
items has a special meaning - it means "no metadata is provided", while
non-zero values are treated by the applications and the PMD as valid ones.

Moreover, in the flow engine domain the value zero is acceptable to match and
set, so zero values should be allowed as rte_flow parameters for the META and
MARK items and actions. At the same time, a zero mask has no meaning and
should be rejected at the validation stage.
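
A sketch of this rule in the testpmd flow syntax (assuming the generic
``spec``/``mask`` field sub-commands): matching a zero META value with a
non-zero mask is valid, while the same rule with a zero mask would be
rejected at validation::

   testpmd> flow create 0 ingress pattern meta data spec 0 data mask 0xffffffff / end actions queue index 0 / end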

Notes for rte_flow
------------------

Flows are not cached in the driver.
When stopping a device port, all the flows created on this port from the
application will be flushed automatically in the background.
After stopping the device port, all flows on this port become invalid and
are no longer represented in the system.
All references to these flows held by the application should be discarded
directly but neither destroyed nor flushed.

The application should re-create the flows as required after the port restart.
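
For example, in testpmd a flow has to be created again after a port restart
(the rule below is illustrative)::

   testpmd> flow create 0 ingress pattern eth / end actions drop / end
   testpmd> port stop 0
   testpmd> port start 0
   testpmd> flow create 0 ingress pattern eth / end actions drop / end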

Notes for testpmd
-----------------

Compared to librte_net_mlx4, which implements a single RSS configuration per
port, librte_net_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_net_mlx4::

   > port stop all
   > port config all rss all
   > port start all
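
The resulting configuration can then be verified with::

   > show port 0 rss-hash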

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.

#. Load the kernel modules::

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
   can be run::

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present::

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

#. Optionally, retrieve their PCI bus addresses to be used with the allow list::

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-a \1,p'
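
   Alternatively, candidate addresses can be listed directly, e.g. with::

      lspci -D | grep Mellanox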

#. Request huge pages::

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
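
   The allocation can be verified with::

      grep Huge /proc/meminfo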

#. Start testpmd with basic parameters::

      testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i

   Example output::

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_net_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_net_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_net_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_net_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex

How to dump flows
-----------------

This section demonstrates how to dump flows. Currently, it's possible to dump
all flows with the assistance of external tools.

#. There are two ways to get the flow raw file:

   - Using the testpmd CLI:

     .. code-block:: console

        testpmd> flow dump <port> <output_file>

   - Calling the rte_flow_dev_dump API:

     .. code-block:: c

        rte_flow_dev_dump(port, file, NULL);

#. Dump human-readable flows from the raw file:

   Get the flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump

   .. code-block:: console

      mlx_steering_dump.py -f <output_file>
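
Putting it together from the testpmd CLI (the flow rule and file path are
illustrative)::

   testpmd> flow create 0 ingress pattern eth / end actions drop / end
   testpmd> flow dump 0 /tmp/mlx5_flows.txt

and then, from a shell::

   mlx_steering_dump.py -f /tmp/mlx5_flows.txt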