1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2015 6WIND S.A.
3 Copyright 2015 Mellanox Technologies, Ltd
5 .. include:: <isonum.txt>
10 The MLX5 poll mode driver library (**librte_net_mlx5**) provides support
11 for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** , **Mellanox
12 ConnectX-5**, **Mellanox ConnectX-6**, **Mellanox ConnectX-6 Dx**, **Mellanox
13 ConnectX-6 Lx**, **Mellanox BlueField** and **Mellanox BlueField-2** families
of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.
17 Information and documentation about these adapters can be found on the
18 `Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
19 `Mellanox community <http://community.mellanox.com/welcome>`__.
21 There is also a `section dedicated to this poll mode driver
22 <http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.
28 Besides its dependency on libibverbs (that implies libmlx5 and associated
29 kernel support), librte_net_mlx5 relies heavily on system calls for control
30 operations such as querying/updating the MTU and flow control parameters.
32 For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).
38 This capability allows the PMD to coexist with kernel network interfaces
39 which remain functional, although they stop receiving unicast packets as
40 long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces as those owned by the DPDK
application.
45 The PMD can use libibverbs and libmlx5 to access the device firmware
46 or directly the hardware components.
47 There are different levels of objects and bypassing abilities
to get the best performance:
50 - Verbs is a complete high-level generic API
51 - Direct Verbs is a device-specific API
- DevX allows access to firmware objects
- Direct Rules manages flow steering at the low-level hardware layer
Enabling librte_net_mlx5 causes DPDK applications to be linked against
libibverbs.
61 - Multi arch support: x86_64, POWER8, ARMv8, i686.
62 - Multiple TX and RX queues.
64 - Rx queue delay drop.
65 - Support for scattered TX frames.
66 - Advanced support for scattered Rx frames with tunable buffer attributes.
67 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
68 - RSS using different combinations of fields: L3 only, L4 only or both,
69 and source only, destination only or both.
70 - Several RSS hash keys, one for each flow type.
71 - Default RSS operation with no hash key specification.
72 - Configurable RETA table.
73 - Link flow control (pause frame).
74 - Support for multiple MAC addresses.
78 - RX CRC stripping configuration.
79 - TX mbuf fast free offload.
80 - Promiscuous mode on PF and VF.
81 - Multicast promiscuous mode on PF and VF.
82 - Hardware checksum offloads.
83 - Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
85 - Flow API, including :ref:`flow_isolated_mode`.
87 - KVM and VMware ESX SR-IOV modes are supported.
88 - RSS hash result is supported.
89 - Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
90 - Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
92 - Statistics query including Basic, Extended and per queue.
94 - Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
95 - Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
96 - NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
97 increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
99 - Support for multiple rte_flow groups.
100 - Per packet no-inline hint flag to disable packet data copying into Tx descriptors.
103 - Multiple-thread flow insertion.
104 - Matching on IPv4 Internet Header Length (IHL).
105 - Matching on GTP extension header with raw encap/decap action.
106 - Matching on Geneve TLV option header with raw encap/decap action.
107 - RSS support in sample action.
108 - E-Switch mirroring and jump.
109 - E-Switch mirroring and modify.
- 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
flow group.
112 - Flow metering, including meter policy API.
113 - Flow meter hierarchy.
114 - Flow integrity offload API.
115 - Connection tracking.
116 - Sub-Function representors.
125 On Windows, the features are limited:
127 - Promiscuous mode is not supported
128 - The following rules are supported:
130 - IPv4/UDP with CVLAN filtering
131 - Unicast MAC filtering
133 - Additional rules are supported from WinOF2 version 2.70:
135 - IPv4/TCP with CVLAN filtering
136 - L4 steering rules for port RSS of UDP, TCP and IP
138 - For secondary process:
140 - Forked secondary process not supported.
141 - External memory unregistered in EAL memseg list cannot be used for DMA
142 unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
143 primary process and remapped to the same virtual address in secondary
144 process. If the external memory is registered by primary process but has
145 different virtual address in secondary process, unexpected error may happen.
- The received packet and byte counters of devices in the same share group are identical.
- The received packet and byte counters of queues with the same group and queue ID are identical.
152 - When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
153 specific VLAN will match for VLAN packets as well:
155 When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wild card.
156 Meaning, the flow rule::
158 flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...
Will only match VLAN packets with vid=3, and the flow rule::
162 flow create 0 ingress pattern eth / ipv4 / end ...
164 Will match any ipv4 packet (VLAN included).
- When using Verbs flow engine (``dv_flow_en`` = 0), multi-tagged (QinQ) match is not supported.
168 - When using DV flow engine (``dv_flow_en`` = 1), flow pattern with any VLAN specification will match only single-tagged packets unless the ETH item ``type`` field is 0x88A8 or the VLAN item ``has_more_vlan`` field is 1.
171 flow create 0 ingress pattern eth / ipv4 / end ...
173 Will match any ipv4 packet.
176 flow create 0 ingress pattern eth / vlan / end ...
177 flow create 0 ingress pattern eth has_vlan is 1 / end ...
178 flow create 0 ingress pattern eth type is 0x8100 / end ...
180 Will match single-tagged packets only, with any VLAN ID value.
183 flow create 0 ingress pattern eth type is 0x88A8 / end ...
184 flow create 0 ingress pattern eth / vlan has_more_vlan is 1 / end ...
186 Will match multi-tagged packets only, with any VLAN ID value.
188 - A flow pattern with 2 sequential VLAN items is not supported.
190 - VLAN pop offload command:
- Flow rules having a VLAN pop offload command as one of their actions and
lacking a match on VLAN as one of their items are not supported.
194 - The command is not supported on egress traffic in NIC mode.
196 - VLAN push offload is not supported on ingress traffic in NIC mode.
198 - VLAN set PCP offload is not supported on existing headers.
- A multi-segment packet must not have more segments than reported by dev_infos_get()
in the tx_desc_lim.nb_seg_max field. This value depends on the maximal supported Tx descriptor
size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
inline settings) to 58.
205 - Match on VXLAN supports the following fields only:
208 - Last reserved 8-bits
Last reserved 8-bits matching is only supported when using DV flow
engine (``dv_flow_en`` = 1).
For ConnectX-5, the UDP destination port must be the standard one (4789).
The behavior of group zero may differ, depending on FW.
Matching on a value equal to 0 (value & mask) is not supported.
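For example, a rule matching a specific VNI on the standard VXLAN UDP port could look as follows (an illustrative testpmd command with placeholder values)::

   flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / vxlan vni is 100 / end actions queue index 1 / end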
216 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
218 - Match on Geneve header supports the following fields only:
225 - Match on Geneve TLV option is supported on the following fields:
232 Only one Class/Type/Length Geneve TLV option is supported per shared device.
233 Class/Type/Length fields must be specified as well as masks.
234 Class/Type/Length specified masks must be full.
235 Matching Geneve TLV option without specifying data is not supported.
236 Matching Geneve TLV option with ``data & mask == 0`` is not supported.
238 - VF: flow rules created on VF devices can only match traffic targeted at the
239 configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
241 - Match on GTP tunnel header item supports the following fields only:
243 - v_pt_rsv_flags: E flag, S flag, PN flag
- Match on GTP extension header is supported only for the GTP PDU session container (next
extension header type = 0x85).
249 - Match on GTP extension header is not supported in group 0.
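For example, an illustrative testpmd rule matching a GTP TEID (placeholder values, assuming GTP matching is enabled in firmware, see :ref:`mlx5_firmware_config`) could be::

   flow create 0 ingress pattern eth / ipv4 / udp / gtp teid is 1234 / end actions queue index 0 / end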
253 - Hardware support: BlueField 2.
254 - Flex item is supported on PF only.
255 - Hardware limits ``header_length_mask_width`` up to 6 bits.
256 - Firmware supports 8 global sample fields.
257 Each flex item allocates non-shared sample fields from that pool.
258 - Supported flex item can have 1 input link - ``eth`` or ``udp``
259 and up to 2 output links - ``ipv4`` or ``ipv6``.
260 - Flex item fields (``next_header``, ``next_protocol``, ``samples``)
261 do not participate in RSS hash functions.
262 - In flex item configuration, ``next_header.field_base`` value
263 must be byte aligned (multiple of 8).
- Tx metadata does not go to the E-Switch steering domain for flow group 0.
Flows in group 0 with a set metadata action are rejected by hardware.
270 MAC addresses not already present in the bridge table of the associated
271 kernel network device will be added and cleaned up by the PMD when closing
272 the device. In case of ungraceful program termination, some entries may
273 remain present and should be removed manually by other means.
- Buffer split offload is supported with the regular Rx burst routine only;
neither the MPRQ feature nor vectorized code can be engaged.
- When Multi-Packet Rx queue is configured (``mprq_en``), an Rx packet can be
externally attached to a user-provided mbuf with RTE_MBUF_F_EXTERNAL set in
ol_flags. As the mempool for the external buffer is managed by the PMD, all the
Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
the external buffers will be freed by the PMD and the application which still
holds the external buffers may be corrupted.
285 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
286 enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
287 supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
- IPv6 multicast messages are not supported on VM when promiscuous mode
and allmulticast mode are both set to off.
To receive IPv6 multicast messages on VM, explicitly set the relevant
MAC address using the rte_eth_dev_mac_addr_add() API.
- To support a mixed traffic pattern (some buffers from local host memory, some
buffers from other devices) with high bandwidth, an mbuf flag is used.
An application hints the PMD whether or not it should try to inline the
given mbuf data buffer. The PMD makes a best effort to act upon this request.
300 The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
301 registered by application with rte_mbuf_dynflag_register(). This flag is
302 purely driver-specific and declared in PMD specific header ``rte_pmd_mlx5.h``,
303 which is intended to be used by the application.
To query the supported specific flags at runtime,
the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
currently (over present hardware and configuration) supported specific flags.
The operating flow of the "not inline hint" feature is as follows:
311 - probe the devices, ports are created
312 - query the port capabilities
- if a port supporting the feature is found
314 - register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
315 - application starts the ports
316 - on ``dev_start()`` PMD checks whether the feature flag is registered and
317 enables the feature support in datapath
- the application may set the registered flag bit in the ``ol_flags`` field
of an mbuf being sent and the PMD will handle it appropriately.
- The number of descriptors in a Tx queue may be limited by data inline settings.
Inline data require more descriptor building blocks and the overall block
amount may exceed the hardware supported limits. The application should
reduce the requested Tx size or adjust the data inline settings with the
``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.
327 - To provide the packet send scheduling on mbuf timestamps the ``tx_pp``
328 parameter should be specified.
When the PMD sees RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME set on a packet
being sent, it tries to synchronize the time of packet appearing on
the wire with the specified packet timestamp. If the specified one
is in the past, it should be ignored; if one is in the distant future,
it should be capped with some reasonable value (in the range of seconds).
These specific cases ("too late" and "distant future") can be optionally
reported via device xstats to assist applications to detect the
time-related problems.
338 The timestamp upper "too-distant-future" limit
339 at the moment of invoking the Tx burst routine
340 can be estimated as ``tx_pp`` option (in nanoseconds) multiplied by 2^23.
341 Please note, for the testpmd txonly mode,
342 the limit is deduced from the expression::
344 (n_tx_descriptors / burst_size + 1) * inter_burst_gap
No packet reordering according to timestamps is supposed,
neither within a packet burst nor between packets; it is entirely the
application's responsibility to generate packets and their timestamps
in the desired order. The timestamps can be put only in the first packet
in the burst, providing the entire burst scheduling.
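As an illustration only, a send scheduling session with testpmd in txonly mode might be started as follows (the devarg value and the testpmd ``set txtimes`` gaps are placeholders)::

   dpdk-testpmd -a <PCI_BDF>,tx_pp=500 -- --forward-mode=txonly
   testpmd> set txtimes 100000,10000
   testpmd> start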
352 - E-Switch decapsulation Flow:
354 - can be applied to PF port only.
355 - must specify VF port action (packet redirection from PF to VF).
356 - optionally may specify tunnel inner source and destination MAC addresses.
358 - E-Switch encapsulation Flow:
360 - can be applied to VF ports only.
361 - must specify PF port action (packet redirection from VF to PF).
365 - The input buffer, used as outer header, is not validated.
369 - The decapsulation is always done up to the outermost tunnel detected by the HW.
370 - The input buffer, providing the removal size, is not validated.
371 - The buffer size must match the length of the headers to be removed.
373 - ICMP(code/type/identifier/sequence number) / ICMP6(code/type) matching, IP-in-IP and MPLS flow matching are all
374 mutually exclusive features which cannot be supported together
375 (see :ref:`mlx5_firmware_config`).
379 - Requires DevX and DV flow to be enabled.
380 - KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header.
383 - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
384 it with size limited to max LRO size, not to max RX packet length.
385 - LRO can be used with outer header of TCP packets of the standard format:
386 eth (with or without vlan) / ipv4 or ipv6 / tcp / payload
Other TCP packets (e.g. with MPLS label) received on Rx queue with LRO enabled will be received with bad checksum.
389 - LRO packet aggregation is performed by HW only for packet size larger than
``lro_min_mss_size``. This value is reported on device start, when debug
mode is enabled.
395 - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
396 for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
397 The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
- Fast free offload assumes that all the mbufs being sent originate from the
same memory pool and there are no extra references to the mbufs (the
reference counter for each mbuf is equal to 1 on tx_burst call). The latter
means there should be no externally attached buffers in the mbufs. It is
the application's responsibility to provide correct mbufs if the fast
free offload is engaged. The mlx5 PMD implicitly produces mbufs with
externally attached buffers if the MPRQ option is enabled, hence the fast
free offload is neither supported nor advertised if MPRQ is enabled.
412 - Supports ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action only within NIC Rx and
413 E-Switch steering domain.
414 - For E-Switch Sampling flow with sample ratio > 1, additional actions are not
415 supported in the sample actions list.
- For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` is typically used as the
first action in the E-Switch egress flow if combined with header modify or
encapsulation actions.
- For NIC Rx flow, supports ``MARK``, ``COUNT``, ``QUEUE``, ``RSS`` in the
sample actions list.
421 - For E-Switch mirroring flow, supports ``RAW ENCAP``, ``Port ID``,
422 ``VXLAN ENCAP``, ``NVGRE ENCAP`` in the sample actions list.
426 - Supports the 'set' operation only for ``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
427 - Modification of an arbitrary place in a packet via the special ``RTE_FLOW_FIELD_START`` Field ID is not supported.
428 - Modification of the 802.1Q Tag, VXLAN Network or GENEVE Network ID's is not supported.
- Encapsulation levels are not supported; only outermost header fields can be modified.
- Offsets must be 32-bit aligned and cannot skip past the boundary of a field.
- The IPv6 header item 'proto' field, indicating the next header protocol, should
not be set to an extension header.
If the next header is an extension header, it should not be specified in the
IPv6 header item 'proto' field.
The last extension header item 'next header' field can specify the following
header protocol type.
- Hairpin between two ports supports only manual binding and explicit Tx flow mode. For single port hairpin, all the combinations of auto/manual binding and explicit/implicit Tx flow mode are supported.
- Hairpin in switchdev SR-IOV mode is not supported yet.
446 - All the meter colors with drop action will be counted only by the global drop statistics.
447 - Yellow detection is only supported with ASO metering.
448 - Red color must be with drop action.
449 - Meter statistics are supported only for drop case.
- A meter action created with a pre-defined policy must be the last action in the flow, except for the single case where the policy actions are:
451 - green: NULL or END.
452 - yellow: NULL or END.
- The only supported meter policy actions are:
455 - green: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MARK and SET_TAG.
456 - yellow: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MARK and SET_TAG.
458 - Policy actions of RSS for green and yellow should have the same configuration except queues.
- Policy with RSS/queue action is not supported when ``dv_xmeta_en`` is enabled.
- Meter profile packet mode is supported.
- Meter profiles of RFC2697, RFC2698 and RFC4115 are supported.
465 - Integrity offload is enabled for **ConnectX-6** family.
466 - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
467 - ``level`` value 0 references outer headers.
- Multiple integrity items are not supported in a single flow rule.
- Flow rule items supplied by the application must explicitly specify the network headers referred to by the integrity item.
For example, if the integrity item mask sets the ``l4_ok`` or ``l4_csum_ok`` bits, a reference to the L4 network header,
TCP or UDP, must be in the rule pattern as well::
473 flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
475 flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
477 - Connection tracking:
479 - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
480 - Flow rules insertion rate and memory consumption need more optimization.
482 - 4M connections maximum.
484 - Multi-thread flow insertion:
- In order to achieve the best insertion rate, the application should manage the flows per lcore.
- It is better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.
- TxQ affinity is subject to HW hash once enabled.
493 - Bonding under socket direct mode
499 - CQE timestamp field width is limited by hardware to 63 bits, MSB is zero.
500 - In the free-running mode the timestamp counter is reset on power on
501 and 63-bit value provides over 1800 years of uptime till overflow.
502 - In the real-time mode
503 (configurable with ``REAL_TIME_CLOCK_ENABLE`` firmware settings),
504 the timestamp presents the nanoseconds elapsed since 01-Jan-1970,
505 hardware timestamp overflow will happen on 19-Jan-2038
506 (0x80000000 seconds since 01-Jan-1970).
507 - The send scheduling is based on timestamps
508 from the reference "Clock Queue" completions,
509 the scheduled send timestamps should not be specified with non-zero MSB.
514 MLX5 supports various methods to report statistics:
Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are through SW only and count the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion in the bus or on the NIC can be queried via the rx_discards_phy xstats counter.
Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As Mellanox NICs are using the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port and therefore are not valid for VF.
Finally, per-flow statistics can be queried using ``rte_flow_query`` when attaching a count action to a specific flow. The flow counter counts the number of packets received successfully by the port and matching the specific flow.
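For example, the statistics above can be inspected interactively with testpmd (shown here only as an illustration)::

   testpmd> show port stats 0
   testpmd> show port xstats 0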
528 The ibverbs libraries can be linked with this PMD in a number of ways,
529 configured by the ``ibverbs_link`` build option:
531 - ``shared`` (default): the PMD depends on some .so files.
533 - ``dlopen``: Split the dependencies glue in a separate library
534 loaded when needed by dlopen.
It makes dependencies on libibverbs and libmlx5 optional,
and has no performance impact.

- ``static``: Embed the static flavor of the dependencies libibverbs and libmlx5
in the PMD shared library or the executable static binary.
541 Environment variables
542 ~~~~~~~~~~~~~~~~~~~~~
- ``MLX5_GLUE_PATH``

A list of directories in which to search for the rdma-core "glue" plug-in,
separated by colons or semi-colons.
549 - ``MLX5_SHUT_UP_BF``
551 Configures HW Tx doorbell register as IO-mapped.
553 By default, the HW Tx doorbell is configured as a write-combining register.
554 The register would be flushed to HW usually when the write-combining buffer
555 becomes full, but it depends on CPU design.
Except for vectorized Tx burst routines, a write memory barrier is enforced
after updating the register so that the update can be immediately visible to HW.
When vectorized Tx burst is called, the barrier is set only if the burst size
is not aligned to MLX5_VPMD_TX_MAX_BURST. However, setting this environment
variable will bring better latency even though the maximum throughput can
be slightly degraded.
566 Run-time configuration
567 ~~~~~~~~~~~~~~~~~~~~~~
- librte_net_mlx5 brings kernel network interfaces up during initialization
because it is affected by their state. Forcing them down prevents packet
processing.
573 - **ethtool** operations on related kernel interfaces also affect the PMD.
578 In order to run as a non-root user,
579 some capabilities must be granted to the application::
581 setcap cap_sys_admin,cap_net_admin,cap_net_raw,cap_ipc_lock+ep <dpdk-app>
Below are the reasons each capability is needed:
``cap_sys_admin``
   When using physical addresses (PA mode), with Linux >= 4.0,
   for access to ``/proc/self/pagemap``.

``cap_net_admin``
   For device configuration.

``cap_net_raw``
   For raw ethernet queue allocation through kernel driver.

``cap_ipc_lock``
   For DMA memory pinning.
601 - ``rxq_cqe_comp_en`` parameter [int]
A nonzero value enables the compression of CQE on RX side. This feature
saves PCI bandwidth and improves performance. Enabled by default.
605 Different compression formats are supported in order to achieve the best
606 performance for different traffic patterns. Default format depends on
607 Multi-Packet Rx queue configuration: Hash RSS format is used in case
608 MPRQ is disabled, Checksum format is used in case MPRQ is enabled.
610 Specifying 2 as a ``rxq_cqe_comp_en`` value selects Flow Tag format for
611 better compression rate in case of RTE Flow Mark traffic.
612 Specifying 3 as a ``rxq_cqe_comp_en`` value selects Checksum format.
613 Specifying 4 as a ``rxq_cqe_comp_en`` value selects L3/L4 Header format for
614 better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
615 CQE compression format selection requires DevX to be enabled. If there is
616 no DevX enabled/supported the value is reset to 1 by default.
620 - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
621 ConnectX-6 Lx, BlueField and BlueField-2.
622 - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
623 ConnectX-6 Lx, BlueField and BlueField-2.
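Driver options such as the above are passed as device arguments. For instance, an illustrative testpmd invocation selecting the L3/L4 Header compression format could be::

   dpdk-testpmd -a <PCI_BDF>,rxq_cqe_comp_en=4 -- -i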
625 - ``rxq_pkt_pad_en`` parameter [int]
A nonzero value enables padding Rx packet to the size of cacheline on PCI
transaction. This feature would waste PCI bandwidth but could improve
performance by avoiding partial cacheline write which may cause costly
read-modify-copy in memory transaction on some architectures. Disabled by
default.
635 - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
636 ConnectX-6 Lx, BlueField and BlueField-2.
637 - POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
638 ConnectX-6 Lx, BlueField and BlueField-2.
640 - ``delay_drop`` parameter [int]
642 Bitmask value for the Rx queue delay drop attribute. Bit 0 is used for the
643 standard Rx queue and bit 1 is used for the hairpin Rx queue. By default, the
644 delay drop is disabled for all Rx queues. It will be ignored if the port does
645 not support the attribute even if it is enabled explicitly.
647 The packets being received will not be dropped immediately when the WQEs are
648 exhausted in a Rx queue with delay drop enabled.
A timeout value is set in the driver to control the waiting time before
dropping a packet. Once the timer is expired, the delay drop will be
deactivated for all the Rx queues with this feature enabled. To re-activate
it, a rearming is needed and it is part of the kernel driver starting from
OFED 5.5.
656 To enable / disable the delay drop rearming, the private flag ``dropless_rq``
657 can be set and queried via ethtool:
659 - ethtool --set-priv-flags <netdev> dropless_rq on (/ off)
660 - ethtool --show-priv-flags <netdev>
The configuration flag is global per PF and can only be set on the PF, once
it is on, all the VFs', SFs' and representors' Rx queues will share the timer
and rearming.
666 - ``mprq_en`` parameter [int]
668 A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
669 configured as Multi-Packet RQ if the total number of Rx queues is
670 ``rxqs_min_mprq`` or more. Disabled by default.
672 Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe bandwidth
673 by posting a single large buffer for multiple packets. Instead of posting a
674 buffers per a packet, one large buffer is posted in order to receive multiple
675 packets on the buffer. A MPRQ buffer consists of multiple fixed-size strides
676 and each stride receives one packet. MPRQ can improve throughput for
677 small-packet traffic.
679 When MPRQ is enabled, MTU can be larger than the size of
680 user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
681 configure large stride size enough to accommodate MTU as long as
682 device allows. Note that this can waste system memory compared to enabling Rx
683 scatter and multi-segment packet.
685 - ``mprq_log_stride_num`` parameter [int]
687 Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
688 strides can reduce PCIe traffic further. If configured value is not in the
689 range of device capability, the default value will be set with a warning
message. The default value is 4 which is 16 strides per buffer, valid only
691 if ``mprq_en`` is set.
693 The size of Rx queue should be bigger than the number of strides.
695 - ``mprq_log_stride_size`` parameter [int]
697 Log 2 of the size of a stride for Multi-Packet Rx queue. Configuring a smaller
698 stride size can save some memory and reduce probability of a depletion of all
699 available strides due to unreleased packets by an application. If configured
700 value is not in the range of device capability, the default value will be set
with a warning message. The default value is 11 which is 2048 bytes per
stride, valid only if ``mprq_en`` is set. With ``mprq_log_stride_size`` set
it is possible for a packet to span across multiple strides. This mode allows
support of jumbo frames (9K) with MPRQ. The memcopy of some packets (or part
of a packet if Rx scatter is configured) may be required in case there is no
space left for a head room at the end of a stride which incurs some
software overhead.
709 - ``mprq_max_memcpy_len`` parameter [int]
711 The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
712 packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
713 than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
714 the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
715 A mempool for external buffers will be allocated and managed by PMD. If Rx
716 packet is externally attached, ol_flags field of the mbuf will have
717 RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
718 checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
720 - ``rxqs_min_mprq`` parameter [int]
Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
greater than or equal to this value. The default value is 12, valid only if
``mprq_en`` is set.
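As an illustration, the MPRQ-related parameters above could be combined in a single device argument string (the values are placeholders)::

   dpdk-testpmd -a <PCI_BDF>,mprq_en=1,mprq_log_stride_num=6,mprq_log_stride_size=11,mprq_max_memcpy_len=128,rxqs_min_mprq=1 -- -i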
726 - ``txq_inline`` parameter [int]
728 Amount of data to be inlined during TX operations. This parameter is
729 deprecated and converted to the new parameter ``txq_inline_max`` providing
730 partial compatibility.
732 - ``txqs_min_inline`` parameter [int]
Enable inline data send only when the number of TX queues is greater than or equal
to this value.
737 This option should be used in combination with ``txq_inline_max`` and
738 ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.
If this option is not specified the default value 16 is used for BlueField
and 8 for other platforms.
Data inlining consumes CPU cycles, so this option is intended to
auto-enable inline data when there are enough Tx queues, which means there are
enough CPU cores, PCI bandwidth is becoming more critical, and the CPU
is not supposed to be the bottleneck anymore.
Copying data into the WQE improves latency and can improve PPS performance
when PCI back pressure is detected, and may be useful for scenarios involving
heavy traffic on many queues.
752 Because additional software logic is necessary to handle this mode, this
753 option should be used with care, as it may lower performance when back
754 pressure is not expected.
756 If inline data are enabled it may affect the maximal size of Tx queue in
757 descriptors because the inline data increase the descriptor size and
758 queue size limits supported by hardware may be exceeded.
760 - ``txq_inline_min`` parameter [int]
762 Minimal amount of data to be inlined into WQE during Tx operations. NICs
763 may require this minimal data amount to operate correctly. The exact value
764 may depend on NIC operation mode, requested offloads, etc. It is strongly
765 recommended to omit this parameter and use the default values. Anyway,
766 applications using this parameter should take into consideration that
767 specifying an inconsistent value may prevent the NIC from sending packets.
769 If ``txq_inline_min`` key is present the specified value (may be aligned
770 by the driver in order not to exceed the limits and provide better descriptor
771 space utilization) will be used by the driver and it is guaranteed that
772 requested amount of data bytes are inlined into the WQE beside other inline
773 settings. This key also may update ``txq_inline_max`` value (default
774 or specified explicitly in devargs) to reserve the space for inline data.
776 If ``txq_inline_min`` key is not present, the value may be queried by the
777 driver from the NIC via DevX if this feature is available. If there is no DevX
enabled/supported the value 18 (supposing L2 header including VLAN) is set
for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the entire
packet is inlined.
For the ConnectX-4 NIC, the driver does not allow specifying a value below 18
(minimal L2 header, including VLAN); an error will be raised.
For the ConnectX-4 Lx NIC, it is allowed to specify values below 18, but
it is not recommended and may prevent the NIC from sending packets over
some configurations.
For ConnectX-4 and ConnectX-4 Lx NICs, the automatically configured value
is insufficient for some traffic, because they require at least all L2 headers
to be inlined. For example, Q-in-Q adds 4 bytes to the default 18 bytes
of Ethernet and VLAN, thus ``txq_inline_min`` must be set to 22.
MPLS would add 4 bytes per label. The final value must account for all possible
L2 encapsulation headers used in a particular environment.
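An illustrative device argument for such a Q-in-Q environment (placeholder PCI address)::

   dpdk-testpmd -a <PCI_BDF>,txq_inline_min=22 -- -i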
Please note, this minimal data inlining disengages the eMPW feature (Enhanced
Multi-Packet Write), because the latter does not support partial packet inlining.
This is not very critical because minimal data inlining is mostly required
by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support the eMPW feature.
802 - ``txq_inline_max`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE
Ethernet Segment for the ordinary SEND method. If a packet is larger than the specified
value, the packet data won't be copied by the driver at all and the data buffer
is addressed with a pointer. If the packet length is less or equal, all packet
data will be copied into the WQE. This may improve PCI bandwidth utilization for
short packets significantly but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the number of Tx
queues is larger than the ``txqs_min_inline`` key parameter, the inline feature
is engaged; if there are not enough Tx queues (which means not enough CPU cores
and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_max`` value is 290. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Also, the default value (290) may be decreased at run time if a large transmit
queue size is requested and the hardware does not support enough descriptors;
in this case a warning is emitted. If the ``txq_inline_max`` key is
specified and the requested inline settings cannot be satisfied, an error
will be raised.
826 - ``txq_inline_mpw`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE for
the Enhanced MPW method. If a packet is larger than the specified value, the packet data
won't be copied, and the data buffer is addressed with a pointer. If the packet length
is less or equal, all packet data will be copied into the WQE. This may improve PCI
bandwidth utilization for short packets significantly but requires extra
CPU cycles.
The data inline feature is controlled by the number of TX queues: if the number of Tx
queues is larger than the ``txqs_min_inline`` key parameter, the inline feature
is engaged; if there are not enough Tx queues (which means not enough CPU cores
and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Because multiple packets may be included in the same WQE with the Enhanced Multi-Packet
Write method and the overall WQE size is limited, it is not recommended to
specify large values for ``txq_inline_mpw``. Also, the default value (268)
may be decreased at run time if a large transmit queue size is requested
and the hardware does not support enough descriptors; in this case a warning
is emitted. If the ``txq_inline_mpw`` key is specified and the requested inline
settings cannot be satisfied, an error will be raised.
852 - ``txqs_max_vec`` parameter [int]
Enable vectorized Tx only when the number of TX queues is less than or
equal to this value. This parameter is deprecated and ignored, kept
for compatibility only, so as not to prevent the driver from probing.
858 - ``txq_mpw_hdr_dseg_en`` parameter [int]
A nonzero value enables including two pointers in the first block of the TX
descriptor. The parameter is deprecated and ignored, kept for compatibility
only.
Maximum size of packet to be inlined. If the size of a packet is larger than the
configured value, the packet isn't inlined even though there's enough space remaining
in the descriptor. Instead, the packet is referenced with a pointer. This parameter
is deprecated and converted directly to ``txq_inline_mpw`` providing full
compatibility. Valid only if the eMPW feature is engaged.
873 - ``txq_mpw_en`` parameter [int]
875 A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
876 ConnectX-6, ConnectX-6 Dx, ConnectX-6 Lx, BlueField, BlueField-2.
877 eMPW allows the Tx burst function to pack up multiple packets
878 in a single descriptor session in order to save PCI bandwidth
879 and improve performance at the cost of a slightly higher CPU usage.
880 When ``txq_inline_mpw`` is set along with ``txq_mpw_en``,
the Tx burst function copies the entire packet data onto the Tx descriptor
instead of including a pointer to the packet.
The Enhanced Multi-Packet Write feature is enabled by default if the NIC supports
it and can be disabled by explicitly specifying 0 for the ``txq_mpw_en`` option.
Also, if minimal data inlining is requested by a non-zero ``txq_inline_min``
option or reported by the NIC, the eMPW feature is disengaged.
889 - ``tx_db_nc`` parameter [int]
891 The rdma core library can map doorbell register in two ways, depending on the
892 environment variable "MLX5_SHUT_UP_BF":
- As regular cached memory (usually with write combining attribute), if the
variable is either missing or set to zero.
- As non-cached memory, if the variable is present and set to a non-zero value.
The type of mapping may slightly affect the Tx performance; the optimal choice
strongly depends on the host architecture and should be determined in practice.
If ``tx_db_nc`` is set to zero, the doorbell is forced to be mapped to regular
memory (with write combining) and the PMD will perform an extra write memory barrier
after writing to the doorbell. This might increase the needed CPU clocks per packet
to send, but latency might be improved.
If ``tx_db_nc`` is set to one, the doorbell is forced to be mapped to non-cached
memory and the PMD will not perform the extra write memory barrier
after writing to the doorbell; on some architectures it might improve the
performance.
If ``tx_db_nc`` is set to two, the doorbell is forced to be mapped to regular
memory and the PMD will use heuristics to decide whether a write memory barrier
should be performed. For bursts with a size that is a multiple of the recommended one (64 pkts)
it is assumed the next burst is coming and there is no need to issue the extra memory
barrier (it is supposed to be issued in the next coming burst, at least after
descriptor writing). It might increase latency (on some hosts until the next
packets are transmitted) and should be used with care.
919 If ``tx_db_nc`` is omitted or set to zero, the preset (if any) environment
920 variable "MLX5_SHUT_UP_BF" value is used. If there is no "MLX5_SHUT_UP_BF",
921 the default ``tx_db_nc`` value is zero for ARM64 hosts and one for others.
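For instance, forcing the non-cached doorbell mapping could be done with an illustrative invocation like::

   dpdk-testpmd -a <PCI_BDF>,tx_db_nc=1 -- -i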
923 - ``tx_pp`` parameter [int]
If a nonzero value is specified the driver creates all the necessary internal
objects to provide accurate packet send scheduling on mbuf timestamps.
A positive value specifies the scheduling granularity in nanoseconds;
the packet send will be accurate up to the specified granularity. The allowed range is
from 500 to 1 million nanoseconds. A negative value specifies the modulus of the
granularity and engages a special test mode to check the scheduling rate.
By default (if ``tx_pp`` is not specified) send scheduling on timestamps
is disabled.
934 - ``tx_skew`` parameter [int]
936 The parameter adjusts the send packet scheduling on timestamps and represents
937 the average delay between beginning of the transmitting descriptor processing
938 by the hardware and appearance of actual packet data on the wire. The value
939 should be provided in nanoseconds and is valid only if ``tx_pp`` parameter is
940 specified. The default value is zero.
942 - ``tx_vec_en`` parameter [int]
944 A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx,
945 ConnectX-6 Lx, BlueField and BlueField-2 NICs
946 if the number of global Tx queues on the port is less than ``txqs_max_vec``.
947 The parameter is deprecated and ignored.
949 - ``rx_vec_en`` parameter [int]
A nonzero value enables Rx vector if the port is not configured in
multi-segment mode; otherwise this parameter is ignored.
956 - ``vf_nl_en`` parameter [int]
A nonzero value enables Netlink requests from the VF to add/remove MAC
addresses and/or enable/disable promiscuous/all multicast on the Netdevice.
Otherwise the relevant configuration must be done with Linux iproute2 tools.
This is a prerequisite to receive this kind of traffic.
Enabled by default, valid only on VF devices, ignored otherwise.
965 - ``l3_vxlan_en`` parameter [int]
A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
L3 VXLAN or VXLAN-GPE, users have to configure the firmware and enable this
parameter. This is a prerequisite to receive this kind of traffic.
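An illustrative invocation enabling L3 VXLAN flow creation (the firmware must also be configured as described in :ref:`mlx5_firmware_config`)::

   dpdk-testpmd -a <PCI_BDF>,l3_vxlan_en=1 -- -i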
973 - ``dv_xmeta_en`` parameter [int]
975 A nonzero value enables extensive flow metadata support if device is
976 capable and driver supports it. This can enable extensive support of
977 ``MARK`` and ``META`` item of ``rte_flow``. The newly introduced
978 ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
There are several possible configurations, depending on the parameter value:
982 - 0, this is default value, defines the legacy mode, the ``MARK`` and
983 ``META`` related actions and items operate only within NIC Tx and
984 NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
985 the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
item is 32 bits wide and match is supported on egress only.
988 - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
989 related actions and items operate within all supported steering domains,
990 including FDB, ``MARK`` and ``META`` information may cross the domain
991 boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
992 depends on kernel and firmware configurations and might be 0, 16 or
32 bits. Within the NIC Tx domain ``META`` data width is 32 bits for
compatibility, the actual width of data transferred to the FDB domain
depends on kernel configuration and may vary. The actual supported
width can be retrieved at runtime by a series of rte_flow_validate()
trials.
999 - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
1000 related actions and items operate within all supported steering domains,
1001 including FDB, ``MARK`` and ``META`` information may cross the domain
1002 boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
1003 depends on kernel and firmware configurations and might be 0, 16 or
24 bits. The actual supported width can be retrieved at runtime by a
series of rte_flow_validate() trials.
1007 - 3, this engages tunnel offload mode. In E-Switch configuration, that
1008 mode implicitly activates ``dv_xmeta_en=1``.
1010 +------+-----------+-----------+-------------+-------------+
1011 | Mode | ``MARK`` | ``META`` | ``META`` Tx | FDB/Through |
1012 +======+===========+===========+=============+=============+
1013 | 0 | 24 bits | 32 bits | 32 bits | no |
1014 +------+-----------+-----------+-------------+-------------+
1015 | 1 | 24 bits | vary 0-32 | 32 bits | yes |
1016 +------+-----------+-----------+-------------+-------------+
1017 | 2 | vary 0-24 | 32 bits | 32 bits | yes |
1018 +------+-----------+-----------+-------------+-------------+
1020 If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
1021 ignored and the device is configured to operate in legacy mode (0).
1023 Disabled by default (set to 0).
1025 The Direct Verbs/Rules (engaged with ``dv_flow_en`` = 1) supports all
1026 of the extensive metadata features. The legacy Verbs supports FLAG and
1027 MARK metadata actions over NIC Rx steering domain only.
Setting the META value to zero in a flow action means there is no item provided
and the receiving datapath will not report in mbufs that the metadata is present.
Setting the MARK value to zero in a flow action means the zero FDIR ID value
will be reported on packet receiving.
1034 For the MARK action the last 16 values in the full range are reserved for
1035 internal PMD purposes (to emulate FLAG action). The valid range for the
1036 MARK action values is 0-0xFFEF for the 16-bit mode and 0-0xFFFFEF
1037 for the 24-bit mode, the flows with the MARK action value outside
1038 the specified range will be rejected.
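For example, with extensive metadata mode enabled, a MARK value can be set on received packets with an illustrative testpmd rule (placeholder values)::

   dpdk-testpmd -a <PCI_BDF>,dv_xmeta_en=1 -- -i
   testpmd> flow create 0 ingress pattern eth / ipv4 / end actions mark id 42 / queue index 0 / end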
1040 - ``dv_flow_en`` parameter [int]
1042 A nonzero value enables the DV flow steering assuming it is supported
1043 by the driver (RDMA Core library version is rdma-core-24.0 or higher).
1045 Enabled by default if supported.
1047 - ``dv_esw_en`` parameter [int]
1049 A nonzero value enables E-Switch using Direct Rules.
1051 Enabled by default if supported.
1053 - ``lacp_by_user`` parameter [int]
1055 A nonzero value enables the control of LACP traffic by the user application.
1056 When a bond exists in the driver, by default it should be managed by the
1057 kernel and therefore LACP traffic should be steered to the kernel.
1058 If this devarg is set to 1 it will allow the user to manage the bond by
1059 itself and not steer LACP traffic to the kernel.
1061 Disabled by default (set to 0).
1063 - ``mr_ext_memseg_en`` parameter [int]
1065 A nonzero value enables extending memseg when registering DMA memory. If
1066 enabled, the number of entries in MR (Memory Region) lookup table on datapath
1067 is minimized and it benefits performance. On the other hand, it worsens memory
1068 utilization because registered memory is pinned by kernel driver. Even if a
1069 page in the extended chunk is freed, that doesn't become reusable until the
1070 entire memory is freed.
1074 - ``mr_mempool_reg_en`` parameter [int]
1076 A nonzero value enables implicit registration of DMA memory of all mempools
1077 except those having ``RTE_MEMPOOL_F_NON_IO``. This flag is set automatically
1078 for mempools populated with non-contiguous objects or those without IOVA.
1079 The effect is that when a packet from a mempool is transmitted,
1080 its memory is already registered for DMA in the PMD and no registration
1081 will happen on the data path. The tradeoff is extra work on the creation
1082 of each mempool and increased HW resource use if some mempools
1083 are not used with MLX5 devices.
1087 - ``representor`` parameter [list]
1089 This parameter can be used to instantiate DPDK Ethernet devices from
1090 existing port (PF, VF or SF) representors configured on the device.
1092 It is a standard parameter whose format is described in
1093 :ref:`ethernet_device_standard_device_arguments`.
1095 For instance, to probe VF port representors 0 through 2::
1097 <PCI_BDF>,representor=vf[0-2]
1099 To probe SF port representors 0 through 2::
1101 <PCI_BDF>,representor=sf[0-2]
1103 To probe VF port representors 0 through 2 on both PFs of bonding device::
1105 <Primary_PCI_BDF>,representor=pf[0,1]vf[0-2]
1107 - ``max_dump_files_num`` parameter [int]
The maximum number of files per PMD entity that may be created for debug information.
The files will be created in the /var/log directory or in the current directory.
Set to 128 by default.
1114 - ``lro_timeout_usec`` parameter [int]
1116 The maximum allowed duration of an LRO session, in micro-seconds.
1117 PMD will set the nearest value supported by HW, which is not bigger than
1118 the input ``lro_timeout_usec`` value.
1119 If this parameter is not specified, by default PMD will set
1120 the smallest value supported by HW.
1122 - ``hp_buf_log_sz`` parameter [int]
1124 The total data buffer size of a hairpin queue (logarithmic form), in bytes.
1125 PMD will set the data buffer size to 2 ** ``hp_buf_log_sz``, both for RX & TX.
The capacity of the value is specified by the firmware and the initialization
will fail if the value is out of range.
The range of the value is from 11 to 19 right now, and the supported frame
size of a single packet for hairpin is from 512B to 128KB. It might change if
a different firmware release is being used. Using a small value
reduces memory consumption but does not work with a large frame. If the value is
too large, the memory consumption will be high and some potential performance
degradation will be introduced.
1134 By default, the PMD will set this value to 16, which means that 9KB jumbo
1135 frames will be supported.
1137 - ``reclaim_mem_mode`` parameter [int]
Caching some resources on flow destroy helps make flow re-creation more efficient,
while some systems may require that all the resources be reclaimed after
the flows are destroyed.
The parameter ``reclaim_mem_mode`` provides the option for the user to configure
whether the resource cache is needed or not.
1145 There are three options to choose:
- 0. It means the flow resources will be cached as usual. The cached resources
are helpful for the flow insertion rate.
1150 - 1. It will only enable the DPDK PMD level resources reclaim.
- 2. Both the DPDK PMD level and the rdma-core low level will be configured as
reclaim mode.
1155 By default, the PMD will set this value to 0.
1157 - ``sys_mem_en`` parameter [int]
A non-zero value enables PMD memory management to allocate memory
from the system by default, without the explicit rte memory flag.
1162 By default, the PMD will set this value to 0.
1164 - ``decap_en`` parameter [int]
1166 Some devices do not support FCS (frame checksum) scattering for
1167 tunnel-decapsulated packets.
1168 If set to 0, this option forces the FCS feature and rejects tunnel
1169 decapsulation in the flow engine for such devices.
1171 By default, the PMD will set this value to 1.
1173 - ``allow_duplicate_pattern`` parameter [int]
1175 There are two options to choose:
1177 - 0. Prevent insertion of rules with the same pattern items on non-root table.
1178 In this case, only the first rule is inserted and the following rules are
1179 rejected and error code EEXIST is returned.
1181 - 1. Allow insertion of rules with the same pattern items.
1182 In this case, all rules are inserted but only the first rule takes effect,
1183 the next rule takes effect only if the previous rules are deleted.
1185 By default, the PMD will set this value to 1.
1187 .. _mlx5_firmware_config:
1189 Firmware configuration
1190 ~~~~~~~~~~~~~~~~~~~~~~
1192 Firmware features can be configured as key/value pairs.
1194 The command to set a value is::
1196 mlxconfig -d <device> set <key>=<value>
1198 The command to query a value is::
1200 mlxconfig -d <device> query | grep <key>
1202 The device name for the command ``mlxconfig`` can be either the PCI address,
1203 or the mst device name found with::
Some relevant firmware configurations are listed below.
- link type::

    LINK_TYPE_P1
    LINK_TYPE_P2
    value: 1=Infiniband 2=Ethernet 3=VPI(auto-sense)
- maximum number of SR-IOV virtual functions::

    NUM_OF_VFS=<max>
- enable DevX (required by Direct Rules and other features)::

    UCTX_EN=1
- aggressive CQE zipping::

    CQE_COMPRESSION=1
- L3 VXLAN and VXLAN-GPE destination UDP port::

    IP_OVER_VXLAN_EN=1
    IP_OVER_VXLAN_PORT=<udp dport>
1236 - enable VXLAN-GPE tunnel flow matching::
FLEX_PARSER_PROFILE_ENABLE=0
or
FLEX_PARSER_PROFILE_ENABLE=2
1242 - enable IP-in-IP tunnel flow matching::
1244 FLEX_PARSER_PROFILE_ENABLE=0
1246 - enable MPLS flow matching::
1248 FLEX_PARSER_PROFILE_ENABLE=1
1250 - enable ICMP(code/type/identifier/sequence number) / ICMP6(code/type) fields matching::
1252 FLEX_PARSER_PROFILE_ENABLE=2
1254 - enable Geneve flow matching::
FLEX_PARSER_PROFILE_ENABLE=0
or
FLEX_PARSER_PROFILE_ENABLE=1
1260 - enable Geneve TLV option flow matching::
1262 FLEX_PARSER_PROFILE_ENABLE=0
1264 - enable GTP flow matching::
1266 FLEX_PARSER_PROFILE_ENABLE=3
1268 - enable eCPRI flow matching::
1270 FLEX_PARSER_PROFILE_ENABLE=4
1273 - enable dynamic flex parser for flex item::
1275 FLEX_PARSER_PROFILE_ENABLE=4
1278 - enable realtime timestamp format::
1280 REAL_TIME_CLOCK_ENABLE=1
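For example, GTP flow matching could be enabled and verified with the generic commands above (``<device>`` is a placeholder for the PCI address or mst device name)::

   mlxconfig -d <device> set FLEX_PARSER_PROFILE_ENABLE=3
   mlxconfig -d <device> query | grep FLEX_PARSER_PROFILE_ENABLE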
This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
1287 DPDK and must be installed separately:
1291 User space Verbs framework used by librte_net_mlx5. This library provides
a generic interface between the kernel and low-level user space drivers
such as libmlx5.
1295 It allows slow and privileged operations (context initialization, hardware
1296 resources allocations) to be managed by the kernel and fast operations to
1297 never leave user space.
Low-level user space driver library for Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices, it is automatically loaded
by libibverbs.

This library basically implements send/receive calls to the hardware
queues.
1308 - **Kernel modules**
They provide the kernel-side Verbs API and low level device drivers that
manage actual hardware initialization and resources sharing with user
space processes.

Unlike most other PMDs, these modules must remain loaded and bound to
their devices:
- mlx5_core: hardware driver managing Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel
network devices.
1320 - mlx5_ib: InfiniBand device driver.
1321 - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
1323 - **Firmware update**
1325 Mellanox OFED/EN releases include firmware updates for
1326 ConnectX-4/ConnectX-5/ConnectX-6/BlueField adapters.
1328 Because each release provides new features, these updates must be applied to
1329 match the kernel modules and libraries they come with.
Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
licensed.
Either the RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED/EN, which provides compatibility with older
releases.
1343 RDMA Core with Linux Kernel
1344 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Minimal kernel version: v4.14 or the most recent 4.14-rc (see `Linux installation documentation`_)
1347 - Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request #227 from yishaih/tm")
1348 (see `RDMA Core installation documentation`_)
1349 - When building for i686 use:
1351 - rdma-core version 18.0 or above built with 32bit support.
1352 - Kernel version 4.14.41 or above.
- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja
1360 .. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
1361 .. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
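As a minimal sketch (assuming CMake and Ninja are available; the RDMA Core
installation documentation above remains the authoritative reference), a regular
shared build of rdma-core follows the same CMake flow::

   git clone https://github.com/linux-rdma/rdma-core.git
   cd rdma-core
   mkdir build && cd build
   cmake -GNinja ..
   ninja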
Mellanox OFED/EN
^^^^^^^^^^^^^^^^

- Mellanox OFED version: **4.5** and above /
  Mellanox EN version: **4.5** and above
- firmware version:

  - ConnectX-4: **12.21.1000** and above.
  - ConnectX-4 Lx: **14.21.1000** and above.
  - ConnectX-5: **16.21.1000** and above.
  - ConnectX-5 Ex: **16.21.1000** and above.
  - ConnectX-6: **20.27.0090** and above.
  - ConnectX-6 Dx: **22.27.0090** and above.
  - BlueField: **18.25.1010** and above.
1379 While these libraries and kernel modules are available on OpenFabrics
1380 Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
1381 managers on most distributions, this PMD requires Ethernet extensions that
1382 may not be supported at the moment (this is a work in progress).
`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__ and
`Mellanox EN
<http://www.mellanox.com/page/products_dyn?product_family=27&mtag=linux>`__
1388 include the necessary support and should be used in the meantime. For DPDK,
1389 only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
1390 required from that distribution.
1394 Several versions of Mellanox OFED/EN are available. Installing the version
1395 this DPDK release was developed and tested against is strongly
1396 recommended. Please check the `linux prerequisites`_.
1398 Windows Prerequisites
1399 ---------------------
1401 This driver relies on external libraries and kernel drivers for resources
1402 allocations and initialization. The dependencies in the following sub-sections
1403 are not part of DPDK, and must be installed separately.
1405 Compilation Prerequisites
1406 ~~~~~~~~~~~~~~~~~~~~~~~~~
1408 DevX SDK installation
1409 ^^^^^^^^^^^^^^^^^^^^^
1411 The DevX SDK must be installed on the machine building the Windows PMD.
1412 Additional information can be found at
1413 `How to Integrate Windows DevX in Your Development Environment
1414 <https://docs.mellanox.com/display/winof2v250/RShim+Drivers+and+Usage#RShimDriversandUsage-DevXInterface>`__.
1416 Runtime Prerequisites
1417 ~~~~~~~~~~~~~~~~~~~~~
1419 WinOF2 version 2.60 or higher must be installed on the machine.
The driver can be downloaded from the following site: `WinOF-2
<https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2>`__
1431 DevX for Windows must be enabled in the Windows registry.
1432 The keys ``DevxEnabled`` and ``DevxFsRules`` must be set.
1433 Additional information can be found in the WinOF2 user manual.
Supported NICs
--------------

The following Mellanox device families are supported by the same mlx5 driver:
ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-5 Ex, ConnectX-6, ConnectX-6 Dx,
ConnectX-6 Lx, BlueField and BlueField-2.

Below are detailed device names:
1452 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
1453 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
1454 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
1455 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
1456 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
1457 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
1458 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
1459 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
1460 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
1461 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
1462 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
1463 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
1464 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX415A-CCAT (1x100G)
1466 * Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
1467 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
1468 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
1469 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
1470 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
1471 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
1472 * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
1473 * Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
1474 * Mellanox\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
1475 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
1476 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
1477 * Mellanox\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
1479 Quick Start Guide on OFED/EN
1480 ----------------------------
1. Download the latest Mellanox OFED/EN. For more info check the `linux prerequisites`_.
1485 2. Install the required libraries and kernel modules either by installing
1486 only the required set, or by installing the entire Mellanox OFED/EN::
1488 ./mlnxofedinstall --upstream-libs --dpdk
3. Verify the firmware is the correct one::

     ibv_devinfo
4. Verify that all port links are set to Ethernet::
1496 mlxconfig -d <mst device> query | grep LINK_TYPE
1500 Link types may have to be configured to Ethernet::
1502 mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3
* LINK_TYPE_P1=<1|2|3>, 1=Infiniband 2=Ethernet 3=VPI(auto-sense)
1506 For hypervisors, verify SR-IOV is enabled on the NIC::
1508 mlxconfig -d <mst device> query | grep SRIOV_EN
1511 If needed, configure SR-IOV::
1513 mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
1514 mlxfwreset -d <mst device> reset
1516 5. Restart the driver::
     /etc/init.d/openibd restart

   or::

     service openibd restart
1524 If link type was changed, firmware must be reset as well::
1526 mlxfwreset -d <mst device> reset
   For hypervisors, after reset write the sysfs number of virtual functions
   needed for the PF (see the example after this list).
1531 To dynamically instantiate a given number of virtual functions (VFs)::
1533 echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
1535 6. Install DPDK and you are ready to go.
1536 See :doc:`compilation instructions <../linux_gsg/build_dpdk>`.
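For example, assuming the PF is exposed as ``mlx5_0``, two VFs could be
instantiated and then confirmed to be visible on the PCI bus (``15b3`` is the
Mellanox PCI vendor ID)::

   echo 2 > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
   lspci -d 15b3: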
1538 Enable switchdev mode
1539 ---------------------
Switchdev mode is an E-Switch mode that binds a representor to a VF or SF.
A representor is a DPDK port connected to a VF or SF in such a way that,
assuming there are no offload flows, each packet sent from the VF or SF
is received by the corresponding representor, while each packet sent to a
representor is received by the VF or SF.
This is very useful in SR-IOV mode, where the first packet sent by a VF or SF
is received by the DPDK application, which decides whether this flow should
be offloaded to the E-Switch. After the flow is offloaded, packets matching
the flow are no longer received by the DPDK application.
1552 1. Enable SRIOV mode::
1554 mlxconfig -d <mst device> set SRIOV_EN=true
2. Configure the max number of VFs and reset the firmware to apply it::

        mlxconfig -d <mst device> set NUM_OF_VFS=<num of vfs>
        mlxfwreset -d <mst device> reset
1564 3. Configure the actual number of VFs::
        echo <num of vfs> > /sys/class/net/<net device>/device/sriov_numvfs
4. Unbind the device (it can be bound again after enabling switchdev mode)::

        echo -n "<device pci address>" > /sys/bus/pci/drivers/mlx5_core/unbind
1572 5. Enable switchdev mode::
1574 echo switchdev > /sys/class/net/<net device>/compat/devlink/mode
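As a sketch (assuming the same ``<net device>`` as above and that a few VFs
were created in step 3), the mode can be read back to confirm the change and
the VF representors can then be probed together with the PF, for instance
from testpmd::

   cat /sys/class/net/<net device>/compat/devlink/mode
   dpdk-testpmd -a <PCI_BDF>,representor=[0-3],dv_flow_en=1 -- -i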
1576 Sub-Function support
1577 --------------------
A Sub-Function (SF) is a portion of the PCI device; an SF netdev has its own
dedicated queues (txq, rxq).
An SF shares PCI-level resources with other SFs and/or with its parent PCI function.
Sub-Function support requires OFED version >= 5.4-0.3.3.0.
1587 1. Configure SF feature::
1589 # Run mlxconfig on both PFs on host and ECPFs on BlueField.
1590 mlxconfig -d <mst device> set PER_PF_NUM_SF=1 PF_TOTAL_SF=252 PF_SF_BAR_SIZE=12
1592 2. Enable switchdev mode::
1594 mlxdevm dev eswitch set pci/<DBDF> mode switchdev
3. Add SF port::

        mlxdevm port add pci/<DBDF> flavour pcisf pfnum 0 sfnum <sfnum>
1600 Get SFID from output: pci/<DBDF>/<SFID>
1602 4. Modify MAC address::
1604 mlxdevm port function set pci/<DBDF>/<SFID> hw_addr <MAC>
1606 5. Activate SF port::
        mlxdevm port function set pci/<DBDF>/<SFID> state active
1610 6. Devargs to probe SF device::
1612 auxiliary:mlx5_core.sf.<num>,dv_flow_en=1
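For example, assuming the SF's auxiliary device appears as ``mlx5_core.sf.4``
(the number is assigned by the kernel and can be found under
``/sys/bus/auxiliary/devices``), it could be probed from testpmd with::

   dpdk-testpmd -a auxiliary:mlx5_core.sf.4,dv_flow_en=1 -- -i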
1614 Sub-Function representor support
1615 --------------------------------
1617 A SF netdev supports E-Switch representation offload
1618 similar to PF and VF representors.
1619 Use <sfnum> to probe SF representor::
1621 testpmd> port attach <PCI_BDF>,representor=sf<sfnum>,dv_flow_en=1
Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance::
1628 mlxconfig -d <mst device> s CQE_COMPRESSION=1
1630 To set it back to the default CQE Zipping mode use::
1632 mlxconfig -d <mst device> s CQE_COMPRESSION=0
1634 2. In case of virtualization:
1636 - Make sure that hypervisor kernel is 3.16 or newer.
1637 - Configure boot with ``iommu=pt``.
1638 - Use 1G huge pages.
1639 - Make sure to allocate a VM on huge pages.
1640 - Make sure to set CPU pinning.
3. Use CPUs from the NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run::
1646 lstopo-no-graphics --merge
1648 to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and root complex capabilities allow
   both adapters to be put on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without a NUMA
   performance penalty.
1656 5. Disable pause frames::
1658 ethtool -A <netdev> rx off tx off
6. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.
   On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be
   done in the following way:
1670 To query the read request size use::
1672 setpci -s <NIC PCI address> 68.w
1674 If the output is different than 3XXX, set it by::
1676 setpci -s <NIC PCI address> 68.w=3XXX
1678 The XXX can be different on different systems. Make sure to configure
1679 according to the setpci output.
7. To minimize the overhead of searching Memory Regions:

   - ``--socket-mem`` is recommended to pin memory by a predictable amount.
   - Configure a per-lcore cache when creating Mempools for packet buffers.
   - Refrain from dynamically allocating/freeing memory at run-time.
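Putting several of these recommendations together, a hypothetical testpmd
invocation (assuming the adapter is local to NUMA node 0 and lcores 8-15
belong to that node) could pin memory on that node and enable a large
per-lcore mbuf cache::

   dpdk-testpmd -l 8-15 -n 4 --socket-mem 2048,0 -a <PCI address> -- --mbcache=512 --rxq=2 --txq=2 -i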
Rx burst functions
------------------

There are multiple Rx burst functions with different advantages and limitations.
.. table:: Rx burst functions

   +-------------------+------------------------+---------+-----------------+------+-------+
   || Function Name    || Enabler               || Scatter|| Error Recovery || CQE || Large|
   |                   |                        |         |                 || comp|| MTU  |
   +===================+========================+=========+=================+======+=======+
   | rx_burst          | rx_vec_en=0            | Yes     | Yes             | Yes  | Yes   |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_vec      | rx_vec_en=1 (default)  | No      | if CQE comp off | Yes  | No    |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq     || mprq_en=1             | No      | Yes             | Yes  | Yes   |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq_vec || rx_vec_en=1 (default) | No      | if CQE comp off | Yes  | Yes   |
   |                   || mprq_en=1             |         |                 |      |       |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+
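The Rx burst function is selected implicitly through device arguments.
As a sketch, the vectorized path can be disabled, or Multi-Packet RQ
requested, at probe time (two alternative invocations)::

   dpdk-testpmd -a <PCI address>,rx_vec_en=0 -- -i
   dpdk-testpmd -a <PCI address>,mprq_en=1,rxqs_min_mprq=4 -- --rxq=4 --txq=4 -i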
1710 .. _mlx5_offloads_support:
1712 Supported hardware offloads
1713 ---------------------------
.. table:: Minimal SW/HW versions for queue offloads

   ============== ===== ===== ========= ===== ========== =============
   Offload        DPDK  Linux rdma-core OFED  firmware   hardware
   ============== ===== ===== ========= ===== ========== =============
   common base    17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   checksums      17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   Rx timestamp   17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   TSO            17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
   LRO            19.08 N/A   N/A       4.6-4 16.25.6406 ConnectX-5
   Tx scheduling  20.08 N/A   N/A       5.1-2 22.28.2006 ConnectX-6 Dx
   Buffer Split   20.11 N/A   N/A       5.1-2 16.28.2006 ConnectX-5
   ============== ===== ===== ========= ===== ========== =============
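For instance, assuming a ConnectX-5 with recent enough firmware and OFED
(see the table above), LRO and Rx checksum offload could be exercised from
testpmd with::

   dpdk-testpmd -a <PCI address> -- --enable-lro --enable-rx-cksum -i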
.. table:: Minimal SW/HW versions for rte_flow offloads

   +-----------------------+-----------------+-----------------+
   | Offload               | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | Count                 | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Drop                  | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Queue / RSS           | |               | | DPDK 18.11    |
   |                       | | N/A           | | OFED 4.5      |
   |                       | |               | | rdma-core 23  |
   |                       | |               | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Shared action         | |               | |               |
   |                       | | :numref:`sact`| | :numref:`sact`|
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 19.11    | | DPDK 19.11    |
   | | (of_pop_vlan /      | | OFED 4.7-1    | | OFED 4.7-1    |
   | | of_push_vlan /      | | ConnectX-5    | | ConnectX-5    |
   | | of_set_vlan_pcp /   | |               | |               |
   | | of_set_vlan_vid)    | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 21.05    | |               |
   | | ingress and /       | | OFED 5.3      | | N/A           |
   | | of_push_vlan /      | | ConnectX-6 Dx | |               |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 21.05    | |               |
   | | egress and /        | | OFED 5.3      | | N/A           |
   | | of_pop_vlan /       | | ConnectX-6 Dx | |               |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
   | (VXLAN / NVGRE / RAW) | | OFED 4.7-1    | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.11    | | DPDK 19.11    |
   | GENEVE                | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 27  | | rdma-core 27  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Tunnel Offload        | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.1-2    | | OFED 5.1-2    |
   |                       | | rdma-core 32  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 19.05    | | DPDK 19.02    |
   | | (set_ipv4_src /     | | OFED 4.7-1    | | OFED 4.7-1    |
   | | set_ipv4_dst /      | | rdma-core 24  | | rdma-core 24  |
   | | set_ipv6_src /      | | ConnectX-5    | | ConnectX-5    |
   | | set_ipv6_dst /      | |               | |               |
   | | set_tp_src /        | |               | |               |
   | | set_tp_dst /        | |               | |               |
   | | dec_ttl /           | |               | |               |
   | | set_ttl /           | |               | |               |
   | | set_mac_src /       | |               | |               |
   | | set_mac_dst)        | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 20.02    | | DPDK 20.02    |
   | | (set_dscp)          | | OFED 5.0      | | OFED 5.0      |
   | |                     | | rdma-core 24  | | rdma-core 24  |
   | |                     | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.7-1    | | OFED 4.7-1    |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Meta data             | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Port ID               | | DPDK 19.05    | | N/A           |
   |                       | | OFED 4.7-1    | | N/A           |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | N/A           |
   +-----------------------+-----------------+-----------------+
   | Hairpin               | |               | | DPDK 19.11    |
   |                       | | N/A           | | OFED 4.7-3    |
   |                       | |               | | rdma-core 26  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | 2-port Hairpin        | |               | | DPDK 20.11    |
   |                       | | N/A           | | OFED 5.1-2    |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Metering              | | DPDK 19.11    | | DPDK 19.11    |
   |                       | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 26  | | rdma-core 26  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | ASO Metering          | | DPDK 21.05    | | DPDK 21.05    |
   |                       | | OFED 5.3      | | OFED 5.3      |
   |                       | | rdma-core 33  | | rdma-core 33  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Metering Hierarchy    | | DPDK 21.08    | | DPDK 21.08    |
   |                       | | OFED 5.3      | | OFED 5.3      |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Sampling              | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.1-2    | | OFED 5.1-2    |
   |                       | | rdma-core 32  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 21.02    | | DPDK 21.02    |
   | GTP PSC               | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 35  | | rdma-core 35  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 21.02    | | DPDK 21.02    |
   | GENEVE TLV option     | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 34  | | rdma-core 34  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Modify Field          | | DPDK 21.02    | | DPDK 21.02    |
   |                       | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 35  | | rdma-core 35  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Connection tracking   | |               | | DPDK 21.05    |
   |                       | | N/A           | | OFED 5.3      |
   |                       | |               | | rdma-core 35  |
   |                       | |               | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
.. table:: Minimal SW/HW versions for shared action offload
   :name: sact

   +-----------------------+-----------------+-----------------+
   | Shared Action         | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | RSS                   | |               | | DPDK 20.11    |
   |                       | | N/A           | | OFED 5.2      |
   |                       | |               | | rdma-core 33  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Age                   | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 32  | | rdma-core 32  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Count                 | | DPDK 21.05    | | DPDK 21.05    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
Notes for metadata
------------------

MARK and META items are interrelated with the datapath - they might move
from/to the applications in mbuf fields. Hence, a zero value for these items
has a special meaning - it means "no metadata are provided", while non-zero
values are treated by the applications and the PMD as valid ones.

Moreover, in the flow engine domain the value zero is acceptable to match and
set, so zero values can be specified as rte_flow parameters for the META and
MARK items and actions. At the same time, a zero mask has no meaning and is
rejected at the validation stage.
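As an illustration of the non-zero requirement, a hypothetical testpmd rule
delivering a mark value to the application together with the packet would use
a non-zero identifier::

   flow create 0 ingress pattern eth / ipv4 / end actions mark id 0x1234 / queue index 0 / end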
Notes for rte_flow
------------------

Flows are not cached in the driver.
When stopping a device port, all the flows created on this port from the
application will be flushed automatically in the background.
After stopping the device port, all flows on this port become invalid and
are no longer represented in the system.
All references to these flows held by the application should be discarded
directly but neither destroyed nor flushed.

The application should re-create the flows as required after the port restart.
Notes for testpmd
-----------------

Compared to librte_net_mlx4 that implements a single RSS configuration per
port, librte_net_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_net_mlx4::

   testpmd> port stop all
   testpmd> port config all rss all
   testpmd> port start all
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
1940 #. Load the kernel modules::
1942 modprobe -a ib_uverbs mlx5_core mlx5_ib
   Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
   can be run::

     /etc/init.d/openibd restart
1951 User space I/O kernel modules (uio and igb_uio) are not used and do
1952 not have to be loaded.
1954 #. Make sure Ethernet interfaces are in working order and linked to kernel
1955 verbs. Related sysfs entries should be present::
1957 ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
#. Optionally, retrieve their PCI bus addresses to be used with the allow list::

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-a \1,p'
1983 #. Request huge pages::
1985 dpdk-hugepages.py --setup 2G
1987 #. Start testpmd with basic parameters::
1989 dpdk-testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
1994 EAL: PCI device 0000:05:00.0 on NUMA socket 0
1995 EAL: probe driver: 15b3:1013 librte_net_mlx5
1996 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
1997 PMD: librte_net_mlx5: 1 port(s) detected
1998 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
1999 EAL: PCI device 0000:05:00.1 on NUMA socket 0
2000 EAL: probe driver: 15b3:1013 librte_net_mlx5
2001 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
2002 PMD: librte_net_mlx5: 1 port(s) detected
2003 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
2004 EAL: PCI device 0000:06:00.0 on NUMA socket 0
2005 EAL: probe driver: 15b3:1013 librte_net_mlx5
2006 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
2007 PMD: librte_net_mlx5: 1 port(s) detected
2008 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
2009 EAL: PCI device 0000:06:00.1 on NUMA socket 0
2010 EAL: probe driver: 15b3:1013 librte_net_mlx5
2011 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
2012 PMD: librte_net_mlx5: 1 port(s) detected
2013 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
2014 Interactive-mode selected
2015 Configuring Port 0 (socket 0)
2016 PMD: librte_net_mlx5: 0x8cba80: TX queues number update: 0 -> 2
2017 PMD: librte_net_mlx5: 0x8cba80: RX queues number update: 0 -> 2
2018 Port 0: E4:1D:2D:E7:0C:FE
2019 Configuring Port 1 (socket 0)
2020 PMD: librte_net_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
2021 PMD: librte_net_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
2022 Port 1: E4:1D:2D:E7:0C:FF
2023 Configuring Port 2 (socket 0)
2024 PMD: librte_net_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
2025 PMD: librte_net_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
2026 Port 2: E4:1D:2D:E7:0C:FA
2027 Configuring Port 3 (socket 0)
2028 PMD: librte_net_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
2029 PMD: librte_net_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
2030 Port 3: E4:1D:2D:E7:0C:FB
2031 Checking link statuses...
2032 Port 0 Link Up - speed 40000 Mbps - full-duplex
2033 Port 1 Link Up - speed 40000 Mbps - full-duplex
2034 Port 2 Link Up - speed 10000 Mbps - full-duplex
2035 Port 3 Link Up - speed 10000 Mbps - full-duplex
How to dump flows
-----------------

This section demonstrates how to dump flows. Currently, it's possible to dump
all flows with the assistance of external tools.

#. There are two ways to get the flow raw file:
2047 - Using testpmd CLI:
2049 .. code-block:: console
2052 testpmd> flow dump <port> all <output_file>
2054 testpmd> flow dump <port> rule <rule_id> <output_file>
- Call the ``rte_flow_dev_dump`` API:
2058 .. code-block:: console
2060 rte_flow_dev_dump(port, flow, file, NULL);
2062 #. Dump human-readable flows from raw file:
2064 Get flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump
2066 .. code-block:: console
2068 mlx_steering_dump.py -f <output_file> -flowptr <flow_ptr>
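For example, assuming flows were created on port 0 and ``/tmp/mlx5_flows.txt``
is used as a hypothetical output file, all of them can be dumped from the
testpmd CLI::

   testpmd> flow dump 0 all /tmp/mlx5_flows.txt

The resulting raw file is then passed to ``mlx_steering_dump.py`` with the
``-f`` option as shown above.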
2070 How to share a meter between ports in the same switch domain
2071 ------------------------------------------------------------
This section demonstrates how to use a shared meter. A meter M can be created
on port X and shared with port Y in the same switch domain as follows:
2076 .. code-block:: console
2078 flow create X ingress transfer pattern eth / port_id id is Y / end actions meter mtr_id M / end
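Before the rule above can reference meter M, that meter has to exist on port X.
As a sketch using the same testpmd syntax as the meter hierarchy example below
(the policy and profile IDs are illustrative), it could be created with:

.. code-block:: console

   add port meter policy X 1 g_actions queue index 0 / end y_actions end r_actions drop / end
   create port meter X M 1 1 yes 0xffff 1 0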
2080 How to use meter hierarchy
2081 --------------------------
This section demonstrates how to create and use a meter hierarchy.
A termination meter M can be the policy green action of another termination meter N.
The two meters are thus chained together; using meter N in a flow will apply
both meters in the hierarchy on that flow.
2088 .. code-block:: console
2090 add port meter policy 0 1 g_actions queue index 0 / end y_actions end r_actions drop / end
2091 create port meter 0 M 1 1 yes 0xffff 1 0
2092 add port meter policy 0 2 g_actions meter mtr_id M / end y_actions end r_actions drop / end
2093 create port meter 0 N 2 2 yes 0xffff 1 0
2094 flow create 0 ingress group 1 pattern eth / end actions meter mtr_id N / end