1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2015 6WIND S.A.
3 Copyright 2015 Mellanox Technologies, Ltd
5 .. include:: <isonum.txt>
7 MLX5 Ethernet Poll Mode Driver
8 ==============================
10 The mlx5 Ethernet poll mode driver library (**librte_net_mlx5**) provides support
11 for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** , **Mellanox
12 ConnectX-5**, **Mellanox ConnectX-6**, **Mellanox ConnectX-6 Dx**, **Mellanox
13 ConnectX-6 Lx**, **Mellanox BlueField** and **Mellanox BlueField-2** families
14 of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
Besides its dependency on libibverbs (which implies libmlx5 and associated
kernel support), librte_net_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.
25 This capability allows the PMD to coexist with kernel network interfaces
26 which remain functional, although they stop receiving unicast packets as
27 long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.
32 See :doc:`../../platform/mlx5` guide for more design details.
- Multi-arch support: x86_64, POWER8, ARMv8, i686.
38 - Multiple TX and RX queues.
40 - Rx queue delay drop.
41 - Rx queue available descriptor threshold event.
42 - Host shaper support.
- Support for steering of external Rx queues created outside the PMD.
44 - Support for scattered TX frames.
45 - Advanced support for scattered Rx frames with tunable buffer attributes.
46 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
47 - RSS using different combinations of fields: L3 only, L4 only or both,
48 and source only, destination only or both.
49 - Several RSS hash keys, one for each flow type.
50 - Default RSS operation with no hash key specification.
51 - Configurable RETA table.
52 - Link flow control (pause frame).
53 - Support for multiple MAC addresses.
57 - RX CRC stripping configuration.
58 - TX mbuf fast free offload.
59 - Promiscuous mode on PF and VF.
60 - Multicast promiscuous mode on PF and VF.
61 - Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
64 - Flow API, including :ref:`flow_isolated_mode`.
66 - KVM and VMware ESX SR-IOV modes are supported.
67 - RSS hash result is supported.
68 - Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
69 - Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
71 - Statistics query including Basic, Extended and per queue.
73 - Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
74 - Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
75 - NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
76 increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
78 - Support for multiple rte_flow groups.
79 - Per packet no-inline hint flag to disable packet data copying into Tx descriptors.
82 - Multiple-thread flow insertion.
83 - Matching on IPv4 Internet Header Length (IHL).
84 - Matching on GTP extension header with raw encap/decap action.
85 - Matching on Geneve TLV option header with raw encap/decap action.
86 - Matching on ESP header SPI field.
87 - Modify IPv4/IPv6 ECN field.
88 - RSS support in sample action.
89 - E-Switch mirroring and jump.
90 - E-Switch mirroring and modify.
- 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
  flow group.
93 - Flow metering, including meter policy API.
94 - Flow meter hierarchy.
95 - Flow integrity offload API.
96 - Connection tracking.
97 - Sub-Function representors.
99 - Matching on represented port.
107 On Windows, the features are limited:
109 - Promiscuous mode is not supported
110 - The following rules are supported:
112 - IPv4/UDP with CVLAN filtering
113 - Unicast MAC filtering
115 - Additional rules are supported from WinOF2 version 2.70:
117 - IPv4/TCP with CVLAN filtering
118 - L4 steering rules for port RSS of UDP, TCP and IP
120 - For secondary process:
122 - Forked secondary process not supported.
123 - MPRQ is not supported. Callback to free externally attached MPRQ buffer is set
124 in a primary process, but has a different virtual address in a secondary process.
125 Calling a function at the wrong address leads to a segmentation fault.
126 - External memory unregistered in EAL memseg list cannot be used for DMA
127 unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
128 primary process and remapped to the same virtual address in secondary
129 process. If the external memory is registered by primary process but has
130 different virtual address in secondary process, unexpected error may happen.
- Counters of received packets and bytes of all devices in the same share group are identical.
- Counters of received packets and bytes of all queues with the same group and queue ID are identical.
137 - Available descriptor threshold event:
139 - Does not support shared Rx queue and hairpin Rx queue.
- Supported on BlueField series NICs starting from BlueField-2.
144 - When configuring host shaper with MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED flag set,
145 only rates 0 and 100Mbps are supported.
147 - When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
148 specific VLAN will match for VLAN packets as well:
When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wildcard.
Meaning, the flow rule::

   flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

will only match VLAN packets with VID 3, and the flow rule::

   flow create 0 ingress pattern eth / ipv4 / end ...

will match any IPv4 packet (VLAN included).
- When using Verbs flow engine (``dv_flow_en`` = 0), multi-tagged (QinQ) match is not supported.
163 - When using DV flow engine (``dv_flow_en`` = 1), flow pattern with any VLAN specification will match only single-tagged packets unless the ETH item ``type`` field is 0x88A8 or the VLAN item ``has_more_vlan`` field is 1.
166 flow create 0 ingress pattern eth / ipv4 / end ...
will match any IPv4 packet.
171 flow create 0 ingress pattern eth / vlan / end ...
172 flow create 0 ingress pattern eth has_vlan is 1 / end ...
173 flow create 0 ingress pattern eth type is 0x8100 / end ...
will match single-tagged packets only, with any VLAN ID value.
178 flow create 0 ingress pattern eth type is 0x88A8 / end ...
179 flow create 0 ingress pattern eth / vlan has_more_vlan is 1 / end ...
will match multi-tagged packets only, with any VLAN ID value.
183 - A flow pattern with 2 sequential VLAN items is not supported.
185 - VLAN pop offload command:
- Flow rules that have a VLAN pop offload command as one of their actions and
  lack a match on VLAN as one of their items are not supported.
189 - The command is not supported on egress traffic in NIC mode.
191 - VLAN push offload is not supported on ingress traffic in NIC mode.
193 - VLAN set PCP offload is not supported on existing headers.
- A multi-segment packet must have no more segments than reported by dev_infos_get()
  in the tx_desc_lim.nb_seg_max field. This value depends on the maximal supported Tx descriptor
  size and ``txq_inline_min`` settings and may range from 2 (worst case forced by maximal
  inline settings) to 58.
200 - Match on VXLAN supports the following fields only:
203 - Last reserved 8-bits
Last reserved 8-bits matching is only supported when using DV flow
engine (``dv_flow_en`` = 1).
For ConnectX-5, the UDP destination port must be the standard one (4789).
The behavior of group 0 may differ, depending on FW.
Matching a value of 0 (value & mask) is not supported.
211 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
213 - Match on Geneve header supports the following fields only:
220 - Match on Geneve TLV option is supported on the following fields:
227 Only one Class/Type/Length Geneve TLV option is supported per shared device.
Class/Type/Length fields must be specified as well as masks.
The specified Class/Type/Length masks must be full.
Matching a Geneve TLV option without specifying data is not supported.
Matching a Geneve TLV option with ``data & mask == 0`` is not supported.
233 - VF: flow rules created on VF devices can only match traffic targeted at the
234 configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
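  For example, a minimal sketch of registering an address before inserting
  VF flow rules (``vf_port_id`` and the address value are illustrative;
  error handling is omitted)::

     struct rte_ether_addr mac = {
         .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
     };

     /* Register the address so subsequent VF flow rules can match it. */
     rte_eth_dev_mac_addr_add(vf_port_id, &mac, 0);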
236 - Match on GTP tunnel header item supports the following fields only:
238 - v_pt_rsv_flags: E flag, S flag, PN flag
242 - Match on GTP extension header only for GTP PDU session container (next
243 extension header type = 0x85).
244 - Match on GTP extension header is not supported in group 0.
248 - Hardware support: BlueField-2.
249 - Flex item is supported on PF only.
250 - Hardware limits ``header_length_mask_width`` up to 6 bits.
251 - Firmware supports 8 global sample fields.
252 Each flex item allocates non-shared sample fields from that pool.
- A supported flex item can have one input link (``eth`` or ``udp``)
  and up to two output links (``ipv4`` or ``ipv6``).
255 - Flex item fields (``next_header``, ``next_protocol``, ``samples``)
256 do not participate in RSS hash functions.
257 - In flex item configuration, ``next_header.field_base`` value
258 must be byte aligned (multiple of 8).
- No Tx metadata goes to the E-Switch steering domain for flow group 0.
  Flows within group 0 that have a set metadata action are rejected by hardware.
265 MAC addresses not already present in the bridge table of the associated
266 kernel network device will be added and cleaned up by the PMD when closing
267 the device. In case of ungraceful program termination, some entries may
268 remain present and should be removed manually by other means.
270 - Buffer split offload is supported with regular Rx burst routine only,
271 no MPRQ feature or vectorized code can be engaged.
- When Multi-Packet Rx queue is configured (``mprq_en``), an Rx packet can be
  externally attached to a user-provided mbuf with RTE_MBUF_F_EXTERNAL set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all the
276 Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
277 the external buffers will be freed by PMD and the application which still
278 holds the external buffers may be corrupted.
279 User-managed mempools with external pinned data buffers
280 cannot be used in conjunction with MPRQ
281 since packets may be already attached to PMD-managed external buffers.
283 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
284 enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
285 supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
- IPv6 multicast messages are not supported on VMs while promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on a VM, explicitly set the relevant
  MAC address using the rte_eth_dev_mac_addr_add() API.
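  For example, to receive IPv6 all-nodes multicast (``ff02::1``), add the
  derived multicast MAC address (a sketch; ``port_id`` is illustrative)::

     /* An IPv6 multicast address maps to MAC 33:33 followed by its low 32 bits. */
     struct rte_ether_addr mc = {
         .addr_bytes = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 },
     };

     rte_eth_dev_mac_addr_add(port_id, &mc, 0);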
292 - To support a mixed traffic pattern (some buffers from local host memory, some
293 buffers from other devices) with high bandwidth, a mbuf flag is used.
An application hints the PMD whether or not it should try to inline the
given mbuf data buffer. The PMD makes a best effort to act upon this request.
298 The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
299 registered by application with rte_mbuf_dynflag_register(). This flag is
300 purely driver-specific and declared in PMD specific header ``rte_pmd_mlx5.h``,
301 which is intended to be used by the application.
To query the supported specific flags at runtime,
the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
specific flags currently supported (over the present hardware and configuration).
The operating flow of the "no-inline hint" feature is the following:
309 - probe the devices, ports are created
310 - query the port capabilities
311 - if port supporting the feature is found
312 - register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
313 - application starts the ports
314 - on ``dev_start()`` PMD checks whether the feature flag is registered and
315 enables the feature support in datapath
- the application may set the registered flag bit in the ``ol_flags`` field
  of an mbuf being sent and the PMD will handle it appropriately (see the sketch below).
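A minimal C sketch of this flow (the function names are illustrative;
the flag name macro comes from ``rte_pmd_mlx5.h``)::

   #include <rte_mbuf.h>
   #include <rte_mbuf_dyn.h>
   #include <rte_pmd_mlx5.h>

   static uint64_t no_inline_mask; /* ol_flags bit mask of the hint flag */

   /* Register the dynamic flag before starting the ports. */
   static int
   register_no_inline_hint(void)
   {
       const struct rte_mbuf_dynflag desc = {
           .name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
       };
       int bit = rte_mbuf_dynflag_register(&desc);

       if (bit < 0)
           return bit; /* the flag is not available */
       no_inline_mask = 1ULL << bit;
       return 0;
   }

   /* In the Tx path: hint the PMD not to inline this packet's data. */
   static void
   mark_no_inline(struct rte_mbuf *m)
   {
       m->ol_flags |= no_inline_mask;
   }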
- The number of descriptors in a Tx queue may be limited by data inline settings.
  Inline data requires more descriptor building blocks and the overall block
  amount may exceed the hardware supported limits. The application should
  reduce the requested Tx queue size or adjust the data inline settings with
  the ``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.
- To provide packet send scheduling on mbuf timestamps the ``tx_pp``
  parameter should be specified.
  When the PMD sees RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME set on a packet
  being sent, it tries to synchronize the moment the packet appears on
  the wire with the specified packet timestamp. If the specified timestamp
  is in the past, it is ignored; if it is in the distant future, it is
  capped to some reasonable value (in the range of seconds).
  These specific cases ("too late" and "distant future") can be optionally
  reported via device xstats to assist applications in detecting
  time-related problems.
336 The timestamp upper "too-distant-future" limit
337 at the moment of invoking the Tx burst routine
338 can be estimated as ``tx_pp`` option (in nanoseconds) multiplied by 2^23.
339 Please note, for the testpmd txonly mode,
340 the limit is deduced from the expression::
342 (n_tx_descriptors / burst_size + 1) * inter_burst_gap
No packet reordering according to timestamps is assumed, neither within
a packet burst nor between packets; it is entirely the application's
responsibility to generate packets and their timestamps in the desired
order. The timestamp may be put only in the first packet of a burst,
thus providing scheduling for the entire burst (see the sketch below).
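A minimal C sketch of burst scheduling (assuming the timestamp dynamic
field and flag are already registered; ``schedule_burst()`` is an
illustrative name)::

   #include <rte_mbuf.h>
   #include <rte_mbuf_dyn.h>

   static int ts_off;       /* dynamic timestamp field offset */
   static uint64_t ts_mask; /* dynamic Tx timestamp flag */

   static int
   lookup_tx_timestamp(void)
   {
       int off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
       int bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);

       if (off < 0 || bit < 0)
           return -1;
       ts_off = off;
       ts_mask = 1ULL << bit;
       return 0;
   }

   /* Timestamp only the first packet to schedule the entire burst. */
   static void
   schedule_burst(struct rte_mbuf *first, uint64_t wire_time)
   {
       *RTE_MBUF_DYNFIELD(first, ts_off, uint64_t *) = wire_time;
       first->ol_flags |= ts_mask;
   }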
350 - E-Switch decapsulation Flow:
352 - can be applied to PF port only.
353 - must specify VF port action (packet redirection from PF to VF).
354 - optionally may specify tunnel inner source and destination MAC addresses.
356 - E-Switch encapsulation Flow:
358 - can be applied to VF ports only.
359 - must specify PF port action (packet redirection from VF to PF).
361 - E-Switch Manager matching:
- For BlueField with old FW
  which doesn't expose the E-Switch Manager vport ID in the capability,
  matching on the E-Switch Manager should be used only in BlueField embedded CPU mode.
369 - The input buffer, used as outer header, is not validated.
373 - The decapsulation is always done up to the outermost tunnel detected by the HW.
374 - The input buffer, providing the removal size, is not validated.
375 - The buffer size must match the length of the headers to be removed.
- ICMP (code/type/identifier/sequence number) / ICMP6 (code/type) matching, IP-in-IP and MPLS flow matching are all
378 mutually exclusive features which cannot be supported together
379 (see :ref:`mlx5_firmware_config`).
383 - Requires DevX and DV flow to be enabled.
384 - KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
  TCP header (122B).
- An Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
  it with size limited to the max LRO size, not to the max Rx packet length.
389 - LRO can be used with outer header of TCP packets of the standard format:
390 eth (with or without vlan) / ipv4 or ipv6 / tcp / payload
Other TCP packets (e.g. with MPLS label) received on an Rx queue with LRO enabled will be received with bad checksum.
- LRO packet aggregation is performed by HW only for packet sizes larger than
  ``lro_min_mss_size``. This value is reported on device start, when debug
  mode is enabled.
399 - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
400 for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
401 The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
- Fast free offload assumes that all the mbufs being sent originate from the
  same memory pool and there are no extra references to the mbufs (the
  reference counter of each mbuf is equal to 1 on the tx_burst call). The latter
  means there should be no externally attached buffers in the mbufs. It is
  the application's responsibility to provide the correct mbufs if the fast
  free offload is engaged. The mlx5 PMD implicitly produces mbufs with
  externally attached buffers if the MPRQ option is enabled, hence the fast
  free offload is neither supported nor advertised if MPRQ is enabled.
416 - Supports ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action only within NIC Rx and
417 E-Switch steering domain.
418 - For E-Switch Sampling flow with sample ratio > 1, additional actions are not
419 supported in the sample actions list.
- For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action is typically used as the
  first action in the E-Switch egress flow if combined with header modify or
  encapsulation actions.
- For NIC Rx flow, supports ``MARK``, ``COUNT``, ``QUEUE``, ``RSS`` in the
  sample actions list.
425 - For E-Switch mirroring flow, supports ``RAW ENCAP``, ``Port ID``,
426 ``VXLAN ENCAP``, ``NVGRE ENCAP`` in the sample actions list.
427 - For ConnectX-5 trusted device, the application metadata with SET_TAG index 0
428 is not supported before ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action.
432 - Supports the 'set' operation only for ``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
433 - Modification of an arbitrary place in a packet via the special ``RTE_FLOW_FIELD_START`` Field ID is not supported.
434 - Modification of the 802.1Q Tag, VXLAN Network or GENEVE Network ID's is not supported.
435 - Encapsulation levels are not supported, can modify outermost header fields only.
436 - Offsets must be 32-bits aligned, cannot skip past the boundary of a field.
- If the field type is ``RTE_FLOW_FIELD_MAC_TYPE``
  and the packet contains one or more VLAN headers,
  the meaningful type field following the last VLAN header
  is used as the modify field operation argument.
  The modify field action is not intended to modify the VLAN header type field;
  dedicated VLAN push and pop actions should be used instead.
- The IPv6 header item 'proto' field indicates the next header protocol and
  should not be set to an extension header; if the next header is an extension
  header, it must not be specified in the IPv6 header item 'proto' field.
  The last extension header item's 'next header' field can specify the following
  header protocol type.
- Hairpin between two ports supports only manual binding and explicit Tx flow mode. For single-port hairpin, all the combinations of auto/manual binding and explicit/implicit Tx flow mode are supported.
- Hairpin in switchdev SR-IOV mode is not supported yet.
458 - All the meter colors with drop action will be counted only by the global drop statistics.
459 - Yellow detection is only supported with ASO metering.
460 - Red color must be with drop action.
461 - Meter statistics are supported only for drop case.
- A meter action created with a pre-defined policy must be the last action in the flow, except for the single case where the policy actions are:
463 - green: NULL or END.
464 - yellow: NULL or END.
466 - The only supported meter policy actions:
467 - green: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MODIFY_FIELD, MARK, METER and SET_TAG.
468 - yellow: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MODIFY_FIELD, MARK, METER and SET_TAG.
- Policy actions of RSS for green and yellow should have the same configuration except for the queues.
- A policy with RSS/queue action is not supported when ``dv_xmeta_en`` is enabled.
- If the green action is METER, the yellow action must be the same METER action or NULL.
- Meter profile packet mode is supported.
- Meter profiles of RFC2697, RFC2698 and RFC4115 are supported.
- The RFC4115 implementation follows MEF, meaning yellow traffic may reclaim unused green bandwidth when the green token bucket is full.
479 - Integrity offload is enabled starting from **ConnectX-6 Dx**.
480 - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
481 - ``level`` value 0 references outer headers.
482 - Negative integrity item verification is not supported.
- Multiple integrity items are not supported in a single flow rule.
- Flow rule items supplied by the application must explicitly specify the network headers referred to by the integrity item.
  For example, if the integrity item mask sets the ``l4_ok`` or ``l4_csum_ok`` bits, a reference to the L4 network header,
  TCP or UDP, must be in the rule pattern as well::
488 flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
490 flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec l4_ok / eth / ipv4 proto is udp / end …
492 - Connection tracking:
494 - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
- Flow rule insertion rate and memory consumption need more optimization.
497 - 4M connections maximum.
499 - Multi-thread flow insertion:
- To achieve the best insertion rate, the application should manage the flows per lcore.
- It is better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate flow object allocation and release with cache.
- TxQ affinity is subject to the HW hash once enabled.
- Bonding under socket direct mode

  - Needs OFED 5.4+.
514 - CQE timestamp field width is limited by hardware to 63 bits, MSB is zero.
- In the free-running mode the timestamp counter is reset on power-on
  and the 63-bit value provides over 1800 years of uptime until overflow.
517 - In the real-time mode
518 (configurable with ``REAL_TIME_CLOCK_ENABLE`` firmware settings),
the timestamp represents the nanoseconds elapsed since 01-Jan-1970;
hardware timestamp overflow will happen on 19-Jan-2038
(0x80000000 seconds since 01-Jan-1970).
- Send scheduling is based on timestamps
  from the reference "Clock Queue" completions;
  the scheduled send timestamps should not be specified with a non-zero MSB.
528 - WQE based high scaling and safer flow insertion/destruction.
529 - Set ``dv_flow_en`` to 2 in order to enable HW steering.
- Only the async queue-based ``rte_flow_q`` APIs are supported.
532 - Match on GRE header supports the following fields:
534 - c_rsvd0_v: C bit, K bit, S bit
540 Matching on checksum and sequence needs OFED 5.6+.
- NIC egress flow rules on a representor port are not supported.
548 MLX5 supports various methods to report statistics:
Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are counted by SW only, and count the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion on the bus or on the NIC can be queried via the rx_discards_phy xstats counter.
Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As Mellanox NICs use the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port, and are therefore not valid for VF.
Finally, per-flow statistics can be queried using ``rte_flow_query`` when attaching a count action to a specific flow. The flow counter counts the number of packets received successfully by the port that match the specific flow.
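As an illustration, a minimal C sketch of querying the counter of a flow
created with a ``COUNT`` action (``port_id`` and ``flow`` are assumed to
exist; includes and error handling are omitted)::

   struct rte_flow_query_count count = { .reset = 0 };
   const struct rte_flow_action count_action = {
       .type = RTE_FLOW_ACTION_TYPE_COUNT,
   };
   struct rte_flow_error error;

   /* 'flow' must have been created with a COUNT action. */
   if (rte_flow_query(port_id, flow, &count_action, &count, &error) == 0)
       printf("hits: %" PRIu64 ", bytes: %" PRIu64 "\n",
              count.hits, count.bytes);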
560 See :ref:`mlx5 common compilation <mlx5_common_compilation>`.
566 Environment Configuration
567 ~~~~~~~~~~~~~~~~~~~~~~~~~
569 See :ref:`mlx5 common configuration <mlx5_common_env>`.
571 Firmware configuration
572 ~~~~~~~~~~~~~~~~~~~~~~
574 See :ref:`mlx5_firmware_config` guide.
579 Please refer to :ref:`mlx5 common options <mlx5_common_driver_options>`
580 for an additional list of options shared with other mlx5 drivers.
582 - ``rxq_cqe_comp_en`` parameter [int]
A nonzero value enables the compression of CQE on the Rx side. This feature
allows saving PCI bandwidth and improves performance. Enabled by default.
586 Different compression formats are supported in order to achieve the best
587 performance for different traffic patterns. Default format depends on
588 Multi-Packet Rx queue configuration: Hash RSS format is used in case
589 MPRQ is disabled, Checksum format is used in case MPRQ is enabled.
591 Specifying 2 as a ``rxq_cqe_comp_en`` value selects Flow Tag format for
592 better compression rate in case of RTE Flow Mark traffic.
593 Specifying 3 as a ``rxq_cqe_comp_en`` value selects Checksum format.
594 Specifying 4 as a ``rxq_cqe_comp_en`` value selects L3/L4 Header format for
595 better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
596 CQE compression format selection requires DevX to be enabled. If there is
597 no DevX enabled/supported the value is reset to 1 by default.
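For example, to select the L3/L4 Header compression format (the PCI
address is a placeholder)::

   dpdk-testpmd -a 0000:03:00.0,rxq_cqe_comp_en=4 -- -i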
Supported on:

- x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
602 ConnectX-6 Lx, BlueField and BlueField-2.
603 - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
604 ConnectX-6 Lx, BlueField and BlueField-2.
606 - ``rxq_pkt_pad_en`` parameter [int]
A nonzero value enables padding Rx packets to the size of a cacheline on the PCI
transaction. This feature would waste PCI bandwidth but could improve
performance by avoiding partial cacheline writes, which may cause costly
read-modify-write memory transactions on some architectures. Disabled by
default.

Supported on:
616 - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
617 ConnectX-6 Lx, BlueField and BlueField-2.
618 - POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
619 ConnectX-6 Lx, BlueField and BlueField-2.
621 - ``delay_drop`` parameter [int]
623 Bitmask value for the Rx queue delay drop attribute. Bit 0 is used for the
624 standard Rx queue and bit 1 is used for the hairpin Rx queue. By default, the
625 delay drop is disabled for all Rx queues. It will be ignored if the port does
626 not support the attribute even if it is enabled explicitly.
Packets being received will not be dropped immediately when the WQEs are
exhausted in an Rx queue with delay drop enabled.
A timeout value is set in the driver to control the waiting time before
dropping a packet. Once the timer expires, the delay drop will be
deactivated for all the Rx queues with this feature enabled. To re-activate
it, a rearming is needed, and it is part of the kernel driver starting from
OFED 5.5.
637 To enable / disable the delay drop rearming, the private flag ``dropless_rq``
638 can be set and queried via ethtool:
640 - ethtool --set-priv-flags <netdev> dropless_rq on (/ off)
641 - ethtool --show-priv-flags <netdev>
The configuration flag is global per PF and can only be set on the PF. Once
it is on, all the VFs', SFs' and representors' Rx queues will share the timer
and rearming.
647 - ``mprq_en`` parameter [int]
A nonzero value enables configuring Multi-Packet Rx queues. An Rx queue is
configured as Multi-Packet RQ if the total number of Rx queues is
``rxqs_min_mprq`` or more. Disabled by default.
Multi-Packet Rx Queue (MPRQ a.k.a. Striding RQ) can further save PCIe bandwidth
by posting a single large buffer for multiple packets. Instead of posting one
buffer per packet, one large buffer is posted in order to receive multiple
packets on it. An MPRQ buffer consists of multiple fixed-size strides
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
When MPRQ is enabled, the MTU can be larger than the size of the
user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. The PMD
will configure a stride size large enough to accommodate the MTU as long as
the device allows it. Note that this can waste system memory compared to
enabling Rx scatter and multi-segment packets.
666 - ``mprq_log_stride_num`` parameter [int]
668 Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
669 strides can reduce PCIe traffic further. If configured value is not in the
670 range of device capability, the default value will be set with a warning
message. The default value is 4, which is 16 strides per buffer, valid only
672 if ``mprq_en`` is set.
674 The size of Rx queue should be bigger than the number of strides.
676 - ``mprq_log_stride_size`` parameter [int]
678 Log 2 of the size of a stride for Multi-Packet Rx queue. Configuring a smaller
679 stride size can save some memory and reduce probability of a depletion of all
680 available strides due to unreleased packets by an application. If configured
681 value is not in the range of device capability, the default value will be set
with a warning message. The default value is 11, which is 2048 bytes per
stride, valid only if ``mprq_en`` is set. With ``mprq_log_stride_size`` set,
it is possible for a packet to span across multiple strides. This mode allows
supporting jumbo frames (9K) with MPRQ. The memcopy of some packets (or part
of a packet if Rx scatter is configured) may be required in case there is no
space left for a head room at the end of a stride, which incurs some
software overhead.
690 - ``mprq_max_memcpy_len`` parameter [int]
692 The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
693 packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
694 than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
695 the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
696 A mempool for external buffers will be allocated and managed by PMD. If Rx
697 packet is externally attached, ol_flags field of the mbuf will have
698 RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
699 checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
701 - ``rxqs_min_mprq`` parameter [int]
Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
greater than or equal to this value. The default value is 12, valid only if
``mprq_en`` is set.
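For example, to configure all Rx queues of a device as Multi-Packet RQ
(the PCI address and queue counts are placeholders)::

   dpdk-testpmd -a 0000:03:00.0,mprq_en=1,rxqs_min_mprq=1 -- -i --rxq=2 --txq=2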
707 - ``txq_inline`` parameter [int]
709 Amount of data to be inlined during TX operations. This parameter is
710 deprecated and converted to the new parameter ``txq_inline_max`` providing
711 partial compatibility.
713 - ``txqs_min_inline`` parameter [int]
Enable inline data send only when the number of Tx queues is greater than or equal
to this value.
718 This option should be used in combination with ``txq_inline_max`` and
719 ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.
If this option is not specified, the default value 16 is used for BlueField
and 8 for other platforms.
Data inlining consumes CPU cycles, so this option is intended to
auto-enable inline data when there are enough Tx queues, which means there
are enough CPU cores, PCI bandwidth is becoming more critical, and the CPU
is not supposed to be the bottleneck anymore.
Copying data into the WQE improves latency and can improve PPS performance
when PCI back pressure is detected, and may be useful for scenarios involving
heavy traffic on many queues.
733 Because additional software logic is necessary to handle this mode, this
734 option should be used with care, as it may lower performance when back
735 pressure is not expected.
If inline data is enabled, it may affect the maximal size of the Tx queue in
descriptors, because inline data increases the descriptor size and
the queue size limits supported by hardware may be exceeded.
741 - ``txq_inline_min`` parameter [int]
743 Minimal amount of data to be inlined into WQE during Tx operations. NICs
744 may require this minimal data amount to operate correctly. The exact value
745 may depend on NIC operation mode, requested offloads, etc. It is strongly
746 recommended to omit this parameter and use the default values. Anyway,
747 applications using this parameter should take into consideration that
748 specifying an inconsistent value may prevent the NIC from sending packets.
750 If ``txq_inline_min`` key is present the specified value (may be aligned
751 by the driver in order not to exceed the limits and provide better descriptor
752 space utilization) will be used by the driver and it is guaranteed that
753 requested amount of data bytes are inlined into the WQE beside other inline
754 settings. This key also may update ``txq_inline_max`` value (default
755 or specified explicitly in devargs) to reserve the space for inline data.
If the ``txq_inline_min`` key is not present, the value may be queried by the
driver from the NIC via DevX if this feature is available. If there is no DevX
enabled/supported, the value 18 (supposing L2 header including VLAN) is set
for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
and newer NICs. If a packet is shorter than the ``txq_inline_min`` value, the entire
packet is inlined.
For ConnectX-4 NICs, the driver does not allow specifying a value below 18
(minimal L2 header, including VLAN); an error will be raised.
For ConnectX-4 Lx NICs, it is allowed to specify values below 18, but
it is not recommended and may prevent the NIC from sending packets over
some configurations.
For ConnectX-4 and ConnectX-4 Lx NICs, the automatically configured value
is insufficient for some traffic, because they require at least all L2 headers
to be inlined. For example, Q-in-Q adds 4 bytes to the default 18 bytes
of Ethernet and VLAN, thus ``txq_inline_min`` must be set to 22.
MPLS would add 4 bytes per label. The final value must account for all possible
L2 encapsulation headers used in the particular environment.
Please note that this minimal data inlining disengages the eMPW feature (Enhanced
Multi-Packet Write), because the latter does not support partial packet inlining.
This is not very critical, since minimal data inlining is mostly required
by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support the eMPW feature.
783 - ``txq_inline_max`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE
Ethernet Segment for the ordinary SEND method. If a packet is larger than the
specified value, the packet data won't be copied by the driver at all and the
data buffer is addressed with a pointer. If the packet length is less or
equal, all packet data will be copied into the WQE. This may improve PCI
bandwidth utilization for short packets significantly, but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the number
of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
feature is engaged; if there are not enough Tx queues (which means not enough
CPU cores and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_max`` value is 290. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Also, the default value (290) may be decreased at run time if a large transmit
queue size is requested and the hardware does not support enough descriptors;
in this case a warning is emitted. If the ``txq_inline_max`` key is
specified and the requested inline settings cannot be satisfied, an error
will be raised.
807 - ``txq_inline_mpw`` parameter [int]
Specifies the maximal packet length to be completely inlined into the WQE for
the Enhanced MPW method. If a packet is larger than the specified value, the
packet data won't be copied, and the data buffer is addressed with a pointer.
If the packet length is less or equal, all packet data will be copied into the
WQE. This may improve PCI bandwidth utilization for short packets
significantly, but requires extra CPU cycles.
The data inline feature is controlled by the number of Tx queues: if the number
of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
feature is engaged; if there are not enough Tx queues (which means not enough
CPU cores and CPU resources are scarce), data inline is not performed by the driver.
Assigning ``txqs_min_inline`` with zero always enables the data inline.
The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
by the driver in order not to exceed the limit (930 bytes) and to provide better
WQE space filling without gaps; the adjustment is reflected in the debug log.
Since multiple packets may be included in the same WQE with the Enhanced
Multi-Packet Write method and the overall WQE size is limited, it is not
recommended to specify large values for ``txq_inline_mpw``. Also, the default
value (268) may be decreased at run time if a large transmit queue size is
requested and the hardware does not support enough descriptors; in this case a
warning is emitted. If the ``txq_inline_mpw`` key is specified and the
requested inline settings cannot be satisfied, an error will be raised.
833 - ``txqs_max_vec`` parameter [int]
Enable vectorized Tx only when the number of Tx queues is less than or
equal to this value. This parameter is deprecated and ignored, kept
for compatibility so as not to prevent the driver from probing.
839 - ``txq_mpw_hdr_dseg_en`` parameter [int]
A nonzero value enables including two pointers in the first block of the Tx
descriptor. The parameter is deprecated and ignored, kept for compatibility.
845 - ``txq_max_inline_len`` parameter [int]
Maximum size of a packet to be inlined. If the size of a packet is larger
than the configured value, the packet isn't inlined even though there's
enough space remaining in the descriptor. Instead, the packet is included
with a pointer. This parameter is deprecated and converted directly to
``txq_inline_mpw``, providing full compatibility. Valid only if the eMPW
feature is engaged.
854 - ``txq_mpw_en`` parameter [int]
856 A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
857 ConnectX-6, ConnectX-6 Dx, ConnectX-6 Lx, BlueField, BlueField-2.
858 eMPW allows the Tx burst function to pack up multiple packets
859 in a single descriptor session in order to save PCI bandwidth
860 and improve performance at the cost of a slightly higher CPU usage.
When ``txq_inline_mpw`` is set along with ``txq_mpw_en``,
the Tx burst function copies the entire packet data into the Tx descriptor
instead of including a pointer to the packet.
The Enhanced Multi-Packet Write feature is enabled by default if the NIC
supports it and can be disabled by explicitly specifying 0 for the
``txq_mpw_en`` option. Also, if minimal data inlining is requested by a
non-zero ``txq_inline_min`` option or reported by the NIC, the eMPW feature is disengaged.
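For example, to disable eMPW explicitly (the PCI address is a placeholder)::

   dpdk-testpmd -a 0000:03:00.0,txq_mpw_en=0 -- -i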
870 - ``tx_db_nc`` parameter [int]
872 This parameter name is deprecated and ignored.
873 The new name for this parameter is ``sq_db_nc``.
874 See :ref:`common driver options <mlx5_common_driver_options>`.
876 - ``tx_pp`` parameter [int]
If a nonzero value is specified, the driver creates all the necessary internal
objects to provide accurate packet send scheduling on mbuf timestamps.
A positive value specifies the scheduling granularity in nanoseconds;
the packet send will be accurate up to the specified granularity. The allowed
range is from 500 to 1 million nanoseconds. A negative value specifies the
modulus of the granularity and engages a special test mode to check the
scheduling rate. By default (if ``tx_pp`` is not specified), send scheduling
on timestamps is disabled.
Starting with ConnectX-7, the capability to schedule traffic directly
on the timestamp specified in the descriptor is provided;
no extra objects are needed anymore, and the scheduling capability
is advertised and handled regardless of the ``tx_pp`` parameter presence.
892 - ``tx_skew`` parameter [int]
The parameter adjusts the send packet scheduling on timestamps and represents
the average delay between the beginning of the transmit descriptor processing
by the hardware and the appearance of the actual packet data on the wire. The
value should be provided in nanoseconds and is valid only if the ``tx_pp``
parameter is specified. The default value is zero.
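For example, to enable send scheduling with 500 ns granularity and to
compensate for a 1000 ns wire delay (the PCI address is a placeholder)::

   dpdk-testpmd -a 0000:03:00.0,tx_pp=500,tx_skew=1000 -- -i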
900 - ``tx_vec_en`` parameter [int]
902 A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx,
903 ConnectX-6 Lx, BlueField and BlueField-2 NICs
904 if the number of global Tx queues on the port is less than ``txqs_max_vec``.
905 The parameter is deprecated and ignored.
907 - ``rx_vec_en`` parameter [int]
A nonzero value enables the Rx vector if the port is not configured in
multi-segment mode; otherwise this parameter is ignored.

Enabled by default.
914 - ``vf_nl_en`` parameter [int]
A nonzero value enables Netlink requests from the VF to add/remove MAC
addresses and/or enable/disable promiscuous/all-multicast on the netdevice.
Otherwise the relevant configuration must be run with Linux iproute2 tools.
This is a prerequisite to receive this kind of traffic.

Enabled by default, valid only on VF devices, ignored otherwise.
923 - ``l3_vxlan_en`` parameter [int]
A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
L3 VXLAN or VXLAN-GPE, the user has to configure the firmware and enable this
parameter. This is a prerequisite to receive this kind of traffic.

Disabled by default.
931 - ``dv_xmeta_en`` parameter [int]
933 A nonzero value enables extensive flow metadata support if device is
934 capable and driver supports it. This can enable extensive support of
935 ``MARK`` and ``META`` item of ``rte_flow``. The newly introduced
936 ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
938 There are some possible configurations, depending on parameter value:
940 - 0, this is default value, defines the legacy mode, the ``MARK`` and
941 ``META`` related actions and items operate only within NIC Tx and
942 NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
943 the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
item is 32 bits wide and match is supported on egress only.
946 - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
947 related actions and items operate within all supported steering domains,
948 including FDB, ``MARK`` and ``META`` information may cross the domain
949 boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
950 depends on kernel and firmware configurations and might be 0, 16 or
32 bits. Within the NIC Tx domain ``META`` data width is 32 bits for
compatibility; the actual width of data transferred to the FDB domain
depends on kernel configuration and may vary. The actual supported
width can be retrieved at runtime by a series of rte_flow_validate()
trials.
957 - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
958 related actions and items operate within all supported steering domains,
959 including FDB, ``MARK`` and ``META`` information may cross the domain
960 boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
961 depends on kernel and firmware configurations and might be 0, 16 or
24 bits. The actual supported width can be retrieved at runtime by a
series of rte_flow_validate() trials.
965 - 3, this engages tunnel offload mode. In E-Switch configuration, that
966 mode implicitly activates ``dv_xmeta_en=1``.
968 +------+-----------+-----------+-------------+-------------+
969 | Mode | ``MARK`` | ``META`` | ``META`` Tx | FDB/Through |
970 +======+===========+===========+=============+=============+
971 | 0 | 24 bits | 32 bits | 32 bits | no |
972 +------+-----------+-----------+-------------+-------------+
973 | 1 | 24 bits | vary 0-32 | 32 bits | yes |
974 +------+-----------+-----------+-------------+-------------+
975 | 2 | vary 0-24 | 32 bits | 32 bits | yes |
976 +------+-----------+-----------+-------------+-------------+
978 If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
979 ignored and the device is configured to operate in legacy mode (0).
981 Disabled by default (set to 0).
983 The Direct Verbs/Rules (engaged with ``dv_flow_en`` = 1) supports all
984 of the extensive metadata features. The legacy Verbs supports FLAG and
985 MARK metadata actions over NIC Rx steering domain only.
Setting the META value to zero in a flow action means no item is provided
and the receiving datapath will not report in mbufs that metadata is present.
Setting the MARK value to zero in a flow action means the zero FDIR ID value
will be reported on packet reception.
992 For the MARK action the last 16 values in the full range are reserved for
993 internal PMD purposes (to emulate FLAG action). The valid range for the
994 MARK action values is 0-0xFFEF for the 16-bit mode and 0-0xFFFFEF
995 for the 24-bit mode, the flows with the MARK action value outside
996 the specified range will be rejected.
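For example, a rule marking matched packets with a value inside the valid
range (testpmd syntax; the values are illustrative)::

   flow create 0 ingress pattern eth / ipv4 / end actions mark id 0x1234 / queue index 0 / end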
998 - ``dv_flow_en`` parameter [int]
1000 Value 0 means legacy Verbs flow offloading.
1002 Value 1 enables the DV flow steering assuming it is supported by the
1003 driver (requires rdma-core 24 or higher).
1005 Value 2 enables the WQE based hardware steering.
1006 In this mode, only queue-based flow management is supported.
1008 It is configured by default to 1 (DV flow steering) if supported.
1009 Otherwise, the value is 0 which indicates legacy Verbs flow offloading.
1011 - ``dv_esw_en`` parameter [int]
1013 A nonzero value enables E-Switch using Direct Rules.
1015 Enabled by default if supported.
1017 - ``lacp_by_user`` parameter [int]
1019 A nonzero value enables the control of LACP traffic by the user application.
1020 When a bond exists in the driver, by default it should be managed by the
1021 kernel and therefore LACP traffic should be steered to the kernel.
If this devarg is set to 1, the user is allowed to manage the bond and
LACP traffic is not steered to the kernel.
1025 Disabled by default (set to 0).
1027 - ``representor`` parameter [list]
1029 This parameter can be used to instantiate DPDK Ethernet devices from
1030 existing port (PF, VF or SF) representors configured on the device.
1032 It is a standard parameter whose format is described in
1033 :ref:`ethernet_device_standard_device_arguments`.
1035 For instance, to probe VF port representors 0 through 2::
1037 <PCI_BDF>,representor=vf[0-2]
1039 To probe SF port representors 0 through 2::
1041 <PCI_BDF>,representor=sf[0-2]
1043 To probe VF port representors 0 through 2 on both PFs of bonding device::
1045 <Primary_PCI_BDF>,representor=pf[0,1]vf[0-2]
1047 - ``max_dump_files_num`` parameter [int]
The maximum number of files per PMD entity that may be created for debug information.
The files will be created in the /var/log directory or in the current directory.

Set to 128 by default.
1054 - ``lro_timeout_usec`` parameter [int]
The maximum allowed duration of an LRO session, in microseconds.
1057 PMD will set the nearest value supported by HW, which is not bigger than
1058 the input ``lro_timeout_usec`` value.
1059 If this parameter is not specified, by default PMD will set
1060 the smallest value supported by HW.
1062 - ``hp_buf_log_sz`` parameter [int]
The total data buffer size of a hairpin queue (logarithmic form), in bytes.
The PMD will set the data buffer size to 2 ** ``hp_buf_log_sz``, both for Rx and Tx.
The allowed range of the value is specified by the firmware and initialization
will fail if the value is out of that range.
The range of the value is from 11 to 19 right now, and the supported frame
size of a single packet for hairpin is from 512B to 128KB. It might change if
a different firmware release is used. Using a small value could
reduce memory consumption but will not work with large frames. If the value is
too large, the memory consumption will be high and some potential performance
degradation will be introduced.
By default, the PMD will set this value to 16, which means that 9KB jumbo
frames will be supported.
1077 - ``reclaim_mem_mode`` parameter [int]
Caching some resources on flow destroy helps make flow recreation more efficient,
while some systems may require that all the resources be reclaimed after
the flows are destroyed.
The parameter ``reclaim_mem_mode`` provides the option for the user to configure
whether the resource cache is needed or not.
1085 There are three options to choose:
- 0. The flow resources will be cached as usual; the cache is helpful for the
  flow insertion rate.

- 1. Only DPDK PMD level resource reclaim is enabled.

- 2. Both DPDK PMD level and rdma-core low level are configured as
  reclaim mode.
1095 By default, the PMD will set this value to 0.
1097 - ``decap_en`` parameter [int]
1099 Some devices do not support FCS (frame checksum) scattering for
1100 tunnel-decapsulated packets.
1101 If set to 0, this option forces the FCS feature and rejects tunnel
1102 decapsulation in the flow engine for such devices.
1104 By default, the PMD will set this value to 1.
1106 - ``allow_duplicate_pattern`` parameter [int]
1108 There are two options to choose:
- 0. Prevent insertion of rules with the same pattern items on a non-root table.
1111 In this case, only the first rule is inserted and the following rules are
1112 rejected and error code EEXIST is returned.
1114 - 1. Allow insertion of rules with the same pattern items.
1115 In this case, all rules are inserted but only the first rule takes effect,
1116 the next rule takes effect only if the previous rules are deleted.
1118 By default, the PMD will set this value to 1.
1124 The following Mellanox device families are supported by the same mlx5 driver:
1136 Below are detailed device names:
1138 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
1139 * Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
1140 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
1141 * Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
1142 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
1143 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
1144 * Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
1145 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
1146 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
1147 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
1148 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
1149 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
1150 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
1151 * Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-CCAT (1x100G)
1152 * Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
1153 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
1154 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
1155 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
1156 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
1157 * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
1158 * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
1159 * Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
1160 * Mellanox\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
1161 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
1162 * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
1163 * Mellanox\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
1169 See :ref:`mlx5_sub_function`.
1171 Sub-Function representor support
1172 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1174 A SF netdev supports E-Switch representation offload
1175 similar to PF and VF representors.
1176 Use <sfnum> to probe SF representor::
1178 testpmd> port attach <PCI_BDF>,representor=sf<sfnum>,dv_flow_en=1
1184 1. Configure aggressive CQE Zipping for maximum performance::
1186 mlxconfig -d <mst device> s CQE_COMPRESSION=1
1188 To set it back to the default CQE Zipping mode use::
1190 mlxconfig -d <mst device> s CQE_COMPRESSION=0
1192 2. In case of virtualization:
1194 - Make sure that hypervisor kernel is 3.16 or newer.
1195 - Configure boot with ``iommu=pt``.
1196 - Use 1G huge pages.
1197 - Make sure to allocate a VM on huge pages.
1198 - Make sure to set CPU pinning.
3. Use the CPU near the local NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run::
1204 lstopo-no-graphics --merge
1206 to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth degradation,
   it is recommended to locate both adapters on the same NUMA node.
   This is in order to forward packets from one to the other without
   the NUMA performance penalty.
1214 5. Disable pause frames::
1216 ethtool -A <netdev> rx off tx off
6. Verify that IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.
On some machines, depending on the machine integrator, it is beneficial
to set the PCI max read request parameter to 1K. This can be
done in the following way:
1228 To query the read request size use::
1230 setpci -s <NIC PCI address> 68.w
1232 If the output is different than 3XXX, set it by::
1234 setpci -s <NIC PCI address> 68.w=3XXX
1236 The XXX can be different on different systems. Make sure to configure
1237 according to the setpci output.
1239 7. To minimize overhead of searching Memory Regions:
- ``--socket-mem`` is recommended to pin memory by a predictable amount.
- Configure a per-lcore cache when creating Mempools for packet buffers (see the sketch below).
- Refrain from dynamically allocating/freeing memory at run time.
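As an illustration, a minimal C sketch of creating a packet mbuf pool with a
per-lcore cache (the pool name and sizes are illustrative)::

   #include <rte_lcore.h>
   #include <rte_mbuf.h>

   struct rte_mempool *pool = rte_pktmbuf_pool_create(
       "mbuf_pool",
       8192,                      /* number of mbufs */
       256,                       /* per-lcore cache size */
       0,                         /* application private area size */
       RTE_MBUF_DEFAULT_BUF_SIZE, /* data buffer size */
       rte_socket_id());          /* allocate on the local NUMA node */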
1248 There are multiple Rx burst functions with different advantages and limitations.
1250 .. table:: Rx burst functions
1252 +-------------------+------------------------+---------+-----------------+------+-------+
1253 || Function Name || Enabler || Scatter|| Error Recovery || CQE || Large|
1254 | | | | || comp|| MTU |
1255 +===================+========================+=========+=================+======+=======+
1256 | rx_burst | rx_vec_en=0 | Yes | Yes | Yes | Yes |
1257 +-------------------+------------------------+---------+-----------------+------+-------+
1258 | rx_burst_vec | rx_vec_en=1 (default) | No | if CQE comp off | Yes | No |
1259 +-------------------+------------------------+---------+-----------------+------+-------+
1260 | rx_burst_mprq || mprq_en=1 | No | Yes | Yes | Yes |
1261 | || RxQs >= rxqs_min_mprq | | | | |
1262 +-------------------+------------------------+---------+-----------------+------+-------+
1263 | rx_burst_mprq_vec || rx_vec_en=1 (default) | No | if CQE comp off | Yes | Yes |
1264 | || mprq_en=1 | | | | |
1265 | || RxQs >= rxqs_min_mprq | | | | |
1266 +-------------------+------------------------+---------+-----------------+------+-------+
1268 .. _mlx5_offloads_support:
1270 Supported hardware offloads
1271 ---------------------------
1273 .. table:: Minimal SW/HW versions for queue offloads
1275 ============== ===== ===== ========= ===== ========== =============
1276 Offload DPDK Linux rdma-core OFED firmware hardware
1277 ============== ===== ===== ========= ===== ========== =============
1278 common base 17.11 4.14 16 4.2-1 12.21.1000 ConnectX-4
1279 checksums 17.11 4.14 16 4.2-1 12.21.1000 ConnectX-4
1280 Rx timestamp 17.11 4.14 16 4.2-1 12.21.1000 ConnectX-4
1281 TSO 17.11 4.14 16 4.2-1 12.21.1000 ConnectX-4
1282 LRO 19.08 N/A N/A 4.6-4 16.25.6406 ConnectX-5
1283 Tx scheduling 20.08 N/A N/A 5.1-2 22.28.2006 ConnectX-6 Dx
1284 Buffer Split 20.11 N/A N/A 5.1-2 16.28.2006 ConnectX-5
1285 ============== ===== ===== ========= ===== ========== =============
1287 .. table:: Minimal SW/HW versions for rte_flow offloads
1289 +-----------------------+-----------------+-----------------+
1290 | Offload | with E-Switch | with NIC |
1291 +=======================+=================+=================+
1292 | Count | | DPDK 19.05 | | DPDK 19.02 |
1293 | | | OFED 4.6 | | OFED 4.6 |
1294 | | | rdma-core 24 | | rdma-core 23 |
1295 | | | ConnectX-5 | | ConnectX-5 |
1296 +-----------------------+-----------------+-----------------+
1297 | Drop | | DPDK 19.05 | | DPDK 18.11 |
1298 | | | OFED 4.6 | | OFED 4.5 |
1299 | | | rdma-core 24 | | rdma-core 23 |
1300 | | | ConnectX-5 | | ConnectX-4 |
1301 +-----------------------+-----------------+-----------------+
1302 | Queue / RSS | | | | DPDK 18.11 |
1303 | | | N/A | | OFED 4.5 |
1304 | | | | | rdma-core 23 |
1305 | | | | | ConnectX-4 |
1306 +-----------------------+-----------------+-----------------+
1307 | Shared action | | | | |
1308 | | | :numref:`sact`| | :numref:`sact`|
1311 +-----------------------+-----------------+-----------------+
1312 | | VLAN | | DPDK 19.11 | | DPDK 19.11 |
1313 | | (of_pop_vlan / | | OFED 4.7-1 | | OFED 4.7-1 |
1314 | | of_push_vlan / | | ConnectX-5 | | ConnectX-5 |
1315 | | of_set_vlan_pcp / | | | | |
1316 | | of_set_vlan_vid) | | | | |
1317 +-----------------------+-----------------+-----------------+
1318 | | VLAN | | DPDK 21.05 | | |
1319 | | ingress and / | | OFED 5.3 | | N/A |
1320 | | of_push_vlan / | | ConnectX-6 Dx | | |
1321 +-----------------------+-----------------+-----------------+
1322 | | VLAN | | DPDK 21.05 | | |
1323 | | egress and / | | OFED 5.3 | | N/A |
1324 | | of_pop_vlan / | | ConnectX-6 Dx | | |
1325 +-----------------------+-----------------+-----------------+
1326 | Encapsulation | | DPDK 19.05 | | DPDK 19.02 |
1327 | (VXLAN / NVGRE / RAW) | | OFED 4.7-1 | | OFED 4.6 |
1328 | | | rdma-core 24 | | rdma-core 23 |
1329 | | | ConnectX-5 | | ConnectX-5 |
1330 +-----------------------+-----------------+-----------------+
1331 | Encapsulation | | DPDK 19.11 | | DPDK 19.11 |
1332 | GENEVE | | OFED 4.7-3 | | OFED 4.7-3 |
1333 | | | rdma-core 27 | | rdma-core 27 |
1334 | | | ConnectX-5 | | ConnectX-5 |
1335 +-----------------------+-----------------+-----------------+
1336 | Tunnel Offload | | DPDK 20.11 | | DPDK 20.11 |
1337 | | | OFED 5.1-2 | | OFED 5.1-2 |
1338 | | | rdma-core 32 | | N/A |
1339 | | | ConnectX-5 | | ConnectX-5 |
1340 +-----------------------+-----------------+-----------------+
1341 | | Header rewrite | | DPDK 19.05 | | DPDK 19.02 |
1342 | | (set_ipv4_src / | | OFED 4.7-1 | | OFED 4.7-1 |
1343 | | set_ipv4_dst / | | rdma-core 24 | | rdma-core 24 |
1344 | | set_ipv6_src / | | ConnectX-5 | | ConnectX-5 |
1345 | | set_ipv6_dst / | | | | |
1346 | | set_tp_src / | | | | |
1347 | | set_tp_dst / | | | | |
1348 | | dec_ttl / | | | | |
1349 | | set_ttl / | | | | |
1350 | | set_mac_src / | | | | |
1351 | | set_mac_dst) | | | | |
1352 +-----------------------+-----------------+-----------------+
1353 | | Header rewrite | | DPDK 20.02 | | DPDK 20.02 |
1354 | | (set_dscp) | | OFED 5.0 | | OFED 5.0 |
1355 | | | | rdma-core 24 | | rdma-core 24 |
1356 | | | | ConnectX-5 | | ConnectX-5 |
1357 +-----------------------+-----------------+-----------------+
1358 | Jump | | DPDK 19.05 | | DPDK 19.02 |
1359 | | | OFED 4.7-1 | | OFED 4.7-1 |
1360 | | | rdma-core 24 | | N/A |
1361 | | | ConnectX-5 | | ConnectX-5 |
1362 +-----------------------+-----------------+-----------------+
1363 | Mark / Flag | | DPDK 19.05 | | DPDK 18.11 |
1364 | | | OFED 4.6 | | OFED 4.5 |
1365 | | | rdma-core 24 | | rdma-core 23 |
1366 | | | ConnectX-5 | | ConnectX-4 |
1367 +-----------------------+-----------------+-----------------+
1368 | Meta data | | DPDK 19.11 | | DPDK 19.11 |
1369 | | | OFED 4.7-3 | | OFED 4.7-3 |
1370 | | | rdma-core 26 | | rdma-core 26 |
1371 | | | ConnectX-5 | | ConnectX-5 |
1372 +-----------------------+-----------------+-----------------+
1373 | Port ID | | DPDK 19.05 | | N/A |
1374 | | | OFED 4.7-1 | | N/A |
1375 | | | rdma-core 24 | | N/A |
1376 | | | ConnectX-5 | | N/A |
1377 +-----------------------+-----------------+-----------------+
1378 | Hairpin | | | | DPDK 19.11 |
1379 | | | N/A | | OFED 4.7-3 |
1380 | | | | | rdma-core 26 |
1381 | | | | | ConnectX-5 |
1382 +-----------------------+-----------------+-----------------+
1383 | 2-port Hairpin | | | | DPDK 20.11 |
1384 | | | N/A | | OFED 5.1-2 |
1386 | | | | | ConnectX-5 |
1387 +-----------------------+-----------------+-----------------+
1388 | Metering | | DPDK 19.11 | | DPDK 19.11 |
1389 | | | OFED 4.7-3 | | OFED 4.7-3 |
1390 | | | rdma-core 26 | | rdma-core 26 |
1391 | | | ConnectX-5 | | ConnectX-5 |
1392 +-----------------------+-----------------+-----------------+
1393 | ASO Metering | | DPDK 21.05 | | DPDK 21.05 |
1394 | | | OFED 5.3 | | OFED 5.3 |
1395 | | | rdma-core 33 | | rdma-core 33 |
1396 | | | ConnectX-6 Dx| | ConnectX-6 Dx |
1397 +-----------------------+-----------------+-----------------+
1398 | Metering Hierarchy | | DPDK 21.08 | | DPDK 21.08 |
1399 | | | OFED 5.3 | | OFED 5.3 |
1401 | | | ConnectX-6 Dx| | ConnectX-6 Dx |
1402 +-----------------------+-----------------+-----------------+
1403 | Sampling | | DPDK 20.11 | | DPDK 20.11 |
1404 | | | OFED 5.1-2 | | OFED 5.1-2 |
1405 | | | rdma-core 32 | | N/A |
1406 | | | ConnectX-5 | | ConnectX-5 |
1407 +-----------------------+-----------------+-----------------+
1408 | Encapsulation | | DPDK 21.02 | | DPDK 21.02 |
1409 | GTP PSC | | OFED 5.2 | | OFED 5.2 |
1410 | | | rdma-core 35 | | rdma-core 35 |
1411 | | | ConnectX-6 Dx| | ConnectX-6 Dx |
1412 +-----------------------+-----------------+-----------------+
1413 | Encapsulation | | DPDK 21.02 | | DPDK 21.02 |
1414 | GENEVE TLV option | | OFED 5.2 | | OFED 5.2 |
1415 | | | rdma-core 34 | | rdma-core 34 |
1416 | | | ConnectX-6 Dx | | ConnectX-6 Dx |
1417 +-----------------------+-----------------+-----------------+
1418 | Modify Field | | DPDK 21.02 | | DPDK 21.02 |
1419 | | | OFED 5.2 | | OFED 5.2 |
1420 | | | rdma-core 35 | | rdma-core 35 |
1421 | | | ConnectX-5 | | ConnectX-5 |
1422 +-----------------------+-----------------+-----------------+
1423 | Connection tracking | | | | DPDK 21.05 |
1424 | | | N/A | | OFED 5.3 |
1425 | | | | | rdma-core 35 |
1426 | | | | | ConnectX-6 Dx |
1427 +-----------------------+-----------------+-----------------+
.. table:: Minimal SW/HW versions for shared action offload
   :name: sact
1432 +-----------------------+-----------------+-----------------+
1433 | Shared Action | with E-Switch | with NIC |
1434 +=======================+=================+=================+
1435 | RSS | | | | DPDK 20.11 |
1436 | | | N/A | | OFED 5.2 |
1437 | | | | | rdma-core 33 |
1438 | | | | | ConnectX-5 |
1439 +-----------------------+-----------------+-----------------+
1440 | Age | | DPDK 20.11 | | DPDK 20.11 |
1441 | | | OFED 5.2 | | OFED 5.2 |
1442 | | | rdma-core 32 | | rdma-core 32 |
1443 | | | ConnectX-6 Dx | | ConnectX-6 Dx |
1444 +-----------------------+-----------------+-----------------+
1445 | Count | | DPDK 21.05 | | DPDK 21.05 |
1446 | | | OFED 4.6 | | OFED 4.6 |
1447 | | | rdma-core 24 | | rdma-core 23 |
1448 | | | ConnectX-5 | | ConnectX-5 |
1449 +-----------------------+-----------------+-----------------+
Notes for metadata
------------------

MARK and META items are interrelated with the datapath - they might move from/to
the applications in mbuf fields. Hence, the zero value for these items has a
special meaning - it means "no metadata are provided", and only non-zero values
are treated by applications and the PMD as valid ones.
Moreover, in the flow engine domain the value zero is acceptable to match and
set, and it is allowed to specify zero values as rte_flow parameters for the
META and MARK items and actions. At the same time, a zero mask has no meaning
and should be rejected at the validation stage.
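As an illustration, a minimal rte_flow pattern matching on a non-zero META
value with a full mask might look as follows (the 0x1234 value is arbitrary):

.. code-block:: c

   #include <rte_flow.h>

   /* Sketch: match packets whose 32-bit metadata equals 0x1234 (arbitrary).
    * A spec value of 0 would mean "no metadata provided", and a zero mask
    * would be rejected at the validation stage. */
   static const struct rte_flow_item_meta meta_spec = { .data = 0x1234 };
   static const struct rte_flow_item_meta meta_mask = { .data = 0xffffffff };

   static const struct rte_flow_item pattern[] = {
       { .type = RTE_FLOW_ITEM_TYPE_META,
         .spec = &meta_spec, .mask = &meta_mask },
       { .type = RTE_FLOW_ITEM_TYPE_END },
   };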
Notes for rte_flow
------------------

Flows are not cached in the driver.
When stopping a device port, all the flows created on this port from the
application will be flushed automatically in the background.
After stopping the device port, all flows on this port become invalid and
are no longer represented in the system.
All references to these flows held by the application should be discarded
directly; the flows should be neither destroyed nor flushed.

The application should re-create the flows as required after the port restart.
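A minimal sketch of the resulting application logic around a port restart;
the ``app_*`` helpers are hypothetical placeholders for application code:

.. code-block:: c

   #include <rte_ethdev.h>

   /* Hypothetical application helpers. */
   void app_drop_flow_handles(uint16_t port_id);
   int app_recreate_flows(uint16_t port_id);

   static int
   restart_port(uint16_t port_id)
   {
       int ret = rte_eth_dev_stop(port_id); /* flows are flushed by the PMD */

       if (ret != 0)
           return ret;
       /* Discard stale rte_flow pointers; do not call rte_flow_destroy()
        * or rte_flow_flush() on them. */
       app_drop_flow_handles(port_id);
       ret = rte_eth_dev_start(port_id);
       if (ret != 0)
           return ret;
       return app_recreate_flows(port_id); /* re-create rules as required */
   }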
Notes for testpmd
-----------------

Compared to librte_net_mlx4 that implements a single RSS configuration per
port, librte_net_mlx5 supports per-protocol RSS configuration.
1483 Since ``testpmd`` defaults to IP RSS mode and there is currently no
1484 command-line parameter to enable additional protocols (UDP and TCP as well
1485 as IP), the following commands must be entered from its CLI to get the same
1486 behavior as librte_net_mlx4::
   > port stop all
   > port config all rss all
   > port start all
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
1498 #. Load the kernel modules::
1500 modprobe -a ib_uverbs mlx5_core mlx5_ib
   Alternatively, if MLNX_OFED/MLNX_EN is fully installed, the following script
   can be run::

      /etc/init.d/openibd restart
   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.
1512 #. Make sure Ethernet interfaces are in working order and linked to kernel
1513 verbs. Related sysfs entries should be present::
1515 ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
#. Optionally, retrieve their PCI bus addresses to be used with the allow list::

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-a \1,p'
1541 #. Request huge pages::
1543 dpdk-hugepages.py --setup 2G
1545 #. Start testpmd with basic parameters::
1547 dpdk-testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
   Example output::

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
1553 EAL: probe driver: 15b3:1013 librte_net_mlx5
1554 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
1555 PMD: librte_net_mlx5: 1 port(s) detected
1556 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
1557 EAL: PCI device 0000:05:00.1 on NUMA socket 0
1558 EAL: probe driver: 15b3:1013 librte_net_mlx5
1559 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
1560 PMD: librte_net_mlx5: 1 port(s) detected
1561 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
1562 EAL: PCI device 0000:06:00.0 on NUMA socket 0
1563 EAL: probe driver: 15b3:1013 librte_net_mlx5
1564 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
1565 PMD: librte_net_mlx5: 1 port(s) detected
1566 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
1567 EAL: PCI device 0000:06:00.1 on NUMA socket 0
1568 EAL: probe driver: 15b3:1013 librte_net_mlx5
1569 PMD: librte_net_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
1570 PMD: librte_net_mlx5: 1 port(s) detected
1571 PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
1572 Interactive-mode selected
1573 Configuring Port 0 (socket 0)
1574 PMD: librte_net_mlx5: 0x8cba80: TX queues number update: 0 -> 2
1575 PMD: librte_net_mlx5: 0x8cba80: RX queues number update: 0 -> 2
1576 Port 0: E4:1D:2D:E7:0C:FE
1577 Configuring Port 1 (socket 0)
1578 PMD: librte_net_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
1579 PMD: librte_net_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
1580 Port 1: E4:1D:2D:E7:0C:FF
1581 Configuring Port 2 (socket 0)
1582 PMD: librte_net_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
1583 PMD: librte_net_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
1584 Port 2: E4:1D:2D:E7:0C:FA
1585 Configuring Port 3 (socket 0)
1586 PMD: librte_net_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
1587 PMD: librte_net_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
1588 Port 3: E4:1D:2D:E7:0C:FB
1589 Checking link statuses...
1590 Port 0 Link Up - speed 40000 Mbps - full-duplex
1591 Port 1 Link Up - speed 40000 Mbps - full-duplex
1592 Port 2 Link Up - speed 10000 Mbps - full-duplex
1593 Port 3 Link Up - speed 10000 Mbps - full-duplex
How to dump flows
-----------------

This section demonstrates how to dump flows. Currently, it is possible to dump
all flows with the assistance of external tools.
#. There are two ways to get the flow raw file:
1605 - Using testpmd CLI:
1607 .. code-block:: console
1610 testpmd> flow dump <port> all <output_file>
1612 testpmd> flow dump <port> rule <rule_id> <output_file>
   - Calling the ``rte_flow_dev_dump`` API (a fuller sketch follows this list):

     .. code-block:: c

        rte_flow_dev_dump(port, flow, file, NULL);
#. Dump human-readable flows from the raw file:

   Get the flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump
1624 .. code-block:: console
1626 mlx_steering_dump.py -f <output_file> -flowptr <flow_ptr>
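For reference, below is a minimal C sketch of the second method from step 1,
assuming DPDK 21.05 or newer, where passing a NULL ``flow`` pointer to
``rte_flow_dev_dump()`` dumps all rules of the port:

.. code-block:: c

   #include <errno.h>
   #include <stdio.h>
   #include <rte_flow.h>

   /* Sketch: dump all flows of a port into a raw file which can then be
    * parsed with mlx_steering_dump.py. */
   static int
   dump_all_flows(uint16_t port_id, const char *path)
   {
       struct rte_flow_error error;
       FILE *file = fopen(path, "w");
       int ret;

       if (file == NULL)
           return -errno;
       ret = rte_flow_dev_dump(port_id, NULL, file, &error);
       fclose(file);
       return ret;
   }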
1628 How to share a meter between ports in the same switch domain
1629 ------------------------------------------------------------
This section demonstrates how to use a shared meter. A meter M can be created
on port X and shared with port Y on the same switch domain in the following way:
1634 .. code-block:: console
1636 flow create X ingress transfer pattern eth / port_id id is Y / end actions meter mtr_id M / end
1638 How to use meter hierarchy
1639 --------------------------
This section demonstrates how to create and use a meter hierarchy.
A termination meter M can be the policy green action of another termination meter N.
The two meters are thus chained together as a hierarchy. Using meter N in a flow
will apply both meters of the hierarchy on that flow.
1646 .. code-block:: console
1648 add port meter policy 0 1 g_actions queue index 0 / end y_actions end r_actions drop / end
1649 create port meter 0 M 1 1 yes 0xffff 1 0
1650 add port meter policy 0 2 g_actions meter mtr_id M / end y_actions end r_actions drop / end
1651 create port meter 0 N 2 2 yes 0xffff 1 0
1652 flow create 0 ingress group 1 pattern eth / end actions meter mtr_id N / end
1654 How to configure a VF as trusted
1655 --------------------------------
1657 This section demonstrates how to configure a virtual function (VF) interface as trusted.
A trusted VF is needed to offload rules with rte_flow to a group that is bigger than 0.
1659 The configuration is done in two parts: driver and FW.
1661 The procedure below is an example of using a ConnectX-5 adapter card (pf0) with 2 VFs:
1663 #. Create 2 VFs on the PF pf0 when in Legacy SR-IOV mode::
1665 $ echo 2 > /sys/class/net/pf0/device/mlx5_num_vfs
1667 #. Verify the VFs are created:
1669 .. code-block:: console
1671 $ lspci | grep Mellanox
1672 82:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
1673 82:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
1674 82:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
1675 82:00.3 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
#. Unbind all VFs. For each VF PCIe address, use the following command to unbind the driver::
1679 $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/unbind
1681 #. Set the VFs to be trusted for the kernel by using one of the methods below:
1683 - Using sysfs file::
1685 $ echo ON | tee /sys/class/net/pf0/device/sriov/0/trust
1686 $ echo ON | tee /sys/class/net/pf0/device/sriov/1/trust
   - Using the ``ip link`` command::
1690 $ ip link set p0 vf 0 trust on
1691 $ ip link set p0 vf 1 trust on
1693 #. Configure all VFs using mlxreg::
1695 $ mlxreg -d /dev/mst/mt4121_pciconf0 --reg_name VHCA_TRUST_LEVEL --yes --set "all_vhca=0x1,trust_level=0x1"
   .. note::

      The firmware version used must be >= xx.29.1016 and MFT >= 4.18.
#. For each VF PCIe address, use the following command to bind the driver::
1703 $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind
Host shaper introduction
~~~~~~~~~~~~~~~~~~~~~~~~

The host shaper register is a per-host-port register
which sets a shaper on the host port.
All VF/host PF representors belonging to one host port share one host shaper.
For example, if representor 0 and representor 1 belong to the same host port,
and a host shaper rate of 1Gbps is configured,
the shaper throttles both representors' traffic from the host.
The host shaper can be set in two modes:
immediate, or deferred to an available descriptor threshold event trigger.

In immediate mode, the rate limit is configured on the host shaper immediately.
1720 When deferring to the available descriptor threshold trigger,
1721 the shaper is not set until an available descriptor threshold event
1722 is received by any Rx queue in a VF representor belonging to the host port.
1723 The only rate supported for deferred mode is 100Mbps
1724 (there is no limit on the supported rates for immediate mode).
1725 In deferred mode, the shaper is set on the host port by the firmware
1726 upon receiving the available descriptor threshold event,
1727 which allows throttling host traffic on available descriptor threshold events
1728 at minimum latency, preventing excess drops in the Rx queue.
1730 Dependency on mstflint package
1731 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to configure the host shaper register,
``librte_net_mlx5`` depends on ``libmtcr_ul``,
which can be installed from the OFED mstflint package.
Meson detects the presence of ``libmtcr_ul`` at the configure stage.
1737 If the library is detected, the application must link with ``-lmtcr_ul``,
1738 as done by the pkg-config file libdpdk.pc.
1740 Available descriptor threshold and host shaper
1741 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1743 There is a command to configure the available descriptor threshold in testpmd.
1744 Testpmd also contains sample logic to handle available descriptor threshold events.
1745 The typical workflow is:
1746 testpmd configures available descriptor threshold for Rx queues,
1747 enables ``avail_thresh_triggered`` in host shaper and registers a callback.
1748 When traffic from the host is too high
1749 and Rx queue emptiness is below the available descriptor threshold,
1750 the PMD receives an event
1751 and the firmware configures a 100Mbps shaper on the host port automatically.
Then the PMD calls the callback registered previously,
which delays a while to let the Rx queue empty,
and then disables the host shaper.
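In an application, the testpmd logic described above can be approximated with
the generic ethdev API. A minimal sketch, assuming DPDK 22.07 or newer (which
provides ``RTE_ETH_EVENT_RX_AVAIL_THRESH`` and ``rte_eth_rx_avail_thresh_set()``);
the handler body and the 70% threshold are illustrative:

.. code-block:: c

   #include <stdio.h>
   #include <rte_ethdev.h>

   /* Sketch: called when an Rx queue drops below the available descriptor
    * threshold. A real handler could, for example, schedule work that lets
    * the Rx queue drain and then disables the host shaper through the mlx5
    * private API. */
   static int
   on_avail_thresh(uint16_t port_id, enum rte_eth_event_type type,
                   void *cb_arg, void *ret_param)
   {
       (void)type; (void)cb_arg; (void)ret_param;
       printf("available descriptor threshold event on port %u\n", port_id);
       return 0;
   }

   static int
   arm_avail_thresh(uint16_t port_id, uint16_t queue_id)
   {
       int ret = rte_eth_dev_callback_register(port_id,
               RTE_ETH_EVENT_RX_AVAIL_THRESH, on_avail_thresh, NULL);

       if (ret != 0)
           return ret;
       /* 70% of the Rx queue size, matching the testpmd example below. */
       return rte_eth_rx_avail_thresh_set(port_id, queue_id, 70);
   }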
Let's assume we have a simple BlueField-2 setup:
port 0 is the uplink, port 1 is a VF representor.
1758 Each port has 2 Rx queues.
1759 To control traffic from the host to the Arm device,
1760 we can enable the available descriptor threshold in testpmd by:
1762 .. code-block:: console
1764 testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0
1765 testpmd> set port 1 rxq 0 avail_thresh 70
1766 testpmd> set port 1 rxq 1 avail_thresh 70
1768 The first command disables the current host shaper
1769 and enables the available descriptor threshold triggered mode.
1770 The other commands configure the available descriptor threshold
1771 to 70% of Rx queue size for both Rx queues.
When traffic from the host is too high,
the testpmd console prints a log message about the available descriptor threshold event,
and then the host shaper is disabled.
The traffic rate from the host is thus controlled and fewer drops happen in the Rx queues.
1778 The threshold event and shaper can be disabled like this:
1780 .. code-block:: console
1782 testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0
1783 testpmd> set port 1 rxq 0 avail_thresh 0
1784 testpmd> set port 1 rxq 1 avail_thresh 0
It is recommended that an application disable the available descriptor threshold
and ``avail_thresh_triggered`` before exiting,
if it enabled them before.
The shaper can also be configured with a fixed value; the rate unit is 100Mbps.
1791 Below, the command sets the current shaper to 5Gbps
1792 and disables ``avail_thresh_triggered``.
1794 .. code-block:: console
1796 testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 50
Testpmd driver specific commands
--------------------------------

port attach with socket path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to allocate a port with ``libibverbs`` from an external application.
To import the external port with extra device arguments,
there is a specific testpmd command
similar to the :ref:`port attach command <port_attach>`::
1810 testpmd> mlx5 port attach (identifier) socket=(path)
where:

* ``identifier``: device identifier with optional parameters,
  the same as in the :ref:`port attach command <port_attach>`.
1816 * ``path``: path to IPC server socket created by the external application.
1818 This command performs:
1820 #. Open IPC client socket using the given path, and connect it.
1822 #. Import ibverbs context and ibverbs protection domain.
1824 #. Add two device arguments for context (``cmd_fd``)
1825 and protection domain (``pd_handle``) to the device identifier.
1826 See :ref:`mlx5 driver options <mlx5_common_driver_options>` for more
1827 information about these device arguments.
1829 #. Call the regular ``port attach`` function with updated identifier.
1831 For example, to attach a port whose PCI address is ``0000:0a:00.0``
1832 and its socket path is ``/var/run/import_ipc_socket``:
1834 .. code-block:: console
1836 testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
1837 testpmd: MLX5 socket path is /var/run/import_ipc_socket
1838 testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
1839 Attaching a new port...
1840 EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
1841 Port 0 is attached. Now total ports is 1
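For completeness, the external application must pass the ibverbs command file
descriptor over the Unix socket as SCM_RIGHTS ancillary data. The sketch below
shows only that generic fd-passing step on the server side; the exact message
layout expected by testpmd (including how ``pd_handle`` is conveyed) is not
detailed here, and ``send_fd()`` is a hypothetical helper:

.. code-block:: c

   #include <string.h>
   #include <sys/socket.h>
   #include <sys/uio.h>

   /* Hypothetical sketch: send one file descriptor (e.g. the ibverbs command
    * fd) to a connected Unix-socket peer via SCM_RIGHTS ancillary data. */
   static int
   send_fd(int sock, int fd)
   {
       char data = 0;
       struct iovec iov = { .iov_base = &data, .iov_len = 1 };
       union {
           struct cmsghdr hdr;
           char buf[CMSG_SPACE(sizeof(int))];
       } u;
       struct msghdr msg;
       struct cmsghdr *cmsg;

       memset(&u, 0, sizeof(u));
       memset(&msg, 0, sizeof(msg));
       msg.msg_iov = &iov;
       msg.msg_iovlen = 1;
       msg.msg_control = u.buf;
       msg.msg_controllen = sizeof(u.buf);
       cmsg = CMSG_FIRSTHDR(&msg);
       cmsg->cmsg_level = SOL_SOCKET;
       cmsg->cmsg_type = SCM_RIGHTS;
       cmsg->cmsg_len = CMSG_LEN(sizeof(int));
       memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
       return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
   }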
1845 port map external Rx queue
1846 ~~~~~~~~~~~~~~~~~~~~~~~~~~
External Rx queue index mapping management.
1850 Map HW queue index (32-bit) to ethdev queue index (16-bit) for external Rx queue::
1852 testpmd> mlx5 port (port_id) ext_rxq map (sw_queue_id) (hw_queue_id)
1854 Unmap external Rx queue::
1856 testpmd> mlx5 port (port_id) ext_rxq unmap (sw_queue_id)
where:

* ``sw_queue_id``: queue index in the range [64536, 65535].
  This range is the highest 1000 numbers of the queue index space.
* ``hw_queue_id``: queue index given by the HW in queue creation.
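The same mapping is available to applications through the mlx5 private API,
assuming the ``rte_pmd_mlx5_external_rx_queue_id_map()`` and
``rte_pmd_mlx5_external_rx_queue_id_unmap()`` functions declared in
``rte_pmd_mlx5.h`` (DPDK 22.03 and newer). A minimal sketch:

.. code-block:: c

   #include <rte_pmd_mlx5.h>

   /* Sketch: map an externally created HW Rx queue (32-bit index from the
    * external allocator) to ethdev queue index 64536, the lowest value of
    * the external Rx queue ID range, then unmap it again. */
   static int
   map_external_rxq(uint16_t port_id, uint32_t hw_queue_id)
   {
       uint16_t sw_queue_id = 64536;
       int ret;

       ret = rte_pmd_mlx5_external_rx_queue_id_map(port_id, sw_queue_id,
                                                   hw_queue_id);
       if (ret != 0)
           return ret;
       /* ... the queue can now be referenced in rte_flow rules ... */
       return rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, sw_queue_id);
   }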