..  BSD LICENSE
    Copyright 2015 6WIND S.A.
    Copyright 2015 Mellanox.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of 6WIND S.A. nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** and **Mellanox
ConnectX-5** families of 10/25/40/50/100 Gb/s adapters as well as their
virtual functions (VF) in SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.

.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
   recompiling DPDK.

Implementation details
----------------------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow the device to handle virtual
memory addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.
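
For instance, standard tools can still query the kernel netdev while a DPDK
application owns the port (a hedged example, assuming the interface appears
as ``eth2``):

.. code-block:: console

   ethtool -i eth2   # driver and firmware information
   ethtool eth2      # link settings of the PMD-managed port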

Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------

- Multi arch support: x86_64, POWER8, ARMv8.
- Multiple TX and RX queues.
- Support for scattered TX and RX frames.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- Several RSS hash keys, one for each flow type.
- Configurable RETA table.
- Support for multiple MAC addresses.
- RX CRC stripping configuration.
- Multicast promiscuous mode.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware checksum TX offload for VXLAN and GRE.
- Statistics query including Basic, Extended and per queue (see the example
  after this list).
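
As a minimal sketch of the statistics feature, basic, extended and per-queue
counters can be inspected from the standard ``testpmd`` CLI:

.. code-block:: console

   testpmd> show port stats 0
   testpmd> show port xstats 0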

Limitations
-----------

- Inner RSS for VXLAN frames is not supported yet.
- Port statistics through software counters only.
- Hardware checksum RX offloads for VXLAN inner header are not supported yet.
- Forked secondary process not supported.
- Flow pattern without any specific VLAN will match for VLAN packets as well:

  When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wild card.
  Meaning, the flow rule::

        flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with vid=3, while the flow rules::

        flow create 0 ingress pattern eth / ipv4 / end ...

  Or::

        flow create 0 ingress pattern eth / vlan / ipv4 / end ...

  will match any IPv4 packet (VLAN included).

- A multi-segment packet must have fewer than 6 segments when the Tx burst
  function is set to multi-packet send or Enhanced multi-packet send.
  Otherwise it must have fewer than 50 segments.

Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx5 itself.

- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

- ``CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE`` (default **8**)

  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
  which buffers are to be transmitted must be associated with memory regions
  (MRs). This is a slow operation that must be cached.

  This value is always 1 for RX queues since they use a single MP.
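
For example, enabling the PMD with the legacy make-based build system could
look as follows (a minimal sketch, assuming the common
``x86_64-native-linuxapp-gcc`` target):

.. code-block:: console

   make config T=x86_64-native-linuxapp-gcc
   sed -i 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' build/.config
   make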

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_PMD_ENABLE_PADDING``

  Enables HW packet padding in PCI bus transactions.

  When packet size is cache aligned and CRC stripping is enabled, 4 fewer
  bytes are written to the PCI bus. Enabling padding makes such packets
  aligned again.

  In cases where PCI bandwidth is the bottleneck, padding can improve
  performance.

  This is disabled by default since this can also decrease performance for
  unaligned packet sizes.
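
  The variable only has to be present in the environment of the DPDK
  process, for example (a hedged sketch; the PCI address is hypothetical):

  .. code-block:: console

     MLX5_PMD_ENABLE_PADDING=1 testpmd -w 05:00.0 -- -i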

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  saves PCI bandwidth and improves performance. Enabled by default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 LX and ConnectX-5.
  - POWER8 and ARMv8 with ConnectX-4 LX and ConnectX-5.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. Improves latency.
  Can improve PPS performance when PCI back pressure is detected and may be
  useful for scenarios involving heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it can lower performance when back
  pressure is not expected.

- ``txqs_min_inline`` parameter [int]

  Enable inline send only when the number of TX queues is greater or equal
  to this value.

  This option should be used in combination with ``txq_inline`` above.

  On ConnectX-4, ConnectX-4 LX and ConnectX-5 without Enhanced MPW:

  - Disabled by default.
  - In case ``txq_inline`` is set, the recommended value is 4.

  On ConnectX-5 with Enhanced MPW:

  - Set to 8 by default.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
  enhanced multi-packet send (Enhanced MPS) for ConnectX-5. MPS allows the
  TX burst function to pack up multiple packets in a single descriptor
  session in order to save PCI bandwidth and improve performance at the
  cost of a slightly higher CPU usage. When ``txq_inline`` is set along
  with ``txq_mpw_en``, the TX burst function copies entire packet data into
  the TX descriptor instead of including a pointer to the packet, provided
  there is enough room remaining in the descriptor. ``txq_inline`` sets
  per-descriptor space for either pointers or inlined packets. In addition,
  Enhanced MPS supports hybrid mode - mixing inlined packets and pointers
  in the same descriptor.

  This option cannot be used in conjunction with ``tso`` below. When ``tso``
  is set, ``txq_mpw_en`` is disabled.

  It is currently only supported on the ConnectX-4 Lx and ConnectX-5
  families of adapters. Enabled by default.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of TX
  descriptor. This can be used to lessen CPU load for memory copy.

  Effective only when Enhanced MPS is supported. Disabled by default.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of packet to be inlined. This limits the size of packet to
  be inlined. If the size of a packet is larger than the configured value,
  the packet isn't inlined even though there's enough space remaining in
  the descriptor. Instead, the packet is included by pointer.

  Effective only when Enhanced MPS is supported. The default value is 256.

- ``tso`` parameter [int]

  A nonzero value enables hardware TSO.
  When hardware TSO is enabled, packets marked with TCP segmentation
  offload will be divided into segments by the hardware. Disabled by default.

- ``tx_vec_en`` parameter [int]

  A nonzero value enables the Tx vector code path on ConnectX-5 NICs only,
  when the number of global Tx queues on the port is less than
  ``MLX5_VPMD_MIN_TXQS``.

  Enabled by default on ConnectX-5.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables the Rx vector code path if the port is not
  configured in multi-segment mode; otherwise this parameter is ignored.
  Enabled by default.
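
These parameters are passed to the PMD as a comma-separated ``key=value``
list appended to the device address in the EAL ``-w`` option, for example
(a hedged sketch; the PCI address and values are hypothetical):

.. code-block:: console

   testpmd -w 05:00.0,txq_inline=200,txqs_min_inline=4,rxq_cqe_comp_en=0 -- -i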

Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox ConnectX-4/ConnectX-5
  devices; it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5
    devices and related Ethernet kernel network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
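
  Whether they are currently loaded can be checked with standard Linux
  tooling, e.g.:

  .. code-block:: console

     lsmod | grep -E 'mlx5_core|mlx5_ib|ib_uverbs'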

- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-4/ConnectX-5
  adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Installation
~~~~~~~~~~~~

Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED, which provides compatibility with older
releases.

RDMA Core with Linux Kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Minimal kernel version: 4.13-rc4 (see `Linux installation documentation`_)
- Minimal rdma-core version: v15 (see `RDMA Core installation documentation`_)

.. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
.. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
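
Building rdma-core from source generally follows its README (a hedged
sketch, assuming build dependencies are already installed; see the `RDMA
Core installation documentation`_ for the authoritative steps):

.. code-block:: console

   git clone https://github.com/linux-rdma/rdma-core.git
   cd rdma-core
   bash build.sh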

Mellanox OFED
^^^^^^^^^^^^^

- Mellanox OFED version: **4.2**.
- firmware version:

  - ConnectX-4: **12.20.1010** and above.
  - ConnectX-4 Lx: **14.20.1010** and above.
  - ConnectX-5: **16.20.1010** and above.
  - ConnectX-5 Ex: **16.20.1010** and above.

While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
includes the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.

Supported NICs
--------------

* Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
* Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
* Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX413A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX413A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
* Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
* Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

Quick Start Guide on OFED
-------------------------

1. Download latest Mellanox OFED. For more info check the `prerequisites`_.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   .. code-block:: console

        ./mlnxofedinstall

3. Verify the firmware is the correct one:

   .. code-block:: console

        ibv_devinfo

4. Verify all ports links are set to Ethernet:

   .. code-block:: console

        mlxconfig -d <mst device> query | grep LINK_TYPE

   Link types may have to be configured to Ethernet:

   .. code-block:: console

        mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3

   * LINK_TYPE_P1=<1|2|3>, 1=InfiniBand 2=Ethernet 3=VPI(auto-sense)

   For hypervisors, verify SR-IOV is enabled on the NIC:

   .. code-block:: console

        mlxconfig -d <mst device> query | grep SRIOV_EN

   If needed, set the relevant fields:

   .. code-block:: console

        mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
        mlxfwreset -d <mst device> reset

5. Restart the driver:

   .. code-block:: console

        /etc/init.d/openibd restart

   or:

   .. code-block:: console

        service openibd restart

   If link type was changed, firmware must be reset as well:

   .. code-block:: console

        mlxfwreset -d <mst device> reset

   For hypervisors, after reset, write the number of virtual functions
   needed for the PF to sysfs.

   To dynamically instantiate a given number of virtual functions (VFs):

   .. code-block:: console

        echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
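
   The newly created VFs should then appear on the PCI bus; a quick sanity
   check with standard tooling:

   .. code-block:: console

        lspci | grep Mellanox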

6. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.

Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance:

   .. code-block:: console

        mlxconfig -d <mst device> s CQE_COMPRESSION=1

   To set it back to the default CQE Zipping mode use:

   .. code-block:: console

        mlxconfig -d <mst device> s CQE_COMPRESSION=0

2. In case of virtualization:

   - Make sure that hypervisor kernel is 3.16 or newer.
   - Configure boot with ``iommu=pt``.
   - Use 1G huge pages.
   - Make sure to allocate a VM on huge pages.
   - Make sure to set CPU pinning.

3. Use the CPU near the local NUMA node to which the PCIe adapter is
   connected, for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run:

   .. code-block:: console

        lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.

4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without a NUMA
   performance penalty.

5. Disable pause frames:

   .. code-block:: console

        ethtool -A <netdev> rx off tx off

6. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

7. On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be done in the
   following way:

   To query the read request size use:

   .. code-block:: console

        setpci -s <NIC PCI address> 68.w

   If the output is different from 3XXX, set it by:

   .. code-block:: console

        setpci -s <NIC PCI address> 68.w=3XXX

   The XXX can be different on different systems. Make sure to configure
   according to the setpci output.

Notes for testpmd
-----------------

Compared to librte_pmd_mlx4 that implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_pmd_mlx4:

.. code-block:: console

   > port stop all
   > port config all rss all
   > port start all

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5 devices managed by librte_pmd_mlx5.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth2
      eth3
      eth4
      eth5

#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      { for intf in eth2 eth3 eth4 eth5;
        do
          (cd "/sys/class/net/${intf}/device/" && pwd -P);
        done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:05:00.0
      -w 0000:05:00.1
      -w 0000:06:00.0
      -w 0000:06:00.1

#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex