Copyright 2015 6WIND S.A.
Copyright 2015 Mellanox

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in
  the documentation and/or other materials provided with the
  distribution.
* Neither the name of 6WIND S.A. nor the names of its
  contributors may be used to endorse or promote products derived
  from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** and **Mellanox
ConnectX-5** families of 10/25/40/50/100 Gb/s adapters as well as their
virtual functions (VF) in SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.

.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
   recompiling DPDK.

Implementation details
----------------------

Besides its dependency on libibverbs (which implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow the device to handle
virtual memory addresses directly, ensures that DPDK applications cannot
access random physical memory (or memory that does not belong to the
current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.

Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------

- Multi arch support: x86_64, POWER8, ARMv8.
- Multiple TX and RX queues.
- Support for scattered TX and RX frames.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- Several RSS hash keys, one for each flow type.
- Configurable RETA table.
- Support for multiple MAC addresses.
- RX CRC stripping configuration.
- Multicast promiscuous mode.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO.
- Hardware checksum TX offload for VXLAN and GRE.
- Statistics query including Basic, Extended and per queue.

Limitations
-----------

- Inner RSS for VXLAN frames is not supported yet.
- Port statistics through software counters only. Flow statistics are
  supported by hardware counters.
- Hardware checksum RX offloads for VXLAN inner header are not supported yet.
- Forked secondary process not supported.
- Flow patterns without any specific VLAN item will match VLAN packets as
  well:

  When a VLAN spec is not specified in the pattern, the matching rule is
  created with VLAN as a wildcard. This means the flow rule::

     flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with VID 3, while the flow rules::

     flow create 0 ingress pattern eth / ipv4 / end ...

  or::

     flow create 0 ingress pattern eth / vlan / ipv4 / end ...

  will both match any IPv4 packet (VLAN included).

- A multi-segment packet must have fewer than 6 segments when the TX burst
  function is set to multi-packet send or Enhanced multi-packet send.
  Otherwise it must have fewer than 50 segments.
- Count action for RTE flow is only supported in Mellanox OFED 4.2.

Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx5 itself.

- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

- ``CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE`` (default **8**)

  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
  which buffers are to be transmitted must be associated with memory regions
  (MRs). This is a slow operation that must be cached.

  This value is always 1 for RX queues since they use a single MP.
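
As an illustration, enabling the PMD while keeping the other options at
their defaults amounts to the following ``.config`` lines (a sketch, not an
exhaustive configuration)::

   CONFIG_RTE_LIBRTE_MLX5_PMD=y
   CONFIG_RTE_LIBRTE_MLX5_DEBUG=n
   CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE=8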

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_PMD_ENABLE_PADDING``

  Enables HW packet padding in PCI bus transactions.

  When packet size is cache aligned and CRC stripping is enabled, 4 fewer
  bytes are written to the PCI bus. Enabling padding makes such packets
  aligned again.

  In cases where PCI bandwidth is the bottleneck, padding can improve
  performance.

  This is disabled by default since this can also decrease performance for
  unaligned packet sizes.
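
For example, assuming ``testpmd`` as the application, the variable can be
set for a single run as follows::

   MLX5_PMD_ENABLE_PADDING=1 testpmd <EAL options> -- <testpmd options>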

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  from being received.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQEs on the RX side. This
  feature saves PCI bandwidth and improves performance. Enabled by default
  (see the example after this list for how such parameters are passed).

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 LX and ConnectX-5.
  - POWER8 and ARMv8 with ConnectX-4 LX and ConnectX-5.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. Improves latency.
  Can improve PPS performance when PCI back pressure is detected and may be
  useful for scenarios involving heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it can lower performance when back
  pressure is not expected.

- ``txqs_min_inline`` parameter [int]

  Enable inline send only when the number of TX queues is greater than or
  equal to this value.

  This option should be used in combination with ``txq_inline`` above.

  On ConnectX-4, ConnectX-4 LX and ConnectX-5 without Enhanced MPW:

  - Disabled by default.
  - In case ``txq_inline`` is set, the recommended value is 4.

  On ConnectX-5 with Enhanced MPW:

  - Set to 8 by default.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
  enhanced multi-packet send (Enhanced MPS) for ConnectX-5. MPS allows the
  TX burst function to pack up multiple packets in a single descriptor
  session in order to save PCI bandwidth and improve performance at the
  cost of a slightly higher CPU usage. When ``txq_inline`` is set along
  with ``txq_mpw_en``, the TX burst function copies entire packet data
  onto the TX descriptor instead of including a pointer to the packet, but
  only if there is enough room remaining in the descriptor. ``txq_inline``
  sets the per-descriptor space for either pointers or inlined packets. In
  addition, Enhanced MPS supports hybrid mode - mixing inlined packets and
  pointers in the same descriptor.

  This option cannot be used in conjunction with ``tso`` below. When ``tso``
  is set, ``txq_mpw_en`` is disabled.

  It is currently only supported on the ConnectX-4 Lx and ConnectX-5
  families of adapters. Enabled by default.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables the inclusion of two pointers in the first block
  of a TX descriptor. This can be used to lessen the CPU load of memory
  copies.

  Effective only when Enhanced MPS is supported. Disabled by default.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of a packet to be inlined. If the size of a packet is larger
  than the configured value, the packet isn't inlined even though there's
  enough space remaining in the descriptor. Instead, the packet is included
  by pointer.

  Effective only when Enhanced MPS is supported. The default value is 256.

- ``tso`` parameter [int]

  A nonzero value enables hardware TSO.
  When hardware TSO is enabled, packets marked with TCP segmentation
  offload will be divided into segments by the hardware. Disabled by
  default.

- ``tx_vec_en`` parameter [int]

  A nonzero value enables the TX vector function on ConnectX-5 NICs only,
  provided the number of global TX queues on the port is less than
  ``MLX5_VPMD_MIN_TXQS``.

  Enabled by default on ConnectX-5.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables the RX vector function if the port is not
  configured in multi-segment mode; otherwise this parameter is ignored.
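
These parameters are passed to the PMD as a comma-separated list of
key=value pairs appended to the device PCI address on the EAL command line.
A sketch using ``testpmd`` (the PCI address and values here are examples
only)::

   testpmd -w 0000:05:00.0,rxq_cqe_comp_en=0,txq_inline=128,txqs_min_inline=4 -- -i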

Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox ConnectX-4/ConnectX-5
  devices, it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5
    devices and related Ethernet kernel network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-4/ConnectX-5
  adapters.

  Because each release provides new features, these updates must be applied
  to match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Installation
~~~~~~~~~~~~

Either RDMA Core library with a recent enough Linux kernel release
(recommended) or Mellanox OFED, which provides compatibility with older
releases.

RDMA Core with Linux Kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Minimal kernel version: 4.13-rc4 (see `Linux installation documentation`_)
- Minimal rdma-core version: v15 (see `RDMA Core installation documentation`_)

.. _`Linux installation documentation`: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
.. _`RDMA Core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md

Mellanox OFED
^^^^^^^^^^^^^

- Mellanox OFED version: **4.2**.
- Firmware version:

  - ConnectX-4: **12.20.1010** and above.
  - ConnectX-4 Lx: **14.20.1010** and above.
  - ConnectX-5: **16.20.1010** and above.
  - ConnectX-5 Ex: **16.20.1010** and above.

While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
includes the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.

Supported NICs
--------------

* Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
* Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
* Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX413A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX413A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)
* Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
* Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

Quick Start Guide on OFED
-------------------------

1. Download latest Mellanox OFED. For more info check the `prerequisites`_.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   .. code-block:: console

      ./mlnxofedinstall

3. Verify the firmware is the correct one:

   .. code-block:: console

      mlxfwmanager

4. Verify all ports links are set to Ethernet:

   .. code-block:: console

      mlxconfig -d <mst device> query | grep LINK_TYPE

   Link types may have to be configured to Ethernet:

   .. code-block:: console

      mlxconfig -d <mst device> set LINK_TYPE_P1/2=1/2/3

   * LINK_TYPE_P1=<1|2|3>, 1=Infiniband 2=Ethernet 3=VPI(auto-sense)

   For hypervisors, verify SR-IOV is enabled on the NIC:

   .. code-block:: console

      mlxconfig -d <mst device> query | grep SRIOV_EN

   If needed, enable the relevant fields:

   .. code-block:: console

      mlxconfig -d <mst device> set SRIOV_EN=1 NUM_OF_VFS=16
      mlxfwreset -d <mst device> reset

5. Restart the driver:

   .. code-block:: console

      /etc/init.d/openibd restart

   or:

   .. code-block:: console

      service openibd restart

   If the link type was changed, firmware must be reset as well:

   .. code-block:: console

      mlxfwreset -d <mst device> reset

   For hypervisors, after reset write the sysfs number of virtual functions
   needed for the PF.

   To dynamically instantiate a given number of virtual functions (VFs):

   .. code-block:: console

      echo [num_vfs] > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
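
   The result can be verified by reading back the same sysfs entry, for
   example:

   .. code-block:: console

      cat /sys/class/infiniband/mlx5_0/device/sriov_numvfs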

6. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.

Performance tuning
------------------

1. Configure aggressive CQE Zipping for maximum performance:

   .. code-block:: console

      mlxconfig -d <mst device> set CQE_COMPRESSION=1

   To set it back to the default CQE Zipping mode use:

   .. code-block:: console

      mlxconfig -d <mst device> set CQE_COMPRESSION=0
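
   The current mode can be queried the same way as other fields, for
   example:

   .. code-block:: console

      mlxconfig -d <mst device> query | grep CQE_COMPRESSION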

2. In case of virtualization:

   - Make sure that the hypervisor kernel is 3.16 or newer.
   - Configure boot with ``iommu=pt``.
   - Make sure to allocate a VM on huge pages.
   - Make sure to set CPU pinning.

3. Use a CPU near the local NUMA node to which the PCIe adapter is
   connected, for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run:

   .. code-block:: console

      lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.

4. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node, so that packets can be forwarded from one to the other without a
   NUMA performance penalty.

5. Disable pause frames:

   .. code-block:: console

      ethtool -A <netdev> rx off tx off
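
   The current pause frame settings can be verified with, for example:

   .. code-block:: console

      ethtool -a <netdev>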

6. Verify that IO non-posted prefetch is disabled by default. This can be
   checked via the BIOS configuration. Please contact your server vendor
   for more information about the settings.

7. On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be done in the
   following way:

   To query the read request size use:

   .. code-block:: console

      setpci -s <NIC PCI address> 68.w

   If the output is different from 3XXX, set it by:

   .. code-block:: console

      setpci -s <NIC PCI address> 68.w=3XXX

   The XXX can be different on different systems. Make sure to configure
   according to the setpci output.
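
   The NIC PCI address expected by ``setpci`` can be found with ``lspci``,
   for example:

   .. code-block:: console

      lspci | grep Mellanox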

Notes: testpmd
--------------

Compared to librte_pmd_mlx4 which implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_pmd_mlx4:

.. code-block:: console

   > port stop all
   > port config all rss all
   > port start all
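
The resulting RSS configuration can then be inspected from the same CLI,
for example:

.. code-block:: console

   > show port 0 rss-hash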

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5 devices managed by librte_pmd_mlx5.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.
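
   Whether the modules are loaded can be checked with, for example:

   .. code-block:: console

      lsmod | grep -E 'mlx5|ib_uverbs'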

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth2
      eth3
      eth4
      eth5

#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:05:00.0
      -w 0000:05:00.1
      -w 0000:06:00.0
      -w 0000:06:00.1

#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
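
   The allocation can be verified through ``/proc/meminfo``, for example:

   .. code-block:: console

      grep Huge /proc/meminfo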

#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL: probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex