..  BSD LICENSE
    Copyright 2015 6WIND S.A.
    Copyright 2015 Mellanox.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of 6WIND S.A. nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** and **Mellanox
ConnectX-5** families of 10/25/40/50/100 Gb/s adapters as well as their
virtual functions (VF) in SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.

.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
   recompiling it.
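
For example, assuming a build tree generated with ``make config
T=x86_64-native-linuxapp-gcc`` (target and paths are illustrative and should
be adjusted to your environment), the option can be flipped and DPDK rebuilt
as follows:

.. code-block:: console

   sed -i 's,CONFIG_RTE_LIBRTE_MLX5_PMD=n,CONFIG_RTE_LIBRTE_MLX5_PMD=y,' build/.config
   make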

Implementation details
----------------------

Besides its dependency on libibverbs (which implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocation is handled by the kernel,
combined with hardware specifications that allow the device to handle
virtual memory addresses directly, ensures that DPDK applications cannot
access random physical memory (or memory that does not belong to the
current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.

Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

Features
--------

- Multiple TX and RX queues.
- Support for scattered TX and RX frames.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- Several RSS hash keys, one for each flow type.
- Configurable RETA table.
- Support for multiple MAC addresses.
- VLAN filtering.
- RX VLAN stripping.
- TX VLAN insertion.
- RX CRC stripping configuration.
- Promiscuous mode.
- Multicast promiscuous mode.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- Flow API.
- Secondary process TX is supported.
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO.
- Hardware checksum TX offload for VXLAN and GRE.

Limitations
-----------

- Inner RSS for VXLAN frames is not supported yet.
- Port statistics are provided through software counters only.
- Hardware checksum RX offloads for VXLAN inner header are not supported yet.
- Secondary process RX is not supported.

Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx5 itself.

- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

- ``CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE`` (default **8**)

  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
  which buffers are to be transmitted must be associated with memory regions
  (MRs). This is a slow operation that must be cached.

  This value is always 1 for RX queues since they use a single MP.

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_PMD_ENABLE_PADDING``

  Enables HW packet padding in PCI bus transactions.

  When packet size is cache aligned and CRC stripping is enabled, 4 fewer
  bytes are written to the PCI bus. Enabling padding makes such packets
  aligned again.

  In cases where PCI bandwidth is the bottleneck, padding can improve
  performance.

  This is disabled by default since it can also decrease performance for
  unaligned packet sizes.
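
Being an environment variable, it is set in the shell that launches the
application rather than passed as a parameter. A minimal sketch using
``testpmd`` (the PCI address is a placeholder):

.. code-block:: console

   MLX5_PMD_ENABLE_PADDING=1 testpmd -w 0000:05:00.0 -- -i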

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packet
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  saves PCI bandwidth and improves performance at the cost of slightly
  higher CPU usage. Enabled by default.

  Supported on:

  - x86_64 with ConnectX-4 and ConnectX-4 Lx
  - Power8 with ConnectX-4 Lx

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. Improves latency.
  Can improve PPS performance when PCI back pressure is detected and may be
  useful for scenarios involving heavy traffic on many queues.

  It is not enabled by default (set to 0) since the additional software
  logic necessary to handle this mode can lower performance when back
  pressure is not expected.

- ``txqs_min_inline`` parameter [int]

  Enable inline send only when the number of TX queues is greater than or
  equal to this value.

  This option should be used in combination with ``txq_inline`` above.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
  enhanced multi-packet send (Enhanced MPS) for ConnectX-5. MPS allows the
  TX burst function to pack multiple packets into a single descriptor
  session in order to save PCI bandwidth and improve performance at the
  cost of slightly higher CPU usage. When ``txq_inline`` is set along with
  ``txq_mpw_en``, the TX burst function copies entire packet data into the
  TX descriptor instead of including only a pointer to the packet, provided
  there is enough room left in the descriptor. ``txq_inline`` sets the
  per-descriptor space available for either pointers or inlined packets.
  In addition, Enhanced MPS supports hybrid mode, mixing inlined packets
  and pointers in the same descriptor.

  This option cannot be used in conjunction with ``tso`` below. When ``tso``
  is set, ``txq_mpw_en`` is disabled.

  It is currently only supported on the ConnectX-4 Lx and ConnectX-5
  families of adapters. Enabled by default.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of a TX
  descriptor. This can be used to lessen CPU load for memory copy.

  Effective only when Enhanced MPS is supported. Disabled by default.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of a packet to be inlined. If a packet is larger than this
  value, it is not inlined even though there is enough space left in the
  descriptor; it is included by pointer instead.

  Effective only when Enhanced MPS is supported. The default value is 256.

- ``tso`` parameter [int]

  A nonzero value enables hardware TSO. When hardware TSO is enabled,
  packets marked with TCP segmentation offload will be divided into
  segments by the hardware. Disabled by default.
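
All of the above parameters are passed as device arguments appended to the
PCI address of a whitelisted device. A hypothetical ``testpmd`` invocation
(address and values are placeholders; remember that ``txq_mpw_en`` and
``tso`` are mutually exclusive):

.. code-block:: console

   testpmd -w 0000:05:00.0,rxq_cqe_comp_en=0,txq_inline=200,txqs_min_inline=4 -- -i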

Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately (a quick verification example follows
this list):

- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resource allocation) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox ConnectX-4/ConnectX-5
  devices, automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules** (mlnx-ofed-kernel)

  They provide the kernel-side Verbs API and low level device drivers that
  manage actual hardware initialization and resource sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5
    devices and related Ethernet kernel network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-4/ConnectX-5
  adapters.

  Because each release provides new features, these updates must be applied
  to match the kernel modules and libraries they come with.
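
Once these dependencies are installed, a quick sanity check can be performed
with the ``ibv_devinfo`` utility shipped with libibverbs; it should list one
``mlx5`` device per adapter port (the exact output depends on the setup):

.. code-block:: console

   ibv_devinfo | grep -E 'hca_id|link_layer'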

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Currently supported by DPDK:

- Mellanox OFED version: **4.0-2.0.0.0**
- firmware version:

  - ConnectX-4: **12.18.2000**
  - ConnectX-4 Lx: **14.18.2000**
  - ConnectX-5: **16.19.1200**
  - ConnectX-5 Ex: **16.19.1200**

Getting Mellanox OFED
~~~~~~~~~~~~~~~~~~~~~

While these libraries and kernel modules are available on the OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
includes the necessary support and should be used in the meantime. For DPDK,
only the libibverbs, libmlx5 and mlnx-ofed-kernel packages and firmware
updates are required from that distribution.
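
The exact installation procedure depends on the release and distribution;
as a sketch, the ``mlnxofedinstall`` script bundled in the MLNX_OFED archive
(archive name shown is illustrative) installs the required components:

.. code-block:: console

   tar -xvf MLNX_OFED_LINUX-4.0-2.0.0.0-*.tgz
   cd MLNX_OFED_LINUX-4.0-2.0.0.0-*
   ./mlnxofedinstall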

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.

Supported NICs
--------------

* Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
* Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
* Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX413A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
* Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX413A-GCAT (1x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-BCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX416A-GCAT (2x50G)
* Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
* Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
* Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

Known issues
------------

* **Flow pattern without any specific VLAN will match for VLAN packets as well.**

  When a VLAN spec is not specified in the pattern, the matching rule is
  created with VLAN as a wildcard. This means the flow rule::

     flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with vid=3, while the flow rules::

     flow create 0 ingress pattern eth / ipv4 / end ...

  or::

     flow create 0 ingress pattern eth / vlan / ipv4 / end ...

  will match any IPv4 packet, VLAN included.

Notes: testpmd
--------------

Compared to librte_pmd_mlx4, which implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_pmd_mlx4:

.. code-block:: console

   > port stop all
   > port config all rss all
   > port start all

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5 devices managed by librte_pmd_mlx5.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth2
      eth3
      eth4
      eth5

#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:05:00.0
      -w 0000:05:00.1
      -w 0000:06:00.0
      -w 0000:06:00.1

#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex