31 MLX4 poll mode driver library
32 =============================
34 The MLX4 poll mode driver library (**librte_pmd_mlx4**) implements support
35 for **Mellanox ConnectX-3** and **Mellanox ConnectX-3 Pro** 10/40 Gbps adapters
36 as well as their virtual functions (VF) in SR-IOV context.
38 Information and documentation about this family of adapters can be found on
39 the `Mellanox website <http://www.mellanox.com>`_. Help is also provided by
40 the `Mellanox community <http://community.mellanox.com/welcome>`_.
42 There is also a `section dedicated to this poll mode driver
43 <http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`_.
Due to external dependencies, this driver is disabled by default. It must
be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX4_PMD=y`` and
recompiling DPDK.
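For example, assuming a build directory created with ``make config``, the
option can be switched on with a one-line edit before rebuilding (an
illustrative sketch, not the only way to do it):

.. code-block:: console

    sed -i 's/CONFIG_RTE_LIBRTE_MLX4_PMD=n/CONFIG_RTE_LIBRTE_MLX4_PMD=y/' build/.config
    make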
51 Implementation details
52 ----------------------
54 Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
55 bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
56 PCI driver that allocates one Ethernet device per detected port.
58 For this reason, one cannot white/blacklist a single port without also
59 white/blacklisting the others on the same device.
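As an illustration, whitelisting a single PCI address on a dual port adapter
still yields two Ethernet devices, one per physical port (the address below is
only an example):

.. code-block:: console

    testpmd -w 0000:83:00.0 -- -i
    [...]
    PMD: librte_pmd_mlx4: 2 port(s) detected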
61 Besides its dependency on libibverbs (that implies libmlx4 and associated
62 kernel support), librte_pmd_mlx4 relies heavily on system calls for control
63 operations such as querying/updating the MTU and flow control parameters.
For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).
71 This capability allows the PMD to coexist with kernel network interfaces
72 which remain functional, although they stop receiving unicast packets as
73 long as they share the same MAC address.
75 Compiling librte_pmd_mlx4 causes DPDK to be linked against libibverbs.
Features
--------

- Multi arch support: x86_64 and POWER8.
81 - RSS, also known as RCA, is supported. In this mode the number of
82 configured RX queues must be a power of two.
83 - VLAN filtering is supported.
84 - Link state information is provided.
85 - Promiscuous mode is supported.
86 - All multicast mode is supported.
87 - Multiple MAC addresses (unicast, multicast) can be configured.
88 - Scattered packets are supported for TX and RX.
89 - Inner L3/L4 (IP, TCP and UDP) TX/RX checksum offloading and validation.
90 - Outer L3 (IP) TX/RX checksum offloading and validation for VXLAN frames.
Limitations
-----------

- RSS hash key cannot be modified.
- RSS RETA cannot be configured.
- RSS always includes L3 (IPv4/IPv6) and L4 (UDP/TCP). They cannot be
  dissociated.
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.
109 - ``CONFIG_RTE_LIBRTE_MLX4_PMD`` (default **n**)
111 Toggle compilation of librte_pmd_mlx4 itself.
113 - ``CONFIG_RTE_LIBRTE_MLX4_DEBUG`` (default **n**)
115 Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.
119 - ``CONFIG_RTE_LIBRTE_MLX4_DEBUG_BROKEN_VERBS`` (default **n**)
121 Mellanox OFED versions earlier than 4.2 may return false errors from
122 Verbs object destruction APIs after the device is plugged out.
123 Enabling this option replaces assertion checks that cause the program
124 to abort with harmless debugging messages as a workaround.
  Relevant only when ``CONFIG_RTE_LIBRTE_MLX4_DEBUG`` is enabled.
127 - ``CONFIG_RTE_LIBRTE_MLX4_SGE_WR_N`` (default **4**)
129 Number of scatter/gather elements (SGEs) per work request (WR). Lowering
130 this number improves performance but also limits the ability to receive
131 scattered packets (packets that do not fit a single mbuf). The default
132 value is a safe tradeoff.
134 - ``CONFIG_RTE_LIBRTE_MLX4_MAX_INLINE`` (default **0**)
  Amount of data to be inlined during TX operations. Improves latency but
  lowers throughput.
139 - ``CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE`` (default **8**)
141 Maximum number of cached memory pools (MPs) per TX queue. Each MP from
142 which buffers are to be transmitted must be associated to memory regions
143 (MRs). This is a slow operation that must be cached.
145 This value is always 1 for RX queues since they use a single MP.
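For reference, a ``.config`` excerpt with the PMD enabled and the remaining
options left at the defaults described above would look as follows (a sketch,
values are the documented defaults):

.. code-block:: console

    CONFIG_RTE_LIBRTE_MLX4_PMD=y
    CONFIG_RTE_LIBRTE_MLX4_DEBUG=n
    CONFIG_RTE_LIBRTE_MLX4_DEBUG_BROKEN_VERBS=n
    CONFIG_RTE_LIBRTE_MLX4_SGE_WR_N=4
    CONFIG_RTE_LIBRTE_MLX4_MAX_INLINE=0
    CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE=8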
147 Environment variables
148 ~~~~~~~~~~~~~~~~~~~~~
150 - ``MLX4_INLINE_RECV_SIZE``
152 A nonzero value enables inline receive for packets up to that size. May
153 significantly improve performance in some cases but lower it in
154 others. Requires careful testing.
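For instance, inline receive can be enabled for small packets by setting the
variable when starting the application (a sketch; the 64-byte threshold
matches the performance tuning advice later in this guide):

.. code-block:: console

    MLX4_INLINE_RECV_SIZE=64 testpmd -l 8-15 -n 4 -- -i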
156 Run-time configuration
157 ~~~~~~~~~~~~~~~~~~~~~~
159 - The only constraint when RSS mode is requested is to make sure the number
160 of RX queues is a power of two. This is a hardware requirement.
162 - librte_pmd_mlx4 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  from being received.
166 - **ethtool** operations on related kernel interfaces also affect the PMD.
- ``port`` parameter [int]

  This parameter provides a physical port to probe and can be specified
  multiple times for additional ports. All ports are probed by default if
  left unspecified.
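A hypothetical invocation probing only one physical port of a whitelisted
adapter could therefore look like the following (PCI address and port index
are examples only):

.. code-block:: console

    testpmd -w 0000:83:00.0,port=0 -- -i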
174 Kernel module parameters
175 ~~~~~~~~~~~~~~~~~~~~~~~~
177 The **mlx4_core** kernel module has several parameters that affect the
behavior and/or the performance of librte_pmd_mlx4. Some of them are described
below.
- **num_vfs** (integer or triplet, optionally prefixed by device address
  string)

  Create the given number of VFs on the specified devices (see the example
  after this list).
186 - **log_num_mgm_entry_size** (integer)
  Device-managed flow steering (DMFS) is required by DPDK applications. It is
  enabled by using a negative value, the last four bits of which have a
  special meaning.
192 - **-1**: force device-managed flow steering (DMFS).
193 - **-7**: configure optimized steering mode to improve performance with the
194 following limitation: VLAN filtering is not supported with this mode.
    This is the recommended mode when VLAN filtering is not needed.
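As an illustrative example only, both parameters can be made persistent
through a modprobe configuration file (the VF count below is arbitrary):

.. code-block:: console

    # /etc/modprobe.d/mlx4_core.conf
    options mlx4_core num_vfs=4 log_num_mgm_entry_size=-7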
Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
202 DPDK and must be installed separately:
- **libibverbs**

  User space verbs framework used by librte_pmd_mlx4. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx4.

  It allows slow and privileged operations (context initialization, hardware
  resource allocations) to be managed by the kernel and fast operations to
  never leave user space.
- **libmlx4**

  Low-level user space driver library for Mellanox ConnectX-3 devices,
  automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.
222 - **Kernel modules** (mlnx-ofed-kernel)
  They provide the kernel-side verbs API and low level device drivers that
  manage actual hardware initialization and resource sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:
231 - mlx4_core: hardware driver managing Mellanox ConnectX-3 devices.
232 - mlx4_en: Ethernet device driver that provides kernel network interfaces.
  - mlx4_ib: InfiniBand device driver.
234 - ib_uverbs: user space driver for verbs (entry point for libibverbs).
236 - **Firmware update**
238 Mellanox OFED releases include firmware updates for ConnectX-3 adapters.
240 Because each release provides new features, these updates must be applied to
241 match the kernel modules and libraries they come with.
Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
licensed.
248 Currently supported by DPDK:
250 - Mellanox OFED **4.1**.
251 - Firmware version **2.36.5000** and above.
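The installed versions can be checked against this list, for example with the
following commands (both tools are part of a Mellanox OFED installation):

.. code-block:: console

    ofed_info -s
    ibv_devinfo | grep fw_ver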
253 Getting Mellanox OFED
254 ~~~~~~~~~~~~~~~~~~~~~
256 While these libraries and kernel modules are available on OpenFabrics
257 Alliance's `website <https://www.openfabrics.org/>`_ and provided by package
258 managers on most distributions, this PMD requires Ethernet extensions that
259 may not be supported at the moment (this is a work in progress).
`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers>`_
includes the necessary support and should be used in the meantime. For DPDK,
264 only libibverbs, libmlx4, mlnx-ofed-kernel packages and firmware updates are
265 required from that distribution.
269 Several versions of Mellanox OFED are available. Installing the version
270 this DPDK release was developed and tested against is strongly
271 recommended. Please check the `prerequisites`_.
Supported NICs
--------------

* Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2*40G)
Quick Start Guide
-----------------

1. Download the latest Mellanox OFED. For more information, check the
   `prerequisites`_.
283 2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   For bare metal use:

   .. code-block:: console

        ./mlnxofedinstall
292 For SR-IOV hypervisors use:
294 .. code-block:: console
296 ./mlnxofedinstall --enable-sriov -hypervisor
298 For SR-IOV virtual machine use:
300 .. code-block:: console
302 ./mlnxofedinstall --guest
3. Verify the firmware is the correct one:

   .. code-block:: console

        ibv_devinfo
4. Set all ports links to Ethernet and follow the instructions on the screen:

   .. code-block:: console

        connectx_port_config

   Or in the manual way:

   .. code-block:: console

        PCI=<NIC PCI address>
        echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port1"
        echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port2"
324 5. In case of bare metal or hypervisor, configure optimized steering mode
325 by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``:
327 .. code-block:: console
329 options mlx4_core log_num_mgm_entry_size=-7
   If VLAN filtering is used, set ``log_num_mgm_entry_size=-1``.
   Performance degradation can occur in this case.
336 6. Restart the driver:
338 .. code-block:: console
340 /etc/init.d/openibd restart
   or:

   .. code-block:: console
346 service openibd restart
348 7. Compile DPDK and you are ready to go. See instructions on
349 :ref:`Development Kit Build System <Development_Kit_Build_System>`
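A minimal build sequence for this last step, assuming the x86_64 native Linux
target and the ``.config`` option described under compilation options, could
look like:

.. code-block:: console

    make config T=x86_64-native-linuxapp-gcc
    sed -i 's/CONFIG_RTE_LIBRTE_MLX4_PMD=n/CONFIG_RTE_LIBRTE_MLX4_PMD=y/' build/.config
    make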
Performance tuning
------------------

1. Verify the optimized steering mode is configured:
356 .. code-block:: console
358 cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size
2. Use the environment variable ``MLX4_INLINE_RECV_SIZE=64`` to get maximum
   performance for 64B messages.
3. For better performance, use the CPU cores local to the NUMA node to which
   the PCIe adapter is connected. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run:
   .. code-block:: console

        lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and the root complex capabilities allow
   both adapters to be put on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without a NUMA
   performance penalty.
379 5. Disable pause frames:
381 .. code-block:: console
383 ethtool -A <netdev> rx off tx off
6. Verify that IO non-posted prefetch is disabled by default. This can be
   checked via the BIOS configuration. Please contact your server vendor for
   more information about the settings.
.. note::

   On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be
   done in the following way:
395 To query the read request size use:
397 .. code-block:: console
399 setpci -s <NIC PCI address> 68.w
401 If the output is different than 3XXX, set it by:
403 .. code-block:: console
405 setpci -s <NIC PCI address> 68.w=3XXX
407 The XXX can be different on different systems. Make sure to configure
408 according to the setpci output.
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox ConnectX-3
414 devices managed by librte_pmd_mlx4.
416 #. Load the kernel modules:
418 .. code-block:: console
420 modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
   Alternatively, if MLNX_OFED is fully installed, the following script can
   be used:
425 .. code-block:: console
427 /etc/init.d/openibd restart
431 User space I/O kernel modules (uio and igb_uio) are not used and do
432 not have to be loaded.
434 #. Make sure Ethernet interfaces are in working order and linked to kernel
435 verbs. Related sysfs entries should be present:
437 .. code-block:: console
439 ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
   Example output:

   .. code-block:: console

        eth2
        eth3
        eth4
        eth5
450 #. Optionally, retrieve their PCI bus addresses for whitelisting:
452 .. code-block:: console
        {
            for intf in eth2 eth3 eth4 eth5;
            do
                (cd "/sys/class/net/${intf}/device/" && pwd -P);
            done;
        } |
        sed -n 's,.*/\(.*\),-w \1,p'
   Example output:

   .. code-block:: console

        -w 0000:83:00.0
        -w 0000:83:00.0
        -w 0000:84:00.0
        -w 0000:84:00.0
473 There are only two distinct PCI bus addresses because the Mellanox
474 ConnectX-3 adapters installed on this system are dual port.
476 #. Request huge pages:
478 .. code-block:: console
        echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
482 #. Start testpmd with basic parameters:
484 .. code-block:: console
486 testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
   Example output:

   .. code-block:: console
493 EAL: PCI device 0000:83:00.0 on NUMA socket 1
494 EAL: probe driver: 15b3:1007 librte_pmd_mlx4
495 PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
496 PMD: librte_pmd_mlx4: 2 port(s) detected
497 PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:b7:50
498 PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:b7:51
499 EAL: PCI device 0000:84:00.0 on NUMA socket 1
500 EAL: probe driver: 15b3:1007 librte_pmd_mlx4
501 PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_1" (VF: false)
502 PMD: librte_pmd_mlx4: 2 port(s) detected
503 PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:ba:b0
504 PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:ba:b1
505 Interactive-mode selected
506 Configuring Port 0 (socket 0)
507 PMD: librte_pmd_mlx4: 0x867d60: TX queues number update: 0 -> 2
508 PMD: librte_pmd_mlx4: 0x867d60: RX queues number update: 0 -> 2
509 Port 0: 00:02:C9:B5:B7:50
510 Configuring Port 1 (socket 0)
511 PMD: librte_pmd_mlx4: 0x867da0: TX queues number update: 0 -> 2
512 PMD: librte_pmd_mlx4: 0x867da0: RX queues number update: 0 -> 2
513 Port 1: 00:02:C9:B5:B7:51
514 Configuring Port 2 (socket 0)
515 PMD: librte_pmd_mlx4: 0x867de0: TX queues number update: 0 -> 2
516 PMD: librte_pmd_mlx4: 0x867de0: RX queues number update: 0 -> 2
517 Port 2: 00:02:C9:B5:BA:B0
518 Configuring Port 3 (socket 0)
519 PMD: librte_pmd_mlx4: 0x867e20: TX queues number update: 0 -> 2
520 PMD: librte_pmd_mlx4: 0x867e20: RX queues number update: 0 -> 2
521 Port 3: 00:02:C9:B5:BA:B1
522 Checking link statuses...
523 Port 0 Link Up - speed 10000 Mbps - full-duplex
524 Port 1 Link Up - speed 40000 Mbps - full-duplex
525 Port 2 Link Up - speed 10000 Mbps - full-duplex
526 Port 3 Link Up - speed 40000 Mbps - full-duplex
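Once the interactive prompt is available, a quick sanity check is to start
forwarding and display port statistics; these are standard testpmd commands
and are shown here only as a usage sketch:

.. code-block:: console

    testpmd> start
    testpmd> show port stats all
    testpmd> stop
    testpmd> quit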