..  BSD LICENSE

    Copyright 2012 6WIND S.A.
    Copyright 2015 Mellanox

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of 6WIND S.A. nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
MLX4 poll mode driver library
=============================

The MLX4 poll mode driver library (**librte_pmd_mlx4**) implements support
for **Mellanox ConnectX-3** and **Mellanox ConnectX-3 Pro** 10/40 Gbps adapters
as well as their virtual functions (VF) in SR-IOV context.

Information and documentation about this family of adapters can be found on
the `Mellanox website <http://www.mellanox.com>`_. Help is also provided by
the `Mellanox community <http://community.mellanox.com/welcome>`_.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`_.
.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX4_PMD=y`` and
   recompiling DPDK.
Implementation details
----------------------

Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.

For this reason, one cannot white/blacklist a single port without also
white/blacklisting the others on the same device.

Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.

Compiling librte_pmd_mlx4 causes DPDK to be linked against libibverbs.
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX4_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx4 itself.
- ``CONFIG_RTE_LIBRTE_MLX4_DLOPEN_DEPS`` (default **n**)

  Build PMD with additional code to make it loadable without hard
  dependencies on **libibverbs** nor **libmlx4**, which may not be installed
  on the target system.

  In this mode, their presence is still required for it to run properly,
  however their absence won't prevent a DPDK application from starting (with
  ``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
  missing with ``ldd(1)``.

  It works by moving these dependencies to a purpose-built rdma-core "glue"
  plug-in, which must either be installed in ``CONFIG_RTE_EAL_PMD_PATH`` if
  set, or in a standard location for the dynamic linker (e.g. ``/lib``) if
  left to the default empty string (``""``).

  This option has no performance impact.
- ``CONFIG_RTE_LIBRTE_MLX4_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

- ``CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE`` (default **8**)

  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
  which buffers are to be transmitted must be associated to memory regions
  (MRs). This is a slow operation that must be cached.

  This value is always 1 for RX queues since they use a single MP.
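As an illustration, enabling the PMD together with a larger TX memory pool
cache amounts to lines like the following in ``.config`` (the value ``16``
here is an arbitrary example, not a recommendation):

```shell
# .config fragment (illustrative values): enable the PMD and raise the
# per-TX-queue memory pool cache from its default of 8.
CONFIG_RTE_LIBRTE_MLX4_PMD=y
CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE=16
```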
Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX4_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.

  Only matters when compiled with ``CONFIG_RTE_LIBRTE_MLX4_DLOPEN_DEPS``
  enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
  since ``LD_LIBRARY_PATH`` has no effect in this case.
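For example, pointing the loader at two candidate directories would look as
follows (both paths are made up for illustration; use wherever the glue
plug-in was actually installed):

```shell
# Colon-separated search list for the rdma-core "glue" plug-in.
# Both directories below are hypothetical.
export MLX4_GLUE_PATH="/opt/dpdk/glue:/usr/local/lib/dpdk-glue"
echo "$MLX4_GLUE_PATH"
```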
Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx4 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``port`` parameter [int]

  This parameter provides a physical port to probe and can be specified
  multiple times for additional ports. All ports are probed by default if
  left unspecified.
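As a sketch (the PCI address below is hypothetical), a whitelist devargs
string restricting the probe to physical port 0 could be assembled like this:

```shell
# Build a "-w <pci>,port=<n>" style devargs string for testpmd/EAL.
# The PCI address is hypothetical; substitute the real adapter address.
pci="0000:83:00.0"
devargs="${pci},port=0"
echo "testpmd -w ${devargs} -- -i"
```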
Kernel module parameters
~~~~~~~~~~~~~~~~~~~~~~~~

The **mlx4_core** kernel module has several parameters that affect the
behavior and/or the performance of librte_pmd_mlx4. Some of them are
described below.

- **num_vfs** (integer or triplet, optionally prefixed by device address
  strings)

  Create the given number of VFs on the specified devices.

- **log_num_mgm_entry_size** (integer)

  Device-managed flow steering (DMFS) is required by DPDK applications. It
  is enabled by using a negative value, the last four bits of which have a
  special meaning.

  - **-1**: force device-managed flow steering (DMFS).
  - **-7**: configure optimized steering mode to improve performance with the
    following limitation: VLAN filtering is not supported with this mode.
    This is the recommended mode in case VLAN filter is not needed.
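The rules above can be restated as a small helper; this is only a sketch
summarizing the documented values, not a tool shipped with the driver:

```shell
# Interpret a log_num_mgm_entry_size value per the rules above:
# a negative value enables DMFS; -7 selects the optimized steering mode.
dmfs_mode() {
    case "$1" in
        -7) echo "optimized steering, VLAN filtering unsupported" ;;
        -1) echo "DMFS forced" ;;
        -*) echo "DMFS enabled" ;;
        *)  echo "DMFS disabled (not usable by DPDK)" ;;
    esac
}
dmfs_mode -7
```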
Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resources
allocations and initialization. The following dependencies are not part of
DPDK and must be installed separately:
- **libibverbs** (provided by rdma-core package)

  User space verbs framework used by librte_pmd_mlx4. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx4.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx4** (provided by rdma-core package)

  Low-level user space driver library for Mellanox ConnectX-3 devices,
  it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.
- **Kernel modules**

  They provide the kernel-side verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx4_core: hardware driver managing Mellanox ConnectX-3 devices.
  - mlx4_en: Ethernet device driver that provides kernel network interfaces.
  - mlx4_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for verbs (entry point for libibverbs).
- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-3 adapters.

  Because each release provides new features, these updates must be applied
  to match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Depending on system constraints and user preferences, either the RDMA core
library with a recent enough Linux kernel release (recommended) or Mellanox
OFED, which provides compatibility with older releases, can be installed.
Current RDMA core package and Linux kernel (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Minimal Linux kernel version: 4.14.
- Minimal RDMA core version: v15 (see `RDMA core installation documentation`_).

.. _`RDMA core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
.. _Mellanox_OFED_as_a_fallback:

Mellanox OFED as a fallback
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- `Mellanox OFED`_ version: **4.2, 4.3**.
- firmware version: **2.42.5000** and above.

.. _`Mellanox OFED`: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.
Installing Mellanox OFED
^^^^^^^^^^^^^^^^^^^^^^^^

1. Download latest Mellanox OFED.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   For bare metal use:

   .. code-block:: console

      ./mlnxofedinstall --dpdk --upstream-libs

   For SR-IOV hypervisors use:

   .. code-block:: console

      ./mlnxofedinstall --dpdk --upstream-libs --enable-sriov --hypervisor

   For SR-IOV virtual machine use:

   .. code-block:: console

      ./mlnxofedinstall --dpdk --upstream-libs --guest

3. Verify the firmware is the correct one:

   .. code-block:: console

      ibv_devinfo

4. Set all ports links to Ethernet, follow instructions on the screen:

   .. code-block:: console

      connectx_port_config

5. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
Supported NICs
--------------

* Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2*40G)
Quick Start Guide
-----------------

1. Set all ports links to Ethernet:

   .. code-block:: console

      PCI=<NIC PCI address>
      echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port0"
      echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port1"

   .. note::

      If using Mellanox OFED, the port link can be set to Ethernet
      permanently with the ``connectx_port_config`` tool it provides.
      See :ref:`Mellanox_OFED_as_a_fallback`.
2. In case of bare metal or hypervisor, configure optimized steering mode
   by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``:

   .. code-block:: console

      options mlx4_core log_num_mgm_entry_size=-7

   .. note::

      If VLAN filtering is used, set log_num_mgm_entry_size=-1.
      Performance degradation can occur in this case.
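The step above can also be scripted; the sketch below stages the option in a
temporary file first, since writing ``/etc/modprobe.d/mlx4_core.conf`` itself
requires root privileges:

```shell
# Stage the mlx4_core option in a temporary file before installing it
# as /etc/modprobe.d/mlx4_core.conf (the install step needs root).
conf="$(mktemp)"
echo 'options mlx4_core log_num_mgm_entry_size=-7' > "$conf"
grep -c '^options mlx4_core' "$conf"
```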
3. Restart the driver:

   .. code-block:: console

      /etc/init.d/openibd restart

   or:

   .. code-block:: console

      service openibd restart

4. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.
Performance tuning
------------------

1. Verify the optimized steering mode is configured:

   .. code-block:: console

      cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size

2. Use the CPU near the local NUMA node to which the PCIe adapter is
   connected, for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run a topology tool
   such as ``lstopo-no-graphics`` (from the hwloc package) to identify the
   NUMA node to which the PCIe adapter is connected.
3. If more than one adapter is used, and root complex capabilities allow
   both adapters to reside on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without a NUMA
   performance penalty.
4. Disable pause frames:

   .. code-block:: console

      ethtool -A <netdev> rx off tx off

5. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.
On some machines, depending on the machine integrator, it may be beneficial
to set the PCI max read request parameter to 1K. This can be done in the
following way:

To query the read request size use:

.. code-block:: console

   setpci -s <NIC PCI address> 68.w

If the output is different from 3XXX, set it by:

.. code-block:: console

   setpci -s <NIC PCI address> 68.w=3XXX

The XXX can be different on different systems. Make sure to configure
according to the setpci output.
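The ``3XXX`` check boils down to examining the register's top nibble while
preserving the rest. A sketch with an illustrative value (real values come
from the ``setpci`` query above):

```shell
# Illustrative register value; a real one comes from "setpci ... 68.w".
val=0x2936
top=$(( (val >> 12) & 0xf ))
if [ "$top" -eq 3 ]; then
    echo "max read request already set to 1K"
else
    # Keep the low three nibbles (the XXX part), set the top nibble to 3.
    printf '%04X\n' $(( (val & 0x0fff) | 0x3000 ))   # prints 3936
fi
```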
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox ConnectX-3
devices managed by librte_pmd_mlx4.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.
#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth2
      eth3
      eth4
      eth5
#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:83:00.0
      -w 0000:83:00.0
      -w 0000:84:00.0
      -w 0000:84:00.0

   There are only two distinct PCI bus addresses because the Mellanox
   ConnectX-3 adapters installed on this system are dual port.
#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
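This reserves 1024 pages of 2048 kB each, i.e. 2 GiB of hugepage memory; the
arithmetic can be sanity-checked as follows:

```shell
# 1024 huge pages of 2048 kB each = 2048 MB (2 GiB) reserved.
pages=1024
kb_per_page=2048
echo "$(( pages * kb_per_page / 1024 )) MB"
```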
#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL:   probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:b7:50
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:b7:51
      EAL: PCI device 0000:84:00.0 on NUMA socket 1
      EAL:   probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_1" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:ba:b0
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:ba:b1
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx4: 0x867d60: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867d60: RX queues number update: 0 -> 2
      Port 0: 00:02:C9:B5:B7:50
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx4: 0x867da0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867da0: RX queues number update: 0 -> 2
      Port 1: 00:02:C9:B5:B7:51
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx4: 0x867de0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867de0: RX queues number update: 0 -> 2
      Port 2: 00:02:C9:B5:BA:B0
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx4: 0x867e20: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867e20: RX queues number update: 0 -> 2
      Port 3: 00:02:C9:B5:BA:B1
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 40000 Mbps - full-duplex