..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2012 6WIND S.A.
    Copyright 2015 Mellanox Technologies, Ltd

MLX4 poll mode driver library
=============================

The MLX4 poll mode driver library (**librte_pmd_mlx4**) implements support
for **Mellanox ConnectX-3** and **Mellanox ConnectX-3 Pro** 10/40 Gbps adapters
as well as their virtual functions (VF) in SR-IOV context.

Information and documentation about this family of adapters can be found on
the `Mellanox website <http://www.mellanox.com>`_. Help is also provided by
the `Mellanox community <http://community.mellanox.com/welcome>`_.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`_.

Due to external dependencies, this driver is disabled by default. It must
be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX4_PMD=y`` and
recompiling it.
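
As a minimal sketch of enabling it with the make-based build (the ``build``
directory name is only an assumption; adjust it to your build tree), the
option can be flipped in the generated ``.config`` before rebuilding::

   sed -i 's/CONFIG_RTE_LIBRTE_MLX4_PMD=n/CONFIG_RTE_LIBRTE_MLX4_PMD=y/' build/.config
   make -j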

Implementation details
----------------------

Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.

For this reason, one cannot white/blacklist a single port without also
white/blacklisting the others on the same device.
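
For instance, whitelisting a single PCI address (the address below is only
illustrative) still exposes both physical ports of that adapter as two DPDK
ports; there is no way to restrict the application to just one of them::

   testpmd -w 0000:83:00.0 -- -i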

Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocation is handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.

The :ref:`flow_isolated_mode` is supported.

Compiling librte_pmd_mlx4 causes DPDK to be linked against libibverbs.
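
As a quick way to see this (the binary path below is only illustrative, and
the libraries will not appear if the ``dlopen`` linkage option described under
the compilation options is used), the link dependency can be inspected with
``ldd``::

   ldd build/app/testpmd | grep -E 'ibverbs|mlx4'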

Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX4_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx4 itself.

- ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)

  Build PMD with additional code to make it loadable without hard
  dependencies on **libibverbs** or **libmlx4**, which may not be installed
  on the target system.

  In this mode, their presence is still required for it to run properly,
  however their absence won't prevent a DPDK application from starting (with
  ``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
  missing with ``ldd(1)``.

  It works by moving these dependencies to a purpose-built rdma-core "glue"
  plug-in which must either be installed in a directory whose name is based
  on ``CONFIG_RTE_EAL_PMD_PATH`` suffixed with ``-glue`` if set, or in a
  standard location for the dynamic linker (e.g. ``/lib``) if left to the
  default empty string (``""``).

  This option has no performance impact.

- ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)

  Embed static flavor of the dependencies **libibverbs** and **libmlx4**
  in the PMD shared library or the executable static binary.

- ``CONFIG_RTE_LIBRTE_MLX4_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

This option is available in meson:

- ``ibverbs_link`` can be ``static``, ``shared``, or ``dlopen``.
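
For example (a sketch; the build directory name is only illustrative)::

   meson build -Dibverbs_link=dlopen
   ninja -C build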

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX4_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.

  Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
  enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
  since ``LD_LIBRARY_PATH`` has no effect in this case.
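
For example (a sketch; the directory is only illustrative)::

   export MLX4_GLUE_PATH=/usr/local/lib/dpdk-glue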

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx4 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packet
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``port`` parameter [int]

  This parameter provides a physical port to probe and can be specified
  multiple times for additional ports. All ports are probed by default if
  left unspecified (see the device argument example after this list).

- ``mr_ext_memseg_en`` parameter [int]

  A nonzero value enables extending memseg when registering DMA memory. If
  enabled, the number of entries in the MR (Memory Region) lookup table on the
  datapath is minimized and it benefits performance. On the other hand, it
  worsens memory utilization because registered memory is pinned by the kernel
  driver. Even if a page in the extended chunk is freed, it does not become
  reusable until the entire memory is freed.
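
These parameters are passed as device arguments appended to the PCI address.
A sketch (the address and values are placeholders only)::

   testpmd -w 0000:83:00.0,port=0,mr_ext_memseg_en=0 -- -i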

Kernel module parameters
~~~~~~~~~~~~~~~~~~~~~~~~

The **mlx4_core** kernel module has several parameters that affect the
behavior and/or the performance of librte_pmd_mlx4. Some of them are described
below, with a combined example after the list.

- **num_vfs** (integer or triplet, optionally prefixed by device address
  strings)

  Create the given number of VFs on the specified devices.

- **log_num_mgm_entry_size** (integer)

  Device-managed flow steering (DMFS) is required by DPDK applications. It is
  enabled by using a negative value, the last four bits of which have a
  special meaning.

  - **-1**: force device-managed flow steering (DMFS).
  - **-7**: configure optimized steering mode to improve performance with the
    following limitation: VLAN filtering is not supported with this mode.
    This is the recommended mode in case VLAN filter is not needed.
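
As a sketch (the VF count is a placeholder), these parameters are typically
set in ``/etc/modprobe.d/mlx4_core.conf``::

   options mlx4_core log_num_mgm_entry_size=-7 num_vfs=4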

Limitations
-----------

- For secondary process:

  - Forked secondary process not supported.
  - External memory unregistered in EAL memseg list cannot be used for DMA
    unless such memory has been registered by ``mlx4_mr_update_ext_mp()`` in
    the primary process and remapped to the same virtual address in the
    secondary process. If the external memory is registered by the primary
    process but has a different virtual address in the secondary process,
    unexpected errors may happen.

- CRC stripping is supported by default and always reported as "true".
  The ability to enable/disable CRC stripping requires OFED version
  4.3-1.5.0.0 and above or rdma-core version v18 and above.

- TSO (Transmit Segmentation Offload) is supported in OFED version

Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs** (provided by rdma-core package)

  User space verbs framework used by librte_pmd_mlx4. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx4.

  It allows slow and privileged operations (context initialization, hardware
  resource allocation) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx4** (provided by rdma-core package)

  Low-level user space driver library for Mellanox ConnectX-3 devices;
  it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**

  They provide the kernel-side verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices (a quick check is shown after the dependency list):

  - mlx4_core: hardware driver managing Mellanox ConnectX-3 devices.
  - mlx4_en: Ethernet device driver that provides kernel network interfaces.
  - mlx4_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-3 adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.
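
Before launching a DPDK application, the presence of the kernel modules listed
above can be quickly verified (a sketch)::

   lsmod | grep -E 'mlx4|ib_uverbs'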

Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
licensed.

Depending on system constraints and user preferences, either the RDMA core
library with a recent enough Linux kernel release (recommended) or Mellanox
OFED, which provides compatibility with older releases, can be installed.

Current RDMA core package and Linux kernel (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Minimal Linux kernel version: 4.14.
- Minimal RDMA core version: v15 (see `RDMA core installation documentation`_).

- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja

.. _`RDMA core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md

If rdma-core libraries are built but not installed, the DPDK makefile can link
them, thanks to these environment variables (an example invocation follows the
list):

- ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
- ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
- ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``
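
For example (a sketch; the rdma-core build path is a placeholder)::

   PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig \
       make EXTRA_CFLAGS=-I/path/to/rdma-core/build/include \
            EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib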

.. _Mellanox_OFED_as_a_fallback:

Mellanox OFED as a fallback
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- `Mellanox OFED`_ version: **4.4, 4.5, 4.6**.
- firmware version: **2.42.5000** and above.

.. _`Mellanox OFED`: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers

Several versions of Mellanox OFED are available. Installing the version
this DPDK release was developed and tested against is strongly
recommended. Please check the `prerequisites`_.

Installing Mellanox OFED
^^^^^^^^^^^^^^^^^^^^^^^^

1. Download the latest Mellanox OFED.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   For bare metal use::

      ./mlnxofedinstall --dpdk --upstream-libs

   For SR-IOV hypervisors use::

      ./mlnxofedinstall --dpdk --upstream-libs --enable-sriov --hypervisor

   For SR-IOV virtual machine use::

      ./mlnxofedinstall --dpdk --upstream-libs --guest

3. Verify the firmware is the correct one::

      ibv_devinfo

4. Set all ports links to Ethernet, following the instructions on the screen::

      connectx_port_config

5. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.

Supported NICs
--------------

* Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2*40G)

Quick Start Guide
-----------------

1. Set all ports links to Ethernet::

      PCI=<NIC PCI address>
      echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port1"
      echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port2"

   If using Mellanox OFED one can permanently set the port link
   to Ethernet using the connectx_port_config tool provided by it.
   See :ref:`Mellanox_OFED_as_a_fallback`.

2. In case of bare metal or hypervisor, configure optimized steering mode
   by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``::

      options mlx4_core log_num_mgm_entry_size=-7

   If VLAN filtering is used, set log_num_mgm_entry_size=-1.
   Performance degradation can occur in this case.

3. Restart the driver::

      /etc/init.d/openibd restart

   or::

      service openibd restart

4. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.

Performance tuning
------------------

1. Verify the optimized steering mode is configured::

      cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size

2. Use the CPU near the local NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run::

      lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.

3. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node, in order to forward packets from one to the other without the NUMA
   performance penalty.

4. Disable pause frames::

      ethtool -A <netdev> rx off tx off

5. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

   On some machines, depending on the machine integrator, it is beneficial
   to set the PCI max read request parameter to 1K. This can be
   done in the following way:

   To query the read request size use::

      setpci -s <NIC PCI address> 68.w

   If the output is different than 3XXX, set it by::

      setpci -s <NIC PCI address> 68.w=3XXX

   The XXX can be different on different systems. Make sure to configure
   according to the setpci output.

6. To minimize overhead of searching Memory Regions (see the sketch after
   this list):

   - ``--socket-mem`` is recommended to pin memory by predictable amount.
   - Configure per-lcore cache when creating Mempools for packet buffer.
   - Refrain from dynamically allocating/freeing memory in run-time.
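
A sketch combining these recommendations (core list, memory amount, PCI
address and cache size are placeholders only)::

   testpmd -l 8-15 -n 4 --socket-mem=2048,0 -w 0000:83:00.0 -- --mbcache=512 -i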

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox ConnectX-3
devices managed by librte_pmd_mlx4.

#. Load the kernel modules::

      modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run::

      /etc/init.d/openibd restart

   User space I/O kernel modules (uio and igb_uio) are not used and do
   not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present::

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output::

      eth2
      eth3
      eth4
      eth5

#. Optionally, retrieve their PCI bus addresses for whitelisting::

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output::

      -w 0000:83:00.0
      -w 0000:83:00.0
      -w 0000:84:00.0
      -w 0000:84:00.0

   There are only two distinct PCI bus addresses because the Mellanox
   ConnectX-3 adapters installed on this system are dual port.

#. Request huge pages::

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Start testpmd with basic parameters::

      testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i

   Example output::

      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:b7:50
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:b7:51
      EAL: PCI device 0000:84:00.0 on NUMA socket 1
      EAL: probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_1" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:ba:b0
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:ba:b1
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx4: 0x867d60: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867d60: RX queues number update: 0 -> 2
      Port 0: 00:02:C9:B5:B7:50
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx4: 0x867da0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867da0: RX queues number update: 0 -> 2
      Port 1: 00:02:C9:B5:B7:51
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx4: 0x867de0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867de0: RX queues number update: 0 -> 2
      Port 2: 00:02:C9:B5:BA:B0
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx4: 0x867e20: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867e20: RX queues number update: 0 -> 2
      Port 3: 00:02:C9:B5:BA:B1
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 40000 Mbps - full-duplex