.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2012 6WIND S.A.
   Copyright 2015 Mellanox Technologies, Ltd
MLX4 poll mode driver library
=============================
The MLX4 poll mode driver library (**librte_pmd_mlx4**) implements support
for **Mellanox ConnectX-3** and **Mellanox ConnectX-3 Pro** 10/40 Gbps adapters
as well as their virtual functions (VF) in SR-IOV context.

Information and documentation about this family of adapters can be found on
the `Mellanox website <http://www.mellanox.com>`_. Help is also provided by
the `Mellanox community <http://community.mellanox.com/welcome>`_.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`_.
Due to external dependencies, this driver is disabled by default. It must
be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX4_PMD=y`` and
recompiling DPDK.
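
As a sketch (assuming the usual make-based DPDK workflow and a build
directory named ``build``; adjust the target to your platform), the option
can be toggled before compiling:

.. code-block:: console

     make config T=x86_64-native-linuxapp-gcc
     sed -i 's,\(CONFIG_RTE_LIBRTE_MLX4_PMD\)=n,\1=y,' build/.config
     make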
Implementation details
----------------------
Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.

For this reason, one cannot white/blacklist a single port without also
white/blacklisting the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
The :ref:`flow_isolated_mode` is supported.

Compiling librte_pmd_mlx4 causes DPDK to be linked against libibverbs.
Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.
- ``CONFIG_RTE_LIBRTE_MLX4_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx4 itself.
- ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` (default **n**)

  Build PMD with additional code to make it loadable without hard
  dependencies on **libibverbs** nor **libmlx4**, which may not be installed
  on the target system.

  In this mode, their presence is still required for it to run properly,
  however their absence won't prevent a DPDK application from starting (with
  ``CONFIG_RTE_BUILD_SHARED_LIB`` disabled) and they won't show up as
  missing with ``ldd(1)``.
  It works by moving these dependencies to a purpose-built rdma-core "glue"
  plug-in which must either be installed in a directory whose name is based
  on ``CONFIG_RTE_EAL_PMD_PATH`` suffixed with ``-glue`` if set, or in a
  standard location for the dynamic linker (e.g. ``/lib``) if left to the
  default empty string (``""``).

  This option has no performance impact.
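
  For instance, with this option enabled, running ``ldd(1)`` on the
  resulting ``testpmd`` binary (path assumed from a default make build)
  should report no direct dependency on the verbs libraries:

  .. code-block:: console

       ldd build/app/testpmd | grep 'libibverbs\|libmlx4'

  No match should be found; the rdma-core "glue" plug-in is loaded with
  ``dlopen()`` at run time instead.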
- ``CONFIG_RTE_IBVERBS_LINK_STATIC`` (default **n**)

  Embed static flavour of the dependencies **libibverbs** and **libmlx4**
  in the PMD shared library or the executable static binary.
- ``CONFIG_RTE_LIBRTE_MLX4_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.
Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX4_GLUE_PATH``

  A list of directories in which to search for the rdma-core "glue" plug-in,
  separated by colons or semi-colons.

  Only matters when compiled with ``CONFIG_RTE_IBVERBS_LINK_DLOPEN``
  enabled and most useful when ``CONFIG_RTE_EAL_PMD_PATH`` is also set,
  since ``LD_LIBRARY_PATH`` has no effect in this case.
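
  As a sketch, assuming the glue plug-in was copied to a custom directory
  (the paths below are hypothetical), the variable can be exported before
  launching the application:

  .. code-block:: console

       export MLX4_GLUE_PATH=/opt/dpdk/glue:/usr/local/lib/dpdk-glue
       testpmd -l 8-15 -n 4 -- -i

  Each colon- or semi-colon-separated entry is searched in order.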
Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~
- librte_pmd_mlx4 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packets
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.
- ``port`` parameter [int]

  This parameter provides a physical port to probe and can be specified multiple
  times for additional ports. All ports are probed by default if left
  unspecified.
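
  For example (a sketch; the PCI address is hypothetical), probing only
  physical port 0 of a dual-port adapter:

  .. code-block:: console

       testpmd -l 8-15 -n 4 -w 0000:83:00.0,port=0 -- -i

  The parameter can be repeated in the same device argument to probe
  additional ports.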
- ``mr_ext_memseg_en`` parameter [int]

  A nonzero value enables extending memseg when registering DMA memory. If
  enabled, the number of entries in the MR (Memory Region) lookup table on the
  datapath is minimized, which benefits performance. On the other hand, it
  worsens memory utilization because registered memory is pinned by the kernel
  driver. Even if a page in the extended chunk is freed, it does not become
  reusable until the entire memory is freed.

  Enabled by default.
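
  For example (a sketch with a hypothetical PCI address), disabling memseg
  extension to favor memory utilization over lookup performance:

  .. code-block:: console

       testpmd -l 8-15 -n 4 -w 0000:83:00.0,mr_ext_memseg_en=0 -- -i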
Kernel module parameters
~~~~~~~~~~~~~~~~~~~~~~~~
The **mlx4_core** kernel module has several parameters that affect the
behavior and/or the performance of librte_pmd_mlx4. Some of them are described
below.

- **num_vfs** (integer or triplet, optionally prefixed by device address
  strings)

  Create the given number of VFs on the specified devices.
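
  For example (a sketch; the value is hypothetical), creating four VFs on
  every supported device by adding the following line to
  ``/etc/modprobe.d/mlx4_core.conf``:

  .. code-block:: console

       options mlx4_core num_vfs=4

  The same parameter can alternatively be passed directly to
  ``modprobe mlx4_core``.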
- **log_num_mgm_entry_size** (integer)

  Device-managed flow steering (DMFS) is required by DPDK applications. It is
  enabled by using a negative value, the last four bits of which have a
  special meaning.

  - **-1**: force device-managed flow steering (DMFS).
  - **-7**: configure optimized steering mode to improve performance with the
    following limitation: VLAN filtering is not supported with this mode.
    This is the recommended mode in case VLAN filter is not needed.
Limitations
-----------

- For secondary process:

  - Forked secondary process not supported.
  - External memory unregistered in EAL memseg list cannot be used for DMA
    unless such memory has been registered by ``mlx4_mr_update_ext_mp()`` in
    primary process and remapped to the same virtual address in secondary
    process. If the external memory is registered by primary process but has
    different virtual address in secondary process, unexpected error may happen.
- CRC stripping is supported by default and always reported as "true".
  The ability to enable/disable CRC stripping requires OFED version
  4.3-1.5.0.0 and above or rdma-core version v18 and above.
- TSO (Transmit Segmentation Offload) is supported in OFED version
  4.4 and above.

Prerequisites
-------------
This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:
- **libibverbs** (provided by rdma-core package)

  User space verbs framework used by librte_pmd_mlx4. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx4.

  It allows slow and privileged operations (context initialization, hardware
  resources allocations) to be managed by the kernel and fast operations to
  never leave user space.
- **libmlx4** (provided by rdma-core package)

  Low-level user space driver library for Mellanox ConnectX-3 devices,
  it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules**

  They provide the kernel-side verbs API and low level device drivers that
  manage actual hardware initialization and resources sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:
  - mlx4_core: hardware driver managing Mellanox ConnectX-3 devices.
  - mlx4_en: Ethernet device driver that provides kernel network interfaces.
  - mlx4_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for verbs (entry point for libibverbs).
- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-3 adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.
Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
licensed.

Depending on system constraints and user preferences, either the RDMA core
library with a recent enough Linux kernel release (recommended) or Mellanox
OFED, which provides compatibility with older releases, can be installed.
Current RDMA core package and Linux kernel (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Minimal Linux kernel version: 4.14.
- Minimal RDMA core version: v15 (see `RDMA core installation documentation`_).

- Starting with rdma-core v21, static libraries can be built::

    cd build
    CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
    ninja

.. _`RDMA core installation documentation`: https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
If rdma-core libraries are built but not installed, the DPDK makefile can
link them thanks to these environment variables:

- ``EXTRA_CFLAGS=-I/path/to/rdma-core/build/include``
- ``EXTRA_LDFLAGS=-L/path/to/rdma-core/build/lib``
- ``PKG_CONFIG_PATH=/path/to/rdma-core/build/lib/pkgconfig``
.. _Mellanox_OFED_as_a_fallback:

Mellanox OFED as a fallback
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- `Mellanox OFED`_ version: **4.4, 4.5**.
- firmware version: **2.42.5000** and above.

.. _`Mellanox OFED`: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
Several versions of Mellanox OFED are available. Installing the version
this DPDK release was developed and tested against is strongly
recommended. Please check the `prerequisites`_.
Installing Mellanox OFED
^^^^^^^^^^^^^^^^^^^^^^^^
1. Download latest Mellanox OFED.

2. Install the required libraries and kernel modules either by installing
   only the required set, or by installing the entire Mellanox OFED:

   For bare metal use:

   .. code-block:: console

        ./mlnxofedinstall --dpdk --upstream-libs
   For SR-IOV hypervisors use:

   .. code-block:: console

        ./mlnxofedinstall --dpdk --upstream-libs --enable-sriov --hypervisor

   For SR-IOV virtual machine use:

   .. code-block:: console

        ./mlnxofedinstall --dpdk --upstream-libs --guest
3. Verify the firmware is the correct one:

   .. code-block:: console

        ibv_devinfo
4. Set all ports links to Ethernet, follow instructions on the screen:

   .. code-block:: console

        connectx_port_config

5. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
Supported NICs
--------------

* Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2*40G)
Quick Start Guide
-----------------

1. Set all ports links to Ethernet:

   .. code-block:: console

        PCI=<NIC PCI address>
        echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port0"
        echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port1"
   .. note::

      If using Mellanox OFED, one can permanently set the port link to
      Ethernet using the connectx_port_config tool provided by it.
      See :ref:`Mellanox_OFED_as_a_fallback`.
2. In case of bare metal or hypervisor, configure optimized steering mode
   by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``:

   .. code-block:: console

        options mlx4_core log_num_mgm_entry_size=-7
   .. note::

      If VLAN filtering is used, set log_num_mgm_entry_size=-1.
      Performance degradation can occur in this case.
3. Restart the driver:

   .. code-block:: console

        /etc/init.d/openibd restart

   or:

   .. code-block:: console

        service openibd restart
4. Compile DPDK and you are ready to go. See instructions on
   :ref:`Development Kit Build System <Development_Kit_Build_System>`.
Performance tuning
------------------

1. Verify the optimized steering mode is configured:

   .. code-block:: console

        cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size
2. Use the CPU near local NUMA node to which the PCIe adapter is connected,
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run:

   .. code-block:: console

        lstopo-no-graphics

   to identify the NUMA node to which the PCIe adapter is connected.
3. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth
   degradation, it is recommended to locate both adapters on the same NUMA
   node in order to forward packets from one to the other without a NUMA
   performance penalty.
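
   The node an adapter is attached to can be read from sysfs (the PCI
   address below is hypothetical):

   .. code-block:: console

        cat /sys/bus/pci/devices/0000:83:00.0/numa_node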
4. Disable pause frames:

   .. code-block:: console

        ethtool -A <netdev> rx off tx off
5. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.
   .. note::

      On some machines, depending on the machine integrator, it is beneficial
      to set the PCI max read request parameter to 1K. This can be
      done in the following way:

      To query the read request size use:

      .. code-block:: console

           setpci -s <NIC PCI address> 68.w

      If the output is different from 3XXX, set it by:

      .. code-block:: console

           setpci -s <NIC PCI address> 68.w=3XXX

      The XXX can be different on different systems. Make sure to configure
      according to the setpci output.
6. To minimize overhead of searching Memory Regions:

   - ``--socket-mem`` is recommended to reserve a predictable amount of memory.
   - Configure per-lcore cache when creating Mempools for packet buffers.
   - Refrain from dynamically allocating/freeing memory at run-time.
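
   As a sketch (the core list, memory amounts and PCI address are
   hypothetical), the first two points could translate into:

   .. code-block:: console

        testpmd -l 8-15 -n 4 --socket-mem=2048,2048 -w 0000:83:00.0 -- \
                --mbcache=512 -i

   where ``--socket-mem`` reserves a fixed amount of memory per NUMA node
   and ``--mbcache`` sets the per-lcore mempool cache size for packet
   buffers.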
Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox ConnectX-3
devices managed by librte_pmd_mlx4.
#. Load the kernel modules:

   .. code-block:: console

        modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

        /etc/init.d/openibd restart
   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.
#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

        ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

        eth2
        eth3
        eth4
        eth5
#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

        {
            for intf in eth2 eth3 eth4 eth5;
            do
                (cd "/sys/class/net/${intf}/device/" && pwd -P);
            done;
        } |
        sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

        -w 0000:83:00.0
        -w 0000:83:00.0
        -w 0000:84:00.0
        -w 0000:84:00.0

   .. note::

      There are only two distinct PCI bus addresses because the Mellanox
      ConnectX-3 adapters installed on this system are dual port.
#. Request huge pages:

   .. code-block:: console

        echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
#. Start testpmd with basic parameters:

   .. code-block:: console

        testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL:   probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:b7:50
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:b7:51
      EAL: PCI device 0000:84:00.0 on NUMA socket 1
      EAL:   probe driver: 15b3:1007 librte_pmd_mlx4
      PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_1" (VF: false)
      PMD: librte_pmd_mlx4: 2 port(s) detected
      PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:ba:b0
      PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:ba:b1
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx4: 0x867d60: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867d60: RX queues number update: 0 -> 2
      Port 0: 00:02:C9:B5:B7:50
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx4: 0x867da0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867da0: RX queues number update: 0 -> 2
      Port 1: 00:02:C9:B5:B7:51
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx4: 0x867de0: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867de0: RX queues number update: 0 -> 2
      Port 2: 00:02:C9:B5:BA:B0
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx4: 0x867e20: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx4: 0x867e20: RX queues number update: 0 -> 2
      Port 3: 00:02:C9:B5:BA:B1
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 40000 Mbps - full-duplex