..  BSD LICENSE
    Copyright(c) 2015 Netronome Systems, Inc. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.

    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NFP poll mode driver library
============================

Netronome's sixth generation of flow processors pack 216 programmable
cores and over 100 hardware accelerators that uniquely combine packet,
flow, security and content processing in a single device that scales
up to 400 Gbps.

This document explains how to use DPDK with the Netronome Poll Mode
Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
(NFP-6xxx).

Currently the driver supports virtual functions (VFs) only.

Dependencies
------------

Before using the Netronome DPDK PMD some NFP-6xxx configuration,
which is not related to DPDK, is required. The system requires
installation of **Netronome's BSP (Board Support Package)**, which includes
Linux drivers, programs and libraries.

If you have an NFP-6xxx device you should already have the code and
documentation for doing this configuration. Contact
**support@netronome.com** to obtain the latest available firmware.

The NFP Linux kernel drivers (including the required PF driver for the
NFP) are available on Github at
**https://github.com/Netronome/nfp-drv-kmods** along with build
instructions.

DPDK runs in userspace and PMDs use the Linux kernel UIO interface to
allow access to physical devices from userspace. The NFP PMD requires
the **igb_uio** UIO driver, available with DPDK, to perform correct
initialization.

Building the software
---------------------

Netronome's PMD code is provided in the **drivers/net/nfp** directory.
Because of Netronome's BSP dependencies the driver is disabled by default
in the DPDK build using the **common_linuxapp** configuration file. To enable
the driver, or if you use another configuration file and want NFP
support, this variable is needed:

- **CONFIG_RTE_LIBRTE_NFP_PMD=y**

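As a minimal sketch of this step, the option can be enabled in the
configuration file and DPDK rebuilt; the target name below is an
assumption for a typical x86_64 Linux build and should be adjusted for
your toolchain:

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_NFP_PMD=n/CONFIG_RTE_LIBRTE_NFP_PMD=y/' config/common_linuxapp
   make config T=x86_64-native-linuxapp-gcc
   make
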
Once DPDK is built all the DPDK apps and examples include support for
the NFP PMD.

System configuration
--------------------

Using the NFP PMD is no different from using other PMDs. The usual steps are:

#. **Configure hugepages:** All major Linux distributions have the hugepages
   functionality enabled by default, and by default the system uses it for
   transparent hugepages. In this case, however, some hugepages need to be
   created/reserved for use with DPDK through the hugetlbfs file system.
   First the virtual file system needs to be mounted:

   .. code-block:: console

      mount -t hugetlbfs none /mnt/hugetlbfs

   The command uses the common mount point for this file system, which needs
   to be created first if it does not exist.

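   For example, the mount point directory may be created before mounting:

   .. code-block:: console

      mkdir -p /mnt/hugetlbfs
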
   Configuring hugepages is performed via sysfs:

   .. code-block:: console

      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This sysfs file is used to specify the number of hugepages to reserve.
   For example:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This will reserve 2GB of memory using 1024 2MB hugepages. The file may be
   read to see if the operation was performed correctly:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   The number of unused hugepages may also be inspected; before executing
   the DPDK app it should match the value of nr_hugepages:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages

   The hugepages reservation should be performed at system initialization and
   it is usual to use a kernel parameter for configuration. If the reservation
   is attempted on a busy system it will likely fail. Reserving memory for
   hugepages may be done by adding the following to the grub kernel command line:

   .. code-block:: console

      default_hugepagesz=2M hugepagesz=2M hugepages=1024

   This will reserve 2GB of memory using 2MB hugepages.

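   As a sketch, on distributions using GRUB 2 (file locations are an
   assumption and vary by distribution), the parameters can be appended to
   the ``GRUB_CMDLINE_LINUX`` variable and the boot configuration regenerated:

   .. code-block:: console

      # In /etc/default/grub, extend the kernel command line:
      # GRUB_CMDLINE_LINUX="... default_hugepagesz=2M hugepagesz=2M hugepages=1024"
      grub2-mkconfig -o /boot/grub2/grub.cfg    # or update-grub on Debian/Ubuntu
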
   Finally, for a NUMA system the allocation needs to be made on the correct
   NUMA node. In a DPDK app there is a master core which will (usually) perform
   memory allocation. It is important that some of the hugepages are reserved
   on the NUMA memory node where the network device is attached. This is because
   of a restriction in DPDK by which TX and RX descriptor rings must be created
   on the master core's NUMA node.

   Per-node allocation of hugepages may be inspected and controlled using sysfs.
   For example:

   .. code-block:: console

      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

   For a NUMA system there will be a specific hugepage directory per node,
   allowing control of hugepage reservation. A common problem may occur when
   hugepage reservation is performed after the system has been working for
   some time: configuration using the global sysfs hugepage interface will
   succeed, but the per-node allocations may be unsatisfactory.

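   For example, to reserve 512 hugepages explicitly on node 0 through the
   per-node sysfs interface shown above:

   .. code-block:: console

      echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
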
   The number of hugepages that need to be reserved depends on how the app uses
   TX and RX descriptors, and packet mbufs.

#. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works with
   Virtual Functions (VFs) on a NFP device. Make sure that one of the Physical
   Function (PF) drivers from the above Github repository is installed and
   loaded.

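   As an illustrative sketch, assuming the PF driver module built from that
   repository is named ``nfp``, it could be loaded and verified like this:

   .. code-block:: console

      modprobe nfp
      lsmod | grep nfp
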
   Virtual Functions need to be enabled before they can be used with the PMD.
   Before enabling the VFs it is useful to obtain information about the
   current NFP PCI device detected by the system:

   .. code-block:: console

      lspci -d19ee:

   Now, for example, configure two virtual functions on an NFP-6xxx device
   whose PCI system identity is "0000:03:00.0":

   .. code-block:: console

      echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

   The result of this command may be shown using lspci again:

   .. code-block:: console

      lspci -d19ee: -k

   Two new PCI devices should appear in the output of the above command. The
   -k option shows the device driver, if any, that the devices are bound to.
   Depending on the modules loaded, at this point the new PCI devices may be
   bound to the nfp_netvf driver.

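   Illustrative output for one of the new VFs (bus addresses, device names
   and the driver binding will vary):

   .. code-block:: console

      03:08.0 Ethernet controller: Netronome Systems, Inc. Device 6003
              Kernel driver in use: nfp_netvf
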
#. **To install the uio kernel module (manually):** All major Linux
   distributions have support for this kernel module so it is straightforward
   to install it:

   .. code-block:: console

      modprobe uio

   The module should now be listed by the lsmod command.

#. **To install the igb_uio kernel module (manually):** This module is part
   of DPDK sources and configured by default (CONFIG_RTE_EAL_IGB_UIO=y), so
   after a build it can be loaded from the build directory:

   .. code-block:: console

      insmod ./build/kmod/igb_uio.ko

   The module should now be listed by the lsmod command.

   Depending on which NFP modules are loaded, it could be necessary to
   detach NFP devices from the nfp_netvf module. If this is the case the
   device needs to be unbound, for example:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind

   The output of lspci should now show that 0000:03:08.0 is not bound to
   any kernel driver.

   The next step is to add the NFP PCI ID to the IGB UIO driver:

   .. code-block:: console

      echo 19ee 6003 > /sys/bus/pci/drivers/igb_uio/new_id

   And then to bind the device to the igb_uio driver:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/drivers/igb_uio/bind

   lspci should now show the device bound to the igb_uio driver.

#. **Using scripts to install and bind modules:** DPDK provides scripts which are
   useful for installing the UIO modules and for binding the right device to those
   modules, avoiding the need to do so manually:

   * **dpdk-setup.sh**
   * **dpdk-devbind.py**

   Configuration may be performed by running dpdk-setup.sh, which invokes
   dpdk-devbind.py as needed. Executing dpdk-setup.sh will display a menu of
   configuration options.

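   For example, a typical direct dpdk-devbind.py invocation (the script
   location is an assumption and varies by DPDK version: ``tools/`` in older
   releases, ``usertools/`` in newer ones) to check device status and bind
   the VF shown earlier:

   .. code-block:: console

      ./tools/dpdk-devbind.py --status
      ./tools/dpdk-devbind.py --bind=igb_uio 0000:03:08.0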