X-Git-Url: http://git.droids-corp.org/?a=blobdiff_plain;f=doc%2Fguides%2Fnics%2Fnfp.rst;h=c732fb1f0e31455192ff516c43f9ab0b5d445261;hb=76f5f48c51ccbb5ab696af6ae22a7c1acbe63a07;hp=55ba64d459845a43feb0937f23b5ee72a5a2b0d8;hpb=80bc1752f16e20d7de9d0add8384403df89a93a0;p=dpdk.git

diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index 55ba64d459..c732fb1f0e 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -61,18 +61,19 @@ instructions.
 
 DPDK runs in userspace and PMDs uses the Linux kernel UIO interface to
 allow access to physical devices from userspace. The NFP PMD requires
-a separate UIO driver, **nfp_uio**, to perform correct
-initialization. This driver is part of Netronome´s BSP and it is
-equivalent to Intel's igb_uio driver.
+the **igb_uio** UIO driver, available with DPDK, to perform correct
+initialization.
 
 Building the software
 ---------------------
 
 Netronome's PMD code is provided in the **drivers/net/nfp** directory.
-Because Netronome´s BSP dependencies the driver is disabled by default
-in DPDK build using **common_linuxapp configuration** file. Enabling the
-driver or if you use another configuration file and want to have NFP
-support, this variable is needed:
+Although the NFP PMD has Netronome's BSP dependencies, it can be compiled
+along with other DPDK PMDs even if no BSP was installed beforehand. Of
+course, a DPDK application will require such a BSP to be installed in
+order to use the NFP PMD.
+
+The default PMD configuration is in the **common_linuxapp** configuration file:
 
 - **CONFIG_RTE_LIBRTE_NFP_PMD=y**
 
@@ -80,85 +81,15 @@ Once DPDK is built all the DPDK apps and examples include support for the NFP
 PMD.
 
 
-System configuration
---------------------
-
-Using the NFP PMD is not different to using other PMDs. Usual steps are:
-
-#. **Configure hugepages:** All major Linux distributions have the hugepages
-   functionality enabled by default. By default this allows the system uses for
-   working with transparent hugepages. But in this case some hugepages need to
-   be created/reserved for use with the DPDK through the hugetlbfs file system.
-   First the virtual file system need to be mounted:
-
-   .. code-block:: console
-
-      mount -t hugetlbfs none /mnt/hugetlbfs
-
-   The command uses the common mount point for this file system and it needs to
-   be created if necessary.
-
-   Configuring hugepages is performed via sysfs:
-
-   .. code-block:: console
-
-      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-
-   This sysfs file is used to specify the number of hugepages to reserve.
-   For example:
-
-   .. code-block:: console
-
-      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-
-   This will reserve 2GB of memory using 1024 2MB hugepages. The file may be
-   read to see if the operation was performed correctly:
+Driver compilation and testing
+------------------------------
 
-   .. code-block:: console
-
-      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-
-   The number of unused hugepages may also be inspected.
-
-   Before executing the DPDK app it should match the value of nr_hugepages.
-
-   .. code-block:: console
-
-      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
-
-   The hugepages reservation should be performed at system initialisation and
-   it is usual to use a kernel parameter for configuration. If the reservation
-   is attempted on a busy system it will likely fail. Reserving memory for
-   hugepages may be done adding the following to the grub kernel command line:
-
-   .. code-block:: console
-
-      default_hugepagesz=1M hugepagesz=2M hugepages=1024
-
-   This will reserve 2GBytes of memory using 2Mbytes huge pages.
+Refer to the document :ref:`compiling and testing a PMD for a NIC `
+for details.
 
-   Finally, for a NUMA system the allocation needs to be made on the correct
-   NUMA node. In a DPDK app there is a master core which will (usually) perform
-   memory allocation. It is important that some of the hugepages are reserved
-   on the NUMA memory node where the network device is attached. This is because
-   of a restriction in DPDK by which TX and RX descriptors rings must be created
-   on the master code.
-
-   Per-node allocation of hugepages may be inspected and controlled using sysfs.
-   For example:
-
-   .. code-block:: console
-
-      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
-
-   For a NUMA system there will be a specific hugepage directory per node
-   allowing control of hugepage reservation. A common problem may occur when
-   hugepages reservation is performed after the system has been working for
-   some time. Configuration using the global sysfs hugepage interface will
-   succeed but the per-node allocations may be unsatisfactory.
-
-   The number of hugepages that need to be reserved depends on how the app uses
-   TX and RX descriptors, and packets mbufs.
+System configuration
+--------------------
 
 #. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works with
    Virtual Functions (VFs) on a NFP device. Make sure that one of the Physical
@@ -190,76 +121,3 @@ Using the NFP PMD is not different to using other PMDs. Usual steps are:
    -k option shows the device driver, if any, that devices are bound to.
 
    Depending on the modules loaded at this point the new PCI devices may be
   bound to nfp_netvf driver.
-
-#. **To install the uio kernel module (manually):** All major Linux
-   distributions have support for this kernel module so it is straightforward
-   to install it:
-
-   .. code-block:: console
-
-      modprobe uio
-
-   The module should now be listed by the lsmod command.
-
-#. **To install the nfp_uio kernel module (manually):** This module supports
-   NFP-6xxx devices through the UIO interface.
-
-   This module is part of Netronome´s BSP and it should be available when the
-   BSP is installed.
-
-   .. code-block:: console
-
-      modprobe nfp_uio.ko
-
-   The module should now be listed by the lsmod command.
-
-   Depending on which NFP modules are loaded, nfp_uio may be automatically
-   bound to the NFP PCI devices by the system. Otherwise the binding needs
-   to be done explicitly. This is the case when nfp_netvf, the Linux kernel
-   driver for NFP VFs, was loaded when VFs were created. As described later
-   in this document this configuration may also be performed using scripts
-   provided by the Netronome´s BSP.
-
-   First the device needs to be unbound, for example from the nfp_netvf
-   driver:
-
-   .. code-block:: console
-
-      echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind
-
-      lspci -d19ee: -k
-
-   The output of lspci should now show that 0000:03:08.0 is not bound to
-   any driver.
-
-   The next step is to add the NFP PCI ID to the NFP UIO driver:
-
-   .. code-block:: console
-
-      echo 19ee 6003 > /sys/bus/pci/drivers/nfp_uio/new_id
-
-   And then to bind the device to the nfp_uio driver:
-
-   .. code-block:: console
-
-      echo 0000:03:08.0 > /sys/bus/pci/drivers/nfp_uio/bind
-
-      lspci -d19ee: -k
-
-   lspci should show that device bound to nfp_uio driver.
-
-#. **Using tools from Netronome´s BSP to install and bind modules:** DPDK provides
-   scripts which are useful for installing the UIO modules and for binding the
-   right device to those modules avoiding doing so manually. However, these scripts
-   have not support for Netronome´s UIO driver. Along with drivers, the BSP installs
-   those DPDK scripts slightly modified with support for Netronome´s UIO driver.
-
-   Those specific scripts can be found in Netronome´s BSP installation directory.
-   Refer to BSP documentation for more information.
-
-   * **setup.sh**
-   * **dpdk_nic_bind.py**
-
-   Configuration may be performed by running setup.sh which invokes
-   dpdk_nic_bind.py as needed. Executing setup.sh will display a menu of
-   configuration options.
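The nfp_uio binding steps removed by this patch use the generic sysfs unbind/new_id/bind interface, which works identically with the in-tree **igb_uio** driver that the updated text now requires. A minimal sketch of that same sequence against igb_uio, reusing the example VF address ``0000:03:08.0`` and ID ``19ee 6003`` from the removed text (values are illustrative and must match your system; the script is guarded so it is a no-op on machines without the device or driver):

```shell
# Illustrative values taken from the removed documentation text.
PCI_ADDR="0000:03:08.0"   # example NFP VF PCI address; adjust to your system
VENDOR="19ee"             # Netronome PCI vendor ID
DEVICE="6003"             # example NFP VF device ID

# Only attempt the rebind when both the device and igb_uio are present.
if [ -e "/sys/bus/pci/devices/$PCI_ADDR" ] && [ -d /sys/bus/pci/drivers/igb_uio ]; then
    # Unbind from the current driver (e.g. nfp_netvf), if any is bound.
    if [ -e "/sys/bus/pci/devices/$PCI_ADDR/driver" ]; then
        echo "$PCI_ADDR" > "/sys/bus/pci/devices/$PCI_ADDR/driver/unbind"
    fi
    # Teach igb_uio about the NFP VF ID, then bind the device to it.
    echo "$VENDOR $DEVICE" > /sys/bus/pci/drivers/igb_uio/new_id
    echo "$PCI_ADDR" > /sys/bus/pci/drivers/igb_uio/bind
    # Verify: the device should now show igb_uio as its driver.
    lspci -d "$VENDOR": -k
else
    echo "NFP VF $PCI_ADDR or igb_uio driver not present; nothing to do"
fi
```

In practice the same result is usually achieved with DPDK's bundled device-bind script rather than raw sysfs writes; either way, root privileges are required.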