Removed redundant references to Intel(R) DPDK in Programmers Guide.
Signed-off-by: Siobhan Butler <siobhan.a.butler@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Development Kit Build System
============================
-The Intel® DPDK requires a build system for compilation activities and so on.
-This section describes the constraints and the mechanisms used in the Intel® DPDK framework.
+The DPDK requires a build system for compilation activities and so on.
+This section describes the constraints and the mechanisms used in the DPDK framework.
There are two use-cases for the framework:
-* Compilation of the Intel® DPDK libraries and sample applications;
+* Compilation of the DPDK libraries and sample applications;
the framework generates specific binary libraries,
include files and sample applications
-* Compilation of an external application or library, using an installed binary Intel® DPDK
+* Compilation of an external application or library, using an installed binary DPDK
Building the Development Kit Binary
-----------------------------------
-The following provides details on how to build the Intel® DPDK binary.
+The following provides details on how to build the DPDK binary.
Build Directory Concept
~~~~~~~~~~~~~~~~~~~~~~~
Refer to
:ref:`Development Kit Root Makefile Help <Development_Kit_Root_Makefile_Help>`
-for details about make commands that can be used from the root of Intel® DPDK.
+for details about make commands that can be used from the root of DPDK.
Building External Applications
------------------------------
-Since Intel® DPDK is in essence a development kit, the first objective of end users will be to create an application using this SDK.
+Since DPDK is in essence a development kit, the first objective of end users will be to create an application using this SDK.
To compile an application, the user must set the RTE_SDK and RTE_TARGET environment variables.
.. code-block:: console
Makefile Description
--------------------
-General Rules For Intel® DPDK Makefiles
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+General Rules For DPDK Makefiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In the Intel® DPDK, Makefiles always follow the same scheme:
+In the DPDK, Makefiles always follow the same scheme:
#. Include $(RTE_SDK)/mk/rte.vars.mk at the beginning.
Useful Variables Provided by the Build System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* RTE_SDK: The absolute path to the Intel® DPDK sources.
+* RTE_SDK: The absolute path to the DPDK sources.
When compiling the development kit, this variable is automatically set by the framework.
It has to be defined by the user as an environment variable if compiling an external application.
* SYMLINK-y-$(INSTPATH): A list of files to be installed in $(INSTPATH).
The files must be available from VPATH and will be linked (symbolically) in $(RTE_OUTPUT)/$(INSTPATH).
- This variable can be used in almost any Intel® DPDK Makefile.
+ This variable can be used in almost any DPDK Makefile.
* PREBUILD: A list of prerequisite actions to be taken before building. The user should use += to append data in this variable.
Development Kit Root Makefile Help
==================================
-The Intel® DPDK provides a root level Makefile with targets for configuration, building, cleaning, testing, installation and others.
+The DPDK provides a root level Makefile with targets for configuration, building, cleaning, testing, installation and others.
These targets are explained in the following sections.
Configuration Targets
* all, build or just make
- Build the Intel® DPDK in the output directory previously created by a make config.
+ Build the DPDK in the output directory previously created by a make config.
Example:
* install
- Build the Intel® DPDK binary.
+ Build the DPDK binary.
Actually, this builds each supported target in a separate directory.
The name of each directory is the name of the target.
The name of the targets to install can be optionally specified using T=mytarget.
Compiling for Debug
-------------------
-To compile the Intel® DPDK and sample applications with debugging information included and the optimization level set to 0,
+To compile the DPDK and sample applications with debugging information included and the optimization level set to 0,
the EXTRA_CFLAGS environment variable should be set before compiling as follows:
.. code-block:: console
export EXTRA_CFLAGS='-O0 -g'
-The Intel® DPDK and any user or sample applications can then be compiled in the usual way.
+The DPDK and any user or sample applications can then be compiled in the usual way.
For example:
.. code-block:: console
Driver for VM Emulated Devices
==============================
-The Intel® DPDK EM poll mode driver supports the following emulated devices:
+The DPDK EM poll mode driver supports the following emulated devices:
* qemu-kvm emulated Intel® 82540EM Gigabit Ethernet Controller (qemu e1000 device)
* Fedora* 18 (64-bit)
-For supported kernel versions, refer to the *Intel® DPDK Release Notes*.
+For supported kernel versions, refer to the *DPDK Release Notes*.
Setting Up a KVM Virtual Machine
--------------------------------
* Guest Operating System: Fedora 14
-* Linux Kernel Version: Refer to the Intel® DPDK Getting Started Guide
+* Linux Kernel Version: Refer to the DPDK Getting Started Guide
* Target Applications: testpmd
00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:05.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
-#. Install the Intel® DPDK and run testpmd.
+#. Install the DPDK and run testpmd.
Known Limitations of Emulated Devices
-------------------------------------
Typical services expected from the EAL are:
-* Intel® DPDK Loading and Launching:
- The Intel® DPDK and its application are linked as a single application and must be loaded by some means.
+* DPDK Loading and Launching:
+ The DPDK and its application are linked as a single application and must be loaded by some means.
* Core Affinity/Assignment Procedures:
The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances.
EAL in a Linux-userland Execution Environment
---------------------------------------------
-In a Linux user space environment, the Intel® DPDK application runs as a user-space application using the pthread library.
+In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library.
PCI information about devices and address space is discovered through the /sys kernel interface and through a module called igb_uio.
Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory is mmap'd in the application.
The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
-This memory is exposed to Intel® DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.
+This memory is exposed to DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.
-At this point, the Intel® DPDK services layer will be initialized, then through pthread setaffinity calls,
+At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.
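In outline, an application passes its command-line arguments to rte_eal_init() and then
launches a function on each of the per-lcore threads created by the EAL.
The sketch below is illustrative only; lcore_worker() and the surrounding logic are hypothetical.

.. code-block:: c

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* Hypothetical worker; each instance runs as a user-level thread
     * pinned to one logical core by the EAL. */
    static int
    lcore_worker(void *arg)
    {
        (void)arg;
        printf("worker running on lcore %u\n", rte_lcore_id());
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        unsigned lcore_id;

        /* Sets up hugepage memory, scans devices and spawns one pinned
         * thread per logical core in the coremask. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        RTE_LCORE_FOREACH_SLAVE(lcore_id)
            rte_eal_remote_launch(lcore_worker, NULL, lcore_id);

        lcore_worker(NULL);         /* the master lcore also does work */
        rte_eal_mp_wait_lcore();    /* wait for all lcores to finish */
        return 0;
    }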
The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.
.. note::
- The only interrupts supported by the Intel® DPDK Poll-Mode Drivers are those for link status change,
+ The only interrupts supported by the DPDK Poll-Mode Drivers are those for link status change,
i.e. link up and link down notification.
Blacklisting
~~~~~~~~~~~~
The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
-so they are ignored by the Intel® DPDK.
+so they are ignored by the DPDK.
The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
The following variables must be defined:
-* ${RTE_SDK}: Points to the root directory of the Intel® DPDK.
+* ${RTE_SDK}: Points to the root directory of the DPDK.
* ${RTE_TARGET}: Reference the target to be used for compilation (for example, x86_64-native-linuxapp-gcc).
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-Extending the Intel® DPDK
+Extending the DPDK
-=========================
+==================
-This chapter describes how a developer can extend the Intel® DPDK to provide a new library,
+This chapter describes how a developer can extend the DPDK to provide a new library,
a new target, or support a new target.
Example: Adding a New Library libfoo
------------------------------------
-To add a new library to the Intel® DPDK, proceed as follows:
+To add a new library to the DPDK, proceed as follows:
#. Add a new configuration option:
#. Update mk/rte.app.mk, and add -lfoo in LDLIBS variable when the option is enabled.
- This will automatically add this flag when linking an Intel® DPDK application.
+ This will automatically add this flag when linking a DPDK application.
-#. Build the Intel® DPDK with the new library (we only show a specific target here):
+#. Build the DPDK with the new library (we only show a specific target here):
.. code-block:: console
Example: Using libfoo in the Test Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The test application is used to validate all functionality of the Intel® DPDK.
+The test application is used to validate all functionality of the DPDK.
Once you have added a library, a new test case should be added in the test application.
* A new test_foo.c file should be added, that includes foo.h and calls the foo() function from test_foo().
${RTE_SDK}/doc/rst/test_report/autotests directory. This script must be updated also.
If libfoo is in a new test family, the links in ${RTE_SDK}/doc/rst/test_report/test_report.rst must be updated.
-* Build the Intel® DPDK with the updated test application (we only show a specific target here):
+* Build the DPDK with the updated test application (we only show a specific target here):
.. code-block:: console
Core A core may include several lcores or threads if the processor supports hyperthreading.
-Core Components A set of libraries provided by the Intel® DPDK, including eal, ring, mempool, mbuf, timers, and so on.
+Core Components A set of libraries provided by the DPDK, including eal, ring, mempool, mbuf, timers, and so on.
CPU Central Processing Unit
DIMM Dual In-line Memory Module
-Doxygen A documentation generator used in the Intel® DPDK to generate the API reference.
+Doxygen A documentation generator used in the DPDK to generate the API reference.
DPDK Data Plane Development Kit
SW Software
-Target In the Intel® DPDK, the target is a combination of architecture,
+Target In the DPDK, the target is a combination of architecture,
machine, executive environment and toolchain.
For example: i686-native-linuxapp-gcc.
Hash Library
============
-The Intel® DPDK provides a Hash Library for creating hash table for fast lookup.
+The DPDK provides a Hash Library for creating hash tables used for fast lookup.
The hash table is a data structure optimized for searching through a set of entries that are each identified by a unique key.
-For increased performance the Intel® DPDK Hash requires that all the keys have the same number of bytes which is set at the hash creation time.
+For increased performance, the DPDK Hash requires that all the keys have the same number of bytes, which is set at hash creation time.
Hash API Overview
-----------------
One example is to use the DiffServ 5-tuple made up of the following fields of the IP and transport layer packet headers:
Source IP Address, Destination IP Address, Protocol, Source Port, Destination Port.
-The Intel® DPDK hash provides a generic method to implement an application specific flow classification mechanism.
+The DPDK hash provides a generic method to implement an application specific flow classification mechanism.
Given a flow table implemented as an array, the application should create a hash object with the same number of entries as the flow table and
with the hash key size set to the number of bytes in the selected flow key.
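The sketch below illustrates that pattern; the 5-tuple layout, table size and per-flow
state are assumptions made for the example, and the exact fields of rte_hash_parameters
vary slightly between releases.

.. code-block:: c

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    /* Assumed 5-tuple key; all keys must have the same, fixed size. */
    struct flow_key {
        uint32_t ip_src;
        uint32_t ip_dst;
        uint16_t port_src;
        uint16_t port_dst;
        uint8_t  proto;
    } __attribute__((__packed__));

    #define FLOW_TABLE_ENTRIES 1024          /* same size as the flow table */

    struct flow {                            /* hypothetical per-flow state */
        uint64_t packets;
    };
    static struct flow flow_table[FLOW_TABLE_ENTRIES];
    static struct rte_hash *flow_hash;

    static void
    flow_table_init(void)
    {
        struct rte_hash_parameters params = {
            .name = "flow_hash",
            .entries = FLOW_TABLE_ENTRIES,
            .key_len = sizeof(struct flow_key),
            .hash_func = rte_jhash,
            .socket_id = 0,
        };
        flow_hash = rte_hash_create(&params);
    }

    static void
    flow_update(const struct flow_key *key)
    {
        /* The returned position doubles as the index into flow_table. */
        int32_t pos = rte_hash_lookup(flow_hash, key);
        if (pos < 0)
            pos = rte_hash_add_key(flow_hash, key);
        if (pos >= 0)
            flow_table[pos].packets++;
    }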
I40E/IXGBE/IGB Virtual Function Driver
======================================
-Supported Intel® Ethernet Controllers (see the *Intel® DPDK Release Notes* for details)
+Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
support the following modes of operation in a virtualized environment:
* **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
In this chapter, this mode is referred to as the Next Generation VMDq mode.
-SR-IOV Mode Utilization in an Intel® DPDK Environment
------------------------------------------------------
+SR-IOV Mode Utilization in a DPDK Environment
+---------------------------------------------
-The Intel® DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
+The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, it is possible to partition SR-IOV capability on Ethernet controller NIC resources logically and
expose them to a virtual machine as a separate PCI function called a "Virtual Function".
Refer to Figure 10.
Therefore, a NIC is logically distributed among multiple virtual machines (as shown in Figure 10),
while still having global data in common to share with the Physical Function and other Virtual Functions.
-The Intel® DPDK i40evf, igbvf or ixgbevf as a Poll Mode Driver (PMD) serves for the Intel® 82576 Gigabit Ethernet Controller,
+The DPDK i40evf, igbvf or ixgbevf as a Poll Mode Driver (PMD) serves for the Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller NIC,
or Intel® Fortville 10/40 Gigabit Ethernet Controller NIC's virtual PCI function.
-Meanwhile the Intel® DPDK Poll Mode Driver (PMD) also supports "Physical Function" of such NIC's on the host.
+Meanwhile the DPDK Poll Mode Driver (PMD) also supports "Physical Function" of such NICs on the host.
-The Intel® DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on Intel® 82576 Gigabit Ethernet Controller,
+The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller,
and Intel® Fortville 10/40 Gigabit Ethernet Controller NICs so that guest can choose it for inter virtual machine traffic in SR-IOV mode.
rmmod i40e (To remove the i40e module)
insmod i40e.ko max_vfs=2,2 (To enable two Virtual Functions per port)
-* Using the Intel® DPDK PMD PF i40e driver:
+* Using the DPDK PMD PF i40e driver:
Kernel Params: iommu=pt, intel_iommu=on
./dpdk_nic_bind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
- Launch the Intel® DPDK testpmd/example or your own host daemon application using the Intel® DPDK PMD library.
+ Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
10 Gigabit Ethernet Controller NIC port.
The reason for this is that the device allows for a maximum of 128 queues per port and a virtual/physical function has to
have at least one queue pair (RX/TX).
-The current implementation of the Intel® DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
+The current implementation of the DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
The Physical Function in host could be either configured by the Linux* ixgbe driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by DPDK PMD PF driver.
When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by DPDK based application.
rmmod ixgbe (To remove the ixgbe module)
insmod ixgbe max_vfs=2,2 (To enable two Virtual Functions per port)
-* Using the Intel® DPDK PMD PF ixgbe driver:
+* Using the DPDK PMD PF ixgbe driver:
Kernel Params: iommu=pt, intel_iommu=on
./dpdk_nic_bind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
- Launch the Intel® DPDK testpmd/example or your own host daemon application using the Intel® DPDK PMD library.
+ Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
./dpdk_nic_bind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific pci device)
- Launch Intel® DPDK testpmd/example or your own host daemon application using the Intel® DPDK PMD library.
+ Launch DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a four-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
However, the hypervisor is bypassed to configure the Virtual Function devices using the Mailbox interface,
the solution is hypervisor-agnostic.
-Xen* and VMware* (when SR- IOV is supported) will also be able to support the Intel® DPDK with Virtual Function driver support.
+Xen* and VMware* (when SR-IOV is supported) will also be able to support the DPDK with Virtual Function driver support.
Expected Guest Operating System in Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Ubuntu* 10.04 (64-bit)
-For supported kernel versions, refer to the *Intel® DPDK Release Notes*.
+For supported kernel versions, refer to the *DPDK Release Notes*.
Setting Up a KVM Virtual Machine Monitor
----------------------------------------
* Guest Operating System: Fedora 14
-* Linux Kernel Version: Refer to the *Intel® DPDK Getting Started Guide*
+* Linux Kernel Version: Refer to the *DPDK Getting Started Guide*
* Target Applications: l2fwd, l3fwd-vf
#. Before booting the Host OS, open **BIOS setup** and enable **Intel® VT features**.
#. While booting the Host OS kernel, pass the intel_iommu=on kernel command line argument using GRUB.
- When using Intel® DPDK PF driver on host, pass the iommu=pt kernel command line argument in GRUB.
+ When using DPDK PF driver on host, pass the iommu=pt kernel command line argument in GRUB.
#. Download qemu-kvm-0.14.0 from
`http://sourceforge.net/projects/kvm/files/qemu-kvm/ <http://sourceforge.net/projects/kvm/files/qemu-kvm/>`_
rmmod ixgbe
"modprobe ixgbe max_vfs=2,2"
- When using DPDK PMD PF driver, insert Intel® DPDK kernel module igb_uio and set the number of VF by sysfs max_vfs:
+ When using DPDK PMD PF driver, insert DPDK kernel module igb_uio and set the number of VF by sysfs max_vfs:
.. code-block:: console
#. Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
The virtual functions will be listed as available for use.
-#. Configure and install the Intel® DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
+#. Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
that is, there is no change to the normal installation procedure.
.. code-block:: console
.. note::
- If you are unable to compile the Intel® DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
+ If you are unable to compile the DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
power off the Guest OS and start the virtual machine with the correct -cpu option in the qemu-system-x86_64 command as shown in step 9.
You must select the best x86_64 cpu_model to emulate or you can select host option if available.
.. note::
- Run the Intel® DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
+ Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
|perf_benchmark|
-Intel® DPDK SR-IOV PMD PF/VF Driver Usage Model
------------------------------------------------
+DPDK SR-IOV PMD PF/VF Driver Usage Model
+----------------------------------------
Fast Host-based Packet Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
-the Intel® DPDK VF PMD driver performs the same throughput result as a non-VT native environment.
+the DPDK VF PMD driver achieves the same throughput as in a non-VT native environment.
With such host instance fast packet processing, lots of services such as filtering, QoS,
DPI can be offloaded on the host fast path.
SR-IOV device assignment helps a VM to attach the real device, taking advantage of the bridge in the NIC.
So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware acceleration.
However, when VF crosses physical ports (VM0<->VM2), there is no such hardware bridge.
-In this case, the Intel® DPDK PMD PF driver provides host forwarding between such VMs.
+In this case, the DPDK PMD PF driver provides host forwarding between such VMs.
Figure 13 shows an example.
-In this case an update of the MAC address lookup tables in both the NIC and host Intel® DPDK application is required.
+In this case an update of the MAC address lookup tables in both the NIC and host DPDK application is required.
In the NIC, a destination MAC address that belongs to a VM on another device is written to the PF-specific pool.
-So when a packet comes in, its destination MAC address will match and forward to the host Intel® DPDK PMD application.
+So when a packet comes in, its destination MAC address will match and the packet will be forwarded to the host DPDK PMD application.
-In the host Intel® DPDK application, the behavior is similar to L2 forwarding,
+In the host DPDK application, the behavior is similar to L2 forwarding,
that is, the packet is forwarded to the correct PF pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the MAC destination address
which belongs to the destination VF on the VM.
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-Intel® DPDK Xen Based Packet-Switching Solution
-===============================================
+DPDK Xen Based Packet-Switching Solution
+========================================
Introduction
------------
-Intel® DPDK provides a para-virtualization packet switching solution, based on the Xen hypervisor's Grant Table, Note 1,
+DPDK provides a para-virtualization packet switching solution, based on the Xen hypervisor's Grant Table, Note 1,
which provides simple and fast packet switching capability between guest domains and host domain based on MAC address or VLAN tag.
This solution is comprised of two components;
MAC address, device state, and so on. XenStore is an information storage space shared between domains,
see further information on XenStore below.
-The front end PMD can be found in the Intel® DPDK directory lib/ librte_pmd_xenvirt and back end example in examples/vhost_xen.
+The front end PMD can be found in the DPDK directory lib/librte_pmd_xenvirt and the back end example in examples/vhost_xen.
The PMD front end and switching back end use shared Virtio RX/TX rings as a para-virtualized interface.
The Virtio ring is created by the front end, and Grant table references for the ring are passed to host.
The switching back end maps those grant table references and creates shared rings in a mapped address space.
-The following diagram describes the functionality of the Intel® DPDK Xen Packet- Switching Solution.
+The following diagram describes the functionality of the DPDK Xen Packet-Switching Solution.
.. image35_png has been renamed
* Mbuf pool allocation:
- To use a Xen switching solution, the Intel® DPDK application should use rte_mempool_gntalloc_create()
+ To use a Xen switching solution, the DPDK application should use rte_mempool_gntalloc_create()
to reserve mbuf pools during initialization.
rte_mempool_gntalloc_create() creates a mempool with objects from memory allocated and managed via gntalloc/gntdev.
- The Intel® DPDK now supports construction of mempools from allocated virtual memory through the rte_mempool_xmem_create() API.
+ The DPDK now supports construction of mempools from allocated virtual memory through the rte_mempool_xmem_create() API.
This front end constructs mempools based on memory allocated through the xen_gntalloc driver.
rte_mempool_gntalloc_create() allocates Grant pages, maps them to continuous virtual address space,
* Interrupt and Kick:
- There are no interrupts in Intel® DPDK Xen Switching as both front and back ends work in polling mode.
+ There are no interrupts in DPDK Xen Switching as both front and back ends work in polling mode.
There is no requirement for notification.
* Feature Negotiation:
#. Copy the contents of the packet to the memory buffer pointed to by gva.
-The Intel® DPDK application in the guest domain, based on the PMD front end,
+The DPDK application in the guest domain, based on the PMD front end,
is polling the shared Virtio RX ring for available packets and receives them on arrival.
Packet Transmission
limit=nb_mbuf# * VM#.
- In Intel® DPDK examples, nb_mbuf# is normally 8192.
+ In DPDK examples, nb_mbuf# is normally 8192.
Building and Running the Switching Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make -C examples/vhost_xen/
-#. Load the Xen Intel® DPDK memory management module and preallocate memory:
+#. Load the Xen DPDK memory management module and preallocate memory:
.. code-block:: console
.. note::
On Xen Dom0, there is no hugepage support.
- Under Xen Dom0, the Intel® DPDK uses a special memory management kernel module
+ Under Xen Dom0, the DPDK uses a special memory management kernel module
to allocate chunks of physically continuous memory.
- Refer to the *Intel® DPDK Getting Started Guide* for more information on memory management in the Intel® DPDK.
- In the above command, 4 GB memory is reserved (2048 of 2 MB pages) for Intel® DPDK.
+ Refer to the *DPDK Getting Started Guide* for more information on memory management in the DPDK.
+ In the above command, 4 GB memory is reserved (2048 of 2 MB pages) for DPDK.
#. Load igb_uio and bind one Intel NIC controller to igb_uio:
.. note::
- The -xen-dom0 option instructs the Intel® DPDK to use the Xen kernel module to allocate memory.
+ The -xen-dom0 option instructs the DPDK to use the Xen kernel module to allocate memory.
Other Parameters:
make install T=x86_64-native-linuxapp-gcc
-#. Enable hugepages. Refer to the *Intel® DPDK Getting Started Guide* for instructions on
- how to use hugepages in the Intel® DPDK.
+#. Enable hugepages. Refer to the *DPDK Getting Started Guide* for instructions on
+ how to use hugepages in the DPDK.
-#. Run TestPMD. Refer to *Intel® DPDK TestPMD Application User Guide* for detailed parameter usage.
+#. Run TestPMD. Refer to *DPDK TestPMD Application User Guide* for detailed parameter usage.
.. code-block:: console
development environment information and optimization guidelines.
For programming examples and for instructions on compiling and running each sample application,
-see the *Intel® DPDK Sample Applications User Guide* for details.
+see the *DPDK Sample Applications User Guide* for details.
-For general information on compiling and running applications, see the *Intel® DPDK Getting Started Guide*.
+For general information on compiling and running applications, see the *DPDK Getting Started Guide*.
Documentation Roadmap
---------------------
-The following is a list of Intel® DPDK documents in the suggested reading order:
+The following is a list of DPDK documents in the suggested reading order:
* **Release Notes** (this document): Provides release-specific information, including supported features,
limitations, fixed issues, known issues and so on.
Also, provides the answers to frequently asked questions in FAQ format.
-* **Getting Started Guide** : Describes how to install and configure the Intel® DPDK software;
+* **Getting Started Guide** : Describes how to install and configure the DPDK software;
designed to get users up and running quickly with the software.
-* **FreeBSD* Getting Started Guide** : A document describing the use of the Intel® DPDK with FreeBSD*
- has been added in Intel® DPDK Release 1.6.0.
- Refer to this guide for installation and configuration instructions to get started using the Intel® DPDK with FreeBSD*.
+* **FreeBSD* Getting Started Guide** : A document describing the use of the DPDK with FreeBSD*
+ has been added in DPDK Release 1.6.0.
+ Refer to this guide for installation and configuration instructions to get started using the DPDK with FreeBSD*.
* **Programmer's Guide** (this document): Describes:
* The software architecture and how to use it (through examples),
specifically in a Linux* application (linuxapp) environment
- * The content of the Intel® DPDK, the build system
- (including the commands that can be used in the root Intel® DPDK Makefile to build the development kit and an application)
+ * The content of the DPDK, the build system
+ (including the commands that can be used in the root DPDK Makefile to build the development kit and an application)
and guidelines for porting an application
* Optimizations used in the software and those that should be considered for new development
A glossary of terms is also provided.
-* **API Reference** : Provides detailed information about Intel® DPDK functions,
+* **API Reference** : Provides detailed information about DPDK functions,
data structures and other programming constructs.
* **Sample Applications User Guide**: Describes a set of sample applications.
Related Publications
--------------------
-The following documents provide information that is relevant to the development of applications using the Intel® DPDK:
+The following documents provide information that is relevant to the development of applications using the DPDK:
* Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3A: System Programming Guide
The caller has an ability to explicitly specify which mempools should be used to allocate 'direct' and 'indirect' mbufs from.
Note that configuration macro RTE_MBUF_SCATTER_GATHER has to be enabled to make fragmentation library build and work correctly.
-For more information about direct and indirect mbufs, refer to the *Intel DPDK Programmers guide 7.7 Direct and Indirect Buffers.*
+For more information about direct and indirect mbufs, refer to the *DPDK Programmer's Guide*, section 7.7 "Direct and Indirect Buffers".
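As a hedged illustration of how the caller supplies those mempools, the sketch below wraps
rte_ipv4_fragment_packet(); the wrapper name is hypothetical and freeing the original packet
afterwards is shown as one possible policy.

.. code-block:: c

    #include <rte_ip_frag.h>
    #include <rte_mbuf.h>

    /* Fragment one IPv4 packet to fit the given MTU, allocating headers from
     * pool_direct and data segments (indirect mbufs) from pool_indirect. */
    static int
    fragment_ipv4(struct rte_mbuf *pkt, struct rte_mbuf **frags,
                  uint16_t nb_frags, uint16_t mtu,
                  struct rte_mempool *pool_direct,
                  struct rte_mempool *pool_indirect)
    {
        int32_t n = rte_ipv4_fragment_packet(pkt, frags, nb_frags, mtu,
                                             pool_direct, pool_indirect);
        if (n > 0)
            rte_pktmbuf_free(pkt);   /* the fragments now reference the data */
        return n;                    /* number of fragments, or < 0 on error */
    }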
Packet reassembly
-----------------
IVSHMEM Library
===============
-The Intel® DPDK IVSHMEM library facilitates fast zero-copy data sharing among virtual machines
+The DPDK IVSHMEM library facilitates fast zero-copy data sharing among virtual machines
(host-to-guest or guest-to-guest) by means of QEMU's IVSHMEM mechanism.
The library works by providing a command line for QEMU to map several hugepages into a single IVSHMEM device.
For the guest to know what is inside any given IVSHMEM device
-(and to distinguish between Intel® DPDK and non-Intel® DPDK IVSHMEM devices),
+(and to distinguish between DPDK and non-DPDK IVSHMEM devices),
a metadata file is also mapped into the IVSHMEM segment.
No work needs to be done by the guest application to map IVSHMEM devices into memory;
-they are automatically recognized by the Intel® DPDK Environment Abstraction Layer (EAL).
+they are automatically recognized by the DPDK Environment Abstraction Layer (EAL).
-A typical Intel® DPDK IVSHMEM use case looks like the following.
+A typical DPDK IVSHMEM use case looks like the following.
.. image28_png has been renamed
* Call rte_ivshmem_metadata_create() to create a new metadata file.
The metadata name is used to distinguish between multiple metadata files.
-* Populate each metadata file with Intel® DPDK data structures.
+* Populate each metadata file with DPDK data structures.
This can be done using the following API calls:
* rte_ivshmem_metadata_add_memzone() to add rte_memzone to metadata file
.. note::
- Only data structures fully residing in Intel® DPDK hugepage memory work correctly.
+ Only data structures fully residing in DPDK hugepage memory work correctly.
Supported data structures created by malloc(), mmap()
- or otherwise using non-Intel® DPDK memory cause undefined behavior and even a segmentation fault.
+ or otherwise using non-DPDK memory cause undefined behavior and even a segmentation fault.
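A minimal host-side sequence following these steps might look like the sketch below.
The object and metadata names are placeholders, and rte_ivshmem_metadata_cmdline_generate()
is assumed from the IVSHMEM API to emit the QEMU command line fragment.

.. code-block:: c

    #include <stdio.h>
    #include <rte_ivshmem.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    #define METADATA_NAME "ivshmem_md"       /* placeholder metadata name */

    static void
    ivshmem_setup(void)
    {
        char cmdline[1024];

        /* The ring lives in DPDK hugepage memory, so it can be shared. */
        struct rte_ring *r = rte_ring_create("ivshmem_ring", 1024,
                                             rte_socket_id(), 0);

        rte_ivshmem_metadata_create(METADATA_NAME);
        rte_ivshmem_metadata_add_ring(r, METADATA_NAME);

        /* Assumed helper: produces the -device ivshmem,... arguments to be
         * appended to the QEMU command line for the guest. */
        rte_ivshmem_metadata_cmdline_generate(cmdline, sizeof(cmdline),
                                              METADATA_NAME);
        printf("%s\n", cmdline);
    }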
IVSHMEM Environment Configuration
---------------------------------
The source code can be found on the QEMU website (currently, version 1.4.x is supported, but version 1.5.x is known to work also),
however, the source code will need to be patched to support using regular files as the IVSHMEM memory backend.
- The patch is not included in the Intel® DPDK package,
+ The patch is not included in the DPDK package,
but is available on the `Intel®DPDK-vswitch project webpage <https://01.org/packet-processing/intel%C2%AE-ovdk>`_
- (either separately or in an Intel® DPDK vSwitch package).
+ (either separately or in a DPDK vSwitch package).
-* Enable IVSHMEM library in the Intel® DPDK build configuration.
+* Enable IVSHMEM library in the DPDK build configuration.
In the default configuration, IVSHMEM library is not compiled. To compile the IVSHMEM library,
one has to either use one of the provided IVSHMEM targets
* Set up hugepage memory on the virtual machine.
- The guest applications run as regular Intel® DPDK (primary) processes and thus need their own hugepage memory set up inside the VM.
- The process is identical to the one described in the *Intel® DPDK Getting Started Guide*.
+ The guest applications run as regular DPDK (primary) processes and thus need their own hugepage memory set up inside the VM.
+ The process is identical to the one described in the *DPDK Getting Started Guide*.
Best Practices for Writing IVSHMEM Applications
-----------------------------------------------
IVSHMEM applications essentially behave like multi-process applications,
so it is important to implement access serialization to data and thread safety.
-Intel® DPDK ring structures are already thread-safe, however,
+DPDK ring structures are already thread-safe, however,
any custom data structures that the user might need would have to be thread-safe also.
-Similar to regular Intel® DPDK multi-process applications,
+Similar to regular DPDK multi-process applications,
it is not recommended to use function pointers as functions might have different memory addresses in different processes.
It is best to avoid freeing the rte_mbuf structure on a different machine from where it was allocated,
For the best performance across all NUMA nodes, each QEMU core should be pinned to a host CPU core on the appropriate NUMA node.
QEMU's virtual NUMA nodes should also be set up to correspond to physical NUMA nodes.
-More on how to set up Intel® DPDK and QEMU NUMA support can be found in *Intel® DPDK Getting Started Guide* and
+More on how to set up DPDK and QEMU NUMA support can be found in *DPDK Getting Started Guide* and
`QEMU documentation <http://qemu.weilnetz.de/qemu-doc.html>`_ respectively.
-A script called cpu_layout.py is provided with the Intel® DPDK package (in the tools directory)
+A script called cpu_layout.py is provided with the DPDK package (in the tools directory)
that can be used to identify which CPU cores correspond to which NUMA node.
The QEMU IVSHMEM command line creation should be considered the last step before starting the virtual machine.
Kernel NIC Interface
====================
-The Intel® DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
+The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
-The benefits of using the Intel® DPDK KNI are:
+The benefits of using the DPDK KNI are:
* Faster than existing Linux TUN/TAP interfaces
(by eliminating system calls and copy_to_user()/copy_from_user() operations).
-* Allows management of Intel® DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.
+* Allows management of DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.
* Allows an interface with the kernel network stack.
-The components of an application using the Intel® DPDK Kernel NIC Interface are shown in Figure 17.
+The components of an application using the DPDK Kernel NIC Interface are shown in Figure 17.
.. _pg_figure_17:
-**Figure 17. Components of an Intel® DPDK KNI Application**
+**Figure 17. Components of a DPDK KNI Application**
.. image43_png has been renamed
|kernel_nic_intf|
-The Intel® DPDK KNI Kernel Module
----------------------------------
+The DPDK KNI Kernel Module
+--------------------------
The KNI kernel loadable module provides support for two types of devices:
* Net functionality provided by implementing several operations such as netdev_ops,
header_ops, ethtool_ops that are defined by struct net_device,
- including support for Intel® DPDK mbufs and FIFOs.
+ including support for DPDK mbufs and FIFOs.
* The interface name is provided from userspace.
KNI Creation and Deletion
-------------------------
-The KNI interfaces are created by an Intel® DPDK application dynamically.
+The KNI interfaces are created by a DPDK application dynamically.
The interface name and FIFO details are provided by the application through an ioctl call
using the rte_kni_device_info struct which contains:
* Core affinity.
-Refer to rte_kni_common.h in the Intel® DPDK source code for more details.
+Refer to rte_kni_common.h in the DPDK source code for more details.
The physical addresses will be re-mapped into the kernel address space and stored in separate KNI contexts.
Once KNI interfaces are created, the KNI context information can be queried by calling the rte_kni_info_get() function.
-The KNI interfaces can be deleted by an Intel® DPDK application dynamically after being created.
+The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, all those KNI interfaces not deleted will be deleted on the release operation
-of the miscellaneous device (when the Intel® DPDK application is closed).
+of the miscellaneous device (when the DPDK application is closed).
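For illustration, a creation sequence might look like the sketch below; the interface name,
callback names and parameter values are assumptions rather than required values, and the
registered callbacks are later serviced by calling rte_kni_handle_request() from the application loop.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_kni.h>
    #include <rte_mempool.h>

    /* Hypothetical handlers invoked for ifconfig-style requests. */
    static int
    kni_change_mtu(uint8_t port_id, unsigned new_mtu)
    {
        (void)port_id; (void)new_mtu;
        return 0;                    /* reconfigure the port for the new MTU */
    }

    static int
    kni_config_network_if(uint8_t port_id, uint8_t if_up)
    {
        (void)port_id; (void)if_up;
        return 0;                    /* start or stop the port */
    }

    static struct rte_kni *
    create_kni_iface(struct rte_mempool *pktmbuf_pool)
    {
        struct rte_kni_conf conf;
        struct rte_kni_ops ops;

        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, sizeof(conf.name), "vEth0"); /* interface name */
        conf.core_id = 0;            /* core affinity of the kernel thread */
        conf.mbuf_size = 2048;

        memset(&ops, 0, sizeof(ops));
        ops.port_id = 0;
        ops.change_mtu = kni_change_mtu;
        ops.config_network_if = kni_config_network_if;

        /* Passes the FIFO, mempool and config details to the kernel module. */
        return rte_kni_alloc(pktmbuf_pool, &conf, &ops);
    }

    /* When no longer needed: rte_kni_release(kni); */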
-Intel® DPDK mbuf Flow
----------------------
+DPDK mbuf Flow
+--------------
-To minimize the amount of Intel® DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
+To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
The kernel module will be aware of mbufs,
-but all mbuf allocation and free operations will be handled by the Intel® DPDK application only.
+but all mbuf allocation and free operations will be handled by the DPDK application only.
Figure 18 shows a typical scenario with packets sent in both directions.
.. _pg_figure_18:
-**Figure 18. Packet Flow via mbufs in the Intel DPDK® KNI**
+**Figure 18. Packet Flow via mbufs in the DPDK KNI**
.. image44_png has been renamed
Use Case: Ingress
-----------------
-On the Intel® DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
+On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
This thread will enqueue the mbuf in the rx_q FIFO.
The KNI thread will poll all KNI active devices for the rx_q.
If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via netif_rx().
Use Case: Egress
----------------
-For packet egress the Intel® DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.
+For packet egress the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.
The packet is received from the Linux net stack, by calling the kni_net_tx() callback.
The mbuf is dequeued (without waiting, due to the cache) and filled with data from the sk_buff.
The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
-The Intel® DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()).
+The DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()).
It then puts the mbuf back in the cache.
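The two directions can be combined in a single polling iteration, sketched below under the
assumption that the port and the KNI context have already been initialized elsewhere.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void
    kni_forward_once(uint8_t port, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        unsigned nb, sent, i;

        /* Ingress: mbufs received by the PMD are enqueued to the rx_q FIFO. */
        nb = rte_eth_rx_burst(port, 0, pkts, BURST_SIZE);
        sent = rte_kni_tx_burst(kni, pkts, nb);
        for (i = sent; i < nb; i++)
            rte_pktmbuf_free(pkts[i]);     /* drop what the FIFO rejected */

        /* Egress: mbufs filled from sk_buffs are dequeued from tx_q
         * and handed to the PMD for transmission. */
        nb = rte_kni_rx_burst(kni, pkts, BURST_SIZE);
        sent = rte_eth_tx_burst(port, 0, pkts, nb);
        for (i = sent; i < nb; i++)
            rte_pktmbuf_free(pkts[i]);
    }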
Ethtool
Link state and MTU change are network interface specific operations usually done via ifconfig.
The request is initiated from the kernel side (in the context of the ifconfig process)
-and handled by the user space Intel® DPDK application.
+and handled by the user space DPDK application.
The application polls the request, calls the application handler and returns the response back into the kernel space.
The application handlers can be registered upon interface creation or explicitly registered/unregistered in runtime.
vHost is a kernel module usually working as the backend of virtio (a para-virtualization driver framework)
to accelerate the traffic from the guest to the host.
-The Intel® DPDK Kernel NIC interface provides the ability to hookup vHost traffic into userspace Intel® DPDK application.
-Together with the Intel® DPDK PMD virtio, it significantly improves the throughput between guest and host.
-In the scenario where Intel® DPDK is running as fast path in the host, kni-vhost is an efficient path for the traffic.
+The DPDK Kernel NIC Interface provides the ability to hook up vHost traffic into a userspace DPDK application.
+Together with the DPDK PMD virtio, it significantly improves the throughput between guest and host.
+In the scenario where DPDK is running as fast path in the host, kni-vhost is an efficient path for the traffic.
Overview
~~~~~~~~
It is using the existing interface with vHost-net, so it does not require any kernel hacking,
and is fully-compatible with the kernel vhost module.
As vHost is still taking responsibility for communicating with the front-end virtio,
-it naturally supports both legacy virtio -net and the Intel® DPDK PMD virtio.
+it naturally supports both legacy virtio-net and the DPDK PMD virtio.
There is a little penalty that comes from the non-polling mode of vhost.
However, it scales throughput well when using KNI in multi-thread mode.
Of course, as a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before compiling the kernel.
-#. Compile the Intel® DPDK and insert igb_uio as normal.
+#. Compile the DPDK and insert igb_uio as normal.
#. Insert the KNI kernel module:
Each port pins two forwarding cores (ingress/egress) in user space.
#. Assign a raw socket to vhost-net during qemu-kvm startup.
- The Intel® DPDK does not provide a script to do this since it is easy for the user to customize.
+ The DPDK does not provide a script to do this since it is easy for the user to customize.
The following shows the key steps to launch qemu-kvm with kni-vhost:
.. code-block:: bash
Compatibility Configure Option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There is a CONFIG_RTE_KNI_VHOST_VNET_HDR_EN configuration option in Intel® DPDK configuration file.
+There is a CONFIG_RTE_KNI_VHOST_VNET_HDR_EN configuration option in DPDK configuration file.
By default, it is set to n, which means the virtio net header is not turned on,
which is used to support additional features (such as, csum offload, vlan offload, generic-segmentation and so on),
since the kni-vhost does not yet support those features.
========================================
In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware,
-the Intel® DPDK also includes two pure-software PMDs. These two drivers are:
+the DPDK also includes two pure-software PMDs. These two drivers are:
* A libpcap -based PMD (librte_pmd_pcap) that reads and writes packets using libpcap,
- both from files on disk, as well as from physical NIC devices using standard Linux kernel drivers.
Using the Drivers from the EAL Command Line
-------------------------------------------
-For ease of use, the Intel® DPDK EAL also has been extended to allow pseudo-ethernet devices,
+For ease of use, the DPDK EAL has also been extended to allow pseudo-ethernet devices,
using one or more of these drivers,
to be created at application startup time during EAL initialization.
Rings-based PMD
~~~~~~~~~~~~~~~
-To run an Intel® DPDK application on a machine without any Ethernet devices, a pair of ring-based rte_ethdevs can be used as below.
+To run a DPDK application on a machine without any Ethernet devices, a pair of ring-based rte_ethdevs can be used as below.
The device names passed to the --vdev option must start with eth_ring and take no additional parameters.
Multiple devices may be specified, separated by commas.
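For example, a test program could hand such devices to the EAL through a synthetic argument
list; the application name and the core/memory settings below are placeholders.

.. code-block:: c

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(void)
    {
        /* Two ring-backed pseudo-Ethernet devices created at EAL start-up;
         * no physical NICs are required. */
        char *eal_args[] = {
            "ring_app", "-c", "0x3", "-n", "4",
            "--vdev=eth_ring0", "--vdev=eth_ring1",
        };
        int nb_args = sizeof(eal_args) / sizeof(eal_args[0]);

        if (rte_eal_init(nb_args, eal_args) < 0)
            return -1;

        /* The new devices now appear as ordinary ports. */
        printf("%u port(s) available\n", (unsigned)rte_eth_dev_count());
        return 0;
    }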
for reasons of API consistency.
Enqueuing and dequeuing items from an rte_ring using the rings-based PMD may be slower than using the native rings API.
-This is because Intel® DPDK Ethernet drivers make use of function pointers to call the appropriate enqueue or dequeue functions,
+This is because DPDK Ethernet drivers make use of function pointers to call the appropriate enqueue or dequeue functions,
while the rte_ring specific functions are direct function calls in the code and are often inlined by the compiler.
Once an ethdev has been created, for either a ring or a pcap-based PMD,
.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
All rights reserved.
Redistribution and use in source and binary forms, with or without
=====================================
In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware,
-Intel® DPDK also includes a pure-software library that
+DPDK also includes a pure-software library that
allows physical PMDs to be bonded together to create a single logical PMD.
|bond-overview|
The Link Bonding PMD Library is enabled by default in the build
configuration files, the library can be disabled by setting
- ``CONFIG_RTE_LIBRTE_PMD_BOND=n`` and recompiling the Intel® DPDK.
+ ``CONFIG_RTE_LIBRTE_PMD_BOND=n`` and recompiling the DPDK.
Link Bonding Modes Overview
---------------------------
----------------------
The librte_pmd_bond bonded devices are compatible with the Ethernet device API
-exported by the Ethernet PMDs described in the *Intel® DPDK API Reference*.
+exported by the Ethernet PMDs described in the *DPDK API Reference*.
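Bonded devices can also be created programmatically. The sketch below assumes the
rte_eth_bond_create() and rte_eth_bond_slave_add() calls from the librte_pmd_bond API,
with placeholder port numbers and mode.

.. code-block:: c

    #include <rte_eth_bond.h>

    /* Group two physical ports behind one bonded port in active-backup mode. */
    static int
    create_bonded_port(void)
    {
        int bond_port = rte_eth_bond_create("eth_bond0",
                                            BONDING_MODE_ACTIVE_BACKUP,
                                            0 /* socket id */);
        if (bond_port < 0)
            return -1;

        rte_eth_bond_slave_add(bond_port, 0);   /* first physical port  */
        rte_eth_bond_slave_add(bond_port, 1);   /* second physical port */

        /* The bonded port is then configured and started through the
         * standard Ethernet device API, like any other port. */
        return bond_port;
    }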
The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
LPM Library
===========
-The Intel® DPDK LPM library component implements the Longest Prefix Match (LPM) table search method for 32-bit keys
+The DPDK LPM library component implements the Longest Prefix Match (LPM) table search method for 32-bit keys
that is typically used to find the best route match in IP forwarding applications.
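A minimal sketch of that usage, assuming the classic 32-bit rte_lpm API (the create
parameters and next-hop width differ in later releases):

.. code-block:: c

    #include <stdint.h>
    #include <rte_lpm.h>

    static uint8_t
    route_lookup_example(void)
    {
        struct rte_lpm *lpm = rte_lpm_create("route_table", 0 /* socket */,
                                             1024 /* max rules */, 0);
        uint32_t net, ip;
        uint8_t next_hop = 0;

        /* 192.168.0.0/16 -> next hop 1 (addresses in host byte order). */
        net = (192u << 24) | (168u << 16);
        rte_lpm_add(lpm, net, 16, 1);

        /* Longest prefix match for 192.168.10.1 returns next hop 1. */
        ip = (192u << 24) | (168u << 16) | (10u << 8) | 1u;
        if (rte_lpm_lookup(lpm, ip, &next_hop) == 0)
            return next_hop;
        return 0;   /* no matching route */
    }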
LPM API Overview
The objective of this library is to provide malloc-like functions to allow allocation from hugepage memory
and to facilitate application porting.
-The *Intel® DPDK API Reference* manual describes the available functions.
+The *DPDK API Reference* manual describes the available functions.
Typically, these kinds of allocations should not be done in data plane processing
because they are slower than pool-based allocation and make use of locks within the allocation
and free paths.
However, they can be used in configuration code.
-Refer to the rte_malloc() function description in the *Intel® DPDK API Reference* manual for more information.
+Refer to the rte_malloc() function description in the *DPDK API Reference* manual for more information.
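For example, a configuration-time allocation might look like the following sketch; the
structure and alignment are illustrative only.

.. code-block:: c

    #include <rte_malloc.h>

    struct app_ctx {                 /* hypothetical application state */
        unsigned nb_ports;
    };

    static struct app_ctx *
    alloc_app_ctx(void)
    {
        /* Zeroed allocation from hugepage memory; the type string is only
         * used for debug statistics, the last argument is the alignment. */
        struct app_ctx *ctx = rte_zmalloc("app_ctx", sizeof(*ctx), 64);
        return ctx;                  /* release later with rte_free(ctx) */
    }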
Cookies
-------
and on this occasion should find the newly created,
suitable element as the size of memory reserved in the memzone is set to be
at least the size of the requested data block plus the alignment -
-subject to a minimum size specified in the Intel DPDK compile-time configuration.
+subject to a minimum size specified in the DPDK compile-time configuration.
When a suitable, free element has been identified, the pointer to be returned to the user is calculated,
with the space to be provided to the user being at the end of the free block.
============
The mbuf library provides the ability to allocate and free buffers (mbufs)
-that may be used by the Intel® DPDK application to store message buffers.
+that may be used by the DPDK application to store message buffers.
The message buffers are stored in a mempool, using the :ref:`Mempool Library <Mempool_Library>`.
A rte_mbuf struct can carry network packet buffers (type is RTE_MBUF_PKT)
On the other hand, the second method is more flexible and allows
the complete separation of the allocation of metadata structures from the allocation of packet data buffers.
-The first method was chosen for the Intel® DPDK.
+The first method was chosen for the DPDK.
The metadata contains control information such as message type, length,
pointer to the start of the data and a pointer for additional mbuf structures allowing buffer chaining.
* Remove data at the beginning of the buffer (rte_pktmbuf_adj())
- * Remove data at the end of the buffer (rte_pktmbuf_trim()) Refer to the *Intel® DPDK API Reference* for details.
+ * Remove data at the end of the buffer (rte_pktmbuf_trim()) Refer to the *DPDK API Reference* for details.
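A short sketch of those operations; the mempool pointer and the payload are assumed to come
from elsewhere in the application.

.. code-block:: c

    #include <string.h>
    #include <rte_mbuf.h>

    static struct rte_mbuf *
    build_packet(struct rte_mempool *pool, const void *payload, uint16_t len)
    {
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        char *data;

        if (m == NULL)
            return NULL;

        /* Reserve len bytes at the tail and copy the payload in. */
        data = rte_pktmbuf_append(m, len);
        if (data == NULL) {
            rte_pktmbuf_free(m);
            return NULL;
        }
        memcpy(data, payload, len);

        /* rte_pktmbuf_adj()/rte_pktmbuf_trim() later remove bytes from the
         * head and tail, e.g. to strip an outer header before forwarding. */
        return m;
    }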
Meta Information
----------------
===============
A memory pool is an allocator of a fixed-sized object.
-In the Intel® DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a ring to store free objects.
It provides some other optional services such as a per-core object cache and
an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
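A typical creation and get/put cycle is sketched below; the pool name, the object type and
the sizes are examples only.

.. code-block:: c

    #include <stdint.h>
    #include <rte_mempool.h>

    struct my_obj {                   /* hypothetical fixed-size object */
        uint64_t data[8];
    };

    static void
    mempool_example(void)
    {
        /* 4095 objects, a 32-object per-core cache, no constructors. */
        struct rte_mempool *mp = rte_mempool_create("my_pool",
                4095, sizeof(struct my_obj),
                32, 0,
                NULL, NULL, NULL, NULL,
                0 /* socket id */, 0 /* flags */);
        void *obj;

        if (mp == NULL)
            return;

        if (rte_mempool_get(mp, &obj) == 0) { /* take an object from the pool */
            /* ... use obj ... */
            rte_mempool_put(mp, obj);         /* return it when finished */
        }
    }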
Multi-process Support
=====================
-In the Intel® DPDK, multi-process support is designed to allow a group of Intel® DPDK processes
+In the DPDK, multi-process support is designed to allow a group of DPDK processes
to work together in a simple transparent manner to perform packet processing,
or other workloads, on Intel® architecture hardware.
To support this functionality,
-a number of additions have been made to the core Intel® DPDK Environment Abstraction Layer (EAL).
+a number of additions have been made to the core DPDK Environment Abstraction Layer (EAL).
-The EAL has been modified to allow different types of Intel® DPDK processes to be spawned,
+The EAL has been modified to allow different types of DPDK processes to be spawned,
each with different permissions on the hugepage memory used by the applications.
For now, there are two types of process specified:
* secondary processes, which cannot initialize shared memory,
but can attach to pre- initialized shared memory and create objects in it.
-Standalone Intel® DPDK processes are primary processes,
+Standalone DPDK processes are primary processes,
while secondary processes can only run alongside a primary process or
after a primary process has already configured the hugepage shared memory for them.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
-* --proc-type: for specifying a given process instance as the primary or secondary Intel® DPDK instance
+* --proc-type: for specifying a given process instance as the primary or secondary DPDK instance
* --file-prefix: to allow processes that do not want to co-operate to have different memory regions
-A number of example applications are provided that demonstrate how multiple Intel® DPDK processes can be used together.
+A number of example applications are provided that demonstrate how multiple DPDK processes can be used together.
These are more fully documented in the "Multi-process Sample Application" chapter
-in the *Intel® DPDK Sample Application's User Guide*.
+in the *DPDK Sample Application's User Guide*.
Memory Sharing
--------------
-The key element in getting a multi-process application working using the Intel® DPDK is to ensure that
+The key element in getting a multi-process application working using the DPDK is to ensure that
memory resources are properly shared among the processes making up the multi-process application.
Once there are blocks of shared memory available that can be accessed by multiple processes,
then issues such as inter-process communication (IPC) become much simpler.
On application start-up in a primary or standalone process,
-the Intel DPDK records to memory-mapped files the details of the memory configuration it is using - hugepages in use,
+the DPDK records to memory-mapped files the details of the memory configuration it is using - hugepages in use,
the virtual addresses they are mapped at, the number of memory channels present, etc.
When a secondary process is started, these files are read and the EAL recreates the same memory configuration
in the secondary process so that all memory zones are shared between processes and all pointers to that memory are valid,
.. _pg_figure_16:
-**Figure 16. Memory Sharing in the Intel® DPDK Multi-process Sample Application**
+**Figure 16. Memory Sharing in the DPDK Multi-process Sample Application**
.. image42_png has been replaced
|multi_process_memory|
The EAL also supports an auto-detection mode (set by EAL --proc-type=auto flag ),
-whereby an Intel® DPDK process is started as a secondary instance if a primary instance is already running.
+whereby a DPDK process is started as a secondary instance if a primary instance is already running.
Deployment Models
-----------------
Symmetric/Peer Processes
~~~~~~~~~~~~~~~~~~~~~~~~
-Intel® DPDK multi-process support can be used to create a set of peer processes where each process performs the same workload.
+DPDK multi-process support can be used to create a set of peer processes where each process performs the same workload.
This model is equivalent to having multiple threads each running the same main-loop function,
-as is done in most of the supplied Intel® DPDK sample applications.
+as is done in most of the supplied DPDK sample applications.
In this model, the first of the processes spawned should be spawned using the --proc-type=primary EAL flag,
while all subsequent instances should be spawned using the --proc-type=secondary flag.
The simple_mp and symmetric_mp sample applications demonstrate this usage model.
-They are described in the "Multi-process Sample Application" chapter in the *Intel® DPDK Sample Application's User Guide*.
+They are described in the "Multi-process Sample Application" chapter in the *DPDK Sample Application's User Guide*.
Asymmetric/Non-Peer Processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case, extensive use of rte_ring objects is made, which are located in shared hugepage memory.
The client_server_mp sample application shows this usage model.
-It is described in the "Multi-process Sample Application" chapter in the *Intel® DPDK Sample Application's User Guide*.
+It is described in the "Multi-process Sample Application" chapter in the *DPDK Sample Application's User Guide*.
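A hedged sketch of the underlying pattern: after rte_eal_init(), the primary process creates
a named ring in shared hugepage memory and secondary processes attach to it by name.

.. code-block:: c

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    #define RING_NAME "mp_ring"       /* placeholder shared-object name */

    static struct rte_ring *
    get_shared_ring(void)
    {
        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            return rte_ring_create(RING_NAME, 1024, rte_socket_id(), 0);

        /* Secondary process: the ring already exists, just look it up. */
        return rte_ring_lookup(RING_NAME);
    }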
-Running Multiple Independent Intel® DPDK Applications
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Running Multiple Independent DPDK Applications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In addition to the above scenarios involving multiple Intel® DPDK processes working together,
-it is possible to run multiple Intel® DPDK processes side-by-side,
+In addition to the above scenarios involving multiple DPDK processes working together,
+it is possible to run multiple DPDK processes side-by-side,
where those processes are all working independently.
Support for this usage scenario is provided using the --file-prefix parameter to the EAL.
The rte part of the filenames of each of the above is configurable using the file-prefix parameter.
In addition to specifying the file-prefix parameter,
-any Intel® DPDK applications that are to be run side-by-side must explicitly limit their memory use.
+any DPDK applications that are to be run side-by-side must explicitly limit their memory use.
This is done by passing the -m flag to each process to specify how much hugepage memory, in megabytes,
each process can use (or passing --socket-mem to specify how much hugepage memory on each socket each process can use).
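For example, two independent applications (named app1 and app2 here purely for illustration) might be started as follows:

.. code-block:: console

    ./app1 -c 3 -n 4 --file-prefix=app1 -m 512
    ./app2 -c C -n 4 --file-prefix=app2 -m 512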
.. note::
- Independent Intel® DPDK instances running side-by-side on a single machine cannot share any network ports.
+ Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
Any network ports being used by one process should be blacklisted in every other process.
-Running Multiple Independent Groups of Intel® DPDK Applications
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Running Multiple Independent Groups of DPDK Applications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In the same way that it is possible to run independent Intel® DPDK applications side- by-side on a single system,
-this can be trivially extended to multi-process groups of Intel® DPDK applications running side-by-side.
+In the same way that it is possible to run independent DPDK applications side-by-side on a single system,
+this can be trivially extended to multi-process groups of DPDK applications running side-by-side.
In this case, the secondary processes must use the same --file-prefix parameter
as the primary process whose shared memory they are connecting to.
.. note::
- All restrictions and issues with multiple independent Intel® DPDK processes running side-by-side
+ All restrictions and issues with multiple independent DPDK processes running side-by-side
apply in this usage scenario also.
Multi-process Limitations
-------------------------
-There are a number of limitations to what can be done when running Intel® DPDK multi-process applications.
+There are a number of limitations to what can be done when running DPDK multi-process applications.
Some of these are documented below:
* The multi-process feature requires that the exact same hugepage memory mappings be present in all applications.
Address-Space Layout Randomization (ASLR) in the Linux kernel can interfere with these mappings;
disabling ASLR makes multi-process operation more reliable but weakens system security,
so it is recommended that it be disabled only when absolutely necessary,
and only when the implications of this change have been understood.
-* All Intel® DPDK processes running as a single application and using shared memory must have distinct coremask arguments.
+* All DPDK processes running as a single application and using shared memory must have distinct coremask arguments.
It is not possible to have a primary and secondary instance, or two secondary instances,
using any of the same logical cores.
Attempting to do so can cause corruption of memory pool caches, among other issues.
* The use of function pointers between processes is not supported, which affects the librte_hash library;
the recommended workaround is to perform the hash calculation in the application by calling
the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions
instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup()
(a minimal sketch of this approach is given after this list).
-* Depending upon the hardware in use, and the number of Intel® DPDK processes used,
- it may not be possible to have HPET timers available in each Intel® DPDK instance.
+* Depending upon the hardware in use, and the number of DPDK processes used,
+ it may not be possible to have HPET timers available in each DPDK instance.
The minimum number of HPET comparators available to Linux* userspace can be just a single comparator,
- which means that only the first, primary Intel® DPDK process instance can open and mmap /dev/hpet.
- If the number of required Intel® DPDK processes exceeds that of the number of available HPET comparators,
+ which means that only the first, primary DPDK process instance can open and mmap /dev/hpet.
+ If the number of required DPDK processes exceeds that of the number of available HPET comparators,
the TSC (which is the default timer in this release) must be used as a time source across all processes instead of the HPET.
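The following is a minimal sketch of the hashing workaround mentioned in the list above;
it assumes a table that was created with rte_jhash() as its hash function and that is shared between the processes through hugepage memory:

.. code-block:: c

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    /* Sketch only: compute the signature locally with rte_jhash() so that
     * the hash-function pointer stored inside the shared table is never
     * dereferenced, then use the *_with_hash() variants. */
    static int32_t
    mp_safe_lookup(const struct rte_hash *h, const uint32_t *key)
    {
        hash_sig_t sig = rte_jhash(key, sizeof(*key), 0);

        return rte_hash_lookup_with_hash(h, key, sig);
    }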
.. |multi_process_memory| image:: img/multi_process_memory.svg
Overview
========
-This section gives a global overview of the architecture of Intel® Data Plane Development Kit (Intel® DPDK).
+This section gives a global overview of the architecture of Data Plane Development Kit (DPDK).
-The main goal of the Intel® DPDK is to provide a simple,
+The main goal of the DPDK is to provide a simple,
complete framework for fast packet processing in data plane applications.
Users may use the code to understand some of the techniques employed,
to build upon for prototyping or to add their own protocol stacks.
-Alternative ecosystem options that use the Intel® DPDK are available.
+Alternative ecosystem options that use the DPDK are available.
The framework creates a set of libraries for specific environments
through the creation of an Environment Abstraction Layer (EAL).
Once the EAL library is created, the user may link with the library to create their own applications.
Other libraries, outside of EAL, including the Hash,
Longest Prefix Match (LPM) and rings libraries are also provided.
-Sample applications are provided to help show the user how to use various features of the Intel® DPDK.
+Sample applications are provided to help show the user how to use various features of the DPDK.
-The Intel® DPDK implements a run to completion model for packet processing,
+The DPDK implements a run to completion model for packet processing,
where all resources must be allocated prior to calling Data Plane applications,
running as execution units on logical processing cores.
The model does not support a scheduler and all devices are accessed by polling.
Development Environment
-----------------------
-The Intel® DPDK project installation requires Linux and the associated toolchain,
+The DPDK project installation requires Linux and the associated toolchain,
such as one or more compilers, assembler, make utility,
-editor and various libraries to create the Intel® DPDK components and libraries.
+editor and various libraries to create the DPDK components and libraries.
Once these libraries are created for the specific environment and architecture,
they may then be used to create the user's data plane application.
When creating applications for the Linux user space, the glibc library is used.
-For Intel® DPDK applications, two environmental variables (RTE_SDK and RTE_TARGET)
+For DPDK applications, two environmental variables (RTE_SDK and RTE_TARGET)
must be configured before compiling the applications.
The following are examples of how the variables can be set:
export RTE_SDK=/home/user/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc
-See the *Intel® DPDK Getting Started Guide* for information on setting up the development environment.
+See the *DPDK Getting Started Guide* for information on setting up the development environment.
Environment Abstraction Layer
-----------------------------
The EAL provides a generic interface that hides the environment specifics from the applications and libraries.
The services provided by the EAL are:
-* Intel® DPDK loading and launching
+* DPDK loading and launching
* Support for multi-process and multi-thread execution types
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The mbuf library provides the facility to create and destroy buffers
-that may be used by the Intel® DPDK application to store message buffers.
-The message buffers are created at startup time and stored in a mempool, using the Intel® DPDK mempool library.
+that may be used by the DPDK application to store message buffers.
+The message buffers are created at startup time and stored in a mempool, using the DPDK mempool library.
This library provides an API to allocate/free mbufs and to manipulate control message buffers (ctrlmbuf), which are generic message buffers,
and packet buffers (pktmbuf), which are used to carry network packets.
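A minimal sketch of pool creation and mbuf allocation is shown below; it assumes the rte_pktmbuf_pool_create() helper available in recent releases, and the pool name and sizes are illustrative:

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static void
    mbuf_example(void)
    {
        /* Create the pool once at start-up ... */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
                8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        struct rte_mbuf *m;

        if (pool == NULL)
            return;

        /* ... then allocate and free packet buffers from it at run time. */
        m = rte_pktmbuf_alloc(pool);
        if (m != NULL)
            rte_pktmbuf_free(m);
    }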
Timer Manager (librte_timer)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This library provides a timer service to Intel® DPDK execution units,
+This library provides a timer service to DPDK execution units,
providing the ability to execute a function asynchronously.
The function calls can be periodic, or one-shot.
It uses the timer interface provided by the Environment Abstraction Layer (EAL).
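A minimal sketch of the timer API is shown below (the one-second period and use of the current lcore are illustrative):

.. code-block:: c

    #include <rte_cycles.h>
    #include <rte_lcore.h>
    #include <rte_timer.h>

    static struct rte_timer tim;

    /* Callback executed each time the timer expires. */
    static void
    timer_cb(struct rte_timer *t, void *arg)
    {
        /* asynchronous work goes here */
    }

    static void
    timer_example(void)
    {
        rte_timer_subsystem_init();
        rte_timer_init(&tim);

        /* Arm a periodic timer, firing roughly once per second. */
        rte_timer_reset(&tim, rte_get_timer_hz(), PERIODICAL,
                        rte_lcore_id(), timer_cb, NULL);

        /* The lcore main loop must call rte_timer_manage() regularly
         * for the callback to be executed. */
    }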
Ethernet* Poll Mode Driver Architecture
---------------------------------------
-The Intel® DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40GbE, and para virtualized virtio
+The DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40 GbE, and paravirtualized virtio
Ethernet controllers which are designed to work without asynchronous, interrupt-based signaling mechanisms.
See :ref:`Poll Mode Driver <Poll_Mode_Driver>`.
Packet Forwarding Algorithm Support
-----------------------------------
-The Intel® DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM,librte_lpm)
+The DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM, librte_lpm)
libraries to support the corresponding packet forwarding algorithms.
See :ref:`Hash Library <Hash_Library>` and :ref:`LPM Library <LPM_Library>` for more information.
Packet Classification and Access Control
========================================
-The Intel® DPDK provides an Access Control library that gives the ability
+The DPDK provides an Access Control library that gives the ability
to classify an input packet based on a set of classification rules.
The ACL library is used to perform an N-tuple search over a set of rules with multiple categories.
.. note::
- For more details about the Access Control API, please refer to the *Intel® DPDK API Reference*.
+ For more details about the Access Control API, please refer to the *DPDK API Reference*.
The following example demonstrates IPv4, 5-tuple classification for rules defined above
with multiple categories in more detail.
Packet Distributor Library
==========================
-The Intel® DPDK Packet Distributor library is a library designed to be used for dynamic load balancing of traffic
+The DPDK Packet Distributor library is a library designed to be used for dynamic load balancing of traffic
while supporting single packet at a time operation.
When using this library, the logical cores in use are to be considered in two roles: firstly a distributor lcore,
which is responsible for load balancing or distributing packets,
The flush and clear_returns API calls, mentioned previously,
are likely of less use than the process and returned_pkts APIs, and are principally provided to aid in unit testing of the library.
-Descriptions of these functions and their use can be found in the Intel® DPDK API Reference document.
+Descriptions of these functions and their use can be found in the DPDK API Reference document.
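The sketch below illustrates the distributor/worker split using the single-packet API described in this document; the exact signatures may differ in other releases:

.. code-block:: c

    #include <rte_distributor.h>
    #include <rte_mbuf.h>

    /* Distributor lcore: hand a burst of mbufs to the workers. */
    static void
    distributor_loop(struct rte_distributor *d,
                     struct rte_mbuf **bufs, unsigned nb)
    {
        rte_distributor_process(d, bufs, nb);
    }

    /* Worker lcore: return the previous packet and fetch a new one. */
    static void
    worker_loop(struct rte_distributor *d, unsigned worker_id)
    {
        struct rte_mbuf *pkt = NULL;

        for (;;) {
            pkt = rte_distributor_get_pkt(d, worker_id, pkt);
            /* process pkt ... */
        }
    }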
Worker Operation
----------------
Design Objectives
-----------------
-The main design objectives for the Intel DPDK Packet Framework are:
+The main design objectives for the DPDK Packet Framework are:
* Provide standard methodology to build complex packet processing pipelines.
Provide reusable and extensible templates for the commonly used pipeline functional blocks;
For each incoming packet, the table defines the set of actions to be applied to the packet,
as well as the next stage to send the packet to.
-The Intel DPDK Packet Framework minimizes the development effort required to build packet processing pipelines
+The DPDK Packet Framework minimizes the development effort required to build packet processing pipelines
by defining a standard methodology for pipeline development,
as well as providing libraries of reusable templates for the commonly used pipeline blocks.
| | | |
+===+==================+=======================================================================================+
| 1 | SW ring | SW circular buffer used for message passing between the application threads. Uses |
-| | | the Intel DPDK rte_ring primitive. Expected to be the most commonly used type of |
+| | | the DPDK rte_ring primitive. Expected to be the most commonly used type of |
| | | port. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
| 2 | HW ring | Queue of buffer descriptors used to interact with NIC, switch or accelerator ports. |
-| | | For NIC ports, it uses the Intel DPDK rte_eth_rx_queue or rte_eth_tx_queue |
+| | | For NIC ports, it uses the DPDK rte_eth_rx_queue or rte_eth_tx_queue |
| | | primitives. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
Introduction
------------
-The following sections describe optimizations used in the Intel® DPDK and optimizations that should be considered for a new applications.
+The following sections describe optimizations used in the DPDK and optimizations that should be considered for new applications.
They also highlight the performance-impacting coding techniques that should,
-and should not be, used when developing an application using the Intel® DPDK.
+and should not be, used when developing an application using the DPDK.
And finally, they give an introduction to application profiling using a Performance Analyzer from Intel to optimize the software.
Poll Mode Driver
================
-The Intel® DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and paravirtualized virtio Poll Mode Drivers.
A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
to configure the devices and their respective queues.
Requirements and Assumptions
----------------------------
-The Intel® DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
+The DPDK environment for packet processing applications allows for two models, run-to-completion and pipeline:
* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
In a synchronous run-to-completion model,
-each logical core assigned to the Intel® DPDK executes a packet processing loop that includes the following steps:
+each logical core assigned to the DPDK executes a packet processing loop that includes the following steps:
* Retrieve input packets through the PMD receive API
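A minimal sketch of such a run-to-completion loop, with illustrative port and queue numbers, might look as follows:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32  /* illustrative burst size */

    static void
    lcore_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
            /* Retrieve input packets through the PMD receive API. */
            nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);

            /* Process packets here, then hand them to the PMD transmit API. */
            nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

            /* Free any packets that could not be transmitted. */
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }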
Logical Cores, Memory and NIC Queues Relationships
--------------------------------------------------
-The Intel® DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
+The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbufs associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
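As a sketch, the pool for a given port can be placed on that port's socket (the pool name and sizes are illustrative, and the rte_pktmbuf_pool_create() helper of recent releases is assumed):

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_local_pool(uint16_t port_id)
    {
        /* Allocate the pool from the NUMA socket the NIC is attached to. */
        int socket = rte_eth_dev_socket_id(port_id);

        return rte_pktmbuf_pool_create("LOCAL_POOL", 8191, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, socket);
    }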
~~~~~~~~~~~~~~~~~~~~~
Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at Intel® DPDK initialization.
+identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
-The Ethernet device API exported by the Ethernet PMDs is described in the *Intel® DPDK API Reference*.
+The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK API Reference*.
Vector PMD for IXGBE
--------------------
========================================
Virtio is a para-virtualization framework initiated by IBM, and supported by KVM hypervisor.
-In the Intel® Data Plane Development Kit (Intel® DPDK),
+In the Data Plane Development Kit (DPDK),
we provide a virtio Poll Mode Driver (PMD) as a software solution, compared to the SR-IOV hardware solution,
for fast guest VM to guest VM communication and guest VM to host communication.
Vhost is a kernel acceleration module for virtio qemu backend.
-The Intel® DPDK extends kni to support vhost raw socket interface,
+The DPDK extends kni to support vhost raw socket interface,
which enables vhost to directly read/write packets from/to a physical port.
With this enhancement, virtio could achieve quite promising performance.
In this chapter, we will demonstrate usage of the virtio PMD with two back ends,
the standard qemu vhost back end and the vhost kni back end.
-Virtio Implementation in Intel® DPDK
-------------------------------------
+Virtio Implementation in DPDK
+-----------------------------
For details about the virtio spec, refer to Virtio PCI Card Specification written by Rusty Russell.
insmod rte_kni.ko
- Other basic Intel® DPDK preparations like hugepage enabling, igb_uio port binding are not listed here.
- Please refer to the *Intel® DPDK Getting Started Guide* for detailed instructions.
+ Other basic DPDK preparations, like hugepage enabling and igb_uio port binding, are not listed here.
+ Please refer to the *DPDK Getting Started Guide* for detailed instructions.
#. Launch the kni user application:
IPv6 offloads, and MSI/MSI-X interrupt delivery.
Because operating system vendors do not provide built-in drivers for this card,
VMware Tools must be installed to have a driver for the VMXNET3 network adapter available.
-One can use the same device in an Intel® DPDK application with VMXNET3 PMD introduced in Intel® DPDK API.
+One can use the same device in a DPDK application with the VMXNET3 PMD introduced in the DPDK API.
-Currently, the driver provides basic support for using the device in an Intel® DPDK application running on a guest OS.
+Currently, the driver provides basic support for using the device in a DPDK application running on a guest OS.
Optimization is needed on the backend, that is, the VMware* ESXi vmkernel switch, to achieve optimal performance end-to-end.
In this chapter, two setups with the use of the VMXNET3 PMD are demonstrated:
#. Vmxnet3 chaining VMs connected to a vSwitch
-VMXNET3 Implementation in the Intel® DPDK
------------------------------------------
+VMXNET3 Implementation in the DPDK
+----------------------------------
For details on the VMXNET3 device, refer to the VMXNET3 driver's vmxnet3 directory and support manual from VMware*.
and the hypervisor loads the buffers with packets in the RX case and sends packets to vSwitch in the TX case.
The VMXNET3 PMD is compiled with vmxnet3 device headers.
-The interface is similar to that of the other PMDs available in the Intel® DPDK API.
+The interface is similar to that of the other PMDs available in the DPDK API.
The driver pre-allocates the packet buffers and loads the command ring descriptors in advance.
The hypervisor fills those packet buffers on packet arrival and writes completion ring descriptors,
which are eventually pulled by the PMD.
-After reception, the Intel® DPDK application frees the descriptors and loads new packet buffers for the coming packets.
+After reception, the DPDK application frees the descriptors and loads new packet buffers for the coming packets.
The interrupts are disabled and there is no notification required.
This keeps performance up on the RX side, even though the device provides a notification feature.
-In the transmit routine, the Intel® DPDK application fills packet buffer pointers in the descriptors of the command ring
+In the transmit routine, the DPDK application fills packet buffer pointers in the descriptors of the command ring
and notifies the hypervisor.
In response the hypervisor takes packets and passes them to the vSwitch. It writes into the completion descriptors ring.
The rings are read by the PMD in the next transmit routine call and the buffers and descriptors are freed from memory.
.. note::
- Follow the *Intel® DPDK Getting Started Guide* to setup the basic Intel® DPDK environment.
+ Follow the *DPDK Getting Started Guide* to setup the basic DPDK environment.
.. note::
- Follow the *Intel® DPDK Sample Application's User Guide*, L2 Forwarding/L3 Forwarding and
- TestPMD for instructions on how to run an Intel® DPDK application using an assigned VMXNET3 device.
+ Follow the *DPDK Sample Application's User Guide*, L2 Forwarding/L3 Forwarding and
+ TestPMD for instructions on how to run a DPDK application using an assigned VMXNET3 device.
VMXNET3 with a Native NIC Connected to a vSwitch
------------------------------------------------
.. note::
- Other instructions on preparing to use Intel® DPDK such as, hugepage enabling, igb_uio port binding are not listed here.
- Please refer to *Intel® DPDK Getting Started Guide and Intel® DPDK Sample Application's User Guide* for detailed instructions.
+ Other instructions on preparing to use the DPDK, such as hugepage enabling and igb_uio port binding, are not listed here.
+ Please refer to the *DPDK Getting Started Guide* and *DPDK Sample Application's User Guide* for detailed instructions.
The packet reception and transmission flow path is:
Power Management
================
-The Intel® DPDK Power Management feature allows users space applications to save power
+The DPDK Power Management feature allows user space applications to save power
by dynamically adjusting CPU frequency or entering into different C-States.
* Adjusting the CPU frequency dynamically according to the utilization of the RX queue.
* scaling_setspeed
-In the Intel® DPDK, scaling_governor is configured in user space.
+In the DPDK, scaling_governor is configured in user space.
Then, a user space application can prompt the kernel by writing scaling_setspeed to adjust the CPU frequency
according to the strategies defined by the user space application.
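A minimal sketch of the corresponding librte_power calls is shown below; when to step the frequency up or down is left to the application's own strategy:

.. code-block:: c

    #include <rte_power.h>

    static void
    power_example(unsigned lcore_id)
    {
        if (rte_power_init(lcore_id) < 0)
            return;  /* frequency scaling not available for this lcore */

        rte_power_freq_down(lcore_id);  /* e.g. after a period of idle polling */
        rte_power_freq_up(lcore_id);    /* e.g. when the RX queue fills up */

        rte_power_exit(lcore_id);
    }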
-------------------------------------
Core state can be altered by speculative sleeps whenever the specified lcore has nothing to do.
-In the Intel® DPDK, if no packet is received after polling,
+In the DPDK, if no packet is received after polling,
speculative sleeps can be triggered according to the strategies defined by the user space application.
API Overview of the Power Library
References
----------
-* l3fwd-power: The sample application in Intel® DPDK that performs L3 forwarding with power management.
+* l3fwd-power: The sample application in DPDK that performs L3 forwarding with power management.
-* The "L3 Forwarding with Power Management Sample Application" chapter in the *Intel® DPDK Sample Application's User Guide*.
+* The "L3 Forwarding with Power Management Sample Application" chapter in the *DPDK Sample Application's User Guide*.
Intel processors provide performance counters to monitor events.
Some tools provided by Intel can be used to profile and benchmark an application.
-See the *VTune™ Performance Analyzer Essentials* publication from Intel Press for more information.
+See the *VTune Performance Analyzer Essentials* publication from Intel Press for more information.
-For an Intel® DPDK application, this can be done in a Linux* application environment only.
+For a DPDK application, this can be done in a Linux* application environment only.
The main situations that should be monitored through event counters are:
Quality of Service (QoS) Framework
==================================
-This chapter describes the Intel® DPDK Quality of Service (QoS) framework.
+This chapter describes the DPDK Quality of Service (QoS) framework.
Packet Pipeline with QoS Support
--------------------------------
|pkt_proc_pipeline_qos|
-This pipeline can be built using reusable Intel® DPDK software libraries.
+This pipeline can be built using reusable DPDK software libraries.
The main blocks implementing QoS in this pipeline are: the policer, the dropper and the scheduler.
A functional description of each block is provided in the following table.
Port Scheduler Enqueue API
^^^^^^^^^^^^^^^^^^^^^^^^^^
-The port scheduler enqueue API is very similar to the API of the Intel® DPDK PMD TX function.
+The port scheduler enqueue API is very similar to the API of the DPDK PMD TX function.
.. code-block:: c
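
    /* Expected prototype from rte_sched.h, shown here for illustration;
     * check the installed header for the exact signature. */
    int
    rte_sched_port_enqueue(struct rte_sched_port *port,
                           struct rte_mbuf **pkts, uint32_t n_pkts);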
Port Scheduler Dequeue API
^^^^^^^^^^^^^^^^^^^^^^^^^^
-The port scheduler dequeue API is very similar to the API of the Intel® DPDK PMD RX function.
+The port scheduler dequeue API is very similar to the API of the DPDK PMD RX function.
.. code-block:: c
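
    /* Expected prototype from rte_sched.h, shown here for illustration;
     * check the installed header for the exact signature. */
    int
    rte_sched_port_dequeue(struct rte_sched_port *port,
                           struct rte_mbuf **pkts, uint32_t n_pkts);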
Dropper
-------
-The purpose of the Intel® DPDK dropper is to drop packets arriving at a packet scheduler to avoid congestion.
+The purpose of the DPDK dropper is to drop packets arriving at a packet scheduler to avoid congestion.
The dropper supports the Random Early Detection (RED),
Weighted Random Early Detection (WRED) and tail drop algorithms.
Figure 1 illustrates how the dropper integrates with the scheduler.
-The Intel® DPDK currently does not support congestion management
+The DPDK currently does not support congestion management
so the dropper provides the only method for congestion avoidance.
.. _pg_figure_27:
-**Figure 27. High-level Block Diagram of the Intel® DPDK Dropper**
+**Figure 27. High-level Block Diagram of the DPDK Dropper**
.. image53_png has been renamed
Source Files Location
~~~~~~~~~~~~~~~~~~~~~
-The source files for the Intel® DPDK dropper are located at:
+The source files for the DPDK dropper are located at:
* DPDK/lib/librte_sched/rte_red.h
* DPDK/lib/librte_sched/rte_red.c
-Integration with the Intel® DPDK QoS Scheduler
+Integration with the DPDK QoS Scheduler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-RED functionality in the Intel® DPDK QoS scheduler is disabled by default.
-To enable it, use the Intel® DPDK configuration parameter:
+RED functionality in the DPDK QoS scheduler is disabled by default.
+To enable it, use the DPDK configuration parameter:
::
RED parameters are specified separately for four traffic classes and three packet colors (green, yellow and red)
allowing the scheduler to implement Weighted Random Early Detection (WRED).
-Integration with the Intel® DPDK QoS Scheduler Sample Application
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Integration with the DPDK QoS Scheduler Sample Application
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Intel® DPDK QoS Scheduler Application reads a configuration file on start-up.
+The DPDK QoS Scheduler Application reads a configuration file on start-up.
The configuration file includes a section containing RED parameters.
The format of these parameters is described in :ref:`Section 2.23.3.1 <Configuration>`.
A sample RED configuration is shown below. In this example, the queue size is 64 packets.
Use cases for the Ring library include:
- * Communication between applications in the Intel® DPDK
+ * Communication between applications in the DPDK
* Used by memory pool allocator
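A minimal sketch of ring usage follows; the ring name, size and the object passed through the ring are illustrative:

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_ring.h>

    static int payload = 42;

    static void
    ring_example(void)
    {
        /* Single-producer/single-consumer ring of 1024 slots. */
        struct rte_ring *r = rte_ring_create("example_ring", 1024,
                rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
        void *obj = NULL;

        if (r == NULL)
            return;

        if (rte_ring_enqueue(r, &payload) == 0)   /* producer side */
            rte_ring_dequeue(r, &obj);            /* consumer side */
    }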
Source Organization
===================
-This section describes the organization of sources in the Intel® DPDK framework.
+This section describes the organization of sources in the DPDK framework.
Makefiles and Config
--------------------
:ref:`Useful Variables Provided by the Build System <Useful_Variables_Provided_by_the_Build_System>`
for descriptions of other variables.
-Makefiles that are provided by the Intel® DPDK libraries and applications are located in $(RTE_SDK)/mk.
+Makefiles that are provided by the DPDK libraries and applications are located in $(RTE_SDK)/mk.
Config templates are located in $(RTE_SDK)/config. The templates describe the options that are enabled for each target.
-The config file also contains items that can be enabled and disabled for many of the Intel® DPDK libraries,
+The config file also contains items that can be enabled and disabled for many of the DPDK libraries,
including debug options.
The user should look at the config file and become familiar with the options.
The config file is also used to create a header file, which will be located in the new build directory.
Applications are sources that contain a main() function.
They are located in the $(RTE_SDK)/app and $(RTE_SDK)/examples directories.
-The app directory contains sample applications that are used to test the Intel® DPDK (autotests).
+The app directory contains sample applications that are used to test the DPDK (autotests).
The examples directory contains sample applications that show how libraries can be used.
::
.. note::
The actual examples directory may contain additional sample applications to those shown above.
- Check the latest Intel® DPDK source files for details.
+ Check the latest DPDK source files for details.
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-Thread Safety of Intel® DPDK Functions
-======================================
+Thread Safety of DPDK Functions
+===============================
-The Intel® DPDK is comprised of several libraries.
+The DPDK is comprised of several libraries.
Some of the functions in these libraries can be safely called from multiple threads simultaneously, while others cannot.
This section allows the developer to take these issues into account when building their own application.
-The run-time environment of the Intel® DPDK is typically a single thread per logical core.
+The run-time environment of the DPDK is typically a single thread per logical core.
In some cases, it is not only multi-threaded, but multi-process.
Typically, it is best to avoid sharing data structures between threads and/or processes where possible.
Where this is not possible, the execution blocks must access the data in a thread-safe manner.
cannot be done in multiple threads without using locking when a single hash or LPM table is accessed.
Another alternative to locking would be to create multiple instances of these tables allowing each thread its own copy.
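As a sketch, a spinlock can be used to serialize writers on a shared table; the lock and the table handle are assumed to be set up by the application:

.. code-block:: c

    #include <rte_hash.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t tbl_lock = RTE_SPINLOCK_INITIALIZER;

    static int32_t
    locked_add(const struct rte_hash *h, const void *key)
    {
        int32_t ret;

        rte_spinlock_lock(&tbl_lock);
        ret = rte_hash_add_key(h, key);
        rte_spinlock_unlock(&tbl_lock);

        return ret;
    }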
-The RX and TX of the PMD are the most critical aspects of an Intel® DPDK application
+The RX and TX of the PMD are the most critical aspects of a DPDK application
and it is recommended that no locking be used as it will impact performance.
Note, however, that these functions can safely be used from multiple threads
when each thread is performing I/O on a different NIC queue.
The ring library is based on a lockless ring-buffer algorithm that maintains its original design for thread safety.
Moreover, it provides high performance for either multi- or single-consumer/producer enqueue/dequeue operations.
-The mempool library is based on the Intel® DPDK lockless ring library and therefore is also multi-thread safe.
+The mempool library is based on the DPDK lockless ring library and therefore is also multi-thread safe.
Performance Insensitive API
---------------------------
Outside of the performance sensitive areas described in Section 25.1,
-the Intel® DPDK provides a thread-safe API for most other libraries.
+the DPDK provides a thread-safe API for most other libraries.
For example, malloc(librte_malloc) and memzone functions are safe for use in multi-threaded and multi-process environments.
The setup and configuration of the PMD is not performance sensitive, but is not thread safe either.
Library Initialization
----------------------
-It is recommended that Intel® DPDK libraries are initialized in the main thread at application startup
+It is recommended that DPDK libraries are initialized in the main thread at application startup
rather than subsequently in the forwarding threads.
-However, the Intel® DPDK performs checks to ensure that libraries are only initialized once.
+However, the DPDK performs checks to ensure that libraries are only initialized once.
If initialization is attempted more than once, an error is returned.
In the multi-process case, the configuration information of shared memory will only be initialized by the master process.
Interrupt Thread
----------------
-The Intel® DPDK works almost entirely in Linux user space in polling mode.
+The DPDK works almost entirely in Linux user space in polling mode.
For certain infrequent operations, such as receiving a PMD link status change notification,
-callbacks may be called in an additional thread outside the main Intel® DPDK processing threads.
-These function callbacks should avoid manipulating Intel® DPDK objects that are also managed by the normal Intel® DPDK threads,
+callbacks may be called in an additional thread outside the main DPDK processing threads.
+These function callbacks should avoid manipulating DPDK objects that are also managed by the normal DPDK threads,
and if they need to do so,
it is up to the application to provide the appropriate locking or mutual exclusion restrictions around those objects.
Timer Library
=============
-The Timer library provides a timer service to Intel® DPDK execution units to enable execution of callback functions asynchronously.
+The Timer library provides a timer service to DPDK execution units to enable execution of callback functions asynchronously.
Features of the library are:
* Timers can be periodic (multi-shot) or single (one-shot).
Writing Efficient Code
======================
-This chapter provides some tips for developing efficient code using the Intel® DPDK.
+This chapter provides some tips for developing efficient code using the DPDK.
For additional and more general information,
please refer to the *Intel® 64 and IA-32 Architectures Optimization Reference Manual*
which is a valuable reference to writing efficient code.
Memory
------
-This section describes some key memory considerations when developing applications in the Intel® DPDK environment.
+This section describes some key memory considerations when developing applications in the DPDK environment.
Memory Copy: Do not Use libc in the Data Plane
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Many libc functions are available in the Intel® DPDK, via the Linux* application environment.
+Many libc functions are available in the DPDK, via the Linux* application environment.
This can ease the porting of applications and the development of the configuration plane.
However, many of these functions are not designed for performance.
Functions such as memcpy() or strcpy() should not be used in the data plane.
For specific functions that are called often,
it is also a good idea to provide a self-made optimized function, which should be declared as static inline.
-The Intel® DPDK API provides an optimized rte_memcpy() function.
+The DPDK API provides an optimized rte_memcpy() function.
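For example, a data-plane copy would typically be written as follows (the buffers and length are illustrative):

.. code-block:: c

    #include <stddef.h>
    #include <rte_memcpy.h>

    static void
    copy_example(void *dst, const void *src, size_t len)
    {
        /* Optimized DPDK copy instead of the libc memcpy(). */
        rte_memcpy(dst, src, len);
    }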
Memory Allocation
~~~~~~~~~~~~~~~~~
~~~~
On a NUMA system, it is preferable to access local memory since remote memory access is slower.
-In the Intel® DPDK, the memzone, ring, rte_malloc and mempool APIs provide a way to create a pool on a specific socket.
+In the DPDK, the memzone, ring, rte_malloc and mempool APIs provide a way to create a pool on a specific socket.
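A minimal sketch using rte_malloc_socket() to keep a data structure on the local socket (the type string and default alignment are illustrative):

.. code-block:: c

    #include <stddef.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>

    static void *
    alloc_local(size_t size)
    {
        /* Allocate from the socket of the calling lcore. */
        return rte_malloc_socket("local_data", size, 0, rte_socket_id());
    }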
Sometimes, it can be a good idea to duplicate data to optimize speed.
For read-mostly variables that are often accessed,
----------------------------
To provide a message-based communication between lcores,
-it is advised to use the Intel® DPDK ring API, which provides a lockless ring implementation.
+it is advised to use the DPDK ring API, which provides a lockless ring implementation.
The ring supports bulk and burst access,
meaning that it is possible to read several elements from the ring with only one costly atomic operation.
PMD Driver
----------
-The Intel® DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
+The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
allowing the factorization of some code for each call in the send or receive function.
Avoid partial writes.
a low end-to-end latency, at the cost of lower throughput.
In order to achieve higher throughput,
-the Intel® DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
+the DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
Using the testpmd application as an example,
the burst size can be set on the command line to a value of 16 (also the default value).
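For instance, testpmd might be invoked with an explicit burst size as follows (the core mask and other options are illustrative):

.. code-block:: console

    ./testpmd -c 0x3 -n 4 -- --burst=16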
Setting the Target CPU Type
---------------------------
-The Intel® DPDK supports CPU microarchitecture-specific optimizations by means of CONFIG_RTE_MACHINE option
-in the Intel® DPDK configuration file.
+The DPDK supports CPU microarchitecture-specific optimizations by means of CONFIG_RTE_MACHINE option
+in the DPDK configuration file.
The degree of optimization depends on the compiler's ability to optimize for a specific microarchitecture,
therefore it is preferable to use the latest compiler versions whenever possible.