EAL in a Linux-userland Execution Environment
---------------------------------------------
In a Linux environment, the DPDK application runs as a user-space application using the pthread library.
-PCI information about devices and address space is discovered through the /sys kernel interface and through a module called igb_uio.
+PCI information about devices and address space is discovered through the /sys kernel interface and through kernel modules such as uio_pci_generic, or igb_uio.
Refer to the *UIO: User-space drivers* documentation in the Linux kernel. This memory is mmap'd in the application.
The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
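For example, huge pages are typically reserved and a hugetlbfs mount point made available before the application starts; a minimal sketch (the page count and mount point are illustrative):

.. code-block:: console

    # Reserve 1024 huge pages of 2 MB each (count is an example)
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # Mount hugetlbfs so the EAL can mmap() huge-page-backed memory
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge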
PCI Access
~~~~~~~~~~
The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI bus.
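Each device appears as a directory under /sys/bus/pci/devices, exposing its vendor/device IDs and BAR resources; for instance (reusing the example NIC address from later in this section):

.. code-block:: console

    ls /sys/bus/pci/devices/0000:09:00.0/
    cat /sys/bus/pci/devices/0000:09:00.0/vendor
    cat /sys/bus/pci/devices/0000:09:00.0/device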
-
-To access PCI memory, a kernel module called igb_uio provides a /dev/uioX device file
+To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX device file
+and resource files in /sys
that can be mmap'd to obtain access to PCI address space from the application.
-It uses the uio kernel feature (userland driver).
+The DPDK-specific igb_uio module can also be used for this. Both drivers use the uio kernel feature (userland driver).
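Once a device is bound to one of these drivers, the device node and resource files can be inspected directly; a quick check (the uio index and PCI address are illustrative):

.. code-block:: console

    # The uio device node created for the bound device
    ls -l /dev/uio0

    # The sysfs resource files describing the device's BARs
    ls /sys/bus/pci/devices/0000:09:00.0/resource*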
Per-lcore and Shared Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Refer to the *DPDK Getting Started Guide* for more information on memory management in the DPDK.
In the above command, 4 GB memory is reserved (2048 of 2 MB pages) for DPDK.
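The reservation can be verified from /proc/meminfo, which should then report 2048 total huge pages:

.. code-block:: console

    grep Huge /proc/meminfo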
-#. Load igb_uio and bind one Intel NIC controller to igb_uio:
+#. Load uio_pci_generic and bind one Intel NIC controller to it:
.. code-block:: console
- insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
- python tools/dpdk_nic_bind.py -b igb_uio 0000:09:00:00.0
+ modprobe uio_pci_generic
+ python tools/dpdk_nic_bind.py -b uio_pci_generic 0000:09:00.0
In this case, 0000:09:00.0 is the PCI address for the NIC controller.
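The binding can be confirmed with the same script:

.. code-block:: console

    python tools/dpdk_nic_bind.py --status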
As a prerequisite, the vhost/vhost-net kernel CONFIG options must be enabled before compiling the kernel.
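One way to check this on a running system (the config file location varies by distribution):

.. code-block:: console

    grep VHOST /boot/config-$(uname -r)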
-#. Compile the DPDK and insert igb_uio as normal.
+#. Compile the DPDK and insert uio_pci_generic/igb_uio kernel modules as normal.
#. Insert the KNI kernel module:
.. code-block:: console

    insmod rte_kni.ko
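The module also accepts optional parameters; for example, depending on the DPDK version, a single kernel thread can be requested for all KNI devices:

.. code-block:: console

    insmod rte_kni.ko kthread_mode=single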
- Other basic DPDK preparations like hugepage enabling, igb_uio port binding are not listed here.
+   Other basic DPDK preparations, such as hugepage enabling and uio port binding, are not listed here.
Please refer to the *DPDK Getting Started Guide* for detailed instructions.
#. Launch the kni user application:
In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turn corresponds to a physical port,
which means received packets come from vEth0, and transmitted packets are sent to vEth0.
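Once it appears, vEth0 can be managed with standard Linux tools, for example:

.. code-block:: console

    ip link set vEth0 up
    ip addr show vEth0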
-#. In the guest, bind the virtio device to the igb_uio kernel module and start the forwarding application.
+#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
When the virtio port in the guest bursts rx, it gets packets from the raw socket's receive queue.
When the virtio port bursts tx, it sends packets to the tx_q.
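Because the port is backed by a raw socket on vEth0, traffic crossing that interface can be observed with standard tools:

.. code-block:: console

    tcpdump -i vEth0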
modprobe uio
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
- insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
- python tools/dpdk_nic_bind.py -b igb_uio 00:03.0
+ modprobe uio_pci_generic
+ python tools/dpdk_nic_bind.py -b uio_pci_generic 00:03.0
We use testpmd as the forwarding application in this example.
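A typical interactive launch inside the guest looks like the following (the core mask and memory-channel count are illustrative):

.. code-block:: console

    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
    testpmd> set fwd mac
    testpmd> start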
.. note::
- Other instructions on preparing to use DPDK such as, hugepage enabling, igb_uio port binding are not listed here.
+   Other instructions on preparing to use DPDK, such as hugepage enabling and uio port binding, are not listed here.
Please refer to the *DPDK Getting Started Guide* and the *DPDK Sample Applications User Guide* for detailed instructions.
The packet reception and transmission flow path is: