The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, the NIC resources of an SR-IOV-capable Ethernet controller can be logically partitioned,
with each partition exposed to a virtual machine as a separate PCI function called a "Virtual Function".
-Refer to Figure 10.
+Refer to :numref:`figure_single_port_nic`.
-Therefore, a NIC is logically distributed among multiple virtual machines (as shown in Figure 10),
+Therefore, a NIC is logically distributed among multiple virtual machines (as shown in :numref:`figure_single_port_nic`),
while still having global data in common to share with the Physical Function and other Virtual Functions.
The DPDK fm10kvf, i40evf, igbvf or ixgbevf Poll Mode Driver (PMD) serves the Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller NIC,
-Intel® Fortville 10/40 Gigabit Ethernet Controller NIC's virtual PCI function,or PCIE host-interface of the Intel Ethernet Switch
+Intel® Fortville 10/40 Gigabit Ethernet Controller NIC's virtual PCI function, or PCIe host-interface of the Intel Ethernet Switch
FM10000 Series.
Meanwhile, the DPDK Poll Mode Driver (PMD) also supports the "Physical Function" of such NICs on the host.
* `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_
-.. _nic_figure_1:
+.. _figure_single_port_nic:
-**Figure 1. Virtualization for a Single Port NIC in SR-IOV Mode**
+.. figure:: img/single_port_nic.*
+
+ Virtualization for a Single Port NIC in SR-IOV Mode
-.. image:: img/single_port_nic.*
Physical and Virtual Function Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
modprobe uio
insmod igb_uio
- ./dpdk_nic_bind.py -b igb_uio bb:ss.f
+ ./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
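
As an illustrative sketch only (the core mask and memory-channel count are assumptions, not values from this guide), testpmd could be started on the host as follows; ``-i`` drops into the interactive command line:

.. code-block:: console

   ./testpmd -c 0xf -n 4 -- -i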
modprobe uio
insmod igb_uio
- ./dpdk_nic_bind.py -b igb_uio bb:ss.f
+ ./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
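
As an optional sanity check (not a step from the original guide), each enabled VF should now appear as its own PCI function:

.. code-block:: console

   lspci | grep -i "Virtual Function"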
rmmod igb (To remove the igb module)
insmod igb max_vfs=2,2 (To enable two Virtual Functions per port)
-* Using Intel® DPDK PMD PF igb driver:
+* Using DPDK PMD PF igb driver:
Kernel Params: iommu=pt, intel_iommu=on

.. code-block:: console

   modprobe uio
   insmod igb_uio
- ./dpdk_nic_bind.py -b igb_uio bb:ss.f
+ ./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
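
To confirm the binding took effect, the same script can report the status of all network devices and the drivers they are attached to:

.. code-block:: console

   ./dpdk-devbind.py --status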
modprobe uio
insmod igb_uio
- ./dpdk_nic_bind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
+ ./dpdk-devbind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs
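
As an optional check (assuming a Linux kernel with standard SR-IOV sysfs support), each enabled VF is exposed as a ``virtfn*`` symlink under the PF's sysfs directory:

.. code-block:: console

   ls -l /sys/bus/pci/devices/0000\:02\:00.0/ | grep virtfn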
.. note::
- — The pci-assign,host=08:10.0 alue indicates that you want to attach a PCI device
+ * The pci-assign,host=08:10.0 value indicates that you want to attach a PCI device
to a Virtual Machine and the respective (Bus:Device.Function)
numbers should be passed for the Virtual Function to be attached.
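
For illustration only (the memory size, CPU count and disk image path are hypothetical; only the pci-assign argument comes from this guide), a guest with the VF attached could be launched as:

.. code-block:: console

   qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
       -drive file=/path/to/guest.img \
       -device pci-assign,host=08:10.0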
Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
- you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Inteface.
+ you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
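
A minimal sketch, assuming two VF ports in the guest (the core mask, port mask, physical core and PID are illustrative):

.. code-block:: console

   # In the guest: run l2fwd over both ports
   ./build/l2fwd -c 0x3 -n 4 -- -p 0x3

   # On the host: pin a qemu vCPU thread (the PID shown is a placeholder) to core 0
   taskset -p 0x1 12345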
.. note::
* A graphical tool such as virt-manager can also be used to bind and unbind devices to a virtual machine in Ubuntu.
If this option is used, step 6 in the instructions provided will be different.
- * The Virtual Machine Monitor (see Figure 11) is equivalent to a Host OS with KVM installed as described in the instructions.
+ * The Virtual Machine Monitor (see :numref:`figure_perf_benchmark`) is equivalent to a Host OS with KVM installed as described in the instructions.
+
+.. _figure_perf_benchmark:
-.. _nic_figure_2:
+.. figure:: img/perf_benchmark.*
-**Figure 2. Performance Benchmark Setup**
+ Performance Benchmark Setup
-.. image:: img/perf_benchmark.*
DPDK SR-IOV PMD PF/VF Driver Usage Model
----------------------------------------
With such fast packet processing in the host instance, many services such as filtering, QoS and
DPI can be offloaded onto the host fast path.
-Figure 12 shows the scenario where some VMs directly communicate externally via a VFs,
+:numref:`figure_fast_pkt_proc` shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
-.. _nic_figure_3:
+.. _figure_fast_pkt_proc:
+
+.. figure:: img/fast_pkt_proc.*
-**Figure 3. Fast Host-based Packet Processing**
+ Fast Host-based Packet Processing
-.. image:: img/fast_pkt_proc.*
SR-IOV (PF/VF) Approach for Inter-VM Communication
--------------------------------------------------
However, when VF traffic crosses physical ports (VM0<->VM2), there is no such hardware bridge.
In this case, the DPDK PMD PF driver provides host forwarding between such VMs.
-Figure 13 shows an example.
+:numref:`figure_inter_vm_comms` shows an example.
In this case, an update of the MAC address lookup tables in both the NIC and the host DPDK application is required.
In the NIC, a destination MAC address that belongs to a VM behind another physical port is written to the PF-specific pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the MAC destination address
which belongs to the destination VF on the VM.
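
For example (the port number and MAC address are illustrative), the host testpmd application can register the remote VM's MAC address on the PF, so the NIC switch delivers such frames to the host for forwarding:

.. code-block:: console

   testpmd> mac_addr add 0 00:12:34:56:78:9A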
-.. _nic_figure_4:
+.. _figure_inter_vm_comms:
-**Figure 4. Inter-VM Communication**
+.. figure:: img/inter_vm_comms.*
-.. image:: img/inter_vm_comms.*
+ Inter-VM Communication