* **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard,
also known as "native mode" or "pass-through" mode.
In this chapter, this mode is referred to as IOV mode.
* **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM)
  or a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
  In this chapter, this mode is referred to as the Next Generation VMDq mode.
.. code-block:: console

   rmmod ixgbe
   modprobe ixgbe max_vfs=2,2
When using the DPDK PMD PF driver, insert the DPDK kernel module igb_uio and set the number of VFs via the sysfs max_vfs attribute:
.. code-block:: console

   ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
   lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
   lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0
It also creates two VFs for device 0000:02:00.1:
.. code-block:: console

   ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
   lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
   lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1
#. List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
and four Virtual Functions (two for each port).
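The virtfn symlinks shown above can also be used to verify how many VFs each Physical Function exposes. The following is a self-contained sketch that recreates a miniature sysfs-style layout in a temporary directory and counts the virtfn links; on a real host you would run the count directly against /sys/bus/pci/devices/&lt;PF address&gt; (the device addresses below simply mirror the earlier examples and are illustrative):

```shell
# Build a simulated sysfs layout for one PF with two VFs.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:02:10.0" "$tmp/0000:02:10.2" "$tmp/0000:02:00.0"
ln -s "../0000:02:10.0" "$tmp/0000:02:00.0/virtfn0"
ln -s "../0000:02:10.2" "$tmp/0000:02:00.0/virtfn1"

# Count the virtfn symlinks under the PF directory.
vf_count=$(ls "$tmp/0000:02:00.0" | grep -c '^virtfn')
echo "VFs exposed by PF: $vf_count"

rm -rf "$tmp"
```

With the two-VF configuration from the modprobe example, the count for each PF should be 2.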
With fast packet processing in such a host instance, many services such as filtering, QoS,
and DPI can be offloaded onto the host fast path.
Figure 12 shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
.. _pg_figure_12:
To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations: