1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright(c) 2010-2014 Intel Corporation.
4 Intel Virtual Function Driver
5 =============================
7 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
8 support the following modes of operation in a virtualized environment:
10 * **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard,
12 also known as "native mode" or "pass-through" mode.
13 In this chapter, this mode is referred to as IOV mode.
15 * **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM) or
16 a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
17 In this chapter, this mode is referred to as the Next Generation VMDq mode.
19 SR-IOV Mode Utilization in a DPDK Environment
20 ---------------------------------------------
22 The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, it is possible to logically partition the resources of an SR-IOV-capable Ethernet controller NIC and
expose them to a virtual machine as a separate PCI function called a "Virtual Function".
25 Refer to :numref:`figure_single_port_nic`.
27 Therefore, a NIC is logically distributed among multiple virtual machines (as shown in :numref:`figure_single_port_nic`),
28 while still having global data in common to share with the Physical Function and other Virtual Functions.
The DPDK fm10kvf, i40evf, igbvf or ixgbevf Poll Mode Driver (PMD) serves the Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller NIC,
Intel® Fortville 10/40 Gigabit Ethernet Controller NIC's virtual PCI function, or the PCIe host-interface of the Intel Ethernet Switch FM10000 Series.
Meanwhile, the DPDK Poll Mode Driver (PMD) also supports the "Physical Function" of such NICs on the host.
35 The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on Intel® 82576 Gigabit Ethernet Controller,
36 Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller,
and Intel® Fortville 10/40 Gigabit Ethernet Controller NICs, so that a guest can choose it for inter-virtual-machine traffic in SR-IOV mode.
39 For more detail on SR-IOV, please refer to the following documents:
41 * `SR-IOV provides hardware based I/O sharing <http://www.intel.com/network/connectivity/solutions/vmdc.htm>`_
43 * `PCI-SIG-Single Root I/O Virtualization Support on IA
44 <http://www.intel.com/content/www/us/en/pci-express/pci-sig-single-root-io-virtualization-support-in-virtualization-technology-for-connectivity-paper.html>`_
46 * `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_
48 .. _figure_single_port_nic:
50 .. figure:: img/single_port_nic.*
52 Virtualization for a Single Port NIC in SR-IOV Mode
55 Physical and Virtual Function Infrastructure
56 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
58 The following describes the Physical Function and Virtual Functions infrastructure for the supported Ethernet Controller NICs.
60 Virtual Functions operate under the respective Physical Function on the same NIC Port and therefore have no access
61 to the global NIC resources that are shared between other functions for the same NIC port.
63 A Virtual Function has basic access to the queue resources and control structures of the queues assigned to it.
64 For global resource access, a Virtual Function has to send a request to the Physical Function for that port,
65 and the Physical Function operates on the global resources on behalf of the Virtual Function.
66 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
67 which is called a "Mailbox".
69 Intel® Ethernet Adaptive Virtual Function
70 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device id (8086:1889) across different Intel Ethernet Controllers.
The AVF driver is a VF driver that supports future Intel devices without requiring a VM update.
Because it is an adaptive VF driver, each new release of the VF driver can enable additional advanced features in the VM,
provided the underlying hardware device supports them, in a device-agnostic way and without ever compromising the base functionality.
AVF provides a generic hardware interface, and the interface between the AVF driver and a compliant PF driver is specified.

Intel products starting from the Ethernet Controller 700 Series support the Adaptive Virtual Function.

Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
81 For more detail on SR-IOV, please refer to the following documents:
83 * `Intel® AVF HAS <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_
To use the DPDK AVF PMD on an Intel® 700 Series Ethernet Controller, the device id (0x1889) needs to be specified during device
assignment in the hypervisor. Taking qemu as an example, the device assignment should carry the AVF device id (0x1889), e.g.
``-device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0``.
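For example, a complete invocation modeled on the KVM setup described later in this chapter might look as follows
(the memory size, disk image and VF host address are illustrative placeholders):

.. code-block:: console

/usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0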
91 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
92 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
94 In a virtualized environment, the programmer can enable a maximum of *64 Virtual Functions (VF)*
95 globally per PCIE host-interface of the Intel Ethernet Switch FM10000 Series device.
96 Each VF can have a maximum of 16 queue pairs.
The Physical Function in the host can only be configured by the Linux* fm10k driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]); the DPDK PMD PF driver does not support it yet.
102 * Using Linux* fm10k driver:
104 .. code-block:: console
106 rmmod fm10k (To remove the fm10k module)
insmod fm10k.ko max_vfs=2,2 (To enable two Virtual Functions per port)
109 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
110 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
111 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
114 * Virtual Functions 0 and 2 belong to Physical Function 0
116 * Virtual Functions 1 and 3 belong to Physical Function 1
120 The above is an important consideration to take into account when targeting specific packets to a selected port.
122 Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
123 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
125 In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
126 globally per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
127 The number of queue pairs of each VF can be configured by ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` in ``config`` file.
The Physical Function in the host can be configured either by the Linux* i40e driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both the DPDK PMD PF and VF drivers are used, the whole NIC will be taken over by the DPDK-based application.
134 * Using Linux* i40e driver:
136 .. code-block:: console
138 rmmod i40e (To remove the i40e module)
139 insmod i40e.ko max_vfs=2,2 (To enable two Virtual Functions per port)
141 * Using the DPDK PMD PF i40e driver:
143 Kernel Params: iommu=pt, intel_iommu=on
145 .. code-block:: console
modprobe uio
insmod igb_uio
./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
152 Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
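For example, using the same testpmd invocation shown later in this guide (the core list and memory channel count are illustrative):

.. code-block:: console

./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i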
154 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
155 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
156 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
159 * Virtual Functions 0 and 2 belong to Physical Function 0
161 * Virtual Functions 1 and 3 belong to Physical Function 1
165 The above is an important consideration to take into account when targeting specific packets to a selected port.
For the Intel® X710/XL710 Gigabit Ethernet Controller, queues are in pairs. One queue pair means one receive queue and
one transmit queue. The default number of queue pairs per VF is 4, and the maximum is 16.
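As a sketch, this per-VF queue pair count is a build-time option; assuming the classic make-based build used elsewhere in this guide,
it is set in the ``config`` file (the ``config/common_base`` path is an assumption):

.. code-block:: console

# config/common_base: 4 is the default, 16 is the maximum
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4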
170 Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
171 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
173 The programmer can enable a maximum of *63 Virtual Functions* and there must be *one Physical Function* per Intel® 82599
174 10 Gigabit Ethernet Controller NIC port.
175 The reason for this is that the device allows for a maximum of 128 queues per port and a virtual/physical function has to
176 have at least one queue pair (RX/TX).
177 The current implementation of the DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
The Physical Function in the host can be configured either by the Linux* ixgbe driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both the DPDK PMD PF and VF drivers are used, the whole NIC will be taken over by the DPDK-based application.
184 * Using Linux* ixgbe driver:
186 .. code-block:: console
188 rmmod ixgbe (To remove the ixgbe module)
insmod ixgbe.ko max_vfs=2,2 (To enable two Virtual Functions per port)
191 * Using the DPDK PMD PF ixgbe driver:
193 Kernel Params: iommu=pt, intel_iommu=on
195 .. code-block:: console
modprobe uio
insmod igb_uio
./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
202 Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
204 * Using the DPDK PMD PF ixgbe driver to enable VF RSS:
Follow the same steps as above to install the uio and igb_uio modules and to specify max_vfs for the PCI device, then
launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
The number of queues available per VF (at most 4) depends on the total number of pools, which is
determined by the maximum number of VFs at the PF initialization stage and the number of queues specified:
* If the maximum number of VFs (max_vfs) is set in the range of 1 to 32:

If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are 32 pools in total (ETH_32_POOLS),
and each VF can have 4 Rx queues;

If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are 32 pools in total (ETH_32_POOLS),
and each VF can have 2 Rx queues;

* If the maximum number of VFs (max_vfs) is in the range of 33 to 64:

If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected,
as ``rxq`` is not valid in this case;

If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (ETH_64_POOLS),
and each VF can have 2 Rx queues;
On the host, to enable VF RSS functionality, the Rx mq mode should be set to ETH_MQ_RX_VMDQ_RSS
or ETH_MQ_RX_RSS, and SR-IOV mode should be activated (max_vfs >= 1).
The VF RSS information, such as the hash function, RSS key and RSS key length, also needs to be configured.
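As a minimal sketch of how this can be exercised with the tools already used in this chapter
(the PCI address, core lists and VF count are illustrative assumptions, and testpmd started with ``--rxq`` greater than 1
is assumed to request an RSS multi-queue mode as in the pool discussion above):

.. code-block:: console

# on the host: create the VFs on the PF and start testpmd on it
./dpdk-devbind.py -b igb_uio 0000:02:00.0
echo 4 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i

# in the guest: start testpmd on the assigned VF with four Rx/Tx queue pairs
./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i --rxq=4 --txq=4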
The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is:
the hash and key are shared among the PF and all VFs, and the RETA table with 128 entries is also shared
among the PF and all VFs. It is therefore not possible to query the hash and RETA content per
VF on the guest; if needed, query them on the host for the shared RETA information.
240 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
241 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
242 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
245 * Virtual Functions 0 and 2 belong to Physical Function 0
247 * Virtual Functions 1 and 3 belong to Physical Function 1
251 The above is an important consideration to take into account when targeting specific packets to a selected port.
253 Intel® 82576 Gigabit Ethernet Controller and Intel® Ethernet Controller I350 Family VF Infrastructure
254 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
256 In a virtualized environment, an Intel® 82576 Gigabit Ethernet Controller serves up to eight virtual machines (VMs).
257 The controller has 16 TX and 16 RX queues.
258 They are generally referred to (or thought of) as queue pairs (one TX and one RX queue).
259 This gives the controller 16 queue pairs.
261 A pool is a group of queue pairs for assignment to the same VF, used for transmit and receive operations.
262 The controller has eight pools, with each pool containing two queue pairs, that is, two TX and two RX queues assigned to each VF.
264 In a virtualized environment, an Intel® Ethernet Controller I350 family device serves up to eight virtual machines (VMs) per port.
The eight queues can be accessed by eight different VMs if configured correctly (the I350 has 4x 1GbE ports, each with 8 TX and 8 RX queues);
that means one transmit and one receive queue assigned to each VF.
270 * Using Linux* igb driver:
272 .. code-block:: console
274 rmmod igb (To remove the igb module)
insmod igb.ko max_vfs=2,2 (To enable two Virtual Functions per port)
277 * Using DPDK PMD PF igb driver:
Kernel Params: iommu=pt, intel_iommu=on

.. code-block:: console

modprobe uio
insmod igb_uio
./dpdk-devbind.py -b igb_uio bb:ss.f
echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)
287 Launch DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
289 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a four-port NIC.
When you enable the eight Virtual Functions with the above command, the eight enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence, starting from 0 to 7.
294 * Virtual Functions 0 and 4 belong to Physical Function 0
296 * Virtual Functions 1 and 5 belong to Physical Function 1
298 * Virtual Functions 2 and 6 belong to Physical Function 2
300 * Virtual Functions 3 and 7 belong to Physical Function 3
304 The above is an important consideration to take into account when targeting specific packets to a selected port.
306 Validated Hypervisors
307 ~~~~~~~~~~~~~~~~~~~~~
309 The validated hypervisor is:
311 * KVM (Kernel Virtual Machine) with Qemu, version 0.14.0
However, because the hypervisor is bypassed and the Virtual Function devices are configured using the Mailbox interface,
the solution is hypervisor-agnostic.
Xen* and VMware* (when SR-IOV is supported) will also be able to support the DPDK with Virtual Function driver support.
317 Expected Guest Operating System in Virtual Machine
318 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
320 The expected guest operating systems in a virtualized environment are:
322 * Fedora* 14 (64-bit)
324 * Ubuntu* 10.04 (64-bit)
326 For supported kernel versions, refer to the *DPDK Release Notes*.
328 Setting Up a KVM Virtual Machine Monitor
329 ----------------------------------------
331 The following describes a target environment:
333 * Host Operating System: Fedora 14
335 * Hypervisor: KVM (Kernel Virtual Machine) with Qemu version 0.14.0
337 * Guest Operating System: Fedora 14
339 * Linux Kernel Version: Refer to the *DPDK Getting Started Guide*
341 * Target Applications: l2fwd, l3fwd-vf
343 The setup procedure is as follows:
345 #. Before booting the Host OS, open **BIOS setup** and enable **Intel® VT features**.
#. While booting the Host OS kernel, pass the intel_iommu=on kernel command line argument using GRUB.
When using the DPDK PF driver on the host, also pass the iommu=pt kernel command line argument in GRUB.
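As a sketch (the GRUB configuration path and the command to regenerate it vary by distribution and are assumptions here):

.. code-block:: console

# add the arguments to the kernel command line in /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
# regenerate the GRUB configuration and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg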
350 #. Download qemu-kvm-0.14.0 from
351 `http://sourceforge.net/projects/kvm/files/qemu-kvm/ <http://sourceforge.net/projects/kvm/files/qemu-kvm/>`_
352 and install it in the Host OS using the following steps:
354 When using a recent kernel (2.6.25+) with kvm modules included:
356 .. code-block:: console
tar xzf qemu-kvm-release.tar.gz
cd qemu-kvm-release
./configure --prefix=/usr/local/kvm
make
sudo make install
sudo /sbin/modprobe kvm-intel
365 When using an older kernel, or a kernel from a distribution without the kvm modules,
366 you must download (from the same link), compile and install the modules yourself:
368 .. code-block:: console
tar xjf kvm-kmod-release.tar.bz2
cd kvm-kmod-release
./configure
make
sudo make install
sudo /sbin/modprobe kvm-intel
qemu-kvm installs in the /usr/local/kvm/bin directory (as set by the --prefix option above).
379 For more details about KVM configuration and usage, please refer to:
381 `http://www.linux-kvm.org/page/HOWTO1 <http://www.linux-kvm.org/page/HOWTO1>`_.
383 #. Create a Virtual Machine and install Fedora 14 on the Virtual Machine.
384 This is referred to as the Guest Operating System (Guest OS).
386 #. Download and install the latest ixgbe driver from:
388 `http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687 <http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687>`_
When using the Linux kernel ixgbe driver, unload it and reload it with the max_vfs=2,2 argument:
394 .. code-block:: console
rmmod ixgbe
modprobe ixgbe max_vfs=2,2
When using the DPDK PMD PF driver, insert the DPDK kernel module igb_uio and set the number of VFs via the sysfs max_vfs attribute:
401 .. code-block:: console
modprobe uio
insmod igb_uio
./dpdk-devbind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
406 echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
407 echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
408 echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
409 echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs
You need to explicitly specify the number of VFs for each port; for example,
the command above creates two VFs for the first two ixgbe ports.

Let's say we have a machine with four physical ixgbe ports:
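.. code-block:: console

0000:02:00.0
0000:02:00.1
0000:0e:00.0
0000:0e:00.1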
427 The command above creates two vfs for device 0000:02:00.0:
429 .. code-block:: console
431 ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
432 lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
433 lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0
435 It also creates two vfs for device 0000:02:00.1:
437 .. code-block:: console
439 ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
440 lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
441 lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1
443 #. List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
444 and four Virtual Functions (two for each port).
445 This is the result of the previous step.
447 #. Insert the pci_stub module to hold the PCI devices that are freed from the default driver using the following command
448 (see http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM Section 4 for more information):
450 .. code-block:: console
452 sudo /sbin/modprobe pci-stub
454 Unbind the default driver from the PCI devices representing the Virtual Functions.
455 A script to perform this action is as follows:
457 .. code-block:: console
459 echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
460 echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
461 echo 0000:08:10.0 > /sys/bus/pci/drivers/pci-stub/bind
where 0000:08:10.0 is the PCI address of the Virtual Function visible in the Host OS.
465 #. Now, start the Virtual Machine by running the following command:
467 .. code-block:: console
469 /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0
where:

* -m = memory to assign

* -smp = number of smp cores

* -boot = boot option

* -hda = virtual disk image

* -device = device to attach

* The pci-assign,host=08:10.0 value indicates that you want to attach a PCI device
to a Virtual Machine and the respective (Bus:Device.Function)
numbers should be passed for the Virtual Function to be attached.

* qemu-kvm-0.14.0 allows a maximum of four PCI devices assigned to a VM,
but this is qemu-kvm version dependent since qemu-kvm-0.14.1 allows a maximum of five PCI devices.

* qemu-system-x86_64 also has a -cpu command line option that is used to select the cpu_model
to emulate in a Virtual Machine. Therefore, it can be used as:
495 .. code-block:: console
497 /usr/local/kvm/bin/qemu-system-x86_64 -cpu ?
499 (to list all available cpu_models)
501 /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -cpu host -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0
503 (to use the same cpu_model equivalent to the host cpu)
505 For more information, please refer to: `http://wiki.qemu.org/Features/CPUModels <http://wiki.qemu.org/Features/CPUModels>`_.
#. If vfio-pci is used to pass through the device instead of pci-assign, steps 8 and 9 need to be updated to bind the device to vfio-pci and
to replace pci-assign with vfio-pci when starting the virtual machine.
510 .. code-block:: console
512 sudo /sbin/modprobe vfio-pci
514 echo "8086 10ed" > /sys/bus/pci/drivers/vfio-pci/new_id
515 echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
516 echo 0000:08:10.0 > /sys/bus/pci/drivers/vfio-pci/bind
518 /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device vfio-pci,host=08:10.0
#. Install and run the DPDK host application to take over the Physical Function. For example:
522 .. code-block:: console
524 make install T=x86_64-native-linuxapp-gcc
525 ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i
527 #. Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
528 The virtual functions will be listed as available for use.
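For example (the grep pattern is only illustrative):

.. code-block:: console

lspci | grep -i ethernet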
530 #. Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
531 that is, there is no change to the normal installation procedure.
533 .. code-block:: console
535 make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc
make
If you are unable to compile the DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
power off the Guest OS and start the virtual machine with the correct -cpu option in the qemu-system-x86_64 command as shown in step 9.
You must select the best x86_64 cpu_model to emulate, or you can select the host option if available.
547 Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
548 For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
549 you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
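A minimal sketch of the pinning step, assuming the qemu vCPU threads are identified by listing the qemu process threads
(the core number and thread id are placeholders):

.. code-block:: console

ps -eLf | grep qemu-system-x86_64    (list the qemu thread ids)
taskset -pc 4 <thread-id>            (pin one vCPU thread to host core 4)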
553 * The Virtual Machine Manager (the Fedora package name is virt-manager) is a utility for virtual machine management
554 that can also be used to create, start, stop and delete virtual machines.
If this option is used, steps 2 and 6 in the instructions provided will be different.
557 * virsh, a command line utility for virtual machine management,
558 can also be used to bind and unbind devices to a virtual machine in Ubuntu.
559 If this option is used, step 6 in the instructions provided will be different.
561 * The Virtual Machine Monitor (see :numref:`figure_perf_benchmark`) is equivalent to a Host OS with KVM installed as described in the instructions.
563 .. _figure_perf_benchmark:
565 .. figure:: img/perf_benchmark.*
567 Performance Benchmark Setup
570 DPDK SR-IOV PMD PF/VF Driver Usage Model
571 ----------------------------------------
573 Fast Host-based Packet Processing
574 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
the DPDK VF PMD performs with the same throughput as in a non-VT native environment.

With such fast packet processing in the host instance, services such as filtering, QoS and
DPI can be offloaded onto the host fast path.
:numref:`figure_fast_pkt_proc` shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
586 .. _figure_fast_pkt_proc:
588 .. figure:: img/fast_pkt_proc.*
590 Fast Host-based Packet Processing
593 SR-IOV (PF/VF) Approach for Inter-VM Communication
594 --------------------------------------------------
Inter-VM data communication is one of the traffic bottlenecks in virtualization platforms.
SR-IOV device assignment helps a VM to attach to the real device, taking advantage of the bridge in the NIC.
So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware acceleration.
However, when traffic crosses physical ports (VM0<->VM2), there is no such hardware bridge.
In this case, the DPDK PMD PF driver provides host forwarding between such VMs.
602 :numref:`figure_inter_vm_comms` shows an example.
603 In this case an update of the MAC address lookup tables in both the NIC and host DPDK application is required.
In the NIC, the destination MAC address of a VM that sits behind another device is written to the PF-specific pool.
So when a packet comes in, its destination MAC address will match and the packet will be forwarded to the host DPDK PMD application.
608 In the host DPDK application, the behavior is similar to L2 forwarding,
609 that is, the packet is forwarded to the correct PF pool.
610 The SR-IOV NIC switch forwards the packet to a specific VM according to the MAC destination address
611 which belongs to the destination VF on the VM.
613 .. _figure_inter_vm_comms:
615 .. figure:: img/inter_vm_comms.*
617 Inter-VM Communication