..  BSD LICENSE

    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.

    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
I40E/IXGBE/IGB Virtual Function Driver
======================================

Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
support the following modes of operation in a virtualized environment:

*   **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
    using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard,
    also known as "native mode" or "pass-through" mode.
    In this chapter, this mode is referred to as IOV mode.

*   **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM) or
    a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
    In this chapter, this mode is referred to as the Next Generation VMDq mode.
SR-IOV Mode Utilization in a DPDK Environment
---------------------------------------------

The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, it is possible to partition SR-IOV capability on Ethernet controller NIC resources logically and
expose them to a virtual machine as a separate PCI function called a "Virtual Function".
Refer to :numref:`figure_single_port_nic`.

Therefore, a NIC is logically distributed among multiple virtual machines (as shown in :numref:`figure_single_port_nic`),
while still having global data in common to share with the Physical Function and other Virtual Functions.
The DPDK fm10kvf, i40evf, igbvf or ixgbevf Poll Mode Driver (PMD) serves the virtual PCI function of the
Intel® 82576 Gigabit Ethernet Controller, the Intel® Ethernet Controller I350 family,
the Intel® 82599 10 Gigabit Ethernet Controller, the Intel® Fortville 10/40 Gigabit Ethernet Controller,
or the PCIe host-interface of the Intel Ethernet Switch FM10000 Series.
Meanwhile the DPDK Poll Mode Driver (PMD) also supports the "Physical Function" of such NICs on the host.

The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on the Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller,
and Intel® Fortville 10/40 Gigabit Ethernet Controller NICs, so that guests can choose it for inter-virtual-machine traffic in SR-IOV mode.
For more detail on SR-IOV, please refer to the following documents:

*   `SR-IOV provides hardware based I/O sharing <http://www.intel.com/network/connectivity/solutions/vmdc.htm>`_

*   `PCI-SIG-Single Root I/O Virtualization Support on IA
    <http://www.intel.com/content/www/us/en/pci-express/pci-sig-single-root-io-virtualization-support-in-virtualization-technology-for-connectivity-paper.html>`_

*   `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_

.. _figure_single_port_nic:

.. figure:: img/single_port_nic.*

   Virtualization for a Single Port NIC in SR-IOV Mode
Physical and Virtual Function Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following describes the Physical Function and Virtual Functions infrastructure for the supported Ethernet Controller NICs.

Virtual Functions operate under the respective Physical Function on the same NIC Port and therefore have no access
to the global NIC resources that are shared between other functions for the same NIC port.

A Virtual Function has basic access to the queue resources and control structures of the queues assigned to it.
For global resource access, a Virtual Function has to send a request to the Physical Function for that port,
and the Physical Function operates on the global resources on behalf of the Virtual Function.
For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
which is called a "Mailbox".
The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *64 Virtual Functions (VF)*
globally per PCIE host-interface of the Intel Ethernet Switch FM10000 Series device.
Each VF can have a maximum of 16 queue pairs.
The Physical Function in the host can only be configured by the Linux* fm10k driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]); the DPDK PMD PF driver does not support it yet.

For example,

*   Using Linux* fm10k driver:

    .. code-block:: console

        rmmod fm10k (To remove the fm10k module)
        insmod fm10k.ko max_vfs=2,2 (To enable two Virtual Functions per port)

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.
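To check which Virtual Function belongs to which Physical Function, the ``virtfn*`` links under the PF's sysfs
node can be listed, as is also done later in this chapter. A minimal sketch, assuming a hypothetical Physical
Function at PCI address ``0000:01:00.0``:

.. code-block:: console

    ls -l /sys/bus/pci/devices/0000\:01\:00.0/virtfn*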
Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
globally per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
The number of queue pairs of each VF can be configured by ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` in the ``config`` file.
The Physical Function in the host can be configured either by the Linux* i40e driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by the DPDK based application.
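For example, the per-VF queue number might be set in the DPDK build-time configuration before compiling
(a sketch; the exact configuration file, such as ``config/common_base``, depends on the DPDK version, and
4 is the default value):

.. code-block:: console

    CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4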
For example,

*   Using Linux* i40e driver:

    .. code-block:: console

        rmmod i40e (To remove the i40e module)
        insmod i40e.ko max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using the DPDK PMD PF i40e driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)

    Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
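    As an illustration, the host application could be the testpmd binary built for the host target, started
    in interactive mode (a sketch; the build path, core list and memory channel count are assumptions to be
    adapted to your system):

    .. code-block:: console

        ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i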
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.

For the Intel® X710/XL710 Gigabit Ethernet Controller, queues are in pairs. One queue pair means one receive queue and
one transmit queue. The default number of queue pairs per VF is 4, and it can be at most 16.
Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The programmer can enable a maximum of *63 Virtual Functions* and there must be *one Physical Function* per Intel® 82599
10 Gigabit Ethernet Controller NIC port.
The reason for this is that the device allows for a maximum of 128 queues per port and a virtual/physical function has to
have at least one queue pair (RX/TX).
The current implementation of the DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
The Physical Function in the host can be configured either by the Linux* ixgbe driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by the DPDK based application.

For example,

*   Using Linux* ixgbe driver:

    .. code-block:: console

        rmmod ixgbe (To remove the ixgbe module)
        insmod ixgbe max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using the DPDK PMD PF ixgbe driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)

    Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:

    Follow the same steps as above to install the uio and igb_uio modules, specify max_vfs for the PCI device, and
    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.

    The available number of queues per VF (at most 4) depends on the total number of pools, which is
    determined by the maximum number of VFs at the PF initialization stage and the number of queues
    specified in the configuration:

    *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:

        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are 32
        pools in total (ETH_32_POOLS), and each VF can have 4 Rx queues;

        If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are 32
        pools in total (ETH_32_POOLS), and each VF can have 2 Rx queues;

    *   If the max number of VFs (max_vfs) is in the range of 33 to 64:

        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected,
        as ``rxq`` is not valid in this case;

        If the number of Rx queues is 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (ETH_64_POOLS),
        and each VF can have 2 Rx queues;
    On the host, to enable VF RSS functionality, the Rx mq mode should be set to ETH_MQ_RX_VMDQ_RSS
    or ETH_MQ_RX_RSS mode, and SR-IOV mode should be activated (max_vfs >= 1).
    It is also necessary to configure the VF RSS information, such as the hash function, RSS key and
    RSS key length (a usage sketch is given below).

.. note::

    The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is:
    the hash and key are shared among the PF and all VFs, and the RETA table with 128 entries is also shared
    among the PF and all VFs. Consequently, it is not possible to query the hash and RETA content per
    VF on the guest; if required, query them on the host for the shared RETA information.
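As an illustration of the pool/queue arithmetic above, the PF testpmd on the host could be started with four
Rx/Tx queues per pool, and a guest could enable RSS over its VF queues (a sketch; the core lists, memory
channel counts and the ``--rss-ip`` hash selection are assumptions, and max_vfs is assumed to have been set
to 32 or fewer so that each VF gets 4 queue pairs):

.. code-block:: console

    # Host: PF bound to igb_uio, 32 pools, 4 queues per pool
    ./testpmd -l 0-3 -n 4 -- -i --rxq=4 --txq=4

    # Guest: VF PMD with RSS spreading traffic over its 4 queue pairs
    ./testpmd -l 0-1 -n 4 -- -i --rxq=4 --txq=4 --rss-ip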
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.
Intel® 82576 Gigabit Ethernet Controller and Intel® Ethernet Controller I350 Family VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, an Intel® 82576 Gigabit Ethernet Controller serves up to eight virtual machines (VMs).
The controller has 16 TX and 16 RX queues.
They are generally referred to (or thought of) as queue pairs (one TX and one RX queue).
This gives the controller 16 queue pairs.

A pool is a group of queue pairs for assignment to the same VF, used for transmit and receive operations.
The controller has eight pools, with each pool containing two queue pairs, that is, two TX and two RX queues assigned to each VF.

In a virtualized environment, an Intel® Ethernet Controller I350 family device serves up to eight virtual machines (VMs) per port.
The eight queues can be accessed by eight different VMs if configured correctly (the I350 has 4x1GbE ports, each with 8 TX and 8 RX queues),
which means one Transmit and one Receive queue assigned to each VF.
For example,

*   Using Linux* igb driver:

    .. code-block:: console

        rmmod igb (To remove the igb module)
        insmod igb max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using the DPDK PMD PF igb driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific pci device)

    Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a four-port NIC.
When you enable the eight Virtual Functions with the above command, the eight enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence, starting from 0 to 7.

*   Virtual Functions 0 and 4 belong to Physical Function 0

*   Virtual Functions 1 and 5 belong to Physical Function 1

*   Virtual Functions 2 and 6 belong to Physical Function 2

*   Virtual Functions 3 and 7 belong to Physical Function 3

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.
Validated Hypervisors
~~~~~~~~~~~~~~~~~~~~~

The validated hypervisor is:

*   KVM (Kernel Virtual Machine) with Qemu, version 0.14.0

However, since the hypervisor is bypassed when the Virtual Function devices are configured through the Mailbox interface,
the solution is hypervisor-agnostic.
Xen* and VMware* (when SR-IOV is supported) will also be able to support the DPDK with Virtual Function driver support.

Expected Guest Operating System in Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expected guest operating systems in a virtualized environment are:

*   Fedora* 14 (64-bit)

*   Ubuntu* 10.04 (64-bit)

For supported kernel versions, refer to the *DPDK Release Notes*.
Setting Up a KVM Virtual Machine Monitor
----------------------------------------

The following describes a target environment:

*   Host Operating System: Fedora 14

*   Hypervisor: KVM (Kernel Virtual Machine) with Qemu version 0.14.0

*   Guest Operating System: Fedora 14

*   Linux Kernel Version: Refer to the *DPDK Getting Started Guide*

*   Target Applications: l2fwd, l3fwd-vf

The setup procedure is as follows:

#.  Before booting the Host OS, open **BIOS setup** and enable **Intel® VT features**.

#.  While booting the Host OS kernel, pass the ``intel_iommu=on`` kernel command line argument using GRUB.
    When using the DPDK PF driver on the host, also pass the ``iommu=pt`` kernel command line argument in GRUB.
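    For example, on a host that uses GRUB 2, the arguments might be added to the kernel command line as
    follows (a sketch; file locations and the regeneration command vary by distribution, and on a legacy
    GRUB host the arguments are appended to the ``kernel`` line of ``grub.conf`` instead):

    .. code-block:: console

        # In /etc/default/grub
        GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"

        # Then regenerate the GRUB configuration
        grub2-mkconfig -o /boot/grub2/grub.cfg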
#.  Download qemu-kvm-0.14.0 from
    `http://sourceforge.net/projects/kvm/files/qemu-kvm/ <http://sourceforge.net/projects/kvm/files/qemu-kvm/>`_
    and install it in the Host OS using the following steps:

    When using a recent kernel (2.6.25+) with kvm modules included:

    .. code-block:: console

        tar xzf qemu-kvm-release.tar.gz
        cd qemu-kvm-release
        ./configure --prefix=/usr/local/kvm
        make
        sudo make install
        sudo /sbin/modprobe kvm-intel

    When using an older kernel, or a kernel from a distribution without the kvm modules,
    you must download (from the same link), compile and install the modules yourself:

    .. code-block:: console

        tar xjf kvm-kmod-release.tar.bz2
        cd kvm-kmod-release
        ./configure
        make
        sudo make install
        sudo /sbin/modprobe kvm-intel

    qemu-kvm installs in the /usr/local/kvm/bin directory.

    For more details about KVM configuration and usage, please refer to:

    `http://www.linux-kvm.org/page/HOWTO1 <http://www.linux-kvm.org/page/HOWTO1>`_.

#.  Create a Virtual Machine and install Fedora 14 on the Virtual Machine.
    This is referred to as the Guest Operating System (Guest OS).
#.  Download and install the latest ixgbe driver from:

    `http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687 <http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687>`_

#.  In the Host OS

    When using the Linux kernel ixgbe driver, unload the Linux ixgbe driver and reload it with the ``max_vfs=2,2`` argument:

    .. code-block:: console

        rmmod ixgbe
        modprobe ixgbe max_vfs=2,2

    When using the DPDK PMD PF driver, insert the DPDK kernel module igb_uio and set the number of VFs via the sysfs max_vfs attribute:

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
        echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs

    .. note::

        You need to explicitly specify the number of VFs for each port; for example, in the command above,
        two VFs are created for the first two ixgbe ports.

    Let us say we have a machine with four physical ixgbe ports:
    0000:02:00.0, 0000:02:00.1, 0000:0e:00.0 and 0000:0e:00.1.

    The command above creates two VFs for device 0000:02:00.0:
    .. code-block:: console

        ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
        lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
        lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0

    It also creates two VFs for device 0000:02:00.1:

    .. code-block:: console

        ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
        lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
        lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1

#.  List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
    and four Virtual Functions (two for each port).
    This is the result of the previous step.
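    For example, the Ethernet functions can be listed with lspci (a sketch; the device strings in the
    output depend on the NIC model and driver version):

    .. code-block:: console

        lspci | grep -i ethernet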
#.  Insert the pci_stub module to hold the PCI devices that are freed from the default driver using the following command
    (see http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM Section 4 for more information):

    .. code-block:: console

        sudo /sbin/modprobe pci-stub

    Unbind the default driver from the PCI devices representing the Virtual Functions.
    A script to perform this action is as follows:

    .. code-block:: console

        echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
        echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
        echo 0000:08:10.0 > /sys/bus/pci/drivers/pci-stub/bind

    where 0000:08:10.0 is the address of the Virtual Function visible in the Host OS.
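    If several Virtual Functions are to be assigned, the same unbind/bind sequence can be repeated for each
    of them. A minimal sketch, assuming a hypothetical Physical Function at ``0000:08:00.0`` whose
    ``virtfn*`` links point to the Virtual Functions being reassigned:

    .. code-block:: console

        for vf in /sys/bus/pci/devices/0000\:08\:00.0/virtfn*; do
            bdf=$(basename $(readlink $vf))
            echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind
            echo $bdf > /sys/bus/pci/drivers/pci-stub/bind
        done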
#.  Now, start the Virtual Machine by running the following command:

    .. code-block:: console

        /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

    where:

    *   ``-m`` = memory to assign

    *   ``-smp`` = number of smp cores

    *   ``-boot`` = boot option

    *   ``-hda`` = virtual disk image

    *   ``-device`` = device to attach

    .. note::

        *   The ``pci-assign,host=08:10.0`` value indicates that you want to attach a PCI device
            to a Virtual Machine and the respective (Bus:Device.Function)
            numbers should be passed for the Virtual Function to be attached.

        *   qemu-kvm-0.14.0 allows a maximum of four PCI devices assigned to a VM,
            but this is qemu-kvm version dependent since qemu-kvm-0.14.1 allows a maximum of five PCI devices.

        *   qemu-system-x86_64 also has a ``-cpu`` command line option that is used to select the cpu_model
            to emulate in a Virtual Machine. Therefore, it can be used as:

            .. code-block:: console

                /usr/local/kvm/bin/qemu-system-x86_64 -cpu ?

                (to list all available cpu_models)

                /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -cpu host -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

                (to use the same cpu_model equivalent to the host cpu)

    For more information, please refer to: `http://wiki.qemu.org/Features/CPUModels <http://wiki.qemu.org/Features/CPUModels>`_.
#.  Install and run the DPDK host app to take over the Physical Function. For example:

    .. code-block:: console

        make install T=x86_64-native-linuxapp-gcc
        ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i

#.  Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
    The Virtual Functions will be listed as available for use.
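    For example, a Virtual Function assigned from an 82599 port typically appears in the guest as an
    "82599 Ethernet Controller Virtual Function" device (a sketch; the exact device string depends on the
    NIC model):

    .. code-block:: console

        lspci | grep -i "virtual function"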
#.  Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
    that is, there is no change to the normal installation procedure.

    .. code-block:: console

        make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
        cd x86_64-native-linuxapp-gcc
        make

.. note::

    If you are unable to compile the DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
    power off the Guest OS and start the virtual machine with the correct -cpu option in the qemu-system-x86_64 command as shown in step 9.
    You must select the best x86_64 cpu_model to emulate, or you can select the host option if available.

.. note::

    Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
    For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
    you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
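For illustration, hugepages can be reserved in the Guest OS and the l2fwd sample run over the two assigned
Virtual Functions roughly as follows (a minimal sketch; the hugepage count, mount point, core list and port
mask are assumptions to be adapted to the guest):

.. code-block:: console

    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge
    ./examples/l2fwd/build/l2fwd -l 0-1 -n 4 -- -p 0x3 -q 1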
.. note::

    *   The Virtual Machine Manager (the Fedora package name is virt-manager) is a utility for virtual machine management
        that can also be used to create, start, stop and delete virtual machines.
        If this option is used, steps 2 and 6 in the instructions provided will be different.

    *   virsh, a command line utility for virtual machine management,
        can also be used to bind and unbind devices to a virtual machine in Ubuntu.
        If this option is used, step 6 in the instructions provided will be different.

    *   The Virtual Machine Monitor (see :numref:`figure_perf_benchmark`) is equivalent to a Host OS with KVM installed as described in the instructions.

.. _figure_perf_benchmark:

.. figure:: img/perf_benchmark.*

   Performance Benchmark Setup
DPDK SR-IOV PMD PF/VF Driver Usage Model
----------------------------------------

Fast Host-based Packet Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
the DPDK VF PMD driver delivers the same throughput as in a non-VT native environment.

With such fast packet processing in the host instance, many services such as filtering, QoS and
DPI can be offloaded onto the host fast path.

:numref:`figure_fast_pkt_proc` shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.

.. _figure_fast_pkt_proc:

.. figure:: img/fast_pkt_proc.*

   Fast Host-based Packet Processing
SR-IOV (PF/VF) Approach for Inter-VM Communication
--------------------------------------------------

Inter-VM data communication is one of the traffic bottlenecks in virtualization platforms.
SR-IOV device assignment helps a VM to attach the real device, taking advantage of the bridge in the NIC.
So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware acceleration.
However, when a VF crosses physical ports (VM0<->VM2), there is no such hardware bridge.
In this case, the DPDK PMD PF driver provides host forwarding between such VMs.

:numref:`figure_inter_vm_comms` shows an example.
In this case an update of the MAC address lookup tables in both the NIC and the host DPDK application is required.

In the NIC, the destination MAC address of a VM that sits behind another device is written to the PF-specific pool.
So when a packet comes in, its destination MAC address matches and the packet is forwarded to the host DPDK PMD application.

In the host DPDK application, the behavior is similar to L2 forwarding,
that is, the packet is forwarded to the correct PF pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the destination MAC address,
which belongs to the destination VF on the VM.

.. _figure_inter_vm_comms:

.. figure:: img/inter_vm_comms.*

   Inter-VM Communication