..  Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
I40E/IXGBE/IGB Virtual Function Driver
======================================

Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
support the following modes of operation in a virtualized environment:

* **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
  using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard,
  also known as "native mode" or "pass-through" mode.
  In this chapter, this mode is referred to as IOV mode.

* **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM) or
  a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
  In this chapter, this mode is referred to as the Next Generation VMDq mode.
SR-IOV Mode Utilization in a DPDK Environment
---------------------------------------------

The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, it is possible to logically partition the resources of an SR-IOV capable Ethernet controller NIC and
expose them to a virtual machine as a separate PCI function called a "Virtual Function".
A NIC is thus logically distributed among multiple virtual machines (as shown in Figure 10),
while still having global data in common to share with the Physical Function and other Virtual Functions.
The DPDK fm10kvf, i40evf, igbvf and ixgbevf Poll Mode Drivers (PMDs) serve the virtual PCI functions of the
Intel® 82576 Gigabit Ethernet Controller, the Intel® Ethernet Controller I350 family,
the Intel® 82599 10 Gigabit Ethernet Controller and the Intel® Fortville 10/40 Gigabit Ethernet Controller,
as well as the PCIe host-interface of the Intel® Ethernet Switch FM10000 Series.
Meanwhile, the DPDK Poll Mode Driver (PMD) also supports the "Physical Function" of such NICs on the host.

The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on the Intel® 82576 Gigabit Ethernet Controller,
the Intel® Ethernet Controller I350 family, the Intel® 82599 10 Gigabit Ethernet Controller
and the Intel® Fortville 10/40 Gigabit Ethernet Controller NICs,
so that guests can use it for inter-virtual machine traffic in SR-IOV mode.
For more detail on SR-IOV, please refer to the following documents:

* `SR-IOV provides hardware based I/O sharing <http://www.intel.com/network/connectivity/solutions/vmdc.htm>`_

* `PCI-SIG-Single Root I/O Virtualization Support on IA
  <http://www.intel.com/content/www/us/en/pci-express/pci-sig-single-root-io-virtualization-support-in-virtualization-technology-for-connectivity-paper.html>`_

* `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_

**Figure 10. Virtualization for a Single Port NIC in SR-IOV Mode**

|single_port_nic|
Physical and Virtual Function Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following describes the Physical Function and Virtual Function infrastructure for the supported Ethernet Controller NICs.

Virtual Functions operate under the respective Physical Function on the same NIC port and therefore have no access
to the global NIC resources that are shared between other functions for the same NIC port.

A Virtual Function has basic access to the queue resources and control structures of the queues assigned to it.
For global resource access, a Virtual Function has to send a request to the Physical Function for that port,
and the Physical Function operates on the global resources on behalf of the Virtual Function.
For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
which is called a "Mailbox".
The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *64 Virtual Functions (VF)*
globally per PCIE host-interface of the Intel Ethernet Switch FM10000 Series device.
Each VF can have a maximum of 16 queue pairs.
The Physical Function in the host can only be configured by the Linux* fm10k driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]); the DPDK PMD PF driver does not support it yet.

* Using the Linux* fm10k driver:

  .. code-block:: console

     rmmod fm10k                       (To remove the fm10k module)
     insmod fm10k.ko max_vfs=2,2       (To enable two Virtual Functions per port)
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#), in sequence starting from 0 to 3:

* Virtual Functions 0 and 2 belong to Physical Function 0

* Virtual Functions 1 and 3 belong to Physical Function 1

The above is an important consideration to take into account when targeting specific packets to a selected port.
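
You can verify which Physical Function a given VF belongs to through sysfs. As a sketch, assuming a VF at
the placeholder address bb:ss.f (substitute the real bus/slot/function values), its physfn link points back
to its parent PF:

.. code-block:: console

   readlink /sys/bus/pci/devices/0000\:bb\:ss.f/physfn    (prints the relative path of the parent PF)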
Intel® Fortville 10/40 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
globally per Intel® Fortville 10/40 Gigabit Ethernet Controller NIC device.
Each VF can have a maximum of 16 queue pairs.
The Physical Function in the host can be configured either by the Linux* i40e driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both the DPDK PMD PF and VF drivers are used, the whole NIC is taken over by the DPDK-based application.
* Using the Linux* i40e driver:

  .. code-block:: console

     rmmod i40e                        (To remove the i40e module)
     insmod i40e.ko max_vfs=2,2        (To enable two Virtual Functions per port)

* Using the DPDK PMD PF i40e driver:

  Kernel Params: iommu=pt, intel_iommu=on

  .. code-block:: console

     modprobe uio
     insmod igb_uio.ko
     ./dpdk_nic_bind.py -b igb_uio bb:ss.f
     echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs    (To enable two VFs on a specific PCI device)

  Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
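
  A minimal way to do this is to run testpmd over the bound PF, mirroring the invocation used later in
  this chapter (the core mask and memory channel count are illustrative and should match your system):

  .. code-block:: console

     ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i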
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#), in sequence starting from 0 to 3:

* Virtual Functions 0 and 2 belong to Physical Function 0

* Virtual Functions 1 and 3 belong to Physical Function 1

The above is an important consideration to take into account when targeting specific packets to a selected port.
Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The programmer can enable a maximum of *63 Virtual Functions* and there must be *one Physical Function* per Intel® 82599
10 Gigabit Ethernet Controller NIC port.
The reason for this is that the device allows for a maximum of 128 queues per port and a virtual/physical function has to
have at least one queue pair (RX/TX): one Physical Function plus 63 Virtual Functions gives 64 functions,
which together consume the 128 queues at one queue pair (two queues) each.
The current implementation of the DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
The Physical Function in the host can be configured either by the Linux* ixgbe driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both the DPDK PMD PF and VF drivers are used, the whole NIC is taken over by the DPDK-based application.
* Using the Linux* ixgbe driver:

  .. code-block:: console

     rmmod ixgbe                       (To remove the ixgbe module)
     insmod ixgbe.ko max_vfs=2,2       (To enable two Virtual Functions per port)

* Using the DPDK PMD PF ixgbe driver:

  Kernel Params: iommu=pt, intel_iommu=on

  .. code-block:: console

     modprobe uio
     insmod igb_uio.ko
     ./dpdk_nic_bind.py -b igb_uio bb:ss.f
     echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs    (To enable two VFs on a specific PCI device)

  Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#), in sequence starting from 0 to 3:

* Virtual Functions 0 and 2 belong to Physical Function 0

* Virtual Functions 1 and 3 belong to Physical Function 1

The above is an important consideration to take into account when targeting specific packets to a selected port.
Intel® 82576 Gigabit Ethernet Controller and Intel® Ethernet Controller I350 Family VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, an Intel® 82576 Gigabit Ethernet Controller serves up to eight virtual machines (VMs).
The controller has 16 TX and 16 RX queues.
They are generally referred to (or thought of) as queue pairs (one TX and one RX queue).
This gives the controller 16 queue pairs.

A pool is a group of queue pairs for assignment to the same VF, used for transmit and receive operations.
The controller has eight pools, with each pool containing two queue pairs, that is, two TX and two RX queues assigned to each VF.
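
Conceptually, the pool-to-queue layout looks like the sketch below. The exact register-level queue
indexing is hardware-defined, so treat this only as an illustration of the 8 pools × 2 queue pairs split:

.. code-block:: console

   Pool 0 -> VF 0: TX queues {0, 1},   RX queues {0, 1}
   Pool 1 -> VF 1: TX queues {2, 3},   RX queues {2, 3}
   ...
   Pool 7 -> VF 7: TX queues {14, 15}, RX queues {14, 15}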
In a virtualized environment, an Intel® Ethernet Controller I350 family device serves up to eight virtual machines (VMs) per port.
The eight queues can be accessed by eight different VMs if configured correctly (the I350 has four 1GbE ports, each with 8 TX and 8 RX queues);
that means one transmit and one receive queue assigned to each VF.
* Using the Linux* igb driver:

  .. code-block:: console

     rmmod igb                         (To remove the igb module)
     insmod igb.ko max_vfs=2,2         (To enable two Virtual Functions per port)

* Using the DPDK PMD PF igb driver:

  Kernel Params: iommu=pt, intel_iommu=on

  .. code-block:: console

     modprobe uio
     insmod igb_uio.ko
     ./dpdk_nic_bind.py -b igb_uio bb:ss.f
     echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs    (To enable two VFs on a specific PCI device)

  Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a four-port NIC.
When you enable the eight Virtual Functions with the above command (two per port), the eight enabled functions have a Function#
represented by (Bus#, Device#, Function#), in sequence starting from 0 to 7:

* Virtual Functions 0 and 4 belong to Physical Function 0

* Virtual Functions 1 and 5 belong to Physical Function 1

* Virtual Functions 2 and 6 belong to Physical Function 2

* Virtual Functions 3 and 7 belong to Physical Function 3

The above is an important consideration to take into account when targeting specific packets to a selected port.
Validated Hypervisors
~~~~~~~~~~~~~~~~~~~~~

The validated hypervisor is:

* KVM (Kernel Virtual Machine) with Qemu, version 0.14.0

However, because the hypervisor is bypassed when the Virtual Function devices are configured through the Mailbox interface,
the solution is hypervisor-agnostic.
Xen* and VMware* (when SR-IOV is supported) will also be able to support the DPDK with Virtual Function driver support.
Expected Guest Operating System in Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expected guest operating systems in a virtualized environment are:

* Fedora* 14 (64-bit)

* Ubuntu* 10.04 (64-bit)

For supported kernel versions, refer to the *DPDK Release Notes*.

Setting Up a KVM Virtual Machine Monitor
----------------------------------------

The following describes a target environment:

* Host Operating System: Fedora 14

* Hypervisor: KVM (Kernel Virtual Machine) with Qemu version 0.14.0

* Guest Operating System: Fedora 14

* Linux Kernel Version: Refer to the *DPDK Getting Started Guide*

* Target Applications: l2fwd, l3fwd-vf

The setup procedure is as follows:
#. Before booting the Host OS, open **BIOS setup** and enable **Intel® VT features**.

#. While booting the Host OS kernel, pass the intel_iommu=on kernel command line argument using GRUB.
   When using the DPDK PF driver on the host, also pass the iommu=pt kernel command line argument in GRUB.
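
   For example, on a GRUB 2 based distribution this could be done by appending the arguments in
   /etc/default/grub and regenerating the configuration (the file path and update command vary
   by distribution and GRUB version):

   .. code-block:: console

      GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
      grub2-mkconfig -o /boot/grub2/grub.cfg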
#. Download qemu-kvm-0.14.0 from
   `http://sourceforge.net/projects/kvm/files/qemu-kvm/ <http://sourceforge.net/projects/kvm/files/qemu-kvm/>`_
   and install it in the Host OS using the following steps:

   When using a recent kernel (2.6.25+) with kvm modules included:

   .. code-block:: console

      tar xzf qemu-kvm-release.tar.gz
      cd qemu-kvm-release
      ./configure --prefix=/usr/local/kvm
      make
      sudo make install
      sudo /sbin/modprobe kvm-intel

   When using an older kernel, or a kernel from a distribution without the kvm modules,
   you must download (from the same link), compile and install the modules yourself:

   .. code-block:: console

      tar xjf kvm-kmod-release.tar.bz2
      cd kvm-kmod-release
      ./configure
      make
      sudo make install
      sudo /sbin/modprobe kvm-intel

   With the --prefix option shown above, qemu-kvm installs in the /usr/local/kvm/bin directory.
   For more details about KVM configuration and usage, please refer to:

   `http://www.linux-kvm.org/page/HOWTO1 <http://www.linux-kvm.org/page/HOWTO1>`_.

#. Create a Virtual Machine and install Fedora 14 on the Virtual Machine.
   This is referred to as the Guest Operating System (Guest OS).

#. Download and install the latest ixgbe driver from:

   `http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687 <http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687>`_
   When using the Linux kernel ixgbe driver, unload the Linux ixgbe driver and reload it with the max_vfs=2,2 argument:

   .. code-block:: console

      rmmod ixgbe
      modprobe ixgbe max_vfs=2,2

   When using the DPDK PMD PF driver, insert the DPDK kernel module igb_uio and set the number of VFs through sysfs max_vfs:

   .. code-block:: console

      modprobe uio
      insmod igb_uio.ko
      ./dpdk_nic_bind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
      echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
      echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
      echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
      echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs

   .. note::

      You need to explicitly specify the number of VFs for each port; the commands above create two VFs
      for each of the four ixgbe ports.

   Let's say we have a machine with four physical ixgbe ports:

   * 0000:02:00.0

   * 0000:02:00.1

   * 0000:0e:00.0

   * 0000:0e:00.1
   The command above creates two vfs for device 0000:02:00.0:

   .. code-block:: console

      ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
      lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
      lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0

   It also creates two vfs for device 0000:02:00.1:

   .. code-block:: console

      ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
      lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
      lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1
#. List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
   and four Virtual Functions (two for each port).
   This is the result of the previous step.
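
   For example, the Virtual Functions can be picked out of the lspci output like this
   (the exact device strings and addresses depend on your hardware):

   .. code-block:: console

      lspci | grep -i "virtual function"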
#. Insert the pci_stub module to hold the PCI devices that are freed from the default driver using the following command
   (see http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM Section 4 for more information):

   .. code-block:: console

      sudo /sbin/modprobe pci-stub

   Unbind the default driver from the PCI devices representing the Virtual Functions.
   A script to perform this action is as follows:

   .. code-block:: console

      echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
      echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
      echo 0000:08:10.0 > /sys/bus/pci/drivers/pci-stub/bind

   where 0000:08:10.0 is the address of the Virtual Function visible in the Host OS,
   and "8086 10ed" is its PCI vendor and device ID (here, an Intel® 82599 Virtual Function).
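
   To confirm that the unbind/bind took effect, you can list the devices now owned by pci-stub
   (using the example VF address from above, it should appear as a symlink here):

   .. code-block:: console

      ls /sys/bus/pci/drivers/pci-stub/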
#. Now, start the Virtual Machine by running the following command:

   .. code-block:: console

      /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

   where:

   * -m = memory to assign

   * -smp = number of smp cores

   * -boot = boot option

   * -hda = virtual disk image

   * -device = device to attach

   .. note::

      * The pci-assign,host=08:10.0 value indicates that you want to attach a PCI device
        to a Virtual Machine and the respective (Bus:Device.Function)
        numbers should be passed for the Virtual Function to be attached.

      * qemu-kvm-0.14.0 allows a maximum of four PCI devices assigned to a VM,
        but this is qemu-kvm version dependent since qemu-kvm-0.14.1 allows a maximum of five PCI devices.

      * qemu-system-x86_64 also has a -cpu command line option that is used to select the cpu_model
        to emulate in a Virtual Machine. Therefore, it can be used as:

        .. code-block:: console

           /usr/local/kvm/bin/qemu-system-x86_64 -cpu ?

           (to list all available cpu_models)

           /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -cpu host -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

           (to use the same cpu_model equivalent to the host cpu)

        For more information, please refer to: `http://wiki.qemu.org/Features/CPUModels <http://wiki.qemu.org/Features/CPUModels>`_.
#. Install and run the DPDK host application to take over the Physical Function, for example:

   .. code-block:: console

      make install T=x86_64-native-linuxapp-gcc
      ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
#. Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
   The virtual functions will be listed as available for use.
#. Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
   that is, there is no change to the normal installation procedure.

   .. code-block:: console

      make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
      cd x86_64-native-linuxapp-gcc
      make

   .. note::

      If you are unable to compile the DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
      power off the Guest OS and start the virtual machine with the correct -cpu option in the qemu-system-x86_64 command as shown in step 9.
      You must select the best x86_64 cpu_model to emulate, or you can select the host option if available.
#. Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
   For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
   you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
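
   As a sketch of the pinning step, assuming a QEMU vCPU thread with the hypothetical PID 12345
   (find the real PIDs through ps or your hypervisor's monitor):

   .. code-block:: console

      taskset -p 0x1 12345    (pin the vCPU thread with PID 12345 to host core 0)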
.. note::

   * The Virtual Machine Manager (the Fedora package name is virt-manager) is a utility for virtual machine management
     that can also be used to create, start, stop and delete virtual machines.
     If this option is used, steps 2 and 6 in the instructions provided will be different.

   * virsh, a command line utility for virtual machine management,
     can also be used to bind and unbind devices to a virtual machine in Ubuntu.
     If this option is used, step 6 in the instructions provided will be different.

   * The Virtual Machine Monitor (see Figure 11) is equivalent to a Host OS with KVM installed as described in the instructions.

**Figure 11. Performance Benchmark Setup**

|perf_benchmark|
DPDK SR-IOV PMD PF/VF Driver Usage Model
----------------------------------------

Fast Host-based Packet Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
the DPDK VF PMD driver delivers the same throughput as in a non-VT native environment.

With such fast packet processing in the host instance, services such as filtering, QoS and
DPI can be offloaded onto the host fast path.

Figure 12 shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
**Figure 12. Fast Host-based Packet Processing**

|fast_pkt_proc|
SR-IOV (PF/VF) Approach for Inter-VM Communication
--------------------------------------------------

Inter-VM data communication is one of the traffic bottlenecks in virtualization platforms.
SR-IOV device assignment helps a VM to attach a real device, taking advantage of the bridge in the NIC.
So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware acceleration.
However, when traffic crosses physical ports (VM0<->VM2), there is no such hardware bridge.
In this case, the DPDK PMD PF driver provides host forwarding between such VMs.

Figure 13 shows an example.
In this case, an update of the MAC address lookup tables in both the NIC and the host DPDK application is required.

In the NIC, the destination MAC address belonging to the VM on the other device is written to the PF-specific pool.
So when a packet comes in, its destination MAC address matches and the packet is forwarded to the host DPDK PMD application.

In the host DPDK application, the behavior is similar to L2 forwarding,
that is, the packet is forwarded to the correct PF pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the destination MAC address,
which belongs to the destination VF on the VM.
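
As a rough sketch of the table programming, from the testpmd prompt on the host, a MAC filter can be added
so that frames destined to the other VM's address reach the host application. The port number and MAC address
below are illustrative, and which pool a filter lands in is device-specific:

.. code-block:: console

   testpmd> mac_addr add 0 00:AA:BB:CC:DD:01    (accept frames for the remote VM's MAC on port 0)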
**Figure 13. Inter-VM Communication**

|inter_vm_comms|

.. |perf_benchmark| image:: img/perf_benchmark.*

.. |single_port_nic| image:: img/single_port_nic.*

.. |inter_vm_comms| image:: img/inter_vm_comms.*

.. |fast_pkt_proc| image:: img/fast_pkt_proc.*