Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in
  the documentation and/or other materials provided with the
  distribution.

* Neither the name of Intel Corporation nor the names of its
  contributors may be used to endorse or promote products derived
  from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Vhost Sample Application
========================
The vhost sample application demonstrates integration of the Data Plane Development Kit (DPDK)
with the Linux* KVM hypervisor by implementing the vhost-net offload API.
The sample application performs simple packet switching between virtual machines based on Media Access Control
(MAC) address or Virtual Local Area Network (VLAN) tag.
The splitting of Ethernet traffic from an external switch is performed in hardware by the Virtual Machine Device Queues
(VMDQ) and Data Center Bridging (DCB) features of the Intel® 82599 10 Gigabit Ethernet Controller.
Background
----------

Virtio networking (virtio-net) was developed as the Linux* KVM para-virtualized method for communicating network packets
between host and guest.
It was found that virtio-net performance was poor due to context switching and packet copying between host, guest, and QEMU.
The following figure shows the system architecture for virtio-based networking (virtio-net).

**Figure 16. QEMU Virtio-net (prior to vhost-net)**

|qemu_virtio_net|
The Linux* Kernel vhost-net module was developed as an offload mechanism for virtio-net.
The vhost-net module enables KVM (QEMU) to offload the servicing of virtio-net devices to the vhost-net kernel module,
reducing the context switching and packet copies in the virtual dataplane.

This is achieved by QEMU sharing the following information with the vhost-net module through the vhost-net API:

* The layout of the guest memory space, to enable the vhost-net module to translate addresses.

* The locations of virtual queues in QEMU virtual address space,
  to enable the vhost module to read/write directly to and from the virtqueues.

* An event file descriptor (eventfd) configured in KVM to send interrupts to the virtio-net device driver in the guest.
  This enables the vhost-net module to notify (call) the guest.

* An eventfd configured in KVM to be triggered on writes to the virtio-net device's
  Peripheral Component Interconnect (PCI) config space.
  This enables the vhost-net module to receive notifications (kicks) from the guest.
The following figure shows the system architecture for virtio-net networking with vhost-net offload.

**Figure 17. Virtio with Linux* Kernel Vhost**

|virtio_linux_vhost|
Sample Code Overview
--------------------

The DPDK vhost-net sample code demonstrates KVM (QEMU) offloading the servicing of a Virtual Machine's (VM's)
virtio-net devices to a DPDK-based application in place of the kernel's vhost-net module.

The DPDK vhost-net sample code is based on the vhost library,
which is designed to allow a user space Ethernet switch to easily integrate vhost functionality.
The vhost library implements the following features:

* Management of virtio-net device creation/destruction events.

* Mapping of the VM's physical memory into the DPDK vhost-net's address space.

* Triggering/receiving notifications to/from VMs via eventfds.

* A virtio-net back-end implementation providing a subset of virtio-net features.
There are two vhost implementations in the vhost library: vhost cuse and vhost user. In vhost cuse, a character device driver is implemented to
receive and process vhost requests through ioctl messages. In vhost user, a socket server is created to receive vhost requests through
socket messages. Most of the messages share the same handler routine.

**Any vhost cuse specific requirement in the following sections will be emphasized.**

The two implementations are turned on and off statically through the configuration file.
Only one implementation can be turned on at a time; they do not co-exist in the current implementation.
The vhost sample code application is a simple packet switching application with the following feature:

* Packet switching between virtio-net devices and the network interface card,
  including using VMDQs to reduce the switching that needs to be performed in software.

The following figure shows the architecture of the Vhost sample application based on vhost-cuse.
**Figure 18. Vhost-net Architectural Overview**

|vhost_net_arch|
The following figure shows the flow of packets through the vhost-net sample application.

**Figure 19. Packet Flow Through the vhost-net Sample Application**

|vhost_net_sample_app|
Supported Distributions
-----------------------

The examples in this section have been validated with the following distributions:

* Fedora* 18

* Fedora* 19

* Fedora* 20
Prerequisites
-------------

This section lists prerequisite packages that must be installed.
Installing Packages on the Host (vhost cuse required)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The vhost cuse code uses the following packages: fuse, fuse-devel, and kernel-modules-extra.
The vhost user code does not rely on those modules, as eventfds are already installed into the vhost process through
the Unix domain socket.
#. Install Fuse Development Libraries and headers:

   .. code-block:: console

       yum -y install fuse fuse-devel

#. Install the Cuse Kernel Module:

   .. code-block:: console

       yum -y install kernel-modules-extra
QEMU simulator
--------------

For vhost user, QEMU 2.2 is required.

Setting up the Execution Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The vhost sample code requires that QEMU allocates a VM's memory on the hugetlbfs file system.
As the vhost sample code requires hugepages,
the best practice is to partition the system into separate hugepage mount points for the VMs and the vhost sample code.

.. note::

    This is best practice only and is not mandatory.
    For systems that only support 2 MB page sizes,
    both QEMU and vhost sample code can use the same hugetlbfs mount point without issue.
**QEMU/VM:**
VMs with gigabytes of memory can benefit from having QEMU allocate their memory from 1 GB huge pages.
1 GB huge pages must be allocated at boot time by passing kernel parameters through the grub boot loader.

#. Calculate the maximum memory usage of all VMs to be run on the system.
   Then, round this value up to the nearest gigabyte; this is the amount of memory
   the execution environment will require.
#. Edit the /etc/default/grub file, and add the following to the GRUB_CMDLINE_LINUX entry:

   .. code-block:: console

       GRUB_CMDLINE_LINUX="... hugepagesz=1G hugepages=<Number of hugepages required> default_hugepagesz=1G"

#. Update the grub boot loader:

   .. code-block:: console

       grub2-mkconfig -o /boot/grub2/grub.cfg

#. Reboot the system.

#. The hugetlbfs mount point (/dev/hugepages) should now default to allocating gigabyte pages.
.. note::

    Making the above modification will change the system default hugepage size to 1 GB for all applications.
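After the reboot, the default hugepage size can be checked from /proc/meminfo.
The output below is what one would expect if the 1 GB default took effect (1 GB = 1048576 kB):

.. code-block:: console

    user@target:~$ grep Hugepagesize /proc/meminfo
    Hugepagesize:    1048576 kB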
**Vhost Sample Code**

In this section, we create a second hugetlbfs mount point to allocate hugepages for the DPDK vhost sample code.
#. Allocate sufficient 2 MB pages for the DPDK vhost sample code:

   .. code-block:: console

       echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Mount hugetlbfs at a separate mount point for 2 MB pages:

   .. code-block:: console

       mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M
The above steps can be automated by doing the following:

#. Edit /etc/fstab to add an entry to automatically mount the second hugetlbfs mount point:

   .. code-block:: console

       hugetlbfs <tab> /mnt/huge <tab> hugetlbfs defaults,pagesize=2M 0 0

#. Edit the /etc/default/grub file, and add the following to the GRUB_CMDLINE_LINUX entry:

   .. code-block:: console

       GRUB_CMDLINE_LINUX="... hugepagesz=2M hugepages=256 ... default_hugepagesz=1G"
#. Update the grub bootloader:

   .. code-block:: console

       grub2-mkconfig -o /boot/grub2/grub.cfg

#. Reboot the system.
.. note::

    Ensure that the default hugepage size after this setup is 1 GB.
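Both mount points can be confirmed once the system is back up.
The device names and mount options shown below are illustrative and will vary by distribution and kernel version:

.. code-block:: console

    user@target:~$ mount | grep hugetlbfs
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=1024M)
    nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)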
Setting up the Guest Execution Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is recommended for testing purposes that the DPDK testpmd sample application is used in the guest to forward packets;
the reasons for this are discussed in Section 22.7, "Running the Virtual Machine (QEMU)".
The testpmd application forwards packets between pairs of Ethernet devices;
it requires an even number of Ethernet devices (virtio or otherwise) to execute.
It is therefore recommended to create multiples of two virtio-net devices for each Virtual Machine either through libvirt or
at the command line as follows.
Observe that in the example, "-device" and "-netdev" are repeated for two virtio-net devices.

**vhost cuse**:

.. code-block:: console

    user@target:~$ qemu-system-x86_64 ... \
    -netdev tap,id=hostnet1,vhost=on,vhostfd=<open fd> \
    -device virtio-net-pci,netdev=hostnet1,id=net1 \
    -netdev tap,id=hostnet2,vhost=on,vhostfd=<open fd> \
    -device virtio-net-pci,netdev=hostnet2,id=net2
**vhost user**:

.. code-block:: console

    user@target:~$ qemu-system-x86_64 ... \
    -chardev socket,id=char1,path=<sock_path> \
    -netdev type=vhost-user,id=hostnet1,chardev=char1 \
    -device virtio-net-pci,netdev=hostnet1,id=net1 \
    -chardev socket,id=char2,path=<sock_path> \
    -netdev type=vhost-user,id=hostnet2,chardev=char2 \
    -device virtio-net-pci,netdev=hostnet2,id=net2

sock_path is the path of the socket file created by vhost.
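Inside the guest, the virtio-net devices created above should be visible on the PCI bus.
A quick way to confirm this (assuming the pciutils package is installed in the guest; slot numbers will vary) is:

.. code-block:: console

    user@guest:~$ lspci | grep Virtio
    00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
    00:04.0 Ethernet controller: Red Hat, Inc Virtio network device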
Compiling the Sample Code
-------------------------

#. Compile the vhost lib:

   To enable vhost, turn on the vhost library in the configure file config/common_linuxapp:

   .. code-block:: console

       CONFIG_RTE_LIBRTE_VHOST=y
   vhost user is turned on by default in lib/librte_vhost/Makefile.
   To enable vhost cuse, uncomment the vhost cuse lines and comment out the vhost user lines manually, as shown below.
   In the future, a configuration option will be provided to switch between the two implementations.

   .. code-block:: console

       SRCS-$(CONFIG_RTE_LIBRTE_VHOST) += vhost_cuse/vhost-net-cdev.c vhost_cuse/virtio-net-cdev.c vhost_cuse/eventfd_copy.c
       #SRCS-$(CONFIG_RTE_LIBRTE_VHOST) += vhost_user/vhost-net-user.c vhost_user/virtio-net-user.c vhost_user/fd_man.c

   After vhost is enabled and the implementation is selected, build the vhost library.
#. Go to the examples directory:

   .. code-block:: console

       export RTE_SDK=/path/to/rte_sdk
       cd ${RTE_SDK}/examples/vhost

#. Set the target (a default target is used if not specified). For example:

   .. code-block:: console

       export RTE_TARGET=x86_64-native-linuxapp-gcc

   See the DPDK Getting Started Guide for possible RTE_TARGET values.
#. Build the application:

   .. code-block:: console

       make
   .. note::

       For zero copy, you first need to disable CONFIG_RTE_MBUF_SCATTER_GATHER,
       CONFIG_RTE_LIBRTE_IP_FRAG and CONFIG_RTE_LIBRTE_DISTRIBUTOR
       in the config file, then re-configure and compile the core lib, and then build the application:

   .. code-block:: console

       vi ${RTE_SDK}/config/common_linuxapp

   Change it as follows:

   .. code-block:: console

       CONFIG_RTE_MBUF_SCATTER_GATHER=n
       CONFIG_RTE_LIBRTE_IP_FRAG=n
       CONFIG_RTE_LIBRTE_DISTRIBUTOR=n
   .. code-block:: console

       cd ${RTE_SDK}
       make config T=${RTE_TARGET}
       make install T=${RTE_TARGET}
       cd ${RTE_SDK}/examples/vhost
       make
#. Go to the eventfd_link directory (vhost cuse required):

   .. code-block:: console

       cd ${RTE_SDK}/lib/librte_vhost/eventfd_link

#. Build the eventfd_link kernel module (vhost cuse required):

   .. code-block:: console

       make
Running the Sample Code
-----------------------

#. Install the cuse kernel module (vhost cuse required):

   .. code-block:: console

       modprobe cuse
#. Go to the eventfd_link directory (vhost cuse required):

   .. code-block:: console

       export RTE_SDK=/path/to/rte_sdk
       cd ${RTE_SDK}/lib/librte_vhost/eventfd_link

#. Install the eventfd_link module (vhost cuse required):

   .. code-block:: console

       insmod ./eventfd_link.ko
#. Go to the examples directory:

   .. code-block:: console

       export RTE_SDK=/path/to/rte_sdk
       cd ${RTE_SDK}/examples/vhost
#. Run the vhost-switch sample code:

   vhost cuse:

   .. code-block:: console

       user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1 --dev-basename usvhost --dev-index 1

   vhost user: a socket file named usvhost will be created under the current directory.
   Use its path as the socket path in the guest's QEMU command line.

   .. code-block:: console

       user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1 --dev-basename usvhost
.. note::

    Please note the huge-dir parameter instructs the DPDK to allocate its memory from the 2 MB page hugetlbfs.
Parameters
~~~~~~~~~~
**Basename and Index.**
vhost cuse uses a Linux* character device to communicate with QEMU.
The basename and the index are used to generate the character device's name:

    /dev/<basename>-<index>

The index parameter is provided for situations where multiple instances of the virtual switch are required.
For compatibility with the QEMU wrapper script, a base name of "usvhost" and an index of "1" should be used:

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1 --dev-basename usvhost --dev-index 1
**vm2vm.**
The vm2vm parameter sets the mode of packet switching between guests in the host.
A value of "0" disables vm2vm, implying that a virtual machine's transmitted packets will always go to the Ethernet port.
A value of "1" means software mode packet forwarding between guests; it requires a packet copy in vhost,
so it is valid only in the one-copy implementation and invalid for the zero copy implementation.
A value of "2" means hardware mode packet forwarding between guests: packets are allowed to go to the Ethernet port,
and the hardware L2 switch determines, based on the packet's destination MAC address and VLAN tag,
which guest a packet should be forwarded to or whether it needs to be sent externally.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --vm2vm [0,1,2]
**Mergeable Buffers.**
The mergeable buffers parameter controls how virtio-net descriptors are used for virtio-net headers.
In a disabled state, one virtio-net header is used per packet buffer;
in an enabled state, one virtio-net header is used for multiple packets.
The default value is 0, or disabled, since recent kernels' virtio-net drivers show performance degradation when this feature is enabled.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --mergeable [0,1]
**Stats.**
The stats parameter controls the printing of virtio-net device statistics.
The parameter specifies the interval, in seconds, at which to print statistics;
an interval of 0 seconds disables statistics.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --stats [0,n]
**RX Retry.**
The rx-retry option enables/disables enqueue retries when the guest's RX queue is full.
This feature resolves packet loss that is observed at high data rates,
by allowing the receive path to delay and retry.
This option is enabled by default.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rx-retry [0,1]
**RX Retry Number.**
The rx-retry-num option specifies the number of retries on an RX burst;
it takes effect only when rx retry is enabled.
The default value is 4.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rx-retry 1 --rx-retry-num 5
**RX Retry Delay Time.**
The rx-retry-delay option specifies the timeout (in microseconds) between retries on an RX burst;
it takes effect only when rx retry is enabled.
The default value is 15.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rx-retry 1 --rx-retry-delay 20
**Zero Copy.**
The zero copy option enables/disables zero copy mode for RX/TX packets.
In zero copy mode, the packet buffer address from the guest is translated into a host physical address
and then set directly as the DMA address.
If zero copy mode is disabled, then one-copy mode is utilized in the sample.
This option is disabled by default.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --zero-copy [0,1]
**RX Descriptor Number.**
The RX descriptor number option specifies the Ethernet RX descriptor number.
The Linux legacy virtio-net driver uses the vring descriptors differently from the DPDK-based virtio-net PMD:
the former is likely to allocate half of them for the virtio headers and the other half for the frame buffers,
while the latter allocates all of them for frame buffers.
This leads to a different number of available frame buffers in the vring,
and therefore to a different Ethernet RX descriptor number that can be used in zero copy mode.
This option is valid only when zero copy mode is enabled. The value is 32 by default.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --zero-copy 1 --rx-desc-num [0, n]
**TX Descriptor Number.**
The TX descriptor number option specifies the Ethernet TX descriptor number. It is valid only when zero copy mode is enabled.
The value is 64 by default.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --zero-copy 1 --tx-desc-num [0, n]
**VLAN Strip.**
The VLAN strip option enables/disables VLAN stripping on the host; if disabled, the guest will receive the packets with the VLAN tag.
It is enabled by default.

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --vlan-strip [0, 1]
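The parameters above can be combined in one invocation.
As an illustrative (not prescriptive) example, the following vhost user command enables software vm2vm forwarding,
five RX retries per burst, and statistics every two seconds:

.. code-block:: console

    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1 --dev-basename usvhost --vm2vm 1 --rx-retry 1 --rx-retry-num 5 --stats 2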
Running the Virtual Machine (QEMU)
----------------------------------

QEMU must be executed with specific parameters to:

* Ensure the guest is configured to use virtio-net network adapters.

  .. code-block:: console

      user@target:~$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=hostnet1,id=net1 ...

* Ensure the guest's virtio-net network adapter is configured with offloads disabled.

  .. code-block:: console

      user@target:~$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

* Redirect QEMU to communicate with the DPDK vhost-net sample code in place of the vhost-net kernel module (vhost cuse).

  .. code-block:: console

      user@target:~$ qemu-system-x86_64 ... -netdev tap,id=hostnet1,vhost=on,vhostfd=<open fd> ...

* Enable the vhost-net sample code to map the VM's memory into its own process address space.

  .. code-block:: console

      user@target:~$ qemu-system-x86_64 ... -mem-prealloc -mem-path /dev/hugepages ...
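Putting these requirements together, a complete vhost cuse invocation might look like the following sketch;
the CPU count, memory size, file descriptor and disk image are placeholders to be replaced for a given setup:

.. code-block:: console

    user@target:~$ qemu-system-x86_64 -machine accel=kvm -cpu host -smp 2 -m 4096 \
        -mem-prealloc -mem-path /dev/hugepages \
        -netdev tap,id=hostnet1,vhost=on,vhostfd=<open fd> \
        -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
        -hda <disk img>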
The QEMU wrapper (qemu-wrap.py) is a Python script designed to automate the QEMU configuration described above.
It also facilitates integration with libvirt, although the script may also be used standalone without libvirt.
Redirecting QEMU to vhost-net Sample Code (vhost cuse)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To redirect QEMU to the vhost-net sample code implementation of the vhost-net API,
an open file descriptor must be passed to QEMU running as a child process.

.. code-block:: python

    import os
    import subprocess

    fd = os.open("/dev/usvhost-1", os.O_RDWR)
    subprocess.call("qemu-system-x86_64 ... -netdev tap,id=vhostnet0,vhost=on,vhostfd=" + str(fd) + " ...", shell=True)
This process is automated in the QEMU wrapper script discussed in Section 24.7.3.
Mapping the Virtual Machine's Memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the DPDK vhost-net sample code to run correctly, QEMU must allocate the VM's memory on hugetlbfs.
This is done by specifying mem-prealloc and mem-path when executing QEMU.
The vhost-net sample code accesses the virtio-net device's virtual rings and packet buffers
by finding and mapping the VM's physical memory on hugetlbfs.
In this case, the path passed to QEMU should be that of the 1 GB page hugetlbfs:

.. code-block:: console

    user@target:~$ qemu-system-x86_64 ... -mem-prealloc -mem-path /dev/hugepages ...
This process is automated in the QEMU wrapper script discussed in Section 24.7.3.
The following two sections apply only to vhost cuse.
For vhost user, please make the corresponding changes to the qemu-wrap.py script and the guest XML file.

QEMU Wrapper Script
~~~~~~~~~~~~~~~~~~~

The QEMU wrapper script automatically detects and calls QEMU with the necessary parameters required
to integrate with the vhost sample code.
It performs the following actions:
* Automatically detects the location of the hugetlbfs and inserts this into the command line parameters.

* Automatically opens file descriptors for each virtio-net device and inserts these into the command line parameters.

* Disables offloads on each virtio-net device.

* Calls QEMU, passing both the command line parameters passed to the script itself and those it has auto-detected.
The QEMU wrapper script will automatically configure calls to QEMU:

.. code-block:: console

    user@target:~$ qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu SandyBridge -smp 4,sockets=4,cores=1,threads=1
    -netdev tap,id=hostnet1,vhost=on -device virtio-net-pci,netdev=hostnet1,id=net1 -hda <disk img> -m 4096

which will become the following call to QEMU:

.. code-block:: console

    /usr/local/bin/qemu-system-x86_64 -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu SandyBridge -smp 4,sockets=4,cores=1,threads=1
    -netdev tap,id=hostnet1,vhost=on,vhostfd=<open fd> -device virtio-net-pci,netdev=hostnet1,id=net1,
    csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -hda <disk img> -m 4096 -mem-path /dev/hugepages -mem-prealloc
Libvirt Integration
~~~~~~~~~~~~~~~~~~~

The QEMU wrapper script (qemu-wrap.py) "wraps" libvirt calls to QEMU,
such that QEMU is called with the correct parameters described above.
To call the QEMU wrapper automatically from libvirt, the following configuration changes must be made:
* Place the QEMU wrapper script in libvirt's binary search PATH ($PATH).
  A good location is in the directory that contains the QEMU binary.

* Ensure that the script has the same owner/group and file permissions as the QEMU binary.

* Update the VM xml file using virsh edit <vm name>:

  * Set the VM to use the launch script.

  * Set the emulator path contained in the <emulator></emulator> tags.
    For example, replace <emulator>/usr/bin/qemu-kvm</emulator> with <emulator>/usr/bin/qemu-wrap.py</emulator>.
660 * Set the VM's virtio-net device's to use vhost-net offload:
664 <interface type="network">
665 <model type="virtio"/>
666 <driver name="vhost"/>
* Enable libvirt to access the DPDK Vhost sample code's character device file by adding it
  to the controllers cgroup for libvirtd using the following steps:

  * In /etc/libvirt/qemu.conf, add/edit the following lines:

    .. code-block:: console

        cgroup_controllers = [ ... "devices", ... ]
        clear_emulator_capabilities = 0
        user = "root"
        group = "root"
        cgroup_device_acl = [
            "/dev/null", "/dev/full", "/dev/zero",
            "/dev/random", "/dev/urandom",
            "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
            "/dev/rtc", "/dev/hpet", "/dev/net/tun",
            "/dev/<devbase-name>-<index>",
        ]
  * Disable SELinux or set it to permissive mode.
  * Mount the cgroup device controller:

    .. code-block:: console

        user@target:~$ mkdir /dev/cgroup
        user@target:~$ mount -t cgroup none /dev/cgroup -o devices
  * Restart the libvirtd system process.
    For example, on Fedora* "systemctl restart libvirtd.service".
* Edit the configuration parameters section of the script:

  * Configure the "emul_path" variable to point to the QEMU emulator.

    .. code-block:: python

        emul_path = "/usr/local/bin/qemu-system-x86_64"

  * Configure the "us_vhost_path" variable to point to the DPDK vhost-net sample code's character device name.
    The DPDK vhost-net sample code's character device will be in the format "/dev/<basename>-<index>".

    .. code-block:: python

        us_vhost_path = "/dev/usvhost-1"
Common Issues
~~~~~~~~~~~~~
**QEMU failing to allocate memory on hugetlbfs.**

.. code-block:: console

    file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

When running QEMU, the above error implies that it has failed to allocate memory for the Virtual Machine on the hugetlbfs.
This is typically due to insufficient hugepages being free to support the allocation request.
The number of free hugepages can be checked as follows:

.. code-block:: console

    user@target:~$ cat /sys/kernel/mm/hugepages/hugepages-<pagesize>/free_hugepages

The command above indicates how many hugepages are free to support QEMU's allocation request.
Running DPDK in the Virtual Machine
-----------------------------------
For the DPDK vhost-net sample code to switch packets into the VM,
the sample code must first learn the MAC address of the VM's virtio-net device.
The sample code detects the address from packets being transmitted from the VM, similar to a learning switch.

This behavior requires no special action or configuration with the Linux* virtio-net driver in the VM
as the Linux* Kernel will automatically transmit packets during device initialization.
However, DPDK-based applications must be modified to automatically transmit packets during initialization
to facilitate the DPDK vhost-net sample code's MAC learning.
The DPDK testpmd application can be configured to automatically transmit packets during initialization
and to act as an L2 forwarding switch.
Testpmd MAC Forwarding
~~~~~~~~~~~~~~~~~~~~~~

At high packet rates, a minor packet loss may be observed.
To resolve this issue, a "wait and retry" mode is implemented in the testpmd and vhost sample code.
In the "wait and retry" mode, if the virtqueue is found to be full, then testpmd waits for a period of time before retrying to enqueue packets.

The "wait and retry" algorithm is implemented in DPDK testpmd as a forwarding method called "mac_retry".
The following sequence diagram describes the algorithm in detail.
**Figure 20. Packet Flow on TX in DPDK-testpmd**

|tx_dpdk_testpmd|
Running Testpmd
~~~~~~~~~~~~~~~
The testpmd application is automatically built when DPDK is installed.
Run the testpmd application as follows:

.. code-block:: console

    user@target:~$ x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 128 -- --burst=64 -i

The destination MAC address for packets transmitted on each port can be set at the command line:

.. code-block:: console

    user@target:~$ x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 128 -- --burst=64 -i --eth-peer=0,aa:bb:cc:dd:ee:ff --eth-peer=1,ff:ee:dd:cc:bb:aa
* Packets received on port 1 will be forwarded on port 0 to MAC address
  aa:bb:cc:dd:ee:ff.

* Packets received on port 0 will be forwarded on port 1 to MAC address
  ff:ee:dd:cc:bb:aa.
The testpmd application can then be configured to act as an L2 forwarding application:

.. code-block:: console

    testpmd> set fwd mac_retry

The testpmd application can then be configured to start processing packets,
transmitting packets first so the DPDK vhost sample code on the host can learn the MAC address:

.. code-block:: console

    testpmd> start tx_first
Please note "set fwd mac_retry" is used in place of "set fwd mac_fwd" to ensure the retry feature is activated.
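Once forwarding has started, the standard testpmd statistics command can be used to confirm that packets
are flowing in both directions:

.. code-block:: console

    testpmd> show port stats all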
Passing Traffic to the Virtual Machine Device
---------------------------------------------

For a virtio-net device to receive traffic,
the traffic's Layer 2 header must include both the virtio-net device's MAC address and VLAN tag.
The DPDK sample code behaves in a similar manner to a learning switch in that
it learns the MAC address of the virtio-net devices from the first transmitted packet.
On learning the MAC address,
the DPDK vhost sample code prints a message with the MAC address and VLAN tag of the virtio-net device.
For example:
.. code-block:: console

    DATA: (0) MAC_ADDRESS cc:bb:bb:bb:bb:bb and VLAN_TAG 1000 registered

The above message indicates that device 0 has been registered with MAC address cc:bb:bb:bb:bb:bb and VLAN tag 1000.
Any packets received on the NIC with these values are placed on the device's receive queue.
When a virtio-net device transmits packets, the VLAN tag is added to the packet by the DPDK vhost sample code.
.. |vhost_net_arch| image:: img/vhost_net_arch.*
.. |qemu_virtio_net| image:: img/qemu_virtio_net.*
.. |tx_dpdk_testpmd| image:: img/tx_dpdk_testpmd.*
.. |vhost_net_sample_app| image:: img/vhost_net_sample_app.*
.. |virtio_linux_vhost| image:: img/virtio_linux_vhost.*