..  Copyright(c) 2016 Red Hat, Inc. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
PVP reference benchmark setup using testpmd
===========================================
This guide lists the steps required to set up a PVP benchmark using testpmd as
a simple forwarder between NICs and Vhost interfaces. The goal of this setup
is to have a reference PVP benchmark without using external vSwitches (OVS,
VPP, ...) to make it easier to obtain reproducible results and to facilitate
continuous integration testing.
The guide covers two ways of launching the VM, either by directly calling the
QEMU command line, or by relying on libvirt. It has been tested with DPDK
v16.11 using RHEL7 for both host and guest.
Setup overview
--------------

.. figure:: img/pvp_2nics.*

   PVP setup using 2 NICs
In this diagram, each red arrow represents one logical core. This use-case
requires 6 dedicated logical cores. A forwarding configuration with a single
NIC is also possible, requiring 3 logical cores.
Host setup
----------

In this setup, we isolate 6 cores (from CPU2 to CPU7) on the same NUMA
node. Two cores are assigned to the VM vCPUs running testpmd and four are
assigned to testpmd on the host.
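To find cores sharing the same NUMA node, the CPU topology can be inspected,
for example with ``lscpu``:

.. code-block:: console

   lscpu | grep -E 'NUMA|Thread|Core'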
Host tuning
~~~~~~~~~~~

#. In the BIOS, disable turbo-boost and hyper-threads.
#. Append these options to the kernel command line:

   .. code-block:: console

      intel_pstate=disable mce=ignore_ce default_hugepagesz=1G hugepagesz=1G hugepages=6 isolcpus=2-7 rcu_nocbs=2-7 nohz_full=2-7 iommu=pt intel_iommu=on
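   After rebooting, the applied options and the allocated hugepages can be
   checked from procfs:

   .. code-block:: console

      cat /proc/cmdline
      grep HugePages_Total /proc/meminfo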
#. Disable hyper-threads at runtime if necessary, or if the BIOS is not
   accessible:

   .. code-block:: console

      cat /sys/devices/system/cpu/cpu*[0-9]/topology/thread_siblings_list \
          | sort | uniq \
          | awk -F, '{system("echo 0 > /sys/devices/system/cpu/cpu"$2"/online")}'
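   Whether hyper-threading is effectively disabled can then be confirmed with
   ``lscpu``, which should report one thread per core:

   .. code-block:: console

      lscpu | grep 'Thread(s) per core'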
#. Disable NMIs:

   .. code-block:: console

      echo 0 > /proc/sys/kernel/nmi_watchdog
#. Exclude isolated CPUs from the writeback cpumask:

   .. code-block:: console

      echo ffffff03 > /sys/bus/workqueue/devices/writeback/cpumask
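   The mask ``ffffff03`` keeps CPU0, CPU1 and CPUs 8 to 31 in the writeback
   cpumask, clearing only the isolated CPUs 2 to 7. For a different isolation
   range, the mask can be derived by clearing the corresponding bits, for
   example:

   .. code-block:: console

      printf '%x\n' $(( 0xffffffff & ~0xfc ))   # prints ffffff03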
#. Isolate CPUs from IRQs:

   .. code-block:: console

      clear_mask=0xfc # Isolate CPU2 to CPU7 from IRQs
      for i in /proc/irq/*/smp_affinity
      do
          echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
      done
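   To verify that no IRQ may still be serviced by the isolated CPUs, the
   remaining affinity masks can be listed (bits 2 to 7 should be clear in all
   of them; some per-IRQ masks may not be writable and can be ignored):

   .. code-block:: console

      sort -u /proc/irq/*/smp_affinity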
Qemu build
~~~~~~~~~~

.. code-block:: console

   git clone git://git.qemu.org/qemu.git
   cd qemu
   mkdir bin
   cd bin
   ../configure --target-list=x86_64-softmmu
   make
DPDK build
~~~~~~~~~~

.. code-block:: console

   git clone git://dpdk.org/dpdk
   cd dpdk
   export RTE_SDK=$PWD
   make install T=x86_64-native-linuxapp-gcc DESTDIR=install
Testpmd launch
~~~~~~~~~~~~~~

#. Assign NICs to DPDK:

   .. code-block:: console

      modprobe vfio-pci
      $RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1

   .. note::

      The Sandy Bridge family seems to have some IOMMU limitations giving poor
      performance results. To achieve good performance on these machines,
      consider using UIO instead.
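   If the NIC PCI addresses are not known beforehand, they can be listed,
   along with their current driver bindings, with the same tool:

   .. code-block:: console

      $RTE_SDK/install/sbin/dpdk-devbind --status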
#. Launch the testpmd application:

   .. code-block:: console

      $RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
          --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
          --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
          --portmask=f --disable-hw-vlan -i --rxq=1 --txq=1 \
          --nb-cores=4 --forward-mode=io
   With this command, isolated CPUs 2 to 5 will be used as lcores for PMD
   threads.
#. In testpmd interactive mode, set the portlist to obtain the correct port
   chaining:

   .. code-block:: console

      set portlist 0,2,1,3
      start
VM launch
---------

The VM may be launched either by calling QEMU directly, or by using libvirt.
QEMU way
~~~~~~~~

Launch QEMU with two Virtio-net devices paired to the vhost-user sockets
created by testpmd. The example below uses default Virtio-net options, but
options may be specified, for example to disable mergeable buffers or
indirect descriptors.

.. code-block:: console
   <QEMU path>/bin/x86_64-softmmu/qemu-system-x86_64 \
       -enable-kvm -cpu host -m 3072 -smp 3 \
       -chardev socket,id=char0,path=/tmp/vhost-user1 \
       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
       -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
       -chardev socket,id=char1,path=/tmp/vhost-user2 \
       -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
       -device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
       -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
       -numa node,memdev=mem -mem-prealloc \
       -net user,hostfwd=tcp::10021-:22 -net nic \
       -qmp unix:/tmp/qmp.socket,server,nowait \
       -monitor stdio <vm_image>.qcow2
You can use this `qmp-vcpu-pin <https://patchwork.kernel.org/patch/9361617/>`_
script to pin vCPUs.

It can be used as follows, for example to pin 3 vCPUs to CPUs 1, 6 and 7,
where isolated CPUs 6 and 7 will be used as lcores for Virtio PMDs:
.. code-block:: console

   export PYTHONPATH=$PYTHONPATH:<QEMU path>/scripts/qmp
   ./qmp-vcpu-pin -s /tmp/qmp.socket 1 6 7
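Alternatively, if the script is not available, the vCPU thread IDs can be
retrieved with the ``info cpus`` command on the QEMU monitor (available on
stdio with the command line above) and pinned manually with ``taskset``. The
thread IDs below are illustrative:

.. code-block:: console

   (qemu) info cpus
   * CPU #0: pc=... thread_id=12345
     CPU #1: pc=... thread_id=12346
     CPU #2: pc=... thread_id=12347

   taskset -pc 1 12345
   taskset -pc 6 12346
   taskset -pc 7 12347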
Libvirt way
~~~~~~~~~~~

Some initial steps are required for libvirt to be able to connect to
testpmd's sockets.
First, the SELinux policy needs to be set to permissive, since testpmd is
generally run as root (note that a reboot is required):

.. code-block:: console
   cat /etc/selinux/config

   # This file controls the state of SELinux on the system.
   # SELINUX= can take one of these three values:
   #     enforcing - SELinux security policy is enforced.
   #     permissive - SELinux prints warnings instead of enforcing.
   #     disabled - No SELinux policy is loaded.
   SELINUX=permissive
   # SELINUXTYPE= can take one of three values:
   #     targeted - Targeted processes are protected,
   #     minimum - Modification of targeted policy.
   #               Only selected processes are protected.
   #     mls - Multi Level Security protection.
   SELINUXTYPE=targeted
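For the current boot, enforcement can also be disabled at runtime, without
waiting for the reboot that applies the configuration file:

.. code-block:: console

   setenforce 0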
Also, Qemu needs to be run as root, which has to be specified in
``/etc/libvirt/qemu.conf``:

.. code-block:: console

   user = "root"
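Then, restart the libvirt daemon for the change to take effect:

.. code-block:: console

   systemctl restart libvirtd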
Once the domain is created, the following snippet is an extract of the most
important information (hugepages, vCPU pinning, Virtio PCI devices):

.. code-block:: xml
   <domain type='kvm'>
     <memory unit='KiB'>3145728</memory>
     <currentMemory unit='KiB'>3145728</currentMemory>
     <memoryBacking>
       <hugepages>
         <page size='1048576' unit='KiB' nodeset='0'/>
       </hugepages>
     </memoryBacking>

     <vcpu placement='static'>3</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='6'/>
       <vcpupin vcpu='2' cpuset='7'/>
       <emulatorpin cpuset='0'/>
     </cputune>

     <numatune>
       <memory mode='strict' nodeset='0'/>
     </numatune>

     <os>
       <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
     </os>

     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='3' threads='1'/>
       <numa>
         <cell id='0' cpus='0-2' memory='3145728' unit='KiB' memAccess='shared'/>
       </numa>
     </cpu>

     <devices>
       <interface type='vhostuser'>
         <mac address='56:48:4f:53:54:01'/>
         <source type='unix' path='/tmp/vhost-user1' mode='client'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='256'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
       </interface>
       <interface type='vhostuser'>
         <mac address='56:48:4f:53:54:02'/>
         <source type='unix' path='/tmp/vhost-user2' mode='client'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='256'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
       </interface>
     </devices>
   </domain>
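The domain can then be defined and started with ``virsh`` (the XML file name
and domain name below are examples):

.. code-block:: console

   virsh define pvp-vm.xml
   virsh start pvp-vm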
Guest setup
-----------

Guest tuning
~~~~~~~~~~~~

#. Append these options to the kernel command line:

   .. code-block:: console

      default_hugepagesz=1G hugepagesz=1G hugepages=1 intel_iommu=on iommu=pt isolcpus=1,2 rcu_nocbs=1,2 nohz_full=1,2
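   On RHEL7 guests, the options can for example be appended to all installed
   kernels with ``grubby``:

   .. code-block:: console

      grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=1 intel_iommu=on iommu=pt isolcpus=1,2 rcu_nocbs=1,2 nohz_full=1,2"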
#. Disable NMIs:

   .. code-block:: console

      echo 0 > /proc/sys/kernel/nmi_watchdog
#. Exclude isolated CPU1 and CPU2 from the writeback cpumask:

   .. code-block:: console

      echo 1 > /sys/bus/workqueue/devices/writeback/cpumask
#. Isolate CPUs from IRQs:

   .. code-block:: console

      clear_mask=0x6 # Isolate CPU1 and CPU2 from IRQs
      for i in /proc/irq/*/smp_affinity
      do
          echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
      done
DPDK build
~~~~~~~~~~

.. code-block:: console

   git clone git://dpdk.org/dpdk
   cd dpdk
   export RTE_SDK=$PWD
   make install T=x86_64-native-linuxapp-gcc DESTDIR=install
Testpmd launch
~~~~~~~~~~~~~~

Probe the vfio module without iommu:

.. code-block:: console

   modprobe -r vfio_iommu_type1
   modprobe -r vfio
   modprobe vfio enable_unsafe_noiommu_mode=1
   cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
   modprobe vfio-pci
Bind the virtio-net devices to DPDK:

.. code-block:: console

   $RTE_SDK/tools/dpdk-devbind.py -b vfio-pci 0000:00:10.0 0000:00:11.0
Start testpmd:

.. code-block:: console

   $RTE_SDK/install/bin/testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
       --proc-type auto --file-prefix pg -- \
       --portmask=3 --forward-mode=macswap --port-topology=chained \
       --disable-hw-vlan --disable-rss -i --rxq=1 --txq=1 \
       --rxd=256 --txd=256 --nb-cores=2 --auto-start
Results template
----------------

The template below should be used when sharing results:

.. code-block:: none
   Traffic Generator: <Test equipment (e.g. IXIA, Moongen, ...)>
   Acceptable Loss: <n>%
   Validation run time: <n>min
   Host DPDK version/commit: <version, SHA-1>
   Guest DPDK version/commit: <version, SHA-1>
   Patches applied: <link to patchwork>
   QEMU version/commit: <version>
   Virtio features: <features (e.g. mrg_rxbuf='off', leave empty if default)>
   CPU: <CPU model>, <CPU frequency>
   NIC: <NIC model>
   Result: <n> Mpps