..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2016 Red Hat, Inc.
PVP reference benchmark setup using testpmd
===========================================
This guide lists the steps required to set up a PVP benchmark using testpmd as
a simple forwarder between NICs and Vhost interfaces. The goal of this setup
is to have a reference PVP benchmark without using external vSwitches (OVS,
VPP, ...) to make it easier to obtain reproducible results and to facilitate
continuous integration testing.
The guide covers two ways of launching the VM, either by directly calling the
QEMU command line, or by relying on libvirt. It has been tested with DPDK
v16.11 using RHEL7 for both host and guest.
Setup overview
--------------

.. figure:: img/pvp_2nics.*

   PVP setup using 2 NICs
In this diagram, each red arrow represents one logical core. This use case
requires 6 dedicated logical cores. A forwarding configuration with a single
NIC is also possible, requiring 3 logical cores.
Host setup
----------

In this setup, we isolate 6 cores (from CPU2 to CPU7) on the same NUMA
node. Two cores are assigned to the VM vCPUs running testpmd and four are
assigned to testpmd on the host.

Host tuning
~~~~~~~~~~~
#. In the BIOS, disable turbo-boost and hyper-threading.
#. Append these options to the Kernel command line:

   .. code-block:: console

      intel_pstate=disable mce=ignore_ce default_hugepagesz=1G hugepagesz=1G hugepages=6 isolcpus=2-7 rcu_nocbs=2-7 nohz_full=2-7 iommu=pt intel_iommu=on
#. Disable hyper-threading at runtime if necessary, or if the BIOS is not
   accessible:

   .. code-block:: console

      cat /sys/devices/system/cpu/cpu*[0-9]/topology/thread_siblings_list \
          | sort | uniq \
          | awk -F, '{system("echo 0 > /sys/devices/system/cpu/cpu"$2"/online")}'
#. Disable NMI watchdog:

   .. code-block:: console

      echo 0 > /proc/sys/kernel/nmi_watchdog
#. Exclude isolated CPUs from the writeback cpumask:

   .. code-block:: console

      echo ffffff03 > /sys/bus/workqueue/devices/writeback/cpumask
#. Isolate CPUs from IRQs:

   .. code-block:: console

      clear_mask=0xfc # Isolate CPU2 to CPU7 from IRQs
      for i in /proc/irq/*/smp_affinity
      do
          echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
      done
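The two mask values used in the steps above can be derived from the isolated
CPU list instead of being hard-coded. A quick sketch of the arithmetic in
plain shell (the CPU numbers are the ones this guide isolates):

```shell
# Build the mask of isolated CPUs 2-7, then derive both values used above.
isol_mask=0
for cpu in 2 3 4 5 6 7
do
    isol_mask=$(( isol_mask | (1 << cpu) ))
done

# IRQ clear mask: the isolated CPUs themselves.
printf 'clear_mask=0x%x\n' "$isol_mask"                 # clear_mask=0xfc

# Writeback cpumask: the low 32 CPU bits minus the isolated ones.
printf 'writeback=%x\n' $(( 0xffffffff & ~isol_mask ))  # writeback=ffffff03
```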
Qemu build
~~~~~~~~~~

.. code-block:: console

   git clone git://git.qemu.org/qemu.git
   cd qemu
   mkdir bin
   cd bin
   ../configure --target-list=x86_64-softmmu
   make
DPDK build
~~~~~~~~~~

.. code-block:: console

   git clone git://dpdk.org/dpdk
   cd dpdk
   export RTE_SDK=$PWD
   make install T=x86_64-native-linux-gcc DESTDIR=install
Testpmd launch
~~~~~~~~~~~~~~

#. Assign NICs to DPDK:

   .. code-block:: console

      modprobe vfio-pci
      $RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1

   .. note::

      The Sandy Bridge family seems to have some IOMMU limitations giving poor
      performance results. To achieve good performance on these machines,
      consider using UIO instead.
#. Launch the testpmd application:

   .. code-block:: console

      $RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
          --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
          --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
          --portmask=f -i --rxq=1 --txq=1 \
          --nb-cores=4 --forward-mode=io

   With this command, isolated CPUs 2 to 5 will be used as lcores for the PMD
   threads.
#. In testpmd interactive mode, set the portlist to obtain the correct port
   chaining:

   .. code-block:: console

      set portlist 0,2,1,3
      start
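As a side note, the ``--portmask`` value can be derived rather than memorized:
each bit enables one port id. Assuming the usual enumeration order (the two
physical NICs as ports 0 and 1, the two vhost-user vdevs as ports 2 and 3),
all four bits must be set:

```shell
# Ports 0-3 enabled -> bits 0-3 set.
printf '%x\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # f
```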
VM launch
---------

The VM may be launched either by calling QEMU directly, or by using libvirt.
Qemu way
~~~~~~~~

Launch QEMU with two Virtio-net devices paired to the vhost-user sockets
created by testpmd. The example below uses default Virtio-net options, but
options may be specified, for example to disable mergeable buffers or indirect
descriptors:
.. code-block:: console

   <QEMU path>/bin/x86_64-softmmu/qemu-system-x86_64 \
       -enable-kvm -cpu host -m 3072 -smp 3 \
       -chardev socket,id=char0,path=/tmp/vhost-user1 \
       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
       -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
       -chardev socket,id=char1,path=/tmp/vhost-user2 \
       -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
       -device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
       -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
       -numa node,memdev=mem -mem-prealloc \
       -net user,hostfwd=tcp::1002$1-:22 -net nic \
       -qmp unix:/tmp/qmp.socket,server,nowait \
       -monitor stdio <vm_image>.qcow2
You can use this `qmp-vcpu-pin <https://patchwork.kernel.org/patch/9361617/>`_
script to pin vCPUs.

It can be used as follows, for example to pin 3 vCPUs to CPUs 1, 6 and 7,
where isolated CPUs 6 and 7 will be used as lcores for the Virtio PMDs:
.. code-block:: console

   export PYTHONPATH=$PYTHONPATH:<QEMU path>/scripts/qmp
   ./qmp-vcpu-pin -s /tmp/qmp.socket 1 6 7
Libvirt way
~~~~~~~~~~~

Some initial steps are required for libvirt to be able to connect to testpmd's
vhost-user sockets.

First, the SELinux policy needs to be set to permissive, since testpmd is
generally run as root (note that a reboot is required):
.. code-block:: console

   cat /etc/selinux/config

   # This file controls the state of SELinux on the system.
   # SELINUX= can take one of these three values:
   #     enforcing - SELinux security policy is enforced.
   #     permissive - SELinux prints warnings instead of enforcing.
   #     disabled - No SELinux policy is loaded.
   SELINUX=permissive
   # SELINUXTYPE= can take one of three values:
   #     targeted - Targeted processes are protected,
   #     minimum - Modification of targeted policy.
   #                Only selected processes are protected.
   #     mls - Multi Level Security protection.
   SELINUXTYPE=targeted
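The edit can also be scripted. A minimal sketch of the substitution, run here
against sample input instead of the real ``/etc/selinux/config`` (use
``sed -i`` on the actual file):

```shell
# Rewrite the SELINUX= line to permissive; SELINUXTYPE= is left untouched
# because the anchored pattern requires '=' right after 'SELINUX'.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' \
    | sed 's/^SELINUX=.*/SELINUX=permissive/'
```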
Also, Qemu needs to be run as root, which has to be specified in
``/etc/libvirt/qemu.conf``:

.. code-block:: console

   user = "root"
Once the domain is created, the following snippet is an extract of the most
important information (hugepages, vCPU pinning, Virtio PCI devices):

.. code-block:: xml

   <domain type='kvm'>
     <memory unit='KiB'>3145728</memory>
     <currentMemory unit='KiB'>3145728</currentMemory>
     <memoryBacking>
       <hugepages>
         <page size='1048576' unit='KiB' nodeset='0'/>
       </hugepages>
     </memoryBacking>

     <vcpu placement='static'>3</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='6'/>
       <vcpupin vcpu='2' cpuset='7'/>
       <emulatorpin cpuset='0'/>
     </cputune>

     <numatune>
       <memory mode='strict' nodeset='0'/>
     </numatune>

     <os>
       <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
     </os>

     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='3' threads='1'/>
       <numa>
         <cell id='0' cpus='0-2' memory='3145728' unit='KiB' memAccess='shared'/>
       </numa>
     </cpu>

     <devices>
       <interface type='vhostuser'>
         <mac address='56:48:4f:53:54:01'/>
         <source type='unix' path='/tmp/vhost-user1' mode='client'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='256'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
       </interface>
       <interface type='vhostuser'>
         <mac address='56:48:4f:53:54:02'/>
         <source type='unix' path='/tmp/vhost-user2' mode='client'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='256'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
       </interface>
     </devices>
   </domain>
Guest setup
-----------

Guest tuning
~~~~~~~~~~~~

#. Append these options to the Kernel command line:

   .. code-block:: console

      default_hugepagesz=1G hugepagesz=1G hugepages=1 intel_iommu=on iommu=pt isolcpus=1,2 rcu_nocbs=1,2 nohz_full=1,2
#. Disable NMI watchdog:

   .. code-block:: console

      echo 0 > /proc/sys/kernel/nmi_watchdog
#. Exclude isolated CPU1 and CPU2 from the writeback cpumask:

   .. code-block:: console

      echo 1 > /sys/bus/workqueue/devices/writeback/cpumask
#. Isolate CPUs from IRQs:

   .. code-block:: console

      clear_mask=0x6 # Isolate CPU1 and CPU2 from IRQs
      for i in /proc/irq/*/smp_affinity
      do
          echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
      done
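As on the host, the writeback mask follows from the isolated CPU list; here
the guest exposes 3 vCPUs (all-CPUs mask ``0x7``) and isolates CPU1 and CPU2
(mask ``0x6``), leaving only CPU0:

```shell
# 3 vCPUs (0x7) minus isolated CPU1-CPU2 (0x6) leaves CPU0 only.
printf '%x\n' $(( 0x7 & ~0x6 ))   # 1
```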
DPDK build
~~~~~~~~~~

.. code-block:: console

   git clone git://dpdk.org/dpdk
   cd dpdk
   export RTE_SDK=$PWD
   make install T=x86_64-native-linux-gcc DESTDIR=install
Testpmd launch
~~~~~~~~~~~~~~

Probe the vfio module without iommu:

.. code-block:: console

   modprobe -r vfio_iommu_type1
   modprobe -r vfio
   modprobe vfio enable_unsafe_noiommu_mode=1
   cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
   modprobe vfio-pci
Bind the virtio-net devices to DPDK:

.. code-block:: console

   $RTE_SDK/usertools/dpdk-devbind.py -b vfio-pci 0000:00:10.0 0000:00:11.0
Start testpmd:

.. code-block:: console

   $RTE_SDK/install/bin/testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
       --proc-type auto --file-prefix pg -- \
       --portmask=3 --forward-mode=macswap --port-topology=chained \
       --disable-rss -i --rxq=1 --txq=1 \
       --rxd=256 --txd=256 --nb-cores=2 --auto-start
Results template
----------------

The template below should be used when sharing results:

.. code-block:: none

   Traffic Generator: <Test equipment (e.g. IXIA, Moongen, ...)>
   Acceptable Loss: <n>%
   Validation run time: <n>min
   Host DPDK version/commit: <version, SHA-1>
   Guest DPDK version/commit: <version, SHA-1>
   Patches applied: <link to patchwork>
   QEMU version/commit: <version>
   Virtio features: <features (e.g. mrg_rxbuf='off', leave empty if default)>
   CPU: <CPU model>, <CPU frequency>