.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2010-2015 Intel Corporation.

Poll Mode Driver for Emulated Virtio NIC
========================================

Virtio is a para-virtualization framework initiated by IBM and supported by the KVM hypervisor.
The Data Plane Development Kit (DPDK) provides a virtio Poll Mode Driver (PMD) as a software solution,
in contrast to the SR-IOV hardware solution,
for fast guest-VM-to-guest-VM and guest-VM-to-host communication.

Vhost is a kernel acceleration module for the qemu virtio backend.
The DPDK extends kni to support a vhost raw socket interface,
which enables vhost to read/write packets directly from/to a physical port.
With this enhancement, virtio can achieve very promising performance.

For basic qemu-KVM installation and use of the Intel EM poll mode driver in the guest VM,
please refer to the chapter "Driver for VM Emulated Devices".

In this chapter, we will demonstrate the usage of the virtio PMD with two back ends:
the standard qemu vhost back end and the kni vhost back end.

Virtio Implementation in DPDK
-----------------------------

For details about the virtio spec, refer to the latest
`VIRTIO (Virtual I/O) Device Specification
<https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=virtio>`_.

As a PMD, virtio provides packet reception and transmission callbacks.

In Rx, packets described by the used descriptors in the vring are available
for virtio to burst out.

In Tx, packets described by the used descriptors in the vring are available
for virtio to clean. Virtio enqueues the packets to be transmitted into the
vring, makes them available to the device, and then notifies the host back end.

Features and Limitations of virtio PMD
--------------------------------------

In this release, the virtio PMD provides the basic functionality of packet reception and transmission.

*   It supports mergeable buffers per packet when receiving packets and scattered buffers per packet
    when transmitting packets. The supported packet size ranges from 64 to 1518 bytes.

*   It supports multicast packets and promiscuous mode.

*   The descriptor number for the Rx/Tx queue is hard-coded to 256 by qemu 2.7 and below.
    If the application requests a different descriptor number,
    the virtio PMD generates a warning and falls back to the hard-coded value.
    With qemu 2.8 and above, the Rx queue size is configurable up to 1024 (256 by default),
    while the Tx queue size is still hard-coded to 256 (see the testpmd example after this list).

*   MAC/VLAN filter features are supported, but negotiation with the vhost backend is needed to enable them.
    When the backend cannot support the VLAN filter, the virtio application in the guest should not enable it,
    in order to make sure the virtio port is configured correctly, e.g. do not specify '--enable-hw-vlan' in the testpmd command line.
62 * "RTE_PKTMBUF_HEADROOM" should be defined
63 no less than "sizeof(struct virtio_net_hdr_mrg_rxbuf)", which is 12 bytes when mergeable or
64 "VIRTIO_F_VERSION_1" is set.
65 no less than "sizeof(struct virtio_net_hdr)", which is 10 bytes, when using non-mergeable.

*   Virtio does not support runtime configuration.

*   Virtio supports Link State interrupt.

*   Virtio supports Rx interrupt (so far, only 1:1 queue/interrupt mapping is supported).

*   Virtio supports software VLAN stripping and insertion.

*   Virtio supports using port I/O to get the PCI resource when the uio/igb_uio module is not available.
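
As noted in the queue-size item above, an application can request a larger Rx ring when the
hypervisor is qemu 2.8 or newer. A minimal, hypothetical testpmd invocation for a guest with a
single virtio port might look as follows (the EAL core and memory options are placeholders only):

.. code-block:: console

   # Request 1024 Rx descriptors; the Tx ring stays at the fixed 256 entries.
   # With qemu 2.7 or older, the PMD warns and falls back to 256 Rx descriptors.
   ./testpmd -l 0-1 -n 4 -- -i --rxd=1024 --txd=256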

Prerequisites
-------------

The following prerequisites apply:

*   In the BIOS, turn VT-x and VT-d on.

*   A Linux kernel with the KVM module; the vhost module loaded and ioeventfd supported.
    The qemu standard backend without vhost support isn't tested, and probably isn't supported
    (a quick module check is shown after this list).
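
One simple (though not the only) way to confirm that the KVM and vhost kernel modules from the
prerequisites above are actually loaded on the host is:

.. code-block:: console

   # Both kvm/kvm_intel and vhost_net should appear if the prerequisites are met.
   lsmod | grep -e kvm -e vhost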

Virtio with kni vhost Back End
------------------------------

This section demonstrates an example setup of the kni vhost back end for Phy-VM communication.

.. _figure_host_vm_comms:

.. figure:: img/host_vm_comms.*

   Host2VM Communication Example Using kni vhost Back End


Host2VM communication example:

#. Load the kni kernel module:

   .. code-block:: console

      insmod rte_kni.ko

   Other basic DPDK preparations, such as hugepage enabling and uio port binding, are not listed here.
   Please refer to the *DPDK Getting Started Guide* for detailed instructions.

#. Launch the kni user application:

   .. code-block:: console

      examples/kni/build/app/kni -l 0-3 -n 4 -- -p 0x1 -P --config="(0,1,3)"

   This command generates one network device, vEth0, for the physical port.
   If more physical ports are specified, the generated network devices will be vEth1, vEth2, and so on.

   For each physical port, kni creates two user threads.
   One thread loops to fetch packets from the physical NIC port into the kni receive queue.
   The other thread loops to send packets from the kni transmit queue.

   For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
   places them onto kni's raw socket queue, and wakes up the vhost kernel thread to exchange packets with the virtio virtqueue.

   For more details about kni, please refer to :ref:`kni`.

#. Enable the kni raw socket functionality for the specified physical NIC port,
   get the generated file descriptor, and set it in the qemu command line parameters.
   Always remember to set ioeventfd=on and vhost=on.

   .. code-block:: console

      echo 1 > /sys/class/net/vEth0/sock_en
      fd=`cat /sys/class/net/vEth0/sock_fd`
      exec qemu-system-x86_64 -enable-kvm -cpu host \
      -m 2048 -smp 4 -name dpdk-test1-vm1 \
      -drive file=/data/DPDKVMS/dpdk-vm.img \
      -netdev tap,fd=$fd,id=mynet_kni,script=no,vhost=on \
      -device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on

   In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turn corresponds to a physical port,
   which means received packets come from vEth0, and transmitted packets are sent to vEth0.

#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
   When the virtio port in the guest bursts Rx, it is getting packets from the
   raw socket's receive queue.
   When the virtio port bursts Tx, it is sending packets to the tx_q.

   .. code-block:: console

      echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
      modprobe uio_pci_generic
      python usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0

   We use testpmd as the forwarding application in this example
   (an example invocation is shown after this step list).

   .. figure:: img/console.*

#. Use IXIA packet generator to inject a packet stream into the KNI physical port.

   The packet reception and transmission flow path is:

   IXIA packet generator -> 82599 PF -> KNI Rx queue -> KNI raw socket queue -> Guest
   VM virtio port 0 Rx burst -> Guest VM virtio port 0 Tx burst -> KNI Tx queue
   -> 82599 PF -> IXIA packet generator
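
As referenced in the binding step above, a minimal testpmd invocation for the forwarding
application inside the guest could look like the following; the core list and number of memory
channels are placeholders to adapt to the actual guest:

.. code-block:: console

   # Run testpmd on the single virtio port in I/O forwarding mode.
   ./testpmd -l 0-1 -n 4 -- -i
   testpmd> set fwd io
   testpmd> start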

Virtio with qemu virtio Back End
--------------------------------

.. _figure_host_vm_comms_qemu:

.. figure:: img/host_vm_comms_qemu.*

   Host2VM Communication Example Using qemu vhost Back End

.. code-block:: console

   qemu-system-x86_64 -enable-kvm -cpu host -m 2048 -smp 2 \
   -mem-path /dev/hugepages -mem-prealloc \
   -drive file=/data/DPDKVMS/dpdk-vm1 \
   -netdev tap,id=vm1_p1,ifname=tap0,script=no,vhost=on \
   -device virtio-net-pci,netdev=vm1_p1,bus=pci.0,addr=0x3,ioeventfd=on \
   -device pci-assign,host=04:10.1

In this example, the packet reception flow path is:

   IXIA packet generator -> 82599 PF -> Linux Bridge -> TAP0's socket queue -> Guest
   VM virtio port 0 Rx burst -> Guest VM 82599 VF port1 Tx burst -> IXIA packet
   generator

The packet transmission flow is:

   IXIA packet generator -> Guest VM 82599 VF port1 Rx burst -> Guest VM virtio
   port 0 Tx burst -> tap -> Linux Bridge -> 82599 PF -> IXIA packet generator
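
The flow paths above assume that the physical 82599 port and the qemu tap device are attached to
the same Linux bridge on the host. One possible way to build that bridge (the interface names
``br0``, ``eth1`` and ``tap0`` are examples only) is:

.. code-block:: console

   # Create the bridge and attach the physical uplink and the VM's tap device.
   brctl addbr br0
   brctl addif br0 eth1
   brctl addif br0 tap0
   ip link set br0 up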

Virtio PMD Rx/Tx Callbacks
--------------------------

The virtio driver has 6 Rx callbacks and 3 Tx callbacks.

Rx callbacks:

#. ``virtio_recv_pkts``:
   Regular version without mergeable Rx buffer support for split virtqueue.

#. ``virtio_recv_mergeable_pkts``:
   Regular version with mergeable Rx buffer support for split virtqueue.

#. ``virtio_recv_pkts_vec``:
   Vector version without mergeable Rx buffer support, which also fixes the available
   ring indexes and uses vector instructions to optimize performance, for split
   virtqueue.

#. ``virtio_recv_pkts_inorder``:
   In-order version with mergeable and non-mergeable Rx buffer support
   for split virtqueue.

#. ``virtio_recv_pkts_packed``:
   Regular and in-order version without mergeable Rx buffer support for
   packed virtqueue.

#. ``virtio_recv_mergeable_pkts_packed``:
   Regular and in-order version with mergeable Rx buffer support for packed
   virtqueue.

Tx callbacks:

#. ``virtio_xmit_pkts``:
   Regular version for split virtqueue.

#. ``virtio_xmit_pkts_inorder``:
   In-order version for split virtqueue.

#. ``virtio_xmit_pkts_packed``:
   Regular and in-order version for packed virtqueue.

By default, the non-vector callbacks are used:

*   For Rx: If the mergeable Rx buffer feature is disabled, then ``virtio_recv_pkts``
    or ``virtio_recv_pkts_packed`` will be used; otherwise
    ``virtio_recv_mergeable_pkts`` or ``virtio_recv_mergeable_pkts_packed``
    will be used.

*   For Tx: ``virtio_xmit_pkts`` or ``virtio_xmit_pkts_packed`` will be used.

Vector callbacks will be used when:

*   The mergeable Rx buffer feature is disabled.

The corresponding callbacks are:

*   For Rx: ``virtio_recv_pkts_vec``.

There are no vector callbacks for packed virtqueue for now.

Example of using the vector version of the virtio poll mode driver in
``testpmd``::

   testpmd -l 0-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1

In-order callbacks only work on the simulated virtio-user vdev.

For split virtqueue:

*   For Rx: If in-order is enabled, then ``virtio_recv_pkts_inorder`` is used.

*   For Tx: If in-order is enabled, then ``virtio_xmit_pkts_inorder`` is used.

For packed virtqueue, the default callbacks already support the
in-order feature.
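
As a concrete illustration of the notes above, the in-order callbacks can be exercised by creating
a virtio-user vdev with the ``in_order`` devarg (see the devargs described at the end of this
chapter); the socket path below is an example only, and the backend must negotiate the in-order
feature for it to take effect:

.. code-block:: console

   # Attach a virtio-user port to a vhost-user backend with in-order enabled.
   ./testpmd -l 0-1 -n 4 --vdev=virtio_user0,path=/tmp/vhost-user.sock,queues=1,in_order=1 -- -i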

Interrupt mode
--------------

.. _virtio_interrupt_mode:

There are three kinds of interrupts from a virtio device over the PCI bus: config
interrupt, Rx interrupts, and Tx interrupts. The config interrupt is used for
notification of device configuration changes, especially link status (lsc).
Interrupt mode is translated into Rx interrupts in the context of DPDK.

The virtio PMD already supports receiving lsc from qemu when the link
status changes, especially when vhost-user disconnects. However, it fails
to do so if the VM is created by qemu 2.6.2 or below, since the
capability to detect vhost-user disconnection was introduced in qemu 2.7.0.

Prerequisites for Rx interrupts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To support Rx interrupts, the following steps are required:

#. Check if the guest kernel supports VFIO-NOIOMMU:

   Linux has supported VFIO-NOIOMMU since 4.8.0. Make sure the guest
   kernel is compiled with:

   .. code-block:: console

      CONFIG_VFIO_NOIOMMU=y

#. Properly set msix vectors when starting the VM:

   Enable multi-queue when starting the VM, and specify msix vectors in the qemu
   command line. (N+1) is the minimum and (2N+2) is recommended, where N is the number of queue pairs.

   .. code-block:: console

      $(QEMU) ... -device virtio-net-pci,mq=on,vectors=2N+2 ...

#. In the VM, insert the vfio module in NOIOMMU mode:

   .. code-block:: console

      modprobe vfio enable_unsafe_noiommu_mode=1
      modprobe vfio-pci

#. In the VM, bind the virtio device to vfio-pci:

   .. code-block:: console

      python usertools/dpdk-devbind.py -b vfio-pci 00:03.0

Example
~~~~~~~

Here we use l3fwd-power as an example to show how to get started:

.. code-block:: console

   $ l3fwd-power -l 0-1 -- -p 1 -P --config="(0,0,1)" \
                           --no-numa --parse-ptype

Virtio PMD arguments
--------------------

The following devargs are supported by the PCI virtio driver:

#. ``vdpa``:

   A virtio device could also be driven by a vDPA (vhost data path acceleration)
   driver, and work as a HW vhost backend. This argument is used to specify
   that a virtio device needs to work in vDPA mode.
   (Default: 0 (disabled))

The following devargs are supported by the virtio-user vdev
(a combined usage example is shown after this list):

#. ``path``:

   It is used to specify a path to connect to the vhost backend.

#. ``mac``:

   It is used to specify the MAC address.

#. ``cq``:

   It is used to enable the control queue. (Default: 0 (disabled))

#. ``queue_size``:

   It is used to specify the queue size. (Default: 256)

#. ``queues``:

   It is used to specify the queue number. (Default: 1)

#. ``iface``:

   It is used to specify the host interface name for the vhost-kernel
   backend.

#. ``server``:

   It is used to enable the server mode when using the vhost-user backend.
   (Default: 0 (disabled))

#. ``mrg_rxbuf``:

   It is used to enable the virtio device mergeable Rx buffer feature.
   (Default: 1 (enabled))

#. ``in_order``:

   It is used to enable the virtio device in-order feature.
   (Default: 1 (enabled))

#. ``packed_vq``:

   It is used to enable the virtio device packed virtqueue feature.
   (Default: 0 (disabled))
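
As referenced above, several of these devargs are typically combined on a single ``--vdev`` option.
The hypothetical testpmd command lines below (socket path, PCI address and EAL options are
placeholders only) sketch both the virtio-user vdev devargs and the PCI ``vdpa`` devarg:

.. code-block:: console

   # virtio-user vdev connected to a vhost-user socket, with 2 queue pairs of 1024 descriptors,
   # mergeable Rx buffers kept enabled and the packed virtqueue feature requested.
   ./testpmd -l 0-2 -n 4 \
       --vdev=virtio_user0,path=/tmp/vhost-user.sock,queues=2,queue_size=1024,mrg_rxbuf=1,packed_vq=1 \
       -- -i --rxq=2 --txq=2 --nb-cores=2

   # PCI virtio device requested to work in vDPA mode.
   ./testpmd -l 0-1 -n 4 -w 0000:00:03.0,vdpa=1 -- -i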