..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2015 Intel Corporation.

Poll Mode Driver for Emulated Virtio NIC
========================================

Virtio is a para-virtualization framework initiated by IBM and supported by the KVM hypervisor.
In the Data Plane Development Kit (DPDK),
we provide a virtio Poll Mode Driver (PMD) as a software solution, as opposed to the SR-IOV hardware solution,
for fast guest-VM-to-guest-VM and guest-VM-to-host communication.

Vhost is a kernel acceleration module for the qemu virtio back end.
The DPDK extends kni to support a vhost raw socket interface,
which enables vhost to read/write packets directly from/to a physical port.
With this enhancement, virtio can achieve quite promising performance.

For basic qemu-KVM installation and use of the Intel EM poll mode driver in the guest VM,
please refer to the chapter "Driver for VM Emulated Devices".

In this chapter, we demonstrate usage of the virtio PMD with two back ends:
the standard qemu vhost back end and the vhost kni back end.

Virtio Implementation in DPDK
-----------------------------

For details about the virtio spec, refer to the Virtio PCI Card Specification written by Rusty Russell.

As a PMD, virtio provides packet reception and transmission callbacks, ``virtio_recv_pkts`` and ``virtio_xmit_pkts``.

In ``virtio_recv_pkts``, the indexes in the range [vq->vq_used_cons_idx, vq->vq_ring.used->idx) of the vring are available for virtio to burst out.

In ``virtio_xmit_pkts``, the same index range in the vring is available for virtio to clean.
Virtio enqueues the packets to be transmitted into the vring, advances vq->vq_ring.avail->idx,
and then notifies the host back end if necessary.

Features and Limitations of virtio PMD
--------------------------------------

In this release, the virtio PMD provides the basic functionality of packet reception and transmission.

* It supports mergeable buffers per packet when receiving packets and scattered buffers per packet
  when transmitting packets. The supported packet size ranges from 64 to 1518 bytes.

* It supports multicast packets and promiscuous mode.

* The descriptor number for the Rx/Tx queue is hard-coded to be 256 by qemu 2.7 and below.
  If the upper application requests a different descriptor number,
  the virtio PMD generates a warning and falls back to the hard-coded value
  (see the example after this list).
  Since qemu 2.8, the Rx queue size is configurable up to 1024; it is 256
  by default. The Tx queue size is still hard-coded to be 256.

* MAC/VLAN filter features are supported; negotiation with the vhost back end is needed to use them.
  When the back end cannot support the VLAN filter, the virtio application in the guest should not enable it,
  in order to make sure the virtio port is configured correctly, e.g. do not specify ``--enable-hw-vlan`` on the testpmd
  command line.

* ``RTE_PKTMBUF_HEADROOM`` should be defined as
  no less than ``sizeof(struct virtio_net_hdr_mrg_rxbuf)``, which is 12 bytes, when mergeable buffers or
  ``VIRTIO_F_VERSION_1`` are used, and
  no less than ``sizeof(struct virtio_net_hdr)``, which is 10 bytes, when they are not.

* Virtio does not support runtime configuration.

* Virtio supports Link State interrupt.

* Virtio supports Rx interrupt (so far, only a 1:1 mapping of queue to interrupt is supported).

* Virtio supports software VLAN stripping and insertion.

* Virtio supports using port I/O to access PCI resources when the uio/igb_uio module is not available.

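As an illustration of the descriptor-number behaviour described above, the hypothetical testpmd invocation below requests 512 descriptors per ring; with qemu 2.7 and below the virtio PMD warns and falls back to 256, while with qemu 2.8 and above the Rx ring can actually use the larger size. The core list and channel count are only examples.

.. code-block:: console

   testpmd -l 0-1 -n 4 -- -i --rxd=512 --txd=512
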
Prerequisites
-------------

The following prerequisites apply:

* In the BIOS, turn VT-x and VT-d on.

* Use a Linux kernel with the KVM module; the vhost module must be loaded and ioeventfd supported
  (see the sketch after this list for a quick way to verify this).
  A qemu standard back end without vhost support isn't tested, and probably isn't supported.

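A quick way to verify these host prerequisites on a typical Linux distribution is sketched below; module names and exact commands may vary between distributions.

.. code-block:: console

   # Check that the CPU exposes virtualization extensions (VT-x / AMD-V)
   egrep -c '(vmx|svm)' /proc/cpuinfo

   # Check that the KVM and vhost modules are present, loading vhost_net if needed
   lsmod | grep -E 'kvm|vhost'
   modprobe vhost_net
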
Virtio with kni vhost Back End
------------------------------

This section demonstrates a kni vhost back end example setup for Phy-VM communication.

.. _figure_host_vm_comms:

.. figure:: img/host_vm_comms.*

   Host2VM Communication Example Using kni vhost Back End

Host2VM communication example:

#. Load the kni kernel module:

   .. code-block:: console

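      # The path to rte_kni.ko depends on where DPDK was built; adjust as needed.
      insmod rte_kni.ko
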
   Other basic DPDK preparations, like hugepage enabling and uio port binding, are not listed here.
   Please refer to the *DPDK Getting Started Guide* for detailed instructions.

#. Launch the kni user application:

   .. code-block:: console

      examples/kni/build/app/kni -l 0-3 -n 4 -- -p 0x1 -P --config="(0,1,3)"

   This command generates one network device vEth0 for the physical port.
   If more physical ports are specified, the generated network devices will be vEth1, vEth2, and so on.

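   To quickly confirm that the kni network device has been created, it can be listed with a standard iproute2 command (the device name follows the pattern described above):

   .. code-block:: console

      ip link show vEth0
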
   For each physical port, kni creates two user threads.
   One thread loops to fetch packets from the physical NIC port into the kni receive queue.
   The other user thread loops to send packets from the kni transmit queue.

   For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
   places them onto the kni raw socket's queue and wakes up the vhost kernel thread to exchange packets with the virtio virt queue.

   For more details about kni, please refer to :ref:`kni`.

#. Enable the kni raw socket functionality for the specified physical NIC port,
   get the generated file descriptor, and set it in the qemu command line parameters.
   Always remember to set ioeventfd_on and vhost_on.

   .. code-block:: console

      echo 1 > /sys/class/net/vEth0/sock_en
      fd=`cat /sys/class/net/vEth0/sock_fd`
      exec qemu-system-x86_64 -enable-kvm -cpu host \
      -m 2048 -smp 4 -name dpdk-test1-vm1 \
      -drive file=/data/DPDKVMS/dpdk-vm.img \
      -netdev tap,fd=$fd,id=mynet_kni,script=no,vhost=on \
      -device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on

   In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turn corresponds to a physical port,
   which means received packets come from vEth0, and transmitted packets are sent to vEth0.

#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
   When the virtio port in the guest bursts Rx, it is getting packets from the
   raw socket's receive queue.
   When the virtio port bursts Tx, it is sending packets to the tx_q.

   .. code-block:: console

      echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
      modprobe uio_pci_generic
      python usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0

   We use testpmd as the forwarding application in this example.

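   A minimal testpmd invocation inside the guest could look like the sketch below; the core list and memory channel count are illustrative and should be adapted to the VM:

   .. code-block:: console

      testpmd -l 0-1 -n 4 -- -i
      testpmd> start
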
   .. figure:: img/console.*

#. Use an IXIA packet generator to inject a packet stream into the KNI physical port.

   The packet reception and transmission flow path is:

   IXIA packet generator->82599 PF->KNI Rx queue->KNI raw socket queue->Guest
   VM virtio port 0 Rx burst->Guest VM virtio port 0 Tx burst->KNI Tx queue
   ->82599 PF->IXIA packet generator

Virtio with qemu virtio Back End
--------------------------------

.. _figure_host_vm_comms_qemu:

.. figure:: img/host_vm_comms_qemu.*

   Host2VM Communication Example Using qemu vhost Back End

.. code-block:: console

   qemu-system-x86_64 -enable-kvm -cpu host -m 2048 -smp 2 \
   -mem-path /dev/hugepages -mem-prealloc \
   -drive file=/data/DPDKVMS/dpdk-vm1 \
   -netdev tap,id=vm1_p1,ifname=tap0,script=no,vhost=on \
   -device virtio-net-pci,netdev=vm1_p1,bus=pci.0,addr=0x3,ioeventfd=on \
   -device pci-assign,host=04:10.1

In this example, the packet reception flow path is:

   IXIA packet generator->82599 PF->Linux Bridge->TAP0's socket queue->Guest
   VM virtio port 0 Rx burst->Guest VM 82599 VF port1 Tx burst->IXIA packet
   generator

The packet transmission flow is:

   IXIA packet generator->Guest VM 82599 VF port1 Rx burst->Guest VM virtio
   port 0 Tx burst->tap->Linux Bridge->82599 PF->IXIA packet generator

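The Linux bridge and tap device referenced in these flow paths are assumed to already exist on the host. A minimal sketch of creating them with iproute2 is shown below; the bridge, tap and 82599 PF interface names (br0, tap0, enp1s0f0) are illustrative only.

.. code-block:: console

   ip link add name br0 type bridge
   ip tuntap add dev tap0 mode tap
   ip link set dev tap0 master br0
   ip link set dev enp1s0f0 master br0
   ip link set dev br0 up
   ip link set dev tap0 up
   ip link set dev enp1s0f0 up
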
Virtio PMD Rx/Tx Callbacks
--------------------------

The virtio driver has 3 Rx callbacks and 2 Tx callbacks.

Rx callbacks:

#. ``virtio_recv_pkts``:
   Regular version without mergeable Rx buffer support.

#. ``virtio_recv_mergeable_pkts``:
   Regular version with mergeable Rx buffer support.

#. ``virtio_recv_pkts_vec``:
   Vector version without mergeable Rx buffer support; it also fixes the available
   ring indexes and uses vector instructions to optimize performance.

Tx callbacks:

#. ``virtio_xmit_pkts``:
   Regular version.

#. ``virtio_xmit_pkts_simple``:
   Vector version, which fixes the available ring indexes to optimize performance.

By default, the non-vector callbacks are used:

* For Rx: if mergeable Rx buffers are disabled, ``virtio_recv_pkts`` is
  used; otherwise ``virtio_recv_mergeable_pkts``.

* For Tx: ``virtio_xmit_pkts``.

The vector callbacks will be used when:

* ``txmode.offloads`` is set to ``0x0``, which implies:

  * Single segment is specified.

  * No offload support is needed.

* Mergeable Rx buffers are disabled.

The corresponding callbacks are:

* For Rx: ``virtio_recv_pkts_vec``.

* For Tx: ``virtio_xmit_pkts_simple``.

Example of using the vector version of the virtio poll mode driver in
``testpmd``::

   testpmd -l 0-2 -n 4 -- -i --tx-offloads=0x0 --rxq=1 --txq=1 --nb-cores=1

Interrupt mode
--------------

.. _virtio_interrupt_mode:

There are three kinds of interrupts from a virtio device over the PCI bus: config
interrupt, Rx interrupts, and Tx interrupts. The config interrupt is used for
notification of device configuration changes, especially link status (lsc).
Interrupt mode is translated into Rx interrupts in the context of DPDK.

The virtio PMD already supports receiving lsc from qemu when the link
status changes, especially when vhost-user disconnects. However, it fails
to do that if the VM is created by qemu 2.6.2 or below, since the
capability to detect vhost-user disconnection was introduced in qemu 2.7.0.

Prerequisites for Rx interrupts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To support Rx interrupts, the following steps are required:

#. Check if the guest kernel supports VFIO-NOIOMMU:

   Linux has supported VFIO-NOIOMMU since 4.8.0. Make sure the guest
   kernel is compiled with:

   .. code-block:: console

      CONFIG_VFIO_NOIOMMU=y

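   One quick way to check this on a running guest, assuming the distribution installs the kernel configuration under ``/boot``, is:

   .. code-block:: console

      grep CONFIG_VFIO_NOIOMMU /boot/config-$(uname -r)
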
#. Properly set msix vectors when starting the VM:

   Enable multi-queue when starting the VM, and specify msix vectors in the qemu
   command line. (N+1) is the minimum, and (2N+2) is recommended in most cases.

   .. code-block:: console

      $(QEMU) ... -device virtio-net-pci,mq=on,vectors=2N+2 ...
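      # For example, with 2 queue pairs (N = 2), 2N+2 gives 6 vectors:
      $(QEMU) ... -device virtio-net-pci,mq=on,vectors=6 ...
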
#. In the VM, insert the vfio module in NOIOMMU mode:

   .. code-block:: console

      modprobe vfio enable_unsafe_noiommu_mode=1
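      # The vfio-pci module is also needed before the device can be bound to it
      # in the next step (skip this if it is built into the kernel):
      modprobe vfio-pci
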
#. In the VM, bind the virtio device with vfio-pci:

   .. code-block:: console

      python usertools/dpdk-devbind.py -b vfio-pci 00:03.0

Example
~~~~~~~

Here we use l3fwd-power as an example to show how to get started.

.. code-block:: console

   $ l3fwd-power -l 0-1 -- -p 1 -P --config="(0,0,1)" \
                           --no-numa --parse-ptype

Virtio PMD arguments
--------------------

The user can specify the argument below in the devargs.

#. ``vdpa``:
   A virtio device could also be driven by a vDPA (vhost data path acceleration)
   driver, and then works as a HW vhost back end. This argument is used to specify
   that the virtio device needs to work in vDPA mode
   (see the usage sketch after this list).
   (Default: 0 (disabled))

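As a usage sketch, the devarg is appended to the device's PCI address in the EAL ``-w`` (whitelist) option of whatever application drives the device; the PCI address below is illustrative only, and the argument name assumes the ``vdpa`` argument described above:

.. code-block:: console

   <app> ... -w 0000:00:03.0,vdpa=1 ...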