..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.

Vhost Library
=============
The vhost library implements a user space virtio net server allowing the user
to manipulate the virtio ring directly. In other words, it allows the user
to fetch/put packets from/to the VM virtio net device. To achieve this, the
vhost library should be able to:
* Access the guest memory:

  For QEMU, this is done by using the ``-object memory-backend-file,share=on,...``
  option, which means QEMU will create a file to serve as the guest RAM.
  The ``share=on`` option allows another process to map that file, which
  means it can access the guest RAM.
* Know all the necessary information about the vring:

  Information such as where the available ring is stored. Vhost defines some
  messages (passed through a Unix domain socket file) to tell the backend all
  the information it needs to manipulate the vring.
Vhost API Overview
------------------

The following is an overview of some key Vhost API functions:
* ``rte_vhost_driver_register(path, flags)``

  This function registers a vhost driver into the system. ``path`` specifies
  the Unix domain socket file path.
  Currently supported flags are:

  - ``RTE_VHOST_USER_CLIENT``

    DPDK vhost-user will act as the client when this flag is given. See below
    for an explanation.
  - ``RTE_VHOST_USER_NO_RECONNECT``

    When DPDK vhost-user acts as the client it will keep trying to reconnect
    to the server (QEMU) until it succeeds. This is useful in two cases:

    * When QEMU is not started yet.
    * When QEMU restarts (for example due to a guest OS reboot).

    This reconnect option is enabled by default. However, it can be turned
    off by setting this flag.
  - ``RTE_VHOST_USER_DEQUEUE_ZERO_COPY``

    Dequeue zero copy will be enabled when this flag is set. It is disabled
    by default.

    There are some truths (including limitations) you might want to know
    while setting this flag:

    * zero copy is not good for small packets (typically for packet sizes
      below 512).

    * zero copy is really good for the VM2VM case. For iperf between two VMs,
      the boost could be above 70% (when TSO is enabled).
    * For zero copy in the VM2NIC case, the guest Tx used vring may be
      starved if the PMD driver consumes the mbufs but does not release them
      in a timely manner.

      For example, the i40e driver has an optimization to maximize NIC
      pipeline utilization, which postpones returning transmitted mbufs
      until only ``tx_free_threshold`` free descs are left. The virtio Tx
      used ring will be starved if (num_i40e_tx_desc - num_virtio_tx_desc >
      tx_free_threshold) holds, since i40e will not return the mbufs back.
      For instance, with 512 i40e Tx descs, 256 virtio Tx descs and a
      ``tx_free_threshold`` of 32, 512 - 256 = 256 > 32, so starvation can
      occur.

      A performance tip for tuning zero copy in the VM2NIC case is to adjust
      the frequency of mbuf freeing (i.e. adjust ``tx_free_threshold`` of
      the i40e driver) to balance consumer and producer.
    * Guest memory should be backed by huge pages to achieve better
      performance. Using a 1G page size is best.

      When dequeue zero copy is enabled, the mapping between guest physical
      addresses and host physical addresses has to be established. Using
      non-huge pages means far more page segments. To keep it simple, DPDK
      vhost does a linear search of those segments, thus the fewer the
      segments, the quicker the mapping is found. NOTE: we may speed this up
      with a tree search in the future.
    * Currently, zero copy does not work when using vfio-pci with IOMMU
      mode; this is because we don't set up an IOMMU DMA mapping for guest
      memory. If you have to use the vfio-pci driver, please insert the
      vfio-pci kernel module in noiommu mode.
  - ``RTE_VHOST_USER_IOMMU_SUPPORT``

    IOMMU support will be enabled when this flag is set. It is disabled by
    default.

    Enabling this flag makes it possible to use a guest vIOMMU to protect
    vhost from accessing memory the virtio device isn't allowed to, when the
    feature is negotiated and an IOMMU device is declared.

    However, this feature enables vhost-user's reply-ack protocol feature,
    whose implementation is buggy in QEMU v2.7.0-v2.9.0 when doing
    multiqueue. Enabling this flag with these QEMU versions results in QEMU
    being blocked when multiple queue pairs are declared.
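
  As a minimal sketch of how these flags are passed, the snippet below
  registers a vhost-user port with IOMMU support; the helper name is
  illustrative, not part of the library:

  .. code-block:: c

      #include <rte_vhost.h>

      /* Illustrative helper: register a vhost-user socket with IOMMU
       * support. Other flags (e.g. RTE_VHOST_USER_DEQUEUE_ZERO_COPY)
       * can be OR'ed in the same way. */
      static int
      register_with_iommu(const char *path)
      {
          uint64_t flags = RTE_VHOST_USER_IOMMU_SUPPORT;

          return rte_vhost_driver_register(path, flags);
      }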
* ``rte_vhost_driver_set_features(path, features)``

  This function sets the feature bits the vhost-user driver supports. The
  vhost-user driver could be vhost-user net, yet it could be something else,
  say, vhost-user SCSI.
* ``rte_vhost_driver_callback_register(path, vhost_device_ops)``

  This function registers a set of callbacks, to let DPDK applications take
  the appropriate action when some events happen. The following events are
  currently supported:
  * ``new_device(int vid)``

    This callback is invoked when a virtio device becomes ready. ``vid``
    is the vhost device ID.

  * ``destroy_device(int vid)``

    This callback is invoked when a virtio device is paused or shut down.
  * ``vring_state_changed(int vid, uint16_t queue_id, int enable)``

    This callback is invoked when a specific queue's state is changed, for
    example when it is enabled or disabled.
  * ``features_changed(int vid, uint64_t features)``

    This callback is invoked when the feature set changes. For example,
    ``VHOST_F_LOG_ALL`` will be set/cleared at the start/end of live
    migration, respectively.
  * ``new_connection(int vid)``

    This callback is invoked on a new vhost-user socket connection. If DPDK
    acts as the server, the device should not be deleted before the
    ``destroy_connection`` callback is received.
  * ``destroy_connection(int vid)``

    This callback is invoked when the vhost-user socket connection is
    closed. It indicates that the device with id ``vid`` is no longer in use
    and can be safely destroyed.
* ``rte_vhost_driver_disable/enable_features(path, features)``

  This function disables/enables some features. For example, it can be used
  to disable mergeable buffers and TSO features, which both are enabled by
  default.
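
  For instance, a minimal sketch of disabling mergeable buffers and TSO
  could look like the following; the helper name is illustrative, and the
  feature bits come from ``linux/virtio_net.h``:

  .. code-block:: c

      #include <linux/virtio_net.h>
      #include <rte_vhost.h>

      /* Illustrative helper: turn off mergeable buffers and TSO on a
       * registered vhost-user port, before the driver is started. */
      static int
      disable_mergeable_and_tso(const char *path)
      {
          uint64_t features = (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
                              (1ULL << VIRTIO_NET_F_HOST_TSO4) |
                              (1ULL << VIRTIO_NET_F_HOST_TSO6);

          return rte_vhost_driver_disable_features(path, features);
      }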
* ``rte_vhost_driver_start(path)``

  This function triggers the vhost-user negotiation. It should be invoked at
  the end of initializing a vhost-user driver.
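
  Putting the above together, a minimal initialization sequence might look
  like the sketch below; the callback bodies and the helper name are
  placeholders, not part of the library:

  .. code-block:: c

      #include <stdio.h>
      #include <rte_vhost.h>

      /* Placeholder callbacks; a real application would set up and tear
       * down its per-device state here. */
      static int
      new_device(int vid)
      {
          printf("vhost device %d is ready\n", vid);
          return 0;
      }

      static void
      destroy_device(int vid)
      {
          printf("vhost device %d is destroyed\n", vid);
      }

      static const struct vhost_device_ops ops = {
          .new_device     = new_device,
          .destroy_device = destroy_device,
      };

      static int
      setup_vhost_driver(const char *path)
      {
          /* Create the Unix domain socket (server mode by default). */
          if (rte_vhost_driver_register(path, 0) < 0)
              return -1;

          /* Register callbacks before triggering the negotiation. */
          if (rte_vhost_driver_callback_register(path, &ops) < 0)
              return -1;

          /* Trigger the vhost-user negotiation. */
          return rte_vhost_driver_start(path);
      }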
* ``rte_vhost_enqueue_burst(vid, queue_id, pkts, count)``

  Transmits (enqueues) ``count`` packets from host to guest.
* ``rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count)``

  Receives (dequeues) ``count`` packets from guest, and stores them in
  ``pkts``.
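
  To illustrate how these two calls pair up in a simple forwarding loop,
  here is a rough sketch; the ``echo_burst`` helper, the burst size and the
  queue index macros are assumptions for illustration (for the first queue
  pair, queue 0 faces the guest's Rx path and queue 1 its Tx path):

  .. code-block:: c

      #include <rte_mbuf.h>
      #include <rte_vhost.h>

      #define MAX_PKT_BURST 32

      /* First queue pair: queue 0 receives packets for the guest, queue 1
       * carries packets transmitted by the guest. */
      #define VIRTIO_RXQ 0
      #define VIRTIO_TXQ 1

      static void
      echo_burst(int vid, struct rte_mempool *mbuf_pool)
      {
          struct rte_mbuf *pkts[MAX_PKT_BURST];
          uint16_t nb_rx, nb_tx, i;

          /* Dequeue packets the guest has transmitted. */
          nb_rx = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, mbuf_pool,
                  pkts, MAX_PKT_BURST);

          /* Echo them back into the guest's Rx queue. */
          nb_tx = rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, nb_rx);

          /* Free any packets the guest's Rx ring could not accept. */
          for (i = nb_tx; i < nb_rx; i++)
              rte_pktmbuf_free(pkts[i]);
      }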
* ``rte_vhost_crypto_create(vid, cryptodev_id, sess_mempool, socket_id)``

  As an extension of new_device(), this function adds virtio-crypto workload
  acceleration capability to the device. All crypto workload is processed by
  DPDK cryptodev with the device ID of ``cryptodev_id``.
* ``rte_vhost_crypto_free(vid)``

  Frees the memory and vhost-user message handlers created in
  rte_vhost_crypto_create().
* ``rte_vhost_crypto_fetch_requests(vid, queue_id, ops, nb_ops)``

  Receives (dequeues) ``nb_ops`` virtio-crypto requests from guest, parses
  them to DPDK Crypto Operations, and fills the ``ops`` with parsing results.
* ``rte_vhost_crypto_finalize_requests(queue_id, ops, nb_ops)``

  After the ``ops`` are dequeued from Cryptodev, finalizes the jobs and
  notifies the guest(s).
* ``rte_vhost_crypto_set_zero_copy(vid, option)``

  Enables or disables the zero copy feature of the vhost crypto backend.
Vhost-user Implementations
--------------------------

Vhost-user uses Unix domain sockets for passing messages. This means the DPDK
vhost-user implementation has two options:
* DPDK vhost-user acts as the server.

  DPDK will create a Unix domain socket server file and listen for
  connections from the frontend.

  Note, this is the default mode, and the only mode before DPDK v16.07.
* DPDK vhost-user acts as the client.

  Unlike the server mode, this mode doesn't create the socket file;
  it just tries to connect to the server (which is responsible for creating
  the socket file instead).

  When the DPDK vhost-user application restarts, DPDK vhost-user will try to
  connect to the server again. This is how the "reconnect" feature works.
  * The "reconnect" feature requires **QEMU v2.7** (or above).

  * The vhost supported features must be exactly the same before and
    after the restart. For example, if TSO is disabled and then enabled,
    nothing will work and undefined issues might happen.
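
  A minimal sketch of registering in client mode follows; the helper name is
  illustrative:

  .. code-block:: c

      #include <rte_vhost.h>

      /* Connect to a socket created by QEMU; reconnection is on by
       * default and can be disabled by also OR'ing in
       * RTE_VHOST_USER_NO_RECONNECT. */
      static int
      register_client_mode(const char *path)
      {
          return rte_vhost_driver_register(path, RTE_VHOST_USER_CLIENT);
      }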
No matter which mode is used, once a connection is established, DPDK
vhost-user will start receiving and processing vhost messages from QEMU.

For messages with a file descriptor, the file descriptor can be used directly
in the vhost process as it is already installed by the Unix domain socket.

The supported vhost messages are:
* ``VHOST_SET_MEM_TABLE``
* ``VHOST_SET_VRING_KICK``
* ``VHOST_SET_VRING_CALL``
* ``VHOST_SET_LOG_FD``
* ``VHOST_SET_VRING_ERR``
For the ``VHOST_SET_MEM_TABLE`` message, QEMU will send information for each
memory region and its file descriptor in the ancillary data of the message.
The file descriptor is used to map that region.

``VHOST_SET_VRING_KICK`` is used as the signal to put the vhost device into
the data plane, and ``VHOST_GET_VRING_BASE`` is used as the signal to remove
the vhost device from the data plane.

When the socket connection is closed, vhost will destroy the device.
Guest memory requirement
------------------------
* Memory pre-allocation

  For non-zerocopy, guest memory pre-allocation is not a must. This can help
  save memory. If users really want the guest memory to be pre-allocated
  (e.g., for performance reasons), the ``-mem-prealloc`` option can be added
  when starting QEMU. Alternatively, all memory can be locked at the vhost
  side, which will force memory to be allocated when it is mmapped at the
  vhost side; the ``--mlockall`` option in ovs-dpdk is an example in hand.

  For zerocopy, we force the VM memory to be pre-allocated at the vhost lib
  when mapping the guest memory; and we also need to lock the memory to
  prevent pages being swapped out to disk.
* Memory sharing

  Make sure the ``share=on`` QEMU option is given. vhost-user will not work
  with a QEMU version without shared memory mapping.
Vhost supported vSwitch reference
---------------------------------

For more vhost details and how to support vhost in vSwitch, please refer to
the vhost example in the DPDK Sample Applications Guide.
Vhost data path acceleration (vDPA)
-----------------------------------

vDPA supports a selective datapath in the vhost-user lib by enabling virtio
ring compatible devices to serve the virtio driver directly, for datapath
acceleration.

``rte_vhost_driver_attach_vdpa_device`` is used to configure the vhost device
with an accelerated backend.
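
A minimal sketch of attaching an accelerated backend before starting the
driver is shown below; the helper name and the ``did`` (vDPA device id)
parameter convention are assumptions for illustration, so check
``rte_vdpa.h``/``rte_vhost.h`` for the exact prototype:

.. code-block:: c

    #include <rte_vhost.h>

    /* Illustrative helper: bind a registered vhost-user socket to a vDPA
     * device, then trigger the vhost-user negotiation. */
    static int
    attach_vdpa_backend(const char *path, int did)
    {
        if (rte_vhost_driver_attach_vdpa_device(path, did) < 0)
            return -1;

        return rte_vhost_driver_start(path);
    }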
Also, vhost device capabilities are made configurable to adopt various
devices. Such capabilities include supported features, protocol features and
queue number.
Finally, a set of device ops is defined for device specific operations:

* ``get_queue_num``

  Called to get the supported queue number of the device.

* ``get_features``

  Called to get the supported features of the device.

* ``get_protocol_features``

  Called to get the supported protocol features of the device.

* ``dev_conf``

  Called to configure the actual device when the virtio device becomes ready.

* ``dev_close``

  Called to close the actual device when the virtio device is stopped.

* ``set_vring_state``

  Called to change the state of the vring in the actual device when the
  vring state changes.

* ``set_features``

  Called to set the negotiated features to the device.

* ``migration_done``

  Called to allow the device to respond to RARP sending.

* ``get_vfio_group_fd``

  Called to get the VFIO group fd of the device.

* ``get_vfio_device_fd``

  Called to get the VFIO device fd of the device.

* ``get_notify_area``

  Called to get the notify area info of the queue.