..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.

Vhost Library
=============

The vhost library implements a user space virtio net server allowing the user
to manipulate the virtio ring directly. In other words, it allows the user
to fetch/put packets from/to the VM virtio net device. To achieve this, a
vhost library should be able to:

* Access the guest memory:

  For QEMU, this is done by using the ``-object memory-backend-file,share=on,...``
  option, which means QEMU will create a file to serve as the guest RAM.
  The ``share=on`` option allows another process to map that file, which
  means it can access the guest RAM.

* Know all the necessary information about the vring:

  Information such as where the available ring is stored. Vhost defines some
  messages (passed through a Unix domain socket file) to tell the backend all
  the information it needs to know how to manipulate the vring.


Vhost API Overview
------------------

The following is an overview of some key Vhost API functions:

* ``rte_vhost_driver_register(path, flags)``

  This function registers a vhost driver into the system. ``path`` specifies
  the Unix domain socket file path. A usage sketch is shown after the flag
  list below.

  Currently supported flags are:

  - ``RTE_VHOST_USER_CLIENT``

    DPDK vhost-user will act as the client when this flag is given. See below
    for an explanation.

  - ``RTE_VHOST_USER_NO_RECONNECT``

    When DPDK vhost-user acts as the client it will keep trying to reconnect
    to the server (QEMU) until it succeeds. This is useful in two cases:

    * When QEMU is not started yet.
    * When QEMU restarts (for example due to a guest OS reboot).

    This reconnect option is enabled by default. However, it can be turned
    off by setting this flag.

  - ``RTE_VHOST_USER_DEQUEUE_ZERO_COPY``

    Dequeue zero copy will be enabled when this flag is set. It is disabled
    by default.

    There are some caveats (including limitations) to be aware of when
    setting this flag:

    * Zero copy is not good for small packets (typically for packet sizes
      below 512).

    * Zero copy is really good for the VM2VM case. For iperf between two VMs,
      the boost could be above 70% (when TSO is enabled).

    * For zero copy in the VM2NIC case, the guest Tx used vring may be
      starved if the PMD driver consumes the mbufs but does not release them
      in a timely manner.

      For example, the i40e driver has an optimization to maximize NIC
      pipeline utilization which postpones returning transmitted mbufs until
      only ``tx_free_threshold`` free descriptors are left. The virtio Tx
      used ring will be starved if ``num_i40e_tx_desc - num_virtio_tx_desc >
      tx_free_threshold`` holds, since i40e will not return the mbufs.

      A performance tip for tuning zero copy in the VM2NIC case is to adjust
      the frequency of mbuf freeing (i.e. adjust ``tx_free_threshold`` of the
      i40e driver) to balance consumer and producer; a minimal sketch is
      shown after this list.

    * Guest memory should be backed by huge pages to achieve better
      performance. Using 1 GB pages is best.

      When dequeue zero copy is enabled, the mapping from guest physical
      addresses to host physical addresses has to be established. Using
      non-huge pages means far more page segments. To keep it simple, DPDK
      vhost does a linear search of those segments, thus the fewer the
      segments, the quicker the mapping is found. NOTE: this may be sped up
      with a tree search in the future.

    * Zero copy currently cannot work when using vfio-pci with IOMMU mode,
      because the IOMMU DMA mapping is not set up for guest memory. If you
      have to use the vfio-pci driver, please insert the vfio-pci kernel
      module in noiommu mode.

    * The consumer of zero copy mbufs should consume these mbufs as soon as
      possible, otherwise it may block operations in vhost.
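
    Because the producer/consumer balance depends on how often the NIC PMD
    frees transmitted mbufs, one practical knob is the Tx free threshold
    passed at Tx queue setup time in the vhost application. The following is
    a minimal, hedged sketch; the port id, descriptor count and threshold
    value are illustrative assumptions, not recommendations from this guide:

    .. code-block:: c

       #include <rte_ethdev.h>

       /* Configure a NIC Tx queue so the PMD returns transmitted mbufs
        * sooner, giving zero-copy mbufs back to vhost earlier. */
       static int
       setup_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd)
       {
           struct rte_eth_dev_info dev_info;
           struct rte_eth_txconf txconf;

           if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
               return -1;

           txconf = dev_info.default_txconf;
           /* A larger free threshold makes the PMD free transmitted mbufs
            * earlier (while more free descriptors are still left). */
           txconf.tx_free_thresh = nb_txd / 2;

           return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                   rte_eth_dev_socket_id(port_id), &txconf);
       }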

  - ``RTE_VHOST_USER_IOMMU_SUPPORT``

    IOMMU support will be enabled when this flag is set. It is disabled by
    default.

    Enabling this flag makes it possible to use the guest vIOMMU to protect
    vhost from accessing memory the virtio device isn't allowed to, when the
    feature is negotiated and an IOMMU device is declared.

    However, this feature enables vhost-user's reply-ack protocol feature,
    whose implementation is buggy in QEMU v2.7.0-v2.9.0 when doing
    multiqueue. Enabling this flag with these QEMU versions results in QEMU
    being blocked when multiple queue pairs are declared.

  - ``RTE_VHOST_USER_POSTCOPY_SUPPORT``

    Postcopy live-migration support will be enabled when this flag is set.
    It is disabled by default.

    Enabling this flag should only be done when the calling application does
    not pre-fault the guest shared memory, otherwise migration would fail.

  - ``RTE_VHOST_USER_LINEARBUF_SUPPORT``

    Enabling this flag forces the vhost dequeue function to only provide
    linear pktmbufs (no multi-segmented pktmbufs).

    The vhost library by default provides a single pktmbuf for a given
    packet, but if for some reason the data doesn't fit into a single
    pktmbuf (e.g., TSO is enabled), the library will allocate additional
    pktmbufs from the same mempool and chain them together to create a
    multi-segmented pktmbuf.

    However, the vhost application needs to support the multi-segmented
    format. If the vhost application does not support that format and
    requires large buffers to be dequeued, this flag should be enabled to
    force only linear buffers (see ``RTE_VHOST_USER_EXTBUF_SUPPORT``) or
    drop the packet.

    It is disabled by default.

  - ``RTE_VHOST_USER_EXTBUF_SUPPORT``

    Enabling this flag allows the vhost dequeue function to allocate and
    attach an external buffer to a pktmbuf if the pktmbuf doesn't provide
    enough space to store all the data.

    This is useful when the vhost application wants to support large packets
    but doesn't want to increase the default mempool object size nor to
    support multi-segmented mbufs (non-linear). In this case, a fresh buffer
    is allocated using rte_malloc() which gets attached to a pktmbuf using
    rte_pktmbuf_attach_extbuf().

    See ``RTE_VHOST_USER_LINEARBUF_SUPPORT`` as well to disable
    multi-segmented mbufs for applications that don't support chained mbufs.

    It is disabled by default.
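
  The following is a minimal, hedged sketch of registering a vhost-user
  socket; the socket path and the chosen flags are illustrative assumptions,
  not requirements of the library:

  .. code-block:: c

     #include <rte_vhost.h>

     /* Register a vhost-user socket; here DPDK acts as the client and the
      * dequeue path is restricted to linear mbufs, with external buffers
      * attached when a packet does not fit. */
     static int
     register_vhost_socket(const char *path)
     {
         uint64_t flags = RTE_VHOST_USER_CLIENT |
                          RTE_VHOST_USER_LINEARBUF_SUPPORT |
                          RTE_VHOST_USER_EXTBUF_SUPPORT;

         return rte_vhost_driver_register(path, flags);
     }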

* ``rte_vhost_driver_set_features(path, features)``

  This function sets the feature bits the vhost-user driver supports. The
  vhost-user driver could be vhost-user net, yet it could be something else,
  say, vhost-user SCSI.

* ``rte_vhost_driver_callback_register(path, vhost_device_ops)``

  This function registers a set of callbacks, to let DPDK applications take
  the appropriate action when some events happen. The following events are
  supported (a registration sketch follows this list):

  * ``new_device(int vid)``

    This callback is invoked when a virtio device becomes ready. ``vid``
    is the vhost device ID.

  * ``destroy_device(int vid)``

    This callback is invoked when a virtio device is paused or shut down.

  * ``vring_state_changed(int vid, uint16_t queue_id, int enable)``

    This callback is invoked when a specific queue's state is changed, for
    example when it is enabled or disabled.

  * ``features_changed(int vid, uint64_t features)``

    This callback is invoked when the features are changed. For example,
    ``VHOST_F_LOG_ALL`` will be set/cleared at the start/end of live
    migration, respectively.

  * ``new_connection(int vid)``

    This callback is invoked on a new vhost-user socket connection. If DPDK
    acts as the server, the device should not be deleted before the
    ``destroy_connection`` callback is received.

  * ``destroy_connection(int vid)``

    This callback is invoked when the vhost-user socket connection is
    closed. It indicates that the device with id ``vid`` is no longer in use
    and can be safely destroyed.
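
  Below is a minimal, hedged sketch of registering callbacks; the callback
  bodies and the socket path are illustrative assumptions:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_vhost.h>

     static int
     my_new_device(int vid)
     {
         /* The device is ready: start the data path for this vid. */
         printf("vhost device %d is ready\n", vid);
         return 0;
     }

     static void
     my_destroy_device(int vid)
     {
         /* The device is paused or shut down: stop using this vid. */
         printf("vhost device %d is removed\n", vid);
     }

     static const struct vhost_device_ops my_vhost_ops = {
         .new_device = my_new_device,
         .destroy_device = my_destroy_device,
     };

     /* Register the callbacks before rte_vhost_driver_start(). */
     static int
     register_callbacks(const char *path)
     {
         return rte_vhost_driver_callback_register(path, &my_vhost_ops);
     }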

* ``rte_vhost_driver_disable/enable_features(path, features)``

  This function disables/enables some features. For example, it can be used
  to disable mergeable buffers and TSO features, which both are enabled by
  default.

* ``rte_vhost_driver_start(path)``

  This function triggers the vhost-user negotiation. It should be invoked at
  the end of initializing a vhost-user driver. A sketch of the tail of the
  initialization sequence is shown below.
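
  Putting the pieces together, a typical (hedged) tail of the initialization
  sequence is sketched below; the disabled feature bits (mergeable buffers
  and TSO) are only an example, and the virtio feature bit macros are
  assumed to come from ``<linux/virtio_net.h>``:

  .. code-block:: c

     #include <linux/virtio_net.h>
     #include <rte_vhost.h>

     /* Called after rte_vhost_driver_register() and after registering the
      * vhost_device_ops callbacks for this socket. */
     static int
     finish_vhost_init(const char *path)
     {
         /* Example: disable mergeable buffers and TSO. */
         uint64_t disable = (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
                            (1ULL << VIRTIO_NET_F_HOST_TSO4) |
                            (1ULL << VIRTIO_NET_F_HOST_TSO6);

         if (rte_vhost_driver_disable_features(path, disable) < 0)
             return -1;

         /* Trigger the vhost-user negotiation last. */
         return rte_vhost_driver_start(path);
     }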

* ``rte_vhost_enqueue_burst(vid, queue_id, pkts, count)``

  Transmits (enqueues) ``count`` packets from host to guest.

* ``rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count)``

  Receives (dequeues) ``count`` packets from the guest and stores them in
  ``pkts``. A simple data path sketch is shown below.
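
  Below is a minimal, hedged sketch of a polling data path; the queue index
  macros ``VIRTIO_RXQ``/``VIRTIO_TXQ`` (0 and 1) follow the convention used
  by the vhost sample application and are defined here for illustration:

  .. code-block:: c

     #include <rte_mbuf.h>
     #include <rte_mempool.h>
     #include <rte_vhost.h>

     #define VIRTIO_RXQ 0  /* guest Rx queue: the host enqueues here      */
     #define VIRTIO_TXQ 1  /* guest Tx queue: the host dequeues from here */
     #define BURST_SIZE 32

     static void
     poll_vhost_device(int vid, struct rte_mempool *mbuf_pool)
     {
         struct rte_mbuf *pkts[BURST_SIZE];
         uint16_t nb_rx, i;

         /* Fetch packets the guest has transmitted. */
         nb_rx = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, mbuf_pool,
                                         pkts, BURST_SIZE);
         if (nb_rx == 0)
             return;

         /* Echo them back to the guest; a real application would forward
          * them to a NIC or to another VM instead. The enqueue copies the
          * packet data into the guest ring, so the mbufs are freed here. */
         rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, nb_rx);
         for (i = 0; i < nb_rx; i++)
             rte_pktmbuf_free(pkts[i]);
     }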

* ``rte_vhost_crypto_create(vid, cryptodev_id, sess_mempool, socket_id)``

  As an extension of new_device(), this function adds virtio-crypto workload
  acceleration capability to the device. All crypto workloads are processed
  by the DPDK cryptodev with the device ID of ``cryptodev_id``.

* ``rte_vhost_crypto_free(vid)``

  Frees the memory and vhost-user message handlers created in
  rte_vhost_crypto_create().

* ``rte_vhost_crypto_fetch_requests(vid, queue_id, ops, nb_ops)``

  Receives (dequeues) ``nb_ops`` virtio-crypto requests from the guest,
  parses them into DPDK crypto operations, and fills ``ops`` with the
  parsing results.

* ``rte_vhost_crypto_finalize_requests(queue_id, ops, nb_ops)``

  After the ``ops`` are dequeued from the cryptodev, finalizes the jobs and
  notifies the guest(s).

* ``rte_vhost_crypto_set_zero_copy(vid, option)``

  Enables or disables the zero copy feature of the vhost crypto backend.


Vhost-user Implementations
--------------------------

Vhost-user uses Unix domain sockets for passing messages. This means the DPDK
vhost-user implementation has two options:

* DPDK vhost-user acts as the server.

  DPDK will create a Unix domain socket server file and listen for
  connections from the frontend.

  Note, this is the default mode, and the only mode before DPDK v16.07.

* DPDK vhost-user acts as the client.

  Unlike the server mode, this mode doesn't create the socket file;
  it just tries to connect to the server (which is responsible for creating
  the socket file).

  When the DPDK vhost-user application restarts, DPDK vhost-user will try to
  connect to the server again. This is how the "reconnect" feature works.

  .. Note::
     * The "reconnect" feature requires **QEMU v2.7** (or above).

     * The vhost supported features must be exactly the same before and
       after the restart. For example, if TSO is disabled and then enabled,
       nothing will work and undefined issues might happen.

No matter which mode is used, once a connection is established, DPDK
vhost-user will start receiving and processing vhost messages from QEMU.

For messages with a file descriptor, the file descriptor can be used directly
in the vhost process as it is already installed by the Unix domain socket.

The supported vhost messages are:

* ``VHOST_SET_MEM_TABLE``
* ``VHOST_SET_VRING_KICK``
* ``VHOST_SET_VRING_CALL``
* ``VHOST_SET_LOG_FD``
* ``VHOST_SET_VRING_ERR``

For the ``VHOST_SET_MEM_TABLE`` message, QEMU will send information for each
memory region and its file descriptor in the ancillary data of the message.
The file descriptor is used to map that region.

``VHOST_SET_VRING_KICK`` is used as the signal to put the vhost device into
the data plane, and ``VHOST_GET_VRING_BASE`` is used as the signal to remove
the vhost device from the data plane.

When the socket connection is closed, vhost will destroy the device.


Guest memory requirement
------------------------

* Memory pre-allocation

  For non-zerocopy, guest memory pre-allocation is not required, which can
  help save memory. If users really want the guest memory to be
  pre-allocated (e.g., for performance reasons), the ``-mem-prealloc``
  option can be added when starting QEMU. Alternatively, all memory can be
  locked at the vhost side, which forces memory to be allocated when it is
  mmapped at the vhost side; the ``--mlockall`` option in ovs-dpdk is one
  example, and a minimal sketch is shown below.

  For zerocopy, the VM memory is forced to be pre-allocated in the vhost lib
  when mapping the guest memory; the memory also needs to be locked to
  prevent pages being swapped out to disk.
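
  A minimal sketch of the "lock all memory at the vhost side" approach,
  using the standard ``mlockall()`` call (the error handling is
  illustrative):

  .. code-block:: c

     #include <stdio.h>
     #include <sys/mman.h>

     /* Lock current and future mappings so guest memory is faulted in
      * (allocated) as soon as vhost mmaps it, and is never swapped out. */
     static int
     lock_all_memory(void)
     {
         if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
             perror("mlockall");
             return -1;
         }
         return 0;
     }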

* Memory sharing

  Make sure the ``share=on`` QEMU option is given. vhost-user will not work
  with a QEMU version without shared memory mapping.


Vhost supported vSwitch reference
---------------------------------

For more vhost details and how to support vhost in vSwitch, please refer to
the vhost example in the DPDK Sample Applications Guide.


Vhost data path acceleration (vDPA)
-----------------------------------

vDPA supports selective datapath in the vhost-user lib by enabling virtio
ring compatible devices to serve the virtio driver directly for datapath
acceleration.

``rte_vhost_driver_attach_vdpa_device`` is used to configure the vhost device
with an accelerated backend; a minimal sketch is shown below.
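
Below is a hedged sketch of binding a vDPA device to a vhost-user socket
before starting the driver; the socket path is an illustrative assumption,
and the vDPA device id ``did`` is assumed to have been obtained from the
vDPA driver/framework beforehand:

.. code-block:: c

   #include <rte_vhost.h>

   static int
   setup_vdpa_socket(const char *path, int did)
   {
       if (rte_vhost_driver_register(path, 0) < 0)
           return -1;

       /* Bind the accelerated (vDPA) backend to this vhost-user socket. */
       if (rte_vhost_driver_attach_vdpa_device(path, did) < 0)
           return -1;

       return rte_vhost_driver_start(path);
   }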

Also, vhost device capabilities are made configurable to accommodate various
devices. Such capabilities include supported features, protocol features,
and queue number.

Finally, a set of device ops is defined for device specific operations:

* ``get_queue_num``

  Called to get the supported queue number of the device.

* ``get_features``

  Called to get the supported features of the device.

* ``get_protocol_features``

  Called to get the supported protocol features of the device.

* ``dev_conf``

  Called to configure the actual device when the virtio device becomes ready.

* ``dev_close``

  Called to close the actual device when the virtio device is stopped.

* ``set_vring_state``

  Called to change the state of the vring in the actual device when the
  vring state changes.

* ``set_features``

  Called to set the negotiated features to the device.

* ``migration_done``

  Called to allow the device to respond to RARP sending.

* ``get_vfio_group_fd``

  Called to get the VFIO group fd of the device.

* ``get_vfio_device_fd``

  Called to get the VFIO device fd of the device.

* ``get_notify_area``

  Called to get the notify area info of the queue.
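
A vDPA driver implements these ops and registers them with the vhost library.
The sketch below is illustrative only: the op names come from the list above,
but the exact prototypes, the ``rte_vdpa_dev_ops`` structure layout and the
registration call vary between DPDK releases, so treat them as assumptions:

.. code-block:: c

   #include <rte_vdpa.h>

   /* Assumed prototypes; adjust to the rte_vdpa.h of the release in use. */
   static int
   my_get_queue_num(int did, uint32_t *queue_num)
   {
       (void)did;
       *queue_num = 1; /* this device supports one queue pair */
       return 0;
   }

   static int
   my_dev_conf(int vid)
   {
       /* Program the hardware data path for this vhost device. */
       (void)vid;
       return 0;
   }

   static struct rte_vdpa_dev_ops my_vdpa_ops = {
       .get_queue_num = my_get_queue_num,
       .dev_conf = my_dev_conf,
       /* get_features, get_protocol_features, dev_close, set_vring_state,
        * set_features, migration_done and the VFIO/notify ops would be
        * filled in the same way. */
   };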