It is disabled by default.
+ - ``RTE_VHOST_USER_ASYNC_COPY``
+
+ Asynchronous data path will be enabled when this flag is set. The async
+ data path allows applications to register async copy devices (typically
+ hardware DMA channels) to vhost queues. Vhost leverages the registered
+ copy devices to offload memory copy operations from the CPU. A set of
+ async data path APIs is defined for DPDK applications to make use of the
+ async capability. Only packets enqueued/dequeued by the async APIs are
+ processed through the async data path.
+
+ Currently this feature is only implemented on the split ring enqueue
+ data path.
+
+ It is disabled by default. A minimal registration sketch is shown below.
+
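+ As a minimal sketch, the flag is passed to ``rte_vhost_driver_register()``
+ together with any other socket flags the application needs (the wrapper
+ function below is illustrative only):
+
+ .. code-block:: c
+
+     #include <rte_vhost.h>
+
+     static int
+     register_async_socket(const char *path)
+     {
+         /*
+          * RTE_VHOST_USER_ASYNC_COPY enables the async data path for this
+          * vhost-user socket; copy channels still have to be registered
+          * per queue with rte_vhost_async_channel_register().
+          */
+         return rte_vhost_driver_register(path, RTE_VHOST_USER_ASYNC_COPY);
+     }
+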
* ``rte_vhost_driver_set_features(path, features)``
This function sets the feature bits the vhost-user driver supports. The
Enable or disable zero copy feature of the vhost crypto backend.
+* ``rte_vhost_async_channel_register(vid, queue_id, features, ops)``
+
+ Register an async copy device channel for a vhost queue. The following
+ device ``features`` must be specified together with the registration:
+
+ * ``async_inorder``
+
+ The async copy device can guarantee the ordering of copy completion:
+ copies are completed in the same order as they were submitted.
+
+ Currently, only ``async_inorder`` capable devices are supported by vhost.
+
+ * ``async_threshold``
+
+ The copy length (in bytes) below which CPU copy will be used even if
+ applications call async vhost APIs to enqueue/dequeue data.
+
+ A typical value is 512 to 1024 bytes, depending on the async device
+ capability.
+
+ Applications must provide the following ``ops`` callbacks for the vhost
+ library to work with the async copy devices (see the sketch after this
+ list):
+
+ * ``transfer_data(vid, queue_id, descs, opaque_data, count)``
+
+ Vhost invokes this function to submit copy data to the async devices.
+ For devices that are not ``async_inorder`` capable, ``opaque_data`` can
+ be used to identify the completed packets.
+
+ * ``check_completed_copies(vid, queue_id, opaque_data, max_packets)``
+
+ Vhost invokes this function to retrieve the copies completed by the
+ async devices.
+
+* ``rte_vhost_async_channel_unregister(vid, queue_id)``
+
+ Unregister the async copy device channel from a vhost queue.
+
+* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count)``
+
+ Submit an enqueue request to transmit ``count`` packets from host to
+ guest via the async data path. The enqueue is not guaranteed to have
+ finished when this API call returns.
+
+ Applications must not free the packets submitted for enqueue until the
+ packets are completed.
+
+* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count)``
+
+ Poll the enqueue completion status from the async data path. Completed
+ packets are returned to applications through ``pkts``; see the usage
+ sketch after this list.
+
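+ The sketch below shows how the pieces fit together. It is illustrative
+ only: ``my_dma_transfer_data()`` and ``my_dma_check_completed_copies()``
+ stand for an application's DMA-backed implementation of the ``ops``
+ callbacks, the ``rte_vhost_async_features`` structure and the callback
+ prototypes follow our reading of ``rte_vhost_async.h`` and should be
+ checked against the installed headers, and error handling is reduced to
+ a minimum.
+
+ .. code-block:: c
+
+     #include <rte_mbuf.h>
+     #include <rte_vhost.h>
+     #include <rte_vhost_async.h>
+
+     #define MAX_BURST 32
+
+     /* Application-provided callbacks backed by a DMA engine. */
+     static uint32_t
+     my_dma_transfer_data(int vid, uint16_t queue_id,
+                          struct rte_vhost_async_desc *descs,
+                          struct rte_vhost_async_status *opaque_data,
+                          uint16_t count)
+     {
+         /* Submit descs[0..count-1] (src/dst copy descriptors) to the DMA
+          * engine here and return the number of descriptors accepted. */
+         return count;
+     }
+
+     static uint32_t
+     my_dma_check_completed_copies(int vid, uint16_t queue_id,
+                                   struct rte_vhost_async_status *opaque_data,
+                                   uint16_t max_packets)
+     {
+         /* Return the number of packets whose copies have completed,
+          * up to max_packets. */
+         return 0;
+     }
+
+     static struct rte_vhost_async_channel_ops channel_ops = {
+         .transfer_data = my_dma_transfer_data,
+         .check_completed_copies = my_dma_check_completed_copies,
+     };
+
+     /* Bind the async channel to one virtqueue, typically from the
+      * new_device or vring_state_changed callback. */
+     static int
+     setup_async_channel(int vid, uint16_t queue_id)
+     {
+         struct rte_vhost_async_features f = { 0 };
+
+         f.async_inorder = 1;      /* device completes copies in order */
+         f.async_threshold = 256;  /* CPU copy for packets below 256 B */
+
+         return rte_vhost_async_channel_register(vid, queue_id,
+                                                 f.intval, &channel_ops);
+     }
+
+     /* Enqueue a burst to the guest and reclaim completed packets. */
+     static void
+     async_enqueue(int vid, uint16_t queue_id,
+                   struct rte_mbuf **pkts, uint16_t count)
+     {
+         struct rte_mbuf *done[MAX_BURST];
+         uint16_t n_enq, n_done, i;
+
+         /* Submission may return before the copies have finished; the
+          * submitted mbufs must not be freed yet. */
+         n_enq = rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count);
+
+         /* Packets from index n_enq onward were not accepted; the
+          * application still owns them and may retry or drop them. */
+         for (i = n_enq; i < count; i++)
+             rte_pktmbuf_free(pkts[i]);
+
+         /* Collect packets whose copies have completed (possibly from
+          * earlier bursts) and free them. */
+         n_done = rte_vhost_poll_enqueue_completed(vid, queue_id,
+                                                   done, MAX_BURST);
+         for (i = 0; i < n_done; i++)
+             rte_pktmbuf_free(done[i]);
+     }
+
+ When the queue is torn down, the channel is removed again with
+ ``rte_vhost_async_channel_unregister()``.
+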
Vhost-user Implementations
--------------------------
* Memory pre-allocation
- For non-zerocopy, guest memory pre-allocation is not a must. This can help
- save of memory. If users really want the guest memory to be pre-allocated
- (e.g., for performance reason), we can add option ``-mem-prealloc`` when
- starting QEMU. Or, we can lock all memory at vhost side which will force
- memory to be allocated when mmap at vhost side; option --mlockall in
- ovs-dpdk is an example in hand.
+ For the non-zerocopy, non-async data path, guest memory pre-allocation is
+ not a must. This can help save memory. If users really want the guest
+ memory to be pre-allocated (e.g., for performance reasons), we can add the
+ option ``-mem-prealloc`` when starting QEMU. Or, we can lock all memory at
+ the vhost side, which will force the memory to be allocated when it is
+ mmapped at the vhost side; the ``--mlockall`` option in ovs-dpdk is one
+ such example.
- For zerocopy, we force the VM memory to be pre-allocated at vhost lib when
- mapping the guest memory; and also we need to lock the memory to prevent
- pages being swapped out to disk.
+ For the async and zerocopy data paths, we force the VM memory to be
+ pre-allocated at the vhost lib when mapping the guest memory; we also
+ need to lock the memory to prevent pages from being swapped out to disk.
* Memory sharing