~~~~~~~~~~~~~~~~~~~~~~~~
Make sure you have DPDK built inside the guest. Also make sure the
corresponding virtio-net PCI device is bound to a UIO driver, which
could be done by:

.. code-block:: console

   modprobe uio_pci_generic
   dpdk-devbind.py -b uio_pci_generic 0000:00:04.0

**--builtin-net-driver**
A very simple vhost-user net driver which demonstrates how to use the generic
vhost APIs will be used when this option is given. It is disabled by default.
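As an illustration (the core list and socket path below are placeholders),
the builtin driver could be enabled like:

.. code-block:: console

   ./dpdk-vhost -l 0-3 -n 4 -- --socket-file /tmp/sock0 --builtin-net-driver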
**--dma-type**
This parameter specifies the DMA type for the async vhost-user net driver,
which demonstrates how to use the async vhost APIs. It is used in
combination with ``--dmas``.

**--dmas**
This parameter specifies the DMA device assigned to each vhost device. The
async vhost-user net driver is used when ``--dmas`` is set. For example,
``--dmas [txd0@00:04.0,txd1@00:04.1]`` means DMA channel 00:04.0 is used
for vhost device 0 enqueue operations and DMA channel 00:04.1 for vhost
device 1 enqueue operations.

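The ``--dmas`` mapping format above can be sketched with a short parser
(a hypothetical helper for illustration, not part of the sample application;
``txd<N>`` denotes the enqueue path of vhost device ``N``):

.. code-block:: python

   import re

   def parse_dmas(spec):
       """Parse a --dmas argument such as "[txd0@00:04.0,txd1@00:04.1]"
       into a {vhost_device_id: dma_channel} mapping."""
       mapping = {}
       for entry in spec.strip("[]").split(","):
           m = re.fullmatch(r"txd(\d+)@([0-9a-fA-F:.]+)", entry.strip())
           if not m:
               raise ValueError(f"bad --dmas entry: {entry!r}")
           mapping[int(m.group(1))] = m.group(2)
       return mapping

   print(parse_dmas("[txd0@00:04.0,txd1@00:04.1]"))
   # {0: '00:04.0', 1: '00:04.1'}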
Common Issues
-------------

.. code-block:: console

   dpdk-hugepages.py --show
The command above indicates how many hugepages are free to support QEMU's
allocation request.
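If too few pages are free, more can be reserved with DPDK's hugepage helper
script (the page size and pool size below are placeholders):

.. code-block:: console

   dpdk-hugepages.py -p 2M --setup 2G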
* Option "builtin-net-driver" is incompatible with QEMU

  QEMU's vhost net device will fail to start if the protocol feature is not
  negotiated. The DPDK virtio-user PMD can be used as a replacement for QEMU.
* Device start fails when enabling "builtin-net-driver" without memory
  pre-allocation

  The builtin example doesn't support dynamic memory allocation. When the
  vhost backend enables "builtin-net-driver", the "--socket-mem" option
  should be added on the virtio-user PMD side at startup.
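For example (core list, memory size, and socket path are placeholders), the
virtio-user side could be started via testpmd with memory pre-allocated:

.. code-block:: console

   ./dpdk-testpmd -l 0-1 --socket-mem 1024 --no-pci \
        --vdev 'virtio_user0,path=/tmp/sock0' -- -i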