diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index 71bca0b28f..98d15dd032 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -3,10 +3,16 @@
 .. include:: <isonum.txt>

-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
+
+.. warning::
+   As of DPDK 21.11 the rawdev implementation of the IOAT driver has been deprecated.
+   Please use the dmadev library instead.

 The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
 QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
 `(Intel I/OAT)
 <https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.

@@ -17,71 +23,121 @@
 be done by software, freeing up CPU cycles for other tasks.

 Hardware Requirements
 ----------------------

-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based, devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will often be listed as "Crystal Beach DMA",
+or "CBDMA".
+Intel\ |reg| DSA devices currently (at the time of writing) appear as devices with type "0b25",
+due to the absence of pci-id database entries for them at this point.

-.. code-block:: console
+Compilation
+------------

-   # lspci | grep DMA
-   00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
-   00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
-   00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
-   00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
-   00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
-   00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
-   00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
-   00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.

-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
+.. note::
+   Since the addition of the dmadev library, the ``ioat`` and ``idxd`` parts of this driver
+   will only be built if their ``dmadev`` counterparts are not built.
+   The following can be used to disable the ``dmadev`` drivers,
+   if the raw drivers are to be used instead::

-.. code-block:: console
+       $ meson -Ddisable_drivers=dma/*

-   # lspci | grep DMA
-   00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-   00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+Device Setup
+-------------

+Depending on support provided by the PMD, HW devices can either use the kernel-configured driver
+or be bound to a user-space IO driver for use.
+For example, Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
+such as ``vfio-pci``.

-Compilation
-------------
+Intel\ |reg| DSA devices using idxd kernel driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
+To use an Intel\ |reg| DSA device bound to the IDXD kernel driver, the device must first be configured.
+The `accel-config <https://github.com/intel/idxd-config>`_ utility library can be used for configuration.

-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+.. note::
+   The device configuration can also be done by directly interacting with the sysfs nodes.
+   An example of how this may be done can be seen in the script ``dpdk_idxd_cfg.py``
+   included in the driver source directory.

-Device Setup
--------------
+There are some mandatory configuration steps before being able to use a device with an application.
+The internal engines, which do the copies or other operations,
+and the work-queues, which are used by applications to assign work to the device,
+need to be assigned to groups, and the various other configuration options,
+such as priority or queue depth, need to be set for each queue.
+
+To assign an engine to a group::
+
+       $ accel-config config-engine dsa0/engine0.0 --group-id=0
+       $ accel-config config-engine dsa0/engine0.1 --group-id=1
+
+To assign work queues to groups for passing descriptors to the engines, a similar accel-config command can be used.
+However, the work queues also need to be configured depending on the use case.
+Some configuration options include:
+
+* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.
+* priority: WQ priority between 1 and 15. Larger value means higher priority.
+* wq-size: the size of the WQ. The sum of all WQ sizes must be less than the total-size defined by the device.
+* type: WQ type (kernel/mdev/user). Determines how the device is presented.
+* name: identifier given to the WQ.

-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+Example configuration for a work queue::
+
+       $ accel-config config-wq dsa0/wq0.0 --group-id=0 \
+          --mode=dedicated --priority=10 --wq-size=8 \
+          --type=user --name=dpdk_app1
+
+Once the devices have been configured, they need to be enabled::
+
+       $ accel-config enable-device dsa0
+       $ accel-config enable-wq dsa0/wq0.0
+
+Check the device configuration::
+
+       $ accel-config list
+
+Devices using VFIO/UIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver for use.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+       $ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1

 Device Probing and Initialization
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
-
-Once probed successfully, the device will appear as a ``rawdev``, that is a
-"raw device type" inside DPDK, and can be accessed using APIs from the
+For devices bound to a suitable DPDK-supported VFIO/UIO driver, the HW devices will
+be found as part of the device scan done at application initialization time without
+the need to pass parameters to the application.
+
+For Intel\ |reg| DSA devices, DPDK will automatically configure the device with the
+maximum number of workqueues available on it, partitioning all resources equally
+among the queues.
+If fewer workqueues are required, then the ``max_queues`` parameter may be passed to
+the device driver on the EAL command line, via the ``allowlist`` or ``-a`` flag, e.g.::
+
+       $ dpdk-test -a <b:d:f>,max_queues=4
+
+For devices bound to the IDXD kernel driver,
+the DPDK ioat driver will automatically perform a scan for available workqueues to use.
+Any workqueues found listed in ``/dev/dsa`` on the system will be checked in ``/sys``,
+and any which have the ``dpdk_`` prefix in their name will be automatically probed by the
+driver to make them available to the application.
+Alternatively, to support use by multiple DPDK processes simultaneously,
+the value used as the DPDK ``--file-prefix`` parameter may be used as a workqueue name prefix,
+instead of ``dpdk_``,
+allowing each DPDK application instance to only use a subset of configured queues.
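+
+As an illustration of that naming scheme, a workqueue reserved for an application
+run with ``--file-prefix=app1`` could be set up as shown below.
+The queue, group and name values here are hypothetical, not mandated by the driver::
+
+       $ accel-config config-wq dsa0/wq0.1 --group-id=1 \
+          --mode=dedicated --priority=10 --wq-size=8 \
+          --type=user --name=app1
+       $ accel-config enable-wq dsa0/wq0.1
+
+       $ dpdk-test --file-prefix=app1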
+
+Once probed successfully, irrespective of kernel driver, the device will appear as a ``rawdev``,
+that is a "raw device type" inside DPDK, and can be accessed using APIs from the
 ``rte_rawdev`` library.
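+
+As a minimal sketch of locating such a device from code, the snippet below assumes,
+as the DPDK example applications do, that the reported rawdev driver name for these
+devices contains the substring "ioat"; the exact name may vary between DPDK versions.
+
+.. code-block:: C
+
+   #include <string.h>
+
+   #include <rte_rawdev.h>
+
+   /* Return the dev_id of the first rawdev whose driver name
+    * mentions "ioat", or -1 if no such device is found. */
+   static int
+   find_ioat_rawdev(void)
+   {
+           struct rte_rawdev_info info = { .dev_private = NULL };
+           uint16_t i;
+
+           for (i = 0; i < rte_rawdev_count(); i++) {
+                   /* dev_private == NULL requests generic info only */
+                   if (rte_rawdev_info_get(i, &info, 0) < 0)
+                           continue;
+                   if (strstr(info.driver_name, "ioat") != NULL)
+                           return i;
+           }
+           return -1;
+   }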

 Using IOAT Rawdev Devices
 --------------------------

@@ -252,6 +308,16 @@
 is correct before freeing the data buffers using the returned handles:

    }


+Filling an Area of Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The IOAT driver also has support for the ``fill`` operation, where an area
+of memory is overwritten, or filled, with a short pattern of data.
+Fill operations can be performed in much the same way as copy operations
+described above, just using the ``rte_ioat_enqueue_fill()`` function rather
+than the ``rte_ioat_enqueue_copy()`` function.
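+
+For example, the following minimal sketch assumes ``dev_id`` identifies an IOAT
+rawdev that has already been configured and started as described in the sections
+above, and uses the inline function signatures from ``rte_ioat_rawdev.h`` at the
+time of writing:
+
+.. code-block:: C
+
+   #include <rte_ioat_rawdev.h>
+   #include <rte_malloc.h>
+
+   /* Fill a newly allocated 4K buffer with a repeating 64-bit pattern.
+    * The (uintptr_t)buf argument is the opaque handle that will be handed
+    * back by rte_ioat_completed_ops(). NULL check omitted for brevity. */
+   char *buf = rte_malloc(NULL, 4096, 0);
+
+   if (rte_ioat_enqueue_fill(dev_id, 0xA5A5A5A5A5A5A5A5ULL,
+                   rte_malloc_virt2iova(buf), 4096, (uintptr_t)buf) != 1) {
+           /* error handling, e.g. if the descriptor ring is full */
+   }
+   rte_ioat_perform_ops(dev_id); /* submit the enqueued operation to HW */
+
+Completions for fill operations are then gathered using ``rte_ioat_completed_ops()``,
+exactly as for the copy operations above.
+
+
 Querying Device Statistics
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~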