.. include:: <isonum.txt>
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
+IOAT Rawdev Driver
+===================
The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+Data Streaming Accelerator `(Intel DSA)
+<https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_ and for Intel\ |reg|
QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
`(Intel I/OAT)
<https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
Hardware Requirements
----------------------
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
- # lspci | grep DMA
- 00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
- 00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
- 00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
- 00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
- 00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
- 00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
- 00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
- 00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
- # lspci | grep DMA
- 00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
+The ``dpdk-devbind.py`` script, included with DPDK,
+can be used to show the presence of supported hardware.
+Running ``dpdk-devbind.py --status-dev misc`` will show all the miscellaneous,
+or rawdev-based, devices on the system.
+For Intel\ |reg| QuickData Technology devices, the hardware will often be listed as "Crystal Beach DMA",
+or "CBDMA".
+Intel\ |reg| DSA devices currently (at time of writing) appear as devices of type "0b25",
+since pci-id database entries for them are not yet present.
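+As an illustration, output on a system with CBDMA hardware might look
+something like the below; the exact device names, ids and driver bindings
+will vary from system to system:
+
+.. code-block:: console
+
+   $ dpdk-devbind.py --status-dev misc
+
+   Misc (rawdev) devices using kernel driver
+   =========================================
+   0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+   0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci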
Compilation
------------
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
+For builds using ``meson`` and ``ninja``, the driver will be built when the target platform is x86-based.
+No additional compilation steps are necessary.
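+For reference, the driver is built as part of a standard DPDK build on x86,
+e.g.:
+
+.. code-block:: console
+
+   $ meson build
+   $ ninja -C build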
Device Setup
-------------
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported kernel driver, such as ``vfio-pci``.
+For example::
+
+ $ dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1
Device Probing and Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_bus_pci.h>
+
+#include "ioat_private.h"
+
+#define IDXD_VENDOR_ID 0x8086
+#define IDXD_DEVICE_ID_SPR 0x0B25
+
+#define IDXD_PMD_RAWDEV_NAME_PCI rawdev_idxd_pci
+
+const struct rte_pci_id pci_id_idxd_map[] = {
+ { RTE_PCI_DEVICE(IDXD_VENDOR_ID, IDXD_DEVICE_ID_SPR) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idxd_rawdev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+{
+ int ret = 0;
+ char name[PCI_PRI_STR_SIZE];
+
+ rte_pci_device_name(&dev->addr, name, sizeof(name));
+ IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
+ dev->device.driver = &drv->driver;
+
+ return ret;
+}
+
+static int
+idxd_rawdev_remove_pci(struct rte_pci_device *dev)
+{
+ char name[PCI_PRI_STR_SIZE];
+ int ret = 0;
+
+ rte_pci_device_name(&dev->addr, name, sizeof(name));
+
+ IOAT_PMD_INFO("Closing %s on NUMA node %d",
+ name, dev->device.numa_node);
+
+ return ret;
+}
+
+struct rte_pci_driver idxd_pmd_drv_pci = {
+ .id_table = pci_id_idxd_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = idxd_rawdev_probe_pci,
+ .remove = idxd_rawdev_remove_pci,
+};
+
+RTE_PMD_REGISTER_PCI(IDXD_PMD_RAWDEV_NAME_PCI, idxd_pmd_drv_pci);
+RTE_PMD_REGISTER_PCI_TABLE(IDXD_PMD_RAWDEV_NAME_PCI, pci_id_idxd_map);
+RTE_PMD_REGISTER_KMOD_DEP(IDXD_PMD_RAWDEV_NAME_PCI,
+ "* igb_uio | uio_pci_generic | vfio-pci");
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IOAT_PRIVATE_H_
+#define _IOAT_PRIVATE_H_
+
+/**
+ * @file ioat_private.h
+ *
+ * Private data structures for the idxd/DSA part of the ioat device driver
+ *
+ * @warning
+ * @b EXPERIMENTAL: these structures and APIs may change without prior notice
+ */
+
+extern int ioat_pmd_logtype;
+
+#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+ ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
+
+#define IOAT_PMD_DEBUG(fmt, args...) IOAT_PMD_LOG(DEBUG, fmt, ## args)
+#define IOAT_PMD_INFO(fmt, args...) IOAT_PMD_LOG(INFO, fmt, ## args)
+#define IOAT_PMD_ERR(fmt, args...) IOAT_PMD_LOG(ERR, fmt, ## args)
+#define IOAT_PMD_WARN(fmt, args...) IOAT_PMD_LOG(WARNING, fmt, ## args)
+
+#endif /* _IOAT_PRIVATE_H_ */
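The ``args...``/``##args`` construct in these logging macros is the GNU C named-variadic-macro extension: the ``##`` deletes the trailing comma when the macro is invoked with no variadic arguments. A minimal standalone sketch of the same pattern follows; it is illustrative only, substituting ``snprintf`` into a buffer for ``rte_log``, and the ``DEMO_*`` names are not part of the driver:

```c
#include <stdio.h>
#include <string.h>

/* Buffer standing in for the rte_log() output stream. */
static char log_buf[256];

/* Same shape as IOAT_PMD_LOG(): prefix the message with the log level
 * (stringized via #level) and the calling function's name.
 * "##args" swallows the preceding comma when the arg list is empty. */
#define DEMO_LOG(level, fmt, args...) \
	snprintf(log_buf, sizeof(log_buf), #level ": %s(): " fmt, \
			__func__, ##args)

#define DEMO_INFO(fmt, args...) DEMO_LOG(INFO, fmt, ##args)
#define DEMO_ERR(fmt, args...) DEMO_LOG(ERR, fmt, ##args)

/* Exercise the with-arguments case. */
static const char *
demo_with_args(void)
{
	DEMO_INFO("Init %s on NUMA node %d", "0000:00:04.0", 0);
	return log_buf;
}

/* Exercise the no-arguments case, where ## removes the comma. */
static const char *
demo_no_args(void)
{
	DEMO_ERR("probe failed");
	return log_buf;
}
```

Without the ``##``, a call such as ``DEMO_ERR("probe failed")`` would expand with a dangling comma after ``__func__`` and fail to compile.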
#include "rte_ioat_rawdev.h"
#include "ioat_spec.h"
+#include "ioat_private.h"
static struct rte_pci_driver ioat_pmd_drv;
RTE_LOG_REGISTER(ioat_pmd_logtype, rawdev.ioat, INFO);
-#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
- ioat_pmd_logtype, "%s(): " fmt "\n", __func__, ##args)
-
-#define IOAT_PMD_DEBUG(fmt, args...) IOAT_PMD_LOG(DEBUG, fmt, ## args)
-#define IOAT_PMD_INFO(fmt, args...) IOAT_PMD_LOG(INFO, fmt, ## args)
-#define IOAT_PMD_ERR(fmt, args...) IOAT_PMD_LOG(ERR, fmt, ## args)
-#define IOAT_PMD_WARN(fmt, args...) IOAT_PMD_LOG(WARNING, fmt, ## args)
-
#define DESC_SZ sizeof(struct rte_ioat_generic_hw_desc)
#define COMPLETION_SZ sizeof(__m128i)
build = dpdk_conf.has('RTE_ARCH_X86')
reason = 'only supported on x86'
-sources = files('ioat_rawdev.c',
- 'ioat_rawdev_test.c')
+sources = files(
+ 'idxd_pci.c',
+ 'ioat_rawdev.c',
+ 'ioat_rawdev_test.c')
deps += ['rawdev', 'bus_pci', 'mbuf']
install_headers('rte_ioat_rawdev.h',