M: Shreyansh Jain <shreyansh.jain@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: lib/librte_rawdev/
-F: drivers/raw/skeleton_rawdev/
+F: drivers/raw/skeleton/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
Intel FPGA
M: Rosen Xu <rosen.xu@intel.com>
M: Tianfei zhang <tianfei.zhang@intel.com>
-F: drivers/raw/ifpga_rawdev/
-F: doc/guides/rawdevs/ifpga_rawdev.rst
+F: drivers/raw/ifpga/
+F: doc/guides/rawdevs/ifpga.rst
IOAT Rawdev
M: Bruce Richardson <bruce.richardson@intel.com>
F: drivers/raw/ioat/
-F: doc/guides/rawdevs/ioat_rawdev.rst
+F: doc/guides/rawdevs/ioat.rst
NXP DPAA2 QDMA
M: Nipun Gupta <nipun.gupta@nxp.com>
--- /dev/null
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+IFPGA Rawdev Driver
+===================
+
+FPGAs are used more and more widely in Cloud and NFV. One primary reason is
+that an FPGA not only provides ASIC-level performance but is also more
+flexible than an ASIC.
+
+An FPGA achieves its flexibility through Partial Reconfiguration (PR) of
+parts of its bitstream. That is, one FPGA device bitstream is divided into
+many partial bitstreams, each of which is defined as an AFU (Accelerated
+Function Unit), and each AFU is a hardware acceleration unit that can be
+dynamically reloaded independently.
+
+By partially reconfiguring AFUs, the resources of one FPGA can be time-shared
+by different users, and FPGA hot upgrade and fault tolerance can be provided
+easily.
+
+The IFPGA Rawdev Driver (**ifpga_rawdev**) is a software rawdev driver that
+uses the Intel FPGA software stack OPAE (Open Programmable Acceleration
+Engine) for FPGA management.
+
+Implementation details
+----------------------
+
+Each instance of the IFPGA Rawdev Driver is probed via an Intel FpgaDev. In
+coordination with the OPAE shared code, the IFPGA Rawdev Driver provides
+common FPGA management ops for FPGA operation; OPAE provides the following
+operations:
+
+- FPGA PR (Partial Reconfiguration) management
+- FPGA AFUs Identifying
+- FPGA Thermal Management
+- FPGA Power Management
+- FPGA Performance reporting
+- FPGA Remote Debug
+
+All configuration parameters are taken by the ``vdev_ifpga_cfg`` driver.
+Besides configuration, the ``vdev_ifpga_cfg`` driver also hot-plugs devices
+into the IFPGA Bus.
+
+All of the AFUs of one FPGA may share the same PCI BDF, and the AFU scan
+depends on the IFPGA Rawdev Driver, so the IFPGA Bus handles AFU device
+scanning and AFU driver probing. Each AFU device driver binds to its AFU
+device by UUID (Universally Unique Identifier).
+
+To avoid unnecessary code duplication and ensure maximum performance,
+handling of AFU devices is left to different PMDs; the overall design is
+summarized by the following block diagram::
+
+ +---------------------------------------------------------------+
+ | Application(s) |
+ +----------------------------.----------------------------------+
+ |
+ |
+ +----------------------------'----------------------------------+
+ | DPDK Framework (APIs) |
+ +----------|------------|--------.---------------------|--------+
+ / \ |
+ / \ |
+ +-------'-------+ +-------'-------+ +--------'--------+
+ | Eth PMD | | Crypto PMD | | |
+ +-------.-------+ +-------.-------+ | |
+ | | | |
+ | | | |
+ +-------'-------+ +-------'-------+ | IFPGA |
+ | Eth AFU Dev | |Crypto AFU Dev | | Rawdev Driver |
+ +-------.-------+ +-------.-------+ |(OPAE Share Code)|
+ | | | |
+ | | Rawdev | |
+ +-------'------------------'-------+ Ops | |
+ | IFPGA Bus | -------->| |
+ +-----------------.----------------+ +--------.--------+
+ | |
+ Hot-plugin -->| |
+ | |
+ +-----------------'------------------+ +--------'--------+
+ | vdev_ifpga_cfg driver | | Intel FpgaDev |
+ +------------------------------------+ +-----------------+
+
+Build options
+-------------
+
+- ``CONFIG_RTE_LIBRTE_IFPGA_BUS`` (default ``y``)
+
+  Toggle compilation of the IFPGA Bus library.
+
+- ``CONFIG_RTE_LIBRTE_IFPGA_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``ifpga_rawdev`` driver.
+
+Run-time parameters
+-------------------
+
+The driver is probed automatically on systems fitted with an Intel FPGA, but
+PR and the IFPGA Bus scan are triggered from the command line using the
+``--vdev 'ifpga_rawdev_cfg'`` EAL option.
+
+The following device parameters are supported:
+
+- ``ifpga`` [string]
+
+ Provide a specific Intel FPGA device PCI BDF. Can be provided multiple
+ times for additional instances.
+
+- ``port`` [int]
+
+  Each FPGA can provide many channels for PR of AFUs by software; each
+  channel is identified by this parameter.
+
+- ``afu_bts`` [string]
+
+  If not set, the AFU bitstream is assumed to have already been PR'd into
+  the FPGA; otherwise this forces a PR and identifies the AFU bitstream file.
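+
+As an illustration, such an invocation might look as follows (the application
+name, PCI BDF, port index and bitstream path below are placeholders):
+
+.. code-block:: console
+
+    ./your_app --vdev 'ifpga_rawdev_cfg0,ifpga=b3:00.0,port=0,afu_bts=/path/to/afu.gbs'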
+++ /dev/null
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2018 Intel Corporation.
-
-IFPGA Rawdev Driver
-======================
-
-FPGA is used more and more widely in Cloud and NFV, one primary reason is
-that FPGA not only provides ASIC performance but also it's more flexible
-than ASIC.
-
-FPGA uses Partial Reconfigure (PR) Parts of Bit Stream to achieve its
-flexibility. That means one FPGA Device Bit Stream is divided into many Parts
-of Bit Stream(each Part of Bit Stream is defined as AFU-Accelerated Function
-Unit), and each AFU is a hardware acceleration unit which can be dynamically
-reloaded respectively.
-
-By PR (Partial Reconfiguration) AFUs, one FPGA resources can be time-shared by
-different users. FPGA hot upgrade and fault tolerance can be provided easily.
-
-The SW IFPGA Rawdev Driver (**ifpga_rawdev**) provides a Rawdev driver
-that utilizes Intel FPGA Software Stack OPAE(Open Programmable Acceleration
-Engine) for FPGA management.
-
-Implementation details
-----------------------
-
-Each instance of IFPGA Rawdev Driver is probed by Intel FpgaDev. In coordination
-with OPAE share code IFPGA Rawdev Driver provides common FPGA management ops
-for FPGA operation, OPAE provides all following operations:
-- FPGA PR (Partial Reconfiguration) management
-- FPGA AFUs Identifying
-- FPGA Thermal Management
-- FPGA Power Management
-- FPGA Performance reporting
-- FPGA Remote Debug
-
-All configuration parameters are taken by vdev_ifpga_cfg driver. Besides
-configuration, vdev_ifpga_cfg driver also hot plugs in IFPGA Bus.
-
-All of the AFUs of one FPGA may share same PCI BDF and AFUs scan depend on
-IFPGA Rawdev Driver so IFPGA Bus takes AFU device scan and AFU drivers probe.
-All AFU device driver bind to AFU device by its UUID (Universally Unique
-Identifier).
-
-To avoid unnecessary code duplication and ensure maximum performance,
-handling of AFU devices is left to different PMDs; all the design as
-summarized by the following block diagram::
-
- +---------------------------------------------------------------+
- | Application(s) |
- +----------------------------.----------------------------------+
- |
- |
- +----------------------------'----------------------------------+
- | DPDK Framework (APIs) |
- +----------|------------|--------.---------------------|--------+
- / \ |
- / \ |
- +-------'-------+ +-------'-------+ +--------'--------+
- | Eth PMD | | Crypto PMD | | |
- +-------.-------+ +-------.-------+ | |
- | | | |
- | | | |
- +-------'-------+ +-------'-------+ | IFPGA |
- | Eth AFU Dev | |Crypto AFU Dev | | Rawdev Driver |
- +-------.-------+ +-------.-------+ |(OPAE Share Code)|
- | | | |
- | | Rawdev | |
- +-------'------------------'-------+ Ops | |
- | IFPGA Bus | -------->| |
- +-----------------.----------------+ +--------.--------+
- | |
- Hot-plugin -->| |
- | |
- +-----------------'------------------+ +--------'--------+
- | vdev_ifpga_cfg driver | | Intel FpgaDev |
- +------------------------------------+ +-----------------+
-
-Build options
--------------
-
-- ``CONFIG_RTE_LIBRTE_IFPGA_BUS`` (default ``y``)
-
- Toggle compilation of IFPGA Bus library.
-
-- ``CONFIG_RTE_LIBRTE_IFPGA_RAWDEV`` (default ``y``)
-
- Toggle compilation of the ``ifpga_rawdev`` driver.
-
-Run-time parameters
--------------------
-
-This driver is invoked automatically in systems added with Intel FPGA,
-but PR and IFPGA Bus scan is triggered by command line using
-``--vdev 'ifpga_rawdev_cfg`` EAL option.
-
-The following device parameters are supported:
-
-- ``ifpga`` [string]
-
- Provide a specific Intel FPGA device PCI BDF. Can be provided multiple
- times for additional instances.
-
-- ``port`` [int]
-
- Each FPGA can provide many channels to PR AFU by software, each channels
- is identified by this parameter.
-
-- ``afu_bts`` [string]
-
- If null, the AFU Bit Stream has been PR in FPGA, if not forces PR and
- identifies AFU Bit Stream file.
dpaa2_cmdif
dpaa2_qdma
- ifpga_rawdev
- ioat_rawdev
+ ifpga
+ ioat
ntb
octeontx2_dma
--- /dev/null
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
+======================================================================
+
+The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
+`(Intel I/OAT)
+<https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
+This PMD, when used on supported hardware, allows data copies (for example,
+cloning packet data) to be accelerated by that hardware rather than having to
+be done by software, freeing up CPU cycles for other tasks.
+
+Hardware Requirements
+----------------------
+
+On Linux, the presence of Intel\ |reg| QuickData Technology hardware can
+be detected by checking the output of the ``lspci`` command, where the
+hardware will often be listed as "Crystal Beach DMA" or "CBDMA". For
+example, on a system with an Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
+``lspci`` shows:
+
+.. code-block:: console
+
+ # lspci | grep DMA
+ 00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
+ 00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
+ 00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
+ 00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
+ 00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
+ 00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
+ 00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
+ 00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
+
+On a system with an Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz,
+``lspci`` shows:
+
+.. code-block:: console
+
+ # lspci | grep DMA
+ 00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+
+
+Compilation
+------------
+
+For builds done with ``make``, the driver compilation is enabled by the
+``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
+enabled by default in builds for x86 platforms, and disabled in other
+configurations.
+
+For builds using ``meson`` and ``ninja``, the driver will be built when the
+target platform is x86-based.
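+
+For example, a default build and compile can be done with:
+
+.. code-block:: console
+
+    meson build
+    ninja -C build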
+
+Device Setup
+-------------
+
+The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
+user-space IO driver for use. The ``dpdk-devbind.py`` script
+included with DPDK can be used to view the state of the devices and to bind
+them to a suitable DPDK-supported kernel driver. When querying the status
+of the devices, they will appear under the category of "Misc (rawdev)
+devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
+used to see the state of those devices alone.
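+
+For example, to view the device status and then bind one of the CBDMA devices
+listed above (a sketch; the BDF and the choice of ``vfio-pci`` as kernel
+driver depend on the system):
+
+.. code-block:: console
+
+    # dpdk-devbind.py --status-dev misc
+    # dpdk-devbind.py -b vfio-pci 00:04.0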
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once bound to a suitable kernel device driver, the HW devices will be found
+as part of the PCI scan done at application initialization time. No vdev
+parameters need to be passed to create or initialize the device.
+
+Once probed successfully, the device will appear as a ``rawdev``, that is a
+"raw device type" inside DPDK, and can be accessed using APIs from the
+``rte_rawdev`` library.
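+
+For instance, the number of rawdevs found during the scan can be checked with
+a single call, which also provides the ``count`` value used in the
+identification loop shown in the next section:
+
+.. code-block:: C
+
+    uint8_t count = rte_rawdev_count();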
+
+Using IOAT Rawdev Devices
+--------------------------
+
+To use the devices from an application, the rawdev API can be used, along
+with definitions taken from the device-specific header file
+``rte_ioat_rawdev.h``. This header is needed to get the definition of
+structure parameters used by some of the rawdev APIs for IOAT rawdev
+devices, as well as providing key functions for using the device for memory
+copies.
+
+Getting Device Information
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Basic information about each rawdev device can be queried using the
+``rte_rawdev_info_get()`` API. For most applications, this API will be
+needed to verify that the rawdev in question is of the expected type. For
+example, the following code snippet can be used to identify an IOAT
+rawdev device for use by an application:
+
+.. code-block:: C
+
+ for (i = 0; i < count && !found; i++) {
+ struct rte_rawdev_info info = { .dev_private = NULL };
+ found = (rte_rawdev_info_get(i, &info) == 0 &&
+ strcmp(info.driver_name,
+ IOAT_PMD_RAWDEV_NAME_STR) == 0);
+ }
+
+When calling the ``rte_rawdev_info_get()`` API for an IOAT rawdev device,
+the ``dev_private`` field in the ``rte_rawdev_info`` struct should either
+be NULL, or else be set to point to a structure of type
+``rte_ioat_rawdev_config``, in which case the size of the configured device
+input ring will be returned in that structure.
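+
+A minimal sketch of such a query (assuming ``dev_id`` refers to an IOAT
+rawdev that has already been configured):
+
+.. code-block:: C
+
+    struct rte_ioat_rawdev_config cfg = { .ring_size = 0 };
+    struct rte_rawdev_info info = { .dev_private = &cfg };
+
+    if (rte_rawdev_info_get(dev_id, &info) == 0)
+        printf("Configured ring size: %u\n", (unsigned int)cfg.ring_size);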
+
+Device Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Configuring an IOAT rawdev device is done using the
+``rte_rawdev_configure()`` API, which takes the same structure parameters
+as the previously referenced ``rte_rawdev_info_get()`` API. The main
+difference is that, because the parameter is used as input rather than
+output, the ``dev_private`` structure element cannot be NULL, and must
+point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
+size to be used by the device. The ring size must be a power of two,
+between 64 and 4096.
+
+The following code shows how the device is configured in
+``test_ioat_rawdev.c``:
+
+.. code-block:: C
+
+ #define IOAT_TEST_RINGSIZE 512
+ struct rte_ioat_rawdev_config p = { .ring_size = -1 };
+ struct rte_rawdev_info info = { .dev_private = &p };
+
+ /* ... */
+
+ p.ring_size = IOAT_TEST_RINGSIZE;
+ if (rte_rawdev_configure(dev_id, &info) != 0) {
+ printf("Error with rte_rawdev_configure()\n");
+ return -1;
+ }
+
+Once configured, the device can then be made ready for use by calling the
+``rte_rawdev_start()`` API.
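+
+For instance:
+
+.. code-block:: C
+
+    if (rte_rawdev_start(dev_id) != 0) {
+        printf("Error with rte_rawdev_start()\n");
+        return -1;
+    }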
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~~
+
+To perform data copies using IOAT rawdev devices, the functions
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+Once copies have been completed, the completion will be reported back when
+the application calls ``rte_ioat_completed_copies()``.
+
+The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
+device ring for copying at a later point. The parameters to that function
+include the physical addresses of both the source and destination buffers,
+as well as two "handles" to be returned to the user when the copy is
+completed. These handles can be arbitrary values, but two are provided so
+that the library can track handles for both source and destination on
+behalf of the user, e.g. virtual addresses for the buffers, or mbuf
+pointers if packet data is being copied.
+
+While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
+the device ring, the copy will not actually be performed until after the
+application calls the ``rte_ioat_do_copies()`` function. This function
+informs the device hardware of the elements enqueued on the ring, and the
+device will begin to process them. It is expected that, for efficiency
+reasons, a burst of operations will be enqueued to the device via multiple
+enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+
+The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
+a burst of copies to the device and start the hardware processing of them:
+
+.. code-block:: C
+
+ struct rte_mbuf *srcs[32], *dsts[32];
+ unsigned int j;
+
+ for (i = 0; i < RTE_DIM(srcs); i++) {
+ char *src_data;
+
+ srcs[i] = rte_pktmbuf_alloc(pool);
+ dsts[i] = rte_pktmbuf_alloc(pool);
+ srcs[i]->data_len = srcs[i]->pkt_len = length;
+ dsts[i]->data_len = dsts[i]->pkt_len = length;
+ src_data = rte_pktmbuf_mtod(srcs[i], char *);
+
+ for (j = 0; j < length; j++)
+ src_data[j] = rand() & 0xFF;
+
+ if (rte_ioat_enqueue_copy(dev_id,
+ srcs[i]->buf_iova + srcs[i]->data_off,
+ dsts[i]->buf_iova + dsts[i]->data_off,
+ length,
+ (uintptr_t)srcs[i],
+ (uintptr_t)dsts[i],
+ 0 /* nofence */) != 1) {
+ printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+ i);
+ return -1;
+ }
+ }
+ rte_ioat_do_copies(dev_id);
+
+To retrieve information about completed copies, the API
+``rte_ioat_completed_copies()`` should be used. This API will return to the
+application a set of completion handles passed in when the relevant copies
+were enqueued.
+
+The following code from ``test_ioat_rawdev.c`` shows the test code
+retrieving information about the completed copies and validating the data
+is correct before freeing the data buffers using the returned handles:
+
+.. code-block:: C
+
+ if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+ (void *)completed_dst) != RTE_DIM(srcs)) {
+ printf("Error with rte_ioat_completed_copies\n");
+ return -1;
+ }
+ for (i = 0; i < RTE_DIM(srcs); i++) {
+ char *src_data, *dst_data;
+
+ if (completed_src[i] != srcs[i]) {
+ printf("Error with source pointer %u\n", i);
+ return -1;
+ }
+ if (completed_dst[i] != dsts[i]) {
+ printf("Error with dest pointer %u\n", i);
+ return -1;
+ }
+
+ src_data = rte_pktmbuf_mtod(srcs[i], char *);
+ dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+ for (j = 0; j < length; j++)
+ if (src_data[j] != dst_data[j]) {
+ printf("Error with copy of packet %u, byte %u\n",
+ i, j);
+ return -1;
+ }
+ rte_pktmbuf_free(srcs[i]);
+ rte_pktmbuf_free(dsts[i]);
+ }
+
+
+Querying Device Statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The statistics from the IOAT rawdev device can be retrieved via the xstats
+functions in the ``rte_rawdev`` library, i.e.
+``rte_rawdev_xstats_names_get()``, ``rte_rawdev_xstats_get()`` and
+``rte_rawdev_xstats_by_name_get()``. The statistics returned for each device
+instance are:
+
+* ``failed_enqueues``
+* ``successful_enqueues``
+* ``copies_started``
+* ``copies_completed``
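+
+For example, a single counter can be read back by name (a minimal sketch):
+
+.. code-block:: C
+
+    unsigned int id;
+    uint64_t copies = rte_rawdev_xstats_by_name_get(dev_id,
+            "copies_completed", &id);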
+++ /dev/null
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Intel Corporation.
-
-.. include:: <isonum.txt>
-
-IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
-======================================================================
-
-The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
-QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
-`(Intel I/OAT)
-<https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
-This PMD, when used on supported hardware, allows data copies, for example,
-cloning packet data, to be accelerated by that hardware rather than having to
-be done by software, freeing up CPU cycles for other tasks.
-
-Hardware Requirements
-----------------------
-
-On Linux, the presence of an Intel\ |reg| QuickData Technology hardware can
-be detected by checking the output of the ``lspci`` command, where the
-hardware will be often listed as "Crystal Beach DMA" or "CBDMA". For
-example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
-lspci shows:
-
-.. code-block:: console
-
- # lspci | grep DMA
- 00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
- 00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
- 00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
- 00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
- 00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
- 00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
- 00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
- 00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
-
-On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
-shows:
-
-.. code-block:: console
-
- # lspci | grep DMA
- 00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
- 00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
-
-
-Compilation
-------------
-
-For builds done with ``make``, the driver compilation is enabled by the
-``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
-enabled by default in builds for x86 platforms, and disabled in other
-configurations.
-
-For builds using ``meson`` and ``ninja``, the driver will be built when the
-target platform is x86-based.
-
-Device Setup
--------------
-
-The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
-user-space IO driver for use. The script ``dpdk-devbind.py`` script
-included with DPDK can be used to view the state of the devices and to bind
-them to a suitable DPDK-supported kernel driver. When querying the status
-of the devices, they will appear under the category of "Misc (rawdev)
-devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
-used to see the state of those devices alone.
-
-Device Probing and Initialization
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Once bound to a suitable kernel device driver, the HW devices will be found
-as part of the PCI scan done at application initialization time. No vdev
-parameters need to be passed to create or initialize the device.
-
-Once probed successfully, the device will appear as a ``rawdev``, that is a
-"raw device type" inside DPDK, and can be accessed using APIs from the
-``rte_rawdev`` library.
-
-Using IOAT Rawdev Devices
---------------------------
-
-To use the devices from an application, the rawdev API can be used, along
-with definitions taken from the device-specific header file
-``rte_ioat_rawdev.h``. This header is needed to get the definition of
-structure parameters used by some of the rawdev APIs for IOAT rawdev
-devices, as well as providing key functions for using the device for memory
-copies.
-
-Getting Device Information
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Basic information about each rawdev device can be queried using the
-``rte_rawdev_info_get()`` API. For most applications, this API will be
-needed to verify that the rawdev in question is of the expected type. For
-example, the following code snippet can be used to identify an IOAT
-rawdev device for use by an application:
-
-.. code-block:: C
-
- for (i = 0; i < count && !found; i++) {
- struct rte_rawdev_info info = { .dev_private = NULL };
- found = (rte_rawdev_info_get(i, &info) == 0 &&
- strcmp(info.driver_name,
- IOAT_PMD_RAWDEV_NAME_STR) == 0);
- }
-
-When calling the ``rte_rawdev_info_get()`` API for an IOAT rawdev device,
-the ``dev_private`` field in the ``rte_rawdev_info`` struct should either
-be NULL, or else be set to point to a structure of type
-``rte_ioat_rawdev_config``, in which case the size of the configured device
-input ring will be returned in that structure.
-
-Device Configuration
-~~~~~~~~~~~~~~~~~~~~~
-
-Configuring an IOAT rawdev device is done using the
-``rte_rawdev_configure()`` API, which takes the same structure parameters
-as the, previously referenced, ``rte_rawdev_info_get()`` API. The main
-difference is that, because the parameter is used as input rather than
-output, the ``dev_private`` structure element cannot be NULL, and must
-point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
-size to be used by the device. The ring size must be a power of two,
-between 64 and 4096.
-
-The following code shows how the device is configured in
-``test_ioat_rawdev.c``:
-
-.. code-block:: C
-
- #define IOAT_TEST_RINGSIZE 512
- struct rte_ioat_rawdev_config p = { .ring_size = -1 };
- struct rte_rawdev_info info = { .dev_private = &p };
-
- /* ... */
-
- p.ring_size = IOAT_TEST_RINGSIZE;
- if (rte_rawdev_configure(dev_id, &info) != 0) {
- printf("Error with rte_rawdev_configure()\n");
- return -1;
- }
-
-Once configured, the device can then be made ready for use by calling the
-``rte_rawdev_start()`` API.
-
-Performing Data Copies
-~~~~~~~~~~~~~~~~~~~~~~~
-
-To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
-Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
-
-The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
-device ring for copying at a later point. The parameters to that function
-include the physical addresses of both the source and destination buffers,
-as well as two "handles" to be returned to the user when the copy is
-completed. These handles can be arbitrary values, but two are provided so
-that the library can track handles for both source and destination on
-behalf of the user, e.g. virtual addresses for the buffers, or mbuf
-pointers if packet data is being copied.
-
-While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
-the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
-informs the device hardware of the elements enqueued on the ring, and the
-device will begin to process them. It is expected that, for efficiency
-reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
-
-The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
-a burst of copies to the device and start the hardware processing of them:
-
-.. code-block:: C
-
- struct rte_mbuf *srcs[32], *dsts[32];
- unsigned int j;
-
- for (i = 0; i < RTE_DIM(srcs); i++) {
- char *src_data;
-
- srcs[i] = rte_pktmbuf_alloc(pool);
- dsts[i] = rte_pktmbuf_alloc(pool);
- srcs[i]->data_len = srcs[i]->pkt_len = length;
- dsts[i]->data_len = dsts[i]->pkt_len = length;
- src_data = rte_pktmbuf_mtod(srcs[i], char *);
-
- for (j = 0; j < length; j++)
- src_data[j] = rand() & 0xFF;
-
- if (rte_ioat_enqueue_copy(dev_id,
- srcs[i]->buf_iova + srcs[i]->data_off,
- dsts[i]->buf_iova + dsts[i]->data_off,
- length,
- (uintptr_t)srcs[i],
- (uintptr_t)dsts[i],
- 0 /* nofence */) != 1) {
- printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
- i);
- return -1;
- }
- }
- rte_ioat_do_copies(dev_id);
-
-To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
-application a set of completion handles passed in when the relevant copies
-were enqueued.
-
-The following code from ``test_ioat_rawdev.c`` shows the test code
-retrieving information about the completed copies and validating the data
-is correct before freeing the data buffers using the returned handles:
-
-.. code-block:: C
-
- if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
- (void *)completed_dst) != RTE_DIM(srcs)) {
- printf("Error with rte_ioat_completed_copies\n");
- return -1;
- }
- for (i = 0; i < RTE_DIM(srcs); i++) {
- char *src_data, *dst_data;
-
- if (completed_src[i] != srcs[i]) {
- printf("Error with source pointer %u\n", i);
- return -1;
- }
- if (completed_dst[i] != dsts[i]) {
- printf("Error with dest pointer %u\n", i);
- return -1;
- }
-
- src_data = rte_pktmbuf_mtod(srcs[i], char *);
- dst_data = rte_pktmbuf_mtod(dsts[i], char *);
- for (j = 0; j < length; j++)
- if (src_data[j] != dst_data[j]) {
- printf("Error with copy of packet %u, byte %u\n",
- i, j);
- return -1;
- }
- rte_pktmbuf_free(srcs[i]);
- rte_pktmbuf_free(dsts[i]);
- }
-
-
-Querying Device Statistics
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The statistics from the IOAT rawdev device can be got via the xstats
-functions in the ``rte_rawdev`` library, i.e.
-``rte_rawdev_xstats_names_get()``, ``rte_rawdev_xstats_get()`` and
-``rte_rawdev_xstats_by_name_get``. The statistics returned for each device
-instance are:
-
-* ``failed_enqueues``
-* ``successful_enqueues``
-* ``copies_started``
-* ``copies_completed``
include $(RTE_SDK)/mk/rte.vars.mk
# DIRS-$(<configuration>) += <directory>
-DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton_rawdev
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton
ifeq ($(CONFIG_RTE_EAL_VFIO)$(CONFIG_RTE_LIBRTE_FSLMC_BUS),yy)
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_CMDIF_RAWDEV) += dpaa2_cmdif
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV) += dpaa2_qdma
endif
-DIRS-$(CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV) += ifpga_rawdev
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV) += ifpga
DIRS-$(CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV) += ioat
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NTB_RAWDEV) += ntb
DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_DMA_RAWDEV) += octeontx2_dma
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ifpga_rawdev.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/bus/ifpga
+CFLAGS += -I$(RTE_SDK)/drivers/raw/ifpga
+CFLAGS += -I$(RTE_SDK)/drivers/net/ipn3ke
+LDLIBS += -lrte_eal
+LDLIBS += -lrte_rawdev
+LDLIBS += -lrte_bus_vdev
+LDLIBS += -lrte_kvargs
+LDLIBS += -lrte_bus_pci
+LDLIBS += -lrte_bus_ifpga
+
+EXPORT_MAP := rte_pmd_ifpga_version.map
+
+LIBABIVER := 1
+
+VPATH += $(SRCDIR)/base
+
+include $(RTE_SDK)/drivers/raw/ifpga/base/Makefile
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV) += ifpga_rawdev.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2018 Intel Corporation
+
+ifneq ($(CONFIG_RTE_LIBRTE_EAL),)
+OSDEP := osdep_rte
+else
+OSDEP := osdep_raw
+endif
+
+CFLAGS += -I$(RTE_SDK)/drivers/raw/ifpga/base/$(OSDEP)
+
+SRCS-y += ifpga_api.c
+SRCS-y += ifpga_enumerate.c
+SRCS-y += ifpga_feature_dev.c
+SRCS-y += ifpga_fme.c
+SRCS-y += ifpga_fme_iperf.c
+SRCS-y += ifpga_fme_dperf.c
+SRCS-y += ifpga_fme_error.c
+SRCS-y += ifpga_port.c
+SRCS-y += ifpga_port_error.c
+SRCS-y += opae_hw_api.c
+SRCS-y += opae_ifpga_hw_api.c
+SRCS-y += opae_debug.c
+SRCS-y += ifpga_fme_pr.c
+SRCS-y += opae_spi.c
+SRCS-y += opae_spi_transaction.c
+SRCS-y += opae_intel_max10.c
+SRCS-y += opae_i2c.c
+SRCS-y += opae_at24_eeprom.c
+SRCS-y += opae_eth_group.c
+
+SRCS-y += $(wildcard $(SRCDIR)/base/$(OSDEP)/*.c)
--- /dev/null
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2010-2018 Intel Corporation
+
+Intel iFPGA driver
+==================
+
+This directory contains the source code of the Intel FPGA driver, released by
+the team which develops the Intel FPGA Open Programmable Acceleration Engine
+(OPAE). The base/ directory contains the original source package. The base
+code currently supports Intel FPGA solutions, including the integrated
+solution (Intel(R) Xeon(R) CPU with FPGAs) and the discrete solution (Intel(R)
+Programmable Acceleration Card with Intel(R) Arria(R) 10 FPGA), and it could
+be extended to support more FPGA devices in the future.
+
+Please refer to [1] and [2] for more information on OPAE and Intel FPGAs.
+
+[1] https://01.org/OPAE
+[2] https://www.altera.com/solutions/acceleration-hub/overview.html
+
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+ osdep_raw/osdep_generic.h
+ osdep_rte/osdep_generic.h
+
+
+New Features
+==================
+
+2019-03:
+Support Intel FPGA PAC N3000 card.
+Some features added in this version:
+1. Store private features in FME and Port list.
+2. Add eth group devices driver.
+3. Add Altera SPI master driver and Intel MAX10 device driver.
+4. Add Altera I2C master driver and AT24 EEPROM driver.
+5. Add Device Tree support to get the configuration from the card.
+6. Introduce and expose APIs to the DPDK PMD driver to access networking
+   functionality.
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_api.h"
+#include "ifpga_enumerate.h"
+#include "ifpga_feature_dev.h"
+
+#include "opae_hw_api.h"
+
+/* Accelerator APIs */
+static int ifpga_acc_get_uuid(struct opae_accelerator *acc,
+ struct uuid *uuid)
+{
+ struct opae_bridge *br = acc->br;
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data)
+ return -EINVAL;
+
+ port = br->data;
+
+ return fpga_get_afu_uuid(port, uuid);
+}
+
+static int ifpga_acc_set_irq(struct opae_accelerator *acc,
+ u32 start, u32 count, s32 evtfds[])
+{
+ struct ifpga_afu_info *afu_info = acc->data;
+ struct opae_bridge *br = acc->br;
+ struct ifpga_port_hw *port;
+ struct fpga_uafu_irq_set irq_set;
+
+ if (!br || !br->data)
+ return -EINVAL;
+
+ if (start >= afu_info->num_irqs || start + count > afu_info->num_irqs)
+ return -EINVAL;
+
+ port = br->data;
+
+ irq_set.start = start;
+ irq_set.count = count;
+ irq_set.evtfds = evtfds;
+
+ return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
+ IFPGA_PORT_FEATURE_ID_UINT, &irq_set);
+}
+
+static int ifpga_acc_get_info(struct opae_accelerator *acc,
+ struct opae_acc_info *info)
+{
+ struct ifpga_afu_info *afu_info = acc->data;
+
+ if (!afu_info)
+ return -ENODEV;
+
+ info->num_regions = afu_info->num_regions;
+ info->num_irqs = afu_info->num_irqs;
+
+ return 0;
+}
+
+static int ifpga_acc_get_region_info(struct opae_accelerator *acc,
+ struct opae_acc_region_info *info)
+{
+ struct ifpga_afu_info *afu_info = acc->data;
+
+ if (!afu_info)
+ return -EINVAL;
+
+ if (info->index >= afu_info->num_regions)
+ return -EINVAL;
+
+ /* always one RW region only for AFU now */
+ info->flags = ACC_REGION_READ | ACC_REGION_WRITE | ACC_REGION_MMIO;
+ info->len = afu_info->region[info->index].len;
+ info->addr = afu_info->region[info->index].addr;
+ info->phys_addr = afu_info->region[info->index].phys_addr;
+
+ return 0;
+}
+
+static int ifpga_acc_read(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data)
+{
+ struct ifpga_afu_info *afu_info = acc->data;
+ struct opae_reg_region *region;
+
+ if (!afu_info)
+ return -EINVAL;
+
+ if (offset + byte <= offset)
+ return -EINVAL;
+
+ if (region_idx >= afu_info->num_regions)
+ return -EINVAL;
+
+ region = &afu_info->region[region_idx];
+ if (offset + byte > region->len)
+ return -EINVAL;
+
+ switch (byte) {
+ case 8:
+ *(u64 *)data = opae_readq(region->addr + offset);
+ break;
+ case 4:
+ *(u32 *)data = opae_readl(region->addr + offset);
+ break;
+ case 2:
+ *(u16 *)data = opae_readw(region->addr + offset);
+ break;
+ case 1:
+ *(u8 *)data = opae_readb(region->addr + offset);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ifpga_acc_write(struct opae_accelerator *acc,
+ unsigned int region_idx, u64 offset,
+ unsigned int byte, void *data)
+{
+ struct ifpga_afu_info *afu_info = acc->data;
+ struct opae_reg_region *region;
+
+ if (!afu_info)
+ return -EINVAL;
+
+ if (offset + byte <= offset)
+ return -EINVAL;
+
+ if (region_idx >= afu_info->num_regions)
+ return -EINVAL;
+
+ region = &afu_info->region[region_idx];
+ if (offset + byte > region->len)
+ return -EINVAL;
+
+ /* normal mmio case */
+ switch (byte) {
+ case 8:
+ opae_writeq(*(u64 *)data, region->addr + offset);
+ break;
+ case 4:
+ opae_writel(*(u32 *)data, region->addr + offset);
+ break;
+ case 2:
+ opae_writew(*(u16 *)data, region->addr + offset);
+ break;
+ case 1:
+ opae_writeb(*(u8 *)data, region->addr + offset);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+struct opae_accelerator_ops ifpga_acc_ops = {
+ .read = ifpga_acc_read,
+ .write = ifpga_acc_write,
+ .set_irq = ifpga_acc_set_irq,
+ .get_info = ifpga_acc_get_info,
+ .get_region_info = ifpga_acc_get_region_info,
+ .get_uuid = ifpga_acc_get_uuid,
+};
+
+/* Bridge APIs */
+static int ifpga_br_reset(struct opae_bridge *br)
+{
+ struct ifpga_port_hw *port = br->data;
+
+ return fpga_port_reset(port);
+}
+
+struct opae_bridge_ops ifpga_br_ops = {
+ .reset = ifpga_br_reset,
+};
+
+/* Manager APIs */
+static int ifpga_mgr_flash(struct opae_manager *mgr, int id, const char *buf,
+ u32 size, u64 *status)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+ struct ifpga_hw *hw = fme->parent;
+
+ return ifpga_pr(hw, id, buf, size, status);
+}
+
+static int ifpga_mgr_get_eth_group_region_info(struct opae_manager *mgr,
+ struct opae_eth_group_region_info *info)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ if (info->group_id >= MAX_ETH_GROUP_DEVICES)
+ return -EINVAL;
+
+ info->phys_addr = fme->eth_group_region[info->group_id].phys_addr;
+ info->addr = fme->eth_group_region[info->group_id].addr;
+ info->len = fme->eth_group_region[info->group_id].len;
+
+ info->mem_idx = fme->nums_acc_region + info->group_id;
+
+ return 0;
+}
+
+struct opae_manager_ops ifpga_mgr_ops = {
+ .flash = ifpga_mgr_flash,
+ .get_eth_group_region_info = ifpga_mgr_get_eth_group_region_info,
+};
+
+static int ifpga_mgr_read_mac_rom(struct opae_manager *mgr, int offset,
+ void *buf, int size)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_read_mac_rom(fme, offset, buf, size);
+}
+
+static int ifpga_mgr_write_mac_rom(struct opae_manager *mgr, int offset,
+ void *buf, int size)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_write_mac_rom(fme, offset, buf, size);
+}
+
+static int ifpga_mgr_get_eth_group_nums(struct opae_manager *mgr)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_get_eth_group_nums(fme);
+}
+
+static int ifpga_mgr_get_eth_group_info(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_info *info)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_get_eth_group_info(fme, group_id, info);
+}
+
+static int ifpga_mgr_eth_group_reg_read(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_eth_group_read_reg(fme, group_id,
+ type, index, addr, data);
+}
+
+static int ifpga_mgr_eth_group_reg_write(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_eth_group_write_reg(fme, group_id,
+ type, index, addr, data);
+}
+
+static int ifpga_mgr_get_retimer_info(struct opae_manager *mgr,
+ struct opae_retimer_info *info)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_get_retimer_info(fme, info);
+}
+
+static int ifpga_mgr_get_retimer_status(struct opae_manager *mgr,
+ struct opae_retimer_status *status)
+{
+ struct ifpga_fme_hw *fme = mgr->data;
+
+ return fme_mgr_get_retimer_status(fme, status);
+}
+
+/* Network APIs in FME */
+struct opae_manager_networking_ops ifpga_mgr_network_ops = {
+ .read_mac_rom = ifpga_mgr_read_mac_rom,
+ .write_mac_rom = ifpga_mgr_write_mac_rom,
+ .get_eth_group_nums = ifpga_mgr_get_eth_group_nums,
+ .get_eth_group_info = ifpga_mgr_get_eth_group_info,
+ .eth_group_reg_read = ifpga_mgr_eth_group_reg_read,
+ .eth_group_reg_write = ifpga_mgr_eth_group_reg_write,
+ .get_retimer_info = ifpga_mgr_get_retimer_info,
+ .get_retimer_status = ifpga_mgr_get_retimer_status,
+};
+
+/* Adapter APIs */
+static int ifpga_adapter_enumerate(struct opae_adapter *adapter)
+{
+ struct ifpga_hw *hw = malloc(sizeof(*hw));
+
+ if (hw) {
+ opae_memset(hw, 0, sizeof(*hw));
+ hw->pci_data = adapter->data;
+ hw->adapter = adapter;
+ if (ifpga_bus_enumerate(hw))
+ goto error;
+ return ifpga_bus_init(hw);
+ }
+
+error:
+ return -ENOMEM;
+}
+
+struct opae_adapter_ops ifpga_adapter_ops = {
+ .enumerate = ifpga_adapter_enumerate,
+};
+
+/**
+ * ifpga_pr - do the partial reconfiguration for a given port device
+ * @hw: pointer to the HW structure
+ * @port_id: the port device id
+ * @buffer: the buffer of the bitstream
+ * @size: the size of the bitstream
+ * @status: hardware status, including the PR error code if -EIO is returned.
+ *
+ * @return
+ * - 0: Success, partial reconfiguration finished.
+ * - <0: Error code returned in partial reconfiguration.
+ **/
+int ifpga_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
+ u64 *status)
+{
+ if (!is_valid_port_id(hw, port_id))
+ return -ENODEV;
+
+ return do_pr(hw, port_id, buffer, size, status);
+}
+
+int ifpga_get_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ struct feature_prop *prop)
+{
+ if (!hw || !prop)
+ return -EINVAL;
+
+ switch (fiu_id) {
+ case FEATURE_FIU_ID_FME:
+ return fme_get_prop(&hw->fme, prop);
+ case FEATURE_FIU_ID_PORT:
+ if (!is_valid_port_id(hw, port_id))
+ return -ENODEV;
+ return port_get_prop(&hw->port[port_id], prop);
+ }
+
+ return -ENOENT;
+}
+
+int ifpga_set_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ struct feature_prop *prop)
+{
+ if (!hw || !prop)
+ return -EINVAL;
+
+ switch (fiu_id) {
+ case FEATURE_FIU_ID_FME:
+ return fme_set_prop(&hw->fme, prop);
+ case FEATURE_FIU_ID_PORT:
+ if (!is_valid_port_id(hw, port_id))
+ return -ENODEV;
+ return port_set_prop(&hw->port[port_id], prop);
+ }
+
+ return -ENOENT;
+}
+
+int ifpga_set_irq(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ u32 feature_id, void *irq_set)
+{
+ if (!hw || !irq_set)
+ return -EINVAL;
+
+ switch (fiu_id) {
+ case FEATURE_FIU_ID_FME:
+ return fme_set_irq(&hw->fme, feature_id, irq_set);
+ case FEATURE_FIU_ID_PORT:
+ if (!is_valid_port_id(hw, port_id))
+ return -ENODEV;
+ return port_set_irq(&hw->port[port_id], feature_id, irq_set);
+ }
+
+ return -ENOENT;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_API_H_
+#define _IFPGA_API_H_
+
+#include "opae_hw_api.h"
+#include "ifpga_hw.h"
+
+extern struct opae_adapter_ops ifpga_adapter_ops;
+extern struct opae_manager_ops ifpga_mgr_ops;
+extern struct opae_bridge_ops ifpga_br_ops;
+extern struct opae_accelerator_ops ifpga_acc_ops;
+extern struct opae_manager_networking_ops ifpga_mgr_network_ops;
+
+/* common APIs */
+int ifpga_get_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ struct feature_prop *prop);
+int ifpga_set_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ struct feature_prop *prop);
+int ifpga_set_irq(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
+ u32 feature_id, void *irq_set);
+
+/* FME APIs */
+int ifpga_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
+ u64 *status);
+
+#endif /* _IFPGA_API_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_COMPAT_H_
+#define _IFPGA_COMPAT_H_
+
+#include "opae_osdep.h"
+
+#undef container_of
+#define container_of(ptr, type, member) ({ \
+ typeof(((type *)0)->member)(*__mptr) = (ptr); \
+ (type *)((char *)__mptr - offsetof(type, member)); })
+
+#define IFPGA_PAGE_SHIFT 12
+#define IFPGA_PAGE_SIZE (1 << IFPGA_PAGE_SHIFT)
+#define IFPGA_PAGE_MASK (~(IFPGA_PAGE_SIZE - 1))
+#define IFPGA_PAGE_ALIGN(addr) (((addr) + IFPGA_PAGE_SIZE - 1)\
+ & IFPGA_PAGE_MASK)
+#define IFPGA_ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
+
+#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
+#define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), IFPGA_PAGE_SIZE)
+
+#define readl(addr) opae_readl(addr)
+#define readq(addr) opae_readq(addr)
+#define writel(value, addr) opae_writel(value, addr)
+#define writeq(value, addr) opae_writeq(value, addr)
+
+#define malloc(size) opae_malloc(size)
+#define zmalloc(size) opae_zmalloc(size)
+#define free(size) opae_free(size)
+
+/*
+ * Wait for a register's _field to change to the given value (_expect's
+ * _field) by polling with the given interval and timeout (both in us).
+ */
+#define fpga_wait_register_field(_field, _expect, _reg_addr, _timeout, _invl)\
+({ \
+ int wait = 0; \
+ int ret = -ETIMEDOUT; \
+ typeof(_expect) value; \
+ for (; wait <= _timeout; wait += _invl) { \
+ value.csr = readq(_reg_addr); \
+ if (_expect._field == value._field) { \
+ ret = 0; \
+ break; \
+ } \
+ udelay(_invl); \
+ } \
+ ret; \
+})
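+
+/*
+ * Usage sketch (the register type, field and timing values below are
+ * illustrative only):
+ *
+ *	struct feature_port_control ctrl = { .csr = 0 };
+ *	ctrl.port_sftrst_ack = 1;
+ *	ret = fpga_wait_register_field(port_sftrst_ack, ctrl,
+ *				       &port_hdr->control, 1000, 10);
+ */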
+
+#define __maybe_unused __attribute__((__unused__))
+
+#define UNUSED(x) (void)(x)
+
+#endif /* _IFPGA_COMPAT_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_DEFINES_H_
+#define _IFPGA_DEFINES_H_
+
+#include "ifpga_compat.h"
+
+#define MAX_FPGA_PORT_NUM 4
+
+#define FME_FEATURE_HEADER "fme_hdr"
+#define FME_FEATURE_THERMAL_MGMT "fme_thermal"
+#define FME_FEATURE_POWER_MGMT "fme_power"
+#define FME_FEATURE_GLOBAL_IPERF "fme_iperf"
+#define FME_FEATURE_GLOBAL_ERR "fme_error"
+#define FME_FEATURE_PR_MGMT "fme_pr"
+#define FME_FEATURE_EMIF_MGMT "fme_emif"
+#define FME_FEATURE_HSSI_ETH "fme_hssi"
+#define FME_FEATURE_GLOBAL_DPERF "fme_dperf"
+#define FME_FEATURE_QSPI_FLASH "fme_qspi_flash"
+#define FME_FEATURE_MAX10_SPI "fme_max10_spi"
+#define FME_FEATURE_NIOS_SPI "fme_nios_spi"
+#define FME_FEATURE_I2C_MASTER "fme_i2c_master"
+#define FME_FEATURE_ETH_GROUP "fme_eth_group"
+
+#define PORT_FEATURE_HEADER "port_hdr"
+#define PORT_FEATURE_UAFU "port_uafu"
+#define PORT_FEATURE_ERR "port_err"
+#define PORT_FEATURE_UMSG "port_umsg"
+#define PORT_FEATURE_PR "port_pr"
+#define PORT_FEATURE_UINT "port_uint"
+#define PORT_FEATURE_STP "port_stp"
+
+/*
+ * Do not check the revision id, as the id may be dynamic in
+ * some cases, e.g., UAFU.
+ */
+#define SKIP_REVISION_CHECK 0xff
+
+#define FME_HEADER_REVISION 1
+#define FME_THERMAL_MGMT_REVISION 0
+#define FME_POWER_MGMT_REVISION 1
+#define FME_GLOBAL_IPERF_REVISION 1
+#define FME_GLOBAL_ERR_REVISION 1
+#define FME_PR_MGMT_REVISION 2
+#define FME_HSSI_ETH_REVISION 0
+#define FME_GLOBAL_DPERF_REVISION 0
+#define FME_QSPI_REVISION 0
+#define FME_MAX10_SPI 0
+#define FME_I2C_MASTER 0
+
+#define PORT_HEADER_REVISION 0
+/* UAFU's header info depends on the downloaded GBS */
+#define PORT_UAFU_REVISION SKIP_REVISION_CHECK
+#define PORT_ERR_REVISION 1
+#define PORT_UMSG_REVISION 0
+#define PORT_UINT_REVISION 0
+#define PORT_STP_REVISION 1
+
+#define FEATURE_TYPE_AFU 0x1
+#define FEATURE_TYPE_BBB 0x2
+#define FEATURE_TYPE_PRIVATE 0x3
+#define FEATURE_TYPE_FIU 0x4
+
+#define FEATURE_FIU_ID_FME 0x0
+#define FEATURE_FIU_ID_PORT 0x1
+
+/* Reserved 0xfe for Header, 0xff for AFU */
+#define FEATURE_ID_FIU_HEADER 0xfe
+#define FEATURE_ID_AFU 0xff
+
+enum fpga_id_type {
+ FME_ID,
+ PORT_ID,
+ FPGA_ID_MAX,
+};
+
+#define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
+#define FME_FEATURE_ID_THERMAL_MGMT 0x1
+#define FME_FEATURE_ID_POWER_MGMT 0x2
+#define FME_FEATURE_ID_GLOBAL_IPERF 0x3
+#define FME_FEATURE_ID_GLOBAL_ERR 0x4
+#define FME_FEATURE_ID_PR_MGMT 0x5
+#define FME_FEATURE_ID_HSSI_ETH 0x6
+#define FME_FEATURE_ID_GLOBAL_DPERF 0x7
+#define FME_FEATURE_ID_QSPI_FLASH 0x8
+#define FME_FEATURE_ID_EMIF_MGMT 0x9
+#define FME_FEATURE_ID_MAX10_SPI 0xe
+#define FME_FEATURE_ID_NIOS_SPI 0xd
+#define FME_FEATURE_ID_I2C_MASTER 0xf
+#define FME_FEATURE_ID_ETH_GROUP 0x10
+
+#define PORT_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
+#define PORT_FEATURE_ID_ERROR 0x10
+#define PORT_FEATURE_ID_UMSG 0x12
+#define PORT_FEATURE_ID_UINT 0x13
+#define PORT_FEATURE_ID_STP 0x14
+#define PORT_FEATURE_ID_UAFU FEATURE_ID_AFU
+
+/*
+ * All headers and structures must be byte-packed to match the spec.
+ */
+#pragma pack(push, 1)
+
+struct feature_header {
+ union {
+ u64 csr;
+ struct {
+ u16 id:12;
+ u8 revision:4;
+ u32 next_header_offset:24;
+ u8 end_of_list:1;
+ u32 reserved:19;
+ u8 type:4;
+ };
+ };
+};
+
+struct feature_bbb_header {
+ struct uuid guid;
+};
+
+struct feature_afu_header {
+ struct uuid guid;
+ union {
+ u64 csr;
+ struct {
+ u64 next_afu:24;
+ u64 reserved:40;
+ };
+ };
+};
+
+struct feature_fiu_header {
+ struct uuid guid;
+ union {
+ u64 csr;
+ struct {
+ u64 next_afu:24;
+ u64 reserved:40;
+ };
+ };
+};
+
+struct feature_fme_capability {
+ union {
+ u64 csr;
+ struct {
+ u8 fabric_verid; /* Fabric version ID */
+ u8 socket_id:1; /* Socket id */
+ u8 rsvd1:3; /* Reserved */
+			/* pci0 link available yes/no */
+			u8 pci0_link_avile:1;
+			/* pci1 link available yes/no */
+			u8 pci1_link_avile:1;
+			/* Coherent (QPI/UPI) link available yes/no */
+			u8 qpi_link_avile:1;
+ u8 rsvd2:1; /* Reserved */
+ /* IOMMU or VT-d supported yes/no */
+ u8 iommu_support:1;
+ u8 num_ports:3; /* Number of ports */
+ u8 sf_fab_ctl:1; /* Internal validation bit */
+ u8 rsvd3:3; /* Reserved */
+ /*
+ * Address width supported in bits
+ * BXT -0x26 , SKX -0x30
+ */
+ u8 address_width_bits:6;
+ u8 rsvd4:2; /* Reserved */
+ /* Size of cache supported in kb */
+ u16 cache_size:12;
+ u8 cache_assoc:4; /* Cache Associativity */
+ u16 rsvd5:15; /* Reserved */
+ u8 lock_bit:1; /* Lock bit */
+ };
+ };
+};
+
+#define FME_AFU_ACCESS_PF 0
+#define FME_AFU_ACCESS_VF 1
+
+struct feature_fme_port {
+ union {
+ u64 csr;
+ struct {
+ u32 port_offset:24;
+ u8 reserved1;
+ u8 port_bar:3;
+ u32 reserved2:20;
+ u8 afu_access_control:1;
+ u8 reserved3:4;
+ u8 port_implemented:1;
+ u8 reserved4:3;
+ };
+ };
+};
+
+struct feature_fme_fab_status {
+ union {
+ u64 csr;
+ struct {
+ u8 upilink_status:4; /* UPI Link Status */
+ u8 rsvd1:4; /* Reserved */
+ u8 pci0link_status:1; /* pci0 link status */
+ u8 rsvd2:3; /* Reserved */
+ u8 pci1link_status:1; /* pci1 link status */
+ u64 rsvd3:51; /* Reserved */
+ };
+ };
+};
+
+struct feature_fme_genprotrange2_base {
+ union {
+ u64 csr;
+ struct {
+ u16 rsvd1; /* Reserved */
+ /* Base Address of memory range */
+ u8 protected_base_addrss:4;
+ u64 rsvd2:44; /* Reserved */
+ };
+ };
+};
+
+struct feature_fme_genprotrange2_limit {
+ union {
+ u64 csr;
+ struct {
+ u16 rsvd1; /* Reserved */
+ /* Limit Address of memory range */
+ u8 protected_limit_addrss:4;
+ u16 rsvd2:11; /* Reserved */
+ u8 enable:1; /* Enable GENPROTRANGE check */
+ u32 rsvd3; /* Reserved */
+ };
+ };
+};
+
+struct feature_fme_dxe_lock {
+ union {
+ u64 csr;
+ struct {
+ /*
+ * Determines write access to the DXE region CSRs
+ * 1 - CSR region is locked;
+ * 0 - it is open for write access.
+ */
+ u8 dxe_early_lock:1;
+ /*
+ * Determines write access to the HSSI CSR
+ * 1 - CSR region is locked;
+ * 0 - it is open for write access.
+ */
+ u8 dxe_late_lock:1;
+ u64 rsvd:62;
+ };
+ };
+};
+
+#define HSSI_ID_NO_HASSI 0
+#define HSSI_ID_PCIE_RP 1
+#define HSSI_ID_ETHERNET 2
+
+struct feature_fme_bitstream_id {
+ union {
+ u64 csr;
+ struct {
+ u32 gitrepo_hash:32; /* GIT repository hash */
+ /*
+ * HSSI configuration identifier:
+ * 0 - No HSSI
+ * 1 - PCIe-RP
+ * 2 - Ethernet
+ */
+ u8 hssi_id:4;
+ u16 rsvd1:12; /* Reserved */
+ /* Bitstream version patch number */
+ u8 bs_verpatch:4;
+ /* Bitstream version minor number */
+ u8 bs_verminor:4;
+ /* Bitstream version major number */
+ u8 bs_vermajor:4;
+ /* Bitstream version debug number */
+ u8 bs_verdebug:4;
+ };
+ };
+};
+
+struct feature_fme_bitstream_md {
+ union {
+ u64 csr;
+ struct {
+			/* Seed number used for synthesis flow */
+ u8 synth_seed:4;
+ /* Synthesis date(day number - 2 digits) */
+ u8 synth_day:8;
+ /* Synthesis date(month number - 2 digits) */
+ u8 synth_month:8;
+ /* Synthesis date(year number - 2 digits) */
+ u8 synth_year:8;
+ u64 rsvd:36; /* Reserved */
+ };
+ };
+};
+
+struct feature_fme_iommu_ctrl {
+ union {
+ u64 csr;
+ struct {
+ /* Disables IOMMU prefetcher for C0 channel */
+ u8 prefetch_disableC0:1;
+ /* Disables IOMMU prefetcher for C1 channel */
+ u8 prefetch_disableC1:1;
+ /* Disables IOMMU partial cache line writes */
+ u8 prefetch_wrdisable:1;
+ u8 rsvd1:1; /* Reserved */
+ /*
+ * Select counter and read value from register
+ * iommu_stat.dbg_counters
+ * 0 - Number of 4K page translation response
+ * 1 - Number of 2M page translation response
+ * 2 - Number of 1G page translation response
+ */
+ u8 counter_sel:2;
+ u32 rsvd2:26; /* Reserved */
+ /* Connected to IOMMU SIP Capabilities */
+ u32 capecap_defeature;
+ };
+ };
+};
+
+struct feature_fme_iommu_stat {
+ union {
+ u64 csr;
+ struct {
+ /* Translation Enable bit from IOMMU SIP */
+ u8 translation_enable:1;
+ /* Drain request in progress */
+ u8 drain_req_inprog:1;
+ /* Invalidation current state */
+ u8 inv_state:3;
+ /* C0 Response Buffer current state */
+ u8 respbuffer_stateC0:3;
+ /* C1 Response Buffer current state */
+ u8 respbuffer_stateC1:3;
+ /* Last request ID to IOMMU SIP */
+ u8 last_reqID:4;
+ /* Last IOMMU SIP response ID value */
+ u8 last_respID:4;
+ /* Last IOMMU SIP response status value */
+ u8 last_respstatus:3;
+ /* C0 Transaction Buffer is not empty */
+ u8 transbuf_notEmptyC0:1;
+ /* C1 Transaction Buffer is not empty */
+ u8 transbuf_notEmptyC1:1;
+ /* C0 Request FIFO is not empty */
+ u8 reqFIFO_notemptyC0:1;
+ /* C1 Request FIFO is not empty */
+ u8 reqFIFO_notemptyC1:1;
+ /* C0 Response FIFO is not empty */
+ u8 respFIFO_notemptyC0:1;
+ /* C1 Response FIFO is not empty */
+ u8 respFIFO_notemptyC1:1;
+ /* C0 Response FIFO overflow detected */
+ u8 respFIFO_overflowC0:1;
+ /* C1 Response FIFO overflow detected */
+ u8 respFIFO_overflowC1:1;
+ /* C0 Transaction Buffer overflow detected */
+ u8 tranbuf_overflowC0:1;
+ /* C1 Transaction Buffer overflow detected */
+ u8 tranbuf_overflowC1:1;
+ /* Request FIFO overflow detected */
+ u8 reqFIFO_overflow:1;
+ /* IOMMU memory read in progress */
+ u8 memrd_inprog:1;
+ /* IOMMU memory write in progress */
+ u8 memwr_inprog:1;
+ u8 rsvd1:1; /* Reserved */
+ /* Value of counter selected by iommu_ctl.counter_sel */
+ u16 dbg_counters:16;
+ u16 rsvd2:12; /* Reserved */
+ };
+ };
+};
+
+struct feature_fme_pcie0_ctrl {
+ union {
+ u64 csr;
+ struct {
+ u64 vtd_bar_lock:1; /* Lock VT-D BAR register */
+ u64 rsvd1:3;
+ u64 rciep:1; /* Configure PCIE0 as RCiEP */
+ u64 rsvd2:59;
+ };
+ };
+};
+
+struct feature_fme_llpr_smrr_base {
+ union {
+ u64 csr;
+ struct {
+ u64 rsvd1:12;
+ u64 base:20; /* SMRR2 memory range base address */
+ u64 rsvd2:32;
+ };
+ };
+};
+
+struct feature_fme_llpr_smrr_mask {
+ union {
+ u64 csr;
+ struct {
+ u64 rsvd1:11;
+ u64 valid:1; /* LLPR_SMRR rule is valid or not */
+ /*
+ * SMRR memory range mask which determines the range
+ * of region being mapped
+ */
+ u64 phys_mask:20;
+ u64 rsvd2:32;
+ };
+ };
+};
+
+struct feature_fme_llpr_smrr2_base {
+ union {
+ u64 csr;
+ struct {
+ u64 rsvd1:12;
+ u64 base:20; /* SMRR2 memory range base address */
+ u64 rsvd2:32;
+ };
+ };
+};
+
+struct feature_fme_llpr_smrr2_mask {
+ union {
+ u64 csr;
+ struct {
+ u64 rsvd1:11;
+ u64 valid:1; /* LLPR_SMRR2 rule is valid or not */
+ /*
+ * SMRR2 memory range mask which determines the range
+ * of region being mapped
+ */
+ u64 phys_mask:20;
+ u64 rsvd2:32;
+ };
+ };
+};
+
+struct feature_fme_llpr_meseg_base {
+ union {
+ u64 csr;
+ struct {
+ /* A[45:19] of base address memory range */
+ u64 me_base:27;
+ u64 rsvd:37;
+ };
+ };
+};
+
+struct feature_fme_llpr_meseg_limit {
+ union {
+ u64 csr;
+ struct {
+ /* A[45:19] of limit address memory range */
+ u64 me_limit:27;
+ u64 rsvd1:4;
+ u64 enable:1; /* Enable LLPR MESEG rule */
+ u64 rsvd2:32;
+ };
+ };
+};
+
+struct feature_fme_header {
+ struct feature_header header;
+ struct feature_afu_header afu_header;
+ u64 reserved;
+ u64 scratchpad;
+ struct feature_fme_capability capability;
+ struct feature_fme_port port[MAX_FPGA_PORT_NUM];
+ struct feature_fme_fab_status fab_status;
+ struct feature_fme_bitstream_id bitstream_id;
+ struct feature_fme_bitstream_md bitstream_md;
+ struct feature_fme_genprotrange2_base genprotrange2_base;
+ struct feature_fme_genprotrange2_limit genprotrange2_limit;
+ struct feature_fme_dxe_lock dxe_lock;
+ struct feature_fme_iommu_ctrl iommu_ctrl;
+ struct feature_fme_iommu_stat iommu_stat;
+ struct feature_fme_pcie0_ctrl pcie0_control;
+ struct feature_fme_llpr_smrr_base smrr_base;
+ struct feature_fme_llpr_smrr_mask smrr_mask;
+ struct feature_fme_llpr_smrr2_base smrr2_base;
+ struct feature_fme_llpr_smrr2_mask smrr2_mask;
+ struct feature_fme_llpr_meseg_base meseg_base;
+ struct feature_fme_llpr_meseg_limit meseg_limit;
+};
+
+struct feature_port_capability {
+ union {
+ u64 csr;
+ struct {
+ u8 port_number:2; /* Port Number 0-3 */
+ u8 rsvd1:6; /* Reserved */
+ u16 mmio_size; /* User MMIO size in KB */
+ u8 rsvd2; /* Reserved */
+ u8 sp_intr_num:4; /* Supported interrupts num */
+ u32 rsvd3:28; /* Reserved */
+ };
+ };
+};
+
+struct feature_port_control {
+ union {
+ u64 csr;
+ struct {
+ u8 port_sftrst:1; /* Port Soft Reset */
+ u8 rsvd1:1; /* Reserved */
+ u8 latency_tolerance:1;/* '1' >= 40us, '0' < 40us */
+ u8 rsvd2:1; /* Reserved */
+ u8 port_sftrst_ack:1; /* HW ACK for Soft Reset */
+ u64 rsvd3:59; /* Reserved */
+ };
+ };
+};
+
+#define PORT_POWER_STATE_NORMAL 0
+#define PORT_POWER_STATE_AP1 1
+#define PORT_POWER_STATE_AP2 2
+#define PORT_POWER_STATE_AP6 6
+
+struct feature_port_status {
+ union {
+ u64 csr;
+ struct {
+			u8 port_freeze:1; /* '1' - frozen, '0' - normal */
+ u8 rsvd1:7; /* Reserved */
+ u8 power_state:4; /* Power State */
+ u8 ap1_event:1; /* AP1 event was detected */
+ u8 ap2_event:1; /* AP2 event was detected */
+ u64 rsvd2:50; /* Reserved */
+ };
+ };
+};
+
+/* Port Header Register Set */
+struct feature_port_header {
+ struct feature_header header;
+ struct feature_afu_header afu_header;
+ u64 port_mailbox;
+ u64 scratchpad;
+ struct feature_port_capability capability;
+ struct feature_port_control control;
+ struct feature_port_status status;
+ u64 rsvd2;
+ u64 user_clk_freq_cmd0;
+ u64 user_clk_freq_cmd1;
+ u64 user_clk_freq_sts0;
+ u64 user_clk_freq_sts1;
+};
+
+struct feature_fme_tmp_threshold {
+ union {
+ u64 csr;
+ struct {
+			u8 tmp_thshold1:7; /* Temperature Threshold 1 */
+			/* Temperature Threshold 1 enable/disable */
+			u8 tmp_thshold1_enable:1;
+			u8 tmp_thshold2:7; /* Temperature Threshold 2 */
+			/* Temperature Threshold 2 enable/disable */
+			u8 tmp_thshold2_enable:1;
+			u8 pro_hot_setpoint:7; /* Proc Hot set point */
+			u8 rsvd4:1; /* Reserved */
+			u8 therm_trip_thshold:7; /* Thermal Trip Threshold */
+			u8 rsvd3:1; /* Reserved */
+			u8 thshold1_status:1; /* Threshold 1 Status */
+			u8 thshold2_status:1; /* Threshold 2 Status */
+			u8 rsvd5:1; /* Reserved */
+			/* Thermal Trip Threshold status */
+			u8 therm_trip_thshold_status:1;
+ u8 rsvd6:4; /* Reserved */
+ /* Validation mode- Force Proc Hot */
+ u8 valmodeforce:1;
+ /* Validation mode - Therm trip Hot */
+ u8 valmodetherm:1;
+ u8 rsvd2:2; /* Reserved */
+ u8 thshold_policy:1; /* threshold policy */
+ u32 rsvd:19; /* Reserved */
+ };
+ };
+};
+
+/* Temperature Sensor Read values format 1 */
+struct feature_fme_temp_rdsensor_fmt1 {
+ union {
+ u64 csr;
+ struct {
+			/* Reads out FPGA temperature in Celsius */
+ u8 fpga_temp:7;
+ u8 rsvd0:1; /* Reserved */
+ /* Temperature reading sequence number */
+ u16 tmp_reading_seq_num;
+ /* Temperature reading is valid */
+ u8 tmp_reading_valid:1;
+ u8 rsvd1:7; /* Reserved */
+ u16 dbg_mode:10; /* Debug mode */
+ u32 rsvd2:22; /* Reserved */
+ };
+ };
+};
+
+/* Temperature sensor read values format 2 */
+struct feature_fme_temp_rdsensor_fmt2 {
+ u64 rsvd; /* Reserved */
+};
+
+/* Temperature Threshold Capability Register */
+struct feature_fme_tmp_threshold_cap {
+ union {
+ u64 csr;
+ struct {
+ /* Temperature Threshold Unsupported */
+ u8 tmp_thshold_disabled:1;
+ u64 rsvd:63; /* Reserved */
+ };
+ };
+};
+
+/* FME THERMAL FEATURE */
+struct feature_fme_thermal {
+ struct feature_header header;
+ struct feature_fme_tmp_threshold threshold;
+ struct feature_fme_temp_rdsensor_fmt1 rdsensor_fm1;
+ struct feature_fme_temp_rdsensor_fmt2 rdsensor_fm2;
+ struct feature_fme_tmp_threshold_cap threshold_cap;
+};
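+
+/*
+ * Usage sketch (illustrative; "thermal" is assumed to point at the mapped
+ * FME thermal feature above): read the format-1 temperature sensor and
+ * check that the reading is valid before consuming it.
+ *
+ *	struct feature_fme_temp_rdsensor_fmt1 sensor;
+ *
+ *	sensor.csr = readq(&thermal->rdsensor_fm1.csr);
+ *	if (sensor.tmp_reading_valid)
+ *		temperature_celsius = sensor.fpga_temp;
+ */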
+
+/* Power Status register */
+struct feature_fme_pm_status {
+ union {
+ u64 csr;
+ struct {
+			/* FPGA Power consumed; the format is to be defined */
+ u32 pwr_consumed:18;
+ /* FPGA Latency Tolerance Reporting */
+ u8 fpga_latency_report:1;
+ u64 rsvd:45; /* Reserved */
+ };
+ };
+};
+
+/* AP Thresholds */
+struct feature_fme_pm_ap_threshold {
+ union {
+ u64 csr;
+ struct {
+ /*
+ * Number of clocks (5ns period) for assertion
+ * of FME_data
+ */
+ u8 threshold1:7;
+ u8 rsvd1:1;
+ u8 threshold2:7;
+ u8 rsvd2:1;
+ u8 threshold1_status:1;
+ u8 threshold2_status:1;
+ u64 rsvd3:46; /* Reserved */
+ };
+ };
+};
+
+/* Xeon Power Limit */
+struct feature_fme_pm_xeon_limit {
+ union {
+ u64 csr;
+ struct {
+ /* Power limit in Watts in 12.3 format */
+ u16 pwr_limit:15;
+ /* Indicates that power limit has been written */
+ u8 enable:1;
+			/* 0 - Turbo range, 1 - Entire range */
+ u8 clamping:1;
+ /* Time constant in XXYYY format */
+ u8 time:7;
+ u64 rsvd:40; /* Reserved */
+ };
+ };
+};
+
+/* FPGA Power Limit */
+struct feature_fme_pm_fpga_limit {
+ union {
+ u64 csr;
+ struct {
+ /* Power limit in Watts in 12.3 format */
+ u16 pwr_limit:15;
+ /* Indicates that power limit has been written */
+ u8 enable:1;
+			/* 0 - Turbo range, 1 - Entire range */
+ u8 clamping:1;
+ /* Time constant in XXYYY format */
+ u8 time:7;
+ u64 rsvd:40; /* Reserved */
+ };
+ };
+};
+
+/* FME POWER FEATURE */
+struct feature_fme_power {
+ struct feature_header header;
+ struct feature_fme_pm_status status;
+ struct feature_fme_pm_ap_threshold threshold;
+ struct feature_fme_pm_xeon_limit xeon_limit;
+ struct feature_fme_pm_fpga_limit fpga_limit;
+};
+
+#define CACHE_CHANNEL_RD 0
+#define CACHE_CHANNEL_WR 1
+
+enum iperf_cache_events {
+ IPERF_CACHE_RD_HIT,
+ IPERF_CACHE_WR_HIT,
+ IPERF_CACHE_RD_MISS,
+ IPERF_CACHE_WR_MISS,
+ IPERF_CACHE_RSVD, /* reserved */
+ IPERF_CACHE_HOLD_REQ,
+ IPERF_CACHE_DATA_WR_PORT_CONTEN,
+ IPERF_CACHE_TAG_WR_PORT_CONTEN,
+ IPERF_CACHE_TX_REQ_STALL,
+ IPERF_CACHE_RX_REQ_STALL,
+ IPERF_CACHE_EVICTIONS,
+};
+
+/* FPMON Cache Control */
+struct feature_fme_ifpmon_ch_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 reset_counters:1; /* Reset Counters */
+ u8 rsvd1:7; /* Reserved */
+ u8 freeze:1; /* Freeze if set to 1 */
+ u8 rsvd2:7; /* Reserved */
+ u8 cache_event:4; /* Select the cache event */
+ u8 cci_chsel:1; /* Select the channel */
+ u64 rsvd3:43; /* Reserved */
+ };
+ };
+};
+
+/* FPMON Cache Counter */
+struct feature_fme_ifpmon_ch_ctr {
+ union {
+ u64 csr;
+ struct {
+			/* Cache event counter */
+ u64 cache_counter:48;
+ u16 rsvd:12; /* Reserved */
+ /* Cache Event being reported */
+ u8 event_code:4;
+ };
+ };
+};
+
+enum iperf_fab_events {
+ IPERF_FAB_PCIE0_RD,
+ IPERF_FAB_PCIE0_WR,
+ IPERF_FAB_PCIE1_RD,
+ IPERF_FAB_PCIE1_WR,
+ IPERF_FAB_UPI_RD,
+ IPERF_FAB_UPI_WR,
+ IPERF_FAB_MMIO_RD,
+ IPERF_FAB_MMIO_WR,
+};
+
+#define FAB_DISABLE_FILTER 0
+#define FAB_ENABLE_FILTER 1
+
+/* FPMON FAB Control */
+struct feature_fme_ifpmon_fab_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 reset_counters:1; /* Reset Counters */
+ u8 rsvd:7; /* Reserved */
+			u8 freeze:1; /* Set to 1 to freeze counter */
+ u8 rsvd1:7; /* Reserved */
+ u8 fab_evtcode:4; /* Fabric Event Code */
+ u8 port_id:2; /* Port ID */
+ u8 rsvd2:1; /* Reserved */
+ u8 port_filter:1; /* Port Filter */
+ u64 rsvd3:40; /* Reserved */
+ };
+ };
+};
+
+/* FPMON Event Counter */
+struct feature_fme_ifpmon_fab_ctr {
+ union {
+ u64 csr;
+ struct {
+ u64 fab_cnt:60; /* Fabric event counter */
+ /* Fabric event code being reported */
+ u8 event_code:4;
+ };
+ };
+};
+
+/* FPMON Clock Counter */
+struct feature_fme_ifpmon_clk_ctr {
+ u64 afu_interf_clock; /* Clk_16UI (AFU clock) counter. */
+};
+
+enum iperf_vtd_events {
+ IPERF_VTD_AFU_MEM_RD_TRANS,
+ IPERF_VTD_AFU_MEM_WR_TRANS,
+ IPERF_VTD_AFU_DEVTLB_RD_HIT,
+ IPERF_VTD_AFU_DEVTLB_WR_HIT,
+ IPERF_VTD_DEVTLB_4K_FILL,
+ IPERF_VTD_DEVTLB_2M_FILL,
+ IPERF_VTD_DEVTLB_1G_FILL,
+};
+
+/* VT-d control register */
+struct feature_fme_ifpmon_vtd_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 reset_counters:1; /* Reset Counters */
+ u8 rsvd:7; /* Reserved */
+			u8 freeze:1; /* Set to 1 to freeze counter */
+ u8 rsvd1:7; /* Reserved */
+ u8 vtd_evtcode:4; /* VTd and TLB event code */
+ u64 rsvd2:44; /* Reserved */
+ };
+ };
+};
+
+/* VT-d event counter */
+struct feature_fme_ifpmon_vtd_ctr {
+ union {
+ u64 csr;
+ struct {
+ u64 vtd_counter:48; /* VTd event counter */
+ u16 rsvd:12; /* Reserved */
+ u8 event_code:4; /* VTd event code */
+ };
+ };
+};
+
+enum iperf_vtd_sip_events {
+ IPERF_VTD_SIP_IOTLB_4K_HIT,
+ IPERF_VTD_SIP_IOTLB_2M_HIT,
+ IPERF_VTD_SIP_IOTLB_1G_HIT,
+ IPERF_VTD_SIP_SLPWC_L3_HIT,
+ IPERF_VTD_SIP_SLPWC_L4_HIT,
+ IPERF_VTD_SIP_RCC_HIT,
+ IPERF_VTD_SIP_IOTLB_4K_MISS,
+ IPERF_VTD_SIP_IOTLB_2M_MISS,
+ IPERF_VTD_SIP_IOTLB_1G_MISS,
+ IPERF_VTD_SIP_SLPWC_L3_MISS,
+ IPERF_VTD_SIP_SLPWC_L4_MISS,
+ IPERF_VTD_SIP_RCC_MISS,
+};
+
+/* VT-d SIP control register */
+struct feature_fme_ifpmon_vtd_sip_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 reset_counters:1; /* Reset Counters */
+ u8 rsvd:7; /* Reserved */
+			u8 freeze:1; /* Set to 1 to freeze counter */
+ u8 rsvd1:7; /* Reserved */
+ u8 vtd_evtcode:4; /* VTd and TLB event code */
+ u64 rsvd2:44; /* Reserved */
+ };
+ };
+};
+
+/* VT-d SIP event counter */
+struct feature_fme_ifpmon_vtd_sip_ctr {
+ union {
+ u64 csr;
+ struct {
+ u64 vtd_counter:48; /* VTd event counter */
+ u16 rsvd:12; /* Reserved */
+ u8 event_code:4; /* VTd event code */
+ };
+ };
+};
+
+/* FME IPERF FEATURE */
+struct feature_fme_iperf {
+ struct feature_header header;
+ struct feature_fme_ifpmon_ch_ctl ch_ctl;
+ struct feature_fme_ifpmon_ch_ctr ch_ctr0;
+ struct feature_fme_ifpmon_ch_ctr ch_ctr1;
+ struct feature_fme_ifpmon_fab_ctl fab_ctl;
+ struct feature_fme_ifpmon_fab_ctr fab_ctr;
+ struct feature_fme_ifpmon_clk_ctr clk;
+ struct feature_fme_ifpmon_vtd_ctl vtd_ctl;
+ struct feature_fme_ifpmon_vtd_ctr vtd_ctr;
+ struct feature_fme_ifpmon_vtd_sip_ctl vtd_sip_ctl;
+ struct feature_fme_ifpmon_vtd_sip_ctr vtd_sip_ctr;
+};
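+
+/*
+ * Programming sketch (illustrative; "iperf" is assumed to point at the
+ * mapped FME IPERF feature above): select a cache event in ch_ctl, freeze
+ * the counters, read the counter, then unfreeze.
+ *
+ *	struct feature_fme_ifpmon_ch_ctl ctl;
+ *	struct feature_fme_ifpmon_ch_ctr ctr0;
+ *
+ *	ctl.csr = readq(&iperf->ch_ctl.csr);
+ *	ctl.cache_event = IPERF_CACHE_RD_HIT;
+ *	ctl.freeze = 1;
+ *	writeq(ctl.csr, &iperf->ch_ctl.csr);
+ *	ctr0.csr = readq(&iperf->ch_ctr0.csr);
+ *	(ctr0.cache_counter holds the count; ctr0.event_code reports
+ *	 the event being counted)
+ *	ctl.freeze = 0;
+ *	writeq(ctl.csr, &iperf->ch_ctl.csr);
+ */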
+
+enum dperf_fab_events {
+ DPERF_FAB_PCIE0_RD,
+ DPERF_FAB_PCIE0_WR,
+ DPERF_FAB_MMIO_RD = 6,
+ DPERF_FAB_MMIO_WR,
+};
+
+/* FPMON FAB Control */
+struct feature_fme_dfpmon_fab_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 reset_counters:1; /* Reset Counters */
+ u8 rsvd:7; /* Reserved */
+			u8 freeze:1; /* Set to 1 to freeze counter */
+ u8 rsvd1:7; /* Reserved */
+ u8 fab_evtcode:4; /* Fabric Event Code */
+ u8 port_id:2; /* Port ID */
+ u8 rsvd2:1; /* Reserved */
+ u8 port_filter:1; /* Port Filter */
+ u64 rsvd3:40; /* Reserved */
+ };
+ };
+};
+
+/* FPMON Event Counter */
+struct feature_fme_dfpmon_fab_ctr {
+ union {
+ u64 csr;
+ struct {
+ u64 fab_cnt:60; /* Fabric event counter */
+ /* Fabric event code being reported */
+ u8 event_code:4;
+ };
+ };
+};
+
+/* FPMON Clock Counter */
+struct feature_fme_dfpmon_clk_ctr {
+ u64 afu_interf_clock; /* Clk_16UI (AFU clock) counter. */
+};
+
+/* FME DPERF FEATURE */
+struct feature_fme_dperf {
+ struct feature_header header;
+ u64 rsvd[3];
+ struct feature_fme_dfpmon_fab_ctl fab_ctl;
+ struct feature_fme_dfpmon_fab_ctr fab_ctr;
+ struct feature_fme_dfpmon_clk_ctr clk;
+};
+
+struct feature_fme_error0 {
+#define FME_ERROR0_MASK 0xFFUL
+#define FME_ERROR0_MASK_DEFAULT 0x40UL /* pcode workaround */
+ union {
+ u64 csr;
+ struct {
+ u8 fabric_err:1; /* Fabric error */
+ u8 fabfifo_overflow:1; /* Fabric fifo overflow */
+ u8 kticdc_parity_err:2;/* KTI CDC Parity Error */
+ u8 iommu_parity_err:1; /* IOMMU Parity error */
+ /* AFU PF/VF access mismatch detected */
+ u8 afu_acc_mode_err:1;
+ u8 mbp_err:1; /* Indicates an MBP event */
+ /* PCIE0 CDC Parity Error */
+ u8 pcie0cdc_parity_err:5;
+ /* PCIE1 CDC Parity Error */
+ u8 pcie1cdc_parity_err:5;
+ /* CVL CDC Parity Error */
+ u8 cvlcdc_parity_err:3;
+ u64 rsvd:44; /* Reserved */
+ };
+ };
+};
+
+/* PCIe0 Error Status register */
+struct feature_fme_pcie0_error {
+#define FME_PCIE0_ERROR_MASK 0xFFUL
+ union {
+ u64 csr;
+ struct {
+ u8 formattype_err:1; /* TLP format/type error */
+ u8 MWAddr_err:1; /* TLP MW address error */
+ u8 MWAddrLength_err:1; /* TLP MW length error */
+ u8 MRAddr_err:1; /* TLP MR address error */
+ u8 MRAddrLength_err:1; /* TLP MR length error */
+ u8 cpl_tag_err:1; /* TLP CPL tag error */
+ u8 cpl_status_err:1; /* TLP CPL status error */
+ u8 cpl_timeout_err:1; /* TLP CPL timeout */
+ u8 cci_parity_err:1; /* CCI bridge parity error */
+ u8 rxpoison_tlp_err:1; /* Received a TLP with EP set */
+ u64 rsvd:52; /* Reserved */
+ u8 vfnumb_err:1; /* Number of error VF */
+			u8 funct_type_err:1; /* Virtual (1) or Physical (0) */
+ };
+ };
+};
+
+/* PCIe1 Error Status register */
+struct feature_fme_pcie1_error {
+#define FME_PCIE1_ERROR_MASK 0xFFUL
+ union {
+ u64 csr;
+ struct {
+ u8 formattype_err:1; /* TLP format/type error */
+ u8 MWAddr_err:1; /* TLP MW address error */
+ u8 MWAddrLength_err:1; /* TLP MW length error */
+ u8 MRAddr_err:1; /* TLP MR address error */
+ u8 MRAddrLength_err:1; /* TLP MR length error */
+ u8 cpl_tag_err:1; /* TLP CPL tag error */
+ u8 cpl_status_err:1; /* TLP CPL status error */
+ u8 cpl_timeout_err:1; /* TLP CPL timeout */
+ u8 cci_parity_err:1; /* CCI bridge parity error */
+ u8 rxpoison_tlp_err:1; /* Received a TLP with EP set */
+ u64 rsvd:54; /* Reserved */
+ };
+ };
+};
+
+/* FME First Error register */
+struct feature_fme_first_error {
+#define FME_FIRST_ERROR_MASK ((1ULL << 60) - 1)
+ union {
+ u64 csr;
+ struct {
+			/*
+			 * Holds 60 LSBs from the Error register that was
+			 * triggered first
+			 */
+			u64 err_reg_status:60;
+			/*
+			 * Indicates the Error Register that was
+			 * triggered first
+			 */
+ u8 errReg_id:4;
+ };
+ };
+};
+
+/* FME Next Error register */
+struct feature_fme_next_error {
+#define FME_NEXT_ERROR_MASK ((1ULL << 60) - 1)
+ union {
+ u64 csr;
+ struct {
+			/*
+			 * Holds 60 LSBs from the Error register that was
+			 * triggered second
+			 */
+			u64 err_reg_status:60;
+			/*
+			 * Indicates the Error Register that was
+			 * triggered second
+			 */
+ u8 errReg_id:4;
+ };
+ };
+};
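+
+/*
+ * Decoding sketch (illustrative; "fme_err" is assumed to point at the FME
+ * error feature defined below): on an error event, fme_first_err records
+ * which error register fired first along with its lower 60 bits, so a
+ * handler can attribute the error without racing later ones.
+ *
+ *	struct feature_fme_first_error first;
+ *
+ *	first.csr = readq(&fme_err->fme_first_err.csr);
+ *	(first.errReg_id identifies the source register;
+ *	 first.err_reg_status holds its 60 LSBs)
+ */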
+
+/* RAS Non Fatal Error Status register */
+struct feature_fme_ras_nonfaterror {
+ union {
+ u64 csr;
+ struct {
+			/* thermal threshold AP1 */
+			u8 temp_thresh_ap1:1;
+			/* thermal threshold AP2 */
+			u8 temp_thresh_ap2:1;
+			u8 pcie_error:1; /* PCIe Error */
+			u8 portfatal_error:1; /* port fatal error */
+			u8 proc_hot:1; /* Indicates a ProcHot event */
+			/* Indicates an AFU PF/VF access mismatch */
+			u8 afu_acc_mode_err:1;
+			/* Injected nonfatal Error */
+			u8 injected_nonfata_err:1;
+			u8 rsvd1:2;
+			/* Temperature threshold triggered AP6 */
+			u8 temp_thresh_AP6:1;
+			/* Power threshold triggered AP1 */
+			u8 power_thresh_AP1:1;
+			/* Power threshold triggered AP2 */
+			u8 power_thresh_AP2:1;
+			/* Indicates an MBP event */
+			u8 mbp_err:1;
+ u64 rsvd2:51; /* Reserved */
+ };
+ };
+};
+
+/* RAS Catastrophic Fatal Error Status register */
+struct feature_fme_ras_catfaterror {
+ union {
+ u64 csr;
+ struct {
+ /* KTI Link layer error detected */
+ u8 ktilink_fatal_err:1;
+ /* tag-n-cache error detected */
+ u8 tagcch_fatal_err:1;
+ /* CCI error detected */
+ u8 cci_fatal_err:1;
+ /* KTI Protocol error detected */
+ u8 ktiprpto_fatal_err:1;
+ /* Fatal DRAM error detected */
+ u8 dram_fatal_err:1;
+			/* Fatal IOMMU error detected */
+			u8 iommu_fatal_err:1;
+			/* Fabric Fatal Error */
+			u8 fabric_fatal_err:1;
+			/* PCIe poison Error */
+			u8 pcie_poison_err:1;
+ /* Injected fatal Error */
+ u8 inject_fata_err:1;
+ /* Catastrophic CRC Error */
+ u8 crc_catast_err:1;
+ /* Catastrophic Thermal Error */
+ u8 therm_catast_err:1;
+ /* Injected Catastrophic Error */
+ u8 injected_catast_err:1;
+ u64 rsvd:52;
+ };
+ };
+};
+
+/* RAS Error injection register */
+struct feature_fme_ras_error_inj {
+#define FME_RAS_ERROR_INJ_MASK 0x7UL
+ union {
+ u64 csr;
+ struct {
+ u8 catast_error:1; /* Catastrophic error flag */
+ u8 fatal_error:1; /* Fatal error flag */
+ u8 nonfatal_error:1; /* NonFatal error flag */
+ u64 rsvd:61; /* Reserved */
+ };
+ };
+};
+
+/* FME error capabilities */
+struct feature_fme_error_capability {
+ union {
+ u64 csr;
+ struct {
+ u8 support_intr:1;
+ /* MSI-X vector table entry number */
+ u16 intr_vector_num:12;
+ u64 rsvd:51; /* Reserved */
+ };
+ };
+};
+
+/* FME ERR FEATURE */
+struct feature_fme_err {
+ struct feature_header header;
+ struct feature_fme_error0 fme_err_mask;
+ struct feature_fme_error0 fme_err;
+ struct feature_fme_pcie0_error pcie0_err_mask;
+ struct feature_fme_pcie0_error pcie0_err;
+ struct feature_fme_pcie1_error pcie1_err_mask;
+ struct feature_fme_pcie1_error pcie1_err;
+ struct feature_fme_first_error fme_first_err;
+ struct feature_fme_next_error fme_next_err;
+ struct feature_fme_ras_nonfaterror ras_nonfat_mask;
+ struct feature_fme_ras_nonfaterror ras_nonfaterr;
+ struct feature_fme_ras_catfaterror ras_catfat_mask;
+ struct feature_fme_ras_catfaterror ras_catfaterr;
+ struct feature_fme_ras_error_inj ras_error_inj;
+ struct feature_fme_error_capability fme_err_capability;
+};
+
+/* FME Partial Reconfiguration Control */
+struct feature_fme_pr_ctl {
+ union {
+ u64 csr;
+ struct {
+ u8 pr_reset:1; /* Reset PR Engine */
+ u8 rsvd3:3; /* Reserved */
+ u8 pr_reset_ack:1; /* Reset PR Engine Ack */
+ u8 rsvd4:3; /* Reserved */
+ u8 pr_regionid:2; /* PR Region ID */
+ u8 rsvd1:2; /* Reserved */
+ u8 pr_start_req:1; /* PR Start Request */
+ u8 pr_push_complete:1; /* PR Data push complete */
+			u8 pr_kind:1; /* PR Kind */
+ u32 rsvd:17; /* Reserved */
+ u32 config_data; /* Config data TBD */
+ };
+ };
+};
+
+/* FME Partial Reconfiguration Status */
+struct feature_fme_pr_status {
+ union {
+ u64 csr;
+ struct {
+ u16 pr_credit:9; /* PR Credits */
+ u8 rsvd2:7; /* Reserved */
+ u8 pr_status:1; /* PR status */
+ u8 rsvd:3; /* Reserved */
+			/* Altera PR Controller Block status */
+ u8 pr_controller_status:3;
+ u8 rsvd1:1; /* Reserved */
+ u8 pr_host_status:4; /* PR Host status */
+ u8 rsvd3:4; /* Reserved */
+ /* Security Block Status fields (TBD) */
+ u32 security_bstatus;
+ };
+ };
+};
+
+/* FME Partial Reconfiguration Data */
+struct feature_fme_pr_data {
+ union {
+ u64 csr; /* PR data from the raw-binary file */
+ struct {
+ /* PR data from the raw-binary file */
+ u32 pr_data_raw;
+ u32 rsvd;
+ };
+ };
+};
+
+/* FME PR Public Key */
+struct feature_fme_pr_key {
+ u64 key; /* FME PR Public Hash */
+};
+
+/* FME PR FEATURE */
+struct feature_fme_pr {
+ struct feature_header header;
+	/* Partial Reconfiguration control */
+ struct feature_fme_pr_ctl ccip_fme_pr_control;
+
+ /* Partial Reconfiguration Status */
+ struct feature_fme_pr_status ccip_fme_pr_status;
+
+ /* Partial Reconfiguration data */
+ struct feature_fme_pr_data ccip_fme_pr_data;
+
+	/* Partial Reconfiguration error */
+ u64 ccip_fme_pr_err;
+
+ u64 rsvd1[3];
+
+ /* Partial Reconfiguration data registers */
+ u64 fme_pr_data1;
+ u64 fme_pr_data2;
+ u64 fme_pr_data3;
+ u64 fme_pr_data4;
+ u64 fme_pr_data5;
+ u64 fme_pr_data6;
+ u64 fme_pr_data7;
+ u64 fme_pr_data8;
+
+ u64 rsvd2[5];
+
+ /* PR Interface ID */
+ u64 fme_pr_intfc_id_l;
+ u64 fme_pr_intfc_id_h;
+
+	/* MSI-X field to be added */
+};
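+
+/*
+ * Partial Reconfiguration flow sketch (illustrative only; the authoritative
+ * sequence is implemented by the PR management code):
+ *
+ *	1. Reset the PR engine via ccip_fme_pr_control.pr_reset and wait
+ *	   for pr_reset_ack.
+ *	2. Select the region (pr_regionid) and raise pr_start_req.
+ *	3. Push the bitstream through ccip_fme_pr_data while pr_credit in
+ *	   ccip_fme_pr_status indicates the engine can accept more data.
+ *	4. Set pr_push_complete, then poll ccip_fme_pr_status and
+ *	   ccip_fme_pr_err for the result.
+ */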
+
+/* FME HSSI Control */
+struct feature_fme_hssi_eth_ctrl {
+ union {
+ u64 csr;
+ struct {
+ u32 data:32; /* HSSI data */
+ u16 address:16; /* HSSI address */
+ /*
+			 * HSSI command
+			 * 0x0 - No request
+			 * 0x08 - SW register RD request
+			 * 0x10 - SW register WR request
+			 * 0x40 - Auxiliary bus RD request
+			 * 0x80 - Auxiliary bus WR request
+ */
+ u16 cmd:16;
+ };
+ };
+};
+
+/* FME HSSI Status */
+struct feature_fme_hssi_eth_stat {
+ union {
+ u64 csr;
+ struct {
+ u32 data:32; /* HSSI data */
+ u8 acknowledge:1; /* HSSI acknowledge */
+ u8 spare:1; /* HSSI spare */
+ u32 rsvd:30; /* Reserved */
+ };
+ };
+};
+
+/* FME HSSI FEATURE */
+struct feature_fme_hssi {
+ struct feature_header header;
+ struct feature_fme_hssi_eth_ctrl hssi_control;
+ struct feature_fme_hssi_eth_stat hssi_status;
+};
+
+#define PORT_ERR_MASK 0xfff0703ff001f
+struct feature_port_err_key {
+ union {
+ u64 csr;
+ struct {
+ /* Tx Channel0: Overflow */
+ u8 tx_ch0_overflow:1;
+ /* Tx Channel0: Invalid request encoding */
+			u8 tx_ch0_invaldreq:1;
+ /* Tx Channel0: Request with cl_len=3 not supported */
+ u8 tx_ch0_cl_len3:1;
+ /* Tx Channel0: Request with cl_len=2 not aligned 2CL */
+ u8 tx_ch0_cl_len2:1;
+ /* Tx Channel0: Request with cl_len=4 not aligned 4CL */
+ u8 tx_ch0_cl_len4:1;
+
+ u16 rsvd1:4; /* Reserved */
+
+ /* AFU MMIO RD received while PORT is in reset */
+ u8 mmio_rd_whilerst:1;
+ /* AFU MMIO WR received while PORT is in reset */
+ u8 mmio_wr_whilerst:1;
+
+ u16 rsvd2:5; /* Reserved */
+
+ /* Tx Channel1: Overflow */
+ u8 tx_ch1_overflow:1;
+ /* Tx Channel1: Invalid request encoding */
+ u8 tx_ch1_invaldreq:1;
+ /* Tx Channel1: Request with cl_len=3 not supported */
+ u8 tx_ch1_cl_len3:1;
+ /* Tx Channel1: Request with cl_len=2 not aligned 2CL */
+ u8 tx_ch1_cl_len2:1;
+ /* Tx Channel1: Request with cl_len=4 not aligned 4CL */
+ u8 tx_ch1_cl_len4:1;
+
+ /* Tx Channel1: Insufficient data payload */
+ u8 tx_ch1_insuff_data:1;
+ /* Tx Channel1: Data payload overrun */
+ u8 tx_ch1_data_overrun:1;
+			/* Tx Channel1: Incorrect address */
+			u8 tx_ch1_incorr_addr:1;
+			/* Tx Channel1: NON-Zero SOP Detected */
+			u8 tx_ch1_nzsop:1;
+			/* Tx Channel1: Illegal VC_SEL, atomic request VLO */
+ u8 tx_ch1_illegal_vcsel:1;
+
+ u8 rsvd3:6; /* Reserved */
+
+ /* MMIO Read Timeout in AFU */
+ u8 mmioread_timeout:1;
+
+ /* Tx Channel2: FIFO Overflow */
+ u8 tx_ch2_fifo_overflow:1;
+
+ /* MMIO read is not matching pending request */
+ u8 unexp_mmio_resp:1;
+
+ u8 rsvd4:5; /* Reserved */
+
+ /* Number of pending Requests: counter overflow */
+ u8 tx_req_counter_overflow:1;
+ /* Req with Address violating SMM Range */
+ u8 llpr_smrr_err:1;
+ /* Req with Address violating second SMM Range */
+ u8 llpr_smrr2_err:1;
+ /* Req with Address violating ME Stolen message */
+ u8 llpr_mesg_err:1;
+ /* Req with Address violating Generic Protected Range */
+ u8 genprot_range_err:1;
+ /* Req with Address violating Legacy Range low */
+ u8 legrange_low_err:1;
+ /* Req with Address violating Legacy Range High */
+ u8 legrange_high_err:1;
+ /* Req with Address violating VGA memory range */
+ u8 vgmem_range_err:1;
+ u8 page_fault_err:1; /* Page fault */
+ u8 pmr_err:1; /* PMR Error */
+ u8 ap6_event:1; /* AP6 event */
+ /* VF FLR detected on Port with PF access control */
+ u8 vfflr_access_err:1;
+ u16 rsvd5:12; /* Reserved */
+ };
+ };
+};
+
+/*
+ * Port first error register. It does not contain all the error bits
+ * present in the error register.
+ */
+struct feature_port_first_err_key {
+ union {
+ u64 csr;
+ struct {
+ u8 tx_ch0_overflow:1;
+			u8 tx_ch0_invaldreq:1;
+ u8 tx_ch0_cl_len3:1;
+ u8 tx_ch0_cl_len2:1;
+ u8 tx_ch0_cl_len4:1;
+ u8 rsvd1:4; /* Reserved */
+ u8 mmio_rd_whilerst:1;
+ u8 mmio_wr_whilerst:1;
+ u8 rsvd2:5; /* Reserved */
+ u8 tx_ch1_overflow:1;
+ u8 tx_ch1_invaldreq:1;
+ u8 tx_ch1_cl_len3:1;
+ u8 tx_ch1_cl_len2:1;
+ u8 tx_ch1_cl_len4:1;
+ u8 tx_ch1_insuff_data:1;
+ u8 tx_ch1_data_overrun:1;
+ u8 tx_ch1_incorr_addr:1;
+ u8 tx_ch1_nzsop:1;
+ u8 tx_ch1_illegal_vcsel:1;
+ u8 rsvd3:6; /* Reserved */
+ u8 mmioread_timeout:1;
+ u8 tx_ch2_fifo_overflow:1;
+ u8 rsvd4:6; /* Reserved */
+ u8 tx_req_counter_overflow:1;
+ u32 rsvd5:23; /* Reserved */
+ };
+ };
+};
+
+/* Port malformed Req0 */
+struct feature_port_malformed_req0 {
+ u64 header_lsb;
+};
+
+/* Port malformed Req1 */
+struct feature_port_malformed_req1 {
+ u64 header_msb;
+};
+
+/* Port debug register */
+struct feature_port_debug {
+ u64 port_debug;
+};
+
+/* Port error capabilities */
+struct feature_port_err_capability {
+ union {
+ u64 csr;
+ struct {
+ u8 support_intr:1;
+ /* MSI-X vector table entry number */
+ u16 intr_vector_num:12;
+ u64 rsvd:51; /* Reserved */
+ };
+ };
+};
+
+/* PORT FEATURE ERROR */
+struct feature_port_error {
+ struct feature_header header;
+ struct feature_port_err_key error_mask;
+ struct feature_port_err_key port_error;
+ struct feature_port_first_err_key port_first_error;
+ struct feature_port_malformed_req0 malreq0;
+ struct feature_port_malformed_req1 malreq1;
+ struct feature_port_debug port_debug;
+ struct feature_port_err_capability error_capability;
+};
+
+/* Port UMSG Capability */
+struct feature_port_umsg_cap {
+ union {
+ u64 csr;
+ struct {
+ /* Number of umsg allocated to this port */
+ u8 umsg_allocated;
+ /* Enable / Disable UMsg engine for this port */
+ u8 umsg_enable:1;
+			/* UMsg initialization status */
+ u8 umsg_init_complete:1;
+ /* IOMMU can not translate the umsg base address */
+ u8 umsg_trans_error:1;
+ u64 rsvd:53; /* Reserved */
+ };
+ };
+};
+
+/* Port UMSG base address */
+struct feature_port_umsg_baseaddr {
+ union {
+ u64 csr;
+ struct {
+ u64 base_addr:48; /* 48 bit physical address */
+ u16 rsvd; /* Reserved */
+ };
+ };
+};
+
+struct feature_port_umsg_mode {
+ union {
+ u64 csr;
+ struct {
+ u32 umsg_hint_enable; /* UMSG hint enable/disable */
+ u32 rsvd; /* Reserved */
+ };
+ };
+};
+
+/* PORT FEATURE UMSG */
+struct feature_port_umsg {
+ struct feature_header header;
+ struct feature_port_umsg_cap capability;
+ struct feature_port_umsg_baseaddr baseaddr;
+ struct feature_port_umsg_mode mode;
+};
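+
+/*
+ * Enable sketch (illustrative; "umsg" is assumed to point at the mapped
+ * port UMSG feature above, "dma_addr" at a prepared buffer): program the
+ * base address, enable the engine, then poll umsg_init_complete using the
+ * UMSG_EN_POLL_* parameters below.
+ *
+ *	struct feature_port_umsg_cap cap;
+ *
+ *	writeq(dma_addr, &umsg->baseaddr.csr);
+ *	cap.csr = readq(&umsg->capability.csr);
+ *	cap.umsg_enable = 1;
+ *	writeq(cap.csr, &umsg->capability.csr);
+ *	(poll cap.umsg_init_complete; fail on cap.umsg_trans_error)
+ */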
+
+#define UMSG_EN_POLL_INVL 10 /* us */
+#define UMSG_EN_POLL_TIMEOUT 1000 /* us */
+
+/* Port UINT Capability */
+struct feature_port_uint_cap {
+ union {
+ u64 csr;
+ struct {
+ u16 intr_num:12; /* Supported interrupts num */
+ /* First MSI-X vector table entry number */
+ u16 first_vec_num:12;
+ u64 rsvd:40;
+ };
+ };
+};
+
+/* PORT FEATURE UINT */
+struct feature_port_uint {
+ struct feature_header header;
+ struct feature_port_uint_cap capability;
+};
+
+/* STP region supports mmap operation, so use page aligned size. */
+#define PORT_FEATURE_STP_REGION_SIZE \
+ IFPGA_PAGE_ALIGN(sizeof(struct feature_port_stp))
+
+/* Port STP status register (for debug only) */
+struct feature_port_stp_status {
+ union {
+ u64 csr;
+ struct {
+ /* SLD Hub end-point read/write timeout */
+ u8 sld_ep_timeout:1;
+ /* Remote STP in reset/disable */
+ u8 rstp_disabled:1;
+ u8 unsupported_read:1;
+ /* MMIO timeout detected and faked with a response */
+ u8 mmio_timeout:1;
+ u8 txfifo_count:4;
+ u8 rxfifo_count:4;
+ u8 txfifo_overflow:1;
+ u8 txfifo_underflow:1;
+ u8 rxfifo_overflow:1;
+ u8 rxfifo_underflow:1;
+ /* Number of MMIO write requests */
+ u16 write_requests;
+ /* Number of MMIO read requests */
+ u16 read_requests;
+ /* Number of MMIO read responses */
+ u16 read_responses;
+ };
+ };
+};
+
+/*
+ * PORT FEATURE STP
+ * Most registers in the STP region are not touched by the driver but are
+ * mmapped to user space, so they are not defined in the data structure
+ * below; the region's actual size is 0x18c per the spec.
+ */
+struct feature_port_stp {
+ struct feature_header header;
+ struct feature_port_stp_status stp_status;
+};
+
+/**
+ * enum fpga_pr_states - fpga PR states
+ * @FPGA_PR_STATE_UNKNOWN: can't determine state
+ * @FPGA_PR_STATE_WRITE_INIT: preparing FPGA for programming
+ * @FPGA_PR_STATE_WRITE_INIT_ERR: Error during WRITE_INIT stage
+ * @FPGA_PR_STATE_WRITE: writing image to FPGA
+ * @FPGA_PR_STATE_WRITE_ERR: Error while writing FPGA
+ * @FPGA_PR_STATE_WRITE_COMPLETE: Doing post programming steps
+ * @FPGA_PR_STATE_WRITE_COMPLETE_ERR: Error during WRITE_COMPLETE
+ * @FPGA_PR_STATE_DONE: FPGA PR done
+ */
+enum fpga_pr_states {
+	/* cannot determine state */
+ FPGA_PR_STATE_UNKNOWN,
+
+ /* write sequence: init, write, complete */
+ FPGA_PR_STATE_WRITE_INIT,
+ FPGA_PR_STATE_WRITE_INIT_ERR,
+ FPGA_PR_STATE_WRITE,
+ FPGA_PR_STATE_WRITE_ERR,
+ FPGA_PR_STATE_WRITE_COMPLETE,
+ FPGA_PR_STATE_WRITE_COMPLETE_ERR,
+
+ /* FPGA PR done */
+ FPGA_PR_STATE_DONE,
+};
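+
+/*
+ * A successful PR walks WRITE_INIT -> WRITE -> WRITE_COMPLETE -> DONE;
+ * each write stage has a matching *_ERR state that aborts the sequence.
+ */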
+
+/*
+ * FPGA Manager flags
+ * FPGA_MGR_PARTIAL_RECONFIG: do partial reconfiguration if supported
+ */
+#define FPGA_MGR_PARTIAL_RECONFIG BIT(0)
+
+/**
+ * struct fpga_pr_info - specific information to a FPGA PR
+ * @flags: boolean flags as defined above
+ * @pr_err: PR error code
+ * @state: fpga manager state
+ * @port_id: port id
+ */
+struct fpga_pr_info {
+ u32 flags;
+ u64 pr_err;
+ enum fpga_pr_states state;
+ int port_id;
+};
+
+#define DEFINE_FPGA_PR_ERR_MSG(_name_) \
+static const char * const _name_[] = { \
+ "PR operation error detected", \
+ "PR CRC error detected", \
+	"PR incompatible bitstream error detected", \
+ "PR IP protocol error detected", \
+ "PR FIFO overflow error detected", \
+ "PR timeout error detected", \
+ "PR secure load error detected", \
+}
+
+#define RST_POLL_INVL 10 /* us */
+#define RST_POLL_TIMEOUT 1000 /* us */
+
+#define PR_WAIT_TIMEOUT 15000000
+
+#define PR_HOST_STATUS_IDLE 0
+#define PR_MAX_ERR_NUM 7
+
+DEFINE_FPGA_PR_ERR_MSG(pr_err_msg);
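+
+/*
+ * Lookup sketch (illustrative): each set bit i (i < PR_MAX_ERR_NUM) in the
+ * PR error status maps to pr_err_msg[i], e.g.:
+ *
+ *	for (i = 0; i < PR_MAX_ERR_NUM; i++)
+ *		if (pr_err & (1ULL << i))
+ *			dev_err(fme, "%s\n", pr_err_msg[i]);
+ */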
+
+/*
+ * green bitstream header must be byte-packed to match the
+ * real file format.
+ */
+struct bts_header {
+ u64 guid_h;
+ u64 guid_l;
+ u32 metadata_len;
+};
+
+#define GBS_GUID_H 0x414750466e6f6558
+#define GBS_GUID_L 0x31303076534247b7
+#define is_valid_bts(bts_hdr) \
+ (((bts_hdr)->guid_h == GBS_GUID_H) && \
+ ((bts_hdr)->guid_l == GBS_GUID_L))
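+
+/*
+ * Validation sketch (illustrative; "buf"/"size" are a caller-provided
+ * Green Bitstream image): check the GBS GUID before trusting metadata_len.
+ *
+ *	struct bts_header *bts = (struct bts_header *)buf;
+ *
+ *	if (size < sizeof(*bts) || !is_valid_bts(bts))
+ *		return -EINVAL;
+ */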
+
+/* bitstream id definition */
+struct fme_bitstream_id {
+ union {
+ u64 id;
+ struct {
+ u64 hash:32;
+ u64 interface:4;
+ u64 reserved:12;
+ u64 debug:4;
+ u64 patch:4;
+ u64 minor:4;
+ u64 major:4;
+ };
+ };
+};
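+
+/*
+ * Decoding sketch (illustrative; "fme_hdr" is assumed to be the mapped FME
+ * header): the 64-bit bitstream id packs the build hash, interface type and
+ * version fields.
+ *
+ *	struct fme_bitstream_id bid;
+ *
+ *	bid.id = readq(&fme_hdr->bitstream_id.csr);
+ *	(version is bid.major/bid.minor/bid.patch; interface type is
+ *	 bid.interface; build hash is bid.hash)
+ */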
+
+enum board_interface {
+ VC_8_10G = 0,
+ VC_4_25G = 1,
+ VC_2_1_25 = 2,
+ VC_4_25G_2_25G = 3,
+ VC_2_2_25G = 4,
+};
+
+struct ifpga_fme_board_info {
+ enum board_interface type;
+ u32 build_hash;
+ u32 debug_version;
+ u32 patch_version;
+ u32 minor_version;
+ u32 major_version;
+ u32 nums_of_retimer;
+ u32 ports_per_retimer;
+ u32 nums_of_fvl;
+ u32 ports_per_fvl;
+};
+
+#pragma pack(pop)
+#endif /* _BASE_IFPGA_DEFINES_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "opae_hw_api.h"
+#include "ifpga_api.h"
+
+#include "ifpga_hw.h"
+#include "ifpga_enumerate.h"
+#include "ifpga_feature_dev.h"
+
+struct build_feature_devs_info {
+ struct opae_adapter_data_pci *pci_data;
+
+ struct ifpga_afu_info *acc_info;
+
+ void *fiu;
+ enum fpga_id_type current_type;
+ int current_port_id;
+
+ void *ioaddr;
+ void *ioend;
+ uint64_t phys_addr;
+ int current_bar;
+
+ void *pfme_hdr;
+
+ struct ifpga_hw *hw;
+};
+
+static int feature_revision(void __iomem *start)
+{
+ struct feature_header header;
+
+ header.csr = readq(start);
+
+ return header.revision;
+}
+
+static u32 feature_size(void __iomem *start)
+{
+ struct feature_header header;
+
+ header.csr = readq(start);
+
+	/* the size of a private feature is 4KB aligned */
+	return header.next_header_offset ? header.next_header_offset : 4096;
+}
+
+static u64 feature_id(void __iomem *start)
+{
+ struct feature_header header;
+
+ header.csr = readq(start);
+
+ switch (header.type) {
+ case FEATURE_TYPE_FIU:
+ return FEATURE_ID_FIU_HEADER;
+ case FEATURE_TYPE_PRIVATE:
+ return header.id;
+ case FEATURE_TYPE_AFU:
+ return FEATURE_ID_AFU;
+ }
+
+ WARN_ON(1);
+ return 0;
+}
+
+static int
+build_info_add_sub_feature(struct build_feature_devs_info *binfo,
+ void __iomem *start, u64 fid, unsigned int size,
+ unsigned int vec_start,
+ unsigned int vec_cnt)
+{
+ struct ifpga_hw *hw = binfo->hw;
+ struct ifpga_feature *feature = NULL;
+ struct feature_irq_ctx *ctx = NULL;
+ int port_id, ret = 0;
+ unsigned int i;
+
+	fid = fid ? fid : feature_id(start);
+	size = size ? size : feature_size(start);
+
+ feature = opae_malloc(sizeof(struct ifpga_feature));
+ if (!feature)
+ return -ENOMEM;
+
+ feature->state = IFPGA_FEATURE_ATTACHED;
+ feature->addr = start;
+ feature->id = fid;
+ feature->size = size;
+ feature->revision = feature_revision(start);
+ feature->phys_addr = binfo->phys_addr +
+ ((u8 *)start - (u8 *)binfo->ioaddr);
+ feature->vec_start = vec_start;
+ feature->vec_cnt = vec_cnt;
+
+ dev_debug(binfo, "%s: id=0x%llx, phys_addr=0x%llx, size=%u\n",
+ __func__, (unsigned long long)feature->id,
+ (unsigned long long)feature->phys_addr, size);
+
+	if (vec_cnt) {
+		if (vec_start + vec_cnt <= vec_start) {
+			opae_free(feature);
+			return -EINVAL;
+		}
+
+		ctx = zmalloc(sizeof(*ctx) * vec_cnt);
+		if (!ctx) {
+			opae_free(feature);
+			return -ENOMEM;
+		}
+
+ for (i = 0; i < vec_cnt; i++) {
+ ctx[i].eventfd = -1;
+ ctx[i].idx = vec_start + i;
+ }
+ }
+
+ feature->ctx = ctx;
+ feature->ctx_num = vec_cnt;
+ feature->vfio_dev_fd = binfo->pci_data->vfio_dev_fd;
+
+ if (binfo->current_type == FME_ID) {
+ feature->parent = &hw->fme;
+ feature->type = FEATURE_FME_TYPE;
+ feature->name = get_fme_feature_name(fid);
+ TAILQ_INSERT_TAIL(&hw->fme.feature_list, feature, next);
+ } else if (binfo->current_type == PORT_ID) {
+ port_id = binfo->current_port_id;
+ feature->parent = &hw->port[port_id];
+ feature->type = FEATURE_PORT_TYPE;
+ feature->name = get_port_feature_name(fid);
+ TAILQ_INSERT_TAIL(&hw->port[port_id].feature_list,
+ feature, next);
+	} else {
+		opae_free(ctx);
+		opae_free(feature);
+		return -EFAULT;
+	}
+ return ret;
+}
+
+static int
+create_feature_instance(struct build_feature_devs_info *binfo,
+ void __iomem *start, u64 fid,
+ unsigned int size, unsigned int vec_start,
+ unsigned int vec_cnt)
+{
+ return build_info_add_sub_feature(binfo, start, fid, size, vec_start,
+ vec_cnt);
+}
+
+/*
+ * The UAFU GUID is dynamic, as it can change after the FME downloads a
+ * different Green Bitstream to the port, so we treat any unknown GUID
+ * attached on the port's feature list as a UAFU.
+ */
+static bool feature_is_UAFU(struct build_feature_devs_info *binfo)
+{
+ if (binfo->current_type != PORT_ID)
+ return false;
+
+ return true;
+}
+
+static int parse_feature_port_uafu(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ u64 id = PORT_FEATURE_ID_UAFU;
+ struct ifpga_afu_info *info;
+ void *start = (void *)hdr;
+ struct feature_port_header *port_hdr = binfo->ioaddr;
+ struct feature_port_capability capability;
+ int ret;
+ int size;
+
+ capability.csr = readq(&port_hdr->capability);
+
+ size = capability.mmio_size << 10;
+
+ ret = create_feature_instance(binfo, hdr, id, size, 0, 0);
+ if (ret)
+ return ret;
+
+ info = opae_malloc(sizeof(*info));
+ if (!info)
+ return -ENOMEM;
+
+ info->region[0].addr = start;
+ info->region[0].phys_addr = binfo->phys_addr +
+ (uint8_t *)start - (uint8_t *)binfo->ioaddr;
+ info->region[0].len = size;
+ info->num_regions = 1;
+
+ binfo->acc_info = info;
+
+ return ret;
+}
+
+static int parse_feature_afus(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ int ret;
+ struct feature_afu_header *afu_hdr, header;
+ u8 __iomem *start;
+ u8 __iomem *end = binfo->ioend;
+
+ start = (u8 __iomem *)hdr;
+ for (; start < end; start += header.next_afu) {
+ if ((unsigned int)(end - start) <
+ (unsigned int)(sizeof(*afu_hdr) + sizeof(*hdr)))
+ return -EINVAL;
+
+ hdr = (struct feature_header *)start;
+ afu_hdr = (struct feature_afu_header *)(hdr + 1);
+ header.csr = readq(&afu_hdr->csr);
+
+ if (feature_is_UAFU(binfo)) {
+ ret = parse_feature_port_uafu(binfo, hdr);
+ if (ret)
+ return ret;
+ }
+
+ if (!header.next_afu)
+ break;
+ }
+
+ return 0;
+}
+
+/* create and register proper private data */
+static int build_info_commit_dev(struct build_feature_devs_info *binfo)
+{
+ struct ifpga_afu_info *info = binfo->acc_info;
+ struct ifpga_hw *hw = binfo->hw;
+ struct opae_manager *mgr;
+ struct opae_bridge *br;
+ struct opae_accelerator *acc;
+ struct ifpga_port_hw *port;
+ struct ifpga_fme_hw *fme;
+ struct ifpga_feature *feature;
+
+ if (!binfo->fiu)
+ return 0;
+
+ if (binfo->current_type == PORT_ID) {
+ /* return error if no valid acc info data structure */
+ if (!info)
+ return -EFAULT;
+
+ br = opae_bridge_alloc(hw->adapter->name, &ifpga_br_ops,
+ binfo->fiu);
+ if (!br)
+ return -ENOMEM;
+
+ br->id = binfo->current_port_id;
+
+ /* update irq info */
+ port = &hw->port[binfo->current_port_id];
+ feature = get_feature_by_id(&port->feature_list,
+ PORT_FEATURE_ID_UINT);
+ if (feature)
+ info->num_irqs = feature->vec_cnt;
+
+ acc = opae_accelerator_alloc(hw->adapter->name,
+ &ifpga_acc_ops, info);
+ if (!acc) {
+ opae_bridge_free(br);
+ return -ENOMEM;
+ }
+
+ acc->br = br;
+ if (hw->adapter->mgr)
+ acc->mgr = hw->adapter->mgr;
+ acc->index = br->id;
+
+ fme = &hw->fme;
+ fme->nums_acc_region = info->num_regions;
+
+ opae_adapter_add_acc(hw->adapter, acc);
+
+ } else if (binfo->current_type == FME_ID) {
+ mgr = opae_manager_alloc(hw->adapter->name, &ifpga_mgr_ops,
+ &ifpga_mgr_network_ops, binfo->fiu);
+ if (!mgr)
+ return -ENOMEM;
+
+ mgr->adapter = hw->adapter;
+ hw->adapter->mgr = mgr;
+ }
+
+ binfo->fiu = NULL;
+
+ return 0;
+}
+
+static int
+build_info_create_dev(struct build_feature_devs_info *binfo,
+ enum fpga_id_type type, unsigned int index)
+{
+ int ret;
+
+ ret = build_info_commit_dev(binfo);
+ if (ret)
+ return ret;
+
+ binfo->current_type = type;
+
+ if (type == FME_ID) {
+ binfo->fiu = &binfo->hw->fme;
+ } else if (type == PORT_ID) {
+ binfo->fiu = &binfo->hw->port[index];
+ binfo->current_port_id = index;
+ }
+
+ return 0;
+}
+
+static int parse_feature_fme(struct build_feature_devs_info *binfo,
+ struct feature_header *start)
+{
+ struct ifpga_hw *hw = binfo->hw;
+ struct ifpga_fme_hw *fme = &hw->fme;
+ int ret;
+
+ ret = build_info_create_dev(binfo, FME_ID, 0);
+ if (ret)
+ return ret;
+
+ /* Update FME states */
+ fme->state = IFPGA_FME_IMPLEMENTED;
+ fme->parent = hw;
+ TAILQ_INIT(&fme->feature_list);
+ spinlock_init(&fme->lock);
+
+ return create_feature_instance(binfo, start, 0, 0, 0, 0);
+}
+
+static int parse_feature_port(struct build_feature_devs_info *binfo,
+ void __iomem *start)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_capability capability;
+ struct ifpga_hw *hw = binfo->hw;
+ struct ifpga_port_hw *port;
+ unsigned int port_id;
+ int ret;
+
+ /* Get current port's id */
+ port_hdr = (struct feature_port_header *)start;
+ capability.csr = readq(&port_hdr->capability);
+ port_id = capability.port_number;
+
+ ret = build_info_create_dev(binfo, PORT_ID, port_id);
+ if (ret)
+ return ret;
+
+	/* Found a Port device */
+ port = &hw->port[port_id];
+ port->port_id = binfo->current_port_id;
+ port->parent = hw;
+ port->state = IFPGA_PORT_ATTACHED;
+ spinlock_init(&port->lock);
+ TAILQ_INIT(&port->feature_list);
+
+ return create_feature_instance(binfo, start, 0, 0, 0, 0);
+}
+
+static void enable_port_uafu(struct build_feature_devs_info *binfo,
+ void __iomem *start)
+{
+ struct ifpga_port_hw *port = &binfo->hw->port[binfo->current_port_id];
+
+ UNUSED(start);
+
+ fpga_port_reset(port);
+}
+
+static int parse_feature_fiu(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ struct feature_header header;
+ struct feature_fiu_header *fiu_hdr, fiu_header;
+ u8 __iomem *start = (u8 __iomem *)hdr;
+ int ret;
+
+ header.csr = readq(hdr);
+
+ switch (header.id) {
+ case FEATURE_FIU_ID_FME:
+ ret = parse_feature_fme(binfo, hdr);
+ binfo->pfme_hdr = hdr;
+ if (ret)
+ return ret;
+ break;
+ case FEATURE_FIU_ID_PORT:
+ ret = parse_feature_port(binfo, hdr);
+ enable_port_uafu(binfo, hdr);
+ if (ret)
+ return ret;
+
+ /* Check Port FIU's next_afu pointer to User AFU DFH */
+ fiu_hdr = (struct feature_fiu_header *)(hdr + 1);
+ fiu_header.csr = readq(&fiu_hdr->csr);
+
+ if (fiu_header.next_afu) {
+ start += fiu_header.next_afu;
+ ret = parse_feature_afus(binfo,
+ (struct feature_header *)start);
+ if (ret)
+ return ret;
+ } else {
+ dev_info(binfo, "No AFUs detected on Port\n");
+ }
+
+ break;
+ default:
+ dev_info(binfo, "FIU TYPE %d is not supported yet.\n",
+ header.id);
+ }
+
+ return 0;
+}
+
+static void parse_feature_irqs(struct build_feature_devs_info *binfo,
+ void __iomem *start, unsigned int *vec_start,
+ unsigned int *vec_cnt)
+{
+	u64 id;
+
+	UNUSED(binfo);
+
+	id = feature_id(start);
+
+ if (id == PORT_FEATURE_ID_UINT) {
+ struct feature_port_uint *port_uint = start;
+ struct feature_port_uint_cap uint_cap;
+
+ uint_cap.csr = readq(&port_uint->capability);
+ if (uint_cap.intr_num) {
+ *vec_start = uint_cap.first_vec_num;
+ *vec_cnt = uint_cap.intr_num;
+ } else {
+ dev_debug(binfo, "UAFU doesn't support interrupt\n");
+ }
+ } else if (id == PORT_FEATURE_ID_ERROR) {
+ struct feature_port_error *port_err = start;
+ struct feature_port_err_capability port_err_cap;
+
+ port_err_cap.csr = readq(&port_err->error_capability);
+ if (port_err_cap.support_intr) {
+ *vec_start = port_err_cap.intr_vector_num;
+ *vec_cnt = 1;
+ } else {
+			dev_debug(binfo, "Port error doesn't support interrupt\n");
+ }
+
+ } else if (id == FME_FEATURE_ID_GLOBAL_ERR) {
+ struct feature_fme_err *fme_err = start;
+ struct feature_fme_error_capability fme_err_cap;
+
+ fme_err_cap.csr = readq(&fme_err->fme_err_capability);
+ if (fme_err_cap.support_intr) {
+ *vec_start = fme_err_cap.intr_vector_num;
+ *vec_cnt = 1;
+ } else {
+			dev_debug(binfo, "FME error doesn't support interrupt\n");
+ }
+ }
+}
+
+static int parse_feature_fme_private(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ unsigned int vec_start = 0;
+ unsigned int vec_cnt = 0;
+
+ parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt);
+
+ return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt);
+}
+
+static int parse_feature_port_private(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ unsigned int vec_start = 0;
+ unsigned int vec_cnt = 0;
+
+ parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt);
+
+ return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt);
+}
+
+static int parse_feature_private(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ struct feature_header header;
+
+ header.csr = readq(hdr);
+
+ switch (binfo->current_type) {
+ case FME_ID:
+ return parse_feature_fme_private(binfo, hdr);
+ case PORT_ID:
+ return parse_feature_port_private(binfo, hdr);
+ default:
+ dev_err(binfo, "private feature %x belonging to AFU %d (unknown_type) is not supported yet.\n",
+ header.id, binfo->current_type);
+ }
+ return 0;
+}
+
+static int parse_feature(struct build_feature_devs_info *binfo,
+ struct feature_header *hdr)
+{
+ struct feature_header header;
+ int ret = 0;
+
+ header.csr = readq(hdr);
+
+ switch (header.type) {
+ case FEATURE_TYPE_AFU:
+ ret = parse_feature_afus(binfo, hdr);
+ break;
+ case FEATURE_TYPE_PRIVATE:
+ ret = parse_feature_private(binfo, hdr);
+ break;
+ case FEATURE_TYPE_FIU:
+ ret = parse_feature_fiu(binfo, hdr);
+ break;
+ default:
+		dev_err(binfo, "Feature Type %x is not supported.\n",
+			header.type);
+	}
+
+ return ret;
+}
+
+static int
+parse_feature_list(struct build_feature_devs_info *binfo, u8 __iomem *start)
+{
+ struct feature_header *hdr, header;
+ u8 __iomem *end = (u8 __iomem *)binfo->ioend;
+ int ret = 0;
+
+ for (; start < end; start += header.next_header_offset) {
+ if ((unsigned int)(end - start) < (unsigned int)sizeof(*hdr)) {
+ dev_err(binfo, "The region is too small to contain a feature.\n");
+ ret = -EINVAL;
+ break;
+ }
+
+ hdr = (struct feature_header *)start;
+ header.csr = readq(hdr);
+
+ dev_debug(binfo, "%s: address=0x%p, val=0x%llx, header.id=0x%x, header.next_offset=0x%x, header.eol=0x%x, header.type=0x%x\n",
+ __func__, hdr, (unsigned long long)header.csr,
+ header.id, header.next_header_offset,
+ header.end_of_list, header.type);
+
+ ret = parse_feature(binfo, hdr);
+ if (ret)
+ return ret;
+
+ if (header.end_of_list || !header.next_header_offset)
+ break;
+ }
+
+ return build_info_commit_dev(binfo);
+}
+
+/* switch the memory mapping to BAR# @bar */
+static int parse_switch_to(struct build_feature_devs_info *binfo, int bar)
+{
+ struct opae_adapter_data_pci *pci_data = binfo->pci_data;
+
+ if (!pci_data->region[bar].addr)
+ return -ENOMEM;
+
+ binfo->ioaddr = pci_data->region[bar].addr;
+ binfo->ioend = (u8 __iomem *)binfo->ioaddr + pci_data->region[bar].len;
+ binfo->phys_addr = pci_data->region[bar].phys_addr;
+ binfo->current_bar = bar;
+
+ return 0;
+}
+
+static int parse_ports_from_fme(struct build_feature_devs_info *binfo)
+{
+ struct feature_fme_header *fme_hdr;
+ struct feature_fme_port port;
+ int i = 0, ret = 0;
+
+ if (!binfo->pfme_hdr) {
+ dev_info(binfo, "VF is detected.\n");
+ return ret;
+ }
+
+ fme_hdr = binfo->pfme_hdr;
+
+ do {
+ port.csr = readq(&fme_hdr->port[i]);
+ if (!port.port_implemented)
+ break;
+
+		/* skip ports that can only be accessed via VF */
+ if (port.afu_access_control == FME_AFU_ACCESS_VF)
+ continue;
+
+ ret = parse_switch_to(binfo, port.port_bar);
+ if (ret)
+ break;
+
+ ret = parse_feature_list(binfo,
+ (u8 __iomem *)binfo->ioaddr +
+ port.port_offset);
+ if (ret)
+ break;
+ } while (++i < MAX_FPGA_PORT_NUM);
+
+ return ret;
+}
+
+static struct build_feature_devs_info *
+build_info_alloc_and_init(struct ifpga_hw *hw)
+{
+ struct build_feature_devs_info *binfo;
+
+ binfo = zmalloc(sizeof(*binfo));
+ if (!binfo)
+ return binfo;
+
+ binfo->hw = hw;
+ binfo->pci_data = hw->pci_data;
+
+ /* fpga feature list starts from BAR 0 */
+ if (parse_switch_to(binfo, 0)) {
+ free(binfo);
+ return NULL;
+ }
+
+ return binfo;
+}
+
+static void build_info_free(struct build_feature_devs_info *binfo)
+{
+ free(binfo);
+}
+
+static void ifpga_print_device_feature_list(struct ifpga_hw *hw)
+{
+ struct ifpga_fme_hw *fme = &hw->fme;
+ struct ifpga_port_hw *port;
+ struct ifpga_feature *feature;
+ int i;
+
+ dev_info(hw, "found fme_device, is in PF: %s\n",
+ is_ifpga_hw_pf(hw) ? "yes" : "no");
+
+ ifpga_for_each_fme_feature(fme, feature) {
+ if (feature->state != IFPGA_FEATURE_ATTACHED)
+ continue;
+
+ dev_info(hw, "%12s: %p - %p - paddr: 0x%lx\n",
+ feature->name, feature->addr,
+ feature->addr + feature->size - 1,
+ (unsigned long)feature->phys_addr);
+
+ }
+
+ for (i = 0; i < MAX_FPGA_PORT_NUM; i++) {
+ port = &hw->port[i];
+
+ if (port->state != IFPGA_PORT_ATTACHED)
+ continue;
+
+ dev_info(hw, "port device: %d\n", port->port_id);
+
+ ifpga_for_each_port_feature(port, feature) {
+ if (feature->state != IFPGA_FEATURE_ATTACHED)
+ continue;
+
+ dev_info(hw, "%12s: %p - %p - paddr:0x%lx\n",
+ feature->name,
+ feature->addr,
+ feature->addr +
+ feature->size - 1,
+ (unsigned long)feature->phys_addr);
+ }
+
+ }
+}
+
+int ifpga_bus_enumerate(struct ifpga_hw *hw)
+{
+ struct build_feature_devs_info *binfo;
+ int ret;
+
+ binfo = build_info_alloc_and_init(hw);
+ if (!binfo)
+ return -ENOMEM;
+
+ ret = parse_feature_list(binfo, binfo->ioaddr);
+ if (ret)
+ goto exit;
+
+ ret = parse_ports_from_fme(binfo);
+ if (ret)
+ goto exit;
+
+ ifpga_print_device_feature_list(hw);
+
+exit:
+ build_info_free(binfo);
+ return ret;
+}
+
+int ifpga_bus_init(struct ifpga_hw *hw)
+{
+ int i;
+ struct ifpga_port_hw *port;
+
+ fme_hw_init(&hw->fme);
+ for (i = 0; i < MAX_FPGA_PORT_NUM; i++) {
+ port = &hw->port[i];
+ port_hw_init(port);
+ }
+
+ return 0;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_ENUMERATE_H_
+#define _IFPGA_ENUMERATE_H_
+
+int ifpga_bus_init(struct ifpga_hw *hw);
+int ifpga_bus_enumerate(struct ifpga_hw *hw);
+
+#endif /* _IFPGA_ENUMERATE_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include <sys/ioctl.h>
+
+#include "ifpga_feature_dev.h"
+
+/*
+ * Enable Port by clearing the port soft reset bit, which is set by default.
+ * The AFU is unable to respond to any MMIO access while in reset.
+ * The __fpga_port_enable function should only be used after
+ * __fpga_port_disable.
+ */
+void __fpga_port_enable(struct ifpga_port_hw *port)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_control control;
+
+ WARN_ON(!port->disable_count);
+
+ if (--port->disable_count != 0)
+ return;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+ WARN_ON(!port_hdr);
+
+ control.csr = readq(&port_hdr->control);
+ control.port_sftrst = 0x0;
+ writeq(control.csr, &port_hdr->control);
+}
+
+int __fpga_port_disable(struct ifpga_port_hw *port)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_control control;
+
+ if (port->disable_count++ != 0)
+ return 0;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+ WARN_ON(!port_hdr);
+
+ /* Set port soft reset */
+ control.csr = readq(&port_hdr->control);
+ control.port_sftrst = 0x1;
+ writeq(control.csr, &port_hdr->control);
+
+ /*
+	 * HW sets the ack bit to 1 when all outstanding requests have been
+	 * drained on this port and the minimum soft reset pulse width has
+	 * elapsed. The driver polls port_sftrst_ack to determine if the
+	 * reset is done by HW.
+ */
+ control.port_sftrst_ack = 1;
+
+ if (fpga_wait_register_field(port_sftrst_ack, control,
+ &port_hdr->control, RST_POLL_TIMEOUT,
+ RST_POLL_INVL)) {
+		dev_err(port, "timeout, failed to reset device\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid)
+{
+ struct feature_port_header *port_hdr;
+ u64 guidl, guidh;
+
+ if (!uuid)
+ return -EINVAL;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port, PORT_FEATURE_ID_UAFU);
+
+ spinlock_lock(&port->lock);
+ guidl = readq(&port_hdr->afu_header.guid.b[0]);
+ guidh = readq(&port_hdr->afu_header.guid.b[8]);
+ spinlock_unlock(&port->lock);
+
+ opae_memcpy(uuid->b, &guidl, sizeof(u64));
+ opae_memcpy(uuid->b + 8, &guidh, sizeof(u64));
+
+ return 0;
+}
+
+/* Mask / Unmask Port Errors by the Error Mask register. */
+void port_err_mask(struct ifpga_port_hw *port, bool mask)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_err_key err_mask;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+
+ if (mask)
+ err_mask.csr = PORT_ERR_MASK;
+ else
+ err_mask.csr = 0;
+
+ writeq(err_mask.csr, &port_err->error_mask);
+}
+
+/* Clear All Port Errors. */
+int port_err_clear(struct ifpga_port_hw *port, u64 err)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_error *port_err;
+ struct feature_port_err_key mask;
+ struct feature_port_first_err_key first;
+ struct feature_port_status status;
+ int ret = 0;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ /*
+ * Clear All Port Errors
+ *
+ * - Check for AP6 State
+ * - Halt Port by keeping Port in reset
+ * - Set PORT Error mask to all 1 to mask errors
+ * - Clear all errors
+ * - Set Port mask to all 0 to enable errors
+ * - All errors start capturing new errors
+ * - Enable Port by pulling the port out of reset
+ */
+
+ /* If device is still in AP6 state, can not clear any error.*/
+ status.csr = readq(&port_hdr->status);
+ if (status.power_state == PORT_POWER_STATE_AP6) {
+		dev_err(port, "Could not clear errors, device in AP6 state.\n");
+ return -EBUSY;
+ }
+
+ /* Halt Port by keeping Port in reset */
+ ret = __fpga_port_disable(port);
+ if (ret)
+ return ret;
+
+ /* Mask all errors */
+ port_err_mask(port, true);
+
+ /* Clear errors if err input matches with current port errors.*/
+ mask.csr = readq(&port_err->port_error);
+
+ if (mask.csr == err) {
+ writeq(mask.csr, &port_err->port_error);
+
+ first.csr = readq(&port_err->port_first_error);
+ writeq(first.csr, &port_err->port_first_error);
+ } else {
+ ret = -EBUSY;
+ }
+
+ /* Clear mask */
+ port_err_mask(port, false);
+
+ /* Enable the Port by clear the reset */
+ __fpga_port_enable(port);
+
+ return ret;
+}
+
+int port_clear_error(struct ifpga_port_hw *port)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_err_key error;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+ error.csr = readq(&port_err->port_error);
+
+ dev_info(port, "read port error: 0x%lx\n", (unsigned long)error.csr);
+
+ return port_err_clear(port, error.csr);
+}
+
+static struct feature_driver fme_feature_drvs[] = {
+ {FEATURE_DRV(FME_FEATURE_ID_HEADER, FME_FEATURE_HEADER,
+ &fme_hdr_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_THERMAL_MGMT, FME_FEATURE_THERMAL_MGMT,
+ &fme_thermal_mgmt_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_POWER_MGMT, FME_FEATURE_POWER_MGMT,
+ &fme_power_mgmt_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_ERR, FME_FEATURE_GLOBAL_ERR,
+ &fme_global_err_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_PR_MGMT, FME_FEATURE_PR_MGMT,
+ &fme_pr_mgmt_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_DPERF, FME_FEATURE_GLOBAL_DPERF,
+ &fme_global_dperf_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_HSSI_ETH, FME_FEATURE_HSSI_ETH,
+ &fme_hssi_eth_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_EMIF_MGMT, FME_FEATURE_EMIF_MGMT,
+ &fme_emif_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_MAX10_SPI, FME_FEATURE_MAX10_SPI,
+ &fme_spi_master_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_NIOS_SPI, FME_FEATURE_NIOS_SPI,
+ &fme_nios_spi_master_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_I2C_MASTER, FME_FEATURE_I2C_MASTER,
+ &fme_i2c_master_ops),},
+ {FEATURE_DRV(FME_FEATURE_ID_ETH_GROUP, FME_FEATURE_ETH_GROUP,
+ &fme_eth_group_ops),},
+	{0, NULL, NULL}, /* end of array */
+};
+
+static struct feature_driver port_feature_drvs[] = {
+ {FEATURE_DRV(PORT_FEATURE_ID_HEADER, PORT_FEATURE_HEADER,
+ &ifpga_rawdev_port_hdr_ops)},
+ {FEATURE_DRV(PORT_FEATURE_ID_ERROR, PORT_FEATURE_ERR,
+ &ifpga_rawdev_port_error_ops)},
+ {FEATURE_DRV(PORT_FEATURE_ID_UINT, PORT_FEATURE_UINT,
+ &ifpga_rawdev_port_uint_ops)},
+ {FEATURE_DRV(PORT_FEATURE_ID_STP, PORT_FEATURE_STP,
+ &ifpga_rawdev_port_stp_ops)},
+ {FEATURE_DRV(PORT_FEATURE_ID_UAFU, PORT_FEATURE_UAFU,
+ &ifpga_rawdev_port_afu_ops)},
+ {0, NULL, NULL}, /* end of array */
+};
+
+const char *get_fme_feature_name(unsigned int id)
+{
+ struct feature_driver *drv = fme_feature_drvs;
+
+ while (drv->name) {
+ if (drv->id == id)
+ return drv->name;
+
+ drv++;
+ }
+
+ return NULL;
+}
+
+const char *get_port_feature_name(unsigned int id)
+{
+ struct feature_driver *drv = port_feature_drvs;
+
+ while (drv->name) {
+ if (drv->id == id)
+ return drv->name;
+
+ drv++;
+ }
+
+ return NULL;
+}
+
+static void feature_uinit(struct ifpga_feature_list *list)
+{
+ struct ifpga_feature *feature;
+
+ TAILQ_FOREACH(feature, list, next) {
+ if (feature->state != IFPGA_FEATURE_ATTACHED)
+ continue;
+ if (feature->ops && feature->ops->uinit)
+ feature->ops->uinit(feature);
+ }
+}
+
+static int feature_init(struct feature_driver *drv,
+ struct ifpga_feature_list *list)
+{
+ struct ifpga_feature *feature;
+ int ret;
+
+ while (drv->ops) {
+ TAILQ_FOREACH(feature, list, next) {
+ if (feature->state != IFPGA_FEATURE_ATTACHED)
+ continue;
+ if (feature->id == drv->id) {
+ feature->ops = drv->ops;
+ feature->name = drv->name;
+ if (feature->ops->init) {
+ ret = feature->ops->init(feature);
+ if (ret)
+ goto error;
+ }
+ }
+ }
+ drv++;
+ }
+
+ return 0;
+error:
+ feature_uinit(list);
+ return ret;
+}
+
+int fme_hw_init(struct ifpga_fme_hw *fme)
+{
+ int ret;
+
+ if (fme->state != IFPGA_FME_IMPLEMENTED)
+ return -ENODEV;
+
+ ret = feature_init(fme_feature_drvs, &fme->feature_list);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+void fme_hw_uinit(struct ifpga_fme_hw *fme)
+{
+ feature_uinit(&fme->feature_list);
+}
+
+void port_hw_uinit(struct ifpga_port_hw *port)
+{
+ feature_uinit(&port->feature_list);
+}
+
+int port_hw_init(struct ifpga_port_hw *port)
+{
+ int ret;
+
+ if (port->state == IFPGA_PORT_UNUSED)
+ return 0;
+
+ ret = feature_init(port_feature_drvs, &port->feature_list);
+ if (ret)
+ goto error;
+
+ return 0;
+error:
+ port_hw_uinit(port);
+ return ret;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_FEATURE_DEV_H_
+#define _IFPGA_FEATURE_DEV_H_
+
+#include "ifpga_hw.h"
+
+struct feature_driver {
+ u64 id;
+ const char *name;
+ struct ifpga_feature_ops *ops;
+};
+
+/**
+ * FEATURE_DRV - macro used to describe a specific feature driver
+ */
+#define FEATURE_DRV(n, s, p) \
+ .id = (n), .name = (s), .ops = (p)
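+
+/*
+ * Example (mirrors the driver tables in ifpga_feature_dev.c):
+ *
+ *	static struct feature_driver drvs[] = {
+ *		{FEATURE_DRV(FME_FEATURE_ID_HEADER, FME_FEATURE_HEADER,
+ *			     &fme_hdr_ops),},
+ *		{0, NULL, NULL},
+ *	};
+ */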
+
+static inline struct ifpga_port_hw *
+get_port(struct ifpga_hw *hw, u32 port_id)
+{
+ if (!is_valid_port_id(hw, port_id))
+ return NULL;
+
+ return &hw->port[port_id];
+}
+
+#define ifpga_for_each_fme_feature(hw, feature) \
+ TAILQ_FOREACH(feature, &hw->feature_list, next)
+
+#define ifpga_for_each_port_feature(port, feature) \
+ TAILQ_FOREACH(feature, &port->feature_list, next)
+
+static inline struct ifpga_feature *
+get_fme_feature_by_id(struct ifpga_fme_hw *fme, u64 id)
+{
+ struct ifpga_feature *feature;
+
+ ifpga_for_each_fme_feature(fme, feature) {
+ if (feature->id == id)
+ return feature;
+ }
+
+ return NULL;
+}
+
+static inline struct ifpga_feature *
+get_port_feature_by_id(struct ifpga_port_hw *port, u64 id)
+{
+ struct ifpga_feature *feature;
+
+ ifpga_for_each_port_feature(port, feature) {
+ if (feature->id == id)
+ return feature;
+ }
+
+ return NULL;
+}
+
+static inline struct ifpga_feature *
+get_feature_by_id(struct ifpga_feature_list *list, u64 id)
+{
+ struct ifpga_feature *feature;
+
+ TAILQ_FOREACH(feature, list, next)
+ if (feature->id == id)
+ return feature;
+
+ return NULL;
+}
+
+static inline void *
+get_fme_feature_ioaddr_by_index(struct ifpga_fme_hw *fme, int index)
+{
+ struct ifpga_feature *feature =
+ get_feature_by_id(&fme->feature_list, index);
+
+ return feature ? feature->addr : NULL;
+}
+
+static inline void *
+get_port_feature_ioaddr_by_index(struct ifpga_port_hw *port, int index)
+{
+ struct ifpga_feature *feature =
+ get_feature_by_id(&port->feature_list, index);
+
+ return feature ? feature->addr : NULL;
+}
+
+static inline bool
+is_fme_feature_present(struct ifpga_fme_hw *fme, int index)
+{
+ return !!get_fme_feature_ioaddr_by_index(fme, index);
+}
+
+static inline bool
+is_port_feature_present(struct ifpga_port_hw *port, int index)
+{
+ return !!get_port_feature_ioaddr_by_index(port, index);
+}
+
+int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid);
+
+int __fpga_port_disable(struct ifpga_port_hw *port);
+void __fpga_port_enable(struct ifpga_port_hw *port);
+
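+/*
+ * The __fpga_port_*() variants assume the caller already holds
+ * port->lock; the wrappers below take the lock around the raw operation.
+ */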
+static inline int fpga_port_disable(struct ifpga_port_hw *port)
+{
+ int ret;
+
+ spinlock_lock(&port->lock);
+ ret = __fpga_port_disable(port);
+ spinlock_unlock(&port->lock);
+ return ret;
+}
+
+static inline int fpga_port_enable(struct ifpga_port_hw *port)
+{
+ spinlock_lock(&port->lock);
+ __fpga_port_enable(port);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
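+/* A port reset is simply a disable followed by a re-enable. */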
+static inline int __fpga_port_reset(struct ifpga_port_hw *port)
+{
+ int ret;
+
+ ret = __fpga_port_disable(port);
+ if (ret)
+ return ret;
+
+ __fpga_port_enable(port);
+
+ return 0;
+}
+
+static inline int fpga_port_reset(struct ifpga_port_hw *port)
+{
+ int ret;
+
+ spinlock_lock(&port->lock);
+ ret = __fpga_port_reset(port);
+ spinlock_unlock(&port->lock);
+ return ret;
+}
+
+int do_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
+ u64 *status);
+
+int fme_get_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop);
+int fme_set_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop);
+int fme_set_irq(struct ifpga_fme_hw *fme, u32 feature_id, void *irq_set);
+
+int fme_hw_init(struct ifpga_fme_hw *fme);
+void fme_hw_uinit(struct ifpga_fme_hw *fme);
+void port_hw_uinit(struct ifpga_port_hw *port);
+int port_hw_init(struct ifpga_port_hw *port);
+int port_clear_error(struct ifpga_port_hw *port);
+void port_err_mask(struct ifpga_port_hw *port, bool mask);
+int port_err_clear(struct ifpga_port_hw *port, u64 err);
+
+extern struct ifpga_feature_ops fme_hdr_ops;
+extern struct ifpga_feature_ops fme_thermal_mgmt_ops;
+extern struct ifpga_feature_ops fme_power_mgmt_ops;
+extern struct ifpga_feature_ops fme_global_err_ops;
+extern struct ifpga_feature_ops fme_pr_mgmt_ops;
+extern struct ifpga_feature_ops fme_global_iperf_ops;
+extern struct ifpga_feature_ops fme_global_dperf_ops;
+extern struct ifpga_feature_ops fme_hssi_eth_ops;
+extern struct ifpga_feature_ops fme_emif_ops;
+extern struct ifpga_feature_ops fme_spi_master_ops;
+extern struct ifpga_feature_ops fme_i2c_master_ops;
+extern struct ifpga_feature_ops fme_eth_group_ops;
+extern struct ifpga_feature_ops fme_nios_spi_master_ops;
+
+int port_get_prop(struct ifpga_port_hw *port, struct feature_prop *prop);
+int port_set_prop(struct ifpga_port_hw *port, struct feature_prop *prop);
+
+/* This struct is used when parsing uafu irq_set */
+struct fpga_uafu_irq_set {
+ u32 start;
+ u32 count;
+ s32 *evtfds;
+};
+
+int port_set_irq(struct ifpga_port_hw *port, u32 feature_id, void *irq_set);
+const char *get_fme_feature_name(unsigned int id);
+const char *get_port_feature_name(unsigned int id);
+
+extern struct ifpga_feature_ops ifpga_rawdev_port_hdr_ops;
+extern struct ifpga_feature_ops ifpga_rawdev_port_error_ops;
+extern struct ifpga_feature_ops ifpga_rawdev_port_stp_ops;
+extern struct ifpga_feature_ops ifpga_rawdev_port_uint_ops;
+extern struct ifpga_feature_ops ifpga_rawdev_port_afu_ops;
+
+/* helper functions for feature ops */
+int fpga_msix_set_block(struct ifpga_feature *feature, unsigned int start,
+ unsigned int count, s32 *fds);
+
+/* FME network function ops */
+int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset,
+ void *buf, int size);
+int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset,
+ void *buf, int size);
+int fme_mgr_get_eth_group_nums(struct ifpga_fme_hw *fme);
+int fme_mgr_get_eth_group_info(struct ifpga_fme_hw *fme,
+ u8 group_id, struct opae_eth_group_info *info);
+int fme_mgr_eth_group_read_reg(struct ifpga_fme_hw *fme, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data);
+int fme_mgr_eth_group_write_reg(struct ifpga_fme_hw *fme, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data);
+int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme,
+ struct opae_retimer_info *info);
+int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme,
+ struct opae_retimer_status *status);
+#endif /* _IFPGA_FEATURE_DEV_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+#include "opae_spi.h"
+#include "opae_intel_max10.h"
+#include "opae_i2c.h"
+#include "opae_at24_eeprom.h"
+
+#define PWR_THRESHOLD_MAX 0x7F
+
+int fme_get_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop)
+{
+ struct ifpga_feature *feature;
+
+ if (!fme)
+ return -ENOENT;
+
+ feature = get_fme_feature_by_id(fme, prop->feature_id);
+
+ if (feature && feature->ops && feature->ops->get_prop)
+ return feature->ops->get_prop(feature, prop);
+
+ return -ENOENT;
+}
+
+int fme_set_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop)
+{
+ struct ifpga_feature *feature;
+
+ if (!fme)
+ return -ENOENT;
+
+ feature = get_fme_feature_by_id(fme, prop->feature_id);
+
+ if (feature && feature->ops && feature->ops->set_prop)
+ return feature->ops->set_prop(feature, prop);
+
+ return -ENOENT;
+}
+
+int fme_set_irq(struct ifpga_fme_hw *fme, u32 feature_id, void *irq_set)
+{
+ struct ifpga_feature *feature;
+
+ if (!fme)
+ return -ENOENT;
+
+ feature = get_fme_feature_by_id(fme, feature_id);
+
+ if (feature && feature->ops && feature->ops->set_irq)
+ return feature->ops->set_irq(feature, irq_set);
+
+ return -ENOENT;
+}
+
+/* FME private feature header */
+static int fme_hdr_init(struct ifpga_feature *feature)
+{
+ struct feature_fme_header *fme_hdr;
+
+ fme_hdr = (struct feature_fme_header *)feature->addr;
+
+ dev_info(NULL, "FME HDR Init.\n");
+ dev_info(NULL, "FME cap %llx.\n",
+ (unsigned long long)fme_hdr->capability.csr);
+
+ return 0;
+}
+
+static void fme_hdr_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME HDR UInit.\n");
+}
+
+static int fme_hdr_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+ struct feature_header header;
+
+ header.csr = readq(&fme_hdr->header);
+ *revision = header.revision;
+
+ return 0;
+}
+
+static int fme_hdr_get_ports_num(struct ifpga_fme_hw *fme, u64 *ports_num)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+ struct feature_fme_capability fme_capability;
+
+ fme_capability.csr = readq(&fme_hdr->capability);
+ *ports_num = fme_capability.num_ports;
+
+ return 0;
+}
+
+static int fme_hdr_get_cache_size(struct ifpga_fme_hw *fme, u64 *cache_size)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+ struct feature_fme_capability fme_capability;
+
+ fme_capability.csr = readq(&fme_hdr->capability);
+ *cache_size = fme_capability.cache_size;
+
+ return 0;
+}
+
+static int fme_hdr_get_version(struct ifpga_fme_hw *fme, u64 *version)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+ struct feature_fme_capability fme_capability;
+
+ fme_capability.csr = readq(&fme_hdr->capability);
+ *version = fme_capability.fabric_verid;
+
+ return 0;
+}
+
+static int fme_hdr_get_socket_id(struct ifpga_fme_hw *fme, u64 *socket_id)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+ struct feature_fme_capability fme_capability;
+
+ fme_capability.csr = readq(&fme_hdr->capability);
+ *socket_id = fme_capability.socket_id;
+
+ return 0;
+}
+
+static int fme_hdr_get_bitstream_id(struct ifpga_fme_hw *fme,
+ u64 *bitstream_id)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+
+ *bitstream_id = readq(&fme_hdr->bitstream_id);
+
+ return 0;
+}
+
+static int fme_hdr_get_bitstream_metadata(struct ifpga_fme_hw *fme,
+ u64 *bitstream_metadata)
+{
+ struct feature_fme_header *fme_hdr
+ = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+
+ *bitstream_metadata = readq(&fme_hdr->bitstream_md);
+
+ return 0;
+}
+
+static int
+fme_hdr_get_prop(struct ifpga_feature *feature, struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ switch (prop->prop_id) {
+ case FME_HDR_PROP_REVISION:
+ return fme_hdr_get_revision(fme, &prop->data);
+ case FME_HDR_PROP_PORTS_NUM:
+ return fme_hdr_get_ports_num(fme, &prop->data);
+ case FME_HDR_PROP_CACHE_SIZE:
+ return fme_hdr_get_cache_size(fme, &prop->data);
+ case FME_HDR_PROP_VERSION:
+ return fme_hdr_get_version(fme, &prop->data);
+ case FME_HDR_PROP_SOCKET_ID:
+ return fme_hdr_get_socket_id(fme, &prop->data);
+ case FME_HDR_PROP_BITSTREAM_ID:
+ return fme_hdr_get_bitstream_id(fme, &prop->data);
+ case FME_HDR_PROP_BITSTREAM_METADATA:
+ return fme_hdr_get_bitstream_metadata(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_hdr_ops = {
+ .init = fme_hdr_init,
+ .uinit = fme_hdr_uinit,
+ .get_prop = fme_hdr_get_prop,
+};
+
+/* thermal management */
+static int fme_thermal_get_threshold1(struct ifpga_fme_hw *fme, u64 *thres1)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres1 = temp_threshold.tmp_thshold1;
+
+ return 0;
+}
+
+static int fme_thermal_set_threshold1(struct ifpga_fme_hw *fme, u64 thres1)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_header *fme_hdr;
+ struct feature_fme_tmp_threshold tmp_threshold;
+ struct feature_fme_capability fme_capability;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+ fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+
+ spinlock_lock(&fme->lock);
+ tmp_threshold.csr = readq(&thermal->threshold);
+ fme_capability.csr = readq(&fme_hdr->capability);
+
+ if (fme_capability.lock_bit == 1) {
+ spinlock_unlock(&fme->lock);
+ return -EBUSY;
+ } else if (thres1 > 100) {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ } else if (thres1 == 0) {
+ tmp_threshold.tmp_thshold1_enable = 0;
+ tmp_threshold.tmp_thshold1 = thres1;
+ } else {
+ tmp_threshold.tmp_thshold1_enable = 1;
+ tmp_threshold.tmp_thshold1 = thres1;
+ }
+
+ writeq(tmp_threshold.csr, &thermal->threshold);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_thermal_get_threshold2(struct ifpga_fme_hw *fme, u64 *thres2)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres2 = temp_threshold.tmp_thshold2;
+
+ return 0;
+}
+
+static int fme_thermal_set_threshold2(struct ifpga_fme_hw *fme, u64 thres2)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_header *fme_hdr;
+ struct feature_fme_tmp_threshold tmp_threshold;
+ struct feature_fme_capability fme_capability;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+ fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+
+ spinlock_lock(&fme->lock);
+ tmp_threshold.csr = readq(&thermal->threshold);
+ fme_capability.csr = readq(&fme_hdr->capability);
+
+ if (fme_capability.lock_bit == 1) {
+ spinlock_unlock(&fme->lock);
+ return -EBUSY;
+ } else if (thres2 > 100) {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ } else if (thres2 == 0) {
+ tmp_threshold.tmp_thshold2_enable = 0;
+ tmp_threshold.tmp_thshold2 = thres2;
+ } else {
+ tmp_threshold.tmp_thshold2_enable = 1;
+ tmp_threshold.tmp_thshold2 = thres2;
+ }
+
+ writeq(tmp_threshold.csr, &thermal->threshold);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_thermal_get_threshold_trip(struct ifpga_fme_hw *fme,
+ u64 *thres_trip)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres_trip = temp_threshold.therm_trip_thshold;
+
+ return 0;
+}
+
+static int fme_thermal_get_threshold1_reached(struct ifpga_fme_hw *fme,
+ u64 *thres1_reached)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres1_reached = temp_threshold.thshold1_status;
+
+ return 0;
+}
+
+static int fme_thermal_get_threshold2_reached(struct ifpga_fme_hw *fme,
+ u64 *thres1_reached)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres1_reached = temp_threshold.thshold2_status;
+
+ return 0;
+}
+
+static int fme_thermal_get_threshold1_policy(struct ifpga_fme_hw *fme,
+ u64 *thres1_policy)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold temp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_threshold.csr = readq(&thermal->threshold);
+ *thres1_policy = temp_threshold.thshold_policy;
+
+ return 0;
+}
+
+static int fme_thermal_set_threshold1_policy(struct ifpga_fme_hw *fme,
+ u64 thres1_policy)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_tmp_threshold tmp_threshold;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ spinlock_lock(&fme->lock);
+ tmp_threshold.csr = readq(&thermal->threshold);
+
+ if (thres1_policy == 0) {
+ tmp_threshold.thshold_policy = 0;
+ } else if (thres1_policy == 1) {
+ tmp_threshold.thshold_policy = 1;
+ } else {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ }
+
+ writeq(tmp_threshold.csr, &thermal->threshold);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_thermal_get_temperature(struct ifpga_fme_hw *fme, u64 *temp)
+{
+ struct feature_fme_thermal *thermal;
+ struct feature_fme_temp_rdsensor_fmt1 temp_rdsensor_fmt1;
+
+ thermal = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+
+ temp_rdsensor_fmt1.csr = readq(&thermal->rdsensor_fm1);
+ *temp = temp_rdsensor_fmt1.fpga_temp;
+
+ return 0;
+}
+
+static int fme_thermal_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
+{
+ struct feature_fme_thermal *fme_thermal
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_THERMAL_MGMT);
+ struct feature_header header;
+
+ header.csr = readq(&fme_thermal->header);
+ *revision = header.revision;
+
+ return 0;
+}
+
+#define FME_THERMAL_CAP_NO_TMP_THRESHOLD 0x1
+
+static int fme_thermal_mgmt_init(struct ifpga_feature *feature)
+{
+	struct feature_fme_thermal *fme_thermal;
+	struct feature_fme_tmp_threshold_cap thermal_cap;
+
+	dev_info(NULL, "FME thermal mgmt Init.\n");
+
+	fme_thermal = (struct feature_fme_thermal *)feature->addr;
+	thermal_cap.csr = readq(&fme_thermal->threshold_cap);
+
+	dev_info(NULL, "FME thermal cap %llx.\n",
+		 (unsigned long long)thermal_cap.csr);
+
+ if (thermal_cap.tmp_thshold_disabled)
+ feature->cap |= FME_THERMAL_CAP_NO_TMP_THRESHOLD;
+
+ return 0;
+}
+
+static void fme_thermal_mgmt_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME thermal mgmt UInit.\n");
+}
+
+static int
+fme_thermal_set_prop(struct ifpga_feature *feature, struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ if (feature->cap & FME_THERMAL_CAP_NO_TMP_THRESHOLD)
+ return -ENOENT;
+
+ switch (prop->prop_id) {
+ case FME_THERMAL_PROP_THRESHOLD1:
+ return fme_thermal_set_threshold1(fme, prop->data);
+ case FME_THERMAL_PROP_THRESHOLD2:
+ return fme_thermal_set_threshold2(fme, prop->data);
+ case FME_THERMAL_PROP_THRESHOLD1_POLICY:
+ return fme_thermal_set_threshold1_policy(fme, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int
+fme_thermal_get_prop(struct ifpga_feature *feature, struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ if (feature->cap & FME_THERMAL_CAP_NO_TMP_THRESHOLD &&
+ prop->prop_id != FME_THERMAL_PROP_TEMPERATURE &&
+ prop->prop_id != FME_THERMAL_PROP_REVISION)
+ return -ENOENT;
+
+ switch (prop->prop_id) {
+ case FME_THERMAL_PROP_THRESHOLD1:
+ return fme_thermal_get_threshold1(fme, &prop->data);
+ case FME_THERMAL_PROP_THRESHOLD2:
+ return fme_thermal_get_threshold2(fme, &prop->data);
+ case FME_THERMAL_PROP_THRESHOLD_TRIP:
+ return fme_thermal_get_threshold_trip(fme, &prop->data);
+ case FME_THERMAL_PROP_THRESHOLD1_REACHED:
+ return fme_thermal_get_threshold1_reached(fme, &prop->data);
+ case FME_THERMAL_PROP_THRESHOLD2_REACHED:
+ return fme_thermal_get_threshold2_reached(fme, &prop->data);
+ case FME_THERMAL_PROP_THRESHOLD1_POLICY:
+ return fme_thermal_get_threshold1_policy(fme, &prop->data);
+ case FME_THERMAL_PROP_TEMPERATURE:
+ return fme_thermal_get_temperature(fme, &prop->data);
+ case FME_THERMAL_PROP_REVISION:
+ return fme_thermal_get_revision(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_thermal_mgmt_ops = {
+ .init = fme_thermal_mgmt_init,
+ .uinit = fme_thermal_mgmt_uinit,
+ .get_prop = fme_thermal_get_prop,
+ .set_prop = fme_thermal_set_prop,
+};
+
+static int fme_pwr_get_consumed(struct ifpga_fme_hw *fme, u64 *consumed)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_status pm_status;
+
+ pm_status.csr = readq(&fme_power->status);
+
+ *consumed = pm_status.pwr_consumed;
+
+ return 0;
+}
+
+static int fme_pwr_get_threshold1(struct ifpga_fme_hw *fme, u64 *threshold)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ *threshold = pm_ap_threshold.threshold1;
+
+ return 0;
+}
+
+static int fme_pwr_set_threshold1(struct ifpga_fme_hw *fme, u64 threshold)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ spinlock_lock(&fme->lock);
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ if (threshold <= PWR_THRESHOLD_MAX) {
+ pm_ap_threshold.threshold1 = threshold;
+ } else {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ }
+
+ writeq(pm_ap_threshold.csr, &fme_power->threshold);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_pwr_get_threshold2(struct ifpga_fme_hw *fme, u64 *threshold)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ *threshold = pm_ap_threshold.threshold2;
+
+ return 0;
+}
+
+static int fme_pwr_set_threshold2(struct ifpga_fme_hw *fme, u64 threshold)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ spinlock_lock(&fme->lock);
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ if (threshold <= PWR_THRESHOLD_MAX) {
+ pm_ap_threshold.threshold2 = threshold;
+ } else {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ }
+
+ writeq(pm_ap_threshold.csr, &fme_power->threshold);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_pwr_get_threshold1_status(struct ifpga_fme_hw *fme,
+ u64 *threshold_status)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ *threshold_status = pm_ap_threshold.threshold1_status;
+
+ return 0;
+}
+
+static int fme_pwr_get_threshold2_status(struct ifpga_fme_hw *fme,
+ u64 *threshold_status)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_ap_threshold pm_ap_threshold;
+
+ pm_ap_threshold.csr = readq(&fme_power->threshold);
+
+ *threshold_status = pm_ap_threshold.threshold2_status;
+
+ return 0;
+}
+
+static int fme_pwr_get_rtl(struct ifpga_fme_hw *fme, u64 *rtl)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_status pm_status;
+
+ pm_status.csr = readq(&fme_power->status);
+
+ *rtl = pm_status.fpga_latency_report;
+
+ return 0;
+}
+
+static int fme_pwr_get_xeon_limit(struct ifpga_fme_hw *fme, u64 *limit)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_xeon_limit xeon_limit;
+
+ xeon_limit.csr = readq(&fme_power->xeon_limit);
+
+ if (!xeon_limit.enable)
+ xeon_limit.pwr_limit = 0;
+
+ *limit = xeon_limit.pwr_limit;
+
+ return 0;
+}
+
+static int fme_pwr_get_fpga_limit(struct ifpga_fme_hw *fme, u64 *limit)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_fme_pm_fpga_limit fpga_limit;
+
+ fpga_limit.csr = readq(&fme_power->fpga_limit);
+
+ if (!fpga_limit.enable)
+ fpga_limit.pwr_limit = 0;
+
+ *limit = fpga_limit.pwr_limit;
+
+ return 0;
+}
+
+static int fme_pwr_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
+{
+ struct feature_fme_power *fme_power
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_POWER_MGMT);
+ struct feature_header header;
+
+ header.csr = readq(&fme_power->header);
+ *revision = header.revision;
+
+ return 0;
+}
+
+static int fme_power_mgmt_init(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME power mgmt Init.\n");
+
+ return 0;
+}
+
+static void fme_power_mgmt_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME power mgmt UInit.\n");
+}
+
+static int fme_power_mgmt_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ switch (prop->prop_id) {
+ case FME_PWR_PROP_CONSUMED:
+ return fme_pwr_get_consumed(fme, &prop->data);
+ case FME_PWR_PROP_THRESHOLD1:
+ return fme_pwr_get_threshold1(fme, &prop->data);
+ case FME_PWR_PROP_THRESHOLD2:
+ return fme_pwr_get_threshold2(fme, &prop->data);
+ case FME_PWR_PROP_THRESHOLD1_STATUS:
+ return fme_pwr_get_threshold1_status(fme, &prop->data);
+ case FME_PWR_PROP_THRESHOLD2_STATUS:
+ return fme_pwr_get_threshold2_status(fme, &prop->data);
+ case FME_PWR_PROP_RTL:
+ return fme_pwr_get_rtl(fme, &prop->data);
+ case FME_PWR_PROP_XEON_LIMIT:
+ return fme_pwr_get_xeon_limit(fme, &prop->data);
+ case FME_PWR_PROP_FPGA_LIMIT:
+ return fme_pwr_get_fpga_limit(fme, &prop->data);
+ case FME_PWR_PROP_REVISION:
+ return fme_pwr_get_revision(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_power_mgmt_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ switch (prop->prop_id) {
+ case FME_PWR_PROP_THRESHOLD1:
+ return fme_pwr_set_threshold1(fme, prop->data);
+ case FME_PWR_PROP_THRESHOLD2:
+ return fme_pwr_set_threshold2(fme, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_power_mgmt_ops = {
+ .init = fme_power_mgmt_init,
+ .uinit = fme_power_mgmt_uinit,
+ .get_prop = fme_power_mgmt_get_prop,
+ .set_prop = fme_power_mgmt_set_prop,
+};
+
+static int fme_hssi_eth_init(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+ return 0;
+}
+
+static void fme_hssi_eth_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+}
+
+struct ifpga_feature_ops fme_hssi_eth_ops = {
+ .init = fme_hssi_eth_init,
+ .uinit = fme_hssi_eth_uinit,
+};
+
+static int fme_emif_init(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+ return 0;
+}
+
+static void fme_emif_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+}
+
+struct ifpga_feature_ops fme_emif_ops = {
+ .init = fme_emif_init,
+ .uinit = fme_emif_uinit,
+};
+
+static const char *board_type_to_string(u32 type)
+{
+ switch (type) {
+ case VC_8_10G:
+ return "VC_8x10G";
+ case VC_4_25G:
+ return "VC_4x25G";
+ case VC_2_1_25:
+ return "VC_2x1x25G";
+ case VC_4_25G_2_25G:
+ return "VC_4x25G+2x25G";
+ case VC_2_2_25G:
+ return "VC_2x2x25G";
+ }
+
+ return "unknown";
+}
+
+static int board_type_to_info(u32 type,
+ struct ifpga_fme_board_info *info)
+{
+ switch (type) {
+ case VC_8_10G:
+ info->nums_of_retimer = 2;
+ info->ports_per_retimer = 4;
+ info->nums_of_fvl = 2;
+ info->ports_per_fvl = 4;
+ break;
+ case VC_4_25G:
+ info->nums_of_retimer = 1;
+ info->ports_per_retimer = 4;
+ info->nums_of_fvl = 2;
+ info->ports_per_fvl = 2;
+ break;
+ case VC_2_1_25:
+ info->nums_of_retimer = 2;
+ info->ports_per_retimer = 1;
+ info->nums_of_fvl = 1;
+ info->ports_per_fvl = 2;
+ break;
+ case VC_2_2_25G:
+ info->nums_of_retimer = 2;
+ info->ports_per_retimer = 2;
+ info->nums_of_fvl = 2;
+ info->ports_per_fvl = 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
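+/*
+ * The board type, version and build hash are encoded as fields of the
+ * 64-bit bitstream id read from the FME header.
+ */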
+static int fme_get_board_interface(struct ifpga_fme_hw *fme)
+{
+ struct fme_bitstream_id id;
+
+ if (fme_hdr_get_bitstream_id(fme, &id.id))
+ return -EINVAL;
+
+ fme->board_info.type = id.interface;
+ fme->board_info.build_hash = id.hash;
+ fme->board_info.debug_version = id.debug;
+ fme->board_info.major_version = id.major;
+ fme->board_info.minor_version = id.minor;
+
+ dev_info(fme, "board type: %s major_version:%u minor_version:%u build_hash:%u\n",
+ board_type_to_string(fme->board_info.type),
+ fme->board_info.major_version,
+ fme->board_info.minor_version,
+ fme->board_info.build_hash);
+
+ if (board_type_to_info(fme->board_info.type, &fme->board_info))
+ return -EINVAL;
+
+ dev_info(fme, "get board info: nums_retimers %d ports_per_retimer %d nums_fvl %d ports_per_fvl %d\n",
+ fme->board_info.nums_of_retimer,
+ fme->board_info.ports_per_retimer,
+ fme->board_info.nums_of_fvl,
+ fme->board_info.ports_per_fvl);
+
+ return 0;
+}
+
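+/*
+ * Sanity-check the SPI link by reading a MAX10 register (presumably a
+ * test/scratch register) that is expected to hold the magic 0x87654321.
+ */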
+static int spi_self_checking(void)
+{
+ u32 val;
+ int ret;
+
+ ret = max10_reg_read(0x30043c, &val);
+ if (ret)
+ return -EIO;
+
+ if (val != 0x87654321) {
+		dev_err(NULL, "Reading MAX10 test register failed: 0x%x\n", val);
+ return -EIO;
+ }
+
+	dev_info(NULL, "MAX10 test register read OK, SPI self-test passed\n");
+
+ return 0;
+}
+
+static int fme_spi_init(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+ struct altera_spi_device *spi_master;
+ struct intel_max10_device *max10;
+ int ret = 0;
+
+ dev_info(fme, "FME SPI Master (Max10) Init.\n");
+ dev_debug(fme, "FME SPI base addr %p.\n",
+ feature->addr);
+ dev_debug(fme, "spi param=0x%llx\n",
+ (unsigned long long)opae_readq(feature->addr + 0x8));
+
+ spi_master = altera_spi_alloc(feature->addr, TYPE_SPI);
+ if (!spi_master)
+ return -ENODEV;
+
+ altera_spi_init(spi_master);
+
+ max10 = intel_max10_device_probe(spi_master, 0);
+ if (!max10) {
+ ret = -ENODEV;
+ dev_err(fme, "max10 init fail\n");
+ goto spi_fail;
+ }
+
+ fme->max10_dev = max10;
+
+ /* SPI self test */
+ if (spi_self_checking()) {
+ ret = -EIO;
+ goto max10_fail;
+ }
+
+ return ret;
+
+max10_fail:
+ intel_max10_device_remove(fme->max10_dev);
+spi_fail:
+ altera_spi_release(spi_master);
+ return ret;
+}
+
+static void fme_spi_uinit(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+
+ if (fme->max10_dev)
+ intel_max10_device_remove(fme->max10_dev);
+}
+
+struct ifpga_feature_ops fme_spi_master_ops = {
+ .init = fme_spi_init,
+ .uinit = fme_spi_uinit,
+};
+
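+/*
+ * Poll NIOS_SPI_INIT_DONE every 100ms until the NIOS firmware reports
+ * init done, giving up after roughly 10 seconds.
+ */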
+static int nios_spi_wait_init_done(struct altera_spi_device *dev)
+{
+ u32 val = 0;
+ unsigned long timeout = msecs_to_timer_cycles(10000);
+ unsigned long ticks;
+
+ do {
+ if (spi_reg_read(dev, NIOS_SPI_INIT_DONE, &val))
+ return -EIO;
+ if (val)
+ break;
+
+ ticks = rte_get_timer_cycles();
+ if (time_after(ticks, timeout))
+ return -ETIMEDOUT;
+ msleep(100);
+ } while (!val);
+
+ return 0;
+}
+
+static int nios_spi_check_error(struct altera_spi_device *dev)
+{
+ u32 value = 0;
+
+ if (spi_reg_read(dev, NIOS_SPI_INIT_STS0, &value))
+ return -EIO;
+
+ dev_debug(dev, "SPI init status0 0x%x\n", value);
+
+ /* Error code: 0xFFF0 to 0xFFFC */
+ if (value >= 0xFFF0 && value <= 0xFFFC)
+ return -EINVAL;
+
+ value = 0;
+ if (spi_reg_read(dev, NIOS_SPI_INIT_STS1, &value))
+ return -EIO;
+
+ dev_debug(dev, "SPI init status1 0x%x\n", value);
+
+ /* Error code: 0xFFF0 to 0xFFFC */
+ if (value >= 0xFFF0 && value <= 0xFFFC)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int fme_nios_spi_init(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+ struct altera_spi_device *spi_master;
+ struct intel_max10_device *max10;
+ int ret = 0;
+
+ dev_info(fme, "FME SPI Master (NIOS) Init.\n");
+ dev_debug(fme, "FME SPI base addr %p.\n",
+ feature->addr);
+ dev_debug(fme, "spi param=0x%llx\n",
+ (unsigned long long)opae_readq(feature->addr + 0x8));
+
+ spi_master = altera_spi_alloc(feature->addr, TYPE_NIOS_SPI);
+ if (!spi_master)
+ return -ENODEV;
+
+	/*
+	 * 1. wait until the A10 NIOS initialization has finished and
+	 * the SPI master has been released to the host
+	 */
+ ret = nios_spi_wait_init_done(spi_master);
+ if (ret != 0) {
+ dev_err(fme, "FME NIOS_SPI init fail\n");
+ goto release_dev;
+ }
+
+	dev_info(fme, "FME NIOS_SPI initialization done\n");
+
+	/* 2. check whether any error occurred during initialization */
+	if (nios_spi_check_error(spi_master))
+		dev_info(fme, "NIOS_SPI init done, but errors were detected\n");
+
+	/* 3. init the SPI master */
+ altera_spi_init(spi_master);
+
+ /* init the max10 device */
+ max10 = intel_max10_device_probe(spi_master, 0);
+ if (!max10) {
+ ret = -ENODEV;
+ dev_err(fme, "max10 init fail\n");
+ goto release_dev;
+ }
+
+ fme_get_board_interface(fme);
+
+ fme->max10_dev = max10;
+
+ /* SPI self test */
+ if (spi_self_checking())
+ goto spi_fail;
+
+ return ret;
+
+spi_fail:
+ intel_max10_device_remove(fme->max10_dev);
+release_dev:
+ altera_spi_release(spi_master);
+ return -ENODEV;
+}
+
+static void fme_nios_spi_uinit(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+
+ if (fme->max10_dev)
+ intel_max10_device_remove(fme->max10_dev);
+}
+
+struct ifpga_feature_ops fme_nios_spi_master_ops = {
+ .init = fme_nios_spi_init,
+ .uinit = fme_nios_spi_uinit,
+};
+
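+/*
+ * Verify I2C access to the MAC ROM (AT24 EEPROM) by writing a test
+ * pattern and reading it back for comparison.
+ */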
+static int i2c_mac_rom_test(struct altera_i2c_dev *dev)
+{
+ char buf[20];
+ int ret;
+ char read_buf[20] = {0,};
+ const char *string = "1a2b3c4d5e";
+
+ opae_memcpy(buf, string, strlen(string));
+
+ ret = at24_eeprom_write(dev, AT24512_SLAVE_ADDR, 0,
+ (u8 *)buf, strlen(string));
+ if (ret < 0) {
+ dev_err(NULL, "write i2c error:%d\n", ret);
+ return ret;
+ }
+
+ ret = at24_eeprom_read(dev, AT24512_SLAVE_ADDR, 0,
+ (u8 *)read_buf, strlen(string));
+ if (ret < 0) {
+ dev_err(NULL, "read i2c error:%d\n", ret);
+ return ret;
+ }
+
+ if (memcmp(buf, read_buf, strlen(string))) {
+ dev_err(NULL, "%s test fail!\n", __func__);
+ return -EFAULT;
+ }
+
+ dev_info(NULL, "%s test successful\n", __func__);
+
+ return 0;
+}
+
+static int fme_i2c_init(struct ifpga_feature *feature)
+{
+ struct feature_fme_i2c *i2c;
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+
+ i2c = (struct feature_fme_i2c *)feature->addr;
+
+ dev_info(NULL, "FME I2C Master Init.\n");
+
+ fme->i2c_master = altera_i2c_probe(i2c);
+ if (!fme->i2c_master)
+ return -ENODEV;
+
+ /* MAC ROM self test */
+ i2c_mac_rom_test(fme->i2c_master);
+
+ return 0;
+}
+
+static void fme_i2c_uninit(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+
+ altera_i2c_remove(fme->i2c_master);
+}
+
+struct ifpga_feature_ops fme_i2c_master_ops = {
+ .init = fme_i2c_init,
+ .uinit = fme_i2c_uninit,
+};
+
+static int fme_eth_group_init(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
+ struct eth_group_device *dev;
+
+ dev = (struct eth_group_device *)eth_group_probe(feature->addr);
+ if (!dev)
+ return -ENODEV;
+
+ fme->eth_dev[dev->group_id] = dev;
+
+ fme->eth_group_region[dev->group_id].addr =
+ feature->addr;
+ fme->eth_group_region[dev->group_id].phys_addr =
+ feature->phys_addr;
+ fme->eth_group_region[dev->group_id].len =
+ feature->size;
+
+ fme->nums_eth_dev++;
+
+ dev_info(NULL, "FME PHY Group %d Init.\n", dev->group_id);
+ dev_info(NULL, "found %d eth group, addr %p phys_addr 0x%llx len %u\n",
+ dev->group_id, feature->addr,
+ (unsigned long long)feature->phys_addr,
+ feature->size);
+
+ return 0;
+}
+
+static void fme_eth_group_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+}
+
+struct ifpga_feature_ops fme_eth_group_ops = {
+ .init = fme_eth_group_init,
+ .uinit = fme_eth_group_uinit,
+};
+
+int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset,
+ void *buf, int size)
+{
+ struct altera_i2c_dev *dev;
+
+ dev = fme->i2c_master;
+ if (!dev)
+ return -ENODEV;
+
+ return at24_eeprom_read(dev, AT24512_SLAVE_ADDR, offset, buf, size);
+}
+
+int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset,
+ void *buf, int size)
+{
+ struct altera_i2c_dev *dev;
+
+ dev = fme->i2c_master;
+ if (!dev)
+ return -ENODEV;
+
+ return at24_eeprom_write(dev, AT24512_SLAVE_ADDR, offset, buf, size);
+}
+
+static struct eth_group_device *get_eth_group_dev(struct ifpga_fme_hw *fme,
+ u8 group_id)
+{
+ struct eth_group_device *dev;
+
+ if (group_id > (MAX_ETH_GROUP_DEVICES - 1))
+ return NULL;
+
+ dev = (struct eth_group_device *)fme->eth_dev[group_id];
+ if (!dev)
+ return NULL;
+
+ if (dev->status != ETH_GROUP_DEV_ATTACHED)
+ return NULL;
+
+ return dev;
+}
+
+int fme_mgr_get_eth_group_nums(struct ifpga_fme_hw *fme)
+{
+ return fme->nums_eth_dev;
+}
+
+int fme_mgr_get_eth_group_info(struct ifpga_fme_hw *fme,
+ u8 group_id, struct opae_eth_group_info *info)
+{
+ struct eth_group_device *dev;
+
+ dev = get_eth_group_dev(fme, group_id);
+ if (!dev)
+ return -ENODEV;
+
+ info->group_id = group_id;
+ info->speed = dev->speed;
+ info->nums_of_mac = dev->mac_num;
+ info->nums_of_phy = dev->phy_num;
+
+ return 0;
+}
+
+int fme_mgr_eth_group_read_reg(struct ifpga_fme_hw *fme, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data)
+{
+ struct eth_group_device *dev;
+
+ dev = get_eth_group_dev(fme, group_id);
+ if (!dev)
+ return -ENODEV;
+
+ return eth_group_read_reg(dev, type, index, addr, data);
+}
+
+int fme_mgr_eth_group_write_reg(struct ifpga_fme_hw *fme, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data)
+{
+ struct eth_group_device *dev;
+
+ dev = get_eth_group_dev(fme, group_id);
+ if (!dev)
+ return -ENODEV;
+
+ return eth_group_write_reg(dev, type, index, addr, data);
+}
+
+static int fme_get_eth_group_speed(struct ifpga_fme_hw *fme,
+ u8 group_id)
+{
+ struct eth_group_device *dev;
+
+ dev = get_eth_group_dev(fme, group_id);
+ if (!dev)
+ return -ENODEV;
+
+ return dev->speed;
+}
+
+int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme,
+ struct opae_retimer_info *info)
+{
+ struct intel_max10_device *dev;
+
+ dev = (struct intel_max10_device *)fme->max10_dev;
+ if (!dev)
+ return -ENODEV;
+
+ info->nums_retimer = fme->board_info.nums_of_retimer;
+ info->ports_per_retimer = fme->board_info.ports_per_retimer;
+ info->nums_fvl = fme->board_info.nums_of_fvl;
+ info->ports_per_fvl = fme->board_info.ports_per_fvl;
+
+	/* The speed of the PKVL is identical to the eth group's speed */
+ info->support_speed = fme_get_eth_group_speed(fme,
+ LINE_SIDE_GROUP_ID);
+
+ return 0;
+}
+
+int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme,
+ struct opae_retimer_status *status)
+{
+ struct intel_max10_device *dev;
+ unsigned int val;
+
+ dev = (struct intel_max10_device *)fme->max10_dev;
+ if (!dev)
+ return -ENODEV;
+
+ if (max10_reg_read(PKVL_LINK_STATUS, &val)) {
+ dev_err(dev, "%s: read pkvl status fail\n", __func__);
+ return -EINVAL;
+ }
+
+	/* The speed of the PKVL is identical to the eth group's speed */
+ status->speed = fme_get_eth_group_speed(fme,
+ LINE_SIDE_GROUP_ID);
+
+ status->line_link_bitmap = val;
+
+ dev_debug(dev, "get retimer status: speed:%d. line_link_bitmap:0x%x\n",
+ status->speed,
+ status->line_link_bitmap);
+
+ return 0;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+#define PERF_OBJ_ROOT_ID 0xff
+
+static int fme_dperf_get_clock(struct ifpga_fme_hw *fme, u64 *clock)
+{
+ struct feature_fme_dperf *dperf;
+ struct feature_fme_dfpmon_clk_ctr clk;
+
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+ clk.afu_interf_clock = readq(&dperf->clk);
+
+ *clock = clk.afu_interf_clock;
+ return 0;
+}
+
+static int fme_dperf_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
+{
+ struct feature_fme_dperf *dperf;
+ struct feature_header header;
+
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+ header.csr = readq(&dperf->header);
+ *revision = header.revision;
+
+ return 0;
+}
+
+#define DPERF_TIMEOUT 30
+
+static bool fabric_pobj_is_enabled(int port_id,
+ struct feature_fme_dperf *dperf)
+{
+ struct feature_fme_dfpmon_fab_ctl ctl;
+
+ ctl.csr = readq(&dperf->fab_ctl);
+
+ if (ctl.port_filter == FAB_DISABLE_FILTER)
+ return port_id == PERF_OBJ_ROOT_ID;
+
+ return port_id == ctl.port_id;
+}
+
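+/*
+ * Select the fabric event in the control register, then poll the
+ * counter register until it reflects the selected event code before
+ * sampling the count.
+ */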
+static u64 read_fabric_counter(struct ifpga_fme_hw *fme, u8 port_id,
+ enum dperf_fab_events fab_event)
+{
+ struct feature_fme_dfpmon_fab_ctl ctl;
+ struct feature_fme_dfpmon_fab_ctr ctr;
+ struct feature_fme_dperf *dperf;
+ u64 counter = 0;
+
+ spinlock_lock(&fme->lock);
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+
+ /* if it is disabled, force the counter to return zero. */
+ if (!fabric_pobj_is_enabled(port_id, dperf))
+ goto exit;
+
+ ctl.csr = readq(&dperf->fab_ctl);
+ ctl.fab_evtcode = fab_event;
+ writeq(ctl.csr, &dperf->fab_ctl);
+
+ ctr.event_code = fab_event;
+
+ if (fpga_wait_register_field(event_code, ctr,
+ &dperf->fab_ctr, DPERF_TIMEOUT, 1)) {
+		dev_err(fme, "timeout, unmatched fabric event type in counter registers.\n");
+ spinlock_unlock(&fme->lock);
+ return -ETIMEDOUT;
+ }
+
+ ctr.csr = readq(&dperf->fab_ctr);
+ counter = ctr.fab_cnt;
+exit:
+ spinlock_unlock(&fme->lock);
+ return counter;
+}
+
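+/* Generate a per-event getter on top of read_fabric_counter(). */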
+#define FAB_PORT_SHOW(name, event) \
+static int fme_dperf_get_fab_port_##name(struct ifpga_fme_hw *fme, \
+ u8 port_id, u64 *counter) \
+{ \
+ *counter = read_fabric_counter(fme, port_id, event); \
+ return 0; \
+}
+
+FAB_PORT_SHOW(pcie0_read, DPERF_FAB_PCIE0_RD);
+FAB_PORT_SHOW(pcie0_write, DPERF_FAB_PCIE0_WR);
+FAB_PORT_SHOW(mmio_read, DPERF_FAB_MMIO_RD);
+FAB_PORT_SHOW(mmio_write, DPERF_FAB_MMIO_WR);
+
+static int fme_dperf_get_fab_port_enable(struct ifpga_fme_hw *fme,
+ u8 port_id, u64 *enable)
+{
+ struct feature_fme_dperf *dperf;
+ int status;
+
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+
+ status = fabric_pobj_is_enabled(port_id, dperf);
+ *enable = (u64)status;
+
+ return 0;
+}
+
+/*
+ * Enabling the event counter for one port, or for all ports, automatically
+ * disables any fabric event counter that was enabled before.
+ */
+static int fme_dperf_set_fab_port_enable(struct ifpga_fme_hw *fme,
+ u8 port_id, u64 enable)
+{
+ struct feature_fme_dfpmon_fab_ctl ctl;
+ struct feature_fme_dperf *dperf;
+ bool state;
+
+ state = !!enable;
+
+ if (!state)
+ return -EINVAL;
+
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+
+ /* if it is already enabled. */
+ if (fabric_pobj_is_enabled(port_id, dperf))
+ return 0;
+
+ spinlock_lock(&fme->lock);
+ ctl.csr = readq(&dperf->fab_ctl);
+ if (port_id == PERF_OBJ_ROOT_ID) {
+ ctl.port_filter = FAB_DISABLE_FILTER;
+ } else {
+ ctl.port_filter = FAB_ENABLE_FILTER;
+ ctl.port_id = port_id;
+ }
+
+ writeq(ctl.csr, &dperf->fab_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_dperf_get_fab_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
+{
+ struct feature_fme_dperf *dperf;
+ struct feature_fme_dfpmon_fab_ctl ctl;
+
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+ ctl.csr = readq(&dperf->fab_ctl);
+ *freeze = (u64)ctl.freeze;
+
+ return 0;
+}
+
+static int fme_dperf_set_fab_freeze(struct ifpga_fme_hw *fme, u64 freeze)
+{
+ struct feature_fme_dperf *dperf;
+ struct feature_fme_dfpmon_fab_ctl ctl;
+ bool state;
+
+ state = !!freeze;
+
+ spinlock_lock(&fme->lock);
+ dperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_DPERF);
+ ctl.csr = readq(&dperf->fab_ctl);
+ ctl.freeze = state;
+ writeq(ctl.csr, &dperf->fab_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+#define PERF_MAX_PORT_NUM 1
+
+static int fme_global_dperf_init(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME global_dperf Init.\n");
+
+ return 0;
+}
+
+static void fme_global_dperf_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME global_dperf UInit.\n");
+}
+
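+/*
+ * A prop_id is decomposed into TOP/SUB/ID fields: TOP selects the
+ * group (e.g. fabric), SUB the port (or PERF_PROP_SUB_UNUSED for the
+ * root object) and ID the individual property.
+ */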
+static int fme_dperf_fab_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x1: /* FREEZE */
+ return fme_dperf_get_fab_freeze(fme, &prop->data);
+ case 0x2: /* PCIE0_READ */
+ return fme_dperf_get_fab_port_pcie0_read(fme, sub, &prop->data);
+ case 0x3: /* PCIE0_WRITE */
+ return fme_dperf_get_fab_port_pcie0_write(fme, sub,
+ &prop->data);
+ case 0x4: /* MMIO_READ */
+ return fme_dperf_get_fab_port_mmio_read(fme, sub, &prop->data);
+ case 0x5: /* MMIO_WRITE */
+ return fme_dperf_get_fab_port_mmio_write(fme, sub, &prop->data);
+ case 0x6: /* ENABLE */
+ return fme_dperf_get_fab_port_enable(fme, sub, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_dperf_root_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+
+ switch (id) {
+ case 0x1: /* CLOCK */
+ return fme_dperf_get_clock(fme, &prop->data);
+ case 0x2: /* REVISION */
+ return fme_dperf_get_revision(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_dperf_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+
+ switch (top) {
+ case PERF_PROP_TOP_FAB:
+ return fme_dperf_fab_get_prop(feature, prop);
+ case PERF_PROP_TOP_UNUSED:
+ return fme_dperf_root_get_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_dperf_fab_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x1: /* FREEZE - fab root only prop */
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+ return fme_dperf_set_fab_freeze(fme, prop->data);
+ case 0x6: /* ENABLE - fab both root and sub */
+ return fme_dperf_set_fab_port_enable(fme, sub, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_dperf_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+
+ switch (top) {
+ case PERF_PROP_TOP_FAB:
+ return fme_dperf_fab_set_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_global_dperf_ops = {
+ .init = fme_global_dperf_init,
+ .uinit = fme_global_dperf_uinit,
+ .get_prop = fme_global_dperf_get_prop,
+	.set_prop = fme_global_dperf_set_prop,
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+static int fme_err_get_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_error0 fme_error0;
+
+ fme_error0.csr = readq(&fme_err->fme_err);
+ *val = fme_error0.csr;
+
+ return 0;
+}
+
+static int fme_err_get_first_error(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_first_error fme_first_err;
+
+ fme_first_err.csr = readq(&fme_err->fme_first_err);
+ *val = fme_first_err.err_reg_status;
+
+ return 0;
+}
+
+static int fme_err_get_next_error(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_next_error fme_next_err;
+
+ fme_next_err.csr = readq(&fme_err->fme_next_err);
+ *val = fme_next_err.err_reg_status;
+
+ return 0;
+}
+
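+/*
+ * Clear the error registers: mask error reporting first, verify the
+ * caller saw the current error state (otherwise -EBUSY), write back the
+ * latched values to clear them, then restore the default mask.
+ */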
+static int fme_err_set_clear(struct ifpga_fme_hw *fme, u64 val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_error0 fme_error0;
+ struct feature_fme_first_error fme_first_err;
+ struct feature_fme_next_error fme_next_err;
+ int ret = 0;
+
+ spinlock_lock(&fme->lock);
+ writeq(FME_ERROR0_MASK, &fme_err->fme_err_mask);
+
+ fme_error0.csr = readq(&fme_err->fme_err);
+ if (val != fme_error0.csr) {
+ ret = -EBUSY;
+ goto exit;
+ }
+
+ fme_first_err.csr = readq(&fme_err->fme_first_err);
+ fme_next_err.csr = readq(&fme_err->fme_next_err);
+
+ writeq(fme_error0.csr & FME_ERROR0_MASK, &fme_err->fme_err);
+ writeq(fme_first_err.csr & FME_FIRST_ERROR_MASK,
+ &fme_err->fme_first_err);
+ writeq(fme_next_err.csr & FME_NEXT_ERROR_MASK,
+ &fme_err->fme_next_err);
+
+exit:
+ writeq(FME_ERROR0_MASK_DEFAULT, &fme_err->fme_err_mask);
+ spinlock_unlock(&fme->lock);
+
+ return ret;
+}
+
+static int fme_err_get_revision(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_header header;
+
+ header.csr = readq(&fme_err->header);
+ *val = header.revision;
+
+ return 0;
+}
+
+static int fme_err_get_pcie0_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_pcie0_error pcie0_err;
+
+ pcie0_err.csr = readq(&fme_err->pcie0_err);
+ *val = pcie0_err.csr;
+
+ return 0;
+}
+
+static int fme_err_set_pcie0_errors(struct ifpga_fme_hw *fme, u64 val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_pcie0_error pcie0_err;
+ int ret = 0;
+
+ spinlock_lock(&fme->lock);
+ writeq(FME_PCIE0_ERROR_MASK, &fme_err->pcie0_err_mask);
+
+ pcie0_err.csr = readq(&fme_err->pcie0_err);
+ if (val != pcie0_err.csr)
+ ret = -EBUSY;
+ else
+ writeq(pcie0_err.csr & FME_PCIE0_ERROR_MASK,
+ &fme_err->pcie0_err);
+
+ writeq(0UL, &fme_err->pcie0_err_mask);
+ spinlock_unlock(&fme->lock);
+
+ return ret;
+}
+
+static int fme_err_get_pcie1_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_pcie1_error pcie1_err;
+
+ pcie1_err.csr = readq(&fme_err->pcie1_err);
+ *val = pcie1_err.csr;
+
+ return 0;
+}
+
+static int fme_err_set_pcie1_errors(struct ifpga_fme_hw *fme, u64 val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_pcie1_error pcie1_err;
+ int ret = 0;
+
+ spinlock_lock(&fme->lock);
+ writeq(FME_PCIE1_ERROR_MASK, &fme_err->pcie1_err_mask);
+
+ pcie1_err.csr = readq(&fme_err->pcie1_err);
+ if (val != pcie1_err.csr)
+ ret = -EBUSY;
+ else
+ writeq(pcie1_err.csr & FME_PCIE1_ERROR_MASK,
+ &fme_err->pcie1_err);
+
+ writeq(0UL, &fme_err->pcie1_err_mask);
+ spinlock_unlock(&fme->lock);
+
+ return ret;
+}
+
+static int fme_err_get_nonfatal_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_ras_nonfaterror ras_nonfaterr;
+
+ ras_nonfaterr.csr = readq(&fme_err->ras_nonfaterr);
+ *val = ras_nonfaterr.csr;
+
+ return 0;
+}
+
+static int fme_err_get_catfatal_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_ras_catfaterror ras_catfaterr;
+
+ ras_catfaterr.csr = readq(&fme_err->ras_catfaterr);
+ *val = ras_catfaterr.csr;
+
+ return 0;
+}
+
+static int fme_err_get_inject_errors(struct ifpga_fme_hw *fme, u64 *val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_ras_error_inj ras_error_inj;
+
+ ras_error_inj.csr = readq(&fme_err->ras_error_inj);
+ *val = ras_error_inj.csr & FME_RAS_ERROR_INJ_MASK;
+
+ return 0;
+}
+
+static int fme_err_set_inject_errors(struct ifpga_fme_hw *fme, u64 val)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+ struct feature_fme_ras_error_inj ras_error_inj;
+
+ spinlock_lock(&fme->lock);
+ ras_error_inj.csr = readq(&fme_err->ras_error_inj);
+
+ if (val <= FME_RAS_ERROR_INJ_MASK) {
+ ras_error_inj.csr = val;
+ } else {
+ spinlock_unlock(&fme->lock);
+ return -EINVAL;
+ }
+
+ writeq(ras_error_inj.csr, &fme_err->ras_error_inj);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static void fme_error_enable(struct ifpga_fme_hw *fme)
+{
+ struct feature_fme_err *fme_err
+ = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_ERR);
+
+ writeq(FME_ERROR0_MASK_DEFAULT, &fme_err->fme_err_mask);
+ writeq(0UL, &fme_err->pcie0_err_mask);
+ writeq(0UL, &fme_err->pcie1_err_mask);
+ writeq(0UL, &fme_err->ras_nonfat_mask);
+ writeq(0UL, &fme_err->ras_catfat_mask);
+}
+
+static int fme_global_error_init(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+
+ fme_error_enable(fme);
+
+ if (feature->ctx_num)
+ fme->capability |= FPGA_FME_CAP_ERR_IRQ;
+
+ return 0;
+}
+
+static void fme_global_error_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+}
+
+static int fme_err_fme_err_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x1: /* ERRORS */
+ return fme_err_get_errors(fme, &prop->data);
+ case 0x2: /* FIRST_ERROR */
+ return fme_err_get_first_error(fme, &prop->data);
+ case 0x3: /* NEXT_ERROR */
+ return fme_err_get_next_error(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_err_root_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x5: /* REVISION */
+ return fme_err_get_revision(fme, &prop->data);
+ case 0x6: /* PCIE0_ERRORS */
+ return fme_err_get_pcie0_errors(fme, &prop->data);
+ case 0x7: /* PCIE1_ERRORS */
+ return fme_err_get_pcie1_errors(fme, &prop->data);
+ case 0x8: /* NONFATAL_ERRORS */
+ return fme_err_get_nonfatal_errors(fme, &prop->data);
+ case 0x9: /* CATFATAL_ERRORS */
+ return fme_err_get_catfatal_errors(fme, &prop->data);
+ case 0xa: /* INJECT_ERRORS */
+ return fme_err_get_inject_errors(fme, &prop->data);
+	case 0xb: /* REVISION */
+ return fme_err_get_revision(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_error_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+
+ /* PROP_SUB is never used */
+ if (sub != PROP_SUB_UNUSED)
+ return -ENOENT;
+
+ switch (top) {
+ case ERR_PROP_TOP_FME_ERR:
+ return fme_err_fme_err_get_prop(feature, prop);
+ case ERR_PROP_TOP_UNUSED:
+ return fme_err_root_get_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_err_fme_err_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x4: /* CLEAR */
+ return fme_err_set_clear(fme, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_err_root_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x6: /* PCIE0_ERRORS */
+ return fme_err_set_pcie0_errors(fme, prop->data);
+ case 0x7: /* PCIE1_ERRORS */
+ return fme_err_set_pcie1_errors(fme, prop->data);
+ case 0xa: /* INJECT_ERRORS */
+ return fme_err_set_inject_errors(fme, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_error_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+
+ /* PROP_SUB is never used */
+ if (sub != PROP_SUB_UNUSED)
+ return -ENOENT;
+
+ switch (top) {
+ case ERR_PROP_TOP_FME_ERR:
+ return fme_err_fme_err_set_prop(feature, prop);
+ case ERR_PROP_TOP_UNUSED:
+ return fme_err_root_set_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_global_err_ops = {
+ .init = fme_global_error_init,
+ .uinit = fme_global_error_uinit,
+ .get_prop = fme_global_error_get_prop,
+ .set_prop = fme_global_error_set_prop,
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+#define PERF_OBJ_ROOT_ID 0xff
+
+static int fme_iperf_get_clock(struct ifpga_fme_hw *fme, u64 *clock)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_clk_ctr clk;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ clk.afu_interf_clock = readq(&iperf->clk);
+
+ *clock = clk.afu_interf_clock;
+ return 0;
+}
+
+static int fme_iperf_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_header header;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ header.csr = readq(&iperf->header);
+ *revision = header.revision;
+
+ return 0;
+}
+
+static int fme_iperf_get_cache_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_ch_ctl ctl;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->ch_ctl);
+ *freeze = (u64)ctl.freeze;
+ return 0;
+}
+
+static int fme_iperf_set_cache_freeze(struct ifpga_fme_hw *fme, u64 freeze)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_ch_ctl ctl;
+ bool state;
+
+ state = !!freeze;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->ch_ctl);
+ ctl.freeze = state;
+ writeq(ctl.csr, &iperf->ch_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+#define IPERF_TIMEOUT 30
+
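+/*
+ * Select the cache event and channel, wait until the counter register
+ * reports the selected event code, then sum the counts from both
+ * counter registers.
+ */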
+static u64 read_cache_counter(struct ifpga_fme_hw *fme,
+ u8 channel, enum iperf_cache_events event)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_ch_ctl ctl;
+ struct feature_fme_ifpmon_ch_ctr ctr0, ctr1;
+ u64 counter;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+
+ /* set channel access type and cache event code. */
+ ctl.csr = readq(&iperf->ch_ctl);
+ ctl.cci_chsel = channel;
+ ctl.cache_event = event;
+ writeq(ctl.csr, &iperf->ch_ctl);
+
+ /* check the event type in the counter registers */
+ ctr0.event_code = event;
+
+ if (fpga_wait_register_field(event_code, ctr0,
+ &iperf->ch_ctr0, IPERF_TIMEOUT, 1)) {
+ dev_err(fme, "timeout, unmatched cache event type in counter registers.\n");
+ spinlock_unlock(&fme->lock);
+ return -ETIMEDOUT;
+ }
+
+ ctr0.csr = readq(&iperf->ch_ctr0);
+ ctr1.csr = readq(&iperf->ch_ctr1);
+ counter = ctr0.cache_counter + ctr1.cache_counter;
+ spinlock_unlock(&fme->lock);
+
+ return counter;
+}
+
+#define CACHE_SHOW(name, type, event) \
+static int fme_iperf_get_cache_##name(struct ifpga_fme_hw *fme, \
+ u64 *counter) \
+{ \
+ *counter = read_cache_counter(fme, type, event); \
+ return 0; \
+}
+
+CACHE_SHOW(read_hit, CACHE_CHANNEL_RD, IPERF_CACHE_RD_HIT);
+CACHE_SHOW(read_miss, CACHE_CHANNEL_RD, IPERF_CACHE_RD_MISS);
+CACHE_SHOW(write_hit, CACHE_CHANNEL_WR, IPERF_CACHE_WR_HIT);
+CACHE_SHOW(write_miss, CACHE_CHANNEL_WR, IPERF_CACHE_WR_MISS);
+CACHE_SHOW(hold_request, CACHE_CHANNEL_RD, IPERF_CACHE_HOLD_REQ);
+CACHE_SHOW(tx_req_stall, CACHE_CHANNEL_RD, IPERF_CACHE_TX_REQ_STALL);
+CACHE_SHOW(rx_req_stall, CACHE_CHANNEL_RD, IPERF_CACHE_RX_REQ_STALL);
+CACHE_SHOW(rx_eviction, CACHE_CHANNEL_RD, IPERF_CACHE_EVICTIONS);
+CACHE_SHOW(data_write_port_contention, CACHE_CHANNEL_WR,
+ IPERF_CACHE_DATA_WR_PORT_CONTEN);
+CACHE_SHOW(tag_write_port_contention, CACHE_CHANNEL_WR,
+ IPERF_CACHE_TAG_WR_PORT_CONTEN);
+
+static int fme_iperf_get_vtd_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
+{
+ struct feature_fme_ifpmon_vtd_ctl ctl;
+ struct feature_fme_iperf *iperf;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->vtd_ctl);
+ *freeze = (u64)ctl.freeze;
+
+ return 0;
+}
+
+static int fme_iperf_set_vtd_freeze(struct ifpga_fme_hw *fme, u64 freeze)
+{
+ struct feature_fme_ifpmon_vtd_ctl ctl;
+ struct feature_fme_iperf *iperf;
+ bool state;
+
+ state = !!freeze;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->vtd_ctl);
+ ctl.freeze = state;
+ writeq(ctl.csr, &iperf->vtd_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
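+/*
+ * Read one VT-d SIP event counter: program the event code in the SIP control
+ * register, wait until the SIP counter register reports the same event code,
+ * then return the counter value.
+ */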
+static u64 read_iommu_sip_counter(struct ifpga_fme_hw *fme,
+ enum iperf_vtd_sip_events event)
+{
+ struct feature_fme_ifpmon_vtd_sip_ctl sip_ctl;
+ struct feature_fme_ifpmon_vtd_sip_ctr sip_ctr;
+ struct feature_fme_iperf *iperf;
+ u64 counter;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ sip_ctl.csr = readq(&iperf->vtd_sip_ctl);
+ sip_ctl.vtd_evtcode = event;
+ writeq(sip_ctl.csr, &iperf->vtd_sip_ctl);
+
+ sip_ctr.event_code = event;
+
+ if (fpga_wait_register_field(event_code, sip_ctr,
+ &iperf->vtd_sip_ctr, IPERF_TIMEOUT, 1)) {
+ dev_err(fme, "timeout, unmatched VTd SIP event type in counter registers\n");
+ spinlock_unlock(&fme->lock);
+ return -ETIMEDOUT;
+ }
+
+ sip_ctr.csr = readq(&iperf->vtd_sip_ctr);
+ counter = sip_ctr.vtd_counter;
+ spinlock_unlock(&fme->lock);
+
+ return counter;
+}
+
+#define VTD_SIP_SHOW(name, event) \
+static int fme_iperf_get_vtd_sip_##name(struct ifpga_fme_hw *fme, \
+ u64 *counter) \
+{ \
+ *counter = read_iommu_sip_counter(fme, event); \
+ return 0; \
+}
+
+VTD_SIP_SHOW(iotlb_4k_hit, IPERF_VTD_SIP_IOTLB_4K_HIT);
+VTD_SIP_SHOW(iotlb_2m_hit, IPERF_VTD_SIP_IOTLB_2M_HIT);
+VTD_SIP_SHOW(iotlb_1g_hit, IPERF_VTD_SIP_IOTLB_1G_HIT);
+VTD_SIP_SHOW(slpwc_l3_hit, IPERF_VTD_SIP_SLPWC_L3_HIT);
+VTD_SIP_SHOW(slpwc_l4_hit, IPERF_VTD_SIP_SLPWC_L4_HIT);
+VTD_SIP_SHOW(rcc_hit, IPERF_VTD_SIP_RCC_HIT);
+VTD_SIP_SHOW(iotlb_4k_miss, IPERF_VTD_SIP_IOTLB_4K_MISS);
+VTD_SIP_SHOW(iotlb_2m_miss, IPERF_VTD_SIP_IOTLB_2M_MISS);
+VTD_SIP_SHOW(iotlb_1g_miss, IPERF_VTD_SIP_IOTLB_1G_MISS);
+VTD_SIP_SHOW(slpwc_l3_miss, IPERF_VTD_SIP_SLPWC_L3_MISS);
+VTD_SIP_SHOW(slpwc_l4_miss, IPERF_VTD_SIP_SLPWC_L4_MISS);
+VTD_SIP_SHOW(rcc_miss, IPERF_VTD_SIP_RCC_MISS);
+
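+/*
+ * Read one per-port VT-d event counter. The effective event code is the base
+ * event plus the port index; the readout handshake is the same as for the
+ * SIP counters above.
+ */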
+static u64 read_iommu_counter(struct ifpga_fme_hw *fme, u8 port_id,
+ enum iperf_vtd_events base_event)
+{
+ struct feature_fme_ifpmon_vtd_ctl ctl;
+ struct feature_fme_ifpmon_vtd_ctr ctr;
+ struct feature_fme_iperf *iperf;
+ enum iperf_vtd_events event = base_event + port_id;
+ u64 counter;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->vtd_ctl);
+ ctl.vtd_evtcode = event;
+ writeq(ctl.csr, &iperf->vtd_ctl);
+
+ ctr.event_code = event;
+
+ if (fpga_wait_register_field(event_code, ctr,
+ &iperf->vtd_ctr, IPERF_TIMEOUT, 1)) {
+ dev_err(fme, "timeout, unmatched VTd event type in counter registers.\n");
+ spinlock_unlock(&fme->lock);
+ return -ETIMEDOUT;
+ }
+
+ ctr.csr = readq(&iperf->vtd_ctr);
+ counter = ctr.vtd_counter;
+ spinlock_unlock(&fme->lock);
+
+ return counter;
+}
+
+#define VTD_PORT_SHOW(name, base_event) \
+static int fme_iperf_get_vtd_port_##name(struct ifpga_fme_hw *fme, \
+ u8 port_id, u64 *counter) \
+{ \
+ *counter = read_iommu_counter(fme, port_id, base_event); \
+ return 0; \
+}
+
+VTD_PORT_SHOW(read_transaction, IPERF_VTD_AFU_MEM_RD_TRANS);
+VTD_PORT_SHOW(write_transaction, IPERF_VTD_AFU_MEM_WR_TRANS);
+VTD_PORT_SHOW(devtlb_read_hit, IPERF_VTD_AFU_DEVTLB_RD_HIT);
+VTD_PORT_SHOW(devtlb_write_hit, IPERF_VTD_AFU_DEVTLB_WR_HIT);
+VTD_PORT_SHOW(devtlb_4k_fill, IPERF_VTD_DEVTLB_4K_FILL);
+VTD_PORT_SHOW(devtlb_2m_fill, IPERF_VTD_DEVTLB_2M_FILL);
+VTD_PORT_SHOW(devtlb_1g_fill, IPERF_VTD_DEVTLB_1G_FILL);
+
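+/*
+ * A fabric counter object is enabled either for one specific port (port
+ * filter enabled and port_id matching) or, when the port filter is disabled,
+ * only for the root object (PERF_OBJ_ROOT_ID).
+ */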
+static bool fabric_pobj_is_enabled(u8 port_id, struct feature_fme_iperf *iperf)
+{
+ struct feature_fme_ifpmon_fab_ctl ctl;
+
+ ctl.csr = readq(&iperf->fab_ctl);
+
+ if (ctl.port_filter == FAB_DISABLE_FILTER)
+ return port_id == PERF_OBJ_ROOT_ID;
+
+ return port_id == ctl.port_id;
+}
+
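+/* Read one fabric event counter; returns zero if the object is disabled. */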
+static u64 read_fabric_counter(struct ifpga_fme_hw *fme, u8 port_id,
+ enum iperf_fab_events fab_event)
+{
+ struct feature_fme_ifpmon_fab_ctl ctl;
+ struct feature_fme_ifpmon_fab_ctr ctr;
+ struct feature_fme_iperf *iperf;
+ u64 counter = 0;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+
+ /* if it is disabled, force the counter to return zero. */
+ if (!fabric_pobj_is_enabled(port_id, iperf))
+ goto exit;
+
+ ctl.csr = readq(&iperf->fab_ctl);
+ ctl.fab_evtcode = fab_event;
+ writeq(ctl.csr, &iperf->fab_ctl);
+
+ ctr.event_code = fab_event;
+
+ if (fpga_wait_register_field(event_code, ctr,
+ &iperf->fab_ctr, IPERF_TIMEOUT, 1)) {
+		dev_err(fme, "timeout, unmatched fab event type in counter registers.\n");
+ spinlock_unlock(&fme->lock);
+ return -ETIMEDOUT;
+ }
+
+ ctr.csr = readq(&iperf->fab_ctr);
+ counter = ctr.fab_cnt;
+exit:
+ spinlock_unlock(&fme->lock);
+ return counter;
+}
+
+#define FAB_PORT_SHOW(name, event) \
+static int fme_iperf_get_fab_port_##name(struct ifpga_fme_hw *fme, \
+ u8 port_id, u64 *counter) \
+{ \
+ *counter = read_fabric_counter(fme, port_id, event); \
+ return 0; \
+}
+
+FAB_PORT_SHOW(pcie0_read, IPERF_FAB_PCIE0_RD);
+FAB_PORT_SHOW(pcie0_write, IPERF_FAB_PCIE0_WR);
+FAB_PORT_SHOW(pcie1_read, IPERF_FAB_PCIE1_RD);
+FAB_PORT_SHOW(pcie1_write, IPERF_FAB_PCIE1_WR);
+FAB_PORT_SHOW(upi_read, IPERF_FAB_UPI_RD);
+FAB_PORT_SHOW(upi_write, IPERF_FAB_UPI_WR);
+FAB_PORT_SHOW(mmio_read, IPERF_FAB_MMIO_RD);
+FAB_PORT_SHOW(mmio_write, IPERF_FAB_MMIO_WR);
+
+static int fme_iperf_get_fab_port_enable(struct ifpga_fme_hw *fme,
+ u8 port_id, u64 *enable)
+{
+ struct feature_fme_iperf *iperf;
+ int status;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+
+ status = fabric_pobj_is_enabled(port_id, iperf);
+ *enable = (u64)status;
+
+ return 0;
+}
+
+/*
+ * Enabling the event counter for one port (or for all ports) in the fabric
+ * automatically disables any other fabric event counter that was previously
+ * enabled.
+ */
+static int fme_iperf_set_fab_port_enable(struct ifpga_fme_hw *fme,
+ u8 port_id, u64 enable)
+{
+ struct feature_fme_ifpmon_fab_ctl ctl;
+ struct feature_fme_iperf *iperf;
+ bool state;
+
+ state = !!enable;
+
+ if (!state)
+ return -EINVAL;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+
+	/* nothing to do if the object is already enabled */
+ if (fabric_pobj_is_enabled(port_id, iperf))
+ return 0;
+
+ spinlock_lock(&fme->lock);
+ ctl.csr = readq(&iperf->fab_ctl);
+ if (port_id == PERF_OBJ_ROOT_ID) {
+ ctl.port_filter = FAB_DISABLE_FILTER;
+ } else {
+ ctl.port_filter = FAB_ENABLE_FILTER;
+ ctl.port_id = port_id;
+ }
+
+ writeq(ctl.csr, &iperf->fab_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+static int fme_iperf_get_fab_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_fab_ctl ctl;
+
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->fab_ctl);
+ *freeze = (u64)ctl.freeze;
+
+ return 0;
+}
+
+static int fme_iperf_set_fab_freeze(struct ifpga_fme_hw *fme, u64 freeze)
+{
+ struct feature_fme_iperf *iperf;
+ struct feature_fme_ifpmon_fab_ctl ctl;
+ bool state;
+
+ state = !!freeze;
+
+ spinlock_lock(&fme->lock);
+ iperf = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_GLOBAL_IPERF);
+ ctl.csr = readq(&iperf->fab_ctl);
+ ctl.freeze = state;
+ writeq(ctl.csr, &iperf->fab_ctl);
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+#define PERF_MAX_PORT_NUM 1
+#define FME_IPERF_CAP_IOMMU 0x1
+
+static int fme_global_iperf_init(struct ifpga_feature *feature)
+{
+ struct ifpga_fme_hw *fme;
+ struct feature_fme_header *fme_hdr;
+ struct feature_fme_capability fme_capability;
+
+ dev_info(NULL, "FME global_iperf Init.\n");
+
+ fme = (struct ifpga_fme_hw *)feature->parent;
+ fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
+
+	/* check whether the iommu is supported on this device */
+ fme_capability.csr = readq(&fme_hdr->capability);
+ dev_info(NULL, "FME HEAD fme_capability %llx.\n",
+		 (unsigned long long)fme_capability.csr);
+
+ if (fme_capability.iommu_support)
+ feature->cap |= FME_IPERF_CAP_IOMMU;
+
+ return 0;
+}
+
+static void fme_global_iperf_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME global_iperf UInit.\n");
+}
+
+static int fme_iperf_root_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+
+ switch (id) {
+ case 0x1: /* CLOCK */
+ return fme_iperf_get_clock(fme, &prop->data);
+ case 0x2: /* REVISION */
+ return fme_iperf_get_revision(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_iperf_cache_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+
+ switch (id) {
+ case 0x1: /* FREEZE */
+ return fme_iperf_get_cache_freeze(fme, &prop->data);
+ case 0x2: /* READ_HIT */
+ return fme_iperf_get_cache_read_hit(fme, &prop->data);
+ case 0x3: /* READ_MISS */
+ return fme_iperf_get_cache_read_miss(fme, &prop->data);
+ case 0x4: /* WRITE_HIT */
+ return fme_iperf_get_cache_write_hit(fme, &prop->data);
+ case 0x5: /* WRITE_MISS */
+ return fme_iperf_get_cache_write_miss(fme, &prop->data);
+ case 0x6: /* HOLD_REQUEST */
+ return fme_iperf_get_cache_hold_request(fme, &prop->data);
+ case 0x7: /* TX_REQ_STALL */
+ return fme_iperf_get_cache_tx_req_stall(fme, &prop->data);
+ case 0x8: /* RX_REQ_STALL */
+ return fme_iperf_get_cache_rx_req_stall(fme, &prop->data);
+ case 0x9: /* RX_EVICTION */
+ return fme_iperf_get_cache_rx_eviction(fme, &prop->data);
+ case 0xa: /* DATA_WRITE_PORT_CONTENTION */
+ return fme_iperf_get_cache_data_write_port_contention(fme,
+ &prop->data);
+ case 0xb: /* TAG_WRITE_PORT_CONTENTION */
+ return fme_iperf_get_cache_tag_write_port_contention(fme,
+ &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_iperf_vtd_root_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x1: /* FREEZE */
+ return fme_iperf_get_vtd_freeze(fme, &prop->data);
+ case 0x2: /* IOTLB_4K_HIT */
+ return fme_iperf_get_vtd_sip_iotlb_4k_hit(fme, &prop->data);
+ case 0x3: /* IOTLB_2M_HIT */
+ return fme_iperf_get_vtd_sip_iotlb_2m_hit(fme, &prop->data);
+ case 0x4: /* IOTLB_1G_HIT */
+ return fme_iperf_get_vtd_sip_iotlb_1g_hit(fme, &prop->data);
+ case 0x5: /* SLPWC_L3_HIT */
+ return fme_iperf_get_vtd_sip_slpwc_l3_hit(fme, &prop->data);
+ case 0x6: /* SLPWC_L4_HIT */
+ return fme_iperf_get_vtd_sip_slpwc_l4_hit(fme, &prop->data);
+ case 0x7: /* RCC_HIT */
+ return fme_iperf_get_vtd_sip_rcc_hit(fme, &prop->data);
+ case 0x8: /* IOTLB_4K_MISS */
+ return fme_iperf_get_vtd_sip_iotlb_4k_miss(fme, &prop->data);
+ case 0x9: /* IOTLB_2M_MISS */
+ return fme_iperf_get_vtd_sip_iotlb_2m_miss(fme, &prop->data);
+ case 0xa: /* IOTLB_1G_MISS */
+ return fme_iperf_get_vtd_sip_iotlb_1g_miss(fme, &prop->data);
+ case 0xb: /* SLPWC_L3_MISS */
+ return fme_iperf_get_vtd_sip_slpwc_l3_miss(fme, &prop->data);
+ case 0xc: /* SLPWC_L4_MISS */
+ return fme_iperf_get_vtd_sip_slpwc_l4_miss(fme, &prop->data);
+ case 0xd: /* RCC_MISS */
+ return fme_iperf_get_vtd_sip_rcc_miss(fme, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_iperf_vtd_sub_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+
+ if (sub > PERF_MAX_PORT_NUM)
+ return -ENOENT;
+
+ switch (id) {
+ case 0xe: /* READ_TRANSACTION */
+ return fme_iperf_get_vtd_port_read_transaction(fme, sub,
+ &prop->data);
+ case 0xf: /* WRITE_TRANSACTION */
+ return fme_iperf_get_vtd_port_write_transaction(fme, sub,
+ &prop->data);
+ case 0x10: /* DEVTLB_READ_HIT */
+ return fme_iperf_get_vtd_port_devtlb_read_hit(fme, sub,
+ &prop->data);
+ case 0x11: /* DEVTLB_WRITE_HIT */
+ return fme_iperf_get_vtd_port_devtlb_write_hit(fme, sub,
+ &prop->data);
+ case 0x12: /* DEVTLB_4K_FILL */
+ return fme_iperf_get_vtd_port_devtlb_4k_fill(fme, sub,
+ &prop->data);
+ case 0x13: /* DEVTLB_2M_FILL */
+ return fme_iperf_get_vtd_port_devtlb_2m_fill(fme, sub,
+ &prop->data);
+ case 0x14: /* DEVTLB_1G_FILL */
+ return fme_iperf_get_vtd_port_devtlb_1g_fill(fme, sub,
+ &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_iperf_vtd_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+
+ if (sub == PERF_PROP_SUB_UNUSED)
+ return fme_iperf_vtd_root_get_prop(feature, prop);
+
+ return fme_iperf_vtd_sub_get_prop(feature, prop);
+}
+
+static int fme_iperf_fab_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ /* Other properties are present for both top and sub levels */
+ switch (id) {
+ case 0x1: /* FREEZE */
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+ return fme_iperf_get_fab_freeze(fme, &prop->data);
+ case 0x2: /* PCIE0_READ */
+ return fme_iperf_get_fab_port_pcie0_read(fme, sub,
+ &prop->data);
+ case 0x3: /* PCIE0_WRITE */
+ return fme_iperf_get_fab_port_pcie0_write(fme, sub,
+ &prop->data);
+ case 0x4: /* PCIE1_READ */
+ return fme_iperf_get_fab_port_pcie1_read(fme, sub,
+ &prop->data);
+ case 0x5: /* PCIE1_WRITE */
+ return fme_iperf_get_fab_port_pcie1_write(fme, sub,
+ &prop->data);
+ case 0x6: /* UPI_READ */
+ return fme_iperf_get_fab_port_upi_read(fme, sub,
+ &prop->data);
+ case 0x7: /* UPI_WRITE */
+ return fme_iperf_get_fab_port_upi_write(fme, sub,
+ &prop->data);
+ case 0x8: /* MMIO_READ */
+ return fme_iperf_get_fab_port_mmio_read(fme, sub,
+ &prop->data);
+ case 0x9: /* MMIO_WRITE */
+ return fme_iperf_get_fab_port_mmio_write(fme, sub,
+ &prop->data);
+ case 0xa: /* ENABLE */
+ return fme_iperf_get_fab_port_enable(fme, sub, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_iperf_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+
+ switch (top) {
+ case PERF_PROP_TOP_CACHE:
+ return fme_iperf_cache_get_prop(feature, prop);
+ case PERF_PROP_TOP_VTD:
+ return fme_iperf_vtd_get_prop(feature, prop);
+ case PERF_PROP_TOP_FAB:
+ return fme_iperf_fab_get_prop(feature, prop);
+ case PERF_PROP_TOP_UNUSED:
+ return fme_iperf_root_get_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_iperf_cache_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ if (sub == PERF_PROP_SUB_UNUSED && id == 0x1) /* FREEZE */
+ return fme_iperf_set_cache_freeze(fme, prop->data);
+
+ return -ENOENT;
+}
+
+static int fme_iperf_vtd_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ if (sub == PERF_PROP_SUB_UNUSED && id == 0x1) /* FREEZE */
+ return fme_iperf_set_vtd_freeze(fme, prop->data);
+
+ return -ENOENT;
+}
+
+static int fme_iperf_fab_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme = feature->parent;
+ u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
+ u16 id = GET_FIELD(PROP_ID, prop->prop_id);
+
+ switch (id) {
+ case 0x1: /* FREEZE */
+ if (sub != PERF_PROP_SUB_UNUSED)
+ return -ENOENT;
+ return fme_iperf_set_fab_freeze(fme, prop->data);
+ case 0xa: /* ENABLE */
+ return fme_iperf_set_fab_port_enable(fme, sub, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int fme_global_iperf_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
+
+ switch (top) {
+ case PERF_PROP_TOP_CACHE:
+ return fme_iperf_cache_set_prop(feature, prop);
+ case PERF_PROP_TOP_VTD:
+ return fme_iperf_vtd_set_prop(feature, prop);
+ case PERF_PROP_TOP_FAB:
+ return fme_iperf_fab_set_prop(feature, prop);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops fme_global_iperf_ops = {
+ .init = fme_global_iperf_init,
+ .uinit = fme_global_iperf_uinit,
+ .get_prop = fme_global_iperf_get_prop,
+ .set_prop = fme_global_iperf_set_prop,
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+static u64
+pr_err_handle(struct feature_fme_pr *fme_pr)
+{
+ struct feature_fme_pr_status fme_pr_status;
+	u64 err_code;
+ u64 fme_pr_error;
+ int i;
+
+ fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
+ if (!fme_pr_status.pr_status)
+ return 0;
+
+ err_code = readq(&fme_pr->ccip_fme_pr_err);
+ fme_pr_error = err_code;
+
+ for (i = 0; i < PR_MAX_ERR_NUM; i++) {
+		if (err_code & (1ULL << i))
+ dev_info(NULL, "%s\n", pr_err_msg[i]);
+ }
+
+ writeq(fme_pr_error, &fme_pr->ccip_fme_pr_err);
+ return fme_pr_error;
+}
+
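+/*
+ * Prepare the PR engine for a new partial reconfiguration: pulse the PR
+ * reset bit and wait for its acknowledgment, wait for the PR host status to
+ * become idle, then report and clear any error left by a previous PR.
+ */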
+static int fme_pr_write_init(struct ifpga_fme_hw *fme_dev,
+ struct fpga_pr_info *info)
+{
+ struct feature_fme_pr *fme_pr;
+ struct feature_fme_pr_ctl fme_pr_ctl;
+ struct feature_fme_pr_status fme_pr_status;
+
+ fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
+ FME_FEATURE_ID_PR_MGMT);
+ if (!fme_pr)
+ return -EINVAL;
+
+ if (info->flags != FPGA_MGR_PARTIAL_RECONFIG)
+ return -EINVAL;
+
+	dev_info(fme_dev, "resetting PR before initiating a new PR\n");
+
+ fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
+ fme_pr_ctl.pr_reset = 1;
+ writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
+
+ fme_pr_ctl.pr_reset_ack = 1;
+
+ if (fpga_wait_register_field(pr_reset_ack, fme_pr_ctl,
+ &fme_pr->ccip_fme_pr_control,
+ PR_WAIT_TIMEOUT, 1)) {
+ dev_err(fme_dev, "maximum PR timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
+ fme_pr_ctl.pr_reset = 0;
+ writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
+
+ dev_info(fme_dev, "waiting for PR resource in HW to be initialized and ready\n");
+
+ fme_pr_status.pr_host_status = PR_HOST_STATUS_IDLE;
+
+ if (fpga_wait_register_field(pr_host_status, fme_pr_status,
+ &fme_pr->ccip_fme_pr_status,
+ PR_WAIT_TIMEOUT, 1)) {
+ dev_err(fme_dev, "maximum PR timeout\n");
+ return -ETIMEDOUT;
+ }
+
+	dev_info(fme_dev, "checking for any previous PR error\n");
+ pr_err_handle(fme_pr);
+ return 0;
+}
+
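+/*
+ * Push the bitstream to the PR engine using credit-based flow control: each
+ * data word written consumes one PR credit, and the credit count is
+ * refreshed from the status register whenever it runs low.
+ */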
+static int fme_pr_write(struct ifpga_fme_hw *fme_dev,
+ int port_id, const char *buf, size_t count,
+ struct fpga_pr_info *info)
+{
+ struct feature_fme_pr *fme_pr;
+ struct feature_fme_pr_ctl fme_pr_ctl;
+ struct feature_fme_pr_status fme_pr_status;
+ struct feature_fme_pr_data fme_pr_data;
+ int delay, pr_credit;
+ int ret = 0;
+
+ fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
+ FME_FEATURE_ID_PR_MGMT);
+ if (!fme_pr)
+ return -EINVAL;
+
+ dev_info(fme_dev, "set PR port ID and start request\n");
+
+ fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
+ fme_pr_ctl.pr_regionid = port_id;
+ fme_pr_ctl.pr_start_req = 1;
+ writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
+
+ dev_info(fme_dev, "pushing data from bitstream to HW\n");
+
+ fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
+ pr_credit = fme_pr_status.pr_credit;
+
+ while (count > 0) {
+ delay = 0;
+ while (pr_credit <= 1) {
+ if (delay++ > PR_WAIT_TIMEOUT) {
+				dev_err(fme_dev, "timeout waiting for PR credit\n");
+
+ info->pr_err = pr_err_handle(fme_pr);
+ return info->pr_err ? -EIO : -ETIMEDOUT;
+ }
+ udelay(1);
+
+ fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
+ pr_credit = fme_pr_status.pr_credit;
+		}
+
+ if (count >= fme_dev->pr_bandwidth) {
+ switch (fme_dev->pr_bandwidth) {
+ case 4:
+ fme_pr_data.rsvd = 0;
+ fme_pr_data.pr_data_raw = *((const u32 *)buf);
+ writeq(fme_pr_data.csr,
+ &fme_pr->ccip_fme_pr_data);
+ break;
+ default:
+ ret = -EFAULT;
+ goto done;
+ }
+
+ buf += fme_dev->pr_bandwidth;
+ count -= fme_dev->pr_bandwidth;
+ pr_credit--;
+ } else {
+ WARN_ON(1);
+ ret = -EINVAL;
+ goto done;
+ }
+ }
+
+done:
+ return ret;
+}
+
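+/*
+ * Signal the end of the bitstream push, wait for the hardware to release
+ * the PR resource (start request cleared), then check the final PR error
+ * status.
+ */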
+static int fme_pr_write_complete(struct ifpga_fme_hw *fme_dev,
+ struct fpga_pr_info *info)
+{
+ struct feature_fme_pr *fme_pr;
+ struct feature_fme_pr_ctl fme_pr_ctl;
+
+ fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
+ FME_FEATURE_ID_PR_MGMT);
+
+ fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
+ fme_pr_ctl.pr_push_complete = 1;
+ writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
+
+ dev_info(fme_dev, "green bitstream push complete\n");
+ dev_info(fme_dev, "waiting for HW to release PR resource\n");
+
+ fme_pr_ctl.pr_start_req = 0;
+
+ if (fpga_wait_register_field(pr_start_req, fme_pr_ctl,
+ &fme_pr->ccip_fme_pr_control,
+ PR_WAIT_TIMEOUT, 1)) {
+		dev_err(fme_dev, "maximum PR timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ dev_info(fme_dev, "PR operation complete, checking status\n");
+ info->pr_err = pr_err_handle(fme_pr);
+ if (info->pr_err)
+ return -EIO;
+
+ dev_info(fme_dev, "PR done successfully\n");
+ return 0;
+}
+
+static int fpga_pr_buf_load(struct ifpga_fme_hw *fme_dev,
+ struct fpga_pr_info *info, const char *buf,
+ size_t count)
+{
+ int ret;
+
+ info->state = FPGA_PR_STATE_WRITE_INIT;
+ ret = fme_pr_write_init(fme_dev, info);
+ if (ret) {
+ dev_err(fme_dev, "Error preparing FPGA for writing\n");
+ info->state = FPGA_PR_STATE_WRITE_INIT_ERR;
+ return ret;
+ }
+
+ /*
+ * Write the FPGA image to the FPGA.
+ */
+ info->state = FPGA_PR_STATE_WRITE;
+ ret = fme_pr_write(fme_dev, info->port_id, buf, count, info);
+ if (ret) {
+ dev_err(fme_dev, "Error while writing image data to FPGA\n");
+ info->state = FPGA_PR_STATE_WRITE_ERR;
+ return ret;
+ }
+
+ /*
+ * After all the FPGA image has been written, do the device specific
+ * steps to finish and set the FPGA into operating mode.
+ */
+ info->state = FPGA_PR_STATE_WRITE_COMPLETE;
+ ret = fme_pr_write_complete(fme_dev, info);
+ if (ret) {
+ dev_err(fme_dev, "Error after writing image data to FPGA\n");
+ info->state = FPGA_PR_STATE_WRITE_COMPLETE_ERR;
+ return ret;
+ }
+ info->state = FPGA_PR_STATE_DONE;
+
+ return 0;
+}
+
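+/*
+ * Run the PR flow on one port: align the buffer size to the PR bandwidth,
+ * validate the port id against the FME capability register, and disable the
+ * port around the bitstream load, re-enabling it when PR finishes.
+ */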
+static int fme_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer,
+ u32 size, u64 *status)
+{
+ struct feature_fme_header *fme_hdr;
+ struct feature_fme_capability fme_capability;
+ struct ifpga_fme_hw *fme = &hw->fme;
+ struct fpga_pr_info info;
+ struct ifpga_port_hw *port;
+ int ret = 0;
+
+ if (!buffer || size == 0)
+ return -EINVAL;
+ if (fme->state != IFPGA_FME_IMPLEMENTED)
+ return -EINVAL;
+
+	/*
+	 * Pad the PR buffer with extra zeros to align it with the PR
+	 * bandwidth; HW ignores these zeros automatically.
+	 */
+ size = IFPGA_ALIGN(size, fme->pr_bandwidth);
+
+ /* get fme header region */
+ fme_hdr = get_fme_feature_ioaddr_by_index(fme,
+ FME_FEATURE_ID_HEADER);
+ if (!fme_hdr)
+ return -EINVAL;
+
+ /* check port id */
+ fme_capability.csr = readq(&fme_hdr->capability);
+ if (port_id >= fme_capability.num_ports) {
+		dev_err(fme, "port number exceeds maximum\n");
+ return -EINVAL;
+ }
+
+ opae_memset(&info, 0, sizeof(struct fpga_pr_info));
+ info.flags = FPGA_MGR_PARTIAL_RECONFIG;
+ info.port_id = port_id;
+
+ spinlock_lock(&fme->lock);
+
+ /* get port device by port_id */
+ port = &hw->port[port_id];
+
+ /* Disable Port before PR */
+ fpga_port_disable(port);
+
+ ret = fpga_pr_buf_load(fme, &info, buffer, size);
+
+ *status = info.pr_err;
+
+ /* Re-enable Port after PR finished */
+ fpga_port_enable(port);
+ spinlock_unlock(&fme->lock);
+
+ return ret;
+}
+
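+/*
+ * Top-level PR entry point: validate the bitstream header, skip the header
+ * and metadata to reach the raw bitstream payload, clear any pending port
+ * errors, then hand the payload to the FME PR engine.
+ */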
+int do_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer,
+ u32 size, u64 *status)
+{
+ const struct bts_header *bts_hdr;
+ const char *buf;
+ struct ifpga_port_hw *port;
+ int ret;
+ u32 header_size;
+
+ if (!buffer || size == 0) {
+ dev_err(hw, "invalid parameter\n");
+ return -EINVAL;
+ }
+
+ bts_hdr = (const struct bts_header *)buffer;
+
+ if (is_valid_bts(bts_hdr)) {
+		dev_info(hw, "this is a valid bitstream\n");
+ header_size = sizeof(struct bts_header) +
+ bts_hdr->metadata_len;
+ if (size < header_size)
+ return -EINVAL;
+ size -= header_size;
+ buf = buffer + header_size;
+ } else {
+		dev_err(hw, "this is an invalid bitstream\n");
+ return -EINVAL;
+ }
+
+	/* clear port errors before doing PR */
+ port = &hw->port[port_id];
+ ret = port_clear_error(port);
+ if (ret) {
+ dev_err(hw, "port cannot clear error\n");
+ return -EINVAL;
+ }
+
+ return fme_pr(hw, port_id, buf, size, status);
+}
+
+static int fme_pr_mgmt_init(struct ifpga_feature *feature)
+{
+ struct feature_fme_pr *fme_pr;
+ struct feature_header fme_pr_header;
+ struct ifpga_fme_hw *fme;
+
+ dev_info(NULL, "FME PR MGMT Init.\n");
+
+ fme = (struct ifpga_fme_hw *)feature->parent;
+
+ fme_pr = (struct feature_fme_pr *)feature->addr;
+
+ fme_pr_header.csr = readq(&fme_pr->header);
+ if (fme_pr_header.revision == 2) {
+ dev_info(NULL, "using 512-bit PR\n");
+ fme->pr_bandwidth = 64;
+ } else {
+ dev_info(NULL, "using 32-bit PR\n");
+ fme->pr_bandwidth = 4;
+ }
+
+ return 0;
+}
+
+static void fme_pr_mgmt_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "FME PR MGMT UInit.\n");
+}
+
+struct ifpga_feature_ops fme_pr_mgmt_ops = {
+ .init = fme_pr_mgmt_init,
+ .uinit = fme_pr_mgmt_uinit,
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_HW_H_
+#define _IFPGA_HW_H_
+
+#include "ifpga_defines.h"
+#include "opae_ifpga_hw_api.h"
+#include "opae_eth_group.h"
+
+/** List of private features */
+TAILQ_HEAD(ifpga_feature_list, ifpga_feature);
+
+enum ifpga_feature_state {
+ IFPGA_FEATURE_UNUSED = 0,
+ IFPGA_FEATURE_ATTACHED,
+};
+
+enum feature_type {
+ FEATURE_FME_TYPE = 0,
+ FEATURE_PORT_TYPE,
+};
+
+struct feature_irq_ctx {
+ int eventfd;
+ int idx;
+};
+
+struct ifpga_feature {
+	TAILQ_ENTRY(ifpga_feature) next;
+ enum ifpga_feature_state state;
+ enum feature_type type;
+ const char *name;
+ u64 id;
+ u8 *addr;
+ uint64_t phys_addr;
+ u32 size;
+ int revision;
+ u64 cap;
+ int vfio_dev_fd;
+ struct feature_irq_ctx *ctx;
+ unsigned int ctx_num;
+
+	void *parent; /* pointer to parent hw data structure */
+
+	struct ifpga_feature_ops *ops; /* callbacks for this private feature */
+ unsigned int vec_start;
+ unsigned int vec_cnt;
+};
+
+struct ifpga_feature_ops {
+ int (*init)(struct ifpga_feature *feature);
+ void (*uinit)(struct ifpga_feature *feature);
+ int (*get_prop)(struct ifpga_feature *feature,
+ struct feature_prop *prop);
+ int (*set_prop)(struct ifpga_feature *feature,
+ struct feature_prop *prop);
+ int (*set_irq)(struct ifpga_feature *feature, void *irq_set);
+};
+
+enum ifpga_fme_state {
+ IFPGA_FME_UNUSED = 0,
+ IFPGA_FME_IMPLEMENTED,
+};
+
+struct ifpga_fme_hw {
+ enum ifpga_fme_state state;
+
+ struct ifpga_feature_list feature_list;
+ spinlock_t lock; /* protect hardware access */
+
+ void *parent; /* pointer to ifpga_hw */
+
+	/* provided by HEADER feature */
+ u32 port_num;
+ struct uuid bitstream_id;
+ u64 bitstream_md;
+ size_t pr_bandwidth;
+ u32 socket_id;
+ u32 fabric_version_id;
+ u32 cache_size;
+
+ u32 capability;
+
+ void *max10_dev; /* MAX10 device */
+ void *i2c_master; /* I2C Master device */
+ void *eth_dev[MAX_ETH_GROUP_DEVICES];
+ struct opae_reg_region
+ eth_group_region[MAX_ETH_GROUP_DEVICES];
+ struct ifpga_fme_board_info board_info;
+ int nums_eth_dev;
+ unsigned int nums_acc_region;
+};
+
+enum ifpga_port_state {
+ IFPGA_PORT_UNUSED = 0,
+ IFPGA_PORT_ATTACHED,
+ IFPGA_PORT_DETACHED,
+};
+
+struct ifpga_port_hw {
+ enum ifpga_port_state state;
+
+ struct ifpga_feature_list feature_list;
+ spinlock_t lock; /* protect access to hw */
+
+ void *parent; /* pointer to ifpga_hw */
+
+	int port_id; /* provided by HEADER feature */
+	struct uuid afu_id; /* provided by User AFU feature */
+
+ unsigned int disable_count;
+
+ u32 capability;
+ u32 num_umsgs; /* The number of allocated umsgs */
+ u32 num_uafu_irqs; /* The number of uafu interrupts */
+ u8 *stp_addr;
+ u32 stp_size;
+};
+
+#define AFU_MAX_REGION 1
+
+struct ifpga_afu_info {
+ struct opae_reg_region region[AFU_MAX_REGION];
+ unsigned int num_regions;
+ unsigned int num_irqs;
+};
+
+struct ifpga_hw {
+ struct opae_adapter *adapter;
+ struct opae_adapter_data_pci *pci_data;
+
+ struct ifpga_fme_hw fme;
+ struct ifpga_port_hw port[MAX_FPGA_PORT_NUM];
+};
+
+static inline bool is_ifpga_hw_pf(struct ifpga_hw *hw)
+{
+ return hw->fme.state != IFPGA_FME_UNUSED;
+}
+
+static inline bool is_valid_port_id(struct ifpga_hw *hw, u32 port_id)
+{
+ if (port_id >= MAX_FPGA_PORT_NUM ||
+ hw->port[port_id].state != IFPGA_PORT_ATTACHED)
+ return false;
+
+ return true;
+}
+#endif /* _IFPGA_HW_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+int port_get_prop(struct ifpga_port_hw *port, struct feature_prop *prop)
+{
+ struct ifpga_feature *feature;
+
+ if (!port)
+ return -ENOENT;
+
+ feature = get_port_feature_by_id(port, prop->feature_id);
+
+ if (feature && feature->ops && feature->ops->get_prop)
+ return feature->ops->get_prop(feature, prop);
+
+ return -ENOENT;
+}
+
+int port_set_prop(struct ifpga_port_hw *port, struct feature_prop *prop)
+{
+ struct ifpga_feature *feature;
+
+ if (!port)
+ return -ENOENT;
+
+ feature = get_port_feature_by_id(port, prop->feature_id);
+
+ if (feature && feature->ops && feature->ops->set_prop)
+ return feature->ops->set_prop(feature, prop);
+
+ return -ENOENT;
+}
+
+int port_set_irq(struct ifpga_port_hw *port, u32 feature_id, void *irq_set)
+{
+ struct ifpga_feature *feature;
+
+ if (!port)
+ return -ENOENT;
+
+ feature = get_port_feature_by_id(port, feature_id);
+
+ if (feature && feature->ops && feature->ops->set_irq)
+ return feature->ops->set_irq(feature, irq_set);
+
+ return -ENOENT;
+}
+
+static int port_get_revision(struct ifpga_port_hw *port, u64 *revision)
+{
+ struct feature_port_header *port_hdr
+ = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+ struct feature_header header;
+
+ header.csr = readq(&port_hdr->header);
+
+ *revision = header.revision;
+
+ return 0;
+}
+
+static int port_get_portidx(struct ifpga_port_hw *port, u64 *idx)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_capability capability;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ capability.csr = readq(&port_hdr->capability);
+ *idx = capability.port_number;
+
+ return 0;
+}
+
+static int port_get_latency_tolerance(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_control control;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ control.csr = readq(&port_hdr->control);
+ *val = control.latency_tolerance;
+
+ return 0;
+}
+
+static int port_get_ap1_event(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_status status;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ status.csr = readq(&port_hdr->status);
+ spinlock_unlock(&port->lock);
+
+ *val = status.ap1_event;
+
+ return 0;
+}
+
+static int port_set_ap1_event(struct ifpga_port_hw *port, u64 val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_status status;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ status.csr = readq(&port_hdr->status);
+ status.ap1_event = val;
+ writeq(status.csr, &port_hdr->status);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_get_ap2_event(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_status status;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ status.csr = readq(&port_hdr->status);
+ spinlock_unlock(&port->lock);
+
+ *val = status.ap2_event;
+
+ return 0;
+}
+
+static int port_set_ap2_event(struct ifpga_port_hw *port, u64 val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_status status;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ status.csr = readq(&port_hdr->status);
+ status.ap2_event = val;
+ writeq(status.csr, &port_hdr->status);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_get_power_state(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+ struct feature_port_status status;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ status.csr = readq(&port_hdr->status);
+ spinlock_unlock(&port->lock);
+
+ *val = status.power_state;
+
+ return 0;
+}
+
+static int port_get_userclk_freqcmd(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ *val = readq(&port_hdr->user_clk_freq_cmd0);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_set_userclk_freqcmd(struct ifpga_port_hw *port, u64 val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ writeq(val, &port_hdr->user_clk_freq_cmd0);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_get_userclk_freqcntrcmd(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ *val = readq(&port_hdr->user_clk_freq_cmd1);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_set_userclk_freqcntrcmd(struct ifpga_port_hw *port, u64 val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ writeq(val, &port_hdr->user_clk_freq_cmd1);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_get_userclk_freqsts(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ *val = readq(&port_hdr->user_clk_freq_sts0);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_get_userclk_freqcntrsts(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_header *port_hdr;
+
+ port_hdr = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_HEADER);
+
+ spinlock_lock(&port->lock);
+ *val = readq(&port_hdr->user_clk_freq_sts1);
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static int port_hdr_init(struct ifpga_feature *feature)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ dev_info(NULL, "port hdr Init.\n");
+
+ fpga_port_reset(port);
+
+ return 0;
+}
+
+static void port_hdr_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "port hdr uinit.\n");
+}
+
+static int port_hdr_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ switch (prop->prop_id) {
+ case PORT_HDR_PROP_REVISION:
+ return port_get_revision(port, &prop->data);
+ case PORT_HDR_PROP_PORTIDX:
+ return port_get_portidx(port, &prop->data);
+ case PORT_HDR_PROP_LATENCY_TOLERANCE:
+ return port_get_latency_tolerance(port, &prop->data);
+ case PORT_HDR_PROP_AP1_EVENT:
+ return port_get_ap1_event(port, &prop->data);
+ case PORT_HDR_PROP_AP2_EVENT:
+ return port_get_ap2_event(port, &prop->data);
+ case PORT_HDR_PROP_POWER_STATE:
+ return port_get_power_state(port, &prop->data);
+ case PORT_HDR_PROP_USERCLK_FREQCMD:
+ return port_get_userclk_freqcmd(port, &prop->data);
+ case PORT_HDR_PROP_USERCLK_FREQCNTRCMD:
+ return port_get_userclk_freqcntrcmd(port, &prop->data);
+ case PORT_HDR_PROP_USERCLK_FREQSTS:
+ return port_get_userclk_freqsts(port, &prop->data);
+ case PORT_HDR_PROP_USERCLK_CNTRSTS:
+ return port_get_userclk_freqcntrsts(port, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int port_hdr_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ switch (prop->prop_id) {
+ case PORT_HDR_PROP_AP1_EVENT:
+ return port_set_ap1_event(port, prop->data);
+ case PORT_HDR_PROP_AP2_EVENT:
+ return port_set_ap2_event(port, prop->data);
+ case PORT_HDR_PROP_USERCLK_FREQCMD:
+ return port_set_userclk_freqcmd(port, prop->data);
+ case PORT_HDR_PROP_USERCLK_FREQCNTRCMD:
+ return port_set_userclk_freqcntrcmd(port, prop->data);
+ }
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops ifpga_rawdev_port_hdr_ops = {
+ .init = port_hdr_init,
+ .uinit = port_hdr_uinit,
+ .get_prop = port_hdr_get_prop,
+ .set_prop = port_hdr_set_prop,
+};
+
+static int port_stp_init(struct ifpga_feature *feature)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ dev_info(NULL, "port stp Init.\n");
+
+ spinlock_lock(&port->lock);
+ port->stp_addr = feature->addr;
+ port->stp_size = feature->size;
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static void port_stp_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "port stp uinit.\n");
+}
+
+struct ifpga_feature_ops ifpga_rawdev_port_stp_ops = {
+ .init = port_stp_init,
+ .uinit = port_stp_uinit,
+};
+
+static int port_uint_init(struct ifpga_feature *feature)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ dev_info(NULL, "PORT UINT Init.\n");
+
+ spinlock_lock(&port->lock);
+ if (feature->ctx_num) {
+ port->capability |= FPGA_PORT_CAP_UAFU_IRQ;
+ port->num_uafu_irqs = feature->ctx_num;
+ }
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static void port_uint_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "PORT UINT UInit.\n");
+}
+
+struct ifpga_feature_ops ifpga_rawdev_port_uint_ops = {
+ .init = port_uint_init,
+ .uinit = port_uint_uinit,
+};
+
+static int port_afu_init(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "PORT AFU Init.\n");
+
+ return 0;
+}
+
+static void port_afu_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+
+ dev_info(NULL, "PORT AFU UInit.\n");
+}
+
+struct ifpga_feature_ops ifpga_rawdev_port_afu_ops = {
+ .init = port_afu_init,
+ .uinit = port_afu_uinit,
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "ifpga_feature_dev.h"
+
+static int port_err_get_revision(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_error *port_err;
+ struct feature_header header;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+ header.csr = readq(&port_err->header);
+ *val = header.revision;
+
+ return 0;
+}
+
+static int port_err_get_errors(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_err_key error;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+ error.csr = readq(&port_err->port_error);
+ *val = error.csr;
+
+ return 0;
+}
+
+static int port_err_get_first_error(struct ifpga_port_hw *port, u64 *val)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_first_err_key first_error;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+ first_error.csr = readq(&port_err->port_first_error);
+ *val = first_error.csr;
+
+ return 0;
+}
+
+static int port_err_get_first_malformed_req_lsb(struct ifpga_port_hw *port,
+ u64 *val)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_malformed_req0 malreq0;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+
+ malreq0.header_lsb = readq(&port_err->malreq0);
+ *val = malreq0.header_lsb;
+
+ return 0;
+}
+
+static int port_err_get_first_malformed_req_msb(struct ifpga_port_hw *port,
+ u64 *val)
+{
+ struct feature_port_error *port_err;
+ struct feature_port_malformed_req1 malreq1;
+
+ port_err = get_port_feature_ioaddr_by_index(port,
+ PORT_FEATURE_ID_ERROR);
+
+ malreq1.header_msb = readq(&port_err->malreq1);
+ *val = malreq1.header_msb;
+
+ return 0;
+}
+
+static int port_err_set_clear(struct ifpga_port_hw *port, u64 val)
+{
+ int ret;
+
+ spinlock_lock(&port->lock);
+ ret = port_err_clear(port, val);
+ spinlock_unlock(&port->lock);
+
+ return ret;
+}
+
+static int port_error_init(struct ifpga_feature *feature)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ dev_info(NULL, "port error Init.\n");
+
+ spinlock_lock(&port->lock);
+ port_err_mask(port, false);
+ if (feature->ctx_num)
+ port->capability |= FPGA_PORT_CAP_ERR_IRQ;
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+static void port_error_uinit(struct ifpga_feature *feature)
+{
+ UNUSED(feature);
+}
+
+static int port_error_get_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ switch (prop->prop_id) {
+ case PORT_ERR_PROP_REVISION:
+ return port_err_get_revision(port, &prop->data);
+ case PORT_ERR_PROP_ERRORS:
+ return port_err_get_errors(port, &prop->data);
+ case PORT_ERR_PROP_FIRST_ERROR:
+ return port_err_get_first_error(port, &prop->data);
+ case PORT_ERR_PROP_FIRST_MALFORMED_REQ_LSB:
+ return port_err_get_first_malformed_req_lsb(port, &prop->data);
+ case PORT_ERR_PROP_FIRST_MALFORMED_REQ_MSB:
+ return port_err_get_first_malformed_req_msb(port, &prop->data);
+ }
+
+ return -ENOENT;
+}
+
+static int port_error_set_prop(struct ifpga_feature *feature,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port = feature->parent;
+
+ if (prop->prop_id == PORT_ERR_PROP_CLEAR)
+ return port_err_set_clear(port, prop->data);
+
+ return -ENOENT;
+}
+
+struct ifpga_feature_ops ifpga_rawdev_port_error_ops = {
+ .init = port_error_init,
+ .uinit = port_error_uinit,
+ .get_prop = port_error_get_prop,
+ .set_prop = port_error_set_prop,
+};
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+ 'ifpga_api.c',
+ 'ifpga_enumerate.c',
+ 'ifpga_feature_dev.c',
+ 'ifpga_fme.c',
+ 'ifpga_fme_iperf.c',
+ 'ifpga_fme_dperf.c',
+ 'ifpga_fme_error.c',
+ 'ifpga_port.c',
+ 'ifpga_port_error.c',
+ 'ifpga_fme_pr.c',
+ 'opae_hw_api.c',
+ 'opae_ifpga_hw_api.c',
+ 'opae_debug.c',
+ 'opae_spi.c',
+ 'opae_spi_transaction.c',
+ 'opae_intel_max10.c',
+ 'opae_i2c.c',
+ 'opae_at24_eeprom.c',
+ 'opae_eth_group.c',
+]
+
+error_cflags = ['-Wno-sign-compare', '-Wno-unused-value',
+ '-Wno-format', '-Wno-error=format-security',
+ '-Wno-strict-aliasing', '-Wno-unused-but-set-variable'
+]
+c_args = cflags
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('ifpga_rawdev_base', sources,
+ dependencies: static_rte_eal,
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_osdep.h"
+#include "opae_i2c.h"
+#include "opae_at24_eeprom.h"
+
+#define AT24_READ_RETRY 10
+
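+/* Retry a 16-bit-addressed I2C read up to AT24_READ_RETRY times. */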
+static int at24_eeprom_read_and_try(struct altera_i2c_dev *dev,
+ unsigned int slave_addr,
+ u32 offset, u8 *buf, u32 len)
+{
+ int i;
+ int ret = 0;
+
+ for (i = 0; i < AT24_READ_RETRY; i++) {
+ ret = i2c_read16(dev, slave_addr, offset,
+ buf, len);
+ if (ret == 0)
+ break;
+
+ opae_udelay(100);
+ }
+
+ return ret;
+}
+
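+/*
+ * Read from the EEPROM in chunks of at most AT24C512_IO_LIMIT bytes;
+ * returns the number of bytes actually read.
+ */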
+int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr,
+ u32 offset, u8 *buf, int count)
+{
+ int len;
+ int status;
+ int read_count = 0;
+
+ if (!count)
+ return count;
+
+	while (count) {
+		/* clamp each chunk to the device I/O limit */
+		len = (count > AT24C512_IO_LIMIT) ?
+			AT24C512_IO_LIMIT : count;
+
+ status = at24_eeprom_read_and_try(dev, slave_addr, offset,
+ buf, len);
+ if (status)
+ break;
+
+ buf += len;
+ offset += len;
+ count -= len;
+ read_count += len;
+ }
+
+ return read_count;
+}
+
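+/*
+ * Write to the EEPROM in chunks of at most one device page
+ * (AT24C512_PAGE_SIZE bytes); returns the number of bytes actually written.
+ */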
+int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr,
+ u32 offset, u8 *buf, int count)
+{
+ int len;
+ int status;
+ int write_count = 0;
+
+ if (!count)
+ return count;
+
+	while (count) {
+		/* clamp each chunk to one EEPROM page */
+		len = (count > AT24C512_PAGE_SIZE) ?
+			AT24C512_PAGE_SIZE : count;
+
+ status = i2c_write16(dev, slave_addr, offset, buf, len);
+ if (status)
+ break;
+
+ buf += len;
+ offset += len;
+ count -= len;
+ write_count += len;
+ }
+
+ return write_count;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#define AT24C512_PAGE_SIZE 128
+#define AT24C512_IO_LIMIT 128
+
+#define AT24512_SLAVE_ADDR 0x51
+
+int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr,
+ u32 offset, u8 *buf, int count);
+int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr,
+ u32 offset, u8 *buf, int count);
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#define OPAE_HW_DEBUG
+
+#include "opae_hw_api.h"
+#include "opae_debug.h"
+
+void opae_manager_dump(struct opae_manager *mgr)
+{
+ opae_log("=====%s=====\n", __func__);
+	opae_log("OPAE Manager %s\n", mgr->name);
+	opae_log("OPAE Manager OPs = %p\n", mgr->ops);
+ opae_log("OPAE Manager Private Data = %p\n", mgr->data);
+ opae_log("OPAE Adapter(parent) = %p\n", mgr->adapter);
+ opae_log("==========================\n");
+}
+
+void opae_bridge_dump(struct opae_bridge *br)
+{
+ opae_log("=====%s=====\n", __func__);
+ opae_log("OPAE Bridge %s\n", br->name);
+ opae_log("OPAE Bridge ID = %d\n", br->id);
+ opae_log("OPAE Bridge OPs = %p\n", br->ops);
+ opae_log("OPAE Bridge Private Data = %p\n", br->data);
+ opae_log("OPAE Accelerator(under this bridge) = %p\n", br->acc);
+ opae_log("==========================\n");
+}
+
+void opae_accelerator_dump(struct opae_accelerator *acc)
+{
+ opae_log("=====%s=====\n", __func__);
+ opae_log("OPAE Accelerator %s\n", acc->name);
+ opae_log("OPAE Accelerator Index = %d\n", acc->index);
+ opae_log("OPAE Accelerator OPs = %p\n", acc->ops);
+ opae_log("OPAE Accelerator Private Data = %p\n", acc->data);
+ opae_log("OPAE Bridge (upstream) = %p\n", acc->br);
+ opae_log("OPAE Manager (upstream) = %p\n", acc->mgr);
+ opae_log("==========================\n");
+
+ if (acc->br)
+ opae_bridge_dump(acc->br);
+}
+
+static void opae_adapter_data_dump(void *data)
+{
+ struct opae_adapter_data *d = data;
+ struct opae_adapter_data_pci *d_pci;
+ struct opae_reg_region *r;
+ int i;
+
+ opae_log("=====%s=====\n", __func__);
+
+ switch (d->type) {
+ case OPAE_FPGA_PCI:
+ d_pci = (struct opae_adapter_data_pci *)d;
+
+ opae_log("OPAE Adapter Type = PCI\n");
+ opae_log("PCI Device ID: 0x%04x\n", d_pci->device_id);
+ opae_log("PCI Vendor ID: 0x%04x\n", d_pci->vendor_id);
+
+ for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+ r = &d_pci->region[i];
+ opae_log("PCI Bar %d: phy(%llx) len(%llx) addr(%p)\n",
+ i, (unsigned long long)r->phys_addr,
+ (unsigned long long)r->len, r->addr);
+ }
+ break;
+ case OPAE_FPGA_NET:
+ break;
+ }
+
+ opae_log("==========================\n");
+}
+
+void opae_adapter_dump(struct opae_adapter *adapter, int verbose)
+{
+ struct opae_accelerator *acc;
+
+ if (verbose) {
+ opae_log("=====%s=====\n", __func__);
+ opae_log("OPAE Adapter %s\n", adapter->name);
+ opae_log("OPAE Adapter OPs = %p\n", adapter->ops);
+ opae_log("OPAE Adapter Private Data = %p\n", adapter->data);
+ opae_log("OPAE Manager (downstream) = %p\n", adapter->mgr);
+
+ if (adapter->mgr)
+ opae_manager_dump(adapter->mgr);
+
+ opae_adapter_for_each_acc(adapter, acc)
+ opae_accelerator_dump(acc);
+
+ if (adapter->data)
+ opae_adapter_data_dump(adapter->data);
+
+ opae_log("==========================\n");
+ }
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OPAE_DEBUG_H_
+#define _OPAE_DEBUG_H_
+
+#ifdef OPAE_HW_DEBUG
+#define opae_log(fmt, args...) printf(fmt, ## args)
+#else
+#define opae_log(fmt, args...) do {} while (0)
+#endif
+
+void opae_manager_dump(struct opae_manager *mgr);
+void opae_bridge_dump(struct opae_bridge *br);
+void opae_accelerator_dump(struct opae_accelerator *acc);
+void opae_adapter_dump(struct opae_adapter *adapter, int verbose);
+
+#endif /* _OPAE_DEBUG_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_osdep.h"
+#include "opae_eth_group.h"
+
+#define DATA_VAL_INVL 1 /* us */
+#define DATA_VAL_POLL_TIMEOUT 10 /* us */
+
+static const char *eth_type_to_string(u8 type)
+{
+ switch (type) {
+ case ETH_GROUP_PHY:
+ return "phy";
+ case ETH_GROUP_MAC:
+ return "mac";
+ case ETH_GROUP_ETHER:
+ return "ethernet wrapper";
+ }
+
+ return "unknown";
+}
+
+static int eth_group_get_select(struct eth_group_device *dev,
+ u8 type, u8 index, u8 *select)
+{
+	/*
+	 * In different speed configurations, the device-select indexes of
+	 * the PHYs and MACs differ:
+	 *
+	 * 1 ethernet wrapper -> Device Select 0x0 - fixed value
+	 * n PHYs -> Device Select 0x2,4,6,8,A,C,E,10,...
+	 * n MACs -> Device Select 0x3,5,7,9,B,D,F,11,...
+	 */
+
+ if (type == ETH_GROUP_PHY && index < dev->phy_num)
+ *select = index * 2 + 2;
+ else if (type == ETH_GROUP_MAC && index < dev->mac_num)
+ *select = index * 2 + 3;
+ else if (type == ETH_GROUP_ETHER && index == 0)
+ *select = 0;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
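+/*
+ * Indirect register write: encode the write command, device select, address
+ * and data into one 64-bit word and write it to the group control register.
+ */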
+int eth_group_write_reg(struct eth_group_device *dev,
+ u8 type, u8 index, u16 addr, u32 data)
+{
+ u8 dev_select = 0;
+ u64 v = 0;
+ int ret;
+
+ dev_debug(dev, "%s type %s index %u addr 0x%x\n",
+ __func__, eth_type_to_string(type), index, addr);
+
+ /* find device select */
+ ret = eth_group_get_select(dev, type, index, &dev_select);
+ if (ret)
+ return ret;
+
+	v = CMD_WR << CTRL_CMD_SHIFT |
+ (u64)dev_select << CTRL_DS_SHIFT |
+ (u64)addr << CTRL_ADDR_SHIFT |
+ (data & CTRL_WR_DATA);
+
+ /* only PHY has additional feature bit */
+ if (type == ETH_GROUP_PHY)
+ v |= CTRL_FEAT_SELECT;
+
+ opae_writeq(v, dev->base + ETH_GROUP_CTRL);
+
+ return 0;
+}
+
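+/*
+ * Indirect register read: issue the read command through the group control
+ * register, then poll the status register until the data-valid bit is set
+ * and extract the returned data.
+ */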
+int eth_group_read_reg(struct eth_group_device *dev,
+ u8 type, u8 index, u16 addr, u32 *data)
+{
+ u8 dev_select = 0;
+ u64 v = 0;
+ int ret;
+
+ dev_debug(dev, "%s type %s index %u addr 0x%x\n",
+ __func__, eth_type_to_string(type), index,
+ addr);
+
+ /* find device select */
+ ret = eth_group_get_select(dev, type, index, &dev_select);
+ if (ret)
+ return ret;
+
+	v = CMD_RD << CTRL_CMD_SHIFT |
+ (u64)dev_select << CTRL_DS_SHIFT |
+ (u64)addr << CTRL_ADDR_SHIFT;
+
+ /* only PHY has additional feature bit */
+ if (type == ETH_GROUP_PHY)
+ v |= CTRL_FEAT_SELECT;
+
+ opae_writeq(v, dev->base + ETH_GROUP_CTRL);
+
+ if (opae_readq_poll_timeout(dev->base + ETH_GROUP_STAT,
+ v, v & STAT_DATA_VAL, DATA_VAL_INVL,
+ DATA_VAL_POLL_TIMEOUT))
+ return -ETIMEDOUT;
+
+ *data = (v & STAT_RD_DATA);
+
+ dev_debug(dev, "%s data 0x%x\n", __func__, *data);
+
+ return 0;
+}
+
+static int eth_group_reset_mac(struct eth_group_device *dev, u8 index,
+ bool enable)
+{
+ u32 val;
+ int ret;
+
+	/*
+	 * Only 25G & 40G MAC reset is supported for now, using the internal
+	 * reset. As the PHY and MAC are integrated together, the action below
+	 * triggers a PHY reset too.
+	 */
+ if (dev->speed != 25 && dev->speed != 40)
+ return 0;
+
+ ret = eth_group_read_reg(dev, ETH_GROUP_MAC, index, MAC_CONFIG,
+ &val);
+ if (ret) {
+		dev_err(dev, "fail to read MAC_CONFIG: %d\n", ret);
+ return ret;
+ }
+
+ /* skip if mac is in expected state already */
+ if ((((val & MAC_RESET_MASK) == MAC_RESET_MASK) && enable) ||
+ (((val & MAC_RESET_MASK) == 0) && !enable))
+ return 0;
+
+ if (enable)
+ val |= MAC_RESET_MASK;
+ else
+ val &= ~MAC_RESET_MASK;
+
+ ret = eth_group_write_reg(dev, ETH_GROUP_MAC, index, MAC_CONFIG,
+ val);
+ if (ret)
+		dev_err(dev, "fail to write MAC_CONFIG: %d\n", ret);
+
+ return ret;
+}
+
+static void eth_group_mac_uinit(struct eth_group_device *dev)
+{
+ u8 i;
+
+ for (i = 0; i < dev->mac_num; i++) {
+ if (eth_group_reset_mac(dev, i, true))
+ dev_err(dev, "fail to disable mac %d\n", i);
+ }
+}
+
+static int eth_group_mac_init(struct eth_group_device *dev)
+{
+ int ret;
+ u8 i;
+
+ for (i = 0; i < dev->mac_num; i++) {
+ ret = eth_group_reset_mac(dev, i, false);
+ if (ret) {
+ dev_err(dev, "fail to enable mac %d\n", i);
+ goto exit;
+ }
+ }
+
+ return 0;
+
+exit:
+ while (i--)
+ eth_group_reset_mac(dev, i, true);
+
+ return ret;
+}
+
+static int eth_group_reset_phy(struct eth_group_device *dev, u8 index,
+ bool enable)
+{
+ u32 val;
+ int ret;
+
+	/* Only 10G PHY reset is supported for now; it uses the external reset. */
+ if (dev->speed != 10)
+ return 0;
+
+ ret = eth_group_read_reg(dev, ETH_GROUP_PHY, index,
+ ADD_PHY_CTRL, &val);
+ if (ret) {
+ dev_err(dev, "fail to read ADD_PHY_CTRL reg: %d\n", ret);
+ return ret;
+ }
+
+ /* return if PHY is already in expected state */
+ if ((val & PHY_RESET && enable) || (!(val & PHY_RESET) && !enable))
+ return 0;
+
+ if (enable)
+ val |= PHY_RESET;
+ else
+ val &= ~PHY_RESET;
+
+ ret = eth_group_write_reg(dev, ETH_GROUP_PHY, index,
+ ADD_PHY_CTRL, val);
+ if (ret)
+ dev_err(dev, "fail to write ADD_PHY_CTRL reg: %d\n", ret);
+
+ return ret;
+}
+
+static int eth_group_phy_init(struct eth_group_device *dev)
+{
+ int ret;
+ int i;
+
+ for (i = 0; i < dev->phy_num; i++) {
+ ret = eth_group_reset_phy(dev, i, false);
+ if (ret) {
+ dev_err(dev, "fail to enable phy %d\n", i);
+ goto exit;
+ }
+ }
+
+ return 0;
+exit:
+ while (i--)
+ eth_group_reset_phy(dev, i, true);
+
+ return ret;
+}
+
+static void eth_group_phy_uinit(struct eth_group_device *dev)
+{
+ int i;
+
+ for (i = 0; i < dev->phy_num; i++) {
+ if (eth_group_reset_phy(dev, i, true))
+ dev_err(dev, "fail to disable phy %d\n", i);
+ }
+}
+
+static int eth_group_hw_init(struct eth_group_device *dev)
+{
+ int ret;
+
+ ret = eth_group_phy_init(dev);
+ if (ret) {
+ dev_err(dev, "fail to init eth group phys\n");
+ return ret;
+ }
+
+ ret = eth_group_mac_init(dev);
+ if (ret) {
+		dev_err(dev, "fail to init eth group macs\n");
+ goto phy_exit;
+ }
+
+ return 0;
+
+phy_exit:
+ eth_group_phy_uinit(dev);
+ return ret;
+}
+
+static void eth_group_hw_uinit(struct eth_group_device *dev)
+{
+ eth_group_mac_uinit(dev);
+ eth_group_phy_uinit(dev);
+}
+
+struct eth_group_device *eth_group_probe(void *base)
+{
+ struct eth_group_device *dev;
+
+ dev = opae_malloc(sizeof(*dev));
+ if (!dev)
+ return NULL;
+
+ dev->base = (u8 *)base;
+
+ dev->info.info = opae_readq(dev->base + ETH_GROUP_INFO);
+ dev->group_id = dev->info.group_id;
+ dev->phy_num = dev->mac_num = dev->info.num_phys;
+ dev->speed = dev->info.speed;
+
+ dev->status = ETH_GROUP_DEV_ATTACHED;
+
+	if (eth_group_hw_init(dev)) {
+		dev_err(dev, "eth group hw init fail\n");
+		opae_free(dev);
+		return NULL;
+	}
+
+	dev_info(dev, "eth group device %d probe done: phy_num=mac_num=%d, speed=%d\n",
+ dev->group_id, dev->phy_num, dev->speed);
+
+ return dev;
+}
+
+void eth_group_release(struct eth_group_device *dev)
+{
+ if (dev) {
+ eth_group_hw_uinit(dev);
+ dev->status = ETH_GROUP_DEV_NOUSED;
+ opae_free(dev);
+ }
+}
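+
+/*
+ * Usage sketch (illustrative only; "base" is an assumed pointer to the
+ * mapped eth group MMIO region, not something this file provides):
+ *
+ * struct eth_group_device *edev = eth_group_probe(base);
+ * u32 val;
+ *
+ * if (edev) {
+ * eth_group_read_reg(edev, ETH_GROUP_MAC, 0, MAC_CONFIG, &val);
+ * eth_group_release(edev);
+ * }
+ */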
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _OPAE_PHY_MAC_H
+#define _OPAE_PHY_MAC_H
+
+#include "opae_osdep.h"
+
+#define MAX_ETH_GROUP_DEVICES 2
+
+#define LINE_SIDE_GROUP_ID 0
+#define HOST_SIDE_GROUP_ID 1
+
+#define ETH_GROUP_SELECT_FEAT 1
+
+#define ETH_GROUP_PHY 1
+#define ETH_GROUP_MAC 2
+#define ETH_GROUP_ETHER 3
+
+#define ETH_GROUP_INFO 0x8
+#define INFO_SPEED GENMASK_ULL(23, 16)
+#define ETH_SPEED_10G 10
+#define ETH_SPEED_25G 25
+#define INFO_PHY_NUM GENMASK_ULL(15, 8)
+#define INFO_GROUP_NUM GENMASK_ULL(7, 0)
+
+#define ETH_GROUP_CTRL 0x10
+#define CTRL_CMD GENMASK_ULL(63, 62)
+#define CTRL_CMD_SHIT 62
+#define CMD_NOP 0ULL
+#define CMD_RD 1ULL
+#define CMD_WR 2ULL
+#define CTRL_DEV_SELECT GENMASK_ULL(53, 49)
+#define CTRL_DS_SHIFT 49
+#define CTRL_FEAT_SELECT BIT_ULL(48)
+#define SELECT_IP 0
+#define SELECT_FEAT 1
+#define CTRL_ADDR GENMASK_ULL(47, 32)
+#define CTRL_ADDR_SHIFT 32
+#define CTRL_WR_DATA GENMASK_ULL(31, 0)
+
+#define ETH_GROUP_STAT 0x18
+#define STAT_DATA_VAL BIT_ULL(32)
+#define STAT_RD_DATA GENMASK_ULL(31, 0)
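+
+/*
+ * Illustrative sketch of the indirect access cycle implied by the CTRL/STAT
+ * registers above (the device-select value "ds" is derived from the port
+ * type and index by the driver; its exact encoding is an assumption here):
+ *
+ * u64 ctrl = (CMD_RD << CTRL_CMD_SHIT) |
+ * ((u64)ds << CTRL_DS_SHIFT) |
+ * ((u64)reg_addr << CTRL_ADDR_SHIFT);
+ * opae_writeq(ctrl, base + ETH_GROUP_CTRL);
+ * ... poll ETH_GROUP_STAT until STAT_DATA_VAL is set, then take the
+ * 32-bit payload from STAT_RD_DATA ...
+ */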
+
+/* Additional Feature Register */
+#define ADD_PHY_CTRL 0x0
+#define PHY_RESET BIT(0)
+#define MAC_CONFIG 0x310
+#define MAC_RESET_MASK GENMASK(2, 0)
+
+struct opae_eth_group_info {
+ u8 group_id;
+ u8 speed;
+ u8 nums_of_phy;
+ u8 nums_of_mac;
+};
+
+struct opae_eth_group_region_info {
+ u8 group_id;
+ u64 phys_addr;
+ u64 len;
+ u8 *addr;
+ u8 mem_idx;
+};
+
+struct eth_group_info_reg {
+ union {
+ u64 info;
+ struct {
+ u8 group_id:8;
+ u8 num_phys:8;
+ u8 speed:8;
+ u8 direction:1;
+ u64 resvd:39;
+ };
+ };
+};
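+
+/*
+ * For example, a raw info value of 0x000A0401 decodes, per the layout
+ * above, to group_id 1, 4 PHYs and a speed of 10 (10G).
+ */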
+
+enum eth_group_status {
+ ETH_GROUP_DEV_NOUSED = 0,
+ ETH_GROUP_DEV_ATTACHED,
+};
+
+struct eth_group_device {
+ u8 *base;
+ struct eth_group_info_reg info;
+ enum eth_group_status status;
+ u8 speed;
+ u8 group_id;
+ u8 phy_num;
+ u8 mac_num;
+};
+
+struct eth_group_device *eth_group_probe(void *base);
+void eth_group_release(struct eth_group_device *dev);
+int eth_group_read_reg(struct eth_group_device *dev,
+ u8 type, u8 index, u16 addr, u32 *data);
+int eth_group_write_reg(struct eth_group_device *dev,
+ u8 type, u8 index, u16 addr, u32 data);
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "opae_hw_api.h"
+#include "opae_debug.h"
+#include "ifpga_api.h"
+
+/* OPAE Bridge Functions */
+
+/**
+ * opae_bridge_alloc - alloc opae_bridge data structure
+ * @name: bridge name.
+ * @ops: ops of this bridge.
+ * @data: private data of this bridge.
+ *
+ * Return opae_bridge on success, otherwise NULL.
+ */
+struct opae_bridge *
+opae_bridge_alloc(const char *name, struct opae_bridge_ops *ops, void *data)
+{
+ struct opae_bridge *br = opae_zmalloc(sizeof(*br));
+
+ if (!br)
+ return NULL;
+
+ br->name = name;
+ br->ops = ops;
+ br->data = data;
+
+ opae_log("%s %p\n", __func__, br);
+
+ return br;
+}
+
+/**
+ * opae_bridge_reset - reset opae_bridge
+ * @br: bridge to be reset.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_bridge_reset(struct opae_bridge *br)
+{
+ if (!br)
+ return -EINVAL;
+
+ if (br->ops && br->ops->reset)
+ return br->ops->reset(br);
+
+ opae_log("%s no ops\n", __func__);
+
+ return -ENOENT;
+}
+
+/* Accelerator Functions */
+
+/**
+ * opae_accelerator_alloc - alloc opae_accelerator data structure
+ * @name: accelerator name.
+ * @ops: ops of this accelerator.
+ * @data: private data of this accelerator.
+ *
+ * Return: opae_accelerator on success, otherwise NULL.
+ */
+struct opae_accelerator *
+opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
+ void *data)
+{
+ struct opae_accelerator *acc = opae_zmalloc(sizeof(*acc));
+
+ if (!acc)
+ return NULL;
+
+ acc->name = name;
+ acc->ops = ops;
+ acc->data = data;
+
+ opae_log("%s %p\n", __func__, acc);
+
+ return acc;
+}
+
+/**
+ * opae_acc_reg_read - read accelerator's register from its reg region.
+ * @acc: accelerator to read.
+ * @region_idx: reg region index.
+ * @offset: reg offset.
+ * @byte: read operation width, e.g. 4 bytes = 32-bit read.
+ * @data: data to store the value read from the register.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data)
+{
+ if (!acc || !data)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->read)
+ return acc->ops->read(acc, region_idx, offset, byte, data);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_acc_reg_write - write to accelerator's register from its reg region.
+ * @acc: accelerator to write.
+ * @region_idx: reg region index.
+ * @offset: reg offset.
+ * @byte: write operation width, e.g. 4 bytes = 32-bit write.
+ * @data: buffer storing the value to write to the register.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data)
+{
+ if (!acc || !data)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->write)
+ return acc->ops->write(acc, region_idx, offset, byte, data);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_acc_get_info - get information of an accelerator.
+ * @acc: targeted accelerator
+ * @info: accelerator info data structure to be filled.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_get_info(struct opae_accelerator *acc, struct opae_acc_info *info)
+{
+ if (!acc || !info)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->get_info)
+ return acc->ops->get_info(acc, info);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_acc_get_region_info - get information of an accelerator register region.
+ * @acc: targeted accelerator
+ * @info: accelerator region info data structure to be filled.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_get_region_info(struct opae_accelerator *acc,
+ struct opae_acc_region_info *info)
+{
+ if (!acc || !info)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->get_region_info)
+ return acc->ops->get_region_info(acc, info);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_acc_set_irq - set an accelerator's irq.
+ * @acc: targeted accelerator
+ * @start: start vector number
+ * @count: count of vectors to be set from the start vector
+ * @evtfds: event fds to be notified when corresponding irqs happens
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_set_irq(struct opae_accelerator *acc,
+ u32 start, u32 count, s32 evtfds[])
+{
+ if (!acc || !acc->data)
+ return -EINVAL;
+
+ if (start + count <= start)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->set_irq)
+ return acc->ops->set_irq(acc, start, count, evtfds);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_acc_get_uuid - get accelerator's UUID.
+ * @acc: targeted accelerator
+ * @uuid: a pointer to UUID
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_acc_get_uuid(struct opae_accelerator *acc,
+ struct uuid *uuid)
+{
+ if (!acc || !uuid)
+ return -EINVAL;
+
+ if (acc->ops && acc->ops->get_uuid)
+ return acc->ops->get_uuid(acc, uuid);
+
+ return -ENOENT;
+}
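+
+/*
+ * Accelerator access sketch (illustrative only; the region index and
+ * offset below are assumptions):
+ *
+ * struct opae_acc_info info;
+ * u64 val = 0;
+ *
+ * if (!opae_acc_get_info(acc, &info) && info.num_regions)
+ * opae_acc_reg_read(acc, 0, 0x0, 8, &val);
+ */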
+
+/* Manager Functions */
+
+/**
+ * opae_manager_alloc - alloc opae_manager data structure
+ * @name: manager name.
+ * @ops: ops of this manager.
+ * @network_ops: ops of network management.
+ * @data: private data of this manager.
+ *
+ * Return: opae_manager on success, otherwise NULL.
+ */
+struct opae_manager *
+opae_manager_alloc(const char *name, struct opae_manager_ops *ops,
+ struct opae_manager_networking_ops *network_ops, void *data)
+{
+ struct opae_manager *mgr = opae_zmalloc(sizeof(*mgr));
+
+ if (!mgr)
+ return NULL;
+
+ mgr->name = name;
+ mgr->ops = ops;
+ mgr->network_ops = network_ops;
+ mgr->data = data;
+
+ opae_log("%s %p\n", __func__, mgr);
+
+ return mgr;
+}
+
+/**
+ * opae_manager_flash - flash a reconfiguration image via opae_manager
+ * @mgr: opae_manager for flash.
+ * @id: id of target region (accelerator).
+ * @buf: image data buffer.
+ * @size: buffer size.
+ * @status: status to store flash result.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_manager_flash(struct opae_manager *mgr, int id, const char *buf,
+ u32 size, u64 *status)
+{
+ if (!mgr)
+ return -EINVAL;
+
+ if (mgr->ops && mgr->ops->flash)
+ return mgr->ops->flash(mgr, id, buf, size, status);
+
+ return -ENOENT;
+}
+
+/* Adapter Functions */
+
+/**
+ * opae_adapter_data_alloc - alloc opae_adapter_data data structure
+ * @type: opae_adapter_type.
+ *
+ * Return: opae_adapter_data on success, otherwise NULL.
+ */
+void *opae_adapter_data_alloc(enum opae_adapter_type type)
+{
+ struct opae_adapter_data *data;
+ int size;
+
+ switch (type) {
+ case OPAE_FPGA_PCI:
+ size = sizeof(struct opae_adapter_data_pci);
+ break;
+ case OPAE_FPGA_NET:
+ size = sizeof(struct opae_adapter_data_net);
+ break;
+ default:
+ size = sizeof(struct opae_adapter_data);
+ break;
+ }
+
+ data = opae_zmalloc(size);
+ if (!data)
+ return NULL;
+
+ data->type = type;
+
+ return data;
+}
+
+static struct opae_adapter_ops *match_ops(struct opae_adapter *adapter)
+{
+ struct opae_adapter_data *data;
+
+ if (!adapter || !adapter->data)
+ return NULL;
+
+ data = adapter->data;
+
+ if (data->type == OPAE_FPGA_PCI)
+ return &ifpga_adapter_ops;
+
+ return NULL;
+}
+
+/**
+ * opae_adapter_init - init opae_adapter data structure
+ * @adapter: pointer of opae_adapter data structure
+ * @name: adapter name.
+ * @data: private data of this adapter.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_adapter_init(struct opae_adapter *adapter,
+ const char *name, void *data)
+{
+ if (!adapter)
+ return -EINVAL;
+
+ TAILQ_INIT(&adapter->acc_list);
+ adapter->data = data;
+ adapter->name = name;
+ adapter->ops = match_ops(adapter);
+
+ return 0;
+}
+
+/**
+ * opae_adapter_enumerate - enumerate this adapter
+ * @adapter: adapter to enumerate.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_adapter_enumerate(struct opae_adapter *adapter)
+{
+ int ret = -ENOENT;
+
+ if (!adapter)
+ return -EINVAL;
+
+ if (adapter->ops && adapter->ops->enumerate)
+ ret = adapter->ops->enumerate(adapter);
+
+ if (!ret)
+ opae_adapter_dump(adapter, 0);
+
+ return ret;
+}
+
+/**
+ * opae_adapter_destroy - destroy this adapter
+ * @adapter: adapter to destroy.
+ *
+ * destroy things allocated during adapter enumeration.
+ */
+void opae_adapter_destroy(struct opae_adapter *adapter)
+{
+ if (adapter && adapter->ops && adapter->ops->destroy)
+ adapter->ops->destroy(adapter);
+}
+
+/**
+ * opae_adapter_get_acc - find and return accelerator with matched id
+ * @adapter: adapter to find the accelerator.
+ * @acc_id: id (index) of the accelerator.
+ *
+ * Return: pointer to the matched accelerator on success, otherwise NULL.
+ */
+struct opae_accelerator *
+opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id)
+{
+ struct opae_accelerator *acc = NULL;
+
+ if (!adapter)
+ return NULL;
+
+ opae_adapter_for_each_acc(adapter, acc)
+ if (acc->index == acc_id)
+ return acc;
+
+ return NULL;
+}
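+
+/*
+ * Typical adapter lifecycle sketch (illustrative only; allocation of the
+ * adapter itself and error handling are shortened):
+ *
+ * void *data = opae_adapter_data_alloc(OPAE_FPGA_PCI);
+ * struct opae_adapter *adapter = opae_zmalloc(sizeof(*adapter));
+ *
+ * opae_adapter_init(adapter, "ifpga", data);
+ * if (!opae_adapter_enumerate(adapter)) {
+ * struct opae_accelerator *acc =
+ * opae_adapter_get_acc(adapter, 0);
+ * ... use acc via the opae_acc_* APIs ...
+ * }
+ * opae_adapter_destroy(adapter);
+ * opae_adapter_free(adapter);
+ * opae_adapter_data_free(data);
+ */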
+
+/**
+ * opae_manager_read_mac_rom - read the content of the MAC ROM
+ * @mgr: opae_manager for MAC ROM
+ * @port: the port number of retimer
+ * @addr: buffer of the MAC address
+ *
+ * Return: the number of bytes read on success, otherwise error code
+ */
+int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
+ struct opae_ether_addr *addr)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->read_mac_rom)
+ return mgr->network_ops->read_mac_rom(mgr,
+ port * sizeof(struct opae_ether_addr),
+ addr, sizeof(struct opae_ether_addr));
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_write_mac_rom - write data into MAC ROM
+ * @mgr: opae_manager for MAC ROM
+ * @port: the port number of the retimer
+ * @addr: data of the MAC address
+ *
+ * Return: the number of bytes written on success, otherwise error code
+ */
+int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
+ struct opae_ether_addr *addr)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->write_mac_rom)
+ return mgr->network_ops->write_mac_rom(mgr,
+ port * sizeof(struct opae_ether_addr),
+ addr, sizeof(struct opae_ether_addr));
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_get_eth_group_nums - get eth group numbers
+ * @mgr: opae_manager for eth group
+ *
+ * Return: the number of eth groups on success, otherwise error code
+ */
+int opae_manager_get_eth_group_nums(struct opae_manager *mgr)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->get_eth_group_nums)
+ return mgr->network_ops->get_eth_group_nums(mgr);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_get_eth_group_info - get eth group info
+ * @mgr: opae_manager for eth group
+ * @group_id: id for eth group
+ * @info: info return to caller
+ *
+ * Return: 0 on success, otherwise error code
+ */
+int opae_manager_get_eth_group_info(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_info *info)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->get_eth_group_info)
+ return mgr->network_ops->get_eth_group_info(mgr,
+ group_id, info);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_get_eth_group_region_info - get eth group region info
+ * @mgr: opae_manager for eth group.
+ * @group_id: id of the eth group.
+ * @info: the memory region info for eth group.
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_region_info *info)
+{
+ if (!mgr)
+ return -EINVAL;
+
+ if (group_id >= MAX_ETH_GROUP_DEVICES)
+ return -EINVAL;
+
+ info->group_id = group_id;
+
+ if (mgr->ops && mgr->ops->get_eth_group_region_info)
+ return mgr->ops->get_eth_group_region_info(mgr, info);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_eth_group_read_reg - read ETH group register
+ * @mgr: opae_manager for ETH Group
+ * @group_id: ETH group id
+ * @type: eth type
+ * @index: port index in eth group device
+ * @addr: register address of ETH Group
+ * @data: read buffer
+ *
+ * Return: 0 on success, otherwise error code
+ */
+int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->eth_group_reg_read)
+ return mgr->network_ops->eth_group_reg_read(mgr, group_id,
+ type, index, addr, data);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_eth_group_write_reg - write ETH group register
+ * @mgr: opae_manager for ETH Group
+ * @group_id: ETH group id
+ * @type: eth type
+ * @index: port index in eth group device
+ * @addr: register address of ETH Group
+ * @data: data to write to the register
+ *
+ * Return: 0 on success, otherwise error code
+ */
+int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->eth_group_reg_write)
+ return mgr->network_ops->eth_group_reg_write(mgr, group_id,
+ type, index, addr, data);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_get_retimer_info - get retimer info like PKVL chip
+ * @mgr: opae_manager for retimer
+ * @info: info return to caller
+ *
+ * Return: 0 on success, otherwise error code
+ */
+int opae_manager_get_retimer_info(struct opae_manager *mgr,
+ struct opae_retimer_info *info)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->get_retimer_info)
+ return mgr->network_ops->get_retimer_info(mgr, info);
+
+ return -ENOENT;
+}
+
+/**
+ * opae_manager_get_retimer_status - get retimer status
+ * @mgr: opae_manager of retimer
+ * @status: status of retimer
+ *
+ * Return: 0 on success, otherwise error code
+ */
+int opae_manager_get_retimer_status(struct opae_manager *mgr,
+ struct opae_retimer_status *status)
+{
+ if (!mgr || !mgr->network_ops)
+ return -EINVAL;
+
+ if (mgr->network_ops->get_retimer_status)
+ return mgr->network_ops->get_retimer_status(mgr,
+ status);
+
+ return -ENOENT;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OPAE_HW_API_H_
+#define _OPAE_HW_API_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <sys/queue.h>
+
+#include "opae_osdep.h"
+#include "opae_intel_max10.h"
+#include "opae_eth_group.h"
+
+#ifndef PCI_MAX_RESOURCE
+#define PCI_MAX_RESOURCE 6
+#endif
+
+struct opae_adapter;
+
+enum opae_adapter_type {
+ OPAE_FPGA_PCI,
+ OPAE_FPGA_NET,
+};
+
+/* OPAE Manager Data Structure */
+struct opae_manager_ops;
+struct opae_manager_networking_ops;
+
+/*
+ * opae_manager has a pointer to its parent adapter, as it may be able to
+ * manage all components on this FPGA device (adapter). If that is not the
+ * case, leave the adapter pointer unset, which limits the opae_manager ops
+ * to the manager itself.
+ */
+struct opae_manager {
+ const char *name;
+ struct opae_adapter *adapter;
+ struct opae_manager_ops *ops;
+ struct opae_manager_networking_ops *network_ops;
+ void *data;
+};
+
+/* FIXME: add more management ops, e.g. power/thermal */
+struct opae_manager_ops {
+ int (*flash)(struct opae_manager *mgr, int id, const char *buffer,
+ u32 size, u64 *status);
+ int (*get_eth_group_region_info)(struct opae_manager *mgr,
+ struct opae_eth_group_region_info *info);
+};
+
+/* networking management ops in FME */
+struct opae_manager_networking_ops {
+ int (*read_mac_rom)(struct opae_manager *mgr, int offset, void *buf,
+ int size);
+ int (*write_mac_rom)(struct opae_manager *mgr, int offset, void *buf,
+ int size);
+ int (*get_eth_group_nums)(struct opae_manager *mgr);
+ int (*get_eth_group_info)(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_info *info);
+ int (*eth_group_reg_read)(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data);
+ int (*eth_group_reg_write)(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data);
+ int (*get_retimer_info)(struct opae_manager *mgr,
+ struct opae_retimer_info *info);
+ int (*get_retimer_status)(struct opae_manager *mgr,
+ struct opae_retimer_status *status);
+};
+
+/* OPAE Manager APIs */
+struct opae_manager *
+opae_manager_alloc(const char *name, struct opae_manager_ops *ops,
+ struct opae_manager_networking_ops *network_ops, void *data);
+#define opae_manager_free(mgr) opae_free(mgr)
+int opae_manager_flash(struct opae_manager *mgr, int acc_id, const char *buf,
+ u32 size, u64 *status);
+int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_region_info *info);
+
+/* OPAE Bridge Data Structure */
+struct opae_bridge_ops;
+
+/*
+ * opae_bridge only has pointer to its downstream accelerator.
+ */
+struct opae_bridge {
+ const char *name;
+ int id;
+ struct opae_accelerator *acc;
+ struct opae_bridge_ops *ops;
+ void *data;
+};
+
+struct opae_bridge_ops {
+ int (*reset)(struct opae_bridge *br);
+};
+
+/* OPAE Bridge APIs */
+struct opae_bridge *
+opae_bridge_alloc(const char *name, struct opae_bridge_ops *ops, void *data);
+int opae_bridge_reset(struct opae_bridge *br);
+#define opae_bridge_free(br) opae_free(br)
+
+/* OPAE Accelerator Data Structure */
+struct opae_accelerator_ops;
+
+/*
+ * opae_accelerator has a pointer to its upstream bridge (port).
+ * In some cases, if the same user is allowed to do PR on its own
+ * accelerator, the manager pointer is set during enumeration. In other
+ * cases, PR can only be done via the manager in another module / thread /
+ * service / application for better protection.
+ */
+struct opae_accelerator {
+ TAILQ_ENTRY(opae_accelerator) node;
+ const char *name;
+ int index;
+ struct opae_bridge *br;
+ struct opae_manager *mgr;
+ struct opae_accelerator_ops *ops;
+ void *data;
+};
+
+struct opae_acc_info {
+ unsigned int num_regions;
+ unsigned int num_irqs;
+};
+
+struct opae_acc_region_info {
+ u32 flags;
+#define ACC_REGION_READ (1 << 0)
+#define ACC_REGION_WRITE (1 << 1)
+#define ACC_REGION_MMIO (1 << 2)
+ u32 index;
+ u64 phys_addr;
+ u64 len;
+ u8 *addr;
+};
+
+struct opae_accelerator_ops {
+ int (*read)(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data);
+ int (*write)(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data);
+ int (*get_info)(struct opae_accelerator *acc,
+ struct opae_acc_info *info);
+ int (*get_region_info)(struct opae_accelerator *acc,
+ struct opae_acc_region_info *info);
+ int (*set_irq)(struct opae_accelerator *acc,
+ u32 start, u32 count, s32 evtfds[]);
+ int (*get_uuid)(struct opae_accelerator *acc,
+ struct uuid *uuid);
+};
+
+/* OPAE accelerator APIs */
+struct opae_accelerator *
+opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
+ void *data);
+#define opae_accelerator_free(acc) opae_free(acc)
+int opae_acc_get_info(struct opae_accelerator *acc, struct opae_acc_info *info);
+int opae_acc_get_region_info(struct opae_accelerator *acc,
+ struct opae_acc_region_info *info);
+int opae_acc_set_irq(struct opae_accelerator *acc,
+ u32 start, u32 count, s32 evtfds[]);
+int opae_acc_get_uuid(struct opae_accelerator *acc,
+ struct uuid *uuid);
+
+static inline struct opae_bridge *
+opae_acc_get_br(struct opae_accelerator *acc)
+{
+ return acc ? acc->br : NULL;
+}
+
+static inline struct opae_manager *
+opae_acc_get_mgr(struct opae_accelerator *acc)
+{
+ return acc ? acc->mgr : NULL;
+}
+
+int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data);
+int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
+ u64 offset, unsigned int byte, void *data);
+
+#define opae_acc_reg_read64(acc, region, offset, data) \
+ opae_acc_reg_read(acc, region, offset, 8, data)
+#define opae_acc_reg_write64(acc, region, offset, data) \
+ opae_acc_reg_write(acc, region, offset, 8, data)
+#define opae_acc_reg_read32(acc, region, offset, data) \
+ opae_acc_reg_read(acc, region, offset, 4, data)
+#define opae_acc_reg_write32(acc, region, offset, data) \
+ opae_acc_reg_write(acc, region, offset, 4, data)
+#define opae_acc_reg_read16(acc, region, offset, data) \
+ opae_acc_reg_read(acc, region, offset, 2, data)
+#define opae_acc_reg_write16(acc, region, offset, data) \
+ opae_acc_reg_write(acc, region, offset, 2, data)
+#define opae_acc_reg_read8(acc, region, offset, data) \
+ opae_acc_reg_read(acc, region, offset, 1, data)
+#define opae_acc_reg_write8(acc, region, offset, data) \
+ opae_acc_reg_write(acc, region, offset, 1, data)
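+
+/*
+ * For example, a 32-bit read of offset 0x8 in region 0 (both values
+ * illustrative) is:
+ *
+ * u32 val;
+ *
+ * opae_acc_reg_read32(acc, 0, 0x8, &val);
+ */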
+
+/* for data stream read/write */
+int opae_acc_data_read(struct opae_accelerator *acc, unsigned int flags,
+ u64 offset, unsigned int byte, void *data);
+int opae_acc_data_write(struct opae_accelerator *acc, unsigned int flags,
+ u64 offset, unsigned int byte, void *data);
+
+/* OPAE Adapter Data Structure */
+struct opae_adapter_data {
+ enum opae_adapter_type type;
+};
+
+struct opae_reg_region {
+ u64 phys_addr;
+ u64 len;
+ u8 *addr;
+};
+
+struct opae_adapter_data_pci {
+ enum opae_adapter_type type;
+ u16 device_id;
+ u16 vendor_id;
+ struct opae_reg_region region[PCI_MAX_RESOURCE];
+ int vfio_dev_fd; /* VFIO device file descriptor */
+};
+
+/* FIXME: OPAE_FPGA_NET type */
+struct opae_adapter_data_net {
+ enum opae_adapter_type type;
+};
+
+struct opae_adapter_ops {
+ int (*enumerate)(struct opae_adapter *adapter);
+ void (*destroy)(struct opae_adapter *adapter);
+};
+
+TAILQ_HEAD(opae_accelerator_list, opae_accelerator);
+
+#define opae_adapter_for_each_acc(adapter, acc) \
+ TAILQ_FOREACH(acc, &adapter->acc_list, node)
+
+struct opae_adapter {
+ const char *name;
+ struct opae_manager *mgr;
+ struct opae_accelerator_list acc_list;
+ struct opae_adapter_ops *ops;
+ void *data;
+};
+
+/* OPAE Adapter APIs */
+void *opae_adapter_data_alloc(enum opae_adapter_type type);
+#define opae_adapter_data_free(data) opae_free(data)
+
+int opae_adapter_init(struct opae_adapter *adapter,
+ const char *name, void *data);
+#define opae_adapter_free(adapter) opae_free(adapter)
+
+int opae_adapter_enumerate(struct opae_adapter *adapter);
+void opae_adapter_destroy(struct opae_adapter *adapter);
+static inline struct opae_manager *
+opae_adapter_get_mgr(struct opae_adapter *adapter)
+{
+ return adapter ? adapter->mgr : NULL;
+}
+
+struct opae_accelerator *
+opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id);
+
+static inline void opae_adapter_add_acc(struct opae_adapter *adapter,
+ struct opae_accelerator *acc)
+{
+ TAILQ_INSERT_TAIL(&adapter->acc_list, acc, node);
+}
+
+static inline void opae_adapter_remove_acc(struct opae_adapter *adapter,
+ struct opae_accelerator *acc)
+{
+ TAILQ_REMOVE(&adapter->acc_list, acc, node);
+}
+
+/* OPAE vBNG network data structures */
+#define OPAE_ETHER_ADDR_LEN 6
+
+struct opae_ether_addr {
+ unsigned char addr_bytes[OPAE_ETHER_ADDR_LEN];
+} __attribute__((__packed__));
+
+/* OPAE vBNG network APIs */
+int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
+ struct opae_ether_addr *addr);
+int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
+ struct opae_ether_addr *addr);
+int opae_manager_get_retimer_info(struct opae_manager *mgr,
+ struct opae_retimer_info *info);
+int opae_manager_get_retimer_status(struct opae_manager *mgr,
+ struct opae_retimer_status *status);
+int opae_manager_get_eth_group_nums(struct opae_manager *mgr);
+int opae_manager_get_eth_group_info(struct opae_manager *mgr,
+ u8 group_id, struct opae_eth_group_info *info);
+int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 data);
+int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
+ u8 type, u8 index, u16 addr, u32 *data);
+#endif /* _OPAE_HW_API_H_*/
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_osdep.h"
+#include "opae_i2c.h"
+
+static int i2c_transfer(struct altera_i2c_dev *dev,
+ struct i2c_msg *msg, int num)
+{
+ int ret, try;
+
+ for (ret = 0, try = 0; try < I2C_XFER_RETRY; try++) {
+ ret = dev->xfer(dev, msg, num);
+ if (ret != -EAGAIN)
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * i2c_read - read @count bytes from an i2c slave at @offset
+ *
+ * Return: 0 on success, otherwise error code.
+ */
+int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
+ u32 offset, u8 *buf, u32 count)
+{
+ u8 msgbuf[2];
+ int i = 0;
+
+ if (flags & I2C_FLAG_ADDR16)
+ msgbuf[i++] = offset >> 8;
+
+ msgbuf[i++] = offset;
+
+ struct i2c_msg msg[2] = {
+ {
+ .addr = slave_addr,
+ .flags = 0,
+ .len = i,
+ .buf = msgbuf,
+ },
+ {
+ .addr = slave_addr,
+ .flags = I2C_M_RD,
+ .len = count,
+ .buf = buf,
+ },
+ };
+
+ if (!dev->xfer)
+ return -ENODEV;
+
+ return i2c_transfer(dev, msg, 2);
+}
+
+int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
+ u32 offset, u8 *buffer, int len)
+{
+ struct i2c_msg msg;
+ u8 *buf;
+ int ret;
+ int i = 0;
+
+ if (!dev->xfer)
+ return -ENODEV;
+
+ buf = opae_malloc(I2C_MAX_OFFSET_LEN + len);
+ if (!buf)
+ return -ENOMEM;
+
+ msg.addr = slave_addr;
+ msg.flags = 0;
+ msg.buf = buf;
+
+ if (flags & I2C_FLAG_ADDR16)
+ msg.buf[i++] = offset >> 8;
+
+ msg.buf[i++] = offset;
+ opae_memcpy(&msg.buf[i], buffer, len);
+ msg.len = i + len;
+
+ ret = i2c_transfer(dev, &msg, 1);
+
+ opae_free(buf);
+ return ret;
+}
+
+int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count)
+{
+ return i2c_read(dev, 0, slave_addr, offset, buf, count);
+}
+
+int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count)
+{
+ return i2c_read(dev, I2C_FLAG_ADDR16, slave_addr, offset,
+ buf, count);
+}
+
+int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count)
+{
+ return i2c_write(dev, 0, slave_addr, offset, buf, count);
+}
+
+int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count)
+{
+ return i2c_write(dev, I2C_FLAG_ADDR16, slave_addr, offset,
+ buf, count);
+}
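+
+/*
+ * Usage sketch (illustrative; the slave address and offset below are
+ * assumptions): read 6 bytes from a 16-bit-addressed device at slave
+ * address 0x50, offset 0x100:
+ *
+ * u8 mac[6];
+ *
+ * i2c_read16(dev, 0x50, 0x100, mac, sizeof(mac));
+ */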
+
+static void i2c_indirect_write(struct altera_i2c_dev *dev, u32 reg,
+ u32 value)
+{
+ u64 ctrl;
+
+ ctrl = I2C_CTRL_W | (reg >> 2);
+
+ opae_writeq(value & I2C_WRITE_DATA_MASK, dev->base + I2C_WRITE);
+ opae_writeq(ctrl, dev->base + I2C_CTRL);
+}
+
+static u32 i2c_indirect_read(struct altera_i2c_dev *dev, u32 reg)
+{
+ u64 tmp;
+ u64 ctrl;
+ u32 value;
+
+ ctrl = I2C_CTRL_R | (reg >> 2);
+ opae_writeq(ctrl, dev->base + I2C_CTRL);
+
+ /* FIXME: Read one more time to avoid HW timing issue. */
+ tmp = opae_readq(dev->base + I2C_READ);
+ tmp = opae_readq(dev->base + I2C_READ);
+
+ value = tmp & I2C_READ_DATA_MASK;
+
+ return value;
+}
+
+static void altera_i2c_transfer(struct altera_i2c_dev *dev, u32 data)
+{
+ /* send STOP on last byte */
+ if (dev->msg_len == 1)
+ data |= ALTERA_I2C_TFR_CMD_STO;
+ if (dev->msg_len > 0)
+ i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, data);
+}
+
+static void altera_i2c_disable(struct altera_i2c_dev *dev)
+{
+ u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL);
+
+ i2c_indirect_write(dev, ALTERA_I2C_CTRL, val & ~ALTERA_I2C_CTRL_EN);
+}
+
+static void altera_i2c_enable(struct altera_i2c_dev *dev)
+{
+ u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL);
+
+ i2c_indirect_write(dev, ALTERA_I2C_CTRL, val | ALTERA_I2C_CTRL_EN);
+}
+
+static void altera_i2c_reset(struct altera_i2c_dev *dev)
+{
+ altera_i2c_disable(dev);
+ altera_i2c_enable(dev);
+}
+
+static int altera_i2c_wait_core_idle(struct altera_i2c_dev *dev)
+{
+ int retry = 0;
+
+ while (i2c_indirect_read(dev, ALTERA_I2C_STATUS)
+ & ALTERA_I2C_STAT_CORE) {
+ if (retry++ > ALTERA_I2C_TIMEOUT_US) {
+ dev_err(dev, "timeout: Core Status not IDLE...\n");
+ return -EBUSY;
+ }
+ udelay(1);
+ }
+
+ return 0;
+}
+
+static void altera_i2c_enable_interrupt(struct altera_i2c_dev *dev,
+ u32 mask, bool enable)
+{
+ u32 status;
+
+ status = i2c_indirect_read(dev, ALTERA_I2C_ISER);
+ if (enable)
+ dev->isr_mask = status | mask;
+ else
+ dev->isr_mask = status & ~mask;
+
+ i2c_indirect_write(dev, ALTERA_I2C_ISER, dev->isr_mask);
+}
+
+static void altera_i2c_interrupt_clear(struct altera_i2c_dev *dev, u32 mask)
+{
+ u32 int_en;
+
+ int_en = i2c_indirect_read(dev, ALTERA_I2C_ISR);
+
+ i2c_indirect_write(dev, ALTERA_I2C_ISR, int_en | mask);
+}
+
+static void altera_i2c_read_rx_fifo(struct altera_i2c_dev *dev)
+{
+ size_t rx_avail;
+ size_t bytes;
+
+ rx_avail = i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL);
+ bytes = min(rx_avail, dev->msg_len);
+
+ while (bytes-- > 0) {
+ *dev->buf++ = i2c_indirect_read(dev, ALTERA_I2C_RX_DATA);
+ dev->msg_len--;
+ altera_i2c_transfer(dev, 0);
+ }
+}
+
+static void altera_i2c_stop(struct altera_i2c_dev *dev)
+{
+ i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, ALTERA_I2C_TFR_CMD_STO);
+}
+
+static int altera_i2c_fill_tx_fifo(struct altera_i2c_dev *dev)
+{
+ size_t tx_avail;
+ int bytes;
+ int ret;
+
+ tx_avail = dev->fifo_size -
+ i2c_indirect_read(dev, ALTERA_I2C_TC_FIFO_LVL);
+ bytes = min(tx_avail, dev->msg_len);
+ ret = dev->msg_len - bytes;
+
+ while (bytes-- > 0) {
+ altera_i2c_transfer(dev, *dev->buf++);
+ dev->msg_len--;
+ }
+
+ return ret;
+}
+
+static u8 i2c_8bit_addr_from_msg(const struct i2c_msg *msg)
+{
+ return (msg->addr << 1) | (msg->flags & I2C_M_RD ? 1 : 0);
+}
+
+static int altera_i2c_wait_complete(struct altera_i2c_dev *dev,
+ u32 *status)
+{
+ int retry = 0;
+
+ while (!((*status = i2c_indirect_read(dev, ALTERA_I2C_ISR))
+ & dev->isr_mask)) {
+ if (retry++ > ALTERA_I2C_TIMEOUT_US)
+ return -EBUSY;
+
+ udelay(1000);
+ }
+
+ return 0;
+}
+
+static bool altera_handle_i2c_status(struct altera_i2c_dev *dev, u32 status)
+{
+ bool read, finish = false;
+ int ret;
+
+ read = (dev->msg->flags & I2C_M_RD) != 0;
+
+ if (status & ALTERA_I2C_ISR_ARB) {
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_ARB);
+ dev->msg_err = -EAGAIN;
+ finish = true;
+ } else if (status & ALTERA_I2C_ISR_NACK) {
+ dev_debug(dev, "could not get ACK\n");
+ dev->msg_err = -ENXIO;
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_NACK);
+ altera_i2c_stop(dev);
+ finish = true;
+ } else if (read && (status & ALTERA_I2C_ISR_RXOF)) {
+ /* RX FIFO Overflow */
+ altera_i2c_read_rx_fifo(dev);
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_RXOF);
+ altera_i2c_stop(dev);
+ dev_err(dev, "error: RX FIFO overflow\n");
+ finish = true;
+ } else if (read && (status & ALTERA_I2C_ISR_RXRDY)) {
+ altera_i2c_read_rx_fifo(dev);
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_RXRDY);
+ if (!dev->msg_len)
+ finish = true;
+ } else if (!read && (status & ALTERA_I2C_ISR_TXRDY)) {
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_TXRDY);
+ if (dev->msg_len > 0)
+ altera_i2c_fill_tx_fifo(dev);
+ else
+ finish = true;
+ } else {
+ dev_err(dev, "unexpected status:0x%x\n", status);
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ);
+ }
+
+ if (finish) {
+ ret = altera_i2c_wait_core_idle(dev);
+ if (ret)
+ dev_err(dev, "message timeout\n");
+
+ altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false);
+ altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ);
+ dev_debug(dev, "message done\n");
+ }
+
+ return finish;
+}
+
+static bool altera_i2c_poll_status(struct altera_i2c_dev *dev)
+{
+ u32 status;
+ bool finish = false;
+ int i = 0;
+
+ do {
+ if (altera_i2c_wait_complete(dev, &status)) {
+ dev_err(dev, "altera i2c wait complete timeout, status=0x%x\n",
+ status);
+ /* timeout: report failure (this function returns bool) */
+ return false;
+ }
+
+ finish = altera_handle_i2c_status(dev, status);
+
+ if (i++ > I2C_XFER_RETRY)
+ break;
+
+ } while (!finish);
+
+ return finish;
+}
+
+static int altera_i2c_xfer_msg(struct altera_i2c_dev *dev,
+ struct i2c_msg *msg)
+{
+ u32 int_mask = ALTERA_I2C_ISR_RXOF |
+ ALTERA_I2C_ISR_ARB | ALTERA_I2C_ISR_NACK;
+ u8 addr = i2c_8bit_addr_from_msg(msg);
+ bool finish;
+
+ dev->msg = msg;
+ dev->msg_len = msg->len;
+ dev->buf = msg->buf;
+ dev->msg_err = 0;
+ altera_i2c_enable(dev);
+
+ /* make sure the RX FIFO is empty */
+ do {
+ i2c_indirect_read(dev, ALTERA_I2C_RX_DATA);
+ } while (i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL));
+
+ i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD,
+ ALTERA_I2C_TFR_CMD_STA | addr);
+
+ /* enable irq */
+ if (msg->flags & I2C_M_RD) {
+ int_mask |= ALTERA_I2C_ISR_RXOF | ALTERA_I2C_ISR_RXRDY;
+ /* in polling mode, we should set this ISR register? */
+ altera_i2c_enable_interrupt(dev, int_mask, true);
+ altera_i2c_transfer(dev, 0);
+ } else {
+ int_mask |= ALTERA_I2C_ISR_TXRDY;
+ altera_i2c_enable_interrupt(dev, int_mask, true);
+ altera_i2c_fill_tx_fifo(dev);
+ }
+
+ finish = altera_i2c_poll_status(dev);
+ if (!finish) {
+ dev->msg_err = -ETIMEDOUT;
+ dev_err(dev, "%s: i2c transfer error\n", __func__);
+ }
+
+ altera_i2c_enable_interrupt(dev, int_mask, false);
+
+ if (i2c_indirect_read(dev, ALTERA_I2C_STATUS) & ALTERA_I2C_STAT_CORE)
+ dev_info(dev, "core not idle...\n");
+
+ altera_i2c_disable(dev);
+
+ return dev->msg_err;
+}
+
+static int altera_i2c_xfer(struct altera_i2c_dev *dev,
+ struct i2c_msg *msg, int num)
+{
+ int ret = 0;
+ int i;
+
+ for (i = 0; i < num; i++, msg++) {
+ ret = altera_i2c_xfer_msg(dev, msg);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
+static void altera_i2c_hardware_init(struct altera_i2c_dev *dev)
+{
+ u32 divisor = dev->i2c_clk / dev->bus_clk_rate;
+ u32 clk_mhz = dev->i2c_clk / 1000000;
+ u32 tmp = (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_RXT_SHFT) |
+ (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_TCT_SHFT);
+ u32 t_high, t_low;
+
+ if (dev->bus_clk_rate <= 100000) {
+ tmp &= ~ALTERA_I2C_CTRL_BSPEED;
+ /* standard mode: SCL 50/50 duty cycle */
+ t_high = divisor / 2;
+ t_low = divisor / 2;
+ } else {
+ tmp |= ALTERA_I2C_CTRL_BSPEED;
+ /* fast mode: SCL 33/66 duty cycle */
+ t_high = divisor / 3;
+ t_low = divisor * 2 / 3;
+ }
+
+ i2c_indirect_write(dev, ALTERA_I2C_CTRL, tmp);
+
+ dev_info(dev, "%s: rate=%uHz per_clk=%uMHz -> ratio=1:%u\n",
+ __func__, dev->bus_clk_rate, clk_mhz, divisor);
+
+ /* reset the i2c core */
+ altera_i2c_reset(dev);
+
+ /* set SCL high time */
+ i2c_indirect_write(dev, ALTERA_I2C_SCL_HIGH, t_high);
+ /* set SCL low time */
+ i2c_indirect_write(dev, ALTERA_I2C_SCL_LOW, t_low);
+ /* set SDA hold time to 300ns */
+ i2c_indirect_write(dev, ALTERA_I2C_SDA_HOLD, (300*clk_mhz)/1000);
+
+ altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false);
+}
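+
+/*
+ * Worked example (illustrative): with a 100MHz reference clock and a
+ * 400KHz bus, divisor = 100000000 / 400000 = 250. Fast mode then gives
+ * t_high = 250 / 3 = 83 and t_low = 250 * 2 / 3 = 166 SCL counts, and the
+ * 300ns SDA hold time becomes (300 * 100) / 1000 = 30 input clock cycles.
+ */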
+
+struct altera_i2c_dev *altera_i2c_probe(void *base)
+{
+ struct altera_i2c_dev *dev;
+
+ dev = opae_malloc(sizeof(*dev));
+ if (!dev)
+ return NULL;
+
+ dev->base = (u8 *)base;
+ dev->i2c_param.info = opae_readq(dev->base + I2C_PARAM);
+
+ if (dev->i2c_param.devid != 0xEE011) {
+ dev_err(dev, "found an invalid i2c master\n");
+ opae_free(dev);
+ return NULL;
+ }
+
+ dev->fifo_size = dev->i2c_param.fifo_depth;
+
+ if (dev->i2c_param.max_req == ALTERA_I2C_100KHZ)
+ dev->bus_clk_rate = 100000;
+ else if (dev->i2c_param.max_req == ALTERA_I2C_400KHZ)
+ /* i2c bus clk 400KHz */
+ dev->bus_clk_rate = 400000;
+ else
+ /* default to 100KHz if the capability field is unexpected */
+ dev->bus_clk_rate = 100000;
+
+ /* i2c input clock for vista creek is 100MHz */
+ dev->i2c_clk = dev->i2c_param.ref_clk * 1000000;
+ dev->xfer = altera_i2c_xfer;
+
+ altera_i2c_hardware_init(dev);
+
+ return dev;
+}
+
+int altera_i2c_remove(struct altera_i2c_dev *dev)
+{
+ altera_i2c_disable(dev);
+
+ return 0;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _OPAE_I2C_H
+#define _OPAE_I2C_H
+
+#include "opae_osdep.h"
+
+#define ALTERA_I2C_TFR_CMD 0x00 /* Transfer Command register */
+#define ALTERA_I2C_TFR_CMD_STA BIT(9) /* send START before byte */
+#define ALTERA_I2C_TFR_CMD_STO BIT(8) /* send STOP after byte */
+#define ALTERA_I2C_TFR_CMD_RW_D BIT(0) /* Direction of transfer */
+#define ALTERA_I2C_RX_DATA 0x04 /* RX data FIFO register */
+#define ALTERA_I2C_CTRL 0x8 /* Control register */
+#define ALTERA_I2C_CTRL_RXT_SHFT 4 /* RX FIFO Threshold */
+#define ALTERA_I2C_CTRL_TCT_SHFT 2 /* TFER CMD FIFO Threshold */
+#define ALTERA_I2C_CTRL_BSPEED BIT(1) /* Bus Speed */
+#define ALTERA_I2C_CTRL_EN BIT(0) /* Enable Core */
+#define ALTERA_I2C_ISER 0xc /* Interrupt Status Enable register */
+#define ALTERA_I2C_ISER_RXOF_EN BIT(4) /* Enable RX OVERFLOW IRQ */
+#define ALTERA_I2C_ISER_ARB_EN BIT(3) /* Enable ARB LOST IRQ */
+#define ALTERA_I2C_ISER_NACK_EN BIT(2) /* Enable NACK DET IRQ */
+#define ALTERA_I2C_ISER_RXRDY_EN BIT(1) /* Enable RX Ready IRQ */
+#define ALTERA_I2C_ISER_TXRDY_EN BIT(0) /* Enable TX Ready IRQ */
+#define ALTERA_I2C_ISR 0x10 /* Interrupt Status register */
+#define ALTERA_I2C_ISR_RXOF BIT(4) /* RX OVERFLOW */
+#define ALTERA_I2C_ISR_ARB BIT(3) /* ARB LOST */
+#define ALTERA_I2C_ISR_NACK BIT(2) /* NACK DET */
+#define ALTERA_I2C_ISR_RXRDY BIT(1) /* RX Ready */
+#define ALTERA_I2C_ISR_TXRDY BIT(0) /* TX Ready */
+#define ALTERA_I2C_STATUS 0x14 /* Status register */
+#define ALTERA_I2C_STAT_CORE BIT(0) /* Core Status */
+#define ALTERA_I2C_TC_FIFO_LVL 0x18 /* Transfer FIFO LVL register */
+#define ALTERA_I2C_RX_FIFO_LVL 0x1c /* Receive FIFO LVL register */
+#define ALTERA_I2C_SCL_LOW 0x20 /* SCL low count register */
+#define ALTERA_I2C_SCL_HIGH 0x24 /* SCL high count register */
+#define ALTERA_I2C_SDA_HOLD 0x28 /* SDA hold count register */
+
+#define ALTERA_I2C_ALL_IRQ (ALTERA_I2C_ISR_RXOF | ALTERA_I2C_ISR_ARB | \
+ ALTERA_I2C_ISR_NACK | ALTERA_I2C_ISR_RXRDY | \
+ ALTERA_I2C_ISR_TXRDY)
+
+#define ALTERA_I2C_THRESHOLD 0
+#define ALTERA_I2C_DFLT_FIFO_SZ 8
+#define ALTERA_I2C_TIMEOUT_US 250000 /* 250ms */
+
+#define I2C_PARAM 0x8
+#define I2C_CTRL 0x10
+#define I2C_CTRL_R BIT_ULL(9)
+#define I2C_CTRL_W BIT_ULL(8)
+#define I2C_CTRL_ADDR_MASK GENMASK_ULL(3, 0)
+#define I2C_READ 0x18
+#define I2C_READ_DATA_VALID BIT_ULL(32)
+#define I2C_READ_DATA_MASK GENMASK_ULL(31, 0)
+#define I2C_WRITE 0x20
+#define I2C_WRITE_DATA_MASK GENMASK_ULL(31, 0)
+
+#define ALTERA_I2C_100KHZ 0
+#define ALTERA_I2C_400KHZ 1
+
+/* i2c slave using 16bit address */
+#define I2C_FLAG_ADDR16 1
+
+#define I2C_XFER_RETRY 10
+
+struct i2c_core_param {
+ union {
+ u64 info;
+ struct {
+ u16 fifo_depth:9;
+ u8 interface:1;
+ /* reference clock of the I2C core in MHz */
+ u32 ref_clk:10;
+ /* max I2C interface frequency */
+ u8 max_req:4;
+ u64 devid:32;
+ /* number of MAC addresses */
+ u8 nu_macs:8;
+ };
+ };
+};
+
+struct altera_i2c_dev {
+ u8 *base;
+ struct i2c_core_param i2c_param;
+ u32 fifo_size;
+ u32 bus_clk_rate; /* i2c bus clock */
+ u32 i2c_clk; /* i2c input clock */
+ struct i2c_msg *msg;
+ size_t msg_len;
+ int msg_err;
+ u32 isr_mask;
+ u8 *buf;
+ int (*xfer)(struct altera_i2c_dev *dev, struct i2c_msg *msg, int num);
+};
+
+/**
+ * struct i2c_msg: an I2C message
+ */
+struct i2c_msg {
+ unsigned int addr;
+ unsigned int flags;
+ unsigned int len;
+ u8 *buf;
+};
+
+#define I2C_MAX_OFFSET_LEN 4
+
+enum i2c_msg_flags {
+ I2C_M_TEN = 0x0010, /* ten-bit chip address */
+ I2C_M_RD = 0x0001, /* read data */
+ I2C_M_STOP = 0x8000, /* send stop after this message */
+};
+
+struct altera_i2c_dev *altera_i2c_probe(void *base);
+int altera_i2c_remove(struct altera_i2c_dev *dev);
+int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
+ u32 offset, u8 *buf, u32 count);
+int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
+ u32 offset, u8 *buffer, int len);
+int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count);
+int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count);
+int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count);
+int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
+ u8 *buf, u32 count);
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include "opae_ifpga_hw_api.h"
+#include "ifpga_api.h"
+
+int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme;
+
+ if (!mgr || !mgr->data)
+ return -EINVAL;
+
+ fme = mgr->data;
+
+ return ifpga_get_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
+}
+
+int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
+ struct feature_prop *prop)
+{
+ struct ifpga_fme_hw *fme;
+
+ if (!mgr || !mgr->data)
+ return -EINVAL;
+
+ fme = mgr->data;
+
+ return ifpga_set_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
+}
+
+int opae_manager_ifpga_get_info(struct opae_manager *mgr,
+ struct fpga_fme_info *fme_info)
+{
+ struct ifpga_fme_hw *fme;
+
+ if (!mgr || !mgr->data || !fme_info)
+ return -EINVAL;
+
+ fme = mgr->data;
+
+ spinlock_lock(&fme->lock);
+ fme_info->capability = fme->capability;
+ spinlock_unlock(&fme->lock);
+
+ return 0;
+}
+
+int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
+ struct fpga_fme_err_irq_set *err_irq_set)
+{
+ struct ifpga_fme_hw *fme;
+
+ if (!mgr || !mgr->data)
+ return -EINVAL;
+
+ fme = mgr->data;
+
+ return ifpga_set_irq(fme->parent, FEATURE_FIU_ID_FME, 0,
+ IFPGA_FME_FEATURE_ID_GLOBAL_ERR, err_irq_set);
+}
+
+int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data)
+ return -EINVAL;
+
+ port = br->data;
+
+ return ifpga_get_prop(port->parent, FEATURE_FIU_ID_PORT,
+ port->port_id, prop);
+}
+
+int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
+ struct feature_prop *prop)
+{
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data)
+ return -EINVAL;
+
+ port = br->data;
+
+ return ifpga_set_prop(port->parent, FEATURE_FIU_ID_PORT,
+ port->port_id, prop);
+}
+
+int opae_bridge_ifpga_get_info(struct opae_bridge *br,
+ struct fpga_port_info *port_info)
+{
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data || !port_info)
+ return -EINVAL;
+
+ port = br->data;
+
+ spinlock_lock(&port->lock);
+ port_info->capability = port->capability;
+ port_info->num_uafu_irqs = port->num_uafu_irqs;
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
+ struct fpga_port_region_info *info)
+{
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data || !info)
+ return -EINVAL;
+
+ /* Only support STP region now */
+ if (info->index != PORT_REGION_INDEX_STP)
+ return -EINVAL;
+
+ port = br->data;
+
+ spinlock_lock(&port->lock);
+ info->addr = port->stp_addr;
+ info->size = port->stp_size;
+ spinlock_unlock(&port->lock);
+
+ return 0;
+}
+
+int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
+ struct fpga_port_err_irq_set *err_irq_set)
+{
+ struct ifpga_port_hw *port;
+
+ if (!br || !br->data)
+ return -EINVAL;
+
+ port = br->data;
+
+ return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
+ IFPGA_PORT_FEATURE_ID_ERROR, err_irq_set);
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OPAE_IFPGA_HW_API_H_
+#define _OPAE_IFPGA_HW_API_H_
+
+#include "opae_hw_api.h"
+
+/**
+ * struct feature_prop - data structure for feature property
+ * @feature_id: id of this feature.
+ * @prop_id: id of this property under this feature.
+ * @data: property value to set/get.
+ */
+struct feature_prop {
+ u64 feature_id;
+ u64 prop_id;
+ u64 data;
+};
+
+#define IFPGA_FIU_ID_FME 0x0
+#define IFPGA_FIU_ID_PORT 0x1
+
+#define IFPGA_FME_FEATURE_ID_HEADER 0x0
+#define IFPGA_FME_FEATURE_ID_THERMAL_MGMT 0x1
+#define IFPGA_FME_FEATURE_ID_POWER_MGMT 0x2
+#define IFPGA_FME_FEATURE_ID_GLOBAL_IPERF 0x3
+#define IFPGA_FME_FEATURE_ID_GLOBAL_ERR 0x4
+#define IFPGA_FME_FEATURE_ID_PR_MGMT 0x5
+#define IFPGA_FME_FEATURE_ID_HSSI 0x6
+#define IFPGA_FME_FEATURE_ID_GLOBAL_DPERF 0x7
+
+#define IFPGA_PORT_FEATURE_ID_HEADER 0x0
+#define IFPGA_PORT_FEATURE_ID_AFU 0xff
+#define IFPGA_PORT_FEATURE_ID_ERROR 0x10
+#define IFPGA_PORT_FEATURE_ID_UMSG 0x11
+#define IFPGA_PORT_FEATURE_ID_UINT 0x12
+#define IFPGA_PORT_FEATURE_ID_STP 0x13
+
+/*
+ * PROP format (TOP + SUB + ID)
+ *
+ * (~0x0) means this field is unused.
+ */
+#define PROP_TOP GENMASK(31, 24)
+#define PROP_TOP_UNUSED 0xff
+#define PROP_SUB GENMASK(23, 16)
+#define PROP_SUB_UNUSED 0xff
+#define PROP_ID GENMASK(15, 0)
+
+#define PROP(_top, _sub, _id) \
+ (SET_FIELD(PROP_TOP, _top) | SET_FIELD(PROP_SUB, _sub) |\
+ SET_FIELD(PROP_ID, _id))
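+
+/*
+ * For example, FME_ERR_PROP_PCIE0_ERRORS below expands to
+ * PROP(0xff, 0xff, 0x6): TOP and SUB are unused and the property ID is 0x6.
+ */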
+
+/* FME header feature's properties */
+#define FME_HDR_PROP_REVISION 0x1 /* RDONLY */
+#define FME_HDR_PROP_PORTS_NUM 0x2 /* RDONLY */
+#define FME_HDR_PROP_CACHE_SIZE 0x3 /* RDONLY */
+#define FME_HDR_PROP_VERSION 0x4 /* RDONLY */
+#define FME_HDR_PROP_SOCKET_ID 0x5 /* RDONLY */
+#define FME_HDR_PROP_BITSTREAM_ID 0x6 /* RDONLY */
+#define FME_HDR_PROP_BITSTREAM_METADATA 0x7 /* RDONLY */
+
+/* FME error reporting feature's properties */
+/* FME error reporting properties format */
+#define ERR_PROP(_top, _id) PROP(_top, 0xff, _id)
+#define ERR_PROP_TOP_UNUSED PROP_TOP_UNUSED
+#define ERR_PROP_TOP_FME_ERR 0x1
+#define ERR_PROP_ROOT(_id) ERR_PROP(0xff, _id)
+#define ERR_PROP_FME_ERR(_id) ERR_PROP(ERR_PROP_TOP_FME_ERR, _id)
+
+#define FME_ERR_PROP_ERRORS ERR_PROP_FME_ERR(0x1)
+#define FME_ERR_PROP_FIRST_ERROR ERR_PROP_FME_ERR(0x2)
+#define FME_ERR_PROP_NEXT_ERROR ERR_PROP_FME_ERR(0x3)
+#define FME_ERR_PROP_CLEAR ERR_PROP_FME_ERR(0x4) /* WO */
+#define FME_ERR_PROP_REVISION ERR_PROP_ROOT(0x5)
+#define FME_ERR_PROP_PCIE0_ERRORS ERR_PROP_ROOT(0x6) /* RW */
+#define FME_ERR_PROP_PCIE1_ERRORS ERR_PROP_ROOT(0x7) /* RW */
+#define FME_ERR_PROP_NONFATAL_ERRORS ERR_PROP_ROOT(0x8)
+#define FME_ERR_PROP_CATFATAL_ERRORS ERR_PROP_ROOT(0x9)
+#define FME_ERR_PROP_INJECT_ERRORS ERR_PROP_ROOT(0xa) /* RW */
+
+/* FME thermal feature's properties */
+#define FME_THERMAL_PROP_THRESHOLD1 0x1 /* RW */
+#define FME_THERMAL_PROP_THRESHOLD2 0x2 /* RW */
+#define FME_THERMAL_PROP_THRESHOLD_TRIP 0x3 /* RDONLY */
+#define FME_THERMAL_PROP_THRESHOLD1_REACHED 0x4 /* RDONLY */
+#define FME_THERMAL_PROP_THRESHOLD2_REACHED 0x5 /* RDONLY */
+#define FME_THERMAL_PROP_THRESHOLD1_POLICY 0x6 /* RW */
+#define FME_THERMAL_PROP_TEMPERATURE 0x7 /* RDONLY */
+#define FME_THERMAL_PROP_REVISION 0x8 /* RDONLY */
+
+/* FME power feature's properties */
+#define FME_PWR_PROP_CONSUMED 0x1 /* RDONLY */
+#define FME_PWR_PROP_THRESHOLD1 0x2 /* RW */
+#define FME_PWR_PROP_THRESHOLD2 0x3 /* RW */
+#define FME_PWR_PROP_THRESHOLD1_STATUS 0x4 /* RDONLY */
+#define FME_PWR_PROP_THRESHOLD2_STATUS 0x5 /* RDONLY */
+#define FME_PWR_PROP_RTL 0x6 /* RDONLY */
+#define FME_PWR_PROP_XEON_LIMIT 0x7 /* RDONLY */
+#define FME_PWR_PROP_FPGA_LIMIT 0x8 /* RDONLY */
+#define FME_PWR_PROP_REVISION 0x9 /* RDONLY */
+
+/* FME iperf/dperf PROP format */
+#define PERF_PROP_TOP_CACHE 0x1
+#define PERF_PROP_TOP_VTD 0x2
+#define PERF_PROP_TOP_FAB 0x3
+#define PERF_PROP_TOP_UNUSED PROP_TOP_UNUSED
+#define PERF_PROP_SUB_UNUSED PROP_SUB_UNUSED
+
+#define PERF_PROP_ROOT(_id) PROP(0xff, 0xff, _id)
+#define PERF_PROP_CACHE(_id) PROP(PERF_PROP_TOP_CACHE, 0xff, _id)
+#define PERF_PROP_VTD(_sub, _id) PROP(PERF_PROP_TOP_VTD, _sub, _id)
+#define PERF_PROP_VTD_ROOT(_id) PROP(PERF_PROP_TOP_VTD, 0xff, _id)
+#define PERF_PROP_FAB(_sub, _id) PROP(PERF_PROP_TOP_FAB, _sub, _id)
+#define PERF_PROP_FAB_ROOT(_id) PROP(PERF_PROP_TOP_FAB, 0xff, _id)
+
+/* FME iperf feature's properties */
+#define FME_IPERF_PROP_CLOCK PERF_PROP_ROOT(0x1)
+#define FME_IPERF_PROP_REVISION PERF_PROP_ROOT(0x2)
+
+/* iperf CACHE properties */
+#define FME_IPERF_PROP_CACHE_FREEZE PERF_PROP_CACHE(0x1) /* RW */
+#define FME_IPERF_PROP_CACHE_READ_HIT PERF_PROP_CACHE(0x2)
+#define FME_IPERF_PROP_CACHE_READ_MISS PERF_PROP_CACHE(0x3)
+#define FME_IPERF_PROP_CACHE_WRITE_HIT PERF_PROP_CACHE(0x4)
+#define FME_IPERF_PROP_CACHE_WRITE_MISS PERF_PROP_CACHE(0x5)
+#define FME_IPERF_PROP_CACHE_HOLD_REQUEST PERF_PROP_CACHE(0x6)
+#define FME_IPERF_PROP_CACHE_TX_REQ_STALL PERF_PROP_CACHE(0x7)
+#define FME_IPERF_PROP_CACHE_RX_REQ_STALL PERF_PROP_CACHE(0x8)
+#define FME_IPERF_PROP_CACHE_RX_EVICTION PERF_PROP_CACHE(0x9)
+#define FME_IPERF_PROP_CACHE_DATA_WRITE_PORT_CONTENTION PERF_PROP_CACHE(0xa)
+#define FME_IPERF_PROP_CACHE_TAG_WRITE_PORT_CONTENTION PERF_PROP_CACHE(0xb)
+/* iperf VTD properties */
+#define FME_IPERF_PROP_VTD_FREEZE PERF_PROP_VTD_ROOT(0x1) /* RW */
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_4K_HIT PERF_PROP_VTD_ROOT(0x2)
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_2M_HIT PERF_PROP_VTD_ROOT(0x3)
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_1G_HIT PERF_PROP_VTD_ROOT(0x4)
+#define FME_IPERF_PROP_VTD_SIP_SLPWC_L3_HIT PERF_PROP_VTD_ROOT(0x5)
+#define FME_IPERF_PROP_VTD_SIP_SLPWC_L4_HIT PERF_PROP_VTD_ROOT(0x6)
+#define FME_IPERF_PROP_VTD_SIP_RCC_HIT PERF_PROP_VTD_ROOT(0x7)
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_4K_MISS PERF_PROP_VTD_ROOT(0x8)
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_2M_MISS PERF_PROP_VTD_ROOT(0x9)
+#define FME_IPERF_PROP_VTD_SIP_IOTLB_1G_MISS PERF_PROP_VTD_ROOT(0xa)
+#define FME_IPERF_PROP_VTD_SIP_SLPWC_L3_MISS PERF_PROP_VTD_ROOT(0xb)
+#define FME_IPERF_PROP_VTD_SIP_SLPWC_L4_MISS PERF_PROP_VTD_ROOT(0xc)
+#define FME_IPERF_PROP_VTD_SIP_RCC_MISS PERF_PROP_VTD_ROOT(0xd)
+#define FME_IPERF_PROP_VTD_PORT_READ_TRANSACTION(n) PERF_PROP_VTD(n, 0xe)
+#define FME_IPERF_PROP_VTD_PORT_WRITE_TRANSACTION(n) PERF_PROP_VTD(n, 0xf)
+#define FME_IPERF_PROP_VTD_PORT_DEVTLB_READ_HIT(n) PERF_PROP_VTD(n, 0x10)
+#define FME_IPERF_PROP_VTD_PORT_DEVTLB_WRITE_HIT(n) PERF_PROP_VTD(n, 0x11)
+#define FME_IPERF_PROP_VTD_PORT_DEVTLB_4K_FILL(n) PERF_PROP_VTD(n, 0x12)
+#define FME_IPERF_PROP_VTD_PORT_DEVTLB_2M_FILL(n) PERF_PROP_VTD(n, 0x13)
+#define FME_IPERF_PROP_VTD_PORT_DEVTLB_1G_FILL(n) PERF_PROP_VTD(n, 0x14)
+/* iperf FAB properties */
+#define FME_IPERF_PROP_FAB_FREEZE PERF_PROP_FAB_ROOT(0x1) /* RW */
+#define FME_IPERF_PROP_FAB_PCIE0_READ PERF_PROP_FAB_ROOT(0x2)
+#define FME_IPERF_PROP_FAB_PORT_PCIE0_READ(n) PERF_PROP_FAB(n, 0x2)
+#define FME_IPERF_PROP_FAB_PCIE0_WRITE PERF_PROP_FAB_ROOT(0x3)
+#define FME_IPERF_PROP_FAB_PORT_PCIE0_WRITE(n) PERF_PROP_FAB(n, 0x3)
+#define FME_IPERF_PROP_FAB_PCIE1_READ PERF_PROP_FAB_ROOT(0x4)
+#define FME_IPERF_PROP_FAB_PORT_PCIE1_READ(n) PERF_PROP_FAB(n, 0x4)
+#define FME_IPERF_PROP_FAB_PCIE1_WRITE PERF_PROP_FAB_ROOT(0x5)
+#define FME_IPERF_PROP_FAB_PORT_PCIE1_WRITE(n) PERF_PROP_FAB(n, 0x5)
+#define FME_IPERF_PROP_FAB_UPI_READ PERF_PROP_FAB_ROOT(0x6)
+#define FME_IPERF_PROP_FAB_PORT_UPI_READ(n) PERF_PROP_FAB(n, 0x6)
+#define FME_IPERF_PROP_FAB_UPI_WRITE PERF_PROP_FAB_ROOT(0x7)
+#define FME_IPERF_PROP_FAB_PORT_UPI_WRITE(n) PERF_PROP_FAB(n, 0x7)
+#define FME_IPERF_PROP_FAB_MMIO_READ PERF_PROP_FAB_ROOT(0x8)
+#define FME_IPERF_PROP_FAB_PORT_MMIO_READ(n) PERF_PROP_FAB(n, 0x8)
+#define FME_IPERF_PROP_FAB_MMIO_WRITE PERF_PROP_FAB_ROOT(0x9)
+#define FME_IPERF_PROP_FAB_PORT_MMIO_WRITE(n) PERF_PROP_FAB(n, 0x9)
+#define FME_IPERF_PROP_FAB_ENABLE PERF_PROP_FAB_ROOT(0xa) /* RW */
+#define FME_IPERF_PROP_FAB_PORT_ENABLE(n) PERF_PROP_FAB(n, 0xa) /* RW */
+
+/* FME dperf properties */
+#define FME_DPERF_PROP_CLOCK PERF_PROP_ROOT(0x1)
+#define FME_DPERF_PROP_REVISION PERF_PROP_ROOT(0x2)
+
+/* dperf FAB properties */
+#define FME_DPERF_PROP_FAB_FREEZE PERF_PROP_FAB_ROOT(0x1) /* RW */
+#define FME_DPERF_PROP_FAB_PCIE0_READ PERF_PROP_FAB_ROOT(0x2)
+#define FME_DPERF_PROP_FAB_PORT_PCIE0_READ(n) PERF_PROP_FAB(n, 0x2)
+#define FME_DPERF_PROP_FAB_PCIE0_WRITE PERF_PROP_FAB_ROOT(0x3)
+#define FME_DPERF_PROP_FAB_PORT_PCIE0_WRITE(n) PERF_PROP_FAB(n, 0x3)
+#define FME_DPERF_PROP_FAB_MMIO_READ PERF_PROP_FAB_ROOT(0x4)
+#define FME_DPERF_PROP_FAB_PORT_MMIO_READ(n) PERF_PROP_FAB(n, 0x4)
+#define FME_DPERF_PROP_FAB_MMIO_WRITE PERF_PROP_FAB_ROOT(0x5)
+#define FME_DPERF_PROP_FAB_PORT_MMIO_WRITE(n) PERF_PROP_FAB(n, 0x5)
+#define FME_DPERF_PROP_FAB_ENABLE PERF_PROP_FAB_ROOT(0x6) /* RW */
+#define FME_DPERF_PROP_FAB_PORT_ENABLE(n) PERF_PROP_FAB(n, 0x6) /* RW */
+
+/* PORT hdr feature's properties */
+#define PORT_HDR_PROP_REVISION 0x1 /* RDONLY */
+#define PORT_HDR_PROP_PORTIDX 0x2 /* RDONLY */
+#define PORT_HDR_PROP_LATENCY_TOLERANCE 0x3 /* RDONLY */
+#define PORT_HDR_PROP_AP1_EVENT 0x4 /* RW */
+#define PORT_HDR_PROP_AP2_EVENT 0x5 /* RW */
+#define PORT_HDR_PROP_POWER_STATE 0x6 /* RDONLY */
+#define PORT_HDR_PROP_USERCLK_FREQCMD 0x7 /* RW */
+#define PORT_HDR_PROP_USERCLK_FREQCNTRCMD 0x8 /* RW */
+#define PORT_HDR_PROP_USERCLK_FREQSTS 0x9 /* RDONLY */
+#define PORT_HDR_PROP_USERCLK_CNTRSTS 0xa /* RDONLY */
+
+/* PORT error feature's properties */
+#define PORT_ERR_PROP_REVISION 0x1 /* RDONLY */
+#define PORT_ERR_PROP_ERRORS 0x2 /* RDONLY */
+#define PORT_ERR_PROP_FIRST_ERROR 0x3 /* RDONLY */
+#define PORT_ERR_PROP_FIRST_MALFORMED_REQ_LSB 0x4 /* RDONLY */
+#define PORT_ERR_PROP_FIRST_MALFORMED_REQ_MSB 0x5 /* RDONLY */
+#define PORT_ERR_PROP_CLEAR 0x6 /* WRONLY */
+
+int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
+ struct feature_prop *prop);
+int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
+ struct feature_prop *prop);
+int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
+ struct feature_prop *prop);
+int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
+ struct feature_prop *prop);
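+
+/*
+ * Usage sketch (illustrative only): query the number of ports exposed by
+ * the FME header feature.
+ *
+ * struct feature_prop prop = {
+ * .feature_id = IFPGA_FME_FEATURE_ID_HEADER,
+ * .prop_id = FME_HDR_PROP_PORTS_NUM,
+ * };
+ *
+ * if (!opae_manager_ifpga_get_prop(mgr, &prop))
+ * ... prop.data now holds the port count ...
+ */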
+
+/*
+ * Retrieve information about the fpga fme.
+ * Driver fills the info in provided struct fpga_fme_info.
+ */
+struct fpga_fme_info {
+ u32 capability; /* The capability of FME device */
+#define FPGA_FME_CAP_ERR_IRQ (1 << 0) /* Support fme error interrupt */
+};
+
+int opae_manager_ifpga_get_info(struct opae_manager *mgr,
+ struct fpga_fme_info *fme_info);
+
+/* Set eventfd information for ifpga FME error interrupt */
+struct fpga_fme_err_irq_set {
+ s32 evtfd; /* Eventfd handler */
+};
+
+int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
+ struct fpga_fme_err_irq_set *err_irq_set);
+
+/*
+ * Retrieve information about the fpga port.
+ * Driver fills the info in provided struct fpga_port_info.
+ */
+struct fpga_port_info {
+ u32 capability; /* The capability of port device */
+#define FPGA_PORT_CAP_ERR_IRQ (1 << 0) /* Support port error interrupt */
+#define FPGA_PORT_CAP_UAFU_IRQ (1 << 1) /* Support uafu error interrupt */
+ u32 num_umsgs; /* The number of allocated umsgs */
+ u32 num_uafu_irqs; /* The number of uafu interrupts */
+};
+
+int opae_bridge_ifpga_get_info(struct opae_bridge *br,
+ struct fpga_port_info *port_info);
+/*
+ * Retrieve region information about the fpga port.
+ * Driver needs to fill the index of struct fpga_port_region_info.
+ */
+struct fpga_port_region_info {
+ u32 index;
+#define PORT_REGION_INDEX_STP (1 << 1) /* Signal Tap Region */
+ u64 size; /* Region Size */
+ u8 *addr; /* Base address of the region */
+};
+
+int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
+ struct fpga_port_region_info *info);
+
+/* Set eventfd information for ifpga port error interrupt */
+struct fpga_port_err_irq_set {
+ s32 evtfd; /* Eventfd handler */
+};
+
+int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
+ struct fpga_port_err_irq_set *err_irq_set);
+
+#endif /* _OPAE_IFPGA_HW_API_H_ */
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_intel_max10.h"
+
+static struct intel_max10_device *g_max10;
+
+int max10_reg_read(unsigned int reg, unsigned int *val)
+{
+ if (!g_max10)
+ return -ENODEV;
+
+ return spi_transaction_read(g_max10->spi_tran_dev,
+ reg, 4, (unsigned char *)val);
+}
+
+int max10_reg_write(unsigned int reg, unsigned int val)
+{
+ unsigned int tmp = val;
+
+ if (!g_max10)
+ return -ENODEV;
+
+ return spi_transaction_write(g_max10->spi_tran_dev,
+ reg, 4, (unsigned char *)&tmp);
+}
+
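+/*
+ * Example (illustrative): once intel_max10_device_probe() has set the
+ * global device, any MAX10 register can be read by offset, e.g. the
+ * NIOS firmware version register defined in opae_intel_max10.h:
+ *
+ *   unsigned int ver;
+ *
+ *   if (!max10_reg_read(NIOS2_FW_VERSION_OFF, &ver))
+ *       dev_info(g_max10, "NIOS2 firmware version: 0x%x\n", ver);
+ */
+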
+struct intel_max10_device *
+intel_max10_device_probe(struct altera_spi_device *spi,
+ int chipselect)
+{
+ struct intel_max10_device *dev;
+ int ret;
+ unsigned int val;
+
+ dev = opae_malloc(sizeof(*dev));
+ if (!dev)
+ return NULL;
+
+ dev->spi_master = spi;
+
+ dev->spi_tran_dev = spi_transaction_init(spi, chipselect);
+ if (!dev->spi_tran_dev) {
+ dev_err(dev, "%s spi tran init fail\n", __func__);
+ goto free_dev;
+ }
+
+ /* set the global max10 device first so the register helpers work */
+ g_max10 = dev;
+
+ /* read FPGA loading information */
+ ret = max10_reg_read(FPGA_PAGE_INFO_OFF, &val);
+ if (ret) {
+ dev_err(dev, "fail to get FPGA loading info\n");
+ goto spi_tran_fail;
+ }
+ dev_info(dev, "FPGA loaded from %s Image\n", val ? "User" : "Factory");
+
+ return dev;
+
+spi_tran_fail:
+ spi_transaction_remove(dev->spi_tran_dev);
+free_dev:
+ g_max10 = NULL;
+ opae_free(dev);
+
+ return NULL;
+}
+
+int intel_max10_device_remove(struct intel_max10_device *dev)
+{
+ if (!dev)
+ return 0;
+
+ if (dev->spi_tran_dev)
+ spi_transaction_remove(dev->spi_tran_dev);
+
+ g_max10 = NULL;
+ opae_free(dev);
+
+ return 0;
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _OPAE_INTEL_MAX10_H_
+#define _OPAE_INTEL_MAX10_H_
+
+#include "opae_osdep.h"
+#include "opae_spi.h"
+
+/* max10 capability flags */
+#define MAX10_FLAGS_NO_I2C2 BIT(0)
+#define MAX10_FLAGS_NO_BMCIMG_FLASH BIT(1)
+#define MAX10_FLAGS_DEVICE_TABLE BIT(2)
+#define MAX10_FLAGS_SPI BIT(3)
+#define MAX10_FLGAS_NIOS_SPI BIT(4)
+#define MAX10_FLAGS_PKVL BIT(5)
+
+struct intel_max10_device {
+ unsigned int flags; /*max10 hardware capability*/
+ struct altera_spi_device *spi_master;
+ struct spi_transaction_dev *spi_tran_dev;
+};
+
+/* retimer speed */
+enum retimer_speed {
+ MXD_1GB = 1,
+ MXD_2_5GB = 2,
+ MXD_5GB = 5,
+ MXD_10GB = 10,
+ MXD_25GB = 25,
+ MXD_40GB = 40,
+ MXD_100GB = 100,
+ MXD_SPEED_UNKNOWN,
+};
+
+/* retimer info */
+struct opae_retimer_info {
+ unsigned int nums_retimer;
+ unsigned int ports_per_retimer;
+ unsigned int nums_fvl;
+ unsigned int ports_per_fvl;
+ enum retimer_speed support_speed;
+};
+
+/* retimer status*/
+struct opae_retimer_status {
+ enum retimer_speed speed;
+ /*
+ * retimer line link status bitmap:
+ * bit 0: Retimer0 Port0 link status
+ * bit 1: Retimer0 Port1 link status
+ * bit 2: Retimer0 Port2 link status
+ * bit 3: Retimer0 Port3 link status
+ *
+ * bit 4: Retimer1 Port0 link status
+ * bit 5: Retimer1 Port1 link status
+ * bit 6: Retimer1 Port2 link status
+ * bit 7: Retimer1 Port3 link status
+ */
+ unsigned int line_link_bitmap;
+};
+
+#define FLASH_BASE 0x10000000
+#define FLASH_OPTION_BITS 0x10000
+
+#define NIOS2_FW_VERSION_OFF 0x300400
+#define RSU_REG_OFF 0x30042c
+#define FPGA_RP_LOAD BIT(3)
+#define NIOS2_PRERESET BIT(4)
+#define NIOS2_HANG BIT(5)
+#define RSU_ENABLE BIT(6)
+#define NIOS2_RESET BIT(7)
+#define NIOS2_I2C2_POLL_STOP BIT(13)
+#define FPGA_RECONF_REG_OFF 0x300430
+#define COUNTDOWN_START BIT(18)
+#define MAX10_BUILD_VER_OFF 0x300468
+#define PCB_INFO GENMASK(31, 24)
+#define MAX10_BUILD_VERION GENMASK(23, 0)
+#define FPGA_PAGE_INFO_OFF 0x30046c
+#define DT_AVAIL_REG_OFF 0x300490
+#define DT_AVAIL BIT(0)
+#define DT_BASE_ADDR_REG_OFF 0x300494
+#define PKVL_POLLING_CTRL 0x300480
+#define PKVL_LINK_STATUS 0x300564
+
+#define DFT_MAX_SIZE 0x7e0000
+
+int max10_reg_read(unsigned int reg, unsigned int *val);
+int max10_reg_write(unsigned int reg, unsigned int val);
+struct intel_max10_device *
+intel_max10_device_probe(struct altera_spi_device *spi,
+ int chipselect);
+int intel_max10_device_remove(struct intel_max10_device *dev);
+
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OPAE_OSDEP_H
+#define _OPAE_OSDEP_H
+
+#include <string.h>
+#include <stdbool.h>
+
+#ifdef RTE_LIBRTE_EAL
+#include "osdep_rte/osdep_generic.h"
+#else
+#include "osdep_raw/osdep_generic.h"
+#endif
+
+#define __iomem
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+typedef uint64_t dma_addr_t;
+
+struct uuid {
+ u8 b[16];
+};
+
+#ifndef LINUX_MACROS
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#define U64_C(x) x ## ULL
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#ifndef GENMASK
+#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif /* GENMASK */
+#ifndef GENMASK_ULL
+#define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
+#endif /* GENMASK_ULL */
+#endif /* LINUX_MACROS */
+
+#define SET_FIELD(m, v) (((v) << (__builtin_ffsll(m) - 1)) & (m))
+#define GET_FIELD(m, v) (((v) & (m)) >> (__builtin_ffsll(m) - 1))
+
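+/*
+ * Example: GET_FIELD/SET_FIELD pack and unpack a value relative to a
+ * GENMASK-style mask (values below are illustrative):
+ *
+ *   u32 pcb = GET_FIELD(GENMASK(31, 24), 0x12345678);   yields 0x12
+ *   u32 reg = SET_FIELD(GENMASK(31, 24), pcb);          yields 0x12000000
+ */
+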
+#define dev_err(x, args...) dev_printf(ERR, args)
+#define dev_info(x, args...) dev_printf(INFO, args)
+#define dev_warn(x, args...) dev_printf(WARNING, args)
+#define dev_debug(x, args...) dev_printf(DEBUG, args)
+
+#define pr_err(y, args...) dev_err(0, y, ##args)
+#define pr_warn(y, args...) dev_warn(0, y, ##args)
+#define pr_info(y, args...) dev_info(0, y, ##args)
+
+#ifndef WARN_ON
+#define WARN_ON(x) do { \
+ int ret = !!(x); \
+ if (unlikely(ret)) \
+ pr_warn("WARN_ON: \"" #x "\" at %s:%d\n", __func__, __LINE__); \
+} while (0)
+#endif
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define udelay(x) opae_udelay(x)
+#define msleep(x) opae_udelay(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define time_after(a, b) ((long)((b) - (a)) < 0)
+#define time_before(a, b) time_after(b, a)
+#define opae_memset(a, b, c) memset((a), (b), (c))
+
+#define opae_readq_poll_timeout(addr, val, cond, invl, timeout)\
+({ \
+ int wait = 0; \
+ for (; wait <= timeout; wait += invl) { \
+ (val) = opae_readq(addr); \
+ if (cond) \
+ break; \
+ udelay(invl); \
+ } \
+ (cond) ? 0 : -ETIMEDOUT; \
+})
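+
+/*
+ * Example (sketch): poll a 64-bit status register until a ready bit is
+ * set, checking every 10 us for up to 1000 us; `base' and the bit mask
+ * are placeholders.
+ *
+ *   u64 v;
+ *   int ret = opae_readq_poll_timeout(base + 0x18, v,
+ *                                     v & BIT_ULL(32), 10, 1000);
+ */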
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_osdep.h"
+#include "opae_spi.h"
+
+static int nios_spi_indirect_read(struct altera_spi_device *dev, u32 reg,
+ u32 *val)
+{
+ u64 ctrl = 0;
+ u64 stat = 0;
+ int loops = SPI_MAX_RETRY;
+
+ ctrl = NIOS_SPI_RD | ((u64)reg << 32);
+ opae_writeq(ctrl, dev->regs + NIOS_SPI_CTRL);
+
+ stat = opae_readq(dev->regs + NIOS_SPI_STAT);
+ while (!(stat & NIOS_SPI_VALID) && --loops)
+ stat = opae_readq(dev->regs + NIOS_SPI_STAT);
+
+ *val = stat & NIOS_SPI_READ_DATA;
+
+ return loops ? 0 : -ETIMEDOUT;
+}
+
+static int nios_spi_indirect_write(struct altera_spi_device *dev, u32 reg,
+ u32 value)
+{
+ u64 ctrl = 0;
+ u64 stat = 0;
+ int loops = SPI_MAX_RETRY;
+
+ ctrl |= NIOS_SPI_WR | (u64)reg << 32;
+ ctrl |= value & NIOS_SPI_WRITE_DATA;
+
+ opae_writeq(ctrl, dev->regs + NIOS_SPI_CTRL);
+
+ stat = opae_readq(dev->regs + NIOS_SPI_STAT);
+ while (!(stat & NIOS_SPI_VALID) && --loops)
+ stat = opae_readq(dev->regs + NIOS_SPI_STAT);
+
+ return loops ? 0 : -ETIMEDOUT;
+}
+
+static int spi_indirect_write(struct altera_spi_device *dev, u32 reg,
+ u32 value)
+{
+ u64 ctrl;
+
+ opae_writeq(value & WRITE_DATA_MASK, dev->regs + SPI_WRITE);
+
+ ctrl = CTRL_W | (reg >> 2);
+ opae_writeq(ctrl, dev->regs + SPI_CTRL);
+
+ return 0;
+}
+
+static int spi_indirect_read(struct altera_spi_device *dev, u32 reg,
+ u32 *val)
+{
+ u64 tmp;
+ u64 ctrl;
+
+ ctrl = CTRL_R | (reg >> 2);
+ opae_writeq(ctrl, dev->regs + SPI_CTRL);
+
+ /**
+ * FIXME: read one more time to avoid a HW timing issue. This is
+ * a short-term workaround and must be removed once the hardware
+ * fix is in place.
+ */
+ tmp = opae_readq(dev->regs + SPI_READ);
+
+ *val = (u32)tmp;
+
+ return 0;
+}
+
+int spi_reg_write(struct altera_spi_device *dev, u32 reg,
+ u32 value)
+{
+ return dev->reg_write(dev, reg, value);
+}
+
+int spi_reg_read(struct altera_spi_device *dev, u32 reg,
+ u32 *val)
+{
+ return dev->reg_read(dev, reg, val);
+}
+
+void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select)
+{
+ spi_reg_write(dev, ALTERA_SPI_SLAVE_SEL, 1 << chip_select);
+ spi_reg_write(dev, ALTERA_SPI_CONTROL, ALTERA_SPI_CONTROL_SSO_MSK);
+}
+
+void spi_cs_deactivate(struct altera_spi_device *dev)
+{
+ spi_reg_write(dev, ALTERA_SPI_CONTROL, 0);
+}
+
+static int spi_flush_rx(struct altera_spi_device *dev)
+{
+ u32 val = 0;
+ int ret;
+
+ ret = spi_reg_read(dev, ALTERA_SPI_STATUS, &val);
+ if (ret)
+ return ret;
+
+ if (val & ALTERA_SPI_STATUS_RRDY_MSK) {
+ ret = spi_reg_read(dev, ALTERA_SPI_RXDATA, &val);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static unsigned int spi_write_bytes(struct altera_spi_device *dev, int count)
+{
+ unsigned int val = 0;
+ u16 *p16;
+ u32 *p32;
+
+ if (dev->txbuf) {
+ switch (dev->data_width) {
+ case 1:
+ val = dev->txbuf[count];
+ break;
+ case 2:
+ p16 = (u16 *)(dev->txbuf + 2*count);
+ val = *p16;
+ if (dev->endian == SPI_BIG_ENDIAN)
+ val = cpu_to_be16(val);
+ break;
+ case 4:
+ p32 = (u32 *)(dev->txbuf + 4*count);
+ val = *p32;
+ break;
+ }
+ }
+
+ return val;
+}
+
+static void spi_fill_readbuffer(struct altera_spi_device *dev,
+ unsigned int value, int count)
+{
+ u16 *p16;
+ u32 *p32;
+
+ if (dev->rxbuf) {
+ switch (dev->data_width) {
+ case 1:
+ dev->rxbuf[count] = value;
+ break;
+ case 2:
+ p16 = (u16 *)(dev->rxbuf + 2*count);
+ if (dev->endian == SPI_BIG_ENDIAN)
+ *p16 = cpu_to_be16((u16)value);
+ else
+ *p16 = (u16)value;
+ break;
+ case 4:
+ p32 = (u32 *)(dev->rxbuf + 4*count);
+ if (dev->endian == SPI_BIG_ENDIAN)
+ *p32 = cpu_to_be32(value);
+ else
+ *p32 = value;
+ break;
+ }
+ }
+}
+
+static int spi_txrx(struct altera_spi_device *dev)
+{
+ unsigned int count = 0;
+ u32 rxd;
+ unsigned int tx_data;
+ u32 status;
+ int retry = 0;
+ int ret;
+
+ while (count < dev->len) {
+ tx_data = spi_write_bytes(dev, count);
+ spi_reg_write(dev, ALTERA_SPI_TXDATA, tx_data);
+
+ while (1) {
+ ret = spi_reg_read(dev, ALTERA_SPI_STATUS, &status);
+ if (ret)
+ return -EIO;
+ if (status & ALTERA_SPI_STATUS_RRDY_MSK)
+ break;
+ if (retry++ > SPI_MAX_RETRY) {
+ dev_err(dev, "%s, read timeout\n", __func__);
+ return -EBUSY;
+ }
+ }
+
+ ret = spi_reg_read(dev, ALTERA_SPI_RXDATA, &rxd);
+ if (ret)
+ return -EIO;
+
+ spi_fill_readbuffer(dev, rxd, count);
+
+ count++;
+ }
+
+ return 0;
+}
+
+int spi_command(struct altera_spi_device *dev, unsigned int chip_select,
+ unsigned int wlen, void *wdata,
+ unsigned int rlen, void *rdata)
+{
+ if (((wlen > 0) && !wdata) || ((rlen > 0) && !rdata)) {
+ dev_err(dev, "error on spi command checking\n");
+ return -EINVAL;
+ }
+
+ wlen = wlen / dev->data_width;
+ rlen = rlen / dev->data_width;
+
+ /* flush rx buffer */
+ spi_flush_rx(dev);
+
+ spi_cs_activate(dev, chip_select);
+ if (wlen) {
+ dev->txbuf = wdata;
+ dev->rxbuf = rdata;
+ dev->len = wlen;
+ spi_txrx(dev);
+ }
+ if (rlen) {
+ dev->rxbuf = rdata;
+ dev->txbuf = NULL;
+ dev->len = rlen;
+ spi_txrx(dev);
+ }
+ spi_cs_deactivate(dev);
+ return 0;
+}
+
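+/*
+ * Example (illustrative): one 4-byte command and a 4-byte response on
+ * chip-select 0; lengths are in bytes and are divided by the core data
+ * width internally.
+ *
+ *   u32 cmd = 1;
+ *   u32 resp = 0;
+ *
+ *   if (!spi_command(dev, 0, sizeof(cmd), &cmd, sizeof(resp), &resp))
+ *       dev_info(dev, "spi response: 0x%x\n", resp);
+ */
+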
+struct altera_spi_device *altera_spi_alloc(void *base, int type)
+{
+ struct altera_spi_device *spi_dev =
+ opae_malloc(sizeof(struct altera_spi_device));
+
+ if (!spi_dev)
+ return NULL;
+
+ spi_dev->regs = base;
+
+ switch (type) {
+ case TYPE_SPI:
+ spi_dev->reg_read = spi_indirect_read;
+ spi_dev->reg_write = spi_indirect_write;
+ break;
+ case TYPE_NIOS_SPI:
+ spi_dev->reg_read = nios_spi_indirect_read;
+ spi_dev->reg_write = nios_spi_indirect_write;
+ break;
+ default:
+ dev_err(dev, "%s: invalid SPI type\n", __func__);
+ goto error;
+ }
+
+ return spi_dev;
+
+error:
+ altera_spi_release(spi_dev);
+ return NULL;
+}
+
+void altera_spi_init(struct altera_spi_device *spi_dev)
+{
+ spi_dev->spi_param.info = opae_readq(spi_dev->regs + SPI_CORE_PARAM);
+
+ spi_dev->data_width = spi_dev->spi_param.data_width / 8;
+ spi_dev->endian = spi_dev->spi_param.endian;
+ spi_dev->num_chipselect = spi_dev->spi_param.num_chipselect;
+ dev_info(spi_dev, "spi param: type=%d, data width:%d, endian:%d, clock_polarity=%d, clock=%dMHz, chips=%d, cpha=%d\n",
+ spi_dev->spi_param.type,
+ spi_dev->data_width, spi_dev->endian,
+ spi_dev->spi_param.clock_polarity,
+ spi_dev->spi_param.clock,
+ spi_dev->num_chipselect,
+ spi_dev->spi_param.clock_phase);
+
+ /* clear */
+ spi_reg_write(spi_dev, ALTERA_SPI_CONTROL, 0);
+ spi_reg_write(spi_dev, ALTERA_SPI_STATUS, 0);
+ /* flush rxdata */
+ spi_flush_rx(spi_dev);
+}
+
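+/*
+ * Example (sketch): typical bring-up of the SPI master over a mapped
+ * MMIO region; `base' is a placeholder for the core's base address.
+ *
+ *   struct altera_spi_device *spi = altera_spi_alloc(base, TYPE_SPI);
+ *
+ *   if (spi) {
+ *       altera_spi_init(spi);
+ *       (issue spi_command() calls here)
+ *       altera_spi_release(spi);
+ *   }
+ */
+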
+void altera_spi_release(struct altera_spi_device *dev)
+{
+ if (dev)
+ opae_free(dev);
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#ifndef _OPAE_SPI_H
+#define _OPAE_SPI_H
+
+#include "opae_osdep.h"
+
+#define ALTERA_SPI_RXDATA 0
+#define ALTERA_SPI_TXDATA 4
+#define ALTERA_SPI_STATUS 8
+#define ALTERA_SPI_CONTROL 12
+#define ALTERA_SPI_SLAVE_SEL 20
+
+#define ALTERA_SPI_STATUS_ROE_MSK 0x8
+#define ALTERA_SPI_STATUS_TOE_MSK 0x10
+#define ALTERA_SPI_STATUS_TMT_MSK 0x20
+#define ALTERA_SPI_STATUS_TRDY_MSK 0x40
+#define ALTERA_SPI_STATUS_RRDY_MSK 0x80
+#define ALTERA_SPI_STATUS_E_MSK 0x100
+
+#define ALTERA_SPI_CONTROL_IROE_MSK 0x8
+#define ALTERA_SPI_CONTROL_ITOE_MSK 0x10
+#define ALTERA_SPI_CONTROL_ITRDY_MSK 0x40
+#define ALTERA_SPI_CONTROL_IRRDY_MSK 0x80
+#define ALTERA_SPI_CONTROL_IE_MSK 0x100
+#define ALTERA_SPI_CONTROL_SSO_MSK 0x400
+
+#define SPI_CORE_PARAM 0x8
+#define SPI_CTRL 0x10
+#define CTRL_R BIT_ULL(9)
+#define CTRL_W BIT_ULL(8)
+#define CTRL_ADDR_MASK GENMASK_ULL(2, 0)
+#define SPI_READ 0x18
+#define READ_DATA_VALID BIT_ULL(32)
+#define READ_DATA_MASK GENMASK_ULL(31, 0)
+#define SPI_WRITE 0x20
+#define WRITE_DATA_MASK GENMASK_ULL(31, 0)
+
+#define SPI_MAX_RETRY 100000
+
+#define TYPE_SPI 0
+#define TYPE_NIOS_SPI 1
+
+struct spi_core_param {
+ union {
+ u64 info;
+ struct {
+ u8 type:1;
+ u8 endian:1;
+ u8 data_width:6;
+ u8 num_chipselect:6;
+ u8 clock_polarity:1;
+ u8 clock_phase:1;
+ u8 stages:2;
+ u8 resvd:4;
+ u16 clock:10;
+ u16 peripheral_id:16;
+ u8 controller_type:1;
+ u16 resvd1:15;
+ };
+ };
+};
+
+struct altera_spi_device {
+ u8 *regs;
+ struct spi_core_param spi_param;
+ int data_width; /* how many bytes for data width */
+ int endian;
+ #define SPI_BIG_ENDIAN 0
+ #define SPI_LITTLE_ENDIAN 1
+ int num_chipselect;
+ unsigned char *rxbuf;
+ unsigned char *txbuf;
+ unsigned int len;
+ int (*reg_read)(struct altera_spi_device *dev, u32 reg, u32 *val);
+ int (*reg_write)(struct altera_spi_device *dev, u32 reg,
+ u32 value);
+};
+
+#define HEADER_LEN 8
+#define RESPONSE_LEN 4
+#define SPI_TRANSACTION_MAX_LEN 1024
+#define TRAN_SEND_MAX_LEN (SPI_TRANSACTION_MAX_LEN + HEADER_LEN)
+#define TRAN_RESP_MAX_LEN SPI_TRANSACTION_MAX_LEN
+#define PACKET_SEND_MAX_LEN (2*TRAN_SEND_MAX_LEN + 4)
+#define PACKET_RESP_MAX_LEN (2*TRAN_RESP_MAX_LEN + 4)
+#define BYTES_SEND_MAX_LEN (2*PACKET_SEND_MAX_LEN)
+#define BYTES_RESP_MAX_LEN (2*PACKET_RESP_MAX_LEN)
+
+struct spi_tran_buffer {
+ unsigned char tran_send[TRAN_SEND_MAX_LEN];
+ unsigned char tran_resp[TRAN_RESP_MAX_LEN];
+ unsigned char packet_send[PACKET_SEND_MAX_LEN];
+ unsigned char packet_resp[PACKET_RESP_MAX_LEN];
+ unsigned char bytes_send[BYTES_SEND_MAX_LEN];
+ unsigned char bytes_resp[2*BYTES_RESP_MAX_LEN];
+};
+
+struct spi_transaction_dev {
+ struct altera_spi_device *dev;
+ int chipselect;
+ struct spi_tran_buffer *buffer;
+};
+
+struct spi_tran_header {
+ u8 trans_type;
+ u8 reserve;
+ u16 size;
+ u32 addr;
+};
+
+int spi_command(struct altera_spi_device *dev, unsigned int chip_select,
+ unsigned int wlen, void *wdata, unsigned int rlen, void *rdata);
+void spi_cs_deactivate(struct altera_spi_device *dev);
+void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select);
+struct altera_spi_device *altera_spi_alloc(void *base, int type);
+void altera_spi_init(struct altera_spi_device *dev);
+void altera_spi_release(struct altera_spi_device *dev);
+int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr,
+ unsigned int size, unsigned char *data);
+int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr,
+ unsigned int size, unsigned char *data);
+struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev,
+ int chipselect);
+void spi_transaction_remove(struct spi_transaction_dev *dev);
+int spi_reg_write(struct altera_spi_device *dev, u32 reg,
+ u32 value);
+int spi_reg_read(struct altera_spi_device *dev, u32 reg, u32 *val);
+
+#define NIOS_SPI_PARAM 0x8
+#define CONTROL_TYPE BIT_ULL(48)
+#define PERI_ID GENMASK_ULL(47, 32)
+#define SPI_CLK GENMASK_ULL(31, 22)
+#define SYNC_STAGES GENMASK_ULL(17, 16)
+#define CLOCK_PHASE BIT_ULL(15)
+#define CLOCK_POLARITY BIT_ULL(14)
+#define NUM_SELECT GENMASK_ULL(13, 8)
+#define DATA_WIDTH GENMASK_ULL(7, 2)
+#define SHIFT_DIRECTION BIT_ULL(1)
+#define SPI_TYPE BIT_ULL(0)
+#define NIOS_SPI_CTRL 0x10
+#define NIOS_SPI_RD (0x1ULL << 62)
+#define NIOS_SPI_WR (0x2ULL << 62)
+#define NIOS_SPI_COMMAND GENMASK_ULL(63, 62)
+#define NIOS_SPI_ADDR GENMASK_ULL(44, 32)
+#define NIOS_SPI_WRITE_DATA GENMASK_ULL(31, 0)
+#define NIOS_SPI_STAT 0x18
+#define NIOS_SPI_VALID BIT_ULL(32)
+#define NIOS_SPI_READ_DATA GENMASK_ULL(31, 0)
+#define NIOS_SPI_INIT_DONE 0x1000
+
+#define NIOS_SPI_INIT_STS0 0x1020
+#define NIOS_SPI_INIT_STS1 0x1024
+#define PKVL_STATUS_RESET 0
+#define PKVL_10G_MODE 1
+#define PKVL_25G_MODE 2
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2019 Intel Corporation
+ */
+
+#include "opae_spi.h"
+#include "ifpga_compat.h"
+
+/*transaction opcodes*/
+#define SPI_TRAN_SEQ_WRITE 0x04 /* SPI transaction sequential write */
+#define SPI_TRAN_SEQ_READ 0x14 /* SPI transaction sequential read */
+#define SPI_TRAN_NON_SEQ_WRITE 0x00 /* SPI transaction non-sequential write */
+#define SPI_TRAN_NON_SEQ_READ 0x10 /* SPI transaction non-sequential read*/
+
+/* special packet characters */
+#define SPI_PACKET_SOP 0x7a
+#define SPI_PACKET_EOP 0x7b
+#define SPI_PACKET_CHANNEL 0x7c
+#define SPI_PACKET_ESC 0x7d
+
+/*special byte characters*/
+#define SPI_BYTE_IDLE 0x4a
+#define SPI_BYTE_ESC 0x4d
+
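+/*
+ * Worked example of the two escape layers above (per the Avalon-ST
+ * protocol this transport follows): a two-byte payload AA BB becomes
+ * the packet 7a 7c 00 AA 7b BB (SOP, channel 0, data, EOP marking the
+ * last byte).  At the byte layer, any 0x4a or 0x4d in that stream is
+ * then sent as 0x4d followed by the byte XOR 0x20, e.g. 0x4a becomes
+ * 0x4d 0x6a.  Payload bytes colliding with the four packet specials
+ * are likewise sent as 0x7d plus the byte XOR 0x20.
+ */
+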
+#define SPI_REG_BYTES 4
+
+#define INIT_SPI_TRAN_HEADER(_type, _size, _addr) \
+({ \
+ header.trans_type = _type; \
+ header.reserve = 0; \
+ header.size = cpu_to_be16(_size); \
+ header.addr = cpu_to_be32(_addr); \
+})
+
+#ifdef OPAE_SPI_DEBUG
+static void print_buffer(const char *string, void *buffer, int len)
+{
+ int i;
+ unsigned char *p = buffer;
+
+ printf("%s print buffer, len=%d\n", string, len);
+
+ for (i = 0; i < len; i++)
+ printf("%x ", *(p+i));
+ printf("\n");
+}
+#else
+static void print_buffer(const char *string, void *buffer, int len)
+{
+ UNUSED(string);
+ UNUSED(buffer);
+ UNUSED(len);
+}
+#endif
+
+static unsigned char xor_20(unsigned char val)
+{
+ return val^0x20;
+}
+
+static void reorder_phy_data(u8 bits_per_word,
+ void *buf, unsigned int len)
+{
+ unsigned int count = len / (bits_per_word/8);
+ u32 *p;
+
+ if (bits_per_word == 32) {
+ p = (u32 *)buf;
+ while (count--) {
+ *p = cpu_to_be32(*p);
+ p++;
+ }
+ }
+}
+
+enum {
+ SPI_FOUND_SOP,
+ SPI_FOUND_EOP,
+ SPI_NOT_FOUND,
+};
+
+static int resp_find_sop_eop(unsigned char *resp, unsigned int len,
+ int flags)
+{
+ int ret = SPI_NOT_FOUND;
+
+ unsigned char *b = resp;
+
+ /* find SOP */
+ if (flags != SPI_FOUND_SOP) {
+ while (b < resp + len && *b != SPI_PACKET_SOP)
+ b++;
+
+ if (*b != SPI_PACKET_SOP)
+ goto done;
+
+ ret = SPI_FOUND_SOP;
+ }
+
+ /* find EOP */
+ while (b < resp + len && *b != SPI_PACKET_EOP)
+ b++;
+
+ if (*b != SPI_PACKET_EOP)
+ goto done;
+
+ ret = SPI_FOUND_EOP;
+
+done:
+ return ret;
+}
+
+static int byte_to_core_convert(struct spi_transaction_dev *dev,
+ unsigned int send_len, unsigned char *send_data,
+ unsigned int resp_len, unsigned char *resp_data,
+ unsigned int *valid_resp_len)
+{
+ unsigned int i;
+ int ret = 0;
+ unsigned char *send_packet = dev->buffer->bytes_send;
+ unsigned char *resp_packet = dev->buffer->bytes_resp;
+ unsigned char *p;
+ unsigned char current_byte;
+ unsigned char *tx_buffer;
+ unsigned int tx_len = 0;
+ unsigned char *rx_buffer;
+ unsigned int rx_len = 0;
+ int retry = 0;
+ int spi_flags;
+ unsigned int resp_max_len = 2 * resp_len;
+
+ print_buffer("before bytes:", send_data, send_len);
+
+ p = send_packet;
+
+ for (i = 0; i < send_len; i++) {
+ current_byte = send_data[i];
+ switch (current_byte) {
+ case SPI_BYTE_IDLE:
+ *p++ = SPI_BYTE_ESC;
+ *p++ = xor_20(current_byte);
+ break;
+ case SPI_BYTE_ESC:
+ *p++ = SPI_BYTE_ESC;
+ *p++ = xor_20(current_byte);
+ break;
+ default:
+ *p++ = current_byte;
+ break;
+ }
+ }
+
+ print_buffer("before spi:", send_packet, p-send_packet);
+
+ reorder_phy_data(32, send_packet, p - send_packet);
+
+ print_buffer("after order to spi:", send_packet, p-send_packet);
+
+ /* call spi */
+ tx_buffer = send_packet;
+ tx_len = p - send_packet;
+ rx_buffer = resp_packet;
+ rx_len = resp_max_len;
+ spi_flags = SPI_NOT_FOUND;
+
+read_again:
+ ret = spi_command(dev->dev, dev->chipselect, tx_len, tx_buffer,
+ rx_len, rx_buffer);
+ if (ret)
+ return -EBUSY;
+
+ print_buffer("read from spi:", rx_buffer, rx_len);
+
+ /* look for SOP first */
+ ret = resp_find_sop_eop(rx_buffer, rx_len - 1, spi_flags);
+ if (ret != SPI_FOUND_EOP) {
+ tx_buffer = NULL;
+ tx_len = 0;
+ if (retry++ > 10) {
+ dev_err(NULL, "cannot found valid data from SPI\n");
+ return -EBUSY;
+ }
+
+ if (ret == SPI_FOUND_SOP) {
+ rx_buffer += rx_len;
+ resp_max_len += rx_len;
+ }
+
+ spi_flags = ret;
+ goto read_again;
+ }
+
+ print_buffer("found valid data:", resp_packet, resp_max_len);
+
+ /* analyze response packet */
+ i = 0;
+ p = resp_data;
+ while (i < resp_max_len) {
+ current_byte = resp_packet[i];
+ switch (current_byte) {
+ case SPI_BYTE_IDLE:
+ i++;
+ break;
+ case SPI_BYTE_ESC:
+ i++;
+ current_byte = resp_packet[i];
+ *p++ = xor_20(current_byte);
+ i++;
+ break;
+ default:
+ *p++ = current_byte;
+ i++;
+ break;
+ }
+ }
+
+ /* receive "4a" means the SPI is idle, not valid data */
+ *valid_resp_len = p - resp_data;
+ if (*valid_resp_len == 0) {
+ dev_err(NULL, "error: repond package without valid data\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int packet_to_byte_convert(struct spi_transaction_dev *dev,
+ unsigned int send_len, unsigned char *send_buf,
+ unsigned int resp_len, unsigned char *resp_buf,
+ unsigned int *valid)
+{
+ int ret = 0;
+ unsigned int i;
+ unsigned char current_byte;
+ unsigned int resp_max_len;
+ unsigned char *send_packet = dev->buffer->packet_send;
+ unsigned char *resp_packet = dev->buffer->packet_resp;
+ unsigned char *p;
+ unsigned int valid_resp_len = 0;
+
+ print_buffer("before packet:", send_buf, send_len);
+
+ resp_max_len = 2 * resp_len + 4;
+
+ p = send_packet;
+
+ /* SOP header */
+ *p++ = SPI_PACKET_SOP;
+
+ *p++ = SPI_PACKET_CHANNEL;
+ *p++ = 0;
+
+ /* append the data into a packet */
+ for (i = 0; i < send_len; i++) {
+ current_byte = send_buf[i];
+
+ /* EOP for last byte */
+ if (i == send_len - 1)
+ *p++ = SPI_PACKET_EOP;
+
+ switch (current_byte) {
+ case SPI_PACKET_SOP:
+ case SPI_PACKET_EOP:
+ case SPI_PACKET_CHANNEL:
+ case SPI_PACKET_ESC:
+ *p++ = SPI_PACKET_ESC;
+ *p++ = xor_20(current_byte);
+ break;
+ default:
+ *p++ = current_byte;
+ }
+ }
+
+ ret = byte_to_core_convert(dev, p - send_packet,
+ send_packet, resp_max_len, resp_packet,
+ &valid_resp_len);
+ if (ret)
+ return -EBUSY;
+
+ print_buffer("after byte conver:", resp_packet, valid_resp_len);
+
+ /* analyze the response packet */
+ p = resp_buf;
+
+ /* look for SOP */
+ for (i = 0; i < valid_resp_len; i++) {
+ if (resp_packet[i] == SPI_PACKET_SOP)
+ break;
+ }
+
+ if (i == valid_resp_len) {
+ dev_err(NULL, "error on analyze response packet 0x%x\n",
+ resp_packet[i]);
+ return -EINVAL;
+ }
+
+ i++;
+
+ /* continue parsing data after SOP */
+ while (i < valid_resp_len) {
+ current_byte = resp_packet[i];
+
+ switch (current_byte) {
+ case SPI_PACKET_ESC:
+ case SPI_PACKET_CHANNEL:
+ case SPI_PACKET_SOP:
+ i++;
+ current_byte = resp_packet[i];
+ *p++ = xor_20(current_byte);
+ i++;
+ break;
+ case SPI_PACKET_EOP:
+ i++;
+ current_byte = resp_packet[i];
+ if (current_byte == SPI_PACKET_ESC ||
+ current_byte == SPI_PACKET_CHANNEL ||
+ current_byte == SPI_PACKET_SOP) {
+ i++;
+ current_byte = resp_packet[i];
+ *p++ = xor_20(current_byte);
+ } else
+ *p++ = current_byte;
+ i = valid_resp_len;
+ break;
+ default:
+ *p++ = current_byte;
+ i++;
+ }
+
+ }
+
+ *valid = p - resp_buf;
+
+ print_buffer("after packet:", resp_buf, *valid);
+
+ return ret;
+}
+
+static int do_transaction(struct spi_transaction_dev *dev, unsigned int addr,
+ unsigned int size, unsigned char *data,
+ unsigned int trans_type)
+{
+ struct spi_tran_header header;
+ unsigned char *transaction = dev->buffer->tran_send;
+ unsigned char *response = dev->buffer->tran_resp;
+ unsigned char *p;
+ int ret = 0;
+ unsigned int i;
+ unsigned int valid_len = 0;
+
+ /* make transaction header */
+ INIT_SPI_TRAN_HEADER(trans_type, size, addr);
+
+ /* fill the header */
+ p = transaction;
+ opae_memcpy(p, &header, sizeof(struct spi_tran_header));
+ p = p + sizeof(struct spi_tran_header);
+
+ switch (trans_type) {
+ case SPI_TRAN_SEQ_WRITE:
+ case SPI_TRAN_NON_SEQ_WRITE:
+ for (i = 0; i < size; i++)
+ *p++ = *data++;
+
+ ret = packet_to_byte_convert(dev, size + HEADER_LEN,
+ transaction, RESPONSE_LEN, response,
+ &valid_len);
+ if (ret)
+ return -EBUSY;
+
+ /* check the result */
+ if (size != ((unsigned int)(response[2] & 0xff) << 8 |
+ (unsigned int)(response[3] & 0xff)))
+ ret = -EBUSY;
+
+ break;
+ case SPI_TRAN_SEQ_READ:
+ case SPI_TRAN_NON_SEQ_READ:
+ ret = packet_to_byte_convert(dev, HEADER_LEN,
+ transaction, size, response,
+ &valid_len);
+ if (ret || valid_len != size)
+ return -EBUSY;
+
+ for (i = 0; i < size; i++)
+ *data++ = *response++;
+
+ ret = 0;
+ break;
+ }
+
+ return ret;
+}
+
+int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr,
+ unsigned int size, unsigned char *data)
+{
+ return do_transaction(dev, addr, size, data,
+ (size > SPI_REG_BYTES) ?
+ SPI_TRAN_SEQ_READ : SPI_TRAN_NON_SEQ_READ);
+}
+
+int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr,
+ unsigned int size, unsigned char *data)
+{
+ return do_transaction(dev, addr, size, data,
+ (size > SPI_REG_BYTES) ?
+ SPI_TRAN_SEQ_WRITE : SPI_TRAN_NON_SEQ_WRITE);
+}
+
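+/*
+ * Example (sketch): a 4-byte register read through the transaction
+ * layer, as max10_reg_read() does; 0x30046c is the MAX10 page-info
+ * register offset from opae_intel_max10.h.
+ *
+ *   unsigned int val;
+ *
+ *   if (!spi_transaction_read(tran_dev, 0x30046c, 4,
+ *                             (unsigned char *)&val))
+ *       dev_info(NULL, "page info: 0x%x\n", val);
+ */
+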
+struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev,
+ int chipselect)
+{
+ struct spi_transaction_dev *spi_tran_dev;
+
+ spi_tran_dev = opae_malloc(sizeof(struct spi_transaction_dev));
+ if (!spi_tran_dev)
+ return NULL;
+
+ spi_tran_dev->dev = dev;
+ spi_tran_dev->chipselect = chipselect;
+
+ spi_tran_dev->buffer = opae_malloc(sizeof(struct spi_tran_buffer));
+ if (!spi_tran_dev->buffer) {
+ opae_free(spi_tran_dev);
+ return NULL;
+ }
+
+ return spi_tran_dev;
+}
+
+void spi_transaction_remove(struct spi_transaction_dev *dev)
+{
+ if (dev && dev->buffer)
+ opae_free(dev->buffer);
+ if (dev)
+ opae_free(dev);
+}
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OSDEP_RAW_GENERIC_H
+#define _OSDEP_RAW_GENERIC_H
+
+#define compiler_barrier() asm volatile ("" : : : "memory")
+
+#define io_wmb() compiler_barrier()
+#define io_rmb() compiler_barrier()
+
+static inline uint8_t opae_readb(const volatile void *addr)
+{
+ uint8_t val;
+
+ val = *(const volatile uint8_t *)addr;
+ io_rmb();
+ return val;
+}
+
+static inline uint16_t opae_readw(const volatile void *addr)
+{
+ uint16_t val;
+
+ val = *(const volatile uint16_t *)addr;
+ io_rmb();
+ return val;
+}
+
+static inline uint32_t opae_readl(const volatile void *addr)
+{
+ uint32_t val;
+
+ val = *(const volatile uint32_t *)addr;
+ io_rmb();
+ return val;
+}
+
+static inline uint64_t opae_readq(const volatile void *addr)
+{
+ uint64_t val;
+
+ val = *(const volatile uint64_t *)addr;
+ io_rmb();
+ return val;
+}
+
+static inline void opae_writeb(uint8_t value, volatile void *addr)
+{
+ io_wmb();
+ *(volatile uint8_t *)addr = value;
+}
+
+static inline void opae_writew(uint16_t value, volatile void *addr)
+{
+ io_wmb();
+ *(volatile uint16_t *)addr = value;
+}
+
+static inline void opae_writel(uint32_t value, volatile void *addr)
+{
+ io_wmb();
+ *(volatile uint32_t *)addr = value;
+}
+
+static inline void opae_writeq(uint64_t value, volatile void *addr)
+{
+ io_wmb();
+ *(volatile uint64_t *)addr = value;
+}
+
+#define opae_free(addr) free(addr)
+#define opae_memcpy(a, b, c) memcpy((a), (b), (c))
+
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _OSDEP_RTE_GENERIC_H
+#define _OSDEP_RTE_GENERIC_H
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_memcpy.h>
+
+#define dev_printf(level, fmt, args...) \
+ RTE_LOG(level, PMD, "osdep_rte: " fmt, ## args)
+
+#define osdep_panic(...) rte_panic(__VA_ARGS__)
+
+#define opae_udelay(x) rte_delay_us(x)
+
+#define opae_readb(addr) rte_read8(addr)
+#define opae_readw(addr) rte_read16(addr)
+#define opae_readl(addr) rte_read32(addr)
+#define opae_readq(addr) rte_read64(addr)
+#define opae_writeb(value, addr) rte_write8(value, addr)
+#define opae_writew(value, addr) rte_write16(value, addr)
+#define opae_writel(value, addr) rte_write32(value, addr)
+#define opae_writeq(value, addr) rte_write64(value, addr)
+
+#define opae_malloc(size) rte_malloc(NULL, size, 0)
+#define opae_zmalloc(size) rte_zmalloc(NULL, size, 0)
+#define opae_free(addr) rte_free(addr)
+
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define spinlock_t rte_spinlock_t
+#define spinlock_init(x) rte_spinlock_init(x)
+#define spinlock_lock(x) rte_spinlock_lock(x)
+#define spinlock_unlock(x) rte_spinlock_unlock(x)
+
+#define cpu_to_be16(o) rte_cpu_to_be_16(o)
+#define cpu_to_be32(o) rte_cpu_to_be_32(o)
+#define cpu_to_be64(o) rte_cpu_to_be_64(o)
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(o) rte_cpu_to_le_32(o)
+#define cpu_to_le64(o) rte_cpu_to_le_64(o)
+
+#define opae_memcpy(a, b, c) rte_memcpy((a), (b), (c))
+
+static inline unsigned long msecs_to_timer_cycles(unsigned int m)
+{
+ return (rte_get_timer_hz() * m) / 1000;
+}
+
+#endif
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#include <string.h>
+#include <dirent.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <fcntl.h>
+#include <rte_log.h>
+#include <rte_bus.h>
+#include <rte_eal_memconfig.h>
+#include <rte_malloc.h>
+#include <rte_devargs.h>
+#include <rte_memcpy.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_kvargs.h>
+#include <rte_alarm.h>
+
+#include <rte_errno.h>
+#include <rte_per_lcore.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_common.h>
+#include <rte_bus_vdev.h>
+
+#include "base/opae_hw_api.h"
+#include "rte_rawdev.h"
+#include "rte_rawdev_pmd.h"
+#include "rte_bus_ifpga.h"
+#include "ifpga_common.h"
+#include "ifpga_logs.h"
+#include "ifpga_rawdev.h"
+#include "ipn3ke_rawdev_api.h"
+
+int ifpga_rawdev_logtype;
+
+#define PCI_VENDOR_ID_INTEL 0x8086
+/* PCI Device ID */
+#define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD
+#define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0
+#define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4
+#define PCIE_DEVICE_ID_PAC_N3000 0x0B30
+/* VF Device */
+#define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF
+#define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1
+#define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5
+#define PCIE_DEVICE_ID_VF_PAC_N3000 0x0B31
+#define RTE_MAX_RAW_DEVICE 10
+
+static const struct rte_pci_id pci_ifpga_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_5_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_6_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_6_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PAC_N3000),},
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_PAC_N3000),},
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+ifpga_fill_afu_dev(struct opae_accelerator *acc,
+ struct rte_afu_device *afu_dev)
+{
+ struct rte_mem_resource *res = afu_dev->mem_resource;
+ struct opae_acc_region_info region_info;
+ struct opae_acc_info info;
+ unsigned long i;
+ int ret;
+
+ ret = opae_acc_get_info(acc, &info);
+ if (ret)
+ return ret;
+
+ if (info.num_regions > PCI_MAX_RESOURCE)
+ return -EFAULT;
+
+ afu_dev->num_region = info.num_regions;
+
+ for (i = 0; i < info.num_regions; i++) {
+ region_info.index = i;
+ ret = opae_acc_get_region_info(acc, &region_info);
+ if (ret)
+ return ret;
+
+ if ((region_info.flags & ACC_REGION_MMIO) &&
+ (region_info.flags & ACC_REGION_READ) &&
+ (region_info.flags & ACC_REGION_WRITE)) {
+ res[i].phys_addr = region_info.phys_addr;
+ res[i].len = region_info.len;
+ res[i].addr = region_info.addr;
+ } else
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void
+ifpga_rawdev_info_get(struct rte_rawdev *dev,
+ rte_rawdev_obj_t dev_info)
+{
+ struct opae_adapter *adapter;
+ struct opae_accelerator *acc;
+ struct rte_afu_device *afu_dev;
+ struct opae_manager *mgr = NULL;
+ struct opae_eth_group_region_info opae_lside_eth_info;
+ struct opae_eth_group_region_info opae_nside_eth_info;
+ int lside_bar_idx, nside_bar_idx;
+
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+
+ if (!dev_info) {
+ IFPGA_RAWDEV_PMD_ERR("Invalid request");
+ return;
+ }
+
+ adapter = ifpga_rawdev_get_priv(dev);
+ if (!adapter)
+ return;
+
+ afu_dev = dev_info;
+ afu_dev->rawdev = dev;
+
+ /* find opae_accelerator and fill info into afu_device */
+ opae_adapter_for_each_acc(adapter, acc) {
+ if (acc->index != afu_dev->id.port)
+ continue;
+
+ if (ifpga_fill_afu_dev(acc, afu_dev)) {
+ IFPGA_RAWDEV_PMD_ERR("cannot get info\n");
+ return;
+ }
+ }
+
+ /* get opae_manager to rawdev */
+ mgr = opae_adapter_get_mgr(adapter);
+ if (mgr) {
+ /* get LineSide BAR Index */
+ if (opae_manager_get_eth_group_region_info(mgr, 0,
+ &opae_lside_eth_info)) {
+ return;
+ }
+ lside_bar_idx = opae_lside_eth_info.mem_idx;
+
+ /* get NICSide BAR Index */
+ if (opae_manager_get_eth_group_region_info(mgr, 1,
+ &opae_nside_eth_info)) {
+ return;
+ }
+ nside_bar_idx = opae_nside_eth_info.mem_idx;
+
+ if (lside_bar_idx >= PCI_MAX_RESOURCE ||
+ nside_bar_idx >= PCI_MAX_RESOURCE ||
+ lside_bar_idx == nside_bar_idx)
+ return;
+
+ /* fill LineSide BAR Index */
+ afu_dev->mem_resource[lside_bar_idx].phys_addr =
+ opae_lside_eth_info.phys_addr;
+ afu_dev->mem_resource[lside_bar_idx].len =
+ opae_lside_eth_info.len;
+ afu_dev->mem_resource[lside_bar_idx].addr =
+ opae_lside_eth_info.addr;
+
+ /* fill NICSide BAR Index */
+ afu_dev->mem_resource[nside_bar_idx].phys_addr =
+ opae_nside_eth_info.phys_addr;
+ afu_dev->mem_resource[nside_bar_idx].len =
+ opae_nside_eth_info.len;
+ afu_dev->mem_resource[nside_bar_idx].addr =
+ opae_nside_eth_info.addr;
+ }
+}
+
+static int
+ifpga_rawdev_configure(const struct rte_rawdev *dev,
+ rte_rawdev_obj_t config)
+{
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ return config ? 0 : 1;
+}
+
+static int
+ifpga_rawdev_start(struct rte_rawdev *dev)
+{
+ int ret = 0;
+ struct opae_adapter *adapter;
+
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ adapter = ifpga_rawdev_get_priv(dev);
+ if (!adapter)
+ return -ENODEV;
+
+ return ret;
+}
+
+static void
+ifpga_rawdev_stop(struct rte_rawdev *dev)
+{
+ dev->started = 0;
+}
+
+static int
+ifpga_rawdev_close(struct rte_rawdev *dev)
+{
+ return dev ? 0 : 1;
+}
+
+static int
+ifpga_rawdev_reset(struct rte_rawdev *dev)
+{
+ return dev ? 0 : 1;
+}
+
+static int
+fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
+ u64 *status)
+{
+ struct opae_adapter *adapter;
+ struct opae_manager *mgr;
+ struct opae_accelerator *acc;
+ struct opae_bridge *br;
+ int ret;
+
+ adapter = ifpga_rawdev_get_priv(raw_dev);
+ if (!adapter)
+ return -ENODEV;
+
+ mgr = opae_adapter_get_mgr(adapter);
+ if (!mgr)
+ return -ENODEV;
+
+ acc = opae_adapter_get_acc(adapter, port_id);
+ if (!acc)
+ return -ENODEV;
+
+ br = opae_acc_get_br(acc);
+ if (!br)
+ return -ENODEV;
+
+ ret = opae_manager_flash(mgr, port_id, buffer, size, status);
+ if (ret) {
+ IFPGA_RAWDEV_PMD_ERR("%s pr error %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = opae_bridge_reset(br);
+ if (ret) {
+ IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d\n",
+ __func__, port_id, ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+ const char *file_name)
+{
+ struct stat file_stat;
+ int file_fd;
+ int ret = 0;
+ ssize_t buffer_size;
+ void *buffer;
+ u64 pr_error;
+
+ if (!file_name)
+ return -EINVAL;
+
+ file_fd = open(file_name, O_RDONLY);
+ if (file_fd < 0) {
+ IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s\n",
+ __func__, file_name);
+ IFPGA_RAWDEV_PMD_ERR("Message : %s\n", strerror(errno));
+ return -EINVAL;
+ }
+ ret = stat(file_name, &file_stat);
+ if (ret) {
+ IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s\n",
+ file_name);
+ ret = -EINVAL;
+ goto close_fd;
+ }
+ buffer_size = file_stat.st_size;
+ if (buffer_size <= 0) {
+ ret = -EINVAL;
+ goto close_fd;
+ }
+
+ IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu\n", buffer_size);
+ buffer = rte_malloc(NULL, buffer_size, 0);
+ if (!buffer) {
+ ret = -ENOMEM;
+ goto close_fd;
+ }
+
+ /* read the raw data */
+ if (buffer_size != read(file_fd, (void *)buffer, buffer_size)) {
+ ret = -EINVAL;
+ goto free_buffer;
+ }
+
+ /* do PR now */
+ ret = fpga_pr(rawdev, port_id, buffer, buffer_size, &pr_error);
+ IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.\n", port_id,
+ ret ? "failed" : "success");
+ if (ret) {
+ ret = -EINVAL;
+ goto free_buffer;
+ }
+
+free_buffer:
+ if (buffer)
+ rte_free(buffer);
+close_fd:
+ close(file_fd);
+ file_fd = 0;
+ return ret;
+}
+
+static int
+ifpga_rawdev_pr(struct rte_rawdev *dev,
+ rte_rawdev_obj_t pr_conf)
+{
+ struct opae_adapter *adapter;
+ struct rte_afu_pr_conf *afu_pr_conf;
+ int ret;
+ struct uuid uuid;
+ struct opae_accelerator *acc;
+
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+
+ adapter = ifpga_rawdev_get_priv(dev);
+ if (!adapter)
+ return -ENODEV;
+
+ if (!pr_conf)
+ return -EINVAL;
+
+ afu_pr_conf = pr_conf;
+
+ if (afu_pr_conf->pr_enable) {
+ ret = rte_fpga_do_pr(dev,
+ afu_pr_conf->afu_id.port,
+ afu_pr_conf->bs_path);
+ if (ret) {
+ IFPGA_RAWDEV_PMD_ERR("do pr error %d\n", ret);
+ return ret;
+ }
+ }
+
+ acc = opae_adapter_get_acc(adapter, afu_pr_conf->afu_id.port);
+ if (!acc)
+ return -ENODEV;
+
+ ret = opae_acc_get_uuid(acc, &uuid);
+ if (ret)
+ return ret;
+
+ memcpy(&afu_pr_conf->afu_id.uuid.uuid_low, uuid.b, sizeof(u64));
+ memcpy(&afu_pr_conf->afu_id.uuid.uuid_high, uuid.b + 8, sizeof(u64));
+
+ IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx\n", __func__,
+ (unsigned long)afu_pr_conf->afu_id.uuid.uuid_low,
+ (unsigned long)afu_pr_conf->afu_id.uuid.uuid_high);
+
+ return 0;
+}
+
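+/*
+ * Example (sketch): the ifpga bus reaches the PR op above through the
+ * generic rawdev firmware API.  The field names follow the usage in
+ * this function; strlcpy is assumed from rte_string_fns.h and the
+ * bitstream path is a placeholder.
+ *
+ *   struct rte_afu_pr_conf conf;
+ *
+ *   conf.afu_id.port = 0;
+ *   conf.pr_enable = 1;
+ *   strlcpy(conf.bs_path, "/path/to/afu.gbs", sizeof(conf.bs_path));
+ *   rte_rawdev_firmware_load(dev_id, &conf);
+ */
+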
+static int
+ifpga_rawdev_get_attr(struct rte_rawdev *dev,
+ const char *attr_name, uint64_t *attr_value)
+{
+ struct opae_adapter *adapter;
+ struct opae_manager *mgr;
+ struct opae_retimer_info opae_rtm_info;
+ struct opae_retimer_status opae_rtm_status;
+ struct opae_eth_group_info opae_eth_grp_info;
+ struct opae_eth_group_region_info opae_eth_grp_reg_info;
+ int eth_group_num = 0;
+ uint64_t port_link_bitmap = 0, port_link_bit;
+ uint32_t i, j, p, q;
+
+#define MAX_PORT_PER_RETIMER 4
+
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+
+ if (!dev || !attr_name || !attr_value) {
+ IFPGA_RAWDEV_PMD_ERR("Invalid arguments for getting attributes");
+ return -1;
+ }
+
+ adapter = ifpga_rawdev_get_priv(dev);
+ if (!adapter) {
+ IFPGA_RAWDEV_PMD_ERR("Adapter of dev %s is NULL", dev->name);
+ return -1;
+ }
+
+ mgr = opae_adapter_get_mgr(adapter);
+ if (!mgr) {
+ IFPGA_RAWDEV_PMD_ERR("opae_manager of opae_adapter is NULL");
+ return -1;
+ }
+
+ /* currently, eth_group_num is always 2 */
+ eth_group_num = opae_manager_get_eth_group_nums(mgr);
+ if (eth_group_num < 0)
+ return -1;
+
+ if (!strcmp(attr_name, "LineSideBaseMAC")) {
+ /* not currently implemented in the FPGA, so report all zeros */
+ *attr_value = (uint64_t)0;
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideMACType")) {
+ /* eth_group 0 on the FPGA connects to the line side */
+ if (opae_manager_get_eth_group_info(mgr, 0,
+ &opae_eth_grp_info))
+ return -1;
+ switch (opae_eth_grp_info.speed) {
+ case ETH_SPEED_10G:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI);
+ break;
+ case ETH_SPEED_25G:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI);
+ break;
+ default:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_UNKNOWN);
+ break;
+ }
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideLinkSpeed")) {
+ if (opae_manager_get_retimer_status(mgr, &opae_rtm_status))
+ return -1;
+ switch (opae_rtm_status.speed) {
+ case MXD_10GB:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_10GB);
+ break;
+ case MXD_25GB:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_25GB);
+ break;
+ case MXD_40GB:
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_40GB);
+ break;
+ default:
+ /* 1G/2.5G/5G/100G and unknown modes have no matching speed id */
+ *attr_value =
+ (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
+ break;
+ }
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideLinkRetimerNum")) {
+ if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
+ return -1;
+ *attr_value = (uint64_t)(opae_rtm_info.nums_retimer);
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideLinkPortNum")) {
+ if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
+ return -1;
+ uint64_t tmp = (uint64_t)opae_rtm_info.ports_per_retimer *
+ (uint64_t)opae_rtm_info.nums_retimer;
+ *attr_value = tmp;
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideLinkStatus")) {
+ if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
+ return -1;
+ if (opae_manager_get_retimer_status(mgr, &opae_rtm_status))
+ return -1;
+ (*attr_value) = 0;
+ q = 0;
+ port_link_bitmap = (uint64_t)(opae_rtm_status.line_link_bitmap);
+ for (i = 0; i < opae_rtm_info.nums_retimer; i++) {
+ p = i * MAX_PORT_PER_RETIMER;
+ for (j = 0; j < opae_rtm_info.ports_per_retimer; j++) {
+ port_link_bit = 0;
+ IFPGA_BIT_SET(port_link_bit, (p+j));
+ port_link_bit &= port_link_bitmap;
+ if (port_link_bit)
+ IFPGA_BIT_SET((*attr_value), q);
+ q++;
+ }
+ }
+ return 0;
+ }
+ if (!strcmp(attr_name, "LineSideBARIndex")) {
+ /* eth_group 0 on the FPGA connects to the line side */
+ if (opae_manager_get_eth_group_region_info(mgr, 0,
+ &opae_eth_grp_reg_info))
+ return -1;
+ *attr_value = (uint64_t)opae_eth_grp_reg_info.mem_idx;
+ return 0;
+ }
+ if (!strcmp(attr_name, "NICSideMACType")) {
+ /* eth_group 1 on the FPGA connects to the NIC side */
+ if (opae_manager_get_eth_group_info(mgr, 1,
+ &opae_eth_grp_info))
+ return -1;
+ *attr_value = (uint64_t)(opae_eth_grp_info.speed);
+ return 0;
+ }
+ if (!strcmp(attr_name, "NICSideLinkSpeed")) {
+ /* eth_group 1 on the FPGA connects to the NIC side */
+ if (opae_manager_get_eth_group_info(mgr, 1,
+ &opae_eth_grp_info))
+ return -1;
+ *attr_value = (uint64_t)(opae_eth_grp_info.speed);
+ return 0;
+ }
+ if (!strcmp(attr_name, "NICSideLinkPortNum")) {
+ if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
+ return -1;
+ uint64_t tmp = (uint64_t)opae_rtm_info.nums_fvl *
+ (uint64_t)opae_rtm_info.ports_per_fvl;
+ *attr_value = tmp;
+ return 0;
+ }
+ if (!strcmp(attr_name, "NICSideLinkStatus"))
+ return 0;
+ if (!strcmp(attr_name, "NICSideBARIndex")) {
+ /* eth_group 1 on the FPGA connects to the NIC side */
+ if (opae_manager_get_eth_group_region_info(mgr, 1,
+ &opae_eth_grp_reg_info))
+ return -1;
+ *attr_value = (uint64_t)opae_eth_grp_reg_info.mem_idx;
+ return 0;
+ }
+
+ IFPGA_RAWDEV_PMD_ERR("%s not support", attr_name);
+ return -1;
+}
+
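+/*
+ * Example (sketch): applications query the attributes handled above by
+ * name through the public rawdev API:
+ *
+ *   uint64_t speed;
+ *
+ *   if (!rte_rawdev_get_attr(dev_id, "LineSideLinkSpeed", &speed))
+ *       printf("line side speed id: %" PRIu64 "\n", speed);
+ */
+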
+static const struct rte_rawdev_ops ifpga_rawdev_ops = {
+ .dev_info_get = ifpga_rawdev_info_get,
+ .dev_configure = ifpga_rawdev_configure,
+ .dev_start = ifpga_rawdev_start,
+ .dev_stop = ifpga_rawdev_stop,
+ .dev_close = ifpga_rawdev_close,
+ .dev_reset = ifpga_rawdev_reset,
+
+ .queue_def_conf = NULL,
+ .queue_setup = NULL,
+ .queue_release = NULL,
+
+ .attr_get = ifpga_rawdev_get_attr,
+ .attr_set = NULL,
+
+ .enqueue_bufs = NULL,
+ .dequeue_bufs = NULL,
+
+ .dump = NULL,
+
+ .xstats_get = NULL,
+ .xstats_get_names = NULL,
+ .xstats_get_by_name = NULL,
+ .xstats_reset = NULL,
+
+ .firmware_status_get = NULL,
+ .firmware_version_get = NULL,
+ .firmware_load = ifpga_rawdev_pr,
+ .firmware_unload = NULL,
+
+ .dev_selftest = NULL,
+};
+
+static int
+ifpga_rawdev_create(struct rte_pci_device *pci_dev,
+ int socket_id)
+{
+ int ret = 0;
+ struct rte_rawdev *rawdev = NULL;
+ struct opae_adapter *adapter = NULL;
+ struct opae_manager *mgr = NULL;
+ struct opae_adapter_data_pci *data = NULL;
+ char name[RTE_RAWDEV_NAME_MAX_LEN];
+ int i;
+
+ if (!pci_dev) {
+ IFPGA_RAWDEV_PMD_ERR("Invalid pci_dev of the device!");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ memset(name, 0, sizeof(name));
+ snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "IFPGA:%x:%02x.%x",
+ pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function);
+
+ IFPGA_RAWDEV_PMD_INFO("Init %s on NUMA node %d", name, rte_socket_id());
+
+ /* Allocate device structure */
+ rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct opae_adapter),
+ socket_id);
+ if (rawdev == NULL) {
+ IFPGA_RAWDEV_PMD_ERR("Unable to allocate rawdevice");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /* alloc OPAE_FPGA_PCI data to register to OPAE hardware level API */
+ data = opae_adapter_data_alloc(OPAE_FPGA_PCI);
+ if (!data) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ /* init opae_adapter_data_pci for device specific information */
+ for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+ data->region[i].phys_addr = pci_dev->mem_resource[i].phys_addr;
+ data->region[i].len = pci_dev->mem_resource[i].len;
+ data->region[i].addr = pci_dev->mem_resource[i].addr;
+ }
+ data->device_id = pci_dev->id.device_id;
+ data->vendor_id = pci_dev->id.vendor_id;
+
+ adapter = rawdev->dev_private;
+ /* create an opae_adapter based on the device data above */
+ ret = opae_adapter_init(adapter, pci_dev->device.name, data);
+ if (ret) {
+ ret = -ENOMEM;
+ goto free_adapter_data;
+ }
+
+ rawdev->dev_ops = &ifpga_rawdev_ops;
+ rawdev->device = &pci_dev->device;
+ rawdev->driver_name = pci_dev->driver->driver.name;
+
+ /* the adapter must be enumerated before use */
+ ret = opae_adapter_enumerate(adapter);
+ if (ret)
+ goto free_adapter_data;
+
+ /* get opae_manager to rawdev */
+ mgr = opae_adapter_get_mgr(adapter);
+ if (mgr) {
+ /* PF function */
+ IFPGA_RAWDEV_PMD_INFO("this is a PF function");
+ }
+
+ return ret;
+
+free_adapter_data:
+ if (data)
+ opae_adapter_data_free(data);
+cleanup:
+ if (rawdev)
+ rte_rawdev_pmd_release(rawdev);
+
+ return ret;
+}
+
+static int
+ifpga_rawdev_destroy(struct rte_pci_device *pci_dev)
+{
+ int ret;
+ struct rte_rawdev *rawdev;
+ char name[RTE_RAWDEV_NAME_MAX_LEN];
+ struct opae_adapter *adapter;
+
+ if (!pci_dev) {
+ IFPGA_RAWDEV_PMD_ERR("Invalid pci_dev of the device!");
+ ret = -EINVAL;
+ return ret;
+ }
+
+ memset(name, 0, sizeof(name));
+ snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "IFPGA:%x:%02x.%x",
+ pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function);
+
+ IFPGA_RAWDEV_PMD_INFO("Closing %s on NUMA node %d",
+ name, rte_socket_id());
+
+ rawdev = rte_rawdev_pmd_get_named_dev(name);
+ if (!rawdev) {
+ IFPGA_RAWDEV_PMD_ERR("Invalid device name (%s)", name);
+ return -EINVAL;
+ }
+
+ adapter = ifpga_rawdev_get_priv(rawdev);
+ if (!adapter)
+ return -ENODEV;
+
+ opae_adapter_data_free(adapter->data);
+ opae_adapter_free(adapter);
+
+ /* rte_rawdev_close is called by pmd_release */
+ ret = rte_rawdev_pmd_release(rawdev);
+ if (ret)
+ IFPGA_RAWDEV_PMD_DEBUG("Device cleanup failed");
+
+ return ret;
+}
+
+static int
+ifpga_rawdev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ IFPGA_RAWDEV_PMD_FUNC_TRACE();
+ return ifpga_rawdev_create(pci_dev, rte_socket_id());
+}
+
+static int
+ifpga_rawdev_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return ifpga_rawdev_destroy(pci_dev);
+}
+
+static struct rte_pci_driver rte_ifpga_rawdev_pmd = {
+ .id_table = pci_ifpga_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = ifpga_rawdev_pci_probe,
+ .remove = ifpga_rawdev_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(ifpga_rawdev_pci_driver, rte_ifpga_rawdev_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(ifpga_rawdev_pci_driver, rte_ifpga_rawdev_pmd);
+RTE_PMD_REGISTER_KMOD_DEP(ifpga_rawdev_pci_driver, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_INIT(ifpga_rawdev_init_log)
+{
+ ifpga_rawdev_logtype = rte_log_register("driver.raw.init");
+ if (ifpga_rawdev_logtype >= 0)
+ rte_log_set_level(ifpga_rawdev_logtype, RTE_LOG_NOTICE);
+}
+
+static const char * const valid_args[] = {
+#define IFPGA_ARG_NAME "ifpga"
+ IFPGA_ARG_NAME,
+#define IFPGA_ARG_PORT "port"
+ IFPGA_ARG_PORT,
+#define IFPGA_AFU_BTS "afu_bts"
+ IFPGA_AFU_BTS,
+ NULL
+};
+
+static int
+ifpga_cfg_probe(struct rte_vdev_device *dev)
+{
+ struct rte_devargs *devargs;
+ struct rte_kvargs *kvlist = NULL;
+ int port;
+ char *name = NULL;
+ char dev_name[RTE_RAWDEV_NAME_MAX_LEN];
+ int ret = -1;
+
+ devargs = dev->device.devargs;
+
+ kvlist = rte_kvargs_parse(devargs->args, valid_args);
+ if (!kvlist) {
+ IFPGA_RAWDEV_PMD_ERR("error when parsing devargs");
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, IFPGA_ARG_NAME) == 1) {
+ if (rte_kvargs_process(kvlist, IFPGA_ARG_NAME,
+ &rte_ifpga_get_string_arg, &name) < 0) {
+ IFPGA_RAWDEV_PMD_ERR("error to parse %s",
+ IFPGA_ARG_NAME);
+ goto end;
+ }
+ } else {
+ IFPGA_RAWDEV_PMD_ERR("arg %s is mandatory for ifpga bus",
+ IFPGA_ARG_NAME);
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, IFPGA_ARG_PORT) == 1) {
+ if (rte_kvargs_process(kvlist,
+ IFPGA_ARG_PORT,
+ &rte_ifpga_get_integer32_arg,
+ &port) < 0) {
+ IFPGA_RAWDEV_PMD_ERR("error to parse %s",
+ IFPGA_ARG_PORT);
+ goto end;
+ }
+ } else {
+ IFPGA_RAWDEV_PMD_ERR("arg %s is mandatory for ifpga bus",
+ IFPGA_ARG_PORT);
+ goto end;
+ }
+
+ memset(dev_name, 0, sizeof(dev_name));
+ snprintf(dev_name, RTE_RAWDEV_NAME_MAX_LEN, "%d|%s",
+ port, name);
+
+ ret = rte_eal_hotplug_add(RTE_STR(IFPGA_BUS_NAME),
+ dev_name, devargs->args);
+end:
+ if (kvlist)
+ rte_kvargs_free(kvlist);
+ if (name)
+ free(name);
+
+ return ret;
+}
+
+static int
+ifpga_cfg_remove(struct rte_vdev_device *vdev)
+{
+ IFPGA_RAWDEV_PMD_INFO("Remove ifpga_cfg %p",
+ vdev);
+
+ return 0;
+}
+
+static struct rte_vdev_driver ifpga_cfg_driver = {
+ .probe = ifpga_cfg_probe,
+ .remove = ifpga_cfg_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ifpga_rawdev_cfg, ifpga_cfg_driver);
+RTE_PMD_REGISTER_ALIAS(ifpga_rawdev_cfg, ifpga_cfg);
+RTE_PMD_REGISTER_PARAM_STRING(ifpga_rawdev_cfg,
+ "ifpga=<string> "
+ "port=<int> "
+ "afu_bts=<path>");
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2018 Intel Corporation
+ */
+
+#ifndef _IFPGA_RAWDEV_H_
+#define _IFPGA_RAWDEV_H_
+
+extern int ifpga_rawdev_logtype;
+
+#define IFPGA_RAWDEV_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ifpga_rawdev_logtype, "%s(): " fmt "\n", \
+ __func__, ##args)
+
+#define IFPGA_RAWDEV_PMD_FUNC_TRACE() IFPGA_RAWDEV_PMD_LOG(DEBUG, ">>")
+
+#define IFPGA_RAWDEV_PMD_DEBUG(fmt, args...) \
+ IFPGA_RAWDEV_PMD_LOG(DEBUG, fmt, ## args)
+#define IFPGA_RAWDEV_PMD_INFO(fmt, args...) \
+ IFPGA_RAWDEV_PMD_LOG(INFO, fmt, ## args)
+#define IFPGA_RAWDEV_PMD_ERR(fmt, args...) \
+ IFPGA_RAWDEV_PMD_LOG(ERR, fmt, ## args)
+#define IFPGA_RAWDEV_PMD_WARN(fmt, args...) \
+ IFPGA_RAWDEV_PMD_LOG(WARNING, fmt, ## args)
+
+enum ifpga_rawdev_device_state {
+ IFPGA_IDLE,
+ IFPGA_READY,
+ IFPGA_ERROR
+};
+
+/** Set a bit in the uint64 variable */
+#define IFPGA_BIT_SET(var, pos) \
+ ((var) |= ((uint64_t)1 << ((pos))))
+
+/** Reset the bit in the variable */
+#define IFPGA_BIT_RESET(var, pos) \
+ ((var) &= ~((uint64_t)1 << ((pos))))
+
+/** Check the bit is set in the variable */
+#define IFPGA_BIT_ISSET(var, pos) \
+ (((var) & ((uint64_t)1 << ((pos)))) ? 1 : 0)
+
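+/*
+ * Example: building a per-port link bitmap the way
+ * ifpga_rawdev_get_attr() does, one bit per retimer port:
+ *
+ *   uint64_t map = 0;
+ *
+ *   IFPGA_BIT_SET(map, 2);
+ *   if (IFPGA_BIT_ISSET(map, 2))
+ *       IFPGA_BIT_RESET(map, 2);
+ */
+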
+static inline struct opae_adapter *
+ifpga_rawdev_get_priv(const struct rte_rawdev *rawdev)
+{
+ return rawdev->dev_private;
+}
+
+#endif /* _IFPGA_RAWDEV_H_ */
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+version = 1
+
+subdir('base')
+objs = [base_objs]
+
+dep = dependency('libfdt', required: false)
+if not dep.found()
+ build = false
+ reason = 'missing dependency, "libfdt"'
+endif
+deps += ['rawdev', 'pci', 'bus_pci', 'kvargs',
+ 'bus_vdev', 'bus_ifpga', 'net']
+sources = files('ifpga_rawdev.c')
+
+includes += include_directories('base')
+
+allow_experimental_apis = true
--- /dev/null
+DPDK_18.05 {
+
+ local: *;
+};
+++ /dev/null
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-LIB = librte_pmd_ifpga_rawdev.a
-
-CFLAGS += -DALLOW_EXPERIMENTAL_API
-CFLAGS += -O3
-CFLAGS += $(WERROR_FLAGS)
-CFLAGS += -I$(RTE_SDK)/drivers/bus/ifpga
-CFLAGS += -I$(RTE_SDK)/drivers/raw/ifpga_rawdev
-CFLAGS += -I$(RTE_SDK)/drivers/net/ipn3ke
-LDLIBS += -lrte_eal
-LDLIBS += -lrte_rawdev
-LDLIBS += -lrte_bus_vdev
-LDLIBS += -lrte_kvargs
-LDLIBS += -lrte_bus_pci
-LDLIBS += -lrte_bus_ifpga
-
-EXPORT_MAP := rte_pmd_ifpga_rawdev_version.map
-
-LIBABIVER := 1
-
-VPATH += $(SRCDIR)/base
-
-include $(RTE_SDK)/drivers/raw/ifpga_rawdev/base/Makefile
-
-#
-# all source are stored in SRCS-y
-#
-SRCS-$(CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV) += ifpga_rawdev.c
-
-include $(RTE_SDK)/mk/rte.lib.mk
+++ /dev/null
-#SPDX-License-Identifier: BSD-3-Clause
-#Copyright(c) 2010-2018 Intel Corporation
-
-ifneq ($(CONFIG_RTE_LIBRTE_EAL),)
-OSDEP := osdep_rte
-else
-OSDEP := osdep_raw
-endif
-
-CFLAGS += -I$(RTE_SDK)/drivers/raw/ifpga_rawdev/base/$(OSDEP)
-
-SRCS-y += ifpga_api.c
-SRCS-y += ifpga_enumerate.c
-SRCS-y += ifpga_feature_dev.c
-SRCS-y += ifpga_fme.c
-SRCS-y += ifpga_fme_iperf.c
-SRCS-y += ifpga_fme_dperf.c
-SRCS-y += ifpga_fme_error.c
-SRCS-y += ifpga_port.c
-SRCS-y += ifpga_port_error.c
-SRCS-y += opae_hw_api.c
-SRCS-y += opae_ifpga_hw_api.c
-SRCS-y += opae_debug.c
-SRCS-y += ifpga_fme_pr.c
-SRCS-y += opae_spi.c
-SRCS-y += opae_spi_transaction.c
-SRCS-y += opae_intel_max10.c
-SRCS-y += opae_i2c.c
-SRCS-y += opae_at24_eeprom.c
-SRCS-y += opae_eth_group.c
-
-SRCS-y += $(wildcard $(SRCDIR)/base/$(OSDEP)/*.c)
+++ /dev/null
-..
-
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-Intel iFPGA driver
-==================
-
-This directory contains source code of Intel FPGA driver released by
-the team which develops Intel FPGA Open Programmable Acceleration Engine (OPAE).
-The directory of base/ contains the original source package. The base code
-currently supports Intel FPGA solutions including integrated solution (Intel(R)
-Xeon(R) CPU with FPGAs) and discrete solution (Intel(R) Programmable Acceleration
-Card with Intel(R) Arria(R) 10 FPGA) and it could be extended to support more FPGA
-devices in the future.
-
-Please refer to [1][2] for more introduction on OPAE and Intel FPGAs.
-
-[1] https://01.org/OPAE
-[2] https://www.altera.com/solutions/acceleration-hub/overview.html
-
-
-Updating the driver
-===================
-
-NOTE: The source code in this directory should not be modified apart from
-the following file(s):
-
- osdep_raw/osdep_generic.h
- osdep_rte/osdep_generic.h
-
-
-New Features
-==================
-
-2019-03:
-Support Intel FPGA PAC N3000 card.
-Some features added in this version:
-1. Store private features in FME and Port list.
-2. Add eth group devices driver.
-3. Add altera SPI master driver and Intel MAX10 device driver.
-4. Add Altera I2C master driver and AT24 eeprom driver.
-5. Add Device Tree support to get the configuration from card.
-6. Instruding and exposing APIs to DPDK PMD driver to access networking
-functionality.
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_api.h"
-#include "ifpga_enumerate.h"
-#include "ifpga_feature_dev.h"
-
-#include "opae_hw_api.h"
-
-/* Accelerator APIs */
-static int ifpga_acc_get_uuid(struct opae_accelerator *acc,
- struct uuid *uuid)
-{
- struct opae_bridge *br = acc->br;
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return fpga_get_afu_uuid(port, uuid);
-}
-
-static int ifpga_acc_set_irq(struct opae_accelerator *acc,
- u32 start, u32 count, s32 evtfds[])
-{
- struct ifpga_afu_info *afu_info = acc->data;
- struct opae_bridge *br = acc->br;
- struct ifpga_port_hw *port;
- struct fpga_uafu_irq_set irq_set;
-
- if (!br || !br->data)
- return -EINVAL;
-
- if (start >= afu_info->num_irqs || start + count > afu_info->num_irqs)
- return -EINVAL;
-
- port = br->data;
-
- irq_set.start = start;
- irq_set.count = count;
- irq_set.evtfds = evtfds;
-
- return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
- IFPGA_PORT_FEATURE_ID_UINT, &irq_set);
-}
-
-static int ifpga_acc_get_info(struct opae_accelerator *acc,
- struct opae_acc_info *info)
-{
- struct ifpga_afu_info *afu_info = acc->data;
-
- if (!afu_info)
- return -ENODEV;
-
- info->num_regions = afu_info->num_regions;
- info->num_irqs = afu_info->num_irqs;
-
- return 0;
-}
-
-static int ifpga_acc_get_region_info(struct opae_accelerator *acc,
- struct opae_acc_region_info *info)
-{
- struct ifpga_afu_info *afu_info = acc->data;
-
- if (!afu_info)
- return -EINVAL;
-
- if (info->index >= afu_info->num_regions)
- return -EINVAL;
-
- /* only one RW region per AFU for now */
- info->flags = ACC_REGION_READ | ACC_REGION_WRITE | ACC_REGION_MMIO;
- info->len = afu_info->region[info->index].len;
- info->addr = afu_info->region[info->index].addr;
- info->phys_addr = afu_info->region[info->index].phys_addr;
-
- return 0;
-}
-
-static int ifpga_acc_read(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data)
-{
- struct ifpga_afu_info *afu_info = acc->data;
- struct opae_reg_region *region;
-
- if (!afu_info)
- return -EINVAL;
-
- if (offset + byte <= offset)
- return -EINVAL;
-
- if (region_idx >= afu_info->num_regions)
- return -EINVAL;
-
- region = &afu_info->region[region_idx];
- if (offset + byte > region->len)
- return -EINVAL;
-
- switch (byte) {
- case 8:
- *(u64 *)data = opae_readq(region->addr + offset);
- break;
- case 4:
- *(u32 *)data = opae_readl(region->addr + offset);
- break;
- case 2:
- *(u16 *)data = opae_readw(region->addr + offset);
- break;
- case 1:
- *(u8 *)data = opae_readb(region->addr + offset);
- break;
- default:
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int ifpga_acc_write(struct opae_accelerator *acc,
- unsigned int region_idx, u64 offset,
- unsigned int byte, void *data)
-{
- struct ifpga_afu_info *afu_info = acc->data;
- struct opae_reg_region *region;
-
- if (!afu_info)
- return -EINVAL;
-
- if (offset + byte <= offset)
- return -EINVAL;
-
- if (region_idx >= afu_info->num_regions)
- return -EINVAL;
-
- region = &afu_info->region[region_idx];
- if (offset + byte > region->len)
- return -EINVAL;
-
- /* normal mmio case */
- switch (byte) {
- case 8:
- opae_writeq(*(u64 *)data, region->addr + offset);
- break;
- case 4:
- opae_writel(*(u32 *)data, region->addr + offset);
- break;
- case 2:
- opae_writew(*(u16 *)data, region->addr + offset);
- break;
- case 1:
- opae_writeb(*(u8 *)data, region->addr + offset);
- break;
- default:
- return -EINVAL;
- }
-
- return 0;
-}
-
-struct opae_accelerator_ops ifpga_acc_ops = {
- .read = ifpga_acc_read,
- .write = ifpga_acc_write,
- .set_irq = ifpga_acc_set_irq,
- .get_info = ifpga_acc_get_info,
- .get_region_info = ifpga_acc_get_region_info,
- .get_uuid = ifpga_acc_get_uuid,
-};
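-
-/*
- * Illustrative MMIO read through the ops table above (a sketch only; it
- * assumes a valid opae_accelerator with at least one mapped region):
- *
- *   u64 val;
- *
- *   if (!ifpga_acc_ops.read(acc, 0, 0x0, 8, &val))
- *           printf("csr[0] = 0x%llx\n", (unsigned long long)val);
- */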
-
-/* Bridge APIs */
-static int ifpga_br_reset(struct opae_bridge *br)
-{
- struct ifpga_port_hw *port = br->data;
-
- return fpga_port_reset(port);
-}
-
-struct opae_bridge_ops ifpga_br_ops = {
- .reset = ifpga_br_reset,
-};
-
-/* Manager APIs */
-static int ifpga_mgr_flash(struct opae_manager *mgr, int id, const char *buf,
- u32 size, u64 *status)
-{
- struct ifpga_fme_hw *fme = mgr->data;
- struct ifpga_hw *hw = fme->parent;
-
- return ifpga_pr(hw, id, buf, size, status);
-}
-
-static int ifpga_mgr_get_eth_group_region_info(struct opae_manager *mgr,
- struct opae_eth_group_region_info *info)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- if (info->group_id >= MAX_ETH_GROUP_DEVICES)
- return -EINVAL;
-
- info->phys_addr = fme->eth_group_region[info->group_id].phys_addr;
- info->addr = fme->eth_group_region[info->group_id].addr;
- info->len = fme->eth_group_region[info->group_id].len;
-
- info->mem_idx = fme->nums_acc_region + info->group_id;
-
- return 0;
-}
-
-struct opae_manager_ops ifpga_mgr_ops = {
- .flash = ifpga_mgr_flash,
- .get_eth_group_region_info = ifpga_mgr_get_eth_group_region_info,
-};
-
-static int ifpga_mgr_read_mac_rom(struct opae_manager *mgr, int offset,
- void *buf, int size)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_read_mac_rom(fme, offset, buf, size);
-}
-
-static int ifpga_mgr_write_mac_rom(struct opae_manager *mgr, int offset,
- void *buf, int size)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_write_mac_rom(fme, offset, buf, size);
-}
-
-static int ifpga_mgr_get_eth_group_nums(struct opae_manager *mgr)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_get_eth_group_nums(fme);
-}
-
-static int ifpga_mgr_get_eth_group_info(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_info *info)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_get_eth_group_info(fme, group_id, info);
-}
-
-static int ifpga_mgr_eth_group_reg_read(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_eth_group_read_reg(fme, group_id,
- type, index, addr, data);
-}
-
-static int ifpga_mgr_eth_group_reg_write(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_eth_group_write_reg(fme, group_id,
- type, index, addr, data);
-}
-
-static int ifpga_mgr_get_retimer_info(struct opae_manager *mgr,
- struct opae_retimer_info *info)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_get_retimer_info(fme, info);
-}
-
-static int ifpga_mgr_get_retimer_status(struct opae_manager *mgr,
- struct opae_retimer_status *status)
-{
- struct ifpga_fme_hw *fme = mgr->data;
-
- return fme_mgr_get_retimer_status(fme, status);
-}
-
-/* Network APIs in FME */
-struct opae_manager_networking_ops ifpga_mgr_network_ops = {
- .read_mac_rom = ifpga_mgr_read_mac_rom,
- .write_mac_rom = ifpga_mgr_write_mac_rom,
- .get_eth_group_nums = ifpga_mgr_get_eth_group_nums,
- .get_eth_group_info = ifpga_mgr_get_eth_group_info,
- .eth_group_reg_read = ifpga_mgr_eth_group_reg_read,
- .eth_group_reg_write = ifpga_mgr_eth_group_reg_write,
- .get_retimer_info = ifpga_mgr_get_retimer_info,
- .get_retimer_status = ifpga_mgr_get_retimer_status,
-};
-
-/* Adapter APIs */
-static int ifpga_adapter_enumerate(struct opae_adapter *adapter)
-{
- struct ifpga_hw *hw = malloc(sizeof(*hw));
- int ret;
-
- if (!hw)
- return -ENOMEM;
-
- opae_memset(hw, 0, sizeof(*hw));
- hw->pci_data = adapter->data;
- hw->adapter = adapter;
-
- ret = ifpga_bus_enumerate(hw);
- if (ret) {
- free(hw);
- return ret;
- }
-
- return ifpga_bus_init(hw);
-}
-
-struct opae_adapter_ops ifpga_adapter_ops = {
- .enumerate = ifpga_adapter_enumerate,
-};
-
-/**
- * ifpga_pr - do the partial reconfiguration for a given port device
- * @hw: pointer to the HW structure
- * @port_id: the port device id
- * @buffer: the buffer of the bitstream
- * @size: the size of the bitstream
- * @status: hardware status, including the PR error code if -EIO is returned.
- *
- * @return
- * - 0: Success, partial reconfiguration finished.
- * - <0: Error code returned in partial reconfiguration.
- **/
-int ifpga_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
- u64 *status)
-{
- if (!is_valid_port_id(hw, port_id))
- return -ENODEV;
-
- return do_pr(hw, port_id, buffer, size, status);
-}
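-
-/*
- * Illustrative call sequence (a sketch only; it assumes a valid ifpga_hw
- * instance and a green bitstream already loaded into gbs_buf/gbs_len):
- *
- *   u64 status = 0;
- *   int ret = ifpga_pr(hw, 0, gbs_buf, gbs_len, &status);
- *
- *   if (ret == -EIO)
- *           printf("PR failed, hw status 0x%llx\n",
- *                  (unsigned long long)status);
- */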
-
-int ifpga_get_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- struct feature_prop *prop)
-{
- if (!hw || !prop)
- return -EINVAL;
-
- switch (fiu_id) {
- case FEATURE_FIU_ID_FME:
- return fme_get_prop(&hw->fme, prop);
- case FEATURE_FIU_ID_PORT:
- if (!is_valid_port_id(hw, port_id))
- return -ENODEV;
- return port_get_prop(&hw->port[port_id], prop);
- }
-
- return -ENOENT;
-}
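-
-/*
- * Illustrative property query (a sketch only; the fields of struct
- * feature_prop that select the property are assumed to be filled in by the
- * caller before the call):
- *
- *   struct feature_prop prop = { 0 };
- *   int ret;
- *
- *   ret = ifpga_get_prop(hw, FEATURE_FIU_ID_FME, 0, &prop);
- */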
-
-int ifpga_set_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- struct feature_prop *prop)
-{
- if (!hw || !prop)
- return -EINVAL;
-
- switch (fiu_id) {
- case FEATURE_FIU_ID_FME:
- return fme_set_prop(&hw->fme, prop);
- case FEATURE_FIU_ID_PORT:
- if (!is_valid_port_id(hw, port_id))
- return -ENODEV;
- return port_set_prop(&hw->port[port_id], prop);
- }
-
- return -ENOENT;
-}
-
-int ifpga_set_irq(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- u32 feature_id, void *irq_set)
-{
- if (!hw || !irq_set)
- return -EINVAL;
-
- switch (fiu_id) {
- case FEATURE_FIU_ID_FME:
- return fme_set_irq(&hw->fme, feature_id, irq_set);
- case FEATURE_FIU_ID_PORT:
- if (!is_valid_port_id(hw, port_id))
- return -ENODEV;
- return port_set_irq(&hw->port[port_id], feature_id, irq_set);
- }
-
- return -ENOENT;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_API_H_
-#define _IFPGA_API_H_
-
-#include "opae_hw_api.h"
-#include "ifpga_hw.h"
-
-extern struct opae_adapter_ops ifpga_adapter_ops;
-extern struct opae_manager_ops ifpga_mgr_ops;
-extern struct opae_bridge_ops ifpga_br_ops;
-extern struct opae_accelerator_ops ifpga_acc_ops;
-extern struct opae_manager_networking_ops ifpga_mgr_network_ops;
-
-/* common APIs */
-int ifpga_get_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- struct feature_prop *prop);
-int ifpga_set_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- struct feature_prop *prop);
-int ifpga_set_irq(struct ifpga_hw *hw, u32 fiu_id, u32 port_id,
- u32 feature_id, void *irq_set);
-
-/* FME APIs */
-int ifpga_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
- u64 *status);
-
-#endif /* _IFPGA_API_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_COMPAT_H_
-#define _IFPGA_COMPAT_H_
-
-#include "opae_osdep.h"
-
-#undef container_of
-#define container_of(ptr, type, member) ({ \
- typeof(((type *)0)->member)(*__mptr) = (ptr); \
- (type *)((char *)__mptr - offsetof(type, member)); })
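-
-/*
- * Typical use of container_of (a sketch with a hypothetical struct): given
- *
- *   struct wrapper { u64 csr; int id; } w;
- *
- * a pointer to w.id can be mapped back to &w with
- *
- *   struct wrapper *wp = container_of(&w.id, struct wrapper, id);
- */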
-
-#define IFPGA_PAGE_SHIFT 12
-#define IFPGA_PAGE_SIZE (1 << IFPGA_PAGE_SHIFT)
-#define IFPGA_PAGE_MASK (~(IFPGA_PAGE_SIZE - 1))
-#define IFPGA_PAGE_ALIGN(addr) (((addr) + IFPGA_PAGE_SIZE - 1)\
- & IFPGA_PAGE_MASK)
-#define IFPGA_ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
-
-#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
-#define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), IFPGA_PAGE_SIZE)
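-
-/*
- * For example, with the 4KB page size above:
- *   IFPGA_PAGE_ALIGN(0x1234) == 0x2000
- *   IS_ALIGNED(0x2000, IFPGA_PAGE_SIZE) == 1
- */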
-
-#define readl(addr) opae_readl(addr)
-#define readq(addr) opae_readq(addr)
-#define writel(value, addr) opae_writel(value, addr)
-#define writeq(value, addr) opae_writeq(value, addr)
-
-#define malloc(size) opae_malloc(size)
-#define zmalloc(size) opae_zmalloc(size)
-#define free(size) opae_free(size)
-
-/*
- * Wait register's _field to be changed to the given value (_expect's _field)
- * by polling with given interval and timeout.
- */
-#define fpga_wait_register_field(_field, _expect, _reg_addr, _timeout, _invl)\
-({ \
- int wait = 0; \
- int ret = -ETIMEDOUT; \
- typeof(_expect) value; \
- for (; wait <= _timeout; wait += _invl) { \
- value.csr = readq(_reg_addr); \
- if (_expect._field == value._field) { \
- ret = 0; \
- break; \
- } \
- udelay(_invl); \
- } \
- ret; \
-})
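-
-/*
- * Illustrative wait for a soft-reset acknowledge (a sketch only; the register
- * layout and the RST_POLL_* constants from ifpga_defines.h are assumed):
- *
- *   struct feature_port_control expect = { .port_sftrst_ack = 1 };
- *
- *   if (fpga_wait_register_field(port_sftrst_ack, expect,
- *                                &port_hdr->control, RST_POLL_TIMEOUT,
- *                                RST_POLL_INVL))
- *           return -ETIMEDOUT;
- */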
-
-#define __maybe_unused __attribute__((__unused__))
-
-#define UNUSED(x) (void)(x)
-
-#endif /* _IFPGA_COMPAT_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_DEFINES_H_
-#define _IFPGA_DEFINES_H_
-
-#include "ifpga_compat.h"
-
-#define MAX_FPGA_PORT_NUM 4
-
-#define FME_FEATURE_HEADER "fme_hdr"
-#define FME_FEATURE_THERMAL_MGMT "fme_thermal"
-#define FME_FEATURE_POWER_MGMT "fme_power"
-#define FME_FEATURE_GLOBAL_IPERF "fme_iperf"
-#define FME_FEATURE_GLOBAL_ERR "fme_error"
-#define FME_FEATURE_PR_MGMT "fme_pr"
-#define FME_FEATURE_EMIF_MGMT "fme_emif"
-#define FME_FEATURE_HSSI_ETH "fme_hssi"
-#define FME_FEATURE_GLOBAL_DPERF "fme_dperf"
-#define FME_FEATURE_QSPI_FLASH "fme_qspi_flash"
-#define FME_FEATURE_MAX10_SPI "fme_max10_spi"
-#define FME_FEATURE_NIOS_SPI "fme_nios_spi"
-#define FME_FEATURE_I2C_MASTER "fme_i2c_master"
-#define FME_FEATURE_ETH_GROUP "fme_eth_group"
-
-#define PORT_FEATURE_HEADER "port_hdr"
-#define PORT_FEATURE_UAFU "port_uafu"
-#define PORT_FEATURE_ERR "port_err"
-#define PORT_FEATURE_UMSG "port_umsg"
-#define PORT_FEATURE_PR "port_pr"
-#define PORT_FEATURE_UINT "port_uint"
-#define PORT_FEATURE_STP "port_stp"
-
-/*
- * Do not check the revision id, as the id may be dynamic in
- * some cases, e.g., UAFU.
- */
-#define SKIP_REVISION_CHECK 0xff
-
-#define FME_HEADER_REVISION 1
-#define FME_THERMAL_MGMT_REVISION 0
-#define FME_POWER_MGMT_REVISION 1
-#define FME_GLOBAL_IPERF_REVISION 1
-#define FME_GLOBAL_ERR_REVISION 1
-#define FME_PR_MGMT_REVISION 2
-#define FME_HSSI_ETH_REVISION 0
-#define FME_GLOBAL_DPERF_REVISION 0
-#define FME_QSPI_REVISION 0
-#define FME_MAX10_SPI 0
-#define FME_I2C_MASTER 0
-
-#define PORT_HEADER_REVISION 0
-/* UAFU's header info depends on the downloaded GBS */
-#define PORT_UAFU_REVISION SKIP_REVISION_CHECK
-#define PORT_ERR_REVISION 1
-#define PORT_UMSG_REVISION 0
-#define PORT_UINT_REVISION 0
-#define PORT_STP_REVISION 1
-
-#define FEATURE_TYPE_AFU 0x1
-#define FEATURE_TYPE_BBB 0x2
-#define FEATURE_TYPE_PRIVATE 0x3
-#define FEATURE_TYPE_FIU 0x4
-
-#define FEATURE_FIU_ID_FME 0x0
-#define FEATURE_FIU_ID_PORT 0x1
-
-/* Reserved: 0xfe for Header, 0xff for AFU */
-#define FEATURE_ID_FIU_HEADER 0xfe
-#define FEATURE_ID_AFU 0xff
-
-enum fpga_id_type {
- FME_ID,
- PORT_ID,
- FPGA_ID_MAX,
-};
-
-#define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
-#define FME_FEATURE_ID_THERMAL_MGMT 0x1
-#define FME_FEATURE_ID_POWER_MGMT 0x2
-#define FME_FEATURE_ID_GLOBAL_IPERF 0x3
-#define FME_FEATURE_ID_GLOBAL_ERR 0x4
-#define FME_FEATURE_ID_PR_MGMT 0x5
-#define FME_FEATURE_ID_HSSI_ETH 0x6
-#define FME_FEATURE_ID_GLOBAL_DPERF 0x7
-#define FME_FEATURE_ID_QSPI_FLASH 0x8
-#define FME_FEATURE_ID_EMIF_MGMT 0x9
-#define FME_FEATURE_ID_MAX10_SPI 0xe
-#define FME_FEATURE_ID_NIOS_SPI 0xd
-#define FME_FEATURE_ID_I2C_MASTER 0xf
-#define FME_FEATURE_ID_ETH_GROUP 0x10
-
-#define PORT_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
-#define PORT_FEATURE_ID_ERROR 0x10
-#define PORT_FEATURE_ID_UMSG 0x12
-#define PORT_FEATURE_ID_UINT 0x13
-#define PORT_FEATURE_ID_STP 0x14
-#define PORT_FEATURE_ID_UAFU FEATURE_ID_AFU
-
-/*
- * All headers and structures must be byte-packed to match the spec.
- */
-#pragma pack(push, 1)
-
-struct feature_header {
- union {
- u64 csr;
- struct {
- u16 id:12;
- u8 revision:4;
- u32 next_header_offset:24;
- u8 end_of_list:1;
- u32 reserved:19;
- u8 type:4;
- };
- };
-};
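-
-/*
- * A device feature list is walked by adding next_header_offset to the
- * current header address until end_of_list is set (a sketch only):
- *
- *   struct feature_header hdr;
- *   u8 *p = start;
- *
- *   do {
- *           hdr.csr = readq(p);
- *           p += hdr.next_header_offset;
- *   } while (!hdr.end_of_list && hdr.next_header_offset);
- */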
-
-struct feature_bbb_header {
- struct uuid guid;
-};
-
-struct feature_afu_header {
- struct uuid guid;
- union {
- u64 csr;
- struct {
- u64 next_afu:24;
- u64 reserved:40;
- };
- };
-};
-
-struct feature_fiu_header {
- struct uuid guid;
- union {
- u64 csr;
- struct {
- u64 next_afu:24;
- u64 reserved:40;
- };
- };
-};
-
-struct feature_fme_capability {
- union {
- u64 csr;
- struct {
- u8 fabric_verid; /* Fabric version ID */
- u8 socket_id:1; /* Socket id */
- u8 rsvd1:3; /* Reserved */
- /* pci0 link available yes/no */
- u8 pci0_link_avile:1;
- /* pci1 link available yes/no */
- u8 pci1_link_avile:1;
- /* Coherent (QPI/UPI) link available yes/no */
- u8 qpi_link_avile:1;
- u8 rsvd2:1; /* Reserved */
- /* IOMMU or VT-d supported yes/no */
- u8 iommu_support:1;
- u8 num_ports:3; /* Number of ports */
- u8 sf_fab_ctl:1; /* Internal validation bit */
- u8 rsvd3:3; /* Reserved */
- /*
- * Address width supported in bits
- * BXT - 0x26, SKX - 0x30
- */
- u8 address_width_bits:6;
- u8 rsvd4:2; /* Reserved */
- /* Size of cache supported in kb */
- u16 cache_size:12;
- u8 cache_assoc:4; /* Cache Associativity */
- u16 rsvd5:15; /* Reserved */
- u8 lock_bit:1; /* Lock bit */
- };
- };
-};
-
-#define FME_AFU_ACCESS_PF 0
-#define FME_AFU_ACCESS_VF 1
-
-struct feature_fme_port {
- union {
- u64 csr;
- struct {
- u32 port_offset:24;
- u8 reserved1;
- u8 port_bar:3;
- u32 reserved2:20;
- u8 afu_access_control:1;
- u8 reserved3:4;
- u8 port_implemented:1;
- u8 reserved4:3;
- };
- };
-};
-
-struct feature_fme_fab_status {
- union {
- u64 csr;
- struct {
- u8 upilink_status:4; /* UPI Link Status */
- u8 rsvd1:4; /* Reserved */
- u8 pci0link_status:1; /* pci0 link status */
- u8 rsvd2:3; /* Reserved */
- u8 pci1link_status:1; /* pci1 link status */
- u64 rsvd3:51; /* Reserved */
- };
- };
-};
-
-struct feature_fme_genprotrange2_base {
- union {
- u64 csr;
- struct {
- u16 rsvd1; /* Reserved */
- /* Base Address of memory range */
- u8 protected_base_addrss:4;
- u64 rsvd2:44; /* Reserved */
- };
- };
-};
-
-struct feature_fme_genprotrange2_limit {
- union {
- u64 csr;
- struct {
- u16 rsvd1; /* Reserved */
- /* Limit Address of memory range */
- u8 protected_limit_addrss:4;
- u16 rsvd2:11; /* Reserved */
- u8 enable:1; /* Enable GENPROTRANGE check */
- u32 rsvd3; /* Reserved */
- };
- };
-};
-
-struct feature_fme_dxe_lock {
- union {
- u64 csr;
- struct {
- /*
- * Determines write access to the DXE region CSRs
- * 1 - CSR region is locked;
- * 0 - it is open for write access.
- */
- u8 dxe_early_lock:1;
- /*
- * Determines write access to the HSSI CSR
- * 1 - CSR region is locked;
- * 0 - it is open for write access.
- */
- u8 dxe_late_lock:1;
- u64 rsvd:62;
- };
- };
-};
-
-#define HSSI_ID_NO_HASSI 0
-#define HSSI_ID_PCIE_RP 1
-#define HSSI_ID_ETHERNET 2
-
-struct feature_fme_bitstream_id {
- union {
- u64 csr;
- struct {
- u32 gitrepo_hash:32; /* GIT repository hash */
- /*
- * HSSI configuration identifier:
- * 0 - No HSSI
- * 1 - PCIe-RP
- * 2 - Ethernet
- */
- u8 hssi_id:4;
- u16 rsvd1:12; /* Reserved */
- /* Bitstream version patch number */
- u8 bs_verpatch:4;
- /* Bitstream version minor number */
- u8 bs_verminor:4;
- /* Bitstream version major number */
- u8 bs_vermajor:4;
- /* Bitstream version debug number */
- u8 bs_verdebug:4;
- };
- };
-};
-
-struct feature_fme_bitstream_md {
- union {
- u64 csr;
- struct {
- /* Seed number used for synthesis flow */
- u8 synth_seed:4;
- /* Synthesis date(day number - 2 digits) */
- u8 synth_day:8;
- /* Synthesis date(month number - 2 digits) */
- u8 synth_month:8;
- /* Synthesis date(year number - 2 digits) */
- u8 synth_year:8;
- u64 rsvd:36; /* Reserved */
- };
- };
-};
-
-struct feature_fme_iommu_ctrl {
- union {
- u64 csr;
- struct {
- /* Disables IOMMU prefetcher for C0 channel */
- u8 prefetch_disableC0:1;
- /* Disables IOMMU prefetcher for C1 channel */
- u8 prefetch_disableC1:1;
- /* Disables IOMMU partial cache line writes */
- u8 prefetch_wrdisable:1;
- u8 rsvd1:1; /* Reserved */
- /*
- * Select counter and read value from register
- * iommu_stat.dbg_counters
- * 0 - Number of 4K page translation response
- * 1 - Number of 2M page translation response
- * 2 - Number of 1G page translation response
- */
- u8 counter_sel:2;
- u32 rsvd2:26; /* Reserved */
- /* Connected to IOMMU SIP Capabilities */
- u32 capecap_defeature;
- };
- };
-};
-
-struct feature_fme_iommu_stat {
- union {
- u64 csr;
- struct {
- /* Translation Enable bit from IOMMU SIP */
- u8 translation_enable:1;
- /* Drain request in progress */
- u8 drain_req_inprog:1;
- /* Invalidation current state */
- u8 inv_state:3;
- /* C0 Response Buffer current state */
- u8 respbuffer_stateC0:3;
- /* C1 Response Buffer current state */
- u8 respbuffer_stateC1:3;
- /* Last request ID to IOMMU SIP */
- u8 last_reqID:4;
- /* Last IOMMU SIP response ID value */
- u8 last_respID:4;
- /* Last IOMMU SIP response status value */
- u8 last_respstatus:3;
- /* C0 Transaction Buffer is not empty */
- u8 transbuf_notEmptyC0:1;
- /* C1 Transaction Buffer is not empty */
- u8 transbuf_notEmptyC1:1;
- /* C0 Request FIFO is not empty */
- u8 reqFIFO_notemptyC0:1;
- /* C1 Request FIFO is not empty */
- u8 reqFIFO_notemptyC1:1;
- /* C0 Response FIFO is not empty */
- u8 respFIFO_notemptyC0:1;
- /* C1 Response FIFO is not empty */
- u8 respFIFO_notemptyC1:1;
- /* C0 Response FIFO overflow detected */
- u8 respFIFO_overflowC0:1;
- /* C1 Response FIFO overflow detected */
- u8 respFIFO_overflowC1:1;
- /* C0 Transaction Buffer overflow detected */
- u8 tranbuf_overflowC0:1;
- /* C1 Transaction Buffer overflow detected */
- u8 tranbuf_overflowC1:1;
- /* Request FIFO overflow detected */
- u8 reqFIFO_overflow:1;
- /* IOMMU memory read in progress */
- u8 memrd_inprog:1;
- /* IOMMU memory write in progress */
- u8 memwr_inprog:1;
- u8 rsvd1:1; /* Reserved */
- /* Value of counter selected by iommu_ctl.counter_sel */
- u16 dbg_counters:16;
- u16 rsvd2:12; /* Reserved */
- };
- };
-};
-
-struct feature_fme_pcie0_ctrl {
- union {
- u64 csr;
- struct {
- u64 vtd_bar_lock:1; /* Lock VT-D BAR register */
- u64 rsvd1:3;
- u64 rciep:1; /* Configure PCIE0 as RCiEP */
- u64 rsvd2:59;
- };
- };
-};
-
-struct feature_fme_llpr_smrr_base {
- union {
- u64 csr;
- struct {
- u64 rsvd1:12;
- u64 base:20; /* SMRR2 memory range base address */
- u64 rsvd2:32;
- };
- };
-};
-
-struct feature_fme_llpr_smrr_mask {
- union {
- u64 csr;
- struct {
- u64 rsvd1:11;
- u64 valid:1; /* LLPR_SMRR rule is valid or not */
- /*
- * SMRR memory range mask which determines the range
- * of region being mapped
- */
- u64 phys_mask:20;
- u64 rsvd2:32;
- };
- };
-};
-
-struct feature_fme_llpr_smrr2_base {
- union {
- u64 csr;
- struct {
- u64 rsvd1:12;
- u64 base:20; /* SMRR2 memory range base address */
- u64 rsvd2:32;
- };
- };
-};
-
-struct feature_fme_llpr_smrr2_mask {
- union {
- u64 csr;
- struct {
- u64 rsvd1:11;
- u64 valid:1; /* LLPR_SMRR2 rule is valid or not */
- /*
- * SMRR2 memory range mask which determines the range
- * of region being mapped
- */
- u64 phys_mask:20;
- u64 rsvd2:32;
- };
- };
-};
-
-struct feature_fme_llpr_meseg_base {
- union {
- u64 csr;
- struct {
- /* A[45:19] of base address memory range */
- u64 me_base:27;
- u64 rsvd:37;
- };
- };
-};
-
-struct feature_fme_llpr_meseg_limit {
- union {
- u64 csr;
- struct {
- /* A[45:19] of limit address memory range */
- u64 me_limit:27;
- u64 rsvd1:4;
- u64 enable:1; /* Enable LLPR MESEG rule */
- u64 rsvd2:32;
- };
- };
-};
-
-struct feature_fme_header {
- struct feature_header header;
- struct feature_afu_header afu_header;
- u64 reserved;
- u64 scratchpad;
- struct feature_fme_capability capability;
- struct feature_fme_port port[MAX_FPGA_PORT_NUM];
- struct feature_fme_fab_status fab_status;
- struct feature_fme_bitstream_id bitstream_id;
- struct feature_fme_bitstream_md bitstream_md;
- struct feature_fme_genprotrange2_base genprotrange2_base;
- struct feature_fme_genprotrange2_limit genprotrange2_limit;
- struct feature_fme_dxe_lock dxe_lock;
- struct feature_fme_iommu_ctrl iommu_ctrl;
- struct feature_fme_iommu_stat iommu_stat;
- struct feature_fme_pcie0_ctrl pcie0_control;
- struct feature_fme_llpr_smrr_base smrr_base;
- struct feature_fme_llpr_smrr_mask smrr_mask;
- struct feature_fme_llpr_smrr2_base smrr2_base;
- struct feature_fme_llpr_smrr2_mask smrr2_mask;
- struct feature_fme_llpr_meseg_base meseg_base;
- struct feature_fme_llpr_meseg_limit meseg_limit;
-};
-
-struct feature_port_capability {
- union {
- u64 csr;
- struct {
- u8 port_number:2; /* Port Number 0-3 */
- u8 rsvd1:6; /* Reserved */
- u16 mmio_size; /* User MMIO size in KB */
- u8 rsvd2; /* Reserved */
- u8 sp_intr_num:4; /* Supported interrupts num */
- u32 rsvd3:28; /* Reserved */
- };
- };
-};
-
-struct feature_port_control {
- union {
- u64 csr;
- struct {
- u8 port_sftrst:1; /* Port Soft Reset */
- u8 rsvd1:1; /* Reserved */
- u8 latency_tolerance:1;/* '1' >= 40us, '0' < 40us */
- u8 rsvd2:1; /* Reserved */
- u8 port_sftrst_ack:1; /* HW ACK for Soft Reset */
- u64 rsvd3:59; /* Reserved */
- };
- };
-};
-
-#define PORT_POWER_STATE_NORMAL 0
-#define PORT_POWER_STATE_AP1 1
-#define PORT_POWER_STATE_AP2 2
-#define PORT_POWER_STATE_AP6 6
-
-struct feature_port_status {
- union {
- u64 csr;
- struct {
- u8 port_freeze:1; /* '1' - frozen, '0' - normal */
- u8 rsvd1:7; /* Reserved */
- u8 power_state:4; /* Power State */
- u8 ap1_event:1; /* AP1 event was detected */
- u8 ap2_event:1; /* AP2 event was detected */
- u64 rsvd2:50; /* Reserved */
- };
- };
-};
-
-/* Port Header Register Set */
-struct feature_port_header {
- struct feature_header header;
- struct feature_afu_header afu_header;
- u64 port_mailbox;
- u64 scratchpad;
- struct feature_port_capability capability;
- struct feature_port_control control;
- struct feature_port_status status;
- u64 rsvd2;
- u64 user_clk_freq_cmd0;
- u64 user_clk_freq_cmd1;
- u64 user_clk_freq_sts0;
- u64 user_clk_freq_sts1;
-};
-
-struct feature_fme_tmp_threshold {
- union {
- u64 csr;
- struct {
- u8 tmp_thshold1:7; /* temperature Threshold 1 */
- /* temperature Threshold 1 enable/disable */
- u8 tmp_thshold1_enable:1;
- u8 tmp_thshold2:7; /* temperature Threshold 2 */
- /* temperature Threshold 2 enable /disable */
- u8 tmp_thshold2_enable:1;
- u8 pro_hot_setpoint:7; /* Proc Hot set point */
- u8 rsvd4:1; /* Reserved */
- u8 therm_trip_thshold:7; /* Thermal Trip Threshold */
- u8 rsvd3:1; /* Reserved */
- u8 thshold1_status:1; /* Threshold 1 Status */
- u8 thshold2_status:1; /* Threshold 2 Status */
- u8 rsvd5:1; /* Reserved */
- /* Thermal Trip Threshold status */
- u8 therm_trip_thshold_status:1;
- u8 rsvd6:4; /* Reserved */
- /* Validation mode- Force Proc Hot */
- u8 valmodeforce:1;
- /* Validation mode - Therm trip Hot */
- u8 valmodetherm:1;
- u8 rsvd2:2; /* Reserved */
- u8 thshold_policy:1; /* threshold policy */
- u32 rsvd:19; /* Reserved */
- };
- };
-};
-
-/* Temperature Sensor Read values format 1 */
-struct feature_fme_temp_rdsensor_fmt1 {
- union {
- u64 csr;
- struct {
- /* Reads out FPGA temperature in Celsius */
- u8 fpga_temp:7;
- u8 rsvd0:1; /* Reserved */
- /* Temperature reading sequence number */
- u16 tmp_reading_seq_num;
- /* Temperature reading is valid */
- u8 tmp_reading_valid:1;
- u8 rsvd1:7; /* Reserved */
- u16 dbg_mode:10; /* Debug mode */
- u32 rsvd2:22; /* Reserved */
- };
- };
-};
-
-/* Temperature sensor read values format 2 */
-struct feature_fme_temp_rdsensor_fmt2 {
- u64 rsvd; /* Reserved */
-};
-
-/* Temperature Threshold Capability Register */
-struct feature_fme_tmp_threshold_cap {
- union {
- u64 csr;
- struct {
- /* Temperature Threshold Unsupported */
- u8 tmp_thshold_disabled:1;
- u64 rsvd:63; /* Reserved */
- };
- };
-};
-
-/* FME THERMAL FEATURE */
-struct feature_fme_thermal {
- struct feature_header header;
- struct feature_fme_tmp_threshold threshold;
- struct feature_fme_temp_rdsensor_fmt1 rdsensor_fm1;
- struct feature_fme_temp_rdsensor_fmt2 rdsensor_fm2;
- struct feature_fme_tmp_threshold_cap threshold_cap;
-};
-
-/* Power Status register */
-struct feature_fme_pm_status {
- union {
- u64 csr;
- struct {
- /* FPGA Power consumed; the format is to be defined */
- u32 pwr_consumed:18;
- /* FPGA Latency Tolerance Reporting */
- u8 fpga_latency_report:1;
- u64 rsvd:45; /* Reserved */
- };
- };
-};
-
-/* AP Thresholds */
-struct feature_fme_pm_ap_threshold {
- union {
- u64 csr;
- struct {
- /*
- * Number of clocks (5ns period) for assertion
- * of FME_data
- */
- u8 threshold1:7;
- u8 rsvd1:1;
- u8 threshold2:7;
- u8 rsvd2:1;
- u8 threshold1_status:1;
- u8 threshold2_status:1;
- u64 rsvd3:46; /* Reserved */
- };
- };
-};
-
-/* Xeon Power Limit */
-struct feature_fme_pm_xeon_limit {
- union {
- u64 csr;
- struct {
- /* Power limit in Watts in 12.3 format */
- u16 pwr_limit:15;
- /* Indicates that power limit has been written */
- u8 enable:1;
- /* 0 - Turbo range, 1 - Entire range */
- u8 clamping:1;
- /* Time constant in XXYYY format */
- u8 time:7;
- u64 rsvd:40; /* Reserved */
- };
- };
-};
-
-/* FPGA Power Limit */
-struct feature_fme_pm_fpga_limit {
- union {
- u64 csr;
- struct {
- /* Power limit in Watts in 12.3 format */
- u16 pwr_limit:15;
- /* Indicates that power limit has been written */
- u8 enable:1;
- /* 0 - Turbo range, 1 - Entire range */
- u8 clamping:1;
- /* Time constant in XXYYY format */
- u8 time:7;
- u64 rsvd:40; /* Reserved */
- };
- };
-};
-
-/* FME POWER FEATURE */
-struct feature_fme_power {
- struct feature_header header;
- struct feature_fme_pm_status status;
- struct feature_fme_pm_ap_threshold threshold;
- struct feature_fme_pm_xeon_limit xeon_limit;
- struct feature_fme_pm_fpga_limit fpga_limit;
-};
-
-#define CACHE_CHANNEL_RD 0
-#define CACHE_CHANNEL_WR 1
-
-enum iperf_cache_events {
- IPERF_CACHE_RD_HIT,
- IPERF_CACHE_WR_HIT,
- IPERF_CACHE_RD_MISS,
- IPERF_CACHE_WR_MISS,
- IPERF_CACHE_RSVD, /* reserved */
- IPERF_CACHE_HOLD_REQ,
- IPERF_CACHE_DATA_WR_PORT_CONTEN,
- IPERF_CACHE_TAG_WR_PORT_CONTEN,
- IPERF_CACHE_TX_REQ_STALL,
- IPERF_CACHE_RX_REQ_STALL,
- IPERF_CACHE_EVICTIONS,
-};
-
-/* FPMON Cache Control */
-struct feature_fme_ifpmon_ch_ctl {
- union {
- u64 csr;
- struct {
- u8 reset_counters:1; /* Reset Counters */
- u8 rsvd1:7; /* Reserved */
- u8 freeze:1; /* Freeze if set to 1 */
- u8 rsvd2:7; /* Reserved */
- u8 cache_event:4; /* Select the cache event */
- u8 cci_chsel:1; /* Select the channel */
- u64 rsvd3:43; /* Reserved */
- };
- };
-};
-
-/* FPMON Cache Counter */
-struct feature_fme_ifpmon_ch_ctr {
- union {
- u64 csr;
- struct {
- /* Cache Counter for even addresses */
- u64 cache_counter:48;
- u16 rsvd:12; /* Reserved */
- /* Cache Event being reported */
- u8 event_code:4;
- };
- };
-};
-
-enum iperf_fab_events {
- IPERF_FAB_PCIE0_RD,
- IPERF_FAB_PCIE0_WR,
- IPERF_FAB_PCIE1_RD,
- IPERF_FAB_PCIE1_WR,
- IPERF_FAB_UPI_RD,
- IPERF_FAB_UPI_WR,
- IPERF_FAB_MMIO_RD,
- IPERF_FAB_MMIO_WR,
-};
-
-#define FAB_DISABLE_FILTER 0
-#define FAB_ENABLE_FILTER 1
-
-/* FPMON FAB Control */
-struct feature_fme_ifpmon_fab_ctl {
- union {
- u64 csr;
- struct {
- u8 reset_counters:1; /* Reset Counters */
- u8 rsvd:7; /* Reserved */
- u8 freeze:1; /* Set to 1 to freeze the counter */
- u8 rsvd1:7; /* Reserved */
- u8 fab_evtcode:4; /* Fabric Event Code */
- u8 port_id:2; /* Port ID */
- u8 rsvd2:1; /* Reserved */
- u8 port_filter:1; /* Port Filter */
- u64 rsvd3:40; /* Reserved */
- };
- };
-};
-
-/* FPMON Event Counter */
-struct feature_fme_ifpmon_fab_ctr {
- union {
- u64 csr;
- struct {
- u64 fab_cnt:60; /* Fabric event counter */
- /* Fabric event code being reported */
- u8 event_code:4;
- };
- };
-};
-
-/* FPMON Clock Counter */
-struct feature_fme_ifpmon_clk_ctr {
- u64 afu_interf_clock; /* Clk_16UI (AFU clock) counter. */
-};
-
-enum iperf_vtd_events {
- IPERF_VTD_AFU_MEM_RD_TRANS,
- IPERF_VTD_AFU_MEM_WR_TRANS,
- IPERF_VTD_AFU_DEVTLB_RD_HIT,
- IPERF_VTD_AFU_DEVTLB_WR_HIT,
- IPERF_VTD_DEVTLB_4K_FILL,
- IPERF_VTD_DEVTLB_2M_FILL,
- IPERF_VTD_DEVTLB_1G_FILL,
-};
-
-/* VT-d control register */
-struct feature_fme_ifpmon_vtd_ctl {
- union {
- u64 csr;
- struct {
- u8 reset_counters:1; /* Reset Counters */
- u8 rsvd:7; /* Reserved */
- u8 freeze:1; /* Set to 1 to freeze the counter */
- u8 rsvd1:7; /* Reserved */
- u8 vtd_evtcode:4; /* VTd and TLB event code */
- u64 rsvd2:44; /* Reserved */
- };
- };
-};
-
-/* VT-d event counter */
-struct feature_fme_ifpmon_vtd_ctr {
- union {
- u64 csr;
- struct {
- u64 vtd_counter:48; /* VTd event counter */
- u16 rsvd:12; /* Reserved */
- u8 event_code:4; /* VTd event code */
- };
- };
-};
-
-enum iperf_vtd_sip_events {
- IPERF_VTD_SIP_IOTLB_4K_HIT,
- IPERF_VTD_SIP_IOTLB_2M_HIT,
- IPERF_VTD_SIP_IOTLB_1G_HIT,
- IPERF_VTD_SIP_SLPWC_L3_HIT,
- IPERF_VTD_SIP_SLPWC_L4_HIT,
- IPERF_VTD_SIP_RCC_HIT,
- IPERF_VTD_SIP_IOTLB_4K_MISS,
- IPERF_VTD_SIP_IOTLB_2M_MISS,
- IPERF_VTD_SIP_IOTLB_1G_MISS,
- IPERF_VTD_SIP_SLPWC_L3_MISS,
- IPERF_VTD_SIP_SLPWC_L4_MISS,
- IPERF_VTD_SIP_RCC_MISS,
-};
-
-/* VT-d SIP control register */
-struct feature_fme_ifpmon_vtd_sip_ctl {
- union {
- u64 csr;
- struct {
- u8 reset_counters:1; /* Reset Counters */
- u8 rsvd:7; /* Reserved */
- u8 freeze:1; /* Set to 1 to freeze the counter */
- u8 rsvd1:7; /* Reserved */
- u8 vtd_evtcode:4; /* VTd and TLB event code */
- u64 rsvd2:44; /* Reserved */
- };
- };
-};
-
-/* VT-d SIP event counter */
-struct feature_fme_ifpmon_vtd_sip_ctr {
- union {
- u64 csr;
- struct {
- u64 vtd_counter:48; /* VTd event counter */
- u16 rsvd:12; /* Reserved */
- u8 event_code:4; /* VTd event code */
- };
- };
-};
-
-/* FME IPERF FEATURE */
-struct feature_fme_iperf {
- struct feature_header header;
- struct feature_fme_ifpmon_ch_ctl ch_ctl;
- struct feature_fme_ifpmon_ch_ctr ch_ctr0;
- struct feature_fme_ifpmon_ch_ctr ch_ctr1;
- struct feature_fme_ifpmon_fab_ctl fab_ctl;
- struct feature_fme_ifpmon_fab_ctr fab_ctr;
- struct feature_fme_ifpmon_clk_ctr clk;
- struct feature_fme_ifpmon_vtd_ctl vtd_ctl;
- struct feature_fme_ifpmon_vtd_ctr vtd_ctr;
- struct feature_fme_ifpmon_vtd_sip_ctl vtd_sip_ctl;
- struct feature_fme_ifpmon_vtd_sip_ctr vtd_sip_ctr;
-};
-
-enum dperf_fab_events {
- DPERF_FAB_PCIE0_RD,
- DPERF_FAB_PCIE0_WR,
- DPERF_FAB_MMIO_RD = 6,
- DPERF_FAB_MMIO_WR,
-};
-
-/* FPMON FAB Control */
-struct feature_fme_dfpmon_fab_ctl {
- union {
- u64 csr;
- struct {
- u8 reset_counters:1; /* Reset Counters */
- u8 rsvd:7; /* Reserved */
- u8 freeze:1; /* Set to 1 to freeze the counter */
- u8 rsvd1:7; /* Reserved */
- u8 fab_evtcode:4; /* Fabric Event Code */
- u8 port_id:2; /* Port ID */
- u8 rsvd2:1; /* Reserved */
- u8 port_filter:1; /* Port Filter */
- u64 rsvd3:40; /* Reserved */
- };
- };
-};
-
-/* FPMON Event Counter */
-struct feature_fme_dfpmon_fab_ctr {
- union {
- u64 csr;
- struct {
- u64 fab_cnt:60; /* Fabric event counter */
- /* Fabric event code being reported */
- u8 event_code:4;
- };
- };
-};
-
-/* FPMON Clock Counter */
-struct feature_fme_dfpmon_clk_ctr {
- u64 afu_interf_clock; /* Clk_16UI (AFU clock) counter. */
-};
-
-/* FME DPERF FEATURE */
-struct feature_fme_dperf {
- struct feature_header header;
- u64 rsvd[3];
- struct feature_fme_dfpmon_fab_ctl fab_ctl;
- struct feature_fme_dfpmon_fab_ctr fab_ctr;
- struct feature_fme_dfpmon_clk_ctr clk;
-};
-
-struct feature_fme_error0 {
-#define FME_ERROR0_MASK 0xFFUL
-#define FME_ERROR0_MASK_DEFAULT 0x40UL /* pcode workaround */
- union {
- u64 csr;
- struct {
- u8 fabric_err:1; /* Fabric error */
- u8 fabfifo_overflow:1; /* Fabric fifo overflow */
- u8 kticdc_parity_err:2;/* KTI CDC Parity Error */
- u8 iommu_parity_err:1; /* IOMMU Parity error */
- /* AFU PF/VF access mismatch detected */
- u8 afu_acc_mode_err:1;
- u8 mbp_err:1; /* Indicates an MBP event */
- /* PCIE0 CDC Parity Error */
- u8 pcie0cdc_parity_err:5;
- /* PCIE1 CDC Parity Error */
- u8 pcie1cdc_parity_err:5;
- /* CVL CDC Parity Error */
- u8 cvlcdc_parity_err:3;
- u64 rsvd:44; /* Reserved */
- };
- };
-};
-
-/* PCIe0 Error Status register */
-struct feature_fme_pcie0_error {
-#define FME_PCIE0_ERROR_MASK 0xFFUL
- union {
- u64 csr;
- struct {
- u8 formattype_err:1; /* TLP format/type error */
- u8 MWAddr_err:1; /* TLP MW address error */
- u8 MWAddrLength_err:1; /* TLP MW length error */
- u8 MRAddr_err:1; /* TLP MR address error */
- u8 MRAddrLength_err:1; /* TLP MR length error */
- u8 cpl_tag_err:1; /* TLP CPL tag error */
- u8 cpl_status_err:1; /* TLP CPL status error */
- u8 cpl_timeout_err:1; /* TLP CPL timeout */
- u8 cci_parity_err:1; /* CCI bridge parity error */
- u8 rxpoison_tlp_err:1; /* Received a TLP with EP set */
- u64 rsvd:52; /* Reserved */
- u8 vfnumb_err:1; /* Number of error VF */
- u8 funct_type_err:1; /* Virtual (1) or Physical */
- };
- };
-};
-
-/* PCIe1 Error Status register */
-struct feature_fme_pcie1_error {
-#define FME_PCIE1_ERROR_MASK 0xFFUL
- union {
- u64 csr;
- struct {
- u8 formattype_err:1; /* TLP format/type error */
- u8 MWAddr_err:1; /* TLP MW address error */
- u8 MWAddrLength_err:1; /* TLP MW length error */
- u8 MRAddr_err:1; /* TLP MR address error */
- u8 MRAddrLength_err:1; /* TLP MR length error */
- u8 cpl_tag_err:1; /* TLP CPL tag error */
- u8 cpl_status_err:1; /* TLP CPL status error */
- u8 cpl_timeout_err:1; /* TLP CPL timeout */
- u8 cci_parity_err:1; /* CCI bridge parity error */
- u8 rxpoison_tlp_err:1; /* Received a TLP with EP set */
- u64 rsvd:54; /* Reserved */
- };
- };
-};
-
-/* FME First Error register */
-struct feature_fme_first_error {
-#define FME_FIRST_ERROR_MASK ((1ULL << 60) - 1)
- union {
- u64 csr;
- struct {
- /*
- * Indicates the Error Register that was
- * triggered first
- */
- u64 err_reg_status:60;
- /*
- * Holds 60 LSBs from the Error register that was
- * triggered first
- */
- u8 errReg_id:4;
- };
- };
-};
-
-/* FME Next Error register */
-struct feature_fme_next_error {
-#define FME_NEXT_ERROR_MASK ((1ULL << 60) - 1)
- union {
- u64 csr;
- struct {
- /*
- * Indicates the Error Register that was
- * triggered second
- */
- u64 err_reg_status:60;
- /*
- * Holds 60 LSBs from the Error register that was
- * triggered second
- */
- u8 errReg_id:4;
- };
- };
-};
-
-/* RAS Non Fatal Error Status register */
-struct feature_fme_ras_nonfaterror {
- union {
- u64 csr;
- struct {
- /* thermal threshold AP1 */
- u8 temp_thresh_ap1:1;
- /* thermal threshold AP2 */
- u8 temp_thresh_ap2:1;
- u8 pcie_error:1; /* pcie Error */
- u8 portfatal_error:1; /* port fatal error */
- u8 proc_hot:1; /* Indicates a ProcHot event */
- /* Indicates an AFU PF/VF access mismatch */
- u8 afu_acc_mode_err:1;
- /* Injected nonfatal Error */
- u8 injected_nonfata_err:1;
- u8 rsvd1:2;
- /* Temperature threshold triggered AP6 */
- u8 temp_thresh_AP6:1;
- /* Power threshold triggered AP1 */
- u8 power_thresh_AP1:1;
- /* Power threshold triggered AP2 */
- u8 power_thresh_AP2:1;
- /* Indicates a MBP event */
- u8 mbp_err:1;
- u64 rsvd2:51; /* Reserved */
- };
- };
-};
-
-/* RAS Catastrophic Fatal Error Status register */
-struct feature_fme_ras_catfaterror {
- union {
- u64 csr;
- struct {
- /* KTI Link layer error detected */
- u8 ktilink_fatal_err:1;
- /* tag-n-cache error detected */
- u8 tagcch_fatal_err:1;
- /* CCI error detected */
- u8 cci_fatal_err:1;
- /* KTI Protocol error detected */
- u8 ktiprpto_fatal_err:1;
- /* Fatal DRAM error detected */
- u8 dram_fatal_err:1;
- /* IOMMU fatal error detected */
- u8 iommu_fatal_err:1;
- /* Fabric Fatal Error */
- u8 fabric_fatal_err:1;
- /* PCIe poison Error */
- u8 pcie_poison_err:1;
- /* Injected fatal Error */
- u8 inject_fata_err:1;
- /* Catastrophic CRC Error */
- u8 crc_catast_err:1;
- /* Catastrophic Thermal Error */
- u8 therm_catast_err:1;
- /* Injected Catastrophic Error */
- u8 injected_catast_err:1;
- u64 rsvd:52;
- };
- };
-};
-
-/* RAS Error injection register */
-struct feature_fme_ras_error_inj {
-#define FME_RAS_ERROR_INJ_MASK 0x7UL
- union {
- u64 csr;
- struct {
- u8 catast_error:1; /* Catastrophic error flag */
- u8 fatal_error:1; /* Fatal error flag */
- u8 nonfatal_error:1; /* NonFatal error flag */
- u64 rsvd:61; /* Reserved */
- };
- };
-};
-
-/* FME error capabilities */
-struct feature_fme_error_capability {
- union {
- u64 csr;
- struct {
- u8 support_intr:1;
- /* MSI-X vector table entry number */
- u16 intr_vector_num:12;
- u64 rsvd:51; /* Reserved */
- };
- };
-};
-
-/* FME ERR FEATURE */
-struct feature_fme_err {
- struct feature_header header;
- struct feature_fme_error0 fme_err_mask;
- struct feature_fme_error0 fme_err;
- struct feature_fme_pcie0_error pcie0_err_mask;
- struct feature_fme_pcie0_error pcie0_err;
- struct feature_fme_pcie1_error pcie1_err_mask;
- struct feature_fme_pcie1_error pcie1_err;
- struct feature_fme_first_error fme_first_err;
- struct feature_fme_next_error fme_next_err;
- struct feature_fme_ras_nonfaterror ras_nonfat_mask;
- struct feature_fme_ras_nonfaterror ras_nonfaterr;
- struct feature_fme_ras_catfaterror ras_catfat_mask;
- struct feature_fme_ras_catfaterror ras_catfaterr;
- struct feature_fme_ras_error_inj ras_error_inj;
- struct feature_fme_error_capability fme_err_capability;
-};
-
-/* FME Partial Reconfiguration Control */
-struct feature_fme_pr_ctl {
- union {
- u64 csr;
- struct {
- u8 pr_reset:1; /* Reset PR Engine */
- u8 rsvd3:3; /* Reserved */
- u8 pr_reset_ack:1; /* Reset PR Engine Ack */
- u8 rsvd4:3; /* Reserved */
- u8 pr_regionid:2; /* PR Region ID */
- u8 rsvd1:2; /* Reserved */
- u8 pr_start_req:1; /* PR Start Request */
- u8 pr_push_complete:1; /* PR Data push complete */
- u8 pr_kind:1; /* PR Kind */
- u32 rsvd:17; /* Reserved */
- u32 config_data; /* Config data TBD */
- };
- };
-};
-
-/* FME Partial Reconfiguration Status */
-struct feature_fme_pr_status {
- union {
- u64 csr;
- struct {
- u16 pr_credit:9; /* PR Credits */
- u8 rsvd2:7; /* Reserved */
- u8 pr_status:1; /* PR status */
- u8 rsvd:3; /* Reserved */
- /* Altera PR Controller Block status */
- u8 pr_controller_status:3;
- u8 rsvd1:1; /* Reserved */
- u8 pr_host_status:4; /* PR Host status */
- u8 rsvd3:4; /* Reserved */
- /* Security Block Status fields (TBD) */
- u32 security_bstatus;
- };
- };
-};
-
-/* FME Partial Reconfiguration Data */
-struct feature_fme_pr_data {
- union {
- u64 csr; /* PR data from the raw-binary file */
- struct {
- /* PR data from the raw-binary file */
- u32 pr_data_raw;
- u32 rsvd;
- };
- };
-};
-
-/* FME PR Public Key */
-struct feature_fme_pr_key {
- u64 key; /* FME PR Public Hash */
-};
-
-/* FME PR FEATURE */
-struct feature_fme_pr {
- struct feature_header header;
- /*Partial Reconfiguration control */
- struct feature_fme_pr_ctl ccip_fme_pr_control;
-
- /* Partial Reconfiguration Status */
- struct feature_fme_pr_status ccip_fme_pr_status;
-
- /* Partial Reconfiguration data */
- struct feature_fme_pr_data ccip_fme_pr_data;
-
- /* Partial Reconfiguration data */
- u64 ccip_fme_pr_err;
-
- u64 rsvd1[3];
-
- /* Partial Reconfiguration data registers */
- u64 fme_pr_data1;
- u64 fme_pr_data2;
- u64 fme_pr_data3;
- u64 fme_pr_data4;
- u64 fme_pr_data5;
- u64 fme_pr_data6;
- u64 fme_pr_data7;
- u64 fme_pr_data8;
-
- u64 rsvd2[5];
-
- /* PR Interface ID */
- u64 fme_pr_intfc_id_l;
- u64 fme_pr_intfc_id_h;
-
- /* MSI-X field to be added */
-};
-
-/* FME HSSI Control */
-struct feature_fme_hssi_eth_ctrl {
- union {
- u64 csr;
- struct {
- u32 data:32; /* HSSI data */
- u16 address:16; /* HSSI address */
- /*
- * HSSI command
- * 0x0 - No request
- * 0x08 - SW register RD request
- * 0x10 - SW register WR request
- * 0x40 - Auxiliary bus RD request
- * 0x80 - Auxiliary bus WR request
- */
- u16 cmd:16;
- };
- };
-};
-
-/* FME HSSI Status */
-struct feature_fme_hssi_eth_stat {
- union {
- u64 csr;
- struct {
- u32 data:32; /* HSSI data */
- u8 acknowledge:1; /* HSSI acknowledge */
- u8 spare:1; /* HSSI spare */
- u32 rsvd:30; /* Reserved */
- };
- };
-};
-
-/* FME HSSI FEATURE */
-struct feature_fme_hssi {
- struct feature_header header;
- struct feature_fme_hssi_eth_ctrl hssi_control;
- struct feature_fme_hssi_eth_stat hssi_status;
-};
-
-#define PORT_ERR_MASK 0xfff0703ff001f
-struct feature_port_err_key {
- union {
- u64 csr;
- struct {
- /* Tx Channel0: Overflow */
- u8 tx_ch0_overflow:1;
- /* Tx Channel0: Invalid request encoding */
- u8 tx_ch0_invaldreq :1;
- /* Tx Channel0: Request with cl_len=3 not supported */
- u8 tx_ch0_cl_len3:1;
- /* Tx Channel0: Request with cl_len=2 not aligned 2CL */
- u8 tx_ch0_cl_len2:1;
- /* Tx Channel0: Request with cl_len=4 not aligned 4CL */
- u8 tx_ch0_cl_len4:1;
-
- u16 rsvd1:4; /* Reserved */
-
- /* AFU MMIO RD received while PORT is in reset */
- u8 mmio_rd_whilerst:1;
- /* AFU MMIO WR received while PORT is in reset */
- u8 mmio_wr_whilerst:1;
-
- u16 rsvd2:5; /* Reserved */
-
- /* Tx Channel1: Overflow */
- u8 tx_ch1_overflow:1;
- /* Tx Channel1: Invalid request encoding */
- u8 tx_ch1_invaldreq:1;
- /* Tx Channel1: Request with cl_len=3 not supported */
- u8 tx_ch1_cl_len3:1;
- /* Tx Channel1: Request with cl_len=2 not aligned 2CL */
- u8 tx_ch1_cl_len2:1;
- /* Tx Channel1: Request with cl_len=4 not aligned 4CL */
- u8 tx_ch1_cl_len4:1;
-
- /* Tx Channel1: Insufficient data payload */
- u8 tx_ch1_insuff_data:1;
- /* Tx Channel1: Data payload overrun */
- u8 tx_ch1_data_overrun:1;
- /* Tx Channel1 : Incorrect address */
- u8 tx_ch1_incorr_addr:1;
- /* Tx Channel1 : NON-Zero SOP Detected */
- u8 tx_ch1_nzsop:1;
- /* Tx Channel1 : Illegal VC_SEL, atomic request VLO */
- u8 tx_ch1_illegal_vcsel:1;
-
- u8 rsvd3:6; /* Reserved */
-
- /* MMIO Read Timeout in AFU */
- u8 mmioread_timeout:1;
-
- /* Tx Channel2: FIFO Overflow */
- u8 tx_ch2_fifo_overflow:1;
-
- /* MMIO read is not matching pending request */
- u8 unexp_mmio_resp:1;
-
- u8 rsvd4:5; /* Reserved */
-
- /* Number of pending Requests: counter overflow */
- u8 tx_req_counter_overflow:1;
- /* Req with Address violating SMM Range */
- u8 llpr_smrr_err:1;
- /* Req with Address violating second SMM Range */
- u8 llpr_smrr2_err:1;
- /* Req with Address violating ME Stolen message */
- u8 llpr_mesg_err:1;
- /* Req with Address violating Generic Protected Range */
- u8 genprot_range_err:1;
- /* Req with Address violating Legacy Range low */
- u8 legrange_low_err:1;
- /* Req with Address violating Legacy Range High */
- u8 legrange_high_err:1;
- /* Req with Address violating VGA memory range */
- u8 vgmem_range_err:1;
- u8 page_fault_err:1; /* Page fault */
- u8 pmr_err:1; /* PMR Error */
- u8 ap6_event:1; /* AP6 event */
- /* VF FLR detected on Port with PF access control */
- u8 vfflr_access_err:1;
- u16 rsvd5:12; /* Reserved */
- };
- };
-};
-
-/*
- * Port first error register; it does not contain all error bits of the
- * error register.
- */
-struct feature_port_first_err_key {
- union {
- u64 csr;
- struct {
- u8 tx_ch0_overflow:1;
- u8 tx_ch0_invaldreq :1;
- u8 tx_ch0_cl_len3:1;
- u8 tx_ch0_cl_len2:1;
- u8 tx_ch0_cl_len4:1;
- u8 rsvd1:4; /* Reserved */
- u8 mmio_rd_whilerst:1;
- u8 mmio_wr_whilerst:1;
- u8 rsvd2:5; /* Reserved */
- u8 tx_ch1_overflow:1;
- u8 tx_ch1_invaldreq:1;
- u8 tx_ch1_cl_len3:1;
- u8 tx_ch1_cl_len2:1;
- u8 tx_ch1_cl_len4:1;
- u8 tx_ch1_insuff_data:1;
- u8 tx_ch1_data_overrun:1;
- u8 tx_ch1_incorr_addr:1;
- u8 tx_ch1_nzsop:1;
- u8 tx_ch1_illegal_vcsel:1;
- u8 rsvd3:6; /* Reserved */
- u8 mmioread_timeout:1;
- u8 tx_ch2_fifo_overflow:1;
- u8 rsvd4:6; /* Reserved */
- u8 tx_req_counter_overflow:1;
- u32 rsvd5:23; /* Reserved */
- };
- };
-};
-
-/* Port malformed Req0 */
-struct feature_port_malformed_req0 {
- u64 header_lsb;
-};
-
-/* Port malformed Req1 */
-struct feature_port_malformed_req1 {
- u64 header_msb;
-};
-
-/* Port debug register */
-struct feature_port_debug {
- u64 port_debug;
-};
-
-/* Port error capabilities */
-struct feature_port_err_capability {
- union {
- u64 csr;
- struct {
- u8 support_intr:1;
- /* MSI-X vector table entry number */
- u16 intr_vector_num:12;
- u64 rsvd:51; /* Reserved */
- };
- };
-};
-
-/* PORT FEATURE ERROR */
-struct feature_port_error {
- struct feature_header header;
- struct feature_port_err_key error_mask;
- struct feature_port_err_key port_error;
- struct feature_port_first_err_key port_first_error;
- struct feature_port_malformed_req0 malreq0;
- struct feature_port_malformed_req1 malreq1;
- struct feature_port_debug port_debug;
- struct feature_port_err_capability error_capability;
-};
-
-/* Port UMSG Capability */
-struct feature_port_umsg_cap {
- union {
- u64 csr;
- struct {
- /* Number of umsg allocated to this port */
- u8 umsg_allocated;
- /* Enable / Disable UMsg engine for this port */
- u8 umsg_enable:1;
- /* UMsg initialization status */
- u8 umsg_init_complete:1;
- /* IOMMU cannot translate the umsg base address */
- u8 umsg_trans_error:1;
- u64 rsvd:53; /* Reserved */
- };
- };
-};
-
-/* Port UMSG base address */
-struct feature_port_umsg_baseaddr {
- union {
- u64 csr;
- struct {
- u64 base_addr:48; /* 48 bit physical address */
- u16 rsvd; /* Reserved */
- };
- };
-};
-
-struct feature_port_umsg_mode {
- union {
- u64 csr;
- struct {
- u32 umsg_hint_enable; /* UMSG hint enable/disable */
- u32 rsvd; /* Reserved */
- };
- };
-};
-
-/* PORT FEATURE UMSG */
-struct feature_port_umsg {
- struct feature_header header;
- struct feature_port_umsg_cap capability;
- struct feature_port_umsg_baseaddr baseaddr;
- struct feature_port_umsg_mode mode;
-};
-
-#define UMSG_EN_POLL_INVL 10 /* us */
-#define UMSG_EN_POLL_TIMEOUT 1000 /* us */
-
-/* Port UINT Capability */
-struct feature_port_uint_cap {
- union {
- u64 csr;
- struct {
- u16 intr_num:12; /* Supported interrupts num */
- /* First MSI-X vector table entry number */
- u16 first_vec_num:12;
- u64 rsvd:40;
- };
- };
-};
-
-/* PORT FEATURE UINT */
-struct feature_port_uint {
- struct feature_header header;
- struct feature_port_uint_cap capability;
-};
-
-/* STP region supports mmap operation, so use page aligned size. */
-#define PORT_FEATURE_STP_REGION_SIZE \
- IFPGA_PAGE_ALIGN(sizeof(struct feature_port_stp))
-
-/* Port STP status register (for debug only) */
-struct feature_port_stp_status {
- union {
- u64 csr;
- struct {
- /* SLD Hub end-point read/write timeout */
- u8 sld_ep_timeout:1;
- /* Remote STP in reset/disable */
- u8 rstp_disabled:1;
- u8 unsupported_read:1;
- /* MMIO timeout detected and faked with a response */
- u8 mmio_timeout:1;
- u8 txfifo_count:4;
- u8 rxfifo_count:4;
- u8 txfifo_overflow:1;
- u8 txfifo_underflow:1;
- u8 rxfifo_overflow:1;
- u8 rxfifo_underflow:1;
- /* Number of MMIO write requests */
- u16 write_requests;
- /* Number of MMIO read requests */
- u16 read_requests;
- /* Number of MMIO read responses */
- u16 read_responses;
- };
- };
-};
-
-/*
- * PORT FEATURE STP
- * Most registers in the STP region are not touched by the driver but are
- * mmapped to user space, so they are not defined in the data structure
- * below; the actual size of the region is 0x18c per the spec.
- */
-struct feature_port_stp {
- struct feature_header header;
- struct feature_port_stp_status stp_status;
-};
-
-/**
- * enum fpga_pr_states - fpga PR states
- * @FPGA_PR_STATE_UNKNOWN: can't determine state
- * @FPGA_PR_STATE_WRITE_INIT: preparing FPGA for programming
- * @FPGA_PR_STATE_WRITE_INIT_ERR: Error during WRITE_INIT stage
- * @FPGA_PR_STATE_WRITE: writing image to FPGA
- * @FPGA_PR_STATE_WRITE_ERR: Error while writing FPGA
- * @FPGA_PR_STATE_WRITE_COMPLETE: Doing post programming steps
- * @FPGA_PR_STATE_WRITE_COMPLETE_ERR: Error during WRITE_COMPLETE
- * @FPGA_PR_STATE_DONE: FPGA PR done
- */
-enum fpga_pr_states {
- /* cannot determine state */
- FPGA_PR_STATE_UNKNOWN,
-
- /* write sequence: init, write, complete */
- FPGA_PR_STATE_WRITE_INIT,
- FPGA_PR_STATE_WRITE_INIT_ERR,
- FPGA_PR_STATE_WRITE,
- FPGA_PR_STATE_WRITE_ERR,
- FPGA_PR_STATE_WRITE_COMPLETE,
- FPGA_PR_STATE_WRITE_COMPLETE_ERR,
-
- /* FPGA PR done */
- FPGA_PR_STATE_DONE,
-};
-
-/*
- * FPGA Manager flags
- * FPGA_MGR_PARTIAL_RECONFIG: do partial reconfiguration if supported
- */
-#define FPGA_MGR_PARTIAL_RECONFIG BIT(0)
-
-/**
- * struct fpga_pr_info - specific information to a FPGA PR
- * @flags: boolean flags as defined above
- * @pr_err: PR error code
- * @state: fpga manager state
- * @port_id: port id
- */
-struct fpga_pr_info {
- u32 flags;
- u64 pr_err;
- enum fpga_pr_states state;
- int port_id;
-};
-
-#define DEFINE_FPGA_PR_ERR_MSG(_name_) \
-static const char * const _name_[] = { \
- "PR operation error detected", \
- "PR CRC error detected", \
- "PR incompatiable bitstream error detected", \
- "PR IP protocol error detected", \
- "PR FIFO overflow error detected", \
- "PR timeout error detected", \
- "PR secure load error detected", \
-}
-
-#define RST_POLL_INVL 10 /* us */
-#define RST_POLL_TIMEOUT 1000 /* us */
-
-#define PR_WAIT_TIMEOUT 15000000
-
-#define PR_HOST_STATUS_IDLE 0
-#define PR_MAX_ERR_NUM 7
-
-DEFINE_FPGA_PR_ERR_MSG(pr_err_msg);
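-
-/*
- * Illustrative decode of a raw PR error word against the table above (a
- * sketch only; each of the low PR_MAX_ERR_NUM bits is assumed to map to one
- * message):
- *
- *   for (i = 0; i < PR_MAX_ERR_NUM; i++)
- *           if (pr_err & (1ULL << i))
- *                   printf("%s\n", pr_err_msg[i]);
- */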
-
-/*
- * green bitstream header must be byte-packed to match the
- * real file format.
- */
-struct bts_header {
- u64 guid_h;
- u64 guid_l;
- u32 metadata_len;
-};
-
-#define GBS_GUID_H 0x414750466e6f6558
-#define GBS_GUID_L 0x31303076534247b7
-#define is_valid_bts(bts_hdr) \
- (((bts_hdr)->guid_h == GBS_GUID_H) && \
- ((bts_hdr)->guid_l == GBS_GUID_L))
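-
-/*
- * Example check on a raw green bitstream buffer (a sketch only):
- *
- *   struct bts_header *bts = (struct bts_header *)buffer;
- *
- *   if (!is_valid_bts(bts))
- *           return -EINVAL;
- */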
-
-/* bitstream id definition */
-struct fme_bitstream_id {
- union {
- u64 id;
- struct {
- u64 hash:32;
- u64 interface:4;
- u64 reserved:12;
- u64 debug:4;
- u64 patch:4;
- u64 minor:4;
- u64 major:4;
- };
- };
-};
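-
-/*
- * Illustrative decode of a raw 64-bit bitstream id (a sketch only):
- *
- *   struct fme_bitstream_id bid = { .id = raw };
- *
- *   printf("bitstream version %u.%u.%u\n",
- *          (u32)bid.major, (u32)bid.minor, (u32)bid.patch);
- */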
-
-enum board_interface {
- VC_8_10G = 0,
- VC_4_25G = 1,
- VC_2_1_25 = 2,
- VC_4_25G_2_25G = 3,
- VC_2_2_25G = 4,
-};
-
-struct ifpga_fme_board_info {
- enum board_interface type;
- u32 build_hash;
- u32 debug_version;
- u32 patch_version;
- u32 minor_version;
- u32 major_version;
- u32 nums_of_retimer;
- u32 ports_per_retimer;
- u32 nums_of_fvl;
- u32 ports_per_fvl;
-};
-
-#pragma pack(pop)
-#endif /* _IFPGA_DEFINES_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "opae_hw_api.h"
-#include "ifpga_api.h"
-
-#include "ifpga_hw.h"
-#include "ifpga_enumerate.h"
-#include "ifpga_feature_dev.h"
-
-struct build_feature_devs_info {
- struct opae_adapter_data_pci *pci_data;
-
- struct ifpga_afu_info *acc_info;
-
- void *fiu;
- enum fpga_id_type current_type;
- int current_port_id;
-
- void *ioaddr;
- void *ioend;
- uint64_t phys_addr;
- int current_bar;
-
- void *pfme_hdr;
-
- struct ifpga_hw *hw;
-};
-
-static int feature_revision(void __iomem *start)
-{
- struct feature_header header;
-
- header.csr = readq(start);
-
- return header.revision;
-}
-
-static u32 feature_size(void __iomem *start)
-{
- struct feature_header header;
-
- header.csr = readq(start);
-
- /* the size of a private feature is 4KB aligned */
- return header.next_header_offset ? header.next_header_offset : 4096;
-}
-
-static u64 feature_id(void __iomem *start)
-{
- struct feature_header header;
-
- header.csr = readq(start);
-
- switch (header.type) {
- case FEATURE_TYPE_FIU:
- return FEATURE_ID_FIU_HEADER;
- case FEATURE_TYPE_PRIVATE:
- return header.id;
- case FEATURE_TYPE_AFU:
- return FEATURE_ID_AFU;
- }
-
- WARN_ON(1);
- return 0;
-}
-
-static int
-build_info_add_sub_feature(struct build_feature_devs_info *binfo,
- void __iomem *start, u64 fid, unsigned int size,
- unsigned int vec_start,
- unsigned int vec_cnt)
-{
- struct ifpga_hw *hw = binfo->hw;
- struct ifpga_feature *feature = NULL;
- struct feature_irq_ctx *ctx = NULL;
- int port_id, ret = 0;
- unsigned int i;
-
- fid = fid ? fid : feature_id(start);
- size = size ? size : feature_size(start);
-
- feature = opae_malloc(sizeof(struct ifpga_feature));
- if (!feature)
- return -ENOMEM;
-
- feature->state = IFPGA_FEATURE_ATTACHED;
- feature->addr = start;
- feature->id = fid;
- feature->size = size;
- feature->revision = feature_revision(start);
- feature->phys_addr = binfo->phys_addr +
- ((u8 *)start - (u8 *)binfo->ioaddr);
- feature->vec_start = vec_start;
- feature->vec_cnt = vec_cnt;
-
- dev_debug(binfo, "%s: id=0x%llx, phys_addr=0x%llx, size=%u\n",
- __func__, (unsigned long long)feature->id,
- (unsigned long long)feature->phys_addr, size);
-
- if (vec_cnt) {
- if (vec_start + vec_cnt <= vec_start)
- return -EINVAL;
-
- ctx = zmalloc(sizeof(*ctx) * vec_cnt);
- if (!ctx)
- return -ENOMEM;
-
- for (i = 0; i < vec_cnt; i++) {
- ctx[i].eventfd = -1;
- ctx[i].idx = vec_start + i;
- }
- }
-
- feature->ctx = ctx;
- feature->ctx_num = vec_cnt;
- feature->vfio_dev_fd = binfo->pci_data->vfio_dev_fd;
-
- if (binfo->current_type == FME_ID) {
- feature->parent = &hw->fme;
- feature->type = FEATURE_FME_TYPE;
- feature->name = get_fme_feature_name(fid);
- TAILQ_INSERT_TAIL(&hw->fme.feature_list, feature, next);
- } else if (binfo->current_type == PORT_ID) {
- port_id = binfo->current_port_id;
- feature->parent = &hw->port[port_id];
- feature->type = FEATURE_PORT_TYPE;
- feature->name = get_port_feature_name(fid);
- TAILQ_INSERT_TAIL(&hw->port[port_id].feature_list,
- feature, next);
- } else {
- return -EFAULT;
- }
- return ret;
-}
-
-static int
-create_feature_instance(struct build_feature_devs_info *binfo,
- void __iomem *start, u64 fid,
- unsigned int size, unsigned int vec_start,
- unsigned int vec_cnt)
-{
- return build_info_add_sub_feature(binfo, start, fid, size, vec_start,
- vec_cnt);
-}
-
-/*
- * The UAFU GUID is dynamic, as it can change after the FME downloads a
- * different Green Bitstream to the port, so we treat unknown GUIDs attached
- * to a port's feature list as UAFU.
- */
-static bool feature_is_UAFU(struct build_feature_devs_info *binfo)
-{
- if (binfo->current_type != PORT_ID)
- return false;
-
- return true;
-}
-
-static int parse_feature_port_uafu(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- u64 id = PORT_FEATURE_ID_UAFU;
- struct ifpga_afu_info *info;
- void *start = (void *)hdr;
- struct feature_port_header *port_hdr = binfo->ioaddr;
- struct feature_port_capability capability;
- int ret;
- int size;
-
- capability.csr = readq(&port_hdr->capability);
-
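-	/* mmio_size is reported in KB units; convert it to bytes */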
- size = capability.mmio_size << 10;
-
- ret = create_feature_instance(binfo, hdr, id, size, 0, 0);
- if (ret)
- return ret;
-
- info = opae_malloc(sizeof(*info));
- if (!info)
- return -ENOMEM;
-
- info->region[0].addr = start;
- info->region[0].phys_addr = binfo->phys_addr +
- (uint8_t *)start - (uint8_t *)binfo->ioaddr;
- info->region[0].len = size;
- info->num_regions = 1;
-
- binfo->acc_info = info;
-
- return ret;
-}
-
-static int parse_feature_afus(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- int ret;
- struct feature_afu_header *afu_hdr, header;
- u8 __iomem *start;
- u8 __iomem *end = binfo->ioend;
-
- start = (u8 __iomem *)hdr;
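-	/* walk the chain of AFU headers linked by the next_afu offset */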
- for (; start < end; start += header.next_afu) {
- if ((unsigned int)(end - start) <
- (unsigned int)(sizeof(*afu_hdr) + sizeof(*hdr)))
- return -EINVAL;
-
- hdr = (struct feature_header *)start;
- afu_hdr = (struct feature_afu_header *)(hdr + 1);
- header.csr = readq(&afu_hdr->csr);
-
- if (feature_is_UAFU(binfo)) {
- ret = parse_feature_port_uafu(binfo, hdr);
- if (ret)
- return ret;
- }
-
- if (!header.next_afu)
- break;
- }
-
- return 0;
-}
-
-/* create and register proper private data */
-static int build_info_commit_dev(struct build_feature_devs_info *binfo)
-{
- struct ifpga_afu_info *info = binfo->acc_info;
- struct ifpga_hw *hw = binfo->hw;
- struct opae_manager *mgr;
- struct opae_bridge *br;
- struct opae_accelerator *acc;
- struct ifpga_port_hw *port;
- struct ifpga_fme_hw *fme;
- struct ifpga_feature *feature;
-
- if (!binfo->fiu)
- return 0;
-
- if (binfo->current_type == PORT_ID) {
- /* return error if no valid acc info data structure */
- if (!info)
- return -EFAULT;
-
- br = opae_bridge_alloc(hw->adapter->name, &ifpga_br_ops,
- binfo->fiu);
- if (!br)
- return -ENOMEM;
-
- br->id = binfo->current_port_id;
-
- /* update irq info */
- port = &hw->port[binfo->current_port_id];
- feature = get_feature_by_id(&port->feature_list,
- PORT_FEATURE_ID_UINT);
- if (feature)
- info->num_irqs = feature->vec_cnt;
-
- acc = opae_accelerator_alloc(hw->adapter->name,
- &ifpga_acc_ops, info);
- if (!acc) {
- opae_bridge_free(br);
- return -ENOMEM;
- }
-
- acc->br = br;
- if (hw->adapter->mgr)
- acc->mgr = hw->adapter->mgr;
- acc->index = br->id;
-
- fme = &hw->fme;
- fme->nums_acc_region = info->num_regions;
-
- opae_adapter_add_acc(hw->adapter, acc);
-
- } else if (binfo->current_type == FME_ID) {
- mgr = opae_manager_alloc(hw->adapter->name, &ifpga_mgr_ops,
- &ifpga_mgr_network_ops, binfo->fiu);
- if (!mgr)
- return -ENOMEM;
-
- mgr->adapter = hw->adapter;
- hw->adapter->mgr = mgr;
- }
-
- binfo->fiu = NULL;
-
- return 0;
-}
-
-static int
-build_info_create_dev(struct build_feature_devs_info *binfo,
- enum fpga_id_type type, unsigned int index)
-{
- int ret;
-
- ret = build_info_commit_dev(binfo);
- if (ret)
- return ret;
-
- binfo->current_type = type;
-
- if (type == FME_ID) {
- binfo->fiu = &binfo->hw->fme;
- } else if (type == PORT_ID) {
- binfo->fiu = &binfo->hw->port[index];
- binfo->current_port_id = index;
- }
-
- return 0;
-}
-
-static int parse_feature_fme(struct build_feature_devs_info *binfo,
- struct feature_header *start)
-{
- struct ifpga_hw *hw = binfo->hw;
- struct ifpga_fme_hw *fme = &hw->fme;
- int ret;
-
- ret = build_info_create_dev(binfo, FME_ID, 0);
- if (ret)
- return ret;
-
- /* Update FME states */
- fme->state = IFPGA_FME_IMPLEMENTED;
- fme->parent = hw;
- TAILQ_INIT(&fme->feature_list);
- spinlock_init(&fme->lock);
-
- return create_feature_instance(binfo, start, 0, 0, 0, 0);
-}
-
-static int parse_feature_port(struct build_feature_devs_info *binfo,
- void __iomem *start)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_capability capability;
- struct ifpga_hw *hw = binfo->hw;
- struct ifpga_port_hw *port;
- unsigned int port_id;
- int ret;
-
- /* Get current port's id */
- port_hdr = (struct feature_port_header *)start;
- capability.csr = readq(&port_hdr->capability);
- port_id = capability.port_number;
-
- ret = build_info_create_dev(binfo, PORT_ID, port_id);
- if (ret)
- return ret;
-
-	/* found a Port device */
- port = &hw->port[port_id];
- port->port_id = binfo->current_port_id;
- port->parent = hw;
- port->state = IFPGA_PORT_ATTACHED;
- spinlock_init(&port->lock);
- TAILQ_INIT(&port->feature_list);
-
- return create_feature_instance(binfo, start, 0, 0, 0, 0);
-}
-
-static void enable_port_uafu(struct build_feature_devs_info *binfo,
- void __iomem *start)
-{
- struct ifpga_port_hw *port = &binfo->hw->port[binfo->current_port_id];
-
- UNUSED(start);
-
- fpga_port_reset(port);
-}
-
-static int parse_feature_fiu(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- struct feature_header header;
- struct feature_fiu_header *fiu_hdr, fiu_header;
- u8 __iomem *start = (u8 __iomem *)hdr;
- int ret;
-
- header.csr = readq(hdr);
-
- switch (header.id) {
- case FEATURE_FIU_ID_FME:
- ret = parse_feature_fme(binfo, hdr);
- binfo->pfme_hdr = hdr;
- if (ret)
- return ret;
- break;
- case FEATURE_FIU_ID_PORT:
- ret = parse_feature_port(binfo, hdr);
- enable_port_uafu(binfo, hdr);
- if (ret)
- return ret;
-
- /* Check Port FIU's next_afu pointer to User AFU DFH */
- fiu_hdr = (struct feature_fiu_header *)(hdr + 1);
- fiu_header.csr = readq(&fiu_hdr->csr);
-
- if (fiu_header.next_afu) {
- start += fiu_header.next_afu;
- ret = parse_feature_afus(binfo,
- (struct feature_header *)start);
- if (ret)
- return ret;
- } else {
- dev_info(binfo, "No AFUs detected on Port\n");
- }
-
- break;
- default:
- dev_info(binfo, "FIU TYPE %d is not supported yet.\n",
- header.id);
- }
-
- return 0;
-}
-
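-/*
- * Extract the MSI-X vector range for sub features that advertise interrupt
- * support (port UINT, port error and FME global error).
- */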
-static void parse_feature_irqs(struct build_feature_devs_info *binfo,
- void __iomem *start, unsigned int *vec_start,
- unsigned int *vec_cnt)
-{
-	u64 id;
-
-	UNUSED(binfo);
-
-	id = feature_id(start);
-
- if (id == PORT_FEATURE_ID_UINT) {
- struct feature_port_uint *port_uint = start;
- struct feature_port_uint_cap uint_cap;
-
- uint_cap.csr = readq(&port_uint->capability);
- if (uint_cap.intr_num) {
- *vec_start = uint_cap.first_vec_num;
- *vec_cnt = uint_cap.intr_num;
- } else {
- dev_debug(binfo, "UAFU doesn't support interrupt\n");
- }
- } else if (id == PORT_FEATURE_ID_ERROR) {
- struct feature_port_error *port_err = start;
- struct feature_port_err_capability port_err_cap;
-
- port_err_cap.csr = readq(&port_err->error_capability);
- if (port_err_cap.support_intr) {
- *vec_start = port_err_cap.intr_vector_num;
- *vec_cnt = 1;
- } else {
-			dev_debug(binfo, "Port error doesn't support interrupt\n");
- }
-
- } else if (id == FME_FEATURE_ID_GLOBAL_ERR) {
- struct feature_fme_err *fme_err = start;
- struct feature_fme_error_capability fme_err_cap;
-
- fme_err_cap.csr = readq(&fme_err->fme_err_capability);
- if (fme_err_cap.support_intr) {
- *vec_start = fme_err_cap.intr_vector_num;
- *vec_cnt = 1;
- } else {
-			dev_debug(binfo, "FME error doesn't support interrupt\n");
- }
- }
-}
-
-static int parse_feature_fme_private(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- unsigned int vec_start = 0;
- unsigned int vec_cnt = 0;
-
- parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt);
-
- return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt);
-}
-
-static int parse_feature_port_private(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- unsigned int vec_start = 0;
- unsigned int vec_cnt = 0;
-
- parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt);
-
- return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt);
-}
-
-static int parse_feature_private(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- struct feature_header header;
-
- header.csr = readq(hdr);
-
- switch (binfo->current_type) {
- case FME_ID:
- return parse_feature_fme_private(binfo, hdr);
- case PORT_ID:
- return parse_feature_port_private(binfo, hdr);
- default:
- dev_err(binfo, "private feature %x belonging to AFU %d (unknown_type) is not supported yet.\n",
- header.id, binfo->current_type);
- }
- return 0;
-}
-
-static int parse_feature(struct build_feature_devs_info *binfo,
- struct feature_header *hdr)
-{
- struct feature_header header;
- int ret = 0;
-
- header.csr = readq(hdr);
-
- switch (header.type) {
- case FEATURE_TYPE_AFU:
- ret = parse_feature_afus(binfo, hdr);
- break;
- case FEATURE_TYPE_PRIVATE:
- ret = parse_feature_private(binfo, hdr);
- break;
- case FEATURE_TYPE_FIU:
- ret = parse_feature_fiu(binfo, hdr);
- break;
-	default:
-		dev_err(binfo, "Feature Type %x is not supported.\n",
-			header.type);
-	}
-
- return ret;
-}
-
-static int
-parse_feature_list(struct build_feature_devs_info *binfo, u8 __iomem *start)
-{
- struct feature_header *hdr, header;
- u8 __iomem *end = (u8 __iomem *)binfo->ioend;
- int ret = 0;
-
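-	/* walk the Device Feature List; each header carries the offset of the next */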
- for (; start < end; start += header.next_header_offset) {
- if ((unsigned int)(end - start) < (unsigned int)sizeof(*hdr)) {
- dev_err(binfo, "The region is too small to contain a feature.\n");
- ret = -EINVAL;
- break;
- }
-
- hdr = (struct feature_header *)start;
- header.csr = readq(hdr);
-
- dev_debug(binfo, "%s: address=0x%p, val=0x%llx, header.id=0x%x, header.next_offset=0x%x, header.eol=0x%x, header.type=0x%x\n",
- __func__, hdr, (unsigned long long)header.csr,
- header.id, header.next_header_offset,
- header.end_of_list, header.type);
-
- ret = parse_feature(binfo, hdr);
- if (ret)
- return ret;
-
- if (header.end_of_list || !header.next_header_offset)
- break;
- }
-
- return build_info_commit_dev(binfo);
-}
-
-/* switch the memory mapping to BAR# @bar */
-static int parse_switch_to(struct build_feature_devs_info *binfo, int bar)
-{
- struct opae_adapter_data_pci *pci_data = binfo->pci_data;
-
- if (!pci_data->region[bar].addr)
- return -ENOMEM;
-
- binfo->ioaddr = pci_data->region[bar].addr;
- binfo->ioend = (u8 __iomem *)binfo->ioaddr + pci_data->region[bar].len;
- binfo->phys_addr = pci_data->region[bar].phys_addr;
- binfo->current_bar = bar;
-
- return 0;
-}
-
-static int parse_ports_from_fme(struct build_feature_devs_info *binfo)
-{
- struct feature_fme_header *fme_hdr;
- struct feature_fme_port port;
- int i = 0, ret = 0;
-
- if (!binfo->pfme_hdr) {
- dev_info(binfo, "VF is detected.\n");
- return ret;
- }
-
- fme_hdr = binfo->pfme_hdr;
-
- do {
- port.csr = readq(&fme_hdr->port[i]);
- if (!port.port_implemented)
- break;
-
-		/* skip ports that can only be accessed via a VF */
- if (port.afu_access_control == FME_AFU_ACCESS_VF)
- continue;
-
- ret = parse_switch_to(binfo, port.port_bar);
- if (ret)
- break;
-
- ret = parse_feature_list(binfo,
- (u8 __iomem *)binfo->ioaddr +
- port.port_offset);
- if (ret)
- break;
- } while (++i < MAX_FPGA_PORT_NUM);
-
- return ret;
-}
-
-static struct build_feature_devs_info *
-build_info_alloc_and_init(struct ifpga_hw *hw)
-{
- struct build_feature_devs_info *binfo;
-
- binfo = zmalloc(sizeof(*binfo));
- if (!binfo)
- return binfo;
-
- binfo->hw = hw;
- binfo->pci_data = hw->pci_data;
-
- /* fpga feature list starts from BAR 0 */
- if (parse_switch_to(binfo, 0)) {
- free(binfo);
- return NULL;
- }
-
- return binfo;
-}
-
-static void build_info_free(struct build_feature_devs_info *binfo)
-{
- free(binfo);
-}
-
-static void ifpga_print_device_feature_list(struct ifpga_hw *hw)
-{
- struct ifpga_fme_hw *fme = &hw->fme;
- struct ifpga_port_hw *port;
- struct ifpga_feature *feature;
- int i;
-
- dev_info(hw, "found fme_device, is in PF: %s\n",
- is_ifpga_hw_pf(hw) ? "yes" : "no");
-
- ifpga_for_each_fme_feature(fme, feature) {
- if (feature->state != IFPGA_FEATURE_ATTACHED)
- continue;
-
- dev_info(hw, "%12s: %p - %p - paddr: 0x%lx\n",
- feature->name, feature->addr,
- feature->addr + feature->size - 1,
- (unsigned long)feature->phys_addr);
-
- }
-
- for (i = 0; i < MAX_FPGA_PORT_NUM; i++) {
- port = &hw->port[i];
-
- if (port->state != IFPGA_PORT_ATTACHED)
- continue;
-
- dev_info(hw, "port device: %d\n", port->port_id);
-
- ifpga_for_each_port_feature(port, feature) {
- if (feature->state != IFPGA_FEATURE_ATTACHED)
- continue;
-
- dev_info(hw, "%12s: %p - %p - paddr:0x%lx\n",
- feature->name,
- feature->addr,
- feature->addr +
- feature->size - 1,
- (unsigned long)feature->phys_addr);
- }
-
- }
-}
-
-int ifpga_bus_enumerate(struct ifpga_hw *hw)
-{
- struct build_feature_devs_info *binfo;
- int ret;
-
- binfo = build_info_alloc_and_init(hw);
- if (!binfo)
- return -ENOMEM;
-
- ret = parse_feature_list(binfo, binfo->ioaddr);
- if (ret)
- goto exit;
-
- ret = parse_ports_from_fme(binfo);
- if (ret)
- goto exit;
-
- ifpga_print_device_feature_list(hw);
-
-exit:
- build_info_free(binfo);
- return ret;
-}
-
-int ifpga_bus_init(struct ifpga_hw *hw)
-{
- int i;
- struct ifpga_port_hw *port;
-
- fme_hw_init(&hw->fme);
- for (i = 0; i < MAX_FPGA_PORT_NUM; i++) {
- port = &hw->port[i];
- port_hw_init(port);
- }
-
- return 0;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_ENUMERATE_H_
-#define _IFPGA_ENUMERATE_H_
-
-int ifpga_bus_init(struct ifpga_hw *hw);
-int ifpga_bus_enumerate(struct ifpga_hw *hw);
-
-#endif /* _IFPGA_ENUMERATE_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include <sys/ioctl.h>
-
-#include "ifpga_feature_dev.h"
-
-/*
- * Enable the port by clearing the port soft reset bit, which is set by
- * default. The AFU is unable to respond to any MMIO access while in reset.
- * __fpga_port_enable() should only be called after __fpga_port_disable().
- */
-void __fpga_port_enable(struct ifpga_port_hw *port)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_control control;
-
- WARN_ON(!port->disable_count);
-
- if (--port->disable_count != 0)
- return;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
- WARN_ON(!port_hdr);
-
- control.csr = readq(&port_hdr->control);
- control.port_sftrst = 0x0;
- writeq(control.csr, &port_hdr->control);
-}
-
-int __fpga_port_disable(struct ifpga_port_hw *port)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_control control;
-
- if (port->disable_count++ != 0)
- return 0;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
- WARN_ON(!port_hdr);
-
- /* Set port soft reset */
- control.csr = readq(&port_hdr->control);
- control.port_sftrst = 0x1;
- writeq(control.csr, &port_hdr->control);
-
-	/*
-	 * HW sets the ack bit to 1 when all outstanding requests have been
-	 * drained on this port and the minimum soft reset pulse width has
-	 * elapsed. The driver polls port_sftrst_ack to determine whether the
-	 * reset has completed in HW.
-	 */
- control.port_sftrst_ack = 1;
-
- if (fpga_wait_register_field(port_sftrst_ack, control,
- &port_hdr->control, RST_POLL_TIMEOUT,
- RST_POLL_INVL)) {
-		dev_err(port, "timeout, failed to reset device\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid)
-{
- struct feature_port_header *port_hdr;
- u64 guidl, guidh;
-
- if (!uuid)
- return -EINVAL;
-
-	port_hdr = get_port_feature_ioaddr_by_index(port, PORT_FEATURE_ID_UAFU);
-	if (!port_hdr)
-		return -ENODEV;
-
- spinlock_lock(&port->lock);
- guidl = readq(&port_hdr->afu_header.guid.b[0]);
- guidh = readq(&port_hdr->afu_header.guid.b[8]);
- spinlock_unlock(&port->lock);
-
- opae_memcpy(uuid->b, &guidl, sizeof(u64));
- opae_memcpy(uuid->b + 8, &guidh, sizeof(u64));
-
- return 0;
-}
-
-/* Mask / Unmask Port Errors by the Error Mask register. */
-void port_err_mask(struct ifpga_port_hw *port, bool mask)
-{
- struct feature_port_error *port_err;
- struct feature_port_err_key err_mask;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
-
- if (mask)
- err_mask.csr = PORT_ERR_MASK;
- else
- err_mask.csr = 0;
-
- writeq(err_mask.csr, &port_err->error_mask);
-}
-
-/* Clear All Port Errors. */
-int port_err_clear(struct ifpga_port_hw *port, u64 err)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_error *port_err;
- struct feature_port_err_key mask;
- struct feature_port_first_err_key first;
- struct feature_port_status status;
- int ret = 0;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- /*
- * Clear All Port Errors
- *
- * - Check for AP6 State
- * - Halt Port by keeping Port in reset
-	 * - Set the Port Error mask to all 1s to mask errors
-	 * - Clear all errors
-	 * - Set the Port Error mask to all 0s to re-enable error reporting
-	 * - Error registers start capturing new errors
- * - Enable Port by pulling the port out of reset
- */
-
-	/* If the device is still in AP6 state, no errors can be cleared. */
- status.csr = readq(&port_hdr->status);
- if (status.power_state == PORT_POWER_STATE_AP6) {
-		dev_err(port, "Could not clear errors, device in AP6 state.\n");
- return -EBUSY;
- }
-
- /* Halt Port by keeping Port in reset */
- ret = __fpga_port_disable(port);
- if (ret)
- return ret;
-
- /* Mask all errors */
- port_err_mask(port, true);
-
-	/* Clear errors only if the err input matches the current port errors. */
- mask.csr = readq(&port_err->port_error);
-
- if (mask.csr == err) {
- writeq(mask.csr, &port_err->port_error);
-
- first.csr = readq(&port_err->port_first_error);
- writeq(first.csr, &port_err->port_first_error);
- } else {
- ret = -EBUSY;
- }
-
- /* Clear mask */
- port_err_mask(port, false);
-
-	/* Enable the port by clearing the reset */
- __fpga_port_enable(port);
-
- return ret;
-}
-
-int port_clear_error(struct ifpga_port_hw *port)
-{
- struct feature_port_error *port_err;
- struct feature_port_err_key error;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
- error.csr = readq(&port_err->port_error);
-
- dev_info(port, "read port error: 0x%lx\n", (unsigned long)error.csr);
-
- return port_err_clear(port, error.csr);
-}
-
-static struct feature_driver fme_feature_drvs[] = {
- {FEATURE_DRV(FME_FEATURE_ID_HEADER, FME_FEATURE_HEADER,
- &fme_hdr_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_THERMAL_MGMT, FME_FEATURE_THERMAL_MGMT,
- &fme_thermal_mgmt_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_POWER_MGMT, FME_FEATURE_POWER_MGMT,
- &fme_power_mgmt_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_ERR, FME_FEATURE_GLOBAL_ERR,
- &fme_global_err_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_PR_MGMT, FME_FEATURE_PR_MGMT,
- &fme_pr_mgmt_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_DPERF, FME_FEATURE_GLOBAL_DPERF,
- &fme_global_dperf_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_HSSI_ETH, FME_FEATURE_HSSI_ETH,
- &fme_hssi_eth_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_EMIF_MGMT, FME_FEATURE_EMIF_MGMT,
- &fme_emif_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_MAX10_SPI, FME_FEATURE_MAX10_SPI,
- &fme_spi_master_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_NIOS_SPI, FME_FEATURE_NIOS_SPI,
- &fme_nios_spi_master_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_I2C_MASTER, FME_FEATURE_I2C_MASTER,
- &fme_i2c_master_ops),},
- {FEATURE_DRV(FME_FEATURE_ID_ETH_GROUP, FME_FEATURE_ETH_GROUP,
- &fme_eth_group_ops),},
-	{0, NULL, NULL}, /* end of array */
-};
-
-static struct feature_driver port_feature_drvs[] = {
- {FEATURE_DRV(PORT_FEATURE_ID_HEADER, PORT_FEATURE_HEADER,
- &ifpga_rawdev_port_hdr_ops)},
- {FEATURE_DRV(PORT_FEATURE_ID_ERROR, PORT_FEATURE_ERR,
- &ifpga_rawdev_port_error_ops)},
- {FEATURE_DRV(PORT_FEATURE_ID_UINT, PORT_FEATURE_UINT,
- &ifpga_rawdev_port_uint_ops)},
- {FEATURE_DRV(PORT_FEATURE_ID_STP, PORT_FEATURE_STP,
- &ifpga_rawdev_port_stp_ops)},
- {FEATURE_DRV(PORT_FEATURE_ID_UAFU, PORT_FEATURE_UAFU,
- &ifpga_rawdev_port_afu_ops)},
- {0, NULL, NULL}, /* end of array */
-};
-
-const char *get_fme_feature_name(unsigned int id)
-{
- struct feature_driver *drv = fme_feature_drvs;
-
- while (drv->name) {
- if (drv->id == id)
- return drv->name;
-
- drv++;
- }
-
- return NULL;
-}
-
-const char *get_port_feature_name(unsigned int id)
-{
- struct feature_driver *drv = port_feature_drvs;
-
- while (drv->name) {
- if (drv->id == id)
- return drv->name;
-
- drv++;
- }
-
- return NULL;
-}
-
-static void feature_uinit(struct ifpga_feature_list *list)
-{
- struct ifpga_feature *feature;
-
- TAILQ_FOREACH(feature, list, next) {
- if (feature->state != IFPGA_FEATURE_ATTACHED)
- continue;
- if (feature->ops && feature->ops->uinit)
- feature->ops->uinit(feature);
- }
-}
-
-static int feature_init(struct feature_driver *drv,
- struct ifpga_feature_list *list)
-{
- struct ifpga_feature *feature;
- int ret;
-
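-	/* bind each driver in the table to any attached feature with a matching id */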
- while (drv->ops) {
- TAILQ_FOREACH(feature, list, next) {
- if (feature->state != IFPGA_FEATURE_ATTACHED)
- continue;
- if (feature->id == drv->id) {
- feature->ops = drv->ops;
- feature->name = drv->name;
- if (feature->ops->init) {
- ret = feature->ops->init(feature);
- if (ret)
- goto error;
- }
- }
- }
- drv++;
- }
-
- return 0;
-error:
- feature_uinit(list);
- return ret;
-}
-
-int fme_hw_init(struct ifpga_fme_hw *fme)
-{
- int ret;
-
- if (fme->state != IFPGA_FME_IMPLEMENTED)
- return -ENODEV;
-
- ret = feature_init(fme_feature_drvs, &fme->feature_list);
- if (ret)
- return ret;
-
- return 0;
-}
-
-void fme_hw_uinit(struct ifpga_fme_hw *fme)
-{
- feature_uinit(&fme->feature_list);
-}
-
-void port_hw_uinit(struct ifpga_port_hw *port)
-{
- feature_uinit(&port->feature_list);
-}
-
-int port_hw_init(struct ifpga_port_hw *port)
-{
- int ret;
-
- if (port->state == IFPGA_PORT_UNUSED)
- return 0;
-
- ret = feature_init(port_feature_drvs, &port->feature_list);
- if (ret)
- goto error;
-
- return 0;
-error:
- port_hw_uinit(port);
- return ret;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_FEATURE_DEV_H_
-#define _IFPGA_FEATURE_DEV_H_
-
-#include "ifpga_hw.h"
-
-struct feature_driver {
- u64 id;
- const char *name;
- struct ifpga_feature_ops *ops;
-};
-
-/**
- * FEATURE_DRV - macro used to describe a specific feature driver
- */
-#define FEATURE_DRV(n, s, p) \
- .id = (n), .name = (s), .ops = (p)
-
-static inline struct ifpga_port_hw *
-get_port(struct ifpga_hw *hw, u32 port_id)
-{
- if (!is_valid_port_id(hw, port_id))
- return NULL;
-
- return &hw->port[port_id];
-}
-
-#define ifpga_for_each_fme_feature(hw, feature) \
- TAILQ_FOREACH(feature, &hw->feature_list, next)
-
-#define ifpga_for_each_port_feature(port, feature) \
- TAILQ_FOREACH(feature, &port->feature_list, next)
-
-static inline struct ifpga_feature *
-get_fme_feature_by_id(struct ifpga_fme_hw *fme, u64 id)
-{
- struct ifpga_feature *feature;
-
- ifpga_for_each_fme_feature(fme, feature) {
- if (feature->id == id)
- return feature;
- }
-
- return NULL;
-}
-
-static inline struct ifpga_feature *
-get_port_feature_by_id(struct ifpga_port_hw *port, u64 id)
-{
- struct ifpga_feature *feature;
-
- ifpga_for_each_port_feature(port, feature) {
- if (feature->id == id)
- return feature;
- }
-
- return NULL;
-}
-
-static inline struct ifpga_feature *
-get_feature_by_id(struct ifpga_feature_list *list, u64 id)
-{
- struct ifpga_feature *feature;
-
- TAILQ_FOREACH(feature, list, next)
- if (feature->id == id)
- return feature;
-
- return NULL;
-}
-
-static inline void *
-get_fme_feature_ioaddr_by_index(struct ifpga_fme_hw *fme, int index)
-{
- struct ifpga_feature *feature =
- get_feature_by_id(&fme->feature_list, index);
-
- return feature ? feature->addr : NULL;
-}
-
-static inline void *
-get_port_feature_ioaddr_by_index(struct ifpga_port_hw *port, int index)
-{
- struct ifpga_feature *feature =
- get_feature_by_id(&port->feature_list, index);
-
- return feature ? feature->addr : NULL;
-}
-
-static inline bool
-is_fme_feature_present(struct ifpga_fme_hw *fme, int index)
-{
- return !!get_fme_feature_ioaddr_by_index(fme, index);
-}
-
-static inline bool
-is_port_feature_present(struct ifpga_port_hw *port, int index)
-{
- return !!get_port_feature_ioaddr_by_index(port, index);
-}
-
-int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid);
-
-int __fpga_port_disable(struct ifpga_port_hw *port);
-void __fpga_port_enable(struct ifpga_port_hw *port);
-
-static inline int fpga_port_disable(struct ifpga_port_hw *port)
-{
- int ret;
-
- spinlock_lock(&port->lock);
- ret = __fpga_port_disable(port);
- spinlock_unlock(&port->lock);
- return ret;
-}
-
-static inline int fpga_port_enable(struct ifpga_port_hw *port)
-{
- spinlock_lock(&port->lock);
- __fpga_port_enable(port);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static inline int __fpga_port_reset(struct ifpga_port_hw *port)
-{
- int ret;
-
- ret = __fpga_port_disable(port);
- if (ret)
- return ret;
-
- __fpga_port_enable(port);
-
- return 0;
-}
-
-static inline int fpga_port_reset(struct ifpga_port_hw *port)
-{
- int ret;
-
- spinlock_lock(&port->lock);
- ret = __fpga_port_reset(port);
- spinlock_unlock(&port->lock);
- return ret;
-}
-
-int do_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer, u32 size,
- u64 *status);
-
-int fme_get_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop);
-int fme_set_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop);
-int fme_set_irq(struct ifpga_fme_hw *fme, u32 feature_id, void *irq_set);
-
-int fme_hw_init(struct ifpga_fme_hw *fme);
-void fme_hw_uinit(struct ifpga_fme_hw *fme);
-void port_hw_uinit(struct ifpga_port_hw *port);
-int port_hw_init(struct ifpga_port_hw *port);
-int port_clear_error(struct ifpga_port_hw *port);
-void port_err_mask(struct ifpga_port_hw *port, bool mask);
-int port_err_clear(struct ifpga_port_hw *port, u64 err);
-
-extern struct ifpga_feature_ops fme_hdr_ops;
-extern struct ifpga_feature_ops fme_thermal_mgmt_ops;
-extern struct ifpga_feature_ops fme_power_mgmt_ops;
-extern struct ifpga_feature_ops fme_global_err_ops;
-extern struct ifpga_feature_ops fme_pr_mgmt_ops;
-extern struct ifpga_feature_ops fme_global_iperf_ops;
-extern struct ifpga_feature_ops fme_global_dperf_ops;
-extern struct ifpga_feature_ops fme_hssi_eth_ops;
-extern struct ifpga_feature_ops fme_emif_ops;
-extern struct ifpga_feature_ops fme_spi_master_ops;
-extern struct ifpga_feature_ops fme_i2c_master_ops;
-extern struct ifpga_feature_ops fme_eth_group_ops;
-extern struct ifpga_feature_ops fme_nios_spi_master_ops;
-
-int port_get_prop(struct ifpga_port_hw *port, struct feature_prop *prop);
-int port_set_prop(struct ifpga_port_hw *port, struct feature_prop *prop);
-
-/* This struct is used when parsing uafu irq_set */
-struct fpga_uafu_irq_set {
- u32 start;
- u32 count;
- s32 *evtfds;
-};
-
-int port_set_irq(struct ifpga_port_hw *port, u32 feature_id, void *irq_set);
-const char *get_fme_feature_name(unsigned int id);
-const char *get_port_feature_name(unsigned int id);
-
-extern struct ifpga_feature_ops ifpga_rawdev_port_hdr_ops;
-extern struct ifpga_feature_ops ifpga_rawdev_port_error_ops;
-extern struct ifpga_feature_ops ifpga_rawdev_port_stp_ops;
-extern struct ifpga_feature_ops ifpga_rawdev_port_uint_ops;
-extern struct ifpga_feature_ops ifpga_rawdev_port_afu_ops;
-
-/* helper functions for feature ops */
-int fpga_msix_set_block(struct ifpga_feature *feature, unsigned int start,
- unsigned int count, s32 *fds);
-
-/* FME network function ops */
-int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset,
- void *buf, int size);
-int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset,
- void *buf, int size);
-int fme_mgr_get_eth_group_nums(struct ifpga_fme_hw *fme);
-int fme_mgr_get_eth_group_info(struct ifpga_fme_hw *fme,
- u8 group_id, struct opae_eth_group_info *info);
-int fme_mgr_eth_group_read_reg(struct ifpga_fme_hw *fme, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data);
-int fme_mgr_eth_group_write_reg(struct ifpga_fme_hw *fme, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data);
-int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme,
- struct opae_retimer_info *info);
-int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme,
- struct opae_retimer_status *status);
-#endif /* _IFPGA_FEATURE_DEV_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-#include "opae_spi.h"
-#include "opae_intel_max10.h"
-#include "opae_i2c.h"
-#include "opae_at24_eeprom.h"
-
-#define PWR_THRESHOLD_MAX 0x7F
-
-int fme_get_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop)
-{
- struct ifpga_feature *feature;
-
- if (!fme)
- return -ENOENT;
-
- feature = get_fme_feature_by_id(fme, prop->feature_id);
-
- if (feature && feature->ops && feature->ops->get_prop)
- return feature->ops->get_prop(feature, prop);
-
- return -ENOENT;
-}
-
-int fme_set_prop(struct ifpga_fme_hw *fme, struct feature_prop *prop)
-{
- struct ifpga_feature *feature;
-
- if (!fme)
- return -ENOENT;
-
- feature = get_fme_feature_by_id(fme, prop->feature_id);
-
- if (feature && feature->ops && feature->ops->set_prop)
- return feature->ops->set_prop(feature, prop);
-
- return -ENOENT;
-}
-
-int fme_set_irq(struct ifpga_fme_hw *fme, u32 feature_id, void *irq_set)
-{
- struct ifpga_feature *feature;
-
- if (!fme)
- return -ENOENT;
-
- feature = get_fme_feature_by_id(fme, feature_id);
-
- if (feature && feature->ops && feature->ops->set_irq)
- return feature->ops->set_irq(feature, irq_set);
-
- return -ENOENT;
-}
-
-/* FME private feature: header */
-static int fme_hdr_init(struct ifpga_feature *feature)
-{
-	struct feature_fme_header *fme_hdr;
-	struct feature_fme_capability fme_capability;
-
-	fme_hdr = (struct feature_fme_header *)feature->addr;
-	fme_capability.csr = readq(&fme_hdr->capability);
-
-	dev_info(NULL, "FME HDR Init.\n");
-	dev_info(NULL, "FME cap %llx.\n",
-		 (unsigned long long)fme_capability.csr);
-
- return 0;
-}
-
-static void fme_hdr_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME HDR UInit.\n");
-}
-
-static int fme_hdr_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
- struct feature_header header;
-
- header.csr = readq(&fme_hdr->header);
- *revision = header.revision;
-
- return 0;
-}
-
-static int fme_hdr_get_ports_num(struct ifpga_fme_hw *fme, u64 *ports_num)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
- struct feature_fme_capability fme_capability;
-
- fme_capability.csr = readq(&fme_hdr->capability);
- *ports_num = fme_capability.num_ports;
-
- return 0;
-}
-
-static int fme_hdr_get_cache_size(struct ifpga_fme_hw *fme, u64 *cache_size)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
- struct feature_fme_capability fme_capability;
-
- fme_capability.csr = readq(&fme_hdr->capability);
- *cache_size = fme_capability.cache_size;
-
- return 0;
-}
-
-static int fme_hdr_get_version(struct ifpga_fme_hw *fme, u64 *version)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
- struct feature_fme_capability fme_capability;
-
- fme_capability.csr = readq(&fme_hdr->capability);
- *version = fme_capability.fabric_verid;
-
- return 0;
-}
-
-static int fme_hdr_get_socket_id(struct ifpga_fme_hw *fme, u64 *socket_id)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
- struct feature_fme_capability fme_capability;
-
- fme_capability.csr = readq(&fme_hdr->capability);
- *socket_id = fme_capability.socket_id;
-
- return 0;
-}
-
-static int fme_hdr_get_bitstream_id(struct ifpga_fme_hw *fme,
- u64 *bitstream_id)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
-
- *bitstream_id = readq(&fme_hdr->bitstream_id);
-
- return 0;
-}
-
-static int fme_hdr_get_bitstream_metadata(struct ifpga_fme_hw *fme,
- u64 *bitstream_metadata)
-{
- struct feature_fme_header *fme_hdr
- = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
-
- *bitstream_metadata = readq(&fme_hdr->bitstream_md);
-
- return 0;
-}
-
-static int
-fme_hdr_get_prop(struct ifpga_feature *feature, struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- switch (prop->prop_id) {
- case FME_HDR_PROP_REVISION:
- return fme_hdr_get_revision(fme, &prop->data);
- case FME_HDR_PROP_PORTS_NUM:
- return fme_hdr_get_ports_num(fme, &prop->data);
- case FME_HDR_PROP_CACHE_SIZE:
- return fme_hdr_get_cache_size(fme, &prop->data);
- case FME_HDR_PROP_VERSION:
- return fme_hdr_get_version(fme, &prop->data);
- case FME_HDR_PROP_SOCKET_ID:
- return fme_hdr_get_socket_id(fme, &prop->data);
- case FME_HDR_PROP_BITSTREAM_ID:
- return fme_hdr_get_bitstream_id(fme, &prop->data);
- case FME_HDR_PROP_BITSTREAM_METADATA:
- return fme_hdr_get_bitstream_metadata(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_hdr_ops = {
- .init = fme_hdr_init,
- .uinit = fme_hdr_uinit,
- .get_prop = fme_hdr_get_prop,
-};
-
-/* thermal management */
-static int fme_thermal_get_threshold1(struct ifpga_fme_hw *fme, u64 *thres1)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
- *thres1 = temp_threshold.tmp_thshold1;
-
- return 0;
-}
-
-static int fme_thermal_set_threshold1(struct ifpga_fme_hw *fme, u64 thres1)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_header *fme_hdr;
- struct feature_fme_tmp_threshold tmp_threshold;
- struct feature_fme_capability fme_capability;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
- fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
-
- spinlock_lock(&fme->lock);
- tmp_threshold.csr = readq(&thermal->threshold);
- fme_capability.csr = readq(&fme_hdr->capability);
-
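-	/* a threshold of 0 disables the trip point; valid values are 1 to 100 */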
- if (fme_capability.lock_bit == 1) {
- spinlock_unlock(&fme->lock);
- return -EBUSY;
- } else if (thres1 > 100) {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- } else if (thres1 == 0) {
- tmp_threshold.tmp_thshold1_enable = 0;
- tmp_threshold.tmp_thshold1 = thres1;
- } else {
- tmp_threshold.tmp_thshold1_enable = 1;
- tmp_threshold.tmp_thshold1 = thres1;
- }
-
- writeq(tmp_threshold.csr, &thermal->threshold);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_thermal_get_threshold2(struct ifpga_fme_hw *fme, u64 *thres2)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
- *thres2 = temp_threshold.tmp_thshold2;
-
- return 0;
-}
-
-static int fme_thermal_set_threshold2(struct ifpga_fme_hw *fme, u64 thres2)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_header *fme_hdr;
- struct feature_fme_tmp_threshold tmp_threshold;
- struct feature_fme_capability fme_capability;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
- fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
-
- spinlock_lock(&fme->lock);
- tmp_threshold.csr = readq(&thermal->threshold);
- fme_capability.csr = readq(&fme_hdr->capability);
-
- if (fme_capability.lock_bit == 1) {
- spinlock_unlock(&fme->lock);
- return -EBUSY;
- } else if (thres2 > 100) {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- } else if (thres2 == 0) {
- tmp_threshold.tmp_thshold2_enable = 0;
- tmp_threshold.tmp_thshold2 = thres2;
- } else {
- tmp_threshold.tmp_thshold2_enable = 1;
- tmp_threshold.tmp_thshold2 = thres2;
- }
-
- writeq(tmp_threshold.csr, &thermal->threshold);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_thermal_get_threshold_trip(struct ifpga_fme_hw *fme,
- u64 *thres_trip)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
- *thres_trip = temp_threshold.therm_trip_thshold;
-
- return 0;
-}
-
-static int fme_thermal_get_threshold1_reached(struct ifpga_fme_hw *fme,
- u64 *thres1_reached)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
- *thres1_reached = temp_threshold.thshold1_status;
-
- return 0;
-}
-
-static int fme_thermal_get_threshold2_reached(struct ifpga_fme_hw *fme,
-					      u64 *thres2_reached)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
-	*thres2_reached = temp_threshold.thshold2_status;
-
- return 0;
-}
-
-static int fme_thermal_get_threshold1_policy(struct ifpga_fme_hw *fme,
- u64 *thres1_policy)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold temp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_threshold.csr = readq(&thermal->threshold);
- *thres1_policy = temp_threshold.thshold_policy;
-
- return 0;
-}
-
-static int fme_thermal_set_threshold1_policy(struct ifpga_fme_hw *fme,
- u64 thres1_policy)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_tmp_threshold tmp_threshold;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- spinlock_lock(&fme->lock);
- tmp_threshold.csr = readq(&thermal->threshold);
-
- if (thres1_policy == 0) {
- tmp_threshold.thshold_policy = 0;
- } else if (thres1_policy == 1) {
- tmp_threshold.thshold_policy = 1;
- } else {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- }
-
- writeq(tmp_threshold.csr, &thermal->threshold);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_thermal_get_temperature(struct ifpga_fme_hw *fme, u64 *temp)
-{
- struct feature_fme_thermal *thermal;
- struct feature_fme_temp_rdsensor_fmt1 temp_rdsensor_fmt1;
-
- thermal = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
-
- temp_rdsensor_fmt1.csr = readq(&thermal->rdsensor_fm1);
- *temp = temp_rdsensor_fmt1.fpga_temp;
-
- return 0;
-}
-
-static int fme_thermal_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
-{
- struct feature_fme_thermal *fme_thermal
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_THERMAL_MGMT);
- struct feature_header header;
-
- header.csr = readq(&fme_thermal->header);
- *revision = header.revision;
-
- return 0;
-}
-
-#define FME_THERMAL_CAP_NO_TMP_THRESHOLD 0x1
-
-static int fme_thermal_mgmt_init(struct ifpga_feature *feature)
-{
- struct feature_fme_thermal *fme_thermal;
- struct feature_fme_tmp_threshold_cap thermal_cap;
-
- dev_info(NULL, "FME thermal mgmt Init.\n");
-
- fme_thermal = (struct feature_fme_thermal *)feature->addr;
- thermal_cap.csr = readq(&fme_thermal->threshold_cap);
-
-	dev_info(NULL, "FME thermal cap %llx.\n",
-		 (unsigned long long)thermal_cap.csr);
-
- if (thermal_cap.tmp_thshold_disabled)
- feature->cap |= FME_THERMAL_CAP_NO_TMP_THRESHOLD;
-
- return 0;
-}
-
-static void fme_thermal_mgmt_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME thermal mgmt UInit.\n");
-}
-
-static int
-fme_thermal_set_prop(struct ifpga_feature *feature, struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- if (feature->cap & FME_THERMAL_CAP_NO_TMP_THRESHOLD)
- return -ENOENT;
-
- switch (prop->prop_id) {
- case FME_THERMAL_PROP_THRESHOLD1:
- return fme_thermal_set_threshold1(fme, prop->data);
- case FME_THERMAL_PROP_THRESHOLD2:
- return fme_thermal_set_threshold2(fme, prop->data);
- case FME_THERMAL_PROP_THRESHOLD1_POLICY:
- return fme_thermal_set_threshold1_policy(fme, prop->data);
- }
-
- return -ENOENT;
-}
-
-static int
-fme_thermal_get_prop(struct ifpga_feature *feature, struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- if (feature->cap & FME_THERMAL_CAP_NO_TMP_THRESHOLD &&
- prop->prop_id != FME_THERMAL_PROP_TEMPERATURE &&
- prop->prop_id != FME_THERMAL_PROP_REVISION)
- return -ENOENT;
-
- switch (prop->prop_id) {
- case FME_THERMAL_PROP_THRESHOLD1:
- return fme_thermal_get_threshold1(fme, &prop->data);
- case FME_THERMAL_PROP_THRESHOLD2:
- return fme_thermal_get_threshold2(fme, &prop->data);
- case FME_THERMAL_PROP_THRESHOLD_TRIP:
- return fme_thermal_get_threshold_trip(fme, &prop->data);
- case FME_THERMAL_PROP_THRESHOLD1_REACHED:
- return fme_thermal_get_threshold1_reached(fme, &prop->data);
- case FME_THERMAL_PROP_THRESHOLD2_REACHED:
- return fme_thermal_get_threshold2_reached(fme, &prop->data);
- case FME_THERMAL_PROP_THRESHOLD1_POLICY:
- return fme_thermal_get_threshold1_policy(fme, &prop->data);
- case FME_THERMAL_PROP_TEMPERATURE:
- return fme_thermal_get_temperature(fme, &prop->data);
- case FME_THERMAL_PROP_REVISION:
- return fme_thermal_get_revision(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_thermal_mgmt_ops = {
- .init = fme_thermal_mgmt_init,
- .uinit = fme_thermal_mgmt_uinit,
- .get_prop = fme_thermal_get_prop,
- .set_prop = fme_thermal_set_prop,
-};
-
-static int fme_pwr_get_consumed(struct ifpga_fme_hw *fme, u64 *consumed)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_status pm_status;
-
- pm_status.csr = readq(&fme_power->status);
-
- *consumed = pm_status.pwr_consumed;
-
- return 0;
-}
-
-static int fme_pwr_get_threshold1(struct ifpga_fme_hw *fme, u64 *threshold)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- *threshold = pm_ap_threshold.threshold1;
-
- return 0;
-}
-
-static int fme_pwr_set_threshold1(struct ifpga_fme_hw *fme, u64 threshold)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- spinlock_lock(&fme->lock);
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- if (threshold <= PWR_THRESHOLD_MAX) {
- pm_ap_threshold.threshold1 = threshold;
- } else {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- }
-
- writeq(pm_ap_threshold.csr, &fme_power->threshold);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_pwr_get_threshold2(struct ifpga_fme_hw *fme, u64 *threshold)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- *threshold = pm_ap_threshold.threshold2;
-
- return 0;
-}
-
-static int fme_pwr_set_threshold2(struct ifpga_fme_hw *fme, u64 threshold)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- spinlock_lock(&fme->lock);
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- if (threshold <= PWR_THRESHOLD_MAX) {
- pm_ap_threshold.threshold2 = threshold;
- } else {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- }
-
- writeq(pm_ap_threshold.csr, &fme_power->threshold);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_pwr_get_threshold1_status(struct ifpga_fme_hw *fme,
- u64 *threshold_status)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- *threshold_status = pm_ap_threshold.threshold1_status;
-
- return 0;
-}
-
-static int fme_pwr_get_threshold2_status(struct ifpga_fme_hw *fme,
- u64 *threshold_status)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_ap_threshold pm_ap_threshold;
-
- pm_ap_threshold.csr = readq(&fme_power->threshold);
-
- *threshold_status = pm_ap_threshold.threshold2_status;
-
- return 0;
-}
-
-static int fme_pwr_get_rtl(struct ifpga_fme_hw *fme, u64 *rtl)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_status pm_status;
-
- pm_status.csr = readq(&fme_power->status);
-
- *rtl = pm_status.fpga_latency_report;
-
- return 0;
-}
-
-static int fme_pwr_get_xeon_limit(struct ifpga_fme_hw *fme, u64 *limit)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_xeon_limit xeon_limit;
-
- xeon_limit.csr = readq(&fme_power->xeon_limit);
-
- if (!xeon_limit.enable)
- xeon_limit.pwr_limit = 0;
-
- *limit = xeon_limit.pwr_limit;
-
- return 0;
-}
-
-static int fme_pwr_get_fpga_limit(struct ifpga_fme_hw *fme, u64 *limit)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_fme_pm_fpga_limit fpga_limit;
-
- fpga_limit.csr = readq(&fme_power->fpga_limit);
-
- if (!fpga_limit.enable)
- fpga_limit.pwr_limit = 0;
-
- *limit = fpga_limit.pwr_limit;
-
- return 0;
-}
-
-static int fme_pwr_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
-{
- struct feature_fme_power *fme_power
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_POWER_MGMT);
- struct feature_header header;
-
- header.csr = readq(&fme_power->header);
- *revision = header.revision;
-
- return 0;
-}
-
-static int fme_power_mgmt_init(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME power mgmt Init.\n");
-
- return 0;
-}
-
-static void fme_power_mgmt_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME power mgmt UInit.\n");
-}
-
-static int fme_power_mgmt_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- switch (prop->prop_id) {
- case FME_PWR_PROP_CONSUMED:
- return fme_pwr_get_consumed(fme, &prop->data);
- case FME_PWR_PROP_THRESHOLD1:
- return fme_pwr_get_threshold1(fme, &prop->data);
- case FME_PWR_PROP_THRESHOLD2:
- return fme_pwr_get_threshold2(fme, &prop->data);
- case FME_PWR_PROP_THRESHOLD1_STATUS:
- return fme_pwr_get_threshold1_status(fme, &prop->data);
- case FME_PWR_PROP_THRESHOLD2_STATUS:
- return fme_pwr_get_threshold2_status(fme, &prop->data);
- case FME_PWR_PROP_RTL:
- return fme_pwr_get_rtl(fme, &prop->data);
- case FME_PWR_PROP_XEON_LIMIT:
- return fme_pwr_get_xeon_limit(fme, &prop->data);
- case FME_PWR_PROP_FPGA_LIMIT:
- return fme_pwr_get_fpga_limit(fme, &prop->data);
- case FME_PWR_PROP_REVISION:
- return fme_pwr_get_revision(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_power_mgmt_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- switch (prop->prop_id) {
- case FME_PWR_PROP_THRESHOLD1:
- return fme_pwr_set_threshold1(fme, prop->data);
- case FME_PWR_PROP_THRESHOLD2:
- return fme_pwr_set_threshold2(fme, prop->data);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_power_mgmt_ops = {
- .init = fme_power_mgmt_init,
- .uinit = fme_power_mgmt_uinit,
- .get_prop = fme_power_mgmt_get_prop,
- .set_prop = fme_power_mgmt_set_prop,
-};
-
-static int fme_hssi_eth_init(struct ifpga_feature *feature)
-{
- UNUSED(feature);
- return 0;
-}
-
-static void fme_hssi_eth_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-}
-
-struct ifpga_feature_ops fme_hssi_eth_ops = {
- .init = fme_hssi_eth_init,
- .uinit = fme_hssi_eth_uinit,
-};
-
-static int fme_emif_init(struct ifpga_feature *feature)
-{
- UNUSED(feature);
- return 0;
-}
-
-static void fme_emif_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-}
-
-struct ifpga_feature_ops fme_emif_ops = {
- .init = fme_emif_init,
- .uinit = fme_emif_uinit,
-};
-
-static const char *board_type_to_string(u32 type)
-{
- switch (type) {
- case VC_8_10G:
- return "VC_8x10G";
- case VC_4_25G:
- return "VC_4x25G";
- case VC_2_1_25:
- return "VC_2x1x25G";
- case VC_4_25G_2_25G:
- return "VC_4x25G+2x25G";
- case VC_2_2_25G:
- return "VC_2x2x25G";
- }
-
- return "unknown";
-}
-
-static int board_type_to_info(u32 type,
- struct ifpga_fme_board_info *info)
-{
- switch (type) {
- case VC_8_10G:
- info->nums_of_retimer = 2;
- info->ports_per_retimer = 4;
- info->nums_of_fvl = 2;
- info->ports_per_fvl = 4;
- break;
- case VC_4_25G:
- info->nums_of_retimer = 1;
- info->ports_per_retimer = 4;
- info->nums_of_fvl = 2;
- info->ports_per_fvl = 2;
- break;
- case VC_2_1_25:
- info->nums_of_retimer = 2;
- info->ports_per_retimer = 1;
- info->nums_of_fvl = 1;
- info->ports_per_fvl = 2;
- break;
- case VC_2_2_25G:
- info->nums_of_retimer = 2;
- info->ports_per_retimer = 2;
- info->nums_of_fvl = 2;
- info->ports_per_fvl = 2;
- break;
- default:
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int fme_get_board_interface(struct ifpga_fme_hw *fme)
-{
- struct fme_bitstream_id id;
-
- if (fme_hdr_get_bitstream_id(fme, &id.id))
- return -EINVAL;
-
- fme->board_info.type = id.interface;
- fme->board_info.build_hash = id.hash;
- fme->board_info.debug_version = id.debug;
- fme->board_info.major_version = id.major;
- fme->board_info.minor_version = id.minor;
-
- dev_info(fme, "board type: %s major_version:%u minor_version:%u build_hash:%u\n",
- board_type_to_string(fme->board_info.type),
- fme->board_info.major_version,
- fme->board_info.minor_version,
- fme->board_info.build_hash);
-
- if (board_type_to_info(fme->board_info.type, &fme->board_info))
- return -EINVAL;
-
- dev_info(fme, "get board info: nums_retimers %d ports_per_retimer %d nums_fvl %d ports_per_fvl %d\n",
- fme->board_info.nums_of_retimer,
- fme->board_info.ports_per_retimer,
- fme->board_info.nums_of_fvl,
- fme->board_info.ports_per_fvl);
-
- return 0;
-}
-
-static int spi_self_checking(void)
-{
- u32 val;
- int ret;
-
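-	/* the MAX10 test register is expected to read back the magic 0x87654321 */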
- ret = max10_reg_read(0x30043c, &val);
- if (ret)
- return -EIO;
-
- if (val != 0x87654321) {
-		dev_err(NULL, "Reading the MAX10 test register failed: 0x%x\n", val);
- return -EIO;
- }
-
-	dev_info(NULL, "MAX10 test register read successfully, SPI self-test done\n");
-
- return 0;
-}
-
-static int fme_spi_init(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
- struct altera_spi_device *spi_master;
- struct intel_max10_device *max10;
- int ret = 0;
-
- dev_info(fme, "FME SPI Master (Max10) Init.\n");
- dev_debug(fme, "FME SPI base addr %p.\n",
- feature->addr);
- dev_debug(fme, "spi param=0x%llx\n",
- (unsigned long long)opae_readq(feature->addr + 0x8));
-
- spi_master = altera_spi_alloc(feature->addr, TYPE_SPI);
- if (!spi_master)
- return -ENODEV;
-
- altera_spi_init(spi_master);
-
- max10 = intel_max10_device_probe(spi_master, 0);
- if (!max10) {
- ret = -ENODEV;
-		dev_err(fme, "max10 init failed\n");
- goto spi_fail;
- }
-
- fme->max10_dev = max10;
-
- /* SPI self test */
- if (spi_self_checking()) {
- ret = -EIO;
- goto max10_fail;
- }
-
- return ret;
-
-max10_fail:
- intel_max10_device_remove(fme->max10_dev);
-spi_fail:
- altera_spi_release(spi_master);
- return ret;
-}
-
-static void fme_spi_uinit(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
-
- if (fme->max10_dev)
- intel_max10_device_remove(fme->max10_dev);
-}
-
-struct ifpga_feature_ops fme_spi_master_ops = {
- .init = fme_spi_init,
- .uinit = fme_spi_uinit,
-};
-
-static int nios_spi_wait_init_done(struct altera_spi_device *dev)
-{
- u32 val = 0;
- unsigned long timeout = msecs_to_timer_cycles(10000);
- unsigned long ticks;
-
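-	/* poll INIT_DONE every 100 ms, for up to 10 seconds */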
- do {
- if (spi_reg_read(dev, NIOS_SPI_INIT_DONE, &val))
- return -EIO;
- if (val)
- break;
-
- ticks = rte_get_timer_cycles();
- if (time_after(ticks, timeout))
- return -ETIMEDOUT;
- msleep(100);
- } while (!val);
-
- return 0;
-}
-
-static int nios_spi_check_error(struct altera_spi_device *dev)
-{
- u32 value = 0;
-
- if (spi_reg_read(dev, NIOS_SPI_INIT_STS0, &value))
- return -EIO;
-
- dev_debug(dev, "SPI init status0 0x%x\n", value);
-
- /* Error code: 0xFFF0 to 0xFFFC */
- if (value >= 0xFFF0 && value <= 0xFFFC)
- return -EINVAL;
-
- value = 0;
- if (spi_reg_read(dev, NIOS_SPI_INIT_STS1, &value))
- return -EIO;
-
- dev_debug(dev, "SPI init status1 0x%x\n", value);
-
- /* Error code: 0xFFF0 to 0xFFFC */
- if (value >= 0xFFF0 && value <= 0xFFFC)
- return -EINVAL;
-
- return 0;
-}
-
-static int fme_nios_spi_init(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
- struct altera_spi_device *spi_master;
- struct intel_max10_device *max10;
- int ret = 0;
-
- dev_info(fme, "FME SPI Master (NIOS) Init.\n");
- dev_debug(fme, "FME SPI base addr %p.\n",
- feature->addr);
- dev_debug(fme, "spi param=0x%llx\n",
- (unsigned long long)opae_readq(feature->addr + 0x8));
-
- spi_master = altera_spi_alloc(feature->addr, TYPE_NIOS_SPI);
- if (!spi_master)
- return -ENODEV;
-
-	/*
-	 * 1. wait for the A10 NIOS initialization to finish and for the
-	 * SPI master to be released to the host
-	 */
- ret = nios_spi_wait_init_done(spi_master);
- if (ret != 0) {
- dev_err(fme, "FME NIOS_SPI init fail\n");
- goto release_dev;
- }
-
-	dev_info(fme, "FME NIOS_SPI initialization done\n");
-
-	/* 2. check whether any error occurred */
- if (nios_spi_check_error(spi_master))
-		dev_info(fme, "NIOS_SPI init done, but errors were detected\n");
-
-	/* 3. init the SPI master */
- altera_spi_init(spi_master);
-
- /* init the max10 device */
- max10 = intel_max10_device_probe(spi_master, 0);
- if (!max10) {
- ret = -ENODEV;
-		dev_err(fme, "max10 init failed\n");
- goto release_dev;
- }
-
- fme_get_board_interface(fme);
-
- fme->max10_dev = max10;
-
- /* SPI self test */
- if (spi_self_checking())
- goto spi_fail;
-
- return ret;
-
-spi_fail:
- intel_max10_device_remove(fme->max10_dev);
-release_dev:
- altera_spi_release(spi_master);
- return -ENODEV;
-}
-
-static void fme_nios_spi_uinit(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
-
- if (fme->max10_dev)
- intel_max10_device_remove(fme->max10_dev);
-}
-
-struct ifpga_feature_ops fme_nios_spi_master_ops = {
- .init = fme_nios_spi_init,
- .uinit = fme_nios_spi_uinit,
-};
-
-static int i2c_mac_rom_test(struct altera_i2c_dev *dev)
-{
- char buf[20];
- int ret;
- char read_buf[20] = {0,};
- const char *string = "1a2b3c4d5e";
-
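-	/* write a known pattern to the EEPROM, read it back and compare */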
- opae_memcpy(buf, string, strlen(string));
-
- ret = at24_eeprom_write(dev, AT24512_SLAVE_ADDR, 0,
- (u8 *)buf, strlen(string));
- if (ret < 0) {
- dev_err(NULL, "write i2c error:%d\n", ret);
- return ret;
- }
-
- ret = at24_eeprom_read(dev, AT24512_SLAVE_ADDR, 0,
- (u8 *)read_buf, strlen(string));
- if (ret < 0) {
- dev_err(NULL, "read i2c error:%d\n", ret);
- return ret;
- }
-
- if (memcmp(buf, read_buf, strlen(string))) {
-		dev_err(NULL, "%s test failed!\n", __func__);
- return -EFAULT;
- }
-
- dev_info(NULL, "%s test successful\n", __func__);
-
- return 0;
-}
-
-static int fme_i2c_init(struct ifpga_feature *feature)
-{
- struct feature_fme_i2c *i2c;
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
-
- i2c = (struct feature_fme_i2c *)feature->addr;
-
- dev_info(NULL, "FME I2C Master Init.\n");
-
- fme->i2c_master = altera_i2c_probe(i2c);
- if (!fme->i2c_master)
- return -ENODEV;
-
- /* MAC ROM self test */
- i2c_mac_rom_test(fme->i2c_master);
-
- return 0;
-}
-
-static void fme_i2c_uninit(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
-
- altera_i2c_remove(fme->i2c_master);
-}
-
-struct ifpga_feature_ops fme_i2c_master_ops = {
- .init = fme_i2c_init,
- .uinit = fme_i2c_uninit,
-};
-
-static int fme_eth_group_init(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent;
- struct eth_group_device *dev;
-
- dev = (struct eth_group_device *)eth_group_probe(feature->addr);
- if (!dev)
- return -ENODEV;
-
- fme->eth_dev[dev->group_id] = dev;
-
- fme->eth_group_region[dev->group_id].addr =
- feature->addr;
- fme->eth_group_region[dev->group_id].phys_addr =
- feature->phys_addr;
- fme->eth_group_region[dev->group_id].len =
- feature->size;
-
- fme->nums_eth_dev++;
-
- dev_info(NULL, "FME PHY Group %d Init.\n", dev->group_id);
- dev_info(NULL, "found %d eth group, addr %p phys_addr 0x%llx len %u\n",
- dev->group_id, feature->addr,
- (unsigned long long)feature->phys_addr,
- feature->size);
-
- return 0;
-}
-
-static void fme_eth_group_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-}
-
-struct ifpga_feature_ops fme_eth_group_ops = {
- .init = fme_eth_group_init,
- .uinit = fme_eth_group_uinit,
-};
-
-int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset,
- void *buf, int size)
-{
- struct altera_i2c_dev *dev;
-
- dev = fme->i2c_master;
- if (!dev)
- return -ENODEV;
-
- return at24_eeprom_read(dev, AT24512_SLAVE_ADDR, offset, buf, size);
-}
-
-int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset,
- void *buf, int size)
-{
- struct altera_i2c_dev *dev;
-
- dev = fme->i2c_master;
- if (!dev)
- return -ENODEV;
-
- return at24_eeprom_write(dev, AT24512_SLAVE_ADDR, offset, buf, size);
-}
-
-static struct eth_group_device *get_eth_group_dev(struct ifpga_fme_hw *fme,
- u8 group_id)
-{
- struct eth_group_device *dev;
-
- if (group_id > (MAX_ETH_GROUP_DEVICES - 1))
- return NULL;
-
- dev = (struct eth_group_device *)fme->eth_dev[group_id];
- if (!dev)
- return NULL;
-
- if (dev->status != ETH_GROUP_DEV_ATTACHED)
- return NULL;
-
- return dev;
-}
-
-int fme_mgr_get_eth_group_nums(struct ifpga_fme_hw *fme)
-{
- return fme->nums_eth_dev;
-}
-
-int fme_mgr_get_eth_group_info(struct ifpga_fme_hw *fme,
- u8 group_id, struct opae_eth_group_info *info)
-{
- struct eth_group_device *dev;
-
- dev = get_eth_group_dev(fme, group_id);
- if (!dev)
- return -ENODEV;
-
- info->group_id = group_id;
- info->speed = dev->speed;
- info->nums_of_mac = dev->mac_num;
- info->nums_of_phy = dev->phy_num;
-
- return 0;
-}
-
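
For reference, the two fme_mgr helpers above are enough to enumerate the
attached Ethernet groups. A minimal sketch, assuming group ids are bounded by
MAX_ETH_GROUP_DEVICES and that detached groups are simply skipped::

    struct opae_eth_group_info info;
    int i;

    for (i = 0; i < MAX_ETH_GROUP_DEVICES; i++) {
        if (fme_mgr_get_eth_group_info(fme, i, &info))
            continue; /* group not attached */
        dev_info(NULL, "group %d: speed %d, %d MACs, %d PHYs\n",
                 info.group_id, info.speed,
                 info.nums_of_mac, info.nums_of_phy);
    }
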
-int fme_mgr_eth_group_read_reg(struct ifpga_fme_hw *fme, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data)
-{
- struct eth_group_device *dev;
-
- dev = get_eth_group_dev(fme, group_id);
- if (!dev)
- return -ENODEV;
-
- return eth_group_read_reg(dev, type, index, addr, data);
-}
-
-int fme_mgr_eth_group_write_reg(struct ifpga_fme_hw *fme, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data)
-{
- struct eth_group_device *dev;
-
- dev = get_eth_group_dev(fme, group_id);
- if (!dev)
- return -ENODEV;
-
- return eth_group_write_reg(dev, type, index, addr, data);
-}
-
-static int fme_get_eth_group_speed(struct ifpga_fme_hw *fme,
- u8 group_id)
-{
- struct eth_group_device *dev;
-
- dev = get_eth_group_dev(fme, group_id);
- if (!dev)
- return -ENODEV;
-
- return dev->speed;
-}
-
-int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme,
- struct opae_retimer_info *info)
-{
- struct intel_max10_device *dev;
-
- dev = (struct intel_max10_device *)fme->max10_dev;
- if (!dev)
- return -ENODEV;
-
- info->nums_retimer = fme->board_info.nums_of_retimer;
- info->ports_per_retimer = fme->board_info.ports_per_retimer;
- info->nums_fvl = fme->board_info.nums_of_fvl;
- info->ports_per_fvl = fme->board_info.ports_per_fvl;
-
- /* The speed of PKVL is identical to the eth group's speed */
- info->support_speed = fme_get_eth_group_speed(fme,
- LINE_SIDE_GROUP_ID);
-
- return 0;
-}
-
-int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme,
- struct opae_retimer_status *status)
-{
- struct intel_max10_device *dev;
- unsigned int val;
-
- dev = (struct intel_max10_device *)fme->max10_dev;
- if (!dev)
- return -ENODEV;
-
- if (max10_reg_read(PKVL_LINK_STATUS, &val)) {
- dev_err(dev, "%s: read pkvl status fail\n", __func__);
- return -EINVAL;
- }
-
- /* The speed of PKVL is identical to the eth group's speed */
- status->speed = fme_get_eth_group_speed(fme,
- LINE_SIDE_GROUP_ID);
-
- status->line_link_bitmap = val;
-
- dev_debug(dev, "get retimer status: speed:%d. line_link_bitmap:0x%x\n",
- status->speed,
- status->line_link_bitmap);
-
- return 0;
-}
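
For reference, a consumer of fme_mgr_get_retimer_status() would typically
test individual links in the returned bitmap. A minimal sketch, assuming one
bit per line-side link in line_link_bitmap::

    struct opae_retimer_status status;

    if (!fme_mgr_get_retimer_status(fme, &status)) {
        /* hypothetical: check whether line-side link 2 is up */
        if (status.line_link_bitmap & (1U << 2))
            dev_info(NULL, "link 2 up, speed %d\n", status.speed);
    }
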
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-#define PERF_OBJ_ROOT_ID 0xff
-
-static int fme_dperf_get_clock(struct ifpga_fme_hw *fme, u64 *clock)
-{
- struct feature_fme_dperf *dperf;
- struct feature_fme_dfpmon_clk_ctr clk;
-
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
- clk.afu_interf_clock = readq(&dperf->clk);
-
- *clock = clk.afu_interf_clock;
- return 0;
-}
-
-static int fme_dperf_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
-{
- struct feature_fme_dperf *dperf;
- struct feature_header header;
-
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
- header.csr = readq(&dperf->header);
- *revision = header.revision;
-
- return 0;
-}
-
-#define DPERF_TIMEOUT 30
-
-static bool fabric_pobj_is_enabled(int port_id,
- struct feature_fme_dperf *dperf)
-{
- struct feature_fme_dfpmon_fab_ctl ctl;
-
- ctl.csr = readq(&dperf->fab_ctl);
-
- if (ctl.port_filter == FAB_DISABLE_FILTER)
- return port_id == PERF_OBJ_ROOT_ID;
-
- return port_id == ctl.port_id;
-}
-
-static u64 read_fabric_counter(struct ifpga_fme_hw *fme, u8 port_id,
- enum dperf_fab_events fab_event)
-{
- struct feature_fme_dfpmon_fab_ctl ctl;
- struct feature_fme_dfpmon_fab_ctr ctr;
- struct feature_fme_dperf *dperf;
- u64 counter = 0;
-
- spinlock_lock(&fme->lock);
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
-
- /* if it is disabled, force the counter to return zero. */
- if (!fabric_pobj_is_enabled(port_id, dperf))
- goto exit;
-
- ctl.csr = readq(&dperf->fab_ctl);
- ctl.fab_evtcode = fab_event;
- writeq(ctl.csr, &dperf->fab_ctl);
-
- ctr.event_code = fab_event;
-
- if (fpga_wait_register_field(event_code, ctr,
- &dperf->fab_ctr, DPERF_TIMEOUT, 1)) {
- dev_err(fme, "timeout, unmatched VTd event type in counter registers.\n");
- spinlock_unlock(&fme->lock);
- return -ETIMEDOUT;
- }
-
- ctr.csr = readq(&dperf->fab_ctr);
- counter = ctr.fab_cnt;
-exit:
- spinlock_unlock(&fme->lock);
- return counter;
-}
-
-#define FAB_PORT_SHOW(name, event) \
-static int fme_dperf_get_fab_port_##name(struct ifpga_fme_hw *fme, \
- u8 port_id, u64 *counter) \
-{ \
- *counter = read_fabric_counter(fme, port_id, event); \
- return 0; \
-}
-
-FAB_PORT_SHOW(pcie0_read, DPERF_FAB_PCIE0_RD);
-FAB_PORT_SHOW(pcie0_write, DPERF_FAB_PCIE0_WR);
-FAB_PORT_SHOW(mmio_read, DPERF_FAB_MMIO_RD);
-FAB_PORT_SHOW(mmio_write, DPERF_FAB_MMIO_WR);
-
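Each FAB_PORT_SHOW invocation above expands to one small accessor; for
example, FAB_PORT_SHOW(mmio_read, DPERF_FAB_MMIO_RD) generates::

    static int fme_dperf_get_fab_port_mmio_read(struct ifpga_fme_hw *fme,
                                                u8 port_id, u64 *counter)
    {
        *counter = read_fabric_counter(fme, port_id, DPERF_FAB_MMIO_RD);
        return 0;
    }
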
-static int fme_dperf_get_fab_port_enable(struct ifpga_fme_hw *fme,
- u8 port_id, u64 *enable)
-{
- struct feature_fme_dperf *dperf;
- int status;
-
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
-
- status = fabric_pobj_is_enabled(port_id, dperf);
- *enable = (u64)status;
-
- return 0;
-}
-
-/*
- * If the event counter for one port (or for all ports) is enabled in
- * the fabric, any other fabric event counter that was enabled before
- * is disabled automatically.
- */
-static int fme_dperf_set_fab_port_enable(struct ifpga_fme_hw *fme,
- u8 port_id, u64 enable)
-{
- struct feature_fme_dfpmon_fab_ctl ctl;
- struct feature_fme_dperf *dperf;
- bool state;
-
- state = !!enable;
-
- if (!state)
- return -EINVAL;
-
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
-
- /* if it is already enabled, nothing to do. */
- if (fabric_pobj_is_enabled(port_id, dperf))
- return 0;
-
- spinlock_lock(&fme->lock);
- ctl.csr = readq(&dperf->fab_ctl);
- if (port_id == PERF_OBJ_ROOT_ID) {
- ctl.port_filter = FAB_DISABLE_FILTER;
- } else {
- ctl.port_filter = FAB_ENABLE_FILTER;
- ctl.port_id = port_id;
- }
-
- writeq(ctl.csr, &dperf->fab_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_dperf_get_fab_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
-{
- struct feature_fme_dperf *dperf;
- struct feature_fme_dfpmon_fab_ctl ctl;
-
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
- ctl.csr = readq(&dperf->fab_ctl);
- *freeze = (u64)ctl.freeze;
-
- return 0;
-}
-
-static int fme_dperf_set_fab_freeze(struct ifpga_fme_hw *fme, u64 freeze)
-{
- struct feature_fme_dperf *dperf;
- struct feature_fme_dfpmon_fab_ctl ctl;
- bool state;
-
- state = !!freeze;
-
- spinlock_lock(&fme->lock);
- dperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_DPERF);
- ctl.csr = readq(&dperf->fab_ctl);
- ctl.freeze = state;
- writeq(ctl.csr, &dperf->fab_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
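The freeze bit is what makes a multi-counter sample coherent: a monitor is
expected to freeze the fabric counters, read the counters of interest, then
unfreeze. A minimal sketch using the static helpers above (illustrative only;
in the driver proper this path is reached through get_prop/set_prop)::

    u64 reads, writes;

    /* freeze so both samples refer to the same instant */
    fme_dperf_set_fab_freeze(fme, 1);
    fme_dperf_get_fab_port_mmio_read(fme, PERF_OBJ_ROOT_ID, &reads);
    fme_dperf_get_fab_port_mmio_write(fme, PERF_OBJ_ROOT_ID, &writes);
    fme_dperf_set_fab_freeze(fme, 0);
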
-#define PERF_MAX_PORT_NUM 1
-
-static int fme_global_dperf_init(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME global_dperf Init.\n");
-
- return 0;
-}
-
-static void fme_global_dperf_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME global_dperf UInit.\n");
-}
-
-static int fme_dperf_fab_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x1: /* FREEZE */
- return fme_dperf_get_fab_freeze(fme, &prop->data);
- case 0x2: /* PCIE0_READ */
- return fme_dperf_get_fab_port_pcie0_read(fme, sub, &prop->data);
- case 0x3: /* PCIE0_WRITE */
- return fme_dperf_get_fab_port_pcie0_write(fme, sub,
- &prop->data);
- case 0x4: /* MMIO_READ */
- return fme_dperf_get_fab_port_mmio_read(fme, sub, &prop->data);
- case 0x5: /* MMIO_WRITE */
- return fme_dperf_get_fab_port_mmio_write(fme, sub, &prop->data);
- case 0x6: /* ENABLE */
- return fme_dperf_get_fab_port_enable(fme, sub, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_dperf_root_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
-
- switch (id) {
- case 0x1: /* CLOCK */
- return fme_dperf_get_clock(fme, &prop->data);
- case 0x2: /* REVISION */
- return fme_dperf_get_revision(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_dperf_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
-
- switch (top) {
- case PERF_PROP_TOP_FAB:
- return fme_dperf_fab_get_prop(feature, prop);
- case PERF_PROP_TOP_UNUSED:
- return fme_dperf_root_get_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
-static int fme_dperf_fab_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x1: /* FREEZE - fab root only prop */
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
- return fme_dperf_set_fab_freeze(fme, prop->data);
- case 0x6: /* ENABLE - fab both root and sub */
- return fme_dperf_set_fab_port_enable(fme, sub, prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_dperf_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
-
- switch (top) {
- case PERF_PROP_TOP_FAB:
- return fme_dperf_fab_set_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_global_dperf_ops = {
- .init = fme_global_dperf_init,
- .uinit = fme_global_dperf_uinit,
- .get_prop = fme_global_dperf_get_prop,
- .set_prop = fme_global_dperf_set_prop,
-};
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-static int fme_err_get_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_error0 fme_error0;
-
- fme_error0.csr = readq(&fme_err->fme_err);
- *val = fme_error0.csr;
-
- return 0;
-}
-
-static int fme_err_get_first_error(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_first_error fme_first_err;
-
- fme_first_err.csr = readq(&fme_err->fme_first_err);
- *val = fme_first_err.err_reg_status;
-
- return 0;
-}
-
-static int fme_err_get_next_error(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_next_error fme_next_err;
-
- fme_next_err.csr = readq(&fme_err->fme_next_err);
- *val = fme_next_err.err_reg_status;
-
- return 0;
-}
-
-static int fme_err_set_clear(struct ifpga_fme_hw *fme, u64 val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_error0 fme_error0;
- struct feature_fme_first_error fme_first_err;
- struct feature_fme_next_error fme_next_err;
- int ret = 0;
-
- spinlock_lock(&fme->lock);
- writeq(FME_ERROR0_MASK, &fme_err->fme_err_mask);
-
- fme_error0.csr = readq(&fme_err->fme_err);
- if (val != fme_error0.csr) {
- ret = -EBUSY;
- goto exit;
- }
-
- fme_first_err.csr = readq(&fme_err->fme_first_err);
- fme_next_err.csr = readq(&fme_err->fme_next_err);
-
- writeq(fme_error0.csr & FME_ERROR0_MASK, &fme_err->fme_err);
- writeq(fme_first_err.csr & FME_FIRST_ERROR_MASK,
- &fme_err->fme_first_err);
- writeq(fme_next_err.csr & FME_NEXT_ERROR_MASK,
- &fme_err->fme_next_err);
-
-exit:
- writeq(FME_ERROR0_MASK_DEFAULT, &fme_err->fme_err_mask);
- spinlock_unlock(&fme->lock);
-
- return ret;
-}
-
-static int fme_err_get_revision(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_header header;
-
- header.csr = readq(&fme_err->header);
- *val = header.revision;
-
- return 0;
-}
-
-static int fme_err_get_pcie0_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_pcie0_error pcie0_err;
-
- pcie0_err.csr = readq(&fme_err->pcie0_err);
- *val = pcie0_err.csr;
-
- return 0;
-}
-
-static int fme_err_set_pcie0_errors(struct ifpga_fme_hw *fme, u64 val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_pcie0_error pcie0_err;
- int ret = 0;
-
- spinlock_lock(&fme->lock);
- writeq(FME_PCIE0_ERROR_MASK, &fme_err->pcie0_err_mask);
-
- pcie0_err.csr = readq(&fme_err->pcie0_err);
- if (val != pcie0_err.csr)
- ret = -EBUSY;
- else
- writeq(pcie0_err.csr & FME_PCIE0_ERROR_MASK,
- &fme_err->pcie0_err);
-
- writeq(0UL, &fme_err->pcie0_err_mask);
- spinlock_unlock(&fme->lock);
-
- return ret;
-}
-
-static int fme_err_get_pcie1_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_pcie1_error pcie1_err;
-
- pcie1_err.csr = readq(&fme_err->pcie1_err);
- *val = pcie1_err.csr;
-
- return 0;
-}
-
-static int fme_err_set_pcie1_errors(struct ifpga_fme_hw *fme, u64 val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_pcie1_error pcie1_err;
- int ret = 0;
-
- spinlock_lock(&fme->lock);
- writeq(FME_PCIE1_ERROR_MASK, &fme_err->pcie1_err_mask);
-
- pcie1_err.csr = readq(&fme_err->pcie1_err);
- if (val != pcie1_err.csr)
- ret = -EBUSY;
- else
- writeq(pcie1_err.csr & FME_PCIE1_ERROR_MASK,
- &fme_err->pcie1_err);
-
- writeq(0UL, &fme_err->pcie1_err_mask);
- spinlock_unlock(&fme->lock);
-
- return ret;
-}
-
-static int fme_err_get_nonfatal_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_ras_nonfaterror ras_nonfaterr;
-
- ras_nonfaterr.csr = readq(&fme_err->ras_nonfaterr);
- *val = ras_nonfaterr.csr;
-
- return 0;
-}
-
-static int fme_err_get_catfatal_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_ras_catfaterror ras_catfaterr;
-
- ras_catfaterr.csr = readq(&fme_err->ras_catfaterr);
- *val = ras_catfaterr.csr;
-
- return 0;
-}
-
-static int fme_err_get_inject_errors(struct ifpga_fme_hw *fme, u64 *val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_ras_error_inj ras_error_inj;
-
- ras_error_inj.csr = readq(&fme_err->ras_error_inj);
- *val = ras_error_inj.csr & FME_RAS_ERROR_INJ_MASK;
-
- return 0;
-}
-
-static int fme_err_set_inject_errors(struct ifpga_fme_hw *fme, u64 val)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
- struct feature_fme_ras_error_inj ras_error_inj;
-
- spinlock_lock(&fme->lock);
- ras_error_inj.csr = readq(&fme_err->ras_error_inj);
-
- if (val <= FME_RAS_ERROR_INJ_MASK) {
- ras_error_inj.csr = val;
- } else {
- spinlock_unlock(&fme->lock);
- return -EINVAL;
- }
-
- writeq(ras_error_inj.csr, &fme_err->ras_error_inj);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static void fme_error_enable(struct ifpga_fme_hw *fme)
-{
- struct feature_fme_err *fme_err
- = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_ERR);
-
- writeq(FME_ERROR0_MASK_DEFAULT, &fme_err->fme_err_mask);
- writeq(0UL, &fme_err->pcie0_err_mask);
- writeq(0UL, &fme_err->pcie1_err_mask);
- writeq(0UL, &fme_err->ras_nonfat_mask);
- writeq(0UL, &fme_err->ras_catfat_mask);
-}
-
-static int fme_global_error_init(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme = feature->parent;
-
- fme_error_enable(fme);
-
- if (feature->ctx_num)
- fme->capability |= FPGA_FME_CAP_ERR_IRQ;
-
- return 0;
-}
-
-static void fme_global_error_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-}
-
-static int fme_err_fme_err_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x1: /* ERRORS */
- return fme_err_get_errors(fme, &prop->data);
- case 0x2: /* FIRST_ERROR */
- return fme_err_get_first_error(fme, &prop->data);
- case 0x3: /* NEXT_ERROR */
- return fme_err_get_next_error(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_err_root_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x5: /* REVISION */
- return fme_err_get_revision(fme, &prop->data);
- case 0x6: /* PCIE0_ERRORS */
- return fme_err_get_pcie0_errors(fme, &prop->data);
- case 0x7: /* PCIE1_ERRORS */
- return fme_err_get_pcie1_errors(fme, &prop->data);
- case 0x8: /* NONFATAL_ERRORS */
- return fme_err_get_nonfatal_errors(fme, &prop->data);
- case 0x9: /* CATFATAL_ERRORS */
- return fme_err_get_catfatal_errors(fme, &prop->data);
- case 0xa: /* INJECT_ERRORS */
- return fme_err_get_inject_errors(fme, &prop->data);
- case 0xb: /* REVISION */
- return fme_err_get_revision(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_error_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
-
- /* PROP_SUB is never used */
- if (sub != PROP_SUB_UNUSED)
- return -ENOENT;
-
- switch (top) {
- case ERR_PROP_TOP_FME_ERR:
- return fme_err_fme_err_get_prop(feature, prop);
- case ERR_PROP_TOP_UNUSED:
- return fme_err_root_get_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
-static int fme_err_fme_err_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x4: /* CLEAR */
- return fme_err_set_clear(fme, prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_err_root_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x6: /* PCIE0_ERRORS */
- return fme_err_set_pcie0_errors(fme, prop->data);
- case 0x7: /* PCIE1_ERRORS */
- return fme_err_set_pcie1_errors(fme, prop->data);
- case 0xa: /* INJECT_ERRORS */
- return fme_err_set_inject_errors(fme, prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_error_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
-
- /* PROP_SUB is never used */
- if (sub != PROP_SUB_UNUSED)
- return -ENOENT;
-
- switch (top) {
- case ERR_PROP_TOP_FME_ERR:
- return fme_err_fme_err_set_prop(feature, prop);
- case ERR_PROP_TOP_UNUSED:
- return fme_err_root_set_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_global_err_ops = {
- .init = fme_global_error_init,
- .uinit = fme_global_error_uinit,
- .get_prop = fme_global_error_get_prop,
- .set_prop = fme_global_error_set_prop,
-};
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-#define PERF_OBJ_ROOT_ID 0xff
-
-static int fme_iperf_get_clock(struct ifpga_fme_hw *fme, u64 *clock)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_clk_ctr clk;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- clk.afu_interf_clock = readq(&iperf->clk);
-
- *clock = clk.afu_interf_clock;
- return 0;
-}
-
-static int fme_iperf_get_revision(struct ifpga_fme_hw *fme, u64 *revision)
-{
- struct feature_fme_iperf *iperf;
- struct feature_header header;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- header.csr = readq(&iperf->header);
- *revision = header.revision;
-
- return 0;
-}
-
-static int fme_iperf_get_cache_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_ch_ctl ctl;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->ch_ctl);
- *freeze = (u64)ctl.freeze;
- return 0;
-}
-
-static int fme_iperf_set_cache_freeze(struct ifpga_fme_hw *fme, u64 freeze)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_ch_ctl ctl;
- bool state;
-
- state = !!freeze;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->ch_ctl);
- ctl.freeze = state;
- writeq(ctl.csr, &iperf->ch_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-#define IPERF_TIMEOUT 30
-
-static u64 read_cache_counter(struct ifpga_fme_hw *fme,
- u8 channel, enum iperf_cache_events event)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_ch_ctl ctl;
- struct feature_fme_ifpmon_ch_ctr ctr0, ctr1;
- u64 counter;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
-
- /* set channel access type and cache event code. */
- ctl.csr = readq(&iperf->ch_ctl);
- ctl.cci_chsel = channel;
- ctl.cache_event = event;
- writeq(ctl.csr, &iperf->ch_ctl);
-
- /* check the event type in the counter registers */
- ctr0.event_code = event;
-
- if (fpga_wait_register_field(event_code, ctr0,
- &iperf->ch_ctr0, IPERF_TIMEOUT, 1)) {
- dev_err(fme, "timeout, unmatched cache event type in counter registers.\n");
- spinlock_unlock(&fme->lock);
- return -ETIMEDOUT;
- }
-
- ctr0.csr = readq(&iperf->ch_ctr0);
- ctr1.csr = readq(&iperf->ch_ctr1);
- counter = ctr0.cache_counter + ctr1.cache_counter;
- spinlock_unlock(&fme->lock);
-
- return counter;
-}
-
-#define CACHE_SHOW(name, type, event) \
-static int fme_iperf_get_cache_##name(struct ifpga_fme_hw *fme, \
- u64 *counter) \
-{ \
- *counter = read_cache_counter(fme, type, event); \
- return 0; \
-}
-
-CACHE_SHOW(read_hit, CACHE_CHANNEL_RD, IPERF_CACHE_RD_HIT);
-CACHE_SHOW(read_miss, CACHE_CHANNEL_RD, IPERF_CACHE_RD_MISS);
-CACHE_SHOW(write_hit, CACHE_CHANNEL_WR, IPERF_CACHE_WR_HIT);
-CACHE_SHOW(write_miss, CACHE_CHANNEL_WR, IPERF_CACHE_WR_MISS);
-CACHE_SHOW(hold_request, CACHE_CHANNEL_RD, IPERF_CACHE_HOLD_REQ);
-CACHE_SHOW(tx_req_stall, CACHE_CHANNEL_RD, IPERF_CACHE_TX_REQ_STALL);
-CACHE_SHOW(rx_req_stall, CACHE_CHANNEL_RD, IPERF_CACHE_RX_REQ_STALL);
-CACHE_SHOW(rx_eviction, CACHE_CHANNEL_RD, IPERF_CACHE_EVICTIONS);
-CACHE_SHOW(data_write_port_contention, CACHE_CHANNEL_WR,
- IPERF_CACHE_DATA_WR_PORT_CONTEN);
-CACHE_SHOW(tag_write_port_contention, CACHE_CHANNEL_WR,
- IPERF_CACHE_TAG_WR_PORT_CONTEN);
-
-static int fme_iperf_get_vtd_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
-{
- struct feature_fme_ifpmon_vtd_ctl ctl;
- struct feature_fme_iperf *iperf;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->vtd_ctl);
- *freeze = (u64)ctl.freeze;
-
- return 0;
-}
-
-static int fme_iperf_set_vtd_freeze(struct ifpga_fme_hw *fme, u64 freeze)
-{
- struct feature_fme_ifpmon_vtd_ctl ctl;
- struct feature_fme_iperf *iperf;
- bool state;
-
- state = !!freeze;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->vtd_ctl);
- ctl.freeze = state;
- writeq(ctl.csr, &iperf->vtd_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static u64 read_iommu_sip_counter(struct ifpga_fme_hw *fme,
- enum iperf_vtd_sip_events event)
-{
- struct feature_fme_ifpmon_vtd_sip_ctl sip_ctl;
- struct feature_fme_ifpmon_vtd_sip_ctr sip_ctr;
- struct feature_fme_iperf *iperf;
- u64 counter;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- sip_ctl.csr = readq(&iperf->vtd_sip_ctl);
- sip_ctl.vtd_evtcode = event;
- writeq(sip_ctl.csr, &iperf->vtd_sip_ctl);
-
- sip_ctr.event_code = event;
-
- if (fpga_wait_register_field(event_code, sip_ctr,
- &iperf->vtd_sip_ctr, IPERF_TIMEOUT, 1)) {
- dev_err(fme, "timeout, unmatched VTd SIP event type in counter registers\n");
- spinlock_unlock(&fme->lock);
- return -ETIMEDOUT;
- }
-
- sip_ctr.csr = readq(&iperf->vtd_sip_ctr);
- counter = sip_ctr.vtd_counter;
- spinlock_unlock(&fme->lock);
-
- return counter;
-}
-
-#define VTD_SIP_SHOW(name, event) \
-static int fme_iperf_get_vtd_sip_##name(struct ifpga_fme_hw *fme, \
- u64 *counter) \
-{ \
- *counter = read_iommu_sip_counter(fme, event); \
- return 0; \
-}
-
-VTD_SIP_SHOW(iotlb_4k_hit, IPERF_VTD_SIP_IOTLB_4K_HIT);
-VTD_SIP_SHOW(iotlb_2m_hit, IPERF_VTD_SIP_IOTLB_2M_HIT);
-VTD_SIP_SHOW(iotlb_1g_hit, IPERF_VTD_SIP_IOTLB_1G_HIT);
-VTD_SIP_SHOW(slpwc_l3_hit, IPERF_VTD_SIP_SLPWC_L3_HIT);
-VTD_SIP_SHOW(slpwc_l4_hit, IPERF_VTD_SIP_SLPWC_L4_HIT);
-VTD_SIP_SHOW(rcc_hit, IPERF_VTD_SIP_RCC_HIT);
-VTD_SIP_SHOW(iotlb_4k_miss, IPERF_VTD_SIP_IOTLB_4K_MISS);
-VTD_SIP_SHOW(iotlb_2m_miss, IPERF_VTD_SIP_IOTLB_2M_MISS);
-VTD_SIP_SHOW(iotlb_1g_miss, IPERF_VTD_SIP_IOTLB_1G_MISS);
-VTD_SIP_SHOW(slpwc_l3_miss, IPERF_VTD_SIP_SLPWC_L3_MISS);
-VTD_SIP_SHOW(slpwc_l4_miss, IPERF_VTD_SIP_SLPWC_L4_MISS);
-VTD_SIP_SHOW(rcc_miss, IPERF_VTD_SIP_RCC_MISS);
-
-static u64 read_iommu_counter(struct ifpga_fme_hw *fme, u8 port_id,
- enum iperf_vtd_events base_event)
-{
- struct feature_fme_ifpmon_vtd_ctl ctl;
- struct feature_fme_ifpmon_vtd_ctr ctr;
- struct feature_fme_iperf *iperf;
- enum iperf_vtd_events event = base_event + port_id;
- u64 counter;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->vtd_ctl);
- ctl.vtd_evtcode = event;
- writeq(ctl.csr, &iperf->vtd_ctl);
-
- ctr.event_code = event;
-
- if (fpga_wait_register_field(event_code, ctr,
- &iperf->vtd_ctr, IPERF_TIMEOUT, 1)) {
- dev_err(fme, "timeout, unmatched VTd event type in counter registers.\n");
- spinlock_unlock(&fme->lock);
- return -ETIMEDOUT;
- }
-
- ctr.csr = readq(&iperf->vtd_ctr);
- counter = ctr.vtd_counter;
- spinlock_unlock(&fme->lock);
-
- return counter;
-}
-
-#define VTD_PORT_SHOW(name, base_event) \
-static int fme_iperf_get_vtd_port_##name(struct ifpga_fme_hw *fme, \
- u8 port_id, u64 *counter) \
-{ \
- *counter = read_iommu_counter(fme, port_id, base_event); \
- return 0; \
-}
-
-VTD_PORT_SHOW(read_transaction, IPERF_VTD_AFU_MEM_RD_TRANS);
-VTD_PORT_SHOW(write_transaction, IPERF_VTD_AFU_MEM_WR_TRANS);
-VTD_PORT_SHOW(devtlb_read_hit, IPERF_VTD_AFU_DEVTLB_RD_HIT);
-VTD_PORT_SHOW(devtlb_write_hit, IPERF_VTD_AFU_DEVTLB_WR_HIT);
-VTD_PORT_SHOW(devtlb_4k_fill, IPERF_VTD_DEVTLB_4K_FILL);
-VTD_PORT_SHOW(devtlb_2m_fill, IPERF_VTD_DEVTLB_2M_FILL);
-VTD_PORT_SHOW(devtlb_1g_fill, IPERF_VTD_DEVTLB_1G_FILL);
-
-static bool fabric_pobj_is_enabled(u8 port_id, struct feature_fme_iperf *iperf)
-{
- struct feature_fme_ifpmon_fab_ctl ctl;
-
- ctl.csr = readq(&iperf->fab_ctl);
-
- if (ctl.port_filter == FAB_DISABLE_FILTER)
- return port_id == PERF_OBJ_ROOT_ID;
-
- return port_id == ctl.port_id;
-}
-
-static u64 read_fabric_counter(struct ifpga_fme_hw *fme, u8 port_id,
- enum iperf_fab_events fab_event)
-{
- struct feature_fme_ifpmon_fab_ctl ctl;
- struct feature_fme_ifpmon_fab_ctr ctr;
- struct feature_fme_iperf *iperf;
- u64 counter = 0;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
-
- /* if it is disabled, force the counter to return zero. */
- if (!fabric_pobj_is_enabled(port_id, iperf))
- goto exit;
-
- ctl.csr = readq(&iperf->fab_ctl);
- ctl.fab_evtcode = fab_event;
- writeq(ctl.csr, &iperf->fab_ctl);
-
- ctr.event_code = fab_event;
-
- if (fpga_wait_register_field(event_code, ctr,
- &iperf->fab_ctr, IPERF_TIMEOUT, 1)) {
- dev_err(fme, "timeout, unmatched VTd event type in counter registers.\n");
- spinlock_unlock(&fme->lock);
- return -ETIMEDOUT;
- }
-
- ctr.csr = readq(&iperf->fab_ctr);
- counter = ctr.fab_cnt;
-exit:
- spinlock_unlock(&fme->lock);
- return counter;
-}
-
-#define FAB_PORT_SHOW(name, event) \
-static int fme_iperf_get_fab_port_##name(struct ifpga_fme_hw *fme, \
- u8 port_id, u64 *counter) \
-{ \
- *counter = read_fabric_counter(fme, port_id, event); \
- return 0; \
-}
-
-FAB_PORT_SHOW(pcie0_read, IPERF_FAB_PCIE0_RD);
-FAB_PORT_SHOW(pcie0_write, IPERF_FAB_PCIE0_WR);
-FAB_PORT_SHOW(pcie1_read, IPERF_FAB_PCIE1_RD);
-FAB_PORT_SHOW(pcie1_write, IPERF_FAB_PCIE1_WR);
-FAB_PORT_SHOW(upi_read, IPERF_FAB_UPI_RD);
-FAB_PORT_SHOW(upi_write, IPERF_FAB_UPI_WR);
-FAB_PORT_SHOW(mmio_read, IPERF_FAB_MMIO_RD);
-FAB_PORT_SHOW(mmio_write, IPERF_FAB_MMIO_WR);
-
-static int fme_iperf_get_fab_port_enable(struct ifpga_fme_hw *fme,
- u8 port_id, u64 *enable)
-{
- struct feature_fme_iperf *iperf;
- int status;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
-
- status = fabric_pobj_is_enabled(port_id, iperf);
- *enable = (u64)status;
-
- return 0;
-}
-
-/*
- * If the event counter for one port (or for all ports) is enabled in
- * the fabric, any other fabric event counter that was enabled before
- * is disabled automatically.
- */
-static int fme_iperf_set_fab_port_enable(struct ifpga_fme_hw *fme,
- u8 port_id, u64 enable)
-{
- struct feature_fme_ifpmon_fab_ctl ctl;
- struct feature_fme_iperf *iperf;
- bool state;
-
- state = !!enable;
-
- if (!state)
- return -EINVAL;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
-
- /* if it is already enabled, nothing to do. */
- if (fabric_pobj_is_enabled(port_id, iperf))
- return 0;
-
- spinlock_lock(&fme->lock);
- ctl.csr = readq(&iperf->fab_ctl);
- if (port_id == PERF_OBJ_ROOT_ID) {
- ctl.port_filter = FAB_DISABLE_FILTER;
- } else {
- ctl.port_filter = FAB_ENABLE_FILTER;
- ctl.port_id = port_id;
- }
-
- writeq(ctl.csr, &iperf->fab_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-static int fme_iperf_get_fab_freeze(struct ifpga_fme_hw *fme, u64 *freeze)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_fab_ctl ctl;
-
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->fab_ctl);
- *freeze = (u64)ctl.freeze;
-
- return 0;
-}
-
-static int fme_iperf_set_fab_freeze(struct ifpga_fme_hw *fme, u64 freeze)
-{
- struct feature_fme_iperf *iperf;
- struct feature_fme_ifpmon_fab_ctl ctl;
- bool state;
-
- state = !!freeze;
-
- spinlock_lock(&fme->lock);
- iperf = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_GLOBAL_IPERF);
- ctl.csr = readq(&iperf->fab_ctl);
- ctl.freeze = state;
- writeq(ctl.csr, &iperf->fab_ctl);
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-#define PERF_MAX_PORT_NUM 1
-#define FME_IPERF_CAP_IOMMU 0x1
-
-static int fme_global_iperf_init(struct ifpga_feature *feature)
-{
- struct ifpga_fme_hw *fme;
- struct feature_fme_header *fme_hdr;
- struct feature_fme_capability fme_capability;
-
- dev_info(NULL, "FME global_iperf Init.\n");
-
- fme = (struct ifpga_fme_hw *)feature->parent;
- fme_hdr = get_fme_feature_ioaddr_by_index(fme, FME_FEATURE_ID_HEADER);
-
- /* check if iommu is not supported on this device. */
- fme_capability.csr = readq(&fme_hdr->capability);
- dev_info(NULL, "FME HEAD fme_capability %llx.\n",
- (unsigned long long)fme_hdr->capability.csr);
-
- if (fme_capability.iommu_support)
- feature->cap |= FME_IPERF_CAP_IOMMU;
-
- return 0;
-}
-
-static void fme_global_iperf_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME global_iperf UInit.\n");
-}
-
-static int fme_iperf_root_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
-
- switch (id) {
- case 0x1: /* CLOCK */
- return fme_iperf_get_clock(fme, &prop->data);
- case 0x2: /* REVISION */
- return fme_iperf_get_revision(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_iperf_cache_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
-
- switch (id) {
- case 0x1: /* FREEZE */
- return fme_iperf_get_cache_freeze(fme, &prop->data);
- case 0x2: /* READ_HIT */
- return fme_iperf_get_cache_read_hit(fme, &prop->data);
- case 0x3: /* READ_MISS */
- return fme_iperf_get_cache_read_miss(fme, &prop->data);
- case 0x4: /* WRITE_HIT */
- return fme_iperf_get_cache_write_hit(fme, &prop->data);
- case 0x5: /* WRITE_MISS */
- return fme_iperf_get_cache_write_miss(fme, &prop->data);
- case 0x6: /* HOLD_REQUEST */
- return fme_iperf_get_cache_hold_request(fme, &prop->data);
- case 0x7: /* TX_REQ_STALL */
- return fme_iperf_get_cache_tx_req_stall(fme, &prop->data);
- case 0x8: /* RX_REQ_STALL */
- return fme_iperf_get_cache_rx_req_stall(fme, &prop->data);
- case 0x9: /* RX_EVICTION */
- return fme_iperf_get_cache_rx_eviction(fme, &prop->data);
- case 0xa: /* DATA_WRITE_PORT_CONTENTION */
- return fme_iperf_get_cache_data_write_port_contention(fme,
- &prop->data);
- case 0xb: /* TAG_WRITE_PORT_CONTENTION */
- return fme_iperf_get_cache_tag_write_port_contention(fme,
- &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_iperf_vtd_root_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x1: /* FREEZE */
- return fme_iperf_get_vtd_freeze(fme, &prop->data);
- case 0x2: /* IOTLB_4K_HIT */
- return fme_iperf_get_vtd_sip_iotlb_4k_hit(fme, &prop->data);
- case 0x3: /* IOTLB_2M_HIT */
- return fme_iperf_get_vtd_sip_iotlb_2m_hit(fme, &prop->data);
- case 0x4: /* IOTLB_1G_HIT */
- return fme_iperf_get_vtd_sip_iotlb_1g_hit(fme, &prop->data);
- case 0x5: /* SLPWC_L3_HIT */
- return fme_iperf_get_vtd_sip_slpwc_l3_hit(fme, &prop->data);
- case 0x6: /* SLPWC_L4_HIT */
- return fme_iperf_get_vtd_sip_slpwc_l4_hit(fme, &prop->data);
- case 0x7: /* RCC_HIT */
- return fme_iperf_get_vtd_sip_rcc_hit(fme, &prop->data);
- case 0x8: /* IOTLB_4K_MISS */
- return fme_iperf_get_vtd_sip_iotlb_4k_miss(fme, &prop->data);
- case 0x9: /* IOTLB_2M_MISS */
- return fme_iperf_get_vtd_sip_iotlb_2m_miss(fme, &prop->data);
- case 0xa: /* IOTLB_1G_MISS */
- return fme_iperf_get_vtd_sip_iotlb_1g_miss(fme, &prop->data);
- case 0xb: /* SLPWC_L3_MISS */
- return fme_iperf_get_vtd_sip_slpwc_l3_miss(fme, &prop->data);
- case 0xc: /* SLPWC_L4_MISS */
- return fme_iperf_get_vtd_sip_slpwc_l4_miss(fme, &prop->data);
- case 0xd: /* RCC_MISS */
- return fme_iperf_get_vtd_sip_rcc_miss(fme, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_iperf_vtd_sub_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
-
- if (sub > PERF_MAX_PORT_NUM)
- return -ENOENT;
-
- switch (id) {
- case 0xe: /* READ_TRANSACTION */
- return fme_iperf_get_vtd_port_read_transaction(fme, sub,
- &prop->data);
- case 0xf: /* WRITE_TRANSACTION */
- return fme_iperf_get_vtd_port_write_transaction(fme, sub,
- &prop->data);
- case 0x10: /* DEVTLB_READ_HIT */
- return fme_iperf_get_vtd_port_devtlb_read_hit(fme, sub,
- &prop->data);
- case 0x11: /* DEVTLB_WRITE_HIT */
- return fme_iperf_get_vtd_port_devtlb_write_hit(fme, sub,
- &prop->data);
- case 0x12: /* DEVTLB_4K_FILL */
- return fme_iperf_get_vtd_port_devtlb_4k_fill(fme, sub,
- &prop->data);
- case 0x13: /* DEVTLB_2M_FILL */
- return fme_iperf_get_vtd_port_devtlb_2m_fill(fme, sub,
- &prop->data);
- case 0x14: /* DEVTLB_1G_FILL */
- return fme_iperf_get_vtd_port_devtlb_1g_fill(fme, sub,
- &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_iperf_vtd_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
-
- if (sub == PERF_PROP_SUB_UNUSED)
- return fme_iperf_vtd_root_get_prop(feature, prop);
-
- return fme_iperf_vtd_sub_get_prop(feature, prop);
-}
-
-static int fme_iperf_fab_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- /* FREEZE is root-only; other properties exist at both root and sub levels */
- switch (id) {
- case 0x1: /* FREEZE */
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
- return fme_iperf_get_fab_freeze(fme, &prop->data);
- case 0x2: /* PCIE0_READ */
- return fme_iperf_get_fab_port_pcie0_read(fme, sub,
- &prop->data);
- case 0x3: /* PCIE0_WRITE */
- return fme_iperf_get_fab_port_pcie0_write(fme, sub,
- &prop->data);
- case 0x4: /* PCIE1_READ */
- return fme_iperf_get_fab_port_pcie1_read(fme, sub,
- &prop->data);
- case 0x5: /* PCIE1_WRITE */
- return fme_iperf_get_fab_port_pcie1_write(fme, sub,
- &prop->data);
- case 0x6: /* UPI_READ */
- return fme_iperf_get_fab_port_upi_read(fme, sub,
- &prop->data);
- case 0x7: /* UPI_WRITE */
- return fme_iperf_get_fab_port_upi_write(fme, sub,
- &prop->data);
- case 0x8: /* MMIO_READ */
- return fme_iperf_get_fab_port_mmio_read(fme, sub,
- &prop->data);
- case 0x9: /* MMIO_WRITE */
- return fme_iperf_get_fab_port_mmio_write(fme, sub,
- &prop->data);
- case 0xa: /* ENABLE */
- return fme_iperf_get_fab_port_enable(fme, sub, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_iperf_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
-
- switch (top) {
- case PERF_PROP_TOP_CACHE:
- return fme_iperf_cache_get_prop(feature, prop);
- case PERF_PROP_TOP_VTD:
- return fme_iperf_vtd_get_prop(feature, prop);
- case PERF_PROP_TOP_FAB:
- return fme_iperf_fab_get_prop(feature, prop);
- case PERF_PROP_TOP_UNUSED:
- return fme_iperf_root_get_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
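Putting the dispatch together: a prop_id carries TOP, SUB and ID fields that
the functions above peel off with GET_FIELD. A hedged sketch of reading the
cache READ_HIT counter (id 0x2), assuming the usual packing (TOP in bits
31:24, SUB in bits 23:16, ID in bits 15:0) and the fme_get_prop dispatcher
declared alongside port_get_prop()::

    struct feature_prop prop;

    prop.feature_id = FME_FEATURE_ID_GLOBAL_IPERF;
    /* assumed field packing; mirrors the GET_FIELD decoding above */
    prop.prop_id = ((u64)PERF_PROP_TOP_CACHE << 24) |
                   ((u64)PERF_PROP_SUB_UNUSED << 16) | 0x2;
    if (!fme_get_prop(fme, &prop))
        dev_info(NULL, "cache read hits: %llu\n",
                 (unsigned long long)prop.data);
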
-static int fme_iperf_cache_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- if (sub == PERF_PROP_SUB_UNUSED && id == 0x1) /* FREEZE */
- return fme_iperf_set_cache_freeze(fme, prop->data);
-
- return -ENOENT;
-}
-
-static int fme_iperf_vtd_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- if (sub == PERF_PROP_SUB_UNUSED && id == 0x1) /* FREEZE */
- return fme_iperf_set_vtd_freeze(fme, prop->data);
-
- return -ENOENT;
-}
-
-static int fme_iperf_fab_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme = feature->parent;
- u8 sub = GET_FIELD(PROP_SUB, prop->prop_id);
- u16 id = GET_FIELD(PROP_ID, prop->prop_id);
-
- switch (id) {
- case 0x1: /* FREEZE */
- if (sub != PERF_PROP_SUB_UNUSED)
- return -ENOENT;
- return fme_iperf_set_fab_freeze(fme, prop->data);
- case 0xa: /* ENABLE */
- return fme_iperf_set_fab_port_enable(fme, sub, prop->data);
- }
-
- return -ENOENT;
-}
-
-static int fme_global_iperf_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- u8 top = GET_FIELD(PROP_TOP, prop->prop_id);
-
- switch (top) {
- case PERF_PROP_TOP_CACHE:
- return fme_iperf_cache_set_prop(feature, prop);
- case PERF_PROP_TOP_VTD:
- return fme_iperf_vtd_set_prop(feature, prop);
- case PERF_PROP_TOP_FAB:
- return fme_iperf_fab_set_prop(feature, prop);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops fme_global_iperf_ops = {
- .init = fme_global_iperf_init,
- .uinit = fme_global_iperf_uinit,
- .get_prop = fme_global_iperf_get_prop,
- .set_prop = fme_global_iperf_set_prop,
-};
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-static u64
-pr_err_handle(struct feature_fme_pr *fme_pr)
-{
- struct feature_fme_pr_status fme_pr_status;
- unsigned long err_code;
- u64 fme_pr_error;
- int i;
-
- fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
- if (!fme_pr_status.pr_status)
- return 0;
-
- err_code = readq(&fme_pr->ccip_fme_pr_err);
- fme_pr_error = err_code;
-
- for (i = 0; i < PR_MAX_ERR_NUM; i++) {
- if (err_code & (1UL << i))
- dev_info(NULL, "%s\n", pr_err_msg[i]);
- }
-
- writeq(fme_pr_error, &fme_pr->ccip_fme_pr_err);
- return fme_pr_error;
-}
-
-static int fme_pr_write_init(struct ifpga_fme_hw *fme_dev,
- struct fpga_pr_info *info)
-{
- struct feature_fme_pr *fme_pr;
- struct feature_fme_pr_ctl fme_pr_ctl;
- struct feature_fme_pr_status fme_pr_status;
-
- fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
- FME_FEATURE_ID_PR_MGMT);
- if (!fme_pr)
- return -EINVAL;
-
- if (info->flags != FPGA_MGR_PARTIAL_RECONFIG)
- return -EINVAL;
-
- dev_info(fme_dev, "resetting PR before initiated PR\n");
-
- fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
- fme_pr_ctl.pr_reset = 1;
- writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
-
- fme_pr_ctl.pr_reset_ack = 1;
-
- if (fpga_wait_register_field(pr_reset_ack, fme_pr_ctl,
- &fme_pr->ccip_fme_pr_control,
- PR_WAIT_TIMEOUT, 1)) {
- dev_err(fme_dev, "maximum PR timeout\n");
- return -ETIMEDOUT;
- }
-
- fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
- fme_pr_ctl.pr_reset = 0;
- writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
-
- dev_info(fme_dev, "waiting for PR resource in HW to be initialized and ready\n");
-
- fme_pr_status.pr_host_status = PR_HOST_STATUS_IDLE;
-
- if (fpga_wait_register_field(pr_host_status, fme_pr_status,
- &fme_pr->ccip_fme_pr_status,
- PR_WAIT_TIMEOUT, 1)) {
- dev_err(fme_dev, "maximum PR timeout\n");
- return -ETIMEDOUT;
- }
-
- dev_info(fme_dev, "check if have any previous PR error\n");
- pr_err_handle(fme_pr);
- return 0;
-}
-
-static int fme_pr_write(struct ifpga_fme_hw *fme_dev,
- int port_id, const char *buf, size_t count,
- struct fpga_pr_info *info)
-{
- struct feature_fme_pr *fme_pr;
- struct feature_fme_pr_ctl fme_pr_ctl;
- struct feature_fme_pr_status fme_pr_status;
- struct feature_fme_pr_data fme_pr_data;
- int delay, pr_credit;
- int ret = 0;
-
- fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
- FME_FEATURE_ID_PR_MGMT);
- if (!fme_pr)
- return -EINVAL;
-
- dev_info(fme_dev, "set PR port ID and start request\n");
-
- fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
- fme_pr_ctl.pr_regionid = port_id;
- fme_pr_ctl.pr_start_req = 1;
- writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
-
- dev_info(fme_dev, "pushing data from bitstream to HW\n");
-
- fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
- pr_credit = fme_pr_status.pr_credit;
-
- while (count > 0) {
- delay = 0;
- while (pr_credit <= 1) {
- if (delay++ > PR_WAIT_TIMEOUT) {
- dev_err(fme_dev, "maximum try\n");
-
- info->pr_err = pr_err_handle(fme_pr);
- return info->pr_err ? -EIO : -ETIMEDOUT;
- }
- udelay(1);
-
- fme_pr_status.csr = readq(&fme_pr->ccip_fme_pr_status);
- pr_credit = fme_pr_status.pr_credit;
- }
-
- if (count >= fme_dev->pr_bandwidth) {
- switch (fme_dev->pr_bandwidth) {
- case 4:
- fme_pr_data.rsvd = 0;
- fme_pr_data.pr_data_raw = *((const u32 *)buf);
- writeq(fme_pr_data.csr,
- &fme_pr->ccip_fme_pr_data);
- break;
- default:
- ret = -EFAULT;
- goto done;
- }
-
- buf += fme_dev->pr_bandwidth;
- count -= fme_dev->pr_bandwidth;
- pr_credit--;
- } else {
- WARN_ON(1);
- ret = -EINVAL;
- goto done;
- }
- }
-
-done:
- return ret;
-}
-
-static int fme_pr_write_complete(struct ifpga_fme_hw *fme_dev,
- struct fpga_pr_info *info)
-{
- struct feature_fme_pr *fme_pr;
- struct feature_fme_pr_ctl fme_pr_ctl;
-
- fme_pr = get_fme_feature_ioaddr_by_index(fme_dev,
- FME_FEATURE_ID_PR_MGMT);
-
- fme_pr_ctl.csr = readq(&fme_pr->ccip_fme_pr_control);
- fme_pr_ctl.pr_push_complete = 1;
- writeq(fme_pr_ctl.csr, &fme_pr->ccip_fme_pr_control);
-
- dev_info(fme_dev, "green bitstream push complete\n");
- dev_info(fme_dev, "waiting for HW to release PR resource\n");
-
- fme_pr_ctl.pr_start_req = 0;
-
- if (fpga_wait_register_field(pr_start_req, fme_pr_ctl,
- &fme_pr->ccip_fme_pr_control,
- PR_WAIT_TIMEOUT, 1)) {
- printf("maximum try.\n");
- return -ETIMEDOUT;
- }
-
- dev_info(fme_dev, "PR operation complete, checking status\n");
- info->pr_err = pr_err_handle(fme_pr);
- if (info->pr_err)
- return -EIO;
-
- dev_info(fme_dev, "PR done successfully\n");
- return 0;
-}
-
-static int fpga_pr_buf_load(struct ifpga_fme_hw *fme_dev,
- struct fpga_pr_info *info, const char *buf,
- size_t count)
-{
- int ret;
-
- info->state = FPGA_PR_STATE_WRITE_INIT;
- ret = fme_pr_write_init(fme_dev, info);
- if (ret) {
- dev_err(fme_dev, "Error preparing FPGA for writing\n");
- info->state = FPGA_PR_STATE_WRITE_INIT_ERR;
- return ret;
- }
-
- /*
- * Write the FPGA image to the FPGA.
- */
- info->state = FPGA_PR_STATE_WRITE;
- ret = fme_pr_write(fme_dev, info->port_id, buf, count, info);
- if (ret) {
- dev_err(fme_dev, "Error while writing image data to FPGA\n");
- info->state = FPGA_PR_STATE_WRITE_ERR;
- return ret;
- }
-
- /*
- * After all the FPGA image has been written, do the device specific
- * steps to finish and set the FPGA into operating mode.
- */
- info->state = FPGA_PR_STATE_WRITE_COMPLETE;
- ret = fme_pr_write_complete(fme_dev, info);
- if (ret) {
- dev_err(fme_dev, "Error after writing image data to FPGA\n");
- info->state = FPGA_PR_STATE_WRITE_COMPLETE_ERR;
- return ret;
- }
- info->state = FPGA_PR_STATE_DONE;
-
- return 0;
-}
-
-static int fme_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer,
- u32 size, u64 *status)
-{
- struct feature_fme_header *fme_hdr;
- struct feature_fme_capability fme_capability;
- struct ifpga_fme_hw *fme = &hw->fme;
- struct fpga_pr_info info;
- struct ifpga_port_hw *port;
- int ret = 0;
-
- if (!buffer || size == 0)
- return -EINVAL;
- if (fme->state != IFPGA_FME_IMPLEMENTED)
- return -EINVAL;
-
- /*
- * Pad extra zeros to align the PR buffer with the PR bandwidth; the
- * HW will ignore these zeros automatically.
- */
- size = IFPGA_ALIGN(size, fme->pr_bandwidth);
-
- /* get fme header region */
- fme_hdr = get_fme_feature_ioaddr_by_index(fme,
- FME_FEATURE_ID_HEADER);
- if (!fme_hdr)
- return -EINVAL;
-
- /* check port id */
- fme_capability.csr = readq(&fme_hdr->capability);
- if (port_id >= fme_capability.num_ports) {
- dev_err(fme, "port number more than maximum\n");
- return -EINVAL;
- }
-
- opae_memset(&info, 0, sizeof(struct fpga_pr_info));
- info.flags = FPGA_MGR_PARTIAL_RECONFIG;
- info.port_id = port_id;
-
- spinlock_lock(&fme->lock);
-
- /* get port device by port_id */
- port = &hw->port[port_id];
-
- /* Disable Port before PR */
- fpga_port_disable(port);
-
- ret = fpga_pr_buf_load(fme, &info, buffer, size);
-
- *status = info.pr_err;
-
- /* Re-enable Port after PR finished */
- fpga_port_enable(port);
- spinlock_unlock(&fme->lock);
-
- return ret;
-}
-
-int do_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer,
- u32 size, u64 *status)
-{
- const struct bts_header *bts_hdr;
- const char *buf;
- struct ifpga_port_hw *port;
- int ret;
- u32 header_size;
-
- if (!buffer || size == 0) {
- dev_err(hw, "invalid parameter\n");
- return -EINVAL;
- }
-
- bts_hdr = (const struct bts_header *)buffer;
-
- if (is_valid_bts(bts_hdr)) {
- dev_info(hw, "this is a valid bitsteam..\n");
- header_size = sizeof(struct bts_header) +
- bts_hdr->metadata_len;
- if (size < header_size)
- return -EINVAL;
- size -= header_size;
- buf = buffer + header_size;
- } else {
- dev_err(hw, "this is an invalid bitstream..\n");
- return -EINVAL;
- }
-
- /* clear port errors before doing PR */
- port = &hw->port[port_id];
- ret = port_clear_error(port);
- if (ret) {
- dev_err(hw, "port cannot clear error\n");
- return -EINVAL;
- }
-
- return fme_pr(hw, port_id, buf, size, status);
-}
-
-static int fme_pr_mgmt_init(struct ifpga_feature *feature)
-{
- struct feature_fme_pr *fme_pr;
- struct feature_header fme_pr_header;
- struct ifpga_fme_hw *fme;
-
- dev_info(NULL, "FME PR MGMT Init.\n");
-
- fme = (struct ifpga_fme_hw *)feature->parent;
-
- fme_pr = (struct feature_fme_pr *)feature->addr;
-
- fme_pr_header.csr = readq(&fme_pr->header);
- if (fme_pr_header.revision == 2) {
- dev_info(NULL, "using 512-bit PR\n");
- fme->pr_bandwidth = 64;
- } else {
- dev_info(NULL, "using 32-bit PR\n");
- fme->pr_bandwidth = 4;
- }
-
- return 0;
-}
-
-static void fme_pr_mgmt_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "FME PR MGMT UInit.\n");
-}
-
-struct ifpga_feature_ops fme_pr_mgmt_ops = {
- .init = fme_pr_mgmt_init,
- .uinit = fme_pr_mgmt_uinit,
-};
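
End to end, reconfiguring one AFU slot therefore reduces to a single call
into do_pr(). A minimal sketch, assuming hw points to a probed ifpga_hw and
bitstream/len hold an image with a valid bts_header::

    u64 status = 0;
    int ret;

    /* port 0 is disabled, reprogrammed and re-enabled inside fme_pr() */
    ret = do_pr(hw, 0, bitstream, len, &status);
    if (ret)
        dev_err(hw, "PR failed: ret %d, pr_err 0x%llx\n",
                ret, (unsigned long long)status);
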
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_HW_H_
-#define _IFPGA_HW_H_
-
-#include "ifpga_defines.h"
-#include "opae_ifpga_hw_api.h"
-#include "opae_eth_group.h"
-
-/** List of private features */
-TAILQ_HEAD(ifpga_feature_list, ifpga_feature);
-
-enum ifpga_feature_state {
- IFPGA_FEATURE_UNUSED = 0,
- IFPGA_FEATURE_ATTACHED,
-};
-
-enum feature_type {
- FEATURE_FME_TYPE = 0,
- FEATURE_PORT_TYPE,
-};
-
-struct feature_irq_ctx {
- int eventfd;
- int idx;
-};
-
-struct ifpga_feature {
- TAILQ_ENTRY(ifpga_feature) next;
- enum ifpga_feature_state state;
- enum feature_type type;
- const char *name;
- u64 id;
- u8 *addr;
- uint64_t phys_addr;
- u32 size;
- int revision;
- u64 cap;
- int vfio_dev_fd;
- struct feature_irq_ctx *ctx;
- unsigned int ctx_num;
-
- void *parent; /* to parent hw data structure */
-
- struct ifpga_feature_ops *ops; /* callbacks for this private feature */
- unsigned int vec_start;
- unsigned int vec_cnt;
-};
-
-struct ifpga_feature_ops {
- int (*init)(struct ifpga_feature *feature);
- void (*uinit)(struct ifpga_feature *feature);
- int (*get_prop)(struct ifpga_feature *feature,
- struct feature_prop *prop);
- int (*set_prop)(struct ifpga_feature *feature,
- struct feature_prop *prop);
- int (*set_irq)(struct ifpga_feature *feature, void *irq_set);
-};
-
-enum ifpga_fme_state {
- IFPGA_FME_UNUSED = 0,
- IFPGA_FME_IMPLEMENTED,
-};
-
-struct ifpga_fme_hw {
- enum ifpga_fme_state state;
-
- struct ifpga_feature_list feature_list;
- spinlock_t lock; /* protect hardware access */
-
- void *parent; /* pointer to ifpga_hw */
-
- /* provided by HEADER feature */
- u32 port_num;
- struct uuid bitstream_id;
- u64 bitstream_md;
- size_t pr_bandwidth;
- u32 socket_id;
- u32 fabric_version_id;
- u32 cache_size;
-
- u32 capability;
-
- void *max10_dev; /* MAX10 device */
- void *i2c_master; /* I2C Master device */
- void *eth_dev[MAX_ETH_GROUP_DEVICES];
- struct opae_reg_region
- eth_group_region[MAX_ETH_GROUP_DEVICES];
- struct ifpga_fme_board_info board_info;
- int nums_eth_dev;
- unsigned int nums_acc_region;
-};
-
-enum ifpga_port_state {
- IFPGA_PORT_UNUSED = 0,
- IFPGA_PORT_ATTACHED,
- IFPGA_PORT_DETACHED,
-};
-
-struct ifpga_port_hw {
- enum ifpga_port_state state;
-
- struct ifpga_feature_list feature_list;
- spinlock_t lock; /* protect access to hw */
-
- void *parent; /* pointer to ifpga_hw */
-
- int port_id; /* provided by HEADER feature */
- struct uuid afu_id; /* provided by User AFU feature */
-
- unsigned int disable_count;
-
- u32 capability;
- u32 num_umsgs; /* The number of allocated umsgs */
- u32 num_uafu_irqs; /* The number of uafu interrupts */
- u8 *stp_addr;
- u32 stp_size;
-};
-
-#define AFU_MAX_REGION 1
-
-struct ifpga_afu_info {
- struct opae_reg_region region[AFU_MAX_REGION];
- unsigned int num_regions;
- unsigned int num_irqs;
-};
-
-struct ifpga_hw {
- struct opae_adapter *adapter;
- struct opae_adapter_data_pci *pci_data;
-
- struct ifpga_fme_hw fme;
- struct ifpga_port_hw port[MAX_FPGA_PORT_NUM];
-};
-
-static inline bool is_ifpga_hw_pf(struct ifpga_hw *hw)
-{
- return hw->fme.state != IFPGA_FME_UNUSED;
-}
-
-static inline bool is_valid_port_id(struct ifpga_hw *hw, u32 port_id)
-{
- if (port_id >= MAX_FPGA_PORT_NUM ||
- hw->port[port_id].state != IFPGA_PORT_ATTACHED)
- return false;
-
- return true;
-}
-#endif /* _IFPGA_HW_H_ */
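
The TAILQ-based feature list plus the per-feature ops table is the dispatch
backbone used by the port_*_prop helpers that follow. A minimal sketch of an
init pass over a port's features, assuming the list was populated during
enumeration::

    struct ifpga_feature *feature;

    TAILQ_FOREACH(feature, &port->feature_list, next) {
        if (feature->state != IFPGA_FEATURE_ATTACHED)
            continue;
        if (feature->ops && feature->ops->init)
            feature->ops->init(feature);
    }
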
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-int port_get_prop(struct ifpga_port_hw *port, struct feature_prop *prop)
-{
- struct ifpga_feature *feature;
-
- if (!port)
- return -ENOENT;
-
- feature = get_port_feature_by_id(port, prop->feature_id);
-
- if (feature && feature->ops && feature->ops->get_prop)
- return feature->ops->get_prop(feature, prop);
-
- return -ENOENT;
-}
-
-int port_set_prop(struct ifpga_port_hw *port, struct feature_prop *prop)
-{
- struct ifpga_feature *feature;
-
- if (!port)
- return -ENOENT;
-
- feature = get_port_feature_by_id(port, prop->feature_id);
-
- if (feature && feature->ops && feature->ops->set_prop)
- return feature->ops->set_prop(feature, prop);
-
- return -ENOENT;
-}
-
-int port_set_irq(struct ifpga_port_hw *port, u32 feature_id, void *irq_set)
-{
- struct ifpga_feature *feature;
-
- if (!port)
- return -ENOENT;
-
- feature = get_port_feature_by_id(port, feature_id);
-
- if (feature && feature->ops && feature->ops->set_irq)
- return feature->ops->set_irq(feature, irq_set);
-
- return -ENOENT;
-}
-
-static int port_get_revision(struct ifpga_port_hw *port, u64 *revision)
-{
- struct feature_port_header *port_hdr
- = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
- struct feature_header header;
-
- header.csr = readq(&port_hdr->header);
-
- *revision = header.revision;
-
- return 0;
-}
-
-static int port_get_portidx(struct ifpga_port_hw *port, u64 *idx)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_capability capability;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- capability.csr = readq(&port_hdr->capability);
- *idx = capability.port_number;
-
- return 0;
-}
-
-static int port_get_latency_tolerance(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_control control;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- control.csr = readq(&port_hdr->control);
- *val = control.latency_tolerance;
-
- return 0;
-}
-
-static int port_get_ap1_event(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_status status;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- status.csr = readq(&port_hdr->status);
- spinlock_unlock(&port->lock);
-
- *val = status.ap1_event;
-
- return 0;
-}
-
-static int port_set_ap1_event(struct ifpga_port_hw *port, u64 val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_status status;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- status.csr = readq(&port_hdr->status);
- status.ap1_event = val;
- writeq(status.csr, &port_hdr->status);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_get_ap2_event(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_status status;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- status.csr = readq(&port_hdr->status);
- spinlock_unlock(&port->lock);
-
- *val = status.ap2_event;
-
- return 0;
-}
-
-static int port_set_ap2_event(struct ifpga_port_hw *port, u64 val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_status status;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- status.csr = readq(&port_hdr->status);
- status.ap2_event = val;
- writeq(status.csr, &port_hdr->status);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_get_power_state(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
- struct feature_port_status status;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- status.csr = readq(&port_hdr->status);
- spinlock_unlock(&port->lock);
-
- *val = status.power_state;
-
- return 0;
-}
-
-static int port_get_userclk_freqcmd(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- *val = readq(&port_hdr->user_clk_freq_cmd0);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_set_userclk_freqcmd(struct ifpga_port_hw *port, u64 val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- writeq(val, &port_hdr->user_clk_freq_cmd0);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_get_userclk_freqcntrcmd(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- *val = readq(&port_hdr->user_clk_freq_cmd1);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_set_userclk_freqcntrcmd(struct ifpga_port_hw *port, u64 val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- writeq(val, &port_hdr->user_clk_freq_cmd1);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_get_userclk_freqsts(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- *val = readq(&port_hdr->user_clk_freq_sts0);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_get_userclk_freqcntrsts(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_header *port_hdr;
-
- port_hdr = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_HEADER);
-
- spinlock_lock(&port->lock);
- *val = readq(&port_hdr->user_clk_freq_sts1);
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static int port_hdr_init(struct ifpga_feature *feature)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- dev_info(NULL, "port hdr Init.\n");
-
- fpga_port_reset(port);
-
- return 0;
-}
-
-static void port_hdr_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "port hdr uinit.\n");
-}
-
-static int port_hdr_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- switch (prop->prop_id) {
- case PORT_HDR_PROP_REVISION:
- return port_get_revision(port, &prop->data);
- case PORT_HDR_PROP_PORTIDX:
- return port_get_portidx(port, &prop->data);
- case PORT_HDR_PROP_LATENCY_TOLERANCE:
- return port_get_latency_tolerance(port, &prop->data);
- case PORT_HDR_PROP_AP1_EVENT:
- return port_get_ap1_event(port, &prop->data);
- case PORT_HDR_PROP_AP2_EVENT:
- return port_get_ap2_event(port, &prop->data);
- case PORT_HDR_PROP_POWER_STATE:
- return port_get_power_state(port, &prop->data);
- case PORT_HDR_PROP_USERCLK_FREQCMD:
- return port_get_userclk_freqcmd(port, &prop->data);
- case PORT_HDR_PROP_USERCLK_FREQCNTRCMD:
- return port_get_userclk_freqcntrcmd(port, &prop->data);
- case PORT_HDR_PROP_USERCLK_FREQSTS:
- return port_get_userclk_freqsts(port, &prop->data);
- case PORT_HDR_PROP_USERCLK_CNTRSTS:
- return port_get_userclk_freqcntrsts(port, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int port_hdr_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- switch (prop->prop_id) {
- case PORT_HDR_PROP_AP1_EVENT:
- return port_set_ap1_event(port, prop->data);
- case PORT_HDR_PROP_AP2_EVENT:
- return port_set_ap2_event(port, prop->data);
- case PORT_HDR_PROP_USERCLK_FREQCMD:
- return port_set_userclk_freqcmd(port, prop->data);
- case PORT_HDR_PROP_USERCLK_FREQCNTRCMD:
- return port_set_userclk_freqcntrcmd(port, prop->data);
- }
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops ifpga_rawdev_port_hdr_ops = {
- .init = port_hdr_init,
- .uinit = port_hdr_uinit,
- .get_prop = port_hdr_get_prop,
- .set_prop = port_hdr_set_prop,
-};
-
-static int port_stp_init(struct ifpga_feature *feature)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- dev_info(NULL, "port stp Init.\n");
-
- spinlock_lock(&port->lock);
- port->stp_addr = feature->addr;
- port->stp_size = feature->size;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static void port_stp_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "port stp uinit.\n");
-}
-
-struct ifpga_feature_ops ifpga_rawdev_port_stp_ops = {
- .init = port_stp_init,
- .uinit = port_stp_uinit,
-};
-
-static int port_uint_init(struct ifpga_feature *feature)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- dev_info(NULL, "PORT UINT Init.\n");
-
- spinlock_lock(&port->lock);
- if (feature->ctx_num) {
- port->capability |= FPGA_PORT_CAP_UAFU_IRQ;
- port->num_uafu_irqs = feature->ctx_num;
- }
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static void port_uint_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "PORT UINT UInit.\n");
-}
-
-struct ifpga_feature_ops ifpga_rawdev_port_uint_ops = {
- .init = port_uint_init,
- .uinit = port_uint_uinit,
-};
-
-static int port_afu_init(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "PORT AFU Init.\n");
-
- return 0;
-}
-
-static void port_afu_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-
- dev_info(NULL, "PORT AFU UInit.\n");
-}
-
-struct ifpga_feature_ops ifpga_rawdev_port_afu_ops = {
- .init = port_afu_init,
- .uinit = port_afu_uinit,
-};
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "ifpga_feature_dev.h"
-
-static int port_err_get_revision(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_error *port_err;
- struct feature_header header;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
- header.csr = readq(&port_err->header);
- *val = header.revision;
-
- return 0;
-}
-
-static int port_err_get_errors(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_error *port_err;
- struct feature_port_err_key error;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
- error.csr = readq(&port_err->port_error);
- *val = error.csr;
-
- return 0;
-}
-
-static int port_err_get_first_error(struct ifpga_port_hw *port, u64 *val)
-{
- struct feature_port_error *port_err;
- struct feature_port_first_err_key first_error;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
- first_error.csr = readq(&port_err->port_first_error);
- *val = first_error.csr;
-
- return 0;
-}
-
-static int port_err_get_first_malformed_req_lsb(struct ifpga_port_hw *port,
- u64 *val)
-{
- struct feature_port_error *port_err;
- struct feature_port_malformed_req0 malreq0;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
-
- malreq0.header_lsb = readq(&port_err->malreq0);
- *val = malreq0.header_lsb;
-
- return 0;
-}
-
-static int port_err_get_first_malformed_req_msb(struct ifpga_port_hw *port,
- u64 *val)
-{
- struct feature_port_error *port_err;
- struct feature_port_malformed_req1 malreq1;
-
- port_err = get_port_feature_ioaddr_by_index(port,
- PORT_FEATURE_ID_ERROR);
-
- malreq1.header_msb = readq(&port_err->malreq1);
- *val = malreq1.header_msb;
-
- return 0;
-}
-
-static int port_err_set_clear(struct ifpga_port_hw *port, u64 val)
-{
- int ret;
-
- spinlock_lock(&port->lock);
- ret = port_err_clear(port, val);
- spinlock_unlock(&port->lock);
-
- return ret;
-}
-
-static int port_error_init(struct ifpga_feature *feature)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- dev_info(NULL, "port error Init.\n");
-
- spinlock_lock(&port->lock);
- port_err_mask(port, false);
- if (feature->ctx_num)
- port->capability |= FPGA_PORT_CAP_ERR_IRQ;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-static void port_error_uinit(struct ifpga_feature *feature)
-{
- UNUSED(feature);
-}
-
-static int port_error_get_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- switch (prop->prop_id) {
- case PORT_ERR_PROP_REVISION:
- return port_err_get_revision(port, &prop->data);
- case PORT_ERR_PROP_ERRORS:
- return port_err_get_errors(port, &prop->data);
- case PORT_ERR_PROP_FIRST_ERROR:
- return port_err_get_first_error(port, &prop->data);
- case PORT_ERR_PROP_FIRST_MALFORMED_REQ_LSB:
- return port_err_get_first_malformed_req_lsb(port, &prop->data);
- case PORT_ERR_PROP_FIRST_MALFORMED_REQ_MSB:
- return port_err_get_first_malformed_req_msb(port, &prop->data);
- }
-
- return -ENOENT;
-}
-
-static int port_error_set_prop(struct ifpga_feature *feature,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port = feature->parent;
-
- if (prop->prop_id == PORT_ERR_PROP_CLEAR)
- return port_err_set_clear(port, prop->data);
-
- return -ENOENT;
-}
-
-struct ifpga_feature_ops ifpga_rawdev_port_error_ops = {
- .init = port_error_init,
- .uinit = port_error_uinit,
- .get_prop = port_error_get_prop,
- .set_prop = port_error_set_prop,
-};
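-
-/*
- * Usage sketch (hypothetical, assuming a valid struct ifpga_port_hw *port):
- * read the latched error bits, then clear them back through the same
- * property interface:
- *
- *	struct feature_prop prop = {
- *		.feature_id = PORT_FEATURE_ID_ERROR,
- *		.prop_id = PORT_ERR_PROP_ERRORS,
- *	};
- *
- *	if (!port_get_prop(port, &prop)) {
- *		prop.prop_id = PORT_ERR_PROP_CLEAR;
- *		port_set_prop(port, &prop);
- *	}
- */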
+++ /dev/null
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-sources = [
- 'ifpga_api.c',
- 'ifpga_enumerate.c',
- 'ifpga_feature_dev.c',
- 'ifpga_fme.c',
- 'ifpga_fme_iperf.c',
- 'ifpga_fme_dperf.c',
- 'ifpga_fme_error.c',
- 'ifpga_port.c',
- 'ifpga_port_error.c',
- 'ifpga_fme_pr.c',
- 'opae_hw_api.c',
- 'opae_ifpga_hw_api.c',
- 'opae_debug.c',
- 'opae_spi.c',
- 'opae_spi_transaction.c',
- 'opae_intel_max10.c',
- 'opae_i2c.c',
- 'opae_at24_eeprom.c',
- 'opae_eth_group.c',
-]
-
-error_cflags = ['-Wno-sign-compare', '-Wno-unused-value',
- '-Wno-format', '-Wno-error=format-security',
- '-Wno-strict-aliasing', '-Wno-unused-but-set-variable'
-]
-c_args = cflags
-foreach flag: error_cflags
- if cc.has_argument(flag)
- c_args += flag
- endif
-endforeach
-
-base_lib = static_library('ifpga_rawdev_base', sources,
- dependencies: static_rte_eal,
- c_args: c_args)
-base_objs = base_lib.extract_all_objects()
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_osdep.h"
-#include "opae_i2c.h"
-#include "opae_at24_eeprom.h"
-
-#define AT24_READ_RETRY 10
-
-static int at24_eeprom_read_and_try(struct altera_i2c_dev *dev,
- unsigned int slave_addr,
- u32 offset, u8 *buf, u32 len)
-{
- int i;
- int ret = 0;
-
- for (i = 0; i < AT24_READ_RETRY; i++) {
- ret = i2c_read16(dev, slave_addr, offset,
- buf, len);
- if (ret == 0)
- break;
-
- opae_udelay(100);
- }
-
- return ret;
-}
-
-int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr,
-		u32 offset, u8 *buf, int count)
-{
-	int len;
-	int status;
-	int read_count = 0;
-
-	if (!count)
-		return count;
-
-	while (count) {
-		if (count > AT24C512_IO_LIMIT)
-			len = AT24C512_IO_LIMIT;
-		else
-			len = count;
-
-		status = at24_eeprom_read_and_try(dev, slave_addr, offset,
-				buf, len);
-		if (status)
-			break;
-
-		buf += len;
-		offset += len;
-		count -= len;
-		read_count += len;
-	}
-
-	return read_count;
-}
-
-int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr,
-		u32 offset, u8 *buf, int count)
-{
-	int len;
-	int status;
-	int write_count = 0;
-
-	if (!count)
-		return count;
-
-	while (count) {
-		if (count > AT24C512_PAGE_SIZE)
-			len = AT24C512_PAGE_SIZE;
-		else
-			len = count;
-
-		status = i2c_write16(dev, slave_addr, offset, buf, len);
-		if (status)
-			break;
-
-		buf += len;
-		offset += len;
-		count -= len;
-		write_count += len;
-	}
-
-	return write_count;
-}
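-
-/*
- * Usage sketch (hypothetical, assuming a probed struct altera_i2c_dev *dev):
- * read a 6-byte MAC address from offset 0 of the AT24 EEPROM:
- *
- *	u8 mac[6];
- *
- *	if (at24_eeprom_read(dev, AT24512_SLAVE_ADDR, 0, mac,
- *			sizeof(mac)) != sizeof(mac))
- *		dev_err(dev, "eeprom read failed\n");
- */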
-
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#ifndef _OPAE_AT24_EEPROM_H_
-#define _OPAE_AT24_EEPROM_H_
-
-struct altera_i2c_dev;
-
-#define AT24C512_PAGE_SIZE 128
-#define AT24C512_IO_LIMIT 128
-
-#define AT24512_SLAVE_ADDR 0x51
-
-int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr,
-		u32 offset, u8 *buf, int count);
-int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr,
-		u32 offset, u8 *buf, int count);
-
-#endif /* _OPAE_AT24_EEPROM_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#define OPAE_HW_DEBUG
-
-#include "opae_hw_api.h"
-#include "opae_debug.h"
-
-void opae_manager_dump(struct opae_manager *mgr)
-{
- opae_log("=====%s=====\n", __func__);
-	opae_log("OPAE Manager %s\n", mgr->name);
-	opae_log("OPAE Manager OPs = %p\n", mgr->ops);
- opae_log("OPAE Manager Private Data = %p\n", mgr->data);
- opae_log("OPAE Adapter(parent) = %p\n", mgr->adapter);
- opae_log("==========================\n");
-}
-
-void opae_bridge_dump(struct opae_bridge *br)
-{
- opae_log("=====%s=====\n", __func__);
- opae_log("OPAE Bridge %s\n", br->name);
- opae_log("OPAE Bridge ID = %d\n", br->id);
- opae_log("OPAE Bridge OPs = %p\n", br->ops);
- opae_log("OPAE Bridge Private Data = %p\n", br->data);
- opae_log("OPAE Accelerator(under this bridge) = %p\n", br->acc);
- opae_log("==========================\n");
-}
-
-void opae_accelerator_dump(struct opae_accelerator *acc)
-{
- opae_log("=====%s=====\n", __func__);
- opae_log("OPAE Accelerator %s\n", acc->name);
- opae_log("OPAE Accelerator Index = %d\n", acc->index);
- opae_log("OPAE Accelerator OPs = %p\n", acc->ops);
- opae_log("OPAE Accelerator Private Data = %p\n", acc->data);
- opae_log("OPAE Bridge (upstream) = %p\n", acc->br);
- opae_log("OPAE Manager (upstream) = %p\n", acc->mgr);
- opae_log("==========================\n");
-
- if (acc->br)
- opae_bridge_dump(acc->br);
-}
-
-static void opae_adapter_data_dump(void *data)
-{
- struct opae_adapter_data *d = data;
- struct opae_adapter_data_pci *d_pci;
- struct opae_reg_region *r;
- int i;
-
- opae_log("=====%s=====\n", __func__);
-
- switch (d->type) {
- case OPAE_FPGA_PCI:
- d_pci = (struct opae_adapter_data_pci *)d;
-
- opae_log("OPAE Adapter Type = PCI\n");
- opae_log("PCI Device ID: 0x%04x\n", d_pci->device_id);
- opae_log("PCI Vendor ID: 0x%04x\n", d_pci->vendor_id);
-
- for (i = 0; i < PCI_MAX_RESOURCE; i++) {
- r = &d_pci->region[i];
- opae_log("PCI Bar %d: phy(%llx) len(%llx) addr(%p)\n",
- i, (unsigned long long)r->phys_addr,
- (unsigned long long)r->len, r->addr);
- }
- break;
- case OPAE_FPGA_NET:
- break;
- }
-
- opae_log("==========================\n");
-}
-
-void opae_adapter_dump(struct opae_adapter *adapter, int verbose)
-{
- struct opae_accelerator *acc;
-
- if (verbose) {
- opae_log("=====%s=====\n", __func__);
- opae_log("OPAE Adapter %s\n", adapter->name);
- opae_log("OPAE Adapter OPs = %p\n", adapter->ops);
- opae_log("OPAE Adapter Private Data = %p\n", adapter->data);
- opae_log("OPAE Manager (downstream) = %p\n", adapter->mgr);
-
- if (adapter->mgr)
- opae_manager_dump(adapter->mgr);
-
- opae_adapter_for_each_acc(adapter, acc)
- opae_accelerator_dump(acc);
-
- if (adapter->data)
- opae_adapter_data_dump(adapter->data);
-
- opae_log("==========================\n");
- }
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OPAE_DEBUG_H_
-#define _OPAE_DEBUG_H_
-
-#ifdef OPAE_HW_DEBUG
-#define opae_log(fmt, args...) printf(fmt, ## args)
-#else
-#define opae_log(fmt, args...) do {} while (0)
-#endif
-
-void opae_manager_dump(struct opae_manager *mgr);
-void opae_bridge_dump(struct opae_bridge *br);
-void opae_accelerator_dump(struct opae_accelerator *acc);
-void opae_adapter_dump(struct opae_adapter *adapter, int verbose);
-
-#endif /* _OPAE_DEBUG_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_osdep.h"
-#include "opae_eth_group.h"
-
-#define DATA_VAL_INVL 1 /* us */
-#define DATA_VAL_POLL_TIMEOUT 10 /* us */
-
-static const char *eth_type_to_string(u8 type)
-{
- switch (type) {
- case ETH_GROUP_PHY:
- return "phy";
- case ETH_GROUP_MAC:
- return "mac";
- case ETH_GROUP_ETHER:
- return "ethernet wrapper";
- }
-
- return "unknown";
-}
-
-static int eth_group_get_select(struct eth_group_device *dev,
- u8 type, u8 index, u8 *select)
-{
-	/*
-	 * In different speed configurations, the device select values of
-	 * the PHYs and MACs are different.
-	 *
-	 * 1 ethernet wrapper -> Device Select 0x0 - fixed value
-	 * n PHYs             -> Device Select 0x2,4,6,8,A,C,E,10,...
-	 * n MACs             -> Device Select 0x3,5,7,9,B,D,F,11,...
-	 */
-
- if (type == ETH_GROUP_PHY && index < dev->phy_num)
- *select = index * 2 + 2;
- else if (type == ETH_GROUP_MAC && index < dev->mac_num)
- *select = index * 2 + 3;
- else if (type == ETH_GROUP_ETHER && index == 0)
- *select = 0;
- else
- return -EINVAL;
-
- return 0;
-}
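-
-/*
- * For example, in a group with 2 PHYs and 2 MACs the mapping above yields
- * PHY0 -> 0x2, PHY1 -> 0x4, MAC0 -> 0x3 and MAC1 -> 0x5, while the
- * ethernet wrapper is always device select 0x0.
- */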
-
-int eth_group_write_reg(struct eth_group_device *dev,
- u8 type, u8 index, u16 addr, u32 data)
-{
- u8 dev_select = 0;
- u64 v = 0;
- int ret;
-
- dev_debug(dev, "%s type %s index %u addr 0x%x\n",
- __func__, eth_type_to_string(type), index, addr);
-
- /* find device select */
- ret = eth_group_get_select(dev, type, index, &dev_select);
- if (ret)
- return ret;
-
-	v = CMD_WR << CTRL_CMD_SHIFT |
- (u64)dev_select << CTRL_DS_SHIFT |
- (u64)addr << CTRL_ADDR_SHIFT |
- (data & CTRL_WR_DATA);
-
- /* only PHY has additional feature bit */
- if (type == ETH_GROUP_PHY)
- v |= CTRL_FEAT_SELECT;
-
- opae_writeq(v, dev->base + ETH_GROUP_CTRL);
-
- return 0;
-}
-
-int eth_group_read_reg(struct eth_group_device *dev,
- u8 type, u8 index, u16 addr, u32 *data)
-{
- u8 dev_select = 0;
- u64 v = 0;
- int ret;
-
- dev_debug(dev, "%s type %s index %u addr 0x%x\n",
- __func__, eth_type_to_string(type), index,
- addr);
-
- /* find device select */
- ret = eth_group_get_select(dev, type, index, &dev_select);
- if (ret)
- return ret;
-
-	v = CMD_RD << CTRL_CMD_SHIFT |
- (u64)dev_select << CTRL_DS_SHIFT |
- (u64)addr << CTRL_ADDR_SHIFT;
-
- /* only PHY has additional feature bit */
- if (type == ETH_GROUP_PHY)
- v |= CTRL_FEAT_SELECT;
-
- opae_writeq(v, dev->base + ETH_GROUP_CTRL);
-
- if (opae_readq_poll_timeout(dev->base + ETH_GROUP_STAT,
- v, v & STAT_DATA_VAL, DATA_VAL_INVL,
- DATA_VAL_POLL_TIMEOUT))
- return -ETIMEDOUT;
-
- *data = (v & STAT_RD_DATA);
-
- dev_debug(dev, "%s data 0x%x\n", __func__, *data);
-
- return 0;
-}
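-
-/*
- * Usage sketch (hypothetical, assuming a probed struct eth_group_device
- * *dev): read the MAC_CONFIG register of MAC 0 and check its reset bits:
- *
- *	u32 cfg;
- *
- *	if (!eth_group_read_reg(dev, ETH_GROUP_MAC, 0, MAC_CONFIG, &cfg) &&
- *	    (cfg & MAC_RESET_MASK))
- *		dev_info(dev, "mac 0 is held in reset\n");
- */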
-
-static int eth_group_reset_mac(struct eth_group_device *dev, u8 index,
- bool enable)
-{
- u32 val;
- int ret;
-
-	/*
-	 * Only 25G & 40G MAC reset is supported for now, using the internal
-	 * reset. As the PHY and MAC are integrated together, the action
-	 * below triggers a PHY reset too.
-	 */
-	if (dev->speed != 25 && dev->speed != 40)
-		return 0;
-
-	ret = eth_group_read_reg(dev, ETH_GROUP_MAC, index, MAC_CONFIG,
-			&val);
-	if (ret) {
-		dev_err(dev, "fail to read MAC_CONFIG: %d\n", ret);
-		return ret;
-	}
-
- /* skip if mac is in expected state already */
- if ((((val & MAC_RESET_MASK) == MAC_RESET_MASK) && enable) ||
- (((val & MAC_RESET_MASK) == 0) && !enable))
- return 0;
-
- if (enable)
- val |= MAC_RESET_MASK;
- else
- val &= ~MAC_RESET_MASK;
-
-	ret = eth_group_write_reg(dev, ETH_GROUP_MAC, index, MAC_CONFIG,
-			val);
-	if (ret)
-		dev_err(dev, "fail to write MAC_CONFIG: %d\n", ret);
-
- return ret;
-}
-
-static void eth_group_mac_uinit(struct eth_group_device *dev)
-{
- u8 i;
-
- for (i = 0; i < dev->mac_num; i++) {
- if (eth_group_reset_mac(dev, i, true))
- dev_err(dev, "fail to disable mac %d\n", i);
- }
-}
-
-static int eth_group_mac_init(struct eth_group_device *dev)
-{
- int ret;
- u8 i;
-
- for (i = 0; i < dev->mac_num; i++) {
- ret = eth_group_reset_mac(dev, i, false);
- if (ret) {
- dev_err(dev, "fail to enable mac %d\n", i);
- goto exit;
- }
- }
-
- return 0;
-
-exit:
- while (i--)
- eth_group_reset_mac(dev, i, true);
-
- return ret;
-}
-
-static int eth_group_reset_phy(struct eth_group_device *dev, u8 index,
- bool enable)
-{
- u32 val;
- int ret;
-
- /* only support 10G PHY reset for now. It uses external reset. */
- if (dev->speed != 10)
- return 0;
-
- ret = eth_group_read_reg(dev, ETH_GROUP_PHY, index,
- ADD_PHY_CTRL, &val);
- if (ret) {
- dev_err(dev, "fail to read ADD_PHY_CTRL reg: %d\n", ret);
- return ret;
- }
-
- /* return if PHY is already in expected state */
- if ((val & PHY_RESET && enable) || (!(val & PHY_RESET) && !enable))
- return 0;
-
- if (enable)
- val |= PHY_RESET;
- else
- val &= ~PHY_RESET;
-
- ret = eth_group_write_reg(dev, ETH_GROUP_PHY, index,
- ADD_PHY_CTRL, val);
- if (ret)
- dev_err(dev, "fail to write ADD_PHY_CTRL reg: %d\n", ret);
-
- return ret;
-}
-
-static int eth_group_phy_init(struct eth_group_device *dev)
-{
- int ret;
- int i;
-
- for (i = 0; i < dev->phy_num; i++) {
- ret = eth_group_reset_phy(dev, i, false);
- if (ret) {
- dev_err(dev, "fail to enable phy %d\n", i);
- goto exit;
- }
- }
-
- return 0;
-exit:
- while (i--)
- eth_group_reset_phy(dev, i, true);
-
- return ret;
-}
-
-static void eth_group_phy_uinit(struct eth_group_device *dev)
-{
- int i;
-
- for (i = 0; i < dev->phy_num; i++) {
- if (eth_group_reset_phy(dev, i, true))
- dev_err(dev, "fail to disable phy %d\n", i);
- }
-}
-
-static int eth_group_hw_init(struct eth_group_device *dev)
-{
- int ret;
-
- ret = eth_group_phy_init(dev);
- if (ret) {
- dev_err(dev, "fail to init eth group phys\n");
- return ret;
- }
-
-	ret = eth_group_mac_init(dev);
-	if (ret) {
-		dev_err(dev, "fail to init eth group macs\n");
-		goto phy_exit;
-	}
-
- return 0;
-
-phy_exit:
- eth_group_phy_uinit(dev);
- return ret;
-}
-
-static void eth_group_hw_uinit(struct eth_group_device *dev)
-{
- eth_group_mac_uinit(dev);
- eth_group_phy_uinit(dev);
-}
-
-struct eth_group_device *eth_group_probe(void *base)
-{
- struct eth_group_device *dev;
-
- dev = opae_malloc(sizeof(*dev));
- if (!dev)
- return NULL;
-
- dev->base = (u8 *)base;
-
- dev->info.info = opae_readq(dev->base + ETH_GROUP_INFO);
- dev->group_id = dev->info.group_id;
- dev->phy_num = dev->mac_num = dev->info.num_phys;
- dev->speed = dev->info.speed;
-
- dev->status = ETH_GROUP_DEV_ATTACHED;
-
-	if (eth_group_hw_init(dev)) {
-		dev_err(dev, "eth group hw init fail\n");
-		opae_free(dev);
-		return NULL;
-	}
-
-	dev_info(dev, "eth group device %d probe done: phy_num=mac_num=%d, speed=%d\n",
-		dev->group_id, dev->phy_num, dev->speed);
-
- return dev;
-}
-
-void eth_group_release(struct eth_group_device *dev)
-{
-	if (dev) {
-		eth_group_hw_uinit(dev);
-		dev->status = ETH_GROUP_DEV_NOUSED;
-		opae_free(dev);
-	}
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#ifndef _OPAE_PHY_MAC_H
-#define _OPAE_PHY_MAC_H
-
-#include "opae_osdep.h"
-
-#define MAX_ETH_GROUP_DEVICES 2
-
-#define LINE_SIDE_GROUP_ID 0
-#define HOST_SIDE_GROUP_ID 1
-
-#define ETH_GROUP_SELECT_FEAT 1
-
-#define ETH_GROUP_PHY 1
-#define ETH_GROUP_MAC 2
-#define ETH_GROUP_ETHER 3
-
-#define ETH_GROUP_INFO 0x8
-#define INFO_SPEED GENMASK_ULL(23, 16)
-#define ETH_SPEED_10G 10
-#define ETH_SPEED_25G 25
-#define INFO_PHY_NUM GENMASK_ULL(15, 8)
-#define INFO_GROUP_NUM GENMASK_ULL(7, 0)
-
-#define ETH_GROUP_CTRL 0x10
-#define CTRL_CMD GENMASK_ULL(63, 62)
-#define CTRL_CMD_SHIFT 62
-#define CMD_NOP 0ULL
-#define CMD_RD 1ULL
-#define CMD_WR 2ULL
-#define CTRL_DEV_SELECT GENMASK_ULL(53, 49)
-#define CTRL_DS_SHIFT 49
-#define CTRL_FEAT_SELECT BIT_ULL(48)
-#define SELECT_IP 0
-#define SELECT_FEAT 1
-#define CTRL_ADDR GENMASK_ULL(47, 32)
-#define CTRL_ADDR_SHIFT 32
-#define CTRL_WR_DATA GENMASK_ULL(31, 0)
-
-#define ETH_GROUP_STAT 0x18
-#define STAT_DATA_VAL BIT_ULL(32)
-#define STAT_RD_DATA GENMASK_ULL(31, 0)
-
-/* Additional Feature Register */
-#define ADD_PHY_CTRL 0x0
-#define PHY_RESET BIT(0)
-#define MAC_CONFIG 0x310
-#define MAC_RESET_MASK GENMASK(2, 0)
-
-struct opae_eth_group_info {
- u8 group_id;
- u8 speed;
- u8 nums_of_phy;
- u8 nums_of_mac;
-};
-
-struct opae_eth_group_region_info {
- u8 group_id;
- u64 phys_addr;
- u64 len;
- u8 *addr;
- u8 mem_idx;
-};
-
-struct eth_group_info_reg {
- union {
- u64 info;
- struct {
- u8 group_id:8;
- u8 num_phys:8;
- u8 speed:8;
- u8 direction:1;
- u64 resvd:39;
- };
- };
-};
-
-enum eth_group_status {
- ETH_GROUP_DEV_NOUSED = 0,
- ETH_GROUP_DEV_ATTACHED,
-};
-
-struct eth_group_device {
- u8 *base;
- struct eth_group_info_reg info;
- enum eth_group_status status;
- u8 speed;
- u8 group_id;
- u8 phy_num;
- u8 mac_num;
-};
-
-struct eth_group_device *eth_group_probe(void *base);
-void eth_group_release(struct eth_group_device *dev);
-int eth_group_read_reg(struct eth_group_device *dev,
- u8 type, u8 index, u16 addr, u32 *data);
-int eth_group_write_reg(struct eth_group_device *dev,
- u8 type, u8 index, u16 addr, u32 data);
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "opae_hw_api.h"
-#include "opae_debug.h"
-#include "ifpga_api.h"
-
-/* OPAE Bridge Functions */
-
-/**
- * opae_bridge_alloc - alloc opae_bridge data structure
- * @name: bridge name.
- * @ops: ops of this bridge.
- * @data: private data of this bridge.
- *
- * Return: opae_bridge on success, otherwise NULL.
- */
-struct opae_bridge *
-opae_bridge_alloc(const char *name, struct opae_bridge_ops *ops, void *data)
-{
- struct opae_bridge *br = opae_zmalloc(sizeof(*br));
-
- if (!br)
- return NULL;
-
- br->name = name;
- br->ops = ops;
- br->data = data;
-
- opae_log("%s %p\n", __func__, br);
-
- return br;
-}
-
-/**
- * opae_bridge_reset - reset opae_bridge
- * @br: bridge to be reset.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_bridge_reset(struct opae_bridge *br)
-{
- if (!br)
- return -EINVAL;
-
- if (br->ops && br->ops->reset)
- return br->ops->reset(br);
-
- opae_log("%s no ops\n", __func__);
-
- return -ENOENT;
-}
-
-/* Accelerator Functions */
-
-/**
- * opae_accelerator_alloc - alloc opae_accelerator data structure
- * @name: accelerator name.
- * @ops: ops of this accelerator.
- * @data: private data of this accelerator.
- *
- * Return: opae_accelerator on success, otherwise NULL.
- */
-struct opae_accelerator *
-opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
- void *data)
-{
- struct opae_accelerator *acc = opae_zmalloc(sizeof(*acc));
-
- if (!acc)
- return NULL;
-
- acc->name = name;
- acc->ops = ops;
- acc->data = data;
-
- opae_log("%s %p\n", __func__, acc);
-
- return acc;
-}
-
-/**
- * opae_acc_reg_read - read accelerator's register from its reg region.
- * @acc: accelerator to read.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: read operation width, e.g. 4 bytes = 32-bit read.
- * @data: data to store the value read from the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data)
-{
- if (!acc || !data)
- return -EINVAL;
-
- if (acc->ops && acc->ops->read)
- return acc->ops->read(acc, region_idx, offset, byte, data);
-
- return -ENOENT;
-}
-
-/**
- * opae_acc_reg_write - write to accelerator's register from its reg region.
- * @acc: accelerator to write.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: write operation width, e.g. 4 bytes = 32-bit write.
- * @data: data stored the value to write to the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data)
-{
- if (!acc || !data)
- return -EINVAL;
-
- if (acc->ops && acc->ops->write)
- return acc->ops->write(acc, region_idx, offset, byte, data);
-
- return -ENOENT;
-}
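-
-/*
- * Usage sketch (hypothetical, assuming a valid struct opae_accelerator
- * *acc): read a 64-bit CSR at offset 0x0 of region 0, then write a 32-bit
- * value at offset 0x10:
- *
- *	u64 csr;
- *	u32 ctrl = 0x1;
- *
- *	if (!opae_acc_reg_read(acc, 0, 0x0, 8, &csr))
- *		opae_acc_reg_write(acc, 0, 0x10, 4, &ctrl);
- */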
-
-/**
- * opae_acc_get_info - get information of an accelerator.
- * @acc: targeted accelerator
- * @info: accelerator info data structure to be filled.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_get_info(struct opae_accelerator *acc, struct opae_acc_info *info)
-{
- if (!acc || !info)
- return -EINVAL;
-
- if (acc->ops && acc->ops->get_info)
- return acc->ops->get_info(acc, info);
-
- return -ENOENT;
-}
-
-/**
- * opae_acc_get_region_info - get information of an accelerator register region.
- * @acc: targeted accelerator
- * @info: accelerator region info data structure to be filled.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_get_region_info(struct opae_accelerator *acc,
- struct opae_acc_region_info *info)
-{
- if (!acc || !info)
- return -EINVAL;
-
- if (acc->ops && acc->ops->get_region_info)
- return acc->ops->get_region_info(acc, info);
-
- return -ENOENT;
-}
-
-/**
- * opae_acc_set_irq - set an accelerator's irq.
- * @acc: targeted accelerator
- * @start: start vector number
- * @count: count of vectors to be set from the start vector
- * @evtfds: event fds to be notified when corresponding irqs happens
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_set_irq(struct opae_accelerator *acc,
- u32 start, u32 count, s32 evtfds[])
-{
- if (!acc || !acc->data)
- return -EINVAL;
-
- if (start + count <= start)
- return -EINVAL;
-
- if (acc->ops && acc->ops->set_irq)
- return acc->ops->set_irq(acc, start, count, evtfds);
-
- return -ENOENT;
-}
-
-/**
- * opae_acc_get_uuid - get accelerator's UUID.
- * @acc: targeted accelerator
- * @uuid: a pointer to UUID
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_get_uuid(struct opae_accelerator *acc,
- struct uuid *uuid)
-{
- if (!acc || !uuid)
- return -EINVAL;
-
- if (acc->ops && acc->ops->get_uuid)
- return acc->ops->get_uuid(acc, uuid);
-
- return -ENOENT;
-}
-
-/* Manager Functions */
-
-/**
- * opae_manager_alloc - alloc opae_manager data structure
- * @name: manager name.
- * @ops: ops of this manager.
- * @network_ops: ops of network management.
- * @data: private data of this manager.
- *
- * Return: opae_manager on success, otherwise NULL.
- */
-struct opae_manager *
-opae_manager_alloc(const char *name, struct opae_manager_ops *ops,
- struct opae_manager_networking_ops *network_ops, void *data)
-{
- struct opae_manager *mgr = opae_zmalloc(sizeof(*mgr));
-
- if (!mgr)
- return NULL;
-
- mgr->name = name;
- mgr->ops = ops;
- mgr->network_ops = network_ops;
- mgr->data = data;
-
- opae_log("%s %p\n", __func__, mgr);
-
- return mgr;
-}
-
-/**
- * opae_manager_flash - flash a reconfiguration image via opae_manager
- * @mgr: opae_manager for flash.
- * @id: id of target region (accelerator).
- * @buf: image data buffer.
- * @size: buffer size.
- * @status: status to store flash result.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_manager_flash(struct opae_manager *mgr, int id, const char *buf,
- u32 size, u64 *status)
-{
- if (!mgr)
- return -EINVAL;
-
-	if (mgr->ops && mgr->ops->flash)
- return mgr->ops->flash(mgr, id, buf, size, status);
-
- return -ENOENT;
-}
-
-/* Adapter Functions */
-
-/**
- * opae_adapter_data_alloc - alloc opae_adapter_data data structure
- * @type: opae_adapter_type.
- *
- * Return: opae_adapter_data on success, otherwise NULL.
- */
-void *opae_adapter_data_alloc(enum opae_adapter_type type)
-{
- struct opae_adapter_data *data;
- int size;
-
- switch (type) {
- case OPAE_FPGA_PCI:
- size = sizeof(struct opae_adapter_data_pci);
- break;
- case OPAE_FPGA_NET:
- size = sizeof(struct opae_adapter_data_net);
- break;
- default:
- size = sizeof(struct opae_adapter_data);
- break;
- }
-
- data = opae_zmalloc(size);
- if (!data)
- return NULL;
-
- data->type = type;
-
- return data;
-}
-
-static struct opae_adapter_ops *match_ops(struct opae_adapter *adapter)
-{
- struct opae_adapter_data *data;
-
- if (!adapter || !adapter->data)
- return NULL;
-
- data = adapter->data;
-
- if (data->type == OPAE_FPGA_PCI)
- return &ifpga_adapter_ops;
-
- return NULL;
-}
-
-/**
- * opae_adapter_init - init opae_adapter data structure
- * @adapter: pointer of opae_adapter data structure
- * @name: adapter name.
- * @data: private data of this adapter.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_adapter_init(struct opae_adapter *adapter,
- const char *name, void *data)
-{
-	if (!adapter)
-		return -EINVAL;
-
- TAILQ_INIT(&adapter->acc_list);
- adapter->data = data;
- adapter->name = name;
- adapter->ops = match_ops(adapter);
-
- return 0;
-}
-
-/**
- * opae_adapter_enumerate - enumerate this adapter
- * @adapter: adapter to enumerate.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_adapter_enumerate(struct opae_adapter *adapter)
-{
- int ret = -ENOENT;
-
- if (!adapter)
- return -EINVAL;
-
- if (adapter->ops && adapter->ops->enumerate)
- ret = adapter->ops->enumerate(adapter);
-
- if (!ret)
- opae_adapter_dump(adapter, 0);
-
- return ret;
-}
-
-/**
- * opae_adapter_destroy - destroy this adapter
- * @adapter: adapter to destroy.
- *
- * destroy things allocated during adapter enumeration.
- */
-void opae_adapter_destroy(struct opae_adapter *adapter)
-{
- if (adapter && adapter->ops && adapter->ops->destroy)
- adapter->ops->destroy(adapter);
-}
-
-/**
- * opae_adapter_get_acc - find and return accelerator with matched id
- * @adapter: adapter to find the accelerator.
- * @acc_id: id (index) of the accelerator.
- *
- * Return: the accelerator with the matched id, otherwise NULL.
- */
-struct opae_accelerator *
-opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id)
-{
- struct opae_accelerator *acc = NULL;
-
- if (!adapter)
- return NULL;
-
- opae_adapter_for_each_acc(adapter, acc)
- if (acc->index == acc_id)
- return acc;
-
- return NULL;
-}
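-
-/*
- * Usage sketch (hypothetical names and ids): typical adapter bring-up on
- * top of a PCI device, using only the APIs above:
- *
- *	struct opae_adapter adapter;
- *	struct opae_accelerator *acc;
- *	void *data = opae_adapter_data_alloc(OPAE_FPGA_PCI);
- *
- *	(fill in the regions and ids of the opae_adapter_data_pci here)
- *
- *	if (!opae_adapter_init(&adapter, "ifpga", data) &&
- *	    !opae_adapter_enumerate(&adapter))
- *		acc = opae_adapter_get_acc(&adapter, 0);
- */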
-
-/**
- * opae_manager_read_mac_rom - read the content of the MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of the retimer
- * @addr: buffer for the MAC address
- *
- * Return: bytes read on success, otherwise error code
- */
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->read_mac_rom)
- return mgr->network_ops->read_mac_rom(mgr,
- port * sizeof(struct opae_ether_addr),
- addr, sizeof(struct opae_ether_addr));
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_write_mac_rom - write data into MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of the retimer
- * @addr: the MAC address to write
- *
- * Return: bytes written on success, otherwise error code
- */
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
-	if (mgr->network_ops->write_mac_rom)
- return mgr->network_ops->write_mac_rom(mgr,
- port * sizeof(struct opae_ether_addr),
- addr, sizeof(struct opae_ether_addr));
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_eth_group_nums - get the number of eth groups
- * @mgr: opae_manager for eth group
- *
- * Return: the number of eth groups on success, otherwise error code
- */
-int opae_manager_get_eth_group_nums(struct opae_manager *mgr)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
-	if (mgr->network_ops->get_eth_group_nums)
-		return mgr->network_ops->get_eth_group_nums(mgr);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_eth_group_info - get eth group info
- * @mgr: opae_manager for eth group
- * @group_id: id for eth group
- * @info: info return to caller
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_get_eth_group_info(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_info *info)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
-	if (mgr->network_ops->get_eth_group_info)
-		return mgr->network_ops->get_eth_group_info(mgr,
-				group_id, info);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_eth_group_region_info - get region info of an eth group
- * @mgr: opae_manager for eth group.
- * @group_id: id of the eth group.
- * @info: the memory region info for the eth group.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_region_info *info)
-{
- if (!mgr)
- return -EINVAL;
-
- if (group_id >= MAX_ETH_GROUP_DEVICES)
- return -EINVAL;
-
- info->group_id = group_id;
-
-	if (mgr->ops && mgr->ops->get_eth_group_region_info)
- return mgr->ops->get_eth_group_region_info(mgr, info);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_eth_group_read_reg - read ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: read buffer
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->eth_group_reg_read)
- return mgr->network_ops->eth_group_reg_read(mgr, group_id,
- type, index, addr, data);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_eth_group_write_reg - write ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: data to write to the register
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->eth_group_reg_write)
- return mgr->network_ops->eth_group_reg_write(mgr, group_id,
- type, index, addr, data);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_retimer_info - get retimer info like PKVL chip
- * @mgr: opae_manager for retimer
- * @info: info return to caller
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_get_retimer_info(struct opae_manager *mgr,
- struct opae_retimer_info *info)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->get_retimer_info)
- return mgr->network_ops->get_retimer_info(mgr, info);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_retimer_status - get retimer status
- * @mgr: opae_manager of retimer
- * @status: status of retimer
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_get_retimer_status(struct opae_manager *mgr,
- struct opae_retimer_status *status)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->get_retimer_status)
- return mgr->network_ops->get_retimer_status(mgr,
- status);
-
- return -ENOENT;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OPAE_HW_API_H_
-#define _OPAE_HW_API_H_
-
-#include <stdint.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <sys/queue.h>
-
-#include "opae_osdep.h"
-#include "opae_intel_max10.h"
-#include "opae_eth_group.h"
-
-#ifndef PCI_MAX_RESOURCE
-#define PCI_MAX_RESOURCE 6
-#endif
-
-struct opae_adapter;
-
-enum opae_adapter_type {
- OPAE_FPGA_PCI,
- OPAE_FPGA_NET,
-};
-
-/* OPAE Manager Data Structure */
-struct opae_manager_ops;
-struct opae_manager_networking_ops;
-
-/*
- * opae_manager has a pointer to its parent adapter, as it may be able to
- * manage all components on this FPGA device (adapter). If that is not the
- * case, do not set this adapter pointer, which limits the opae_manager ops
- * to the manager itself.
- */
-struct opae_manager {
- const char *name;
- struct opae_adapter *adapter;
- struct opae_manager_ops *ops;
- struct opae_manager_networking_ops *network_ops;
- void *data;
-};
-
-/* FIXME: add more management ops, e.g. power/thermal */
-struct opae_manager_ops {
- int (*flash)(struct opae_manager *mgr, int id, const char *buffer,
- u32 size, u64 *status);
- int (*get_eth_group_region_info)(struct opae_manager *mgr,
- struct opae_eth_group_region_info *info);
-};
-
-/* networking management ops in FME */
-struct opae_manager_networking_ops {
- int (*read_mac_rom)(struct opae_manager *mgr, int offset, void *buf,
- int size);
- int (*write_mac_rom)(struct opae_manager *mgr, int offset, void *buf,
- int size);
- int (*get_eth_group_nums)(struct opae_manager *mgr);
- int (*get_eth_group_info)(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_info *info);
- int (*eth_group_reg_read)(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data);
- int (*eth_group_reg_write)(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data);
- int (*get_retimer_info)(struct opae_manager *mgr,
- struct opae_retimer_info *info);
- int (*get_retimer_status)(struct opae_manager *mgr,
- struct opae_retimer_status *status);
-};
-
-/* OPAE Manager APIs */
-struct opae_manager *
-opae_manager_alloc(const char *name, struct opae_manager_ops *ops,
- struct opae_manager_networking_ops *network_ops, void *data);
-#define opae_manager_free(mgr) opae_free(mgr)
-int opae_manager_flash(struct opae_manager *mgr, int acc_id, const char *buf,
- u32 size, u64 *status);
-int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_region_info *info);
-
-/* OPAE Bridge Data Structure */
-struct opae_bridge_ops;
-
-/*
- * opae_bridge only has pointer to its downstream accelerator.
- */
-struct opae_bridge {
- const char *name;
- int id;
- struct opae_accelerator *acc;
- struct opae_bridge_ops *ops;
- void *data;
-};
-
-struct opae_bridge_ops {
- int (*reset)(struct opae_bridge *br);
-};
-
-/* OPAE Bridge APIs */
-struct opae_bridge *
-opae_bridge_alloc(const char *name, struct opae_bridge_ops *ops, void *data);
-int opae_bridge_reset(struct opae_bridge *br);
-#define opae_bridge_free(br) opae_free(br)
-
-/* OPAE Accelerator Data Structure */
-struct opae_accelerator_ops;
-
-/*
- * opae_accelerator has a pointer to its upstream bridge (port).
- * If the same user is allowed to do PR on its own accelerator, the manager
- * pointer may be set during enumeration. Otherwise, PR can only be done via
- * a manager in another module / thread / service / application for better
- * protection.
- */
-struct opae_accelerator {
- TAILQ_ENTRY(opae_accelerator) node;
- const char *name;
- int index;
- struct opae_bridge *br;
- struct opae_manager *mgr;
- struct opae_accelerator_ops *ops;
- void *data;
-};
-
-struct opae_acc_info {
- unsigned int num_regions;
- unsigned int num_irqs;
-};
-
-struct opae_acc_region_info {
- u32 flags;
-#define ACC_REGION_READ (1 << 0)
-#define ACC_REGION_WRITE (1 << 1)
-#define ACC_REGION_MMIO (1 << 2)
- u32 index;
- u64 phys_addr;
- u64 len;
- u8 *addr;
-};
-
-struct opae_accelerator_ops {
- int (*read)(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
- int (*write)(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
- int (*get_info)(struct opae_accelerator *acc,
- struct opae_acc_info *info);
- int (*get_region_info)(struct opae_accelerator *acc,
- struct opae_acc_region_info *info);
- int (*set_irq)(struct opae_accelerator *acc,
- u32 start, u32 count, s32 evtfds[]);
- int (*get_uuid)(struct opae_accelerator *acc,
- struct uuid *uuid);
-};
-
-/* OPAE accelerator APIs */
-struct opae_accelerator *
-opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
- void *data);
-#define opae_accelerator_free(acc) opae_free(acc)
-int opae_acc_get_info(struct opae_accelerator *acc, struct opae_acc_info *info);
-int opae_acc_get_region_info(struct opae_accelerator *acc,
- struct opae_acc_region_info *info);
-int opae_acc_set_irq(struct opae_accelerator *acc,
- u32 start, u32 count, s32 evtfds[]);
-int opae_acc_get_uuid(struct opae_accelerator *acc,
- struct uuid *uuid);
-
-static inline struct opae_bridge *
-opae_acc_get_br(struct opae_accelerator *acc)
-{
- return acc ? acc->br : NULL;
-}
-
-static inline struct opae_manager *
-opae_acc_get_mgr(struct opae_accelerator *acc)
-{
- return acc ? acc->mgr : NULL;
-}
-
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
-
-#define opae_acc_reg_read64(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 8, data)
-#define opae_acc_reg_write64(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 8, data)
-#define opae_acc_reg_read32(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 4, data)
-#define opae_acc_reg_write32(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 4, data)
-#define opae_acc_reg_read16(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 2, data)
-#define opae_acc_reg_write16(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 2, data)
-#define opae_acc_reg_read8(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 1, data)
-#define opae_acc_reg_write8(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 1, data)
-
-/* for data stream read/write */
-int opae_acc_data_read(struct opae_accelerator *acc, unsigned int flags,
- u64 offset, unsigned int byte, void *data);
-int opae_acc_data_write(struct opae_accelerator *acc, unsigned int flags,
- u64 offset, unsigned int byte, void *data);
-
-/* OPAE Adapter Data Structure */
-struct opae_adapter_data {
- enum opae_adapter_type type;
-};
-
-struct opae_reg_region {
- u64 phys_addr;
- u64 len;
- u8 *addr;
-};
-
-struct opae_adapter_data_pci {
- enum opae_adapter_type type;
- u16 device_id;
- u16 vendor_id;
- struct opae_reg_region region[PCI_MAX_RESOURCE];
- int vfio_dev_fd; /* VFIO device file descriptor */
-};
-
-/* FIXME: OPAE_FPGA_NET type */
-struct opae_adapter_data_net {
- enum opae_adapter_type type;
-};
-
-struct opae_adapter_ops {
- int (*enumerate)(struct opae_adapter *adapter);
- void (*destroy)(struct opae_adapter *adapter);
-};
-
-TAILQ_HEAD(opae_accelerator_list, opae_accelerator);
-
-#define opae_adapter_for_each_acc(adapter, acc) \
- TAILQ_FOREACH(acc, &adapter->acc_list, node)
-
-struct opae_adapter {
- const char *name;
- struct opae_manager *mgr;
- struct opae_accelerator_list acc_list;
- struct opae_adapter_ops *ops;
- void *data;
-};
-
-/* OPAE Adapter APIs */
-void *opae_adapter_data_alloc(enum opae_adapter_type type);
-#define opae_adapter_data_free(data) opae_free(data)
-
-int opae_adapter_init(struct opae_adapter *adapter,
- const char *name, void *data);
-#define opae_adapter_free(adapter) opae_free(adapter)
-
-int opae_adapter_enumerate(struct opae_adapter *adapter);
-void opae_adapter_destroy(struct opae_adapter *adapter);
-static inline struct opae_manager *
-opae_adapter_get_mgr(struct opae_adapter *adapter)
-{
- return adapter ? adapter->mgr : NULL;
-}
-
-struct opae_accelerator *
-opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id);
-
-static inline void opae_adapter_add_acc(struct opae_adapter *adapter,
- struct opae_accelerator *acc)
-{
- TAILQ_INSERT_TAIL(&adapter->acc_list, acc, node);
-}
-
-static inline void opae_adapter_remove_acc(struct opae_adapter *adapter,
- struct opae_accelerator *acc)
-{
- TAILQ_REMOVE(&adapter->acc_list, acc, node);
-}
-
-/* OPAE vBNG network datastruct */
-#define OPAE_ETHER_ADDR_LEN 6
-
-struct opae_ether_addr {
- unsigned char addr_bytes[OPAE_ETHER_ADDR_LEN];
-} __attribute__((__packed__));
-
-/* OPAE vBNG network API */
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr);
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr);
-int opae_manager_get_retimer_info(struct opae_manager *mgr,
- struct opae_retimer_info *info);
-int opae_manager_get_retimer_status(struct opae_manager *mgr,
- struct opae_retimer_status *status);
-int opae_manager_get_eth_group_nums(struct opae_manager *mgr);
-int opae_manager_get_eth_group_info(struct opae_manager *mgr,
- u8 group_id, struct opae_eth_group_info *info);
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data);
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data);
-#endif /* _OPAE_HW_API_H_*/
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_osdep.h"
-#include "opae_i2c.h"
-
-static int i2c_transfer(struct altera_i2c_dev *dev,
- struct i2c_msg *msg, int num)
-{
- int ret, try;
-
- for (ret = 0, try = 0; try < I2C_XFER_RETRY; try++) {
- ret = dev->xfer(dev, msg, num);
- if (ret != -EAGAIN)
- break;
- }
-
- return ret;
-}
-
-/**
- * i2c read function
- */
-int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
- u32 offset, u8 *buf, u32 count)
-{
- u8 msgbuf[2];
- int i = 0;
-
- if (flags & I2C_FLAG_ADDR16)
- msgbuf[i++] = offset >> 8;
-
- msgbuf[i++] = offset;
-
- struct i2c_msg msg[2] = {
- {
- .addr = slave_addr,
- .flags = 0,
- .len = i,
- .buf = msgbuf,
- },
- {
- .addr = slave_addr,
- .flags = I2C_M_RD,
- .len = count,
- .buf = buf,
- },
- };
-
- if (!dev->xfer)
- return -ENODEV;
-
- return i2c_transfer(dev, msg, 2);
-}
-
-int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
- u32 offset, u8 *buffer, int len)
-{
- struct i2c_msg msg;
- u8 *buf;
- int ret;
- int i = 0;
-
- if (!dev->xfer)
- return -ENODEV;
-
- buf = opae_malloc(I2C_MAX_OFFSET_LEN + len);
- if (!buf)
- return -ENOMEM;
-
- msg.addr = slave_addr;
- msg.flags = 0;
- msg.buf = buf;
-
- if (flags & I2C_FLAG_ADDR16)
- msg.buf[i++] = offset >> 8;
-
- msg.buf[i++] = offset;
- opae_memcpy(&msg.buf[i], buffer, len);
- msg.len = i + len;
-
- ret = i2c_transfer(dev, &msg, 1);
-
- opae_free(buf);
- return ret;
-}
-
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_read(dev, 0, slave_addr, offset, buf, count);
-}
-
-int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_read(dev, I2C_FLAG_ADDR16, slave_addr, offset,
- buf, count);
-}
-
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_write(dev, 0, slave_addr, offset, buf, count);
-}
-
-int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_write(dev, I2C_FLAG_ADDR16, slave_addr, offset,
- buf, count);
-}
-
-static void i2c_indirect_write(struct altera_i2c_dev *dev, u32 reg,
- u32 value)
-{
- u64 ctrl;
-
- ctrl = I2C_CTRL_W | (reg >> 2);
-
- opae_writeq(value & I2C_WRITE_DATA_MASK, dev->base + I2C_WRITE);
- opae_writeq(ctrl, dev->base + I2C_CTRL);
-}
-
-static u32 i2c_indirect_read(struct altera_i2c_dev *dev, u32 reg)
-{
- u64 tmp;
- u64 ctrl;
- u32 value;
-
- ctrl = I2C_CTRL_R | (reg >> 2);
- opae_writeq(ctrl, dev->base + I2C_CTRL);
-
- /* FIXME: Read one more time to avoid HW timing issue. */
- tmp = opae_readq(dev->base + I2C_READ);
- tmp = opae_readq(dev->base + I2C_READ);
-
- value = tmp & I2C_READ_DATA_MASK;
-
- return value;
-}
-
-static void altera_i2c_transfer(struct altera_i2c_dev *dev, u32 data)
-{
-	/* send STOP on last byte */
- if (dev->msg_len == 1)
- data |= ALTERA_I2C_TFR_CMD_STO;
- if (dev->msg_len > 0)
- i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, data);
-}
-
-static void altera_i2c_disable(struct altera_i2c_dev *dev)
-{
- u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL);
-
-	i2c_indirect_write(dev, ALTERA_I2C_CTRL, val & ~ALTERA_I2C_CTRL_EN);
-}
-
-static void altera_i2c_enable(struct altera_i2c_dev *dev)
-{
- u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL);
-
- i2c_indirect_write(dev, ALTERA_I2C_CTRL, val | ALTERA_I2C_CTRL_EN);
-}
-
-static void altera_i2c_reset(struct altera_i2c_dev *dev)
-{
- altera_i2c_disable(dev);
- altera_i2c_enable(dev);
-}
-
-static int altera_i2c_wait_core_idle(struct altera_i2c_dev *dev)
-{
- int retry = 0;
-
- while (i2c_indirect_read(dev, ALTERA_I2C_STATUS)
- & ALTERA_I2C_STAT_CORE) {
- if (retry++ > ALTERA_I2C_TIMEOUT_US) {
- dev_err(dev, "timeout: Core Status not IDLE...\n");
- return -EBUSY;
- }
- udelay(1);
- }
-
- return 0;
-}
-
-static void altera_i2c_enable_interrupt(struct altera_i2c_dev *dev,
- u32 mask, bool enable)
-{
- u32 status;
-
- status = i2c_indirect_read(dev, ALTERA_I2C_ISER);
- if (enable)
- dev->isr_mask = status | mask;
- else
-		dev->isr_mask = status & ~mask;
-
- i2c_indirect_write(dev, ALTERA_I2C_ISER, dev->isr_mask);
-}
-
-static void altera_i2c_interrupt_clear(struct altera_i2c_dev *dev, u32 mask)
-{
- u32 int_en;
-
- int_en = i2c_indirect_read(dev, ALTERA_I2C_ISR);
-
- i2c_indirect_write(dev, ALTERA_I2C_ISR, int_en | mask);
-}
-
-static void altera_i2c_read_rx_fifo(struct altera_i2c_dev *dev)
-{
- size_t rx_avail;
- size_t bytes;
-
- rx_avail = i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL);
- bytes = min(rx_avail, dev->msg_len);
-
- while (bytes-- > 0) {
- *dev->buf++ = i2c_indirect_read(dev, ALTERA_I2C_RX_DATA);
- dev->msg_len--;
- altera_i2c_transfer(dev, 0);
- }
-}
-
-static void altera_i2c_stop(struct altera_i2c_dev *dev)
-{
- i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, ALTERA_I2C_TFR_CMD_STO);
-}
-
-static int altera_i2c_fill_tx_fifo(struct altera_i2c_dev *dev)
-{
- size_t tx_avail;
- int bytes;
- int ret;
-
- tx_avail = dev->fifo_size -
- i2c_indirect_read(dev, ALTERA_I2C_TC_FIFO_LVL);
- bytes = min(tx_avail, dev->msg_len);
- ret = dev->msg_len - bytes;
-
- while (bytes-- > 0) {
- altera_i2c_transfer(dev, *dev->buf++);
- dev->msg_len--;
- }
-
- return ret;
-}
-
-static u8 i2c_8bit_addr_from_msg(const struct i2c_msg *msg)
-{
- return (msg->addr << 1) | (msg->flags & I2C_M_RD ? 1 : 0);
-}
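-
-/*
- * For example, a read from 7-bit slave address 0x51 becomes the 8-bit wire
- * address (0x51 << 1) | 1 = 0xa3, while a write becomes 0xa2.
- */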
-
-static int altera_i2c_wait_complete(struct altera_i2c_dev *dev,
- u32 *status)
-{
- int retry = 0;
-
- while (!((*status = i2c_indirect_read(dev, ALTERA_I2C_ISR))
- & dev->isr_mask)) {
- if (retry++ > ALTERA_I2C_TIMEOUT_US)
- return -EBUSY;
-
- udelay(1000);
- }
-
- return 0;
-}
-
-static bool altera_handle_i2c_status(struct altera_i2c_dev *dev, u32 status)
-{
- bool read, finish = false;
- int ret;
-
- read = (dev->msg->flags & I2C_M_RD) != 0;
-
- if (status & ALTERA_I2C_ISR_ARB) {
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_ARB);
- dev->msg_err = -EAGAIN;
- finish = true;
- } else if (status & ALTERA_I2C_ISR_NACK) {
- dev_debug(dev, "could not get ACK\n");
- dev->msg_err = -ENXIO;
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_NACK);
- altera_i2c_stop(dev);
- finish = true;
- } else if (read && (status & ALTERA_I2C_ISR_RXOF)) {
- /* RX FIFO Overflow */
- altera_i2c_read_rx_fifo(dev);
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISER_RXOF_EN);
- altera_i2c_stop(dev);
- dev_err(dev, "error: RX FIFO overflow\n");
- finish = true;
- } else if (read && (status & ALTERA_I2C_ISR_RXRDY)) {
- altera_i2c_read_rx_fifo(dev);
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_RXRDY);
- if (!dev->msg_len)
- finish = true;
- } else if (!read && (status & ALTERA_I2C_ISR_TXRDY)) {
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_TXRDY);
- if (dev->msg_len > 0)
- altera_i2c_fill_tx_fifo(dev);
- else
- finish = true;
- } else {
- dev_err(dev, "unexpected status:0x%x\n", status);
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ);
- }
-
- if (finish) {
- ret = altera_i2c_wait_core_idle(dev);
- if (ret)
- dev_err(dev, "message timeout\n");
-
- altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false);
- altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ);
- dev_debug(dev, "message done\n");
- }
-
- return finish;
-}
-
-static bool altera_i2c_poll_status(struct altera_i2c_dev *dev)
-{
- u32 status;
- bool finish = false;
- int i = 0;
-
- do {
-		if (altera_i2c_wait_complete(dev, &status)) {
-			dev_err(dev, "altera i2c wait complete timeout, status=0x%x\n",
-					status);
-			return false;
-		}
-
- finish = altera_handle_i2c_status(dev, status);
-
- if (i++ > I2C_XFER_RETRY)
- break;
-
- } while (!finish);
-
- return finish;
-}
-
-static int altera_i2c_xfer_msg(struct altera_i2c_dev *dev,
- struct i2c_msg *msg)
-{
- u32 int_mask = ALTERA_I2C_ISR_RXOF |
- ALTERA_I2C_ISR_ARB | ALTERA_I2C_ISR_NACK;
- u8 addr = i2c_8bit_addr_from_msg(msg);
- bool finish;
-
- dev->msg = msg;
- dev->msg_len = msg->len;
- dev->buf = msg->buf;
- dev->msg_err = 0;
- altera_i2c_enable(dev);
-
-	/* make sure RX FIFO is empty */
- do {
- i2c_indirect_read(dev, ALTERA_I2C_RX_DATA);
- } while (i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL));
-
- i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD_RW_D,
- ALTERA_I2C_TFR_CMD_STA | addr);
-
-	/* enable the status bits that will be polled for */
-	if (msg->flags & I2C_M_RD) {
-		int_mask |= ALTERA_I2C_ISR_RXRDY;
-		/* ISER is programmed even though we poll ISR; it is not
-		 * clear whether the hardware requires this.
-		 */
-		altera_i2c_enable_interrupt(dev, int_mask, true);
-		altera_i2c_transfer(dev, 0);
- } else {
- int_mask |= ALTERA_I2C_ISR_TXRDY;
- altera_i2c_enable_interrupt(dev, int_mask, true);
- altera_i2c_fill_tx_fifo(dev);
- }
-
- finish = altera_i2c_poll_status(dev);
- if (!finish) {
- dev->msg_err = -ETIMEDOUT;
- dev_err(dev, "%s: i2c transfer error\n", __func__);
- }
-
- altera_i2c_enable_interrupt(dev, int_mask, false);
-
- if (i2c_indirect_read(dev, ALTERA_I2C_STATUS) & ALTERA_I2C_STAT_CORE)
- dev_info(dev, "core not idle...\n");
-
- altera_i2c_disable(dev);
-
- return dev->msg_err;
-}
-
-static int altera_i2c_xfer(struct altera_i2c_dev *dev,
- struct i2c_msg *msg, int num)
-{
- int ret = 0;
- int i;
-
- for (i = 0; i < num; i++, msg++) {
- ret = altera_i2c_xfer_msg(dev, msg);
- if (ret)
- break;
- }
-
- return ret;
-}
-
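-/*
- * Program the SCL timing from the core input clock: the divisor is the
- * input clock divided by the target bus rate, split 50/50 between the
- * high and low phases in standard mode and 33/66 in fast mode.
- */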
-static void altera_i2c_hardware_init(struct altera_i2c_dev *dev)
-{
- u32 divisor = dev->i2c_clk / dev->bus_clk_rate;
- u32 clk_mhz = dev->i2c_clk / 1000000;
- u32 tmp = (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_RXT_SHFT) |
- (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_TCT_SHFT);
- u32 t_high, t_low;
-
-	if (dev->bus_clk_rate <= 100000) {
-		tmp &= ~ALTERA_I2C_CTRL_BSPEED;
-		/* standard mode: 50/50 SCL duty cycle */
-		t_high = divisor / 2;
-		t_low = divisor / 2;
-	} else {
-		tmp |= ALTERA_I2C_CTRL_BSPEED;
-		/* fast mode: 33/66 SCL duty cycle */
-		t_high = divisor / 3;
-		t_low = divisor * 2 / 3;
-	}
-
- i2c_indirect_write(dev, ALTERA_I2C_CTRL, tmp);
-
- dev_info(dev, "%s: rate=%uHz per_clk=%uMHz -> ratio=1:%u\n",
- __func__, dev->bus_clk_rate, clk_mhz, divisor);
-
-	/* reset the i2c core */
-	altera_i2c_reset(dev);
-
-	/* set SCL high time */
-	i2c_indirect_write(dev, ALTERA_I2C_SCL_HIGH, t_high);
-	/* set SCL low time */
-	i2c_indirect_write(dev, ALTERA_I2C_SCL_LOW, t_low);
-	/* set SDA hold time to 300ns */
-	i2c_indirect_write(dev, ALTERA_I2C_SDA_HOLD, (300*clk_mhz)/1000);
-
- altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false);
-}
-
-struct altera_i2c_dev *altera_i2c_probe(void *base)
-{
- struct altera_i2c_dev *dev;
-
- dev = opae_malloc(sizeof(*dev));
- if (!dev)
- return NULL;
-
- dev->base = (u8 *)base;
- dev->i2c_param.info = opae_readq(dev->base + I2C_PARAM);
-
-	if (dev->i2c_param.devid != 0xEE011) {
-		dev_err(dev, "found an invalid i2c master\n");
-		opae_free(dev);
-		return NULL;
-	}
-
- dev->fifo_size = dev->i2c_param.fifo_depth;
-
-	if (dev->i2c_param.max_req == ALTERA_I2C_100KHZ)
-		dev->bus_clk_rate = 100000;
-	else if (dev->i2c_param.max_req == ALTERA_I2C_400KHZ)
-		/* i2c bus clock of 400KHz */
-		dev->bus_clk_rate = 400000;
-	else
-		/* default to standard mode so the rate is never left unset */
-		dev->bus_clk_rate = 100000;
-
-	/* i2c input clock, from the core's ref_clk parameter in MHz
-	 * (100MHz on vista creek)
-	 */
- dev->i2c_clk = dev->i2c_param.ref_clk * 1000000;
- dev->xfer = altera_i2c_xfer;
-
- altera_i2c_hardware_init(dev);
-
- return dev;
-}
-
-int altera_i2c_remove(struct altera_i2c_dev *dev)
-{
- altera_i2c_disable(dev);
-
- return 0;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#ifndef _OPAE_I2C_H
-#define _OPAE_I2C_H
-
-#include "opae_osdep.h"
-
-#define ALTERA_I2C_TFR_CMD 0x00 /* Transfer Command register */
-#define ALTERA_I2C_TFR_CMD_STA BIT(9) /* send START before byte */
-#define ALTERA_I2C_TFR_CMD_STO BIT(8) /* send STOP after byte */
-#define ALTERA_I2C_TFR_CMD_RW_D BIT(0) /* Direction of transfer */
-#define ALTERA_I2C_RX_DATA 0x04 /* RX data FIFO register */
-#define ALTERA_I2C_CTRL 0x8 /* Control register */
-#define ALTERA_I2C_CTRL_RXT_SHFT 4 /* RX FIFO Threshold */
-#define ALTERA_I2C_CTRL_TCT_SHFT 2 /* TFER CMD FIFO Threshold */
-#define ALTERA_I2C_CTRL_BSPEED BIT(1) /* Bus Speed */
-#define ALTERA_I2C_CTRL_EN BIT(0) /* Enable Core */
-#define ALTERA_I2C_ISER 0xc /* Interrupt Status Enable register */
-#define ALTERA_I2C_ISER_RXOF_EN BIT(4) /* Enable RX OVERFLOW IRQ */
-#define ALTERA_I2C_ISER_ARB_EN BIT(3) /* Enable ARB LOST IRQ */
-#define ALTERA_I2C_ISER_NACK_EN BIT(2) /* Enable NACK DET IRQ */
-#define ALTERA_I2C_ISER_RXRDY_EN BIT(1) /* Enable RX Ready IRQ */
-#define ALTERA_I2C_ISER_TXRDY_EN BIT(0) /* Enable TX Ready IRQ */
-#define ALTERA_I2C_ISR 0x10 /* Interrupt Status register */
-#define ALTERA_I2C_ISR_RXOF BIT(4) /* RX OVERFLOW */
-#define ALTERA_I2C_ISR_ARB BIT(3) /* ARB LOST */
-#define ALTERA_I2C_ISR_NACK BIT(2) /* NACK DET */
-#define ALTERA_I2C_ISR_RXRDY BIT(1) /* RX Ready */
-#define ALTERA_I2C_ISR_TXRDY BIT(0) /* TX Ready */
-#define ALTERA_I2C_STATUS 0x14 /* Status register */
-#define ALTERA_I2C_STAT_CORE BIT(0) /* Core Status */
-#define ALTERA_I2C_TC_FIFO_LVL 0x18 /* Transfer FIFO LVL register */
-#define ALTERA_I2C_RX_FIFO_LVL 0x1c /* Receive FIFO LVL register */
-#define ALTERA_I2C_SCL_LOW 0x20 /* SCL low count register */
-#define ALTERA_I2C_SCL_HIGH 0x24 /* SCL high count register */
-#define ALTERA_I2C_SDA_HOLD 0x28 /* SDA hold count register */
-
-#define ALTERA_I2C_ALL_IRQ (ALTERA_I2C_ISR_RXOF | ALTERA_I2C_ISR_ARB | \
- ALTERA_I2C_ISR_NACK | ALTERA_I2C_ISR_RXRDY | \
- ALTERA_I2C_ISR_TXRDY)
-
-#define ALTERA_I2C_THRESHOLD 0
-#define ALTERA_I2C_DFLT_FIFO_SZ 8
-#define ALTERA_I2C_TIMEOUT_US 250000 /* 250ms */
-
-#define I2C_PARAM 0x8
-#define I2C_CTRL 0x10
-#define I2C_CTRL_R BIT_ULL(9)
-#define I2C_CTRL_W BIT_ULL(8)
-#define I2C_CTRL_ADDR_MASK GENMASK_ULL(3, 0)
-#define I2C_READ 0x18
-#define I2C_READ_DATA_VALID BIT_ULL(32)
-#define I2C_READ_DATA_MASK GENMASK_ULL(31, 0)
-#define I2C_WRITE 0x20
-#define I2C_WRITE_DATA_MASK GENMASK_ULL(31, 0)
-
-#define ALTERA_I2C_100KHZ 0
-#define ALTERA_I2C_400KHZ 1
-
-/* i2c slave using 16bit address */
-#define I2C_FLAG_ADDR16 1
-
-#define I2C_XFER_RETRY 10
-
-struct i2c_core_param {
- union {
- u64 info;
- struct {
- u16 fifo_depth:9;
- u8 interface:1;
-			/* reference clock of the I2C core in MHz */
-			u32 ref_clk:10;
-			/* max I2C interface frequency */
-			u8 max_req:4;
-			u64 devid:32;
-			/* number of MAC addresses */
-			u8 nu_macs:8;
- };
- };
-};
-
-struct altera_i2c_dev {
- u8 *base;
- struct i2c_core_param i2c_param;
- u32 fifo_size;
- u32 bus_clk_rate; /* i2c bus clock */
- u32 i2c_clk; /* i2c input clock */
- struct i2c_msg *msg;
- size_t msg_len;
- int msg_err;
- u32 isr_mask;
- u8 *buf;
- int (*xfer)(struct altera_i2c_dev *dev, struct i2c_msg *msg, int num);
-};
-
-/**
- * struct i2c_msg: an I2C message
- */
-struct i2c_msg {
- unsigned int addr;
- unsigned int flags;
- unsigned int len;
- u8 *buf;
-};
-
-#define I2C_MAX_OFFSET_LEN 4
-
-enum i2c_msg_flags {
- I2C_M_TEN = 0x0010, /*ten-bit chip address*/
- I2C_M_RD = 0x0001, /*read data*/
- I2C_M_STOP = 0x8000, /*send stop after this message*/
-};
-
-struct altera_i2c_dev *altera_i2c_probe(void *base);
-int altera_i2c_remove(struct altera_i2c_dev *dev);
-int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
- u32 offset, u8 *buf, u32 count);
-int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
- u32 offset, u8 *buffer, int len);
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
-int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
-int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
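-
-/*
- * Illustrative usage sketch (not part of the original sources; "i2c_base"
- * and "slave" are placeholders):
- *
- *	struct altera_i2c_dev *dev = altera_i2c_probe(i2c_base);
- *	u8 buf[4];
- *
- *	if (dev && !i2c_read8(dev, slave, 0x0, buf, sizeof(buf)))
- *		;	/* buf now holds 4 bytes read from the device */
- *	altera_i2c_remove(dev);
- */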
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include "opae_ifpga_hw_api.h"
-#include "ifpga_api.h"
-
-int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme;
-
- if (!mgr || !mgr->data)
- return -EINVAL;
-
- fme = mgr->data;
-
- return ifpga_get_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
-}
-
-int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
- struct feature_prop *prop)
-{
- struct ifpga_fme_hw *fme;
-
- if (!mgr || !mgr->data)
- return -EINVAL;
-
- fme = mgr->data;
-
- return ifpga_set_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
-}
-
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
- struct fpga_fme_info *fme_info)
-{
- struct ifpga_fme_hw *fme;
-
- if (!mgr || !mgr->data || !fme_info)
- return -EINVAL;
-
- fme = mgr->data;
-
- spinlock_lock(&fme->lock);
- fme_info->capability = fme->capability;
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
-int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
- struct fpga_fme_err_irq_set *err_irq_set)
-{
- struct ifpga_fme_hw *fme;
-
- if (!mgr || !mgr->data)
- return -EINVAL;
-
- fme = mgr->data;
-
- return ifpga_set_irq(fme->parent, FEATURE_FIU_ID_FME, 0,
- IFPGA_FME_FEATURE_ID_GLOBAL_ERR, err_irq_set);
-}
-
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_get_prop(port->parent, FEATURE_FIU_ID_PORT,
- port->port_id, prop);
-}
-
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_set_prop(port->parent, FEATURE_FIU_ID_PORT,
- port->port_id, prop);
-}
-
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
- struct fpga_port_info *port_info)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data || !port_info)
- return -EINVAL;
-
- port = br->data;
-
- spinlock_lock(&port->lock);
- port_info->capability = port->capability;
- port_info->num_uafu_irqs = port->num_uafu_irqs;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
- struct fpga_port_region_info *info)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data || !info)
- return -EINVAL;
-
- /* Only support STP region now */
- if (info->index != PORT_REGION_INDEX_STP)
- return -EINVAL;
-
- port = br->data;
-
- spinlock_lock(&port->lock);
- info->addr = port->stp_addr;
- info->size = port->stp_size;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
- struct fpga_port_err_irq_set *err_irq_set)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
- IFPGA_PORT_FEATURE_ID_ERROR, err_irq_set);
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OPAE_IFPGA_HW_API_H_
-#define _OPAE_IFPGA_HW_API_H_
-
-#include "opae_hw_api.h"
-
-/**
- * struct feature_prop - data structure for feature property
- * @feature_id: id of this feature.
- * @prop_id: id of this property under this feature.
- * @data: property value to set/get.
- */
-struct feature_prop {
- u64 feature_id;
- u64 prop_id;
- u64 data;
-};
-
-#define IFPGA_FIU_ID_FME 0x0
-#define IFPGA_FIU_ID_PORT 0x1
-
-#define IFPGA_FME_FEATURE_ID_HEADER 0x0
-#define IFPGA_FME_FEATURE_ID_THERMAL_MGMT 0x1
-#define IFPGA_FME_FEATURE_ID_POWER_MGMT 0x2
-#define IFPGA_FME_FEATURE_ID_GLOBAL_IPERF 0x3
-#define IFPGA_FME_FEATURE_ID_GLOBAL_ERR 0x4
-#define IFPGA_FME_FEATURE_ID_PR_MGMT 0x5
-#define IFPGA_FME_FEATURE_ID_HSSI 0x6
-#define IFPGA_FME_FEATURE_ID_GLOBAL_DPERF 0x7
-
-#define IFPGA_PORT_FEATURE_ID_HEADER 0x0
-#define IFPGA_PORT_FEATURE_ID_AFU 0xff
-#define IFPGA_PORT_FEATURE_ID_ERROR 0x10
-#define IFPGA_PORT_FEATURE_ID_UMSG 0x11
-#define IFPGA_PORT_FEATURE_ID_UINT 0x12
-#define IFPGA_PORT_FEATURE_ID_STP 0x13
-
-/*
- * PROP format (TOP + SUB + ID)
- *
- * (~0x0) means this field is unused.
- */
-#define PROP_TOP GENMASK(31, 24)
-#define PROP_TOP_UNUSED 0xff
-#define PROP_SUB GENMASK(23, 16)
-#define PROP_SUB_UNUSED 0xff
-#define PROP_ID GENMASK(15, 0)
-
-#define PROP(_top, _sub, _id) \
- (SET_FIELD(PROP_TOP, _top) | SET_FIELD(PROP_SUB, _sub) |\
- SET_FIELD(PROP_ID, _id))
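-
-/*
- * For example, PROP(0x1, 0xff, 0x2) encodes to 0x01ff0002: TOP = 0x1 in
- * bits 31:24, SUB unused (0xff), ID = 0x2 in bits 15:0.
- */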
-
-/* FME head feature's properties*/
-#define FME_HDR_PROP_REVISION 0x1 /* RDONLY */
-#define FME_HDR_PROP_PORTS_NUM 0x2 /* RDONLY */
-#define FME_HDR_PROP_CACHE_SIZE 0x3 /* RDONLY */
-#define FME_HDR_PROP_VERSION 0x4 /* RDONLY */
-#define FME_HDR_PROP_SOCKET_ID 0x5 /* RDONLY */
-#define FME_HDR_PROP_BITSTREAM_ID 0x6 /* RDONLY */
-#define FME_HDR_PROP_BITSTREAM_METADATA 0x7 /* RDONLY */
-
-/* FME error reporting feature's properties */
-/* FME error reporting properties format */
-#define ERR_PROP(_top, _id) PROP(_top, 0xff, _id)
-#define ERR_PROP_TOP_UNUSED PROP_TOP_UNUSED
-#define ERR_PROP_TOP_FME_ERR 0x1
-#define ERR_PROP_ROOT(_id) ERR_PROP(0xff, _id)
-#define ERR_PROP_FME_ERR(_id) ERR_PROP(ERR_PROP_TOP_FME_ERR, _id)
-
-#define FME_ERR_PROP_ERRORS ERR_PROP_FME_ERR(0x1)
-#define FME_ERR_PROP_FIRST_ERROR ERR_PROP_FME_ERR(0x2)
-#define FME_ERR_PROP_NEXT_ERROR ERR_PROP_FME_ERR(0x3)
-#define FME_ERR_PROP_CLEAR ERR_PROP_FME_ERR(0x4) /* WO */
-#define FME_ERR_PROP_REVISION ERR_PROP_ROOT(0x5)
-#define FME_ERR_PROP_PCIE0_ERRORS ERR_PROP_ROOT(0x6) /* RW */
-#define FME_ERR_PROP_PCIE1_ERRORS ERR_PROP_ROOT(0x7) /* RW */
-#define FME_ERR_PROP_NONFATAL_ERRORS ERR_PROP_ROOT(0x8)
-#define FME_ERR_PROP_CATFATAL_ERRORS ERR_PROP_ROOT(0x9)
-#define FME_ERR_PROP_INJECT_ERRORS ERR_PROP_ROOT(0xa) /* RW */
-
-/* FME thermal feature's properties */
-#define FME_THERMAL_PROP_THRESHOLD1 0x1 /* RW */
-#define FME_THERMAL_PROP_THRESHOLD2 0x2 /* RW */
-#define FME_THERMAL_PROP_THRESHOLD_TRIP 0x3 /* RDONLY */
-#define FME_THERMAL_PROP_THRESHOLD1_REACHED 0x4 /* RDONLY */
-#define FME_THERMAL_PROP_THRESHOLD2_REACHED 0x5 /* RDONLY */
-#define FME_THERMAL_PROP_THRESHOLD1_POLICY 0x6 /* RW */
-#define FME_THERMAL_PROP_TEMPERATURE 0x7 /* RDONLY */
-#define FME_THERMAL_PROP_REVISION 0x8 /* RDONLY */
-
-/* FME power feature's properties */
-#define FME_PWR_PROP_CONSUMED 0x1 /* RDONLY */
-#define FME_PWR_PROP_THRESHOLD1 0x2 /* RW */
-#define FME_PWR_PROP_THRESHOLD2 0x3 /* RW */
-#define FME_PWR_PROP_THRESHOLD1_STATUS 0x4 /* RDONLY */
-#define FME_PWR_PROP_THRESHOLD2_STATUS 0x5 /* RDONLY */
-#define FME_PWR_PROP_RTL 0x6 /* RDONLY */
-#define FME_PWR_PROP_XEON_LIMIT 0x7 /* RDONLY */
-#define FME_PWR_PROP_FPGA_LIMIT 0x8 /* RDONLY */
-#define FME_PWR_PROP_REVISION 0x9 /* RDONLY */
-
-/* FME iperf/dperf PROP format */
-#define PERF_PROP_TOP_CACHE 0x1
-#define PERF_PROP_TOP_VTD 0x2
-#define PERF_PROP_TOP_FAB 0x3
-#define PERF_PROP_TOP_UNUSED PROP_TOP_UNUSED
-#define PERF_PROP_SUB_UNUSED PROP_SUB_UNUSED
-
-#define PERF_PROP_ROOT(_id) PROP(0xff, 0xff, _id)
-#define PERF_PROP_CACHE(_id) PROP(PERF_PROP_TOP_CACHE, 0xff, _id)
-#define PERF_PROP_VTD(_sub, _id) PROP(PERF_PROP_TOP_VTD, _sub, _id)
-#define PERF_PROP_VTD_ROOT(_id) PROP(PERF_PROP_TOP_VTD, 0xff, _id)
-#define PERF_PROP_FAB(_sub, _id) PROP(PERF_PROP_TOP_FAB, _sub, _id)
-#define PERF_PROP_FAB_ROOT(_id) PROP(PERF_PROP_TOP_FAB, 0xff, _id)
-
-/* FME iperf feature's properties */
-#define FME_IPERF_PROP_CLOCK PERF_PROP_ROOT(0x1)
-#define FME_IPERF_PROP_REVISION PERF_PROP_ROOT(0x2)
-
-/* iperf CACHE properties */
-#define FME_IPERF_PROP_CACHE_FREEZE PERF_PROP_CACHE(0x1) /* RW */
-#define FME_IPERF_PROP_CACHE_READ_HIT PERF_PROP_CACHE(0x2)
-#define FME_IPERF_PROP_CACHE_READ_MISS PERF_PROP_CACHE(0x3)
-#define FME_IPERF_PROP_CACHE_WRITE_HIT PERF_PROP_CACHE(0x4)
-#define FME_IPERF_PROP_CACHE_WRITE_MISS PERF_PROP_CACHE(0x5)
-#define FME_IPERF_PROP_CACHE_HOLD_REQUEST PERF_PROP_CACHE(0x6)
-#define FME_IPERF_PROP_CACHE_TX_REQ_STALL PERF_PROP_CACHE(0x7)
-#define FME_IPERF_PROP_CACHE_RX_REQ_STALL PERF_PROP_CACHE(0x8)
-#define FME_IPERF_PROP_CACHE_RX_EVICTION PERF_PROP_CACHE(0x9)
-#define FME_IPERF_PROP_CACHE_DATA_WRITE_PORT_CONTENTION PERF_PROP_CACHE(0xa)
-#define FME_IPERF_PROP_CACHE_TAG_WRITE_PORT_CONTENTION PERF_PROP_CACHE(0xb)
-/* iperf VTD properties */
-#define FME_IPERF_PROP_VTD_FREEZE PERF_PROP_VTD_ROOT(0x1) /* RW */
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_4K_HIT PERF_PROP_VTD_ROOT(0x2)
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_2M_HIT PERF_PROP_VTD_ROOT(0x3)
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_1G_HIT PERF_PROP_VTD_ROOT(0x4)
-#define FME_IPERF_PROP_VTD_SIP_SLPWC_L3_HIT PERF_PROP_VTD_ROOT(0x5)
-#define FME_IPERF_PROP_VTD_SIP_SLPWC_L4_HIT PERF_PROP_VTD_ROOT(0x6)
-#define FME_IPERF_PROP_VTD_SIP_RCC_HIT PERF_PROP_VTD_ROOT(0x7)
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_4K_MISS PERF_PROP_VTD_ROOT(0x8)
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_2M_MISS PERF_PROP_VTD_ROOT(0x9)
-#define FME_IPERF_PROP_VTD_SIP_IOTLB_1G_MISS PERF_PROP_VTD_ROOT(0xa)
-#define FME_IPERF_PROP_VTD_SIP_SLPWC_L3_MISS PERF_PROP_VTD_ROOT(0xb)
-#define FME_IPERF_PROP_VTD_SIP_SLPWC_L4_MISS PERF_PROP_VTD_ROOT(0xc)
-#define FME_IPERF_PROP_VTD_SIP_RCC_MISS PERF_PROP_VTD_ROOT(0xd)
-#define FME_IPERF_PROP_VTD_PORT_READ_TRANSACTION(n) PERF_PROP_VTD(n, 0xe)
-#define FME_IPERF_PROP_VTD_PORT_WRITE_TRANSACTION(n) PERF_PROP_VTD(n, 0xf)
-#define FME_IPERF_PROP_VTD_PORT_DEVTLB_READ_HIT(n) PERF_PROP_VTD(n, 0x10)
-#define FME_IPERF_PROP_VTD_PORT_DEVTLB_WRITE_HIT(n) PERF_PROP_VTD(n, 0x11)
-#define FME_IPERF_PROP_VTD_PORT_DEVTLB_4K_FILL(n) PERF_PROP_VTD(n, 0x12)
-#define FME_IPERF_PROP_VTD_PORT_DEVTLB_2M_FILL(n) PERF_PROP_VTD(n, 0x13)
-#define FME_IPERF_PROP_VTD_PORT_DEVTLB_1G_FILL(n) PERF_PROP_VTD(n, 0x14)
-/* iperf FAB properties */
-#define FME_IPERF_PROP_FAB_FREEZE PERF_PROP_FAB_ROOT(0x1) /* RW */
-#define FME_IPERF_PROP_FAB_PCIE0_READ PERF_PROP_FAB_ROOT(0x2)
-#define FME_IPERF_PROP_FAB_PORT_PCIE0_READ(n) PERF_PROP_FAB(n, 0x2)
-#define FME_IPERF_PROP_FAB_PCIE0_WRITE PERF_PROP_FAB_ROOT(0x3)
-#define FME_IPERF_PROP_FAB_PORT_PCIE0_WRITE(n) PERF_PROP_FAB(n, 0x3)
-#define FME_IPERF_PROP_FAB_PCIE1_READ PERF_PROP_FAB_ROOT(0x4)
-#define FME_IPERF_PROP_FAB_PORT_PCIE1_READ(n) PERF_PROP_FAB(n, 0x4)
-#define FME_IPERF_PROP_FAB_PCIE1_WRITE PERF_PROP_FAB_ROOT(0x5)
-#define FME_IPERF_PROP_FAB_PORT_PCIE1_WRITE(n) PERF_PROP_FAB(n, 0x5)
-#define FME_IPERF_PROP_FAB_UPI_READ PERF_PROP_FAB_ROOT(0x6)
-#define FME_IPERF_PROP_FAB_PORT_UPI_READ(n) PERF_PROP_FAB(n, 0x6)
-#define FME_IPERF_PROP_FAB_UPI_WRITE PERF_PROP_FAB_ROOT(0x7)
-#define FME_IPERF_PROP_FAB_PORT_UPI_WRITE(n) PERF_PROP_FAB(n, 0x7)
-#define FME_IPERF_PROP_FAB_MMIO_READ PERF_PROP_FAB_ROOT(0x8)
-#define FME_IPERF_PROP_FAB_PORT_MMIO_READ(n) PERF_PROP_FAB(n, 0x8)
-#define FME_IPERF_PROP_FAB_MMIO_WRITE PERF_PROP_FAB_ROOT(0x9)
-#define FME_IPERF_PROP_FAB_PORT_MMIO_WRITE(n) PERF_PROP_FAB(n, 0x9)
-#define FME_IPERF_PROP_FAB_ENABLE PERF_PROP_FAB_ROOT(0xa) /* RW */
-#define FME_IPERF_PROP_FAB_PORT_ENABLE(n) PERF_PROP_FAB(n, 0xa) /* RW */
-
-/* FME dperf properties */
-#define FME_DPERF_PROP_CLOCK PERF_PROP_ROOT(0x1)
-#define FME_DPERF_PROP_REVISION PERF_PROP_ROOT(0x2)
-
-/* dperf FAB properties */
-#define FME_DPERF_PROP_FAB_FREEZE PERF_PROP_FAB_ROOT(0x1) /* RW */
-#define FME_DPERF_PROP_FAB_PCIE0_READ PERF_PROP_FAB_ROOT(0x2)
-#define FME_DPERF_PROP_FAB_PORT_PCIE0_READ(n) PERF_PROP_FAB(n, 0x2)
-#define FME_DPERF_PROP_FAB_PCIE0_WRITE PERF_PROP_FAB_ROOT(0x3)
-#define FME_DPERF_PROP_FAB_PORT_PCIE0_WRITE(n) PERF_PROP_FAB(n, 0x3)
-#define FME_DPERF_PROP_FAB_MMIO_READ PERF_PROP_FAB_ROOT(0x4)
-#define FME_DPERF_PROP_FAB_PORT_MMIO_READ(n) PERF_PROP_FAB(n, 0x4)
-#define FME_DPERF_PROP_FAB_MMIO_WRITE PERF_PROP_FAB_ROOT(0x5)
-#define FME_DPERF_PROP_FAB_PORT_MMIO_WRITE(n) PERF_PROP_FAB(n, 0x5)
-#define FME_DPERF_PROP_FAB_ENABLE PERF_PROP_FAB_ROOT(0x6) /* RW */
-#define FME_DPERF_PROP_FAB_PORT_ENABLE(n) PERF_PROP_FAB(n, 0x6) /* RW */
-
-/*PORT hdr feature's properties*/
-#define PORT_HDR_PROP_REVISION 0x1 /* RDONLY */
-#define PORT_HDR_PROP_PORTIDX 0x2 /* RDONLY */
-#define PORT_HDR_PROP_LATENCY_TOLERANCE 0x3 /* RDONLY */
-#define PORT_HDR_PROP_AP1_EVENT 0x4 /* RW */
-#define PORT_HDR_PROP_AP2_EVENT 0x5 /* RW */
-#define PORT_HDR_PROP_POWER_STATE 0x6 /* RDONLY */
-#define PORT_HDR_PROP_USERCLK_FREQCMD 0x7 /* RW */
-#define PORT_HDR_PROP_USERCLK_FREQCNTRCMD 0x8 /* RW */
-#define PORT_HDR_PROP_USERCLK_FREQSTS 0x9 /* RDONLY */
-#define PORT_HDR_PROP_USERCLK_CNTRSTS 0xa /* RDONLY */
-
-/*PORT error feature's properties*/
-#define PORT_ERR_PROP_REVISION 0x1 /* RDONLY */
-#define PORT_ERR_PROP_ERRORS 0x2 /* RDONLY */
-#define PORT_ERR_PROP_FIRST_ERROR 0x3 /* RDONLY */
-#define PORT_ERR_PROP_FIRST_MALFORMED_REQ_LSB 0x4 /* RDONLY */
-#define PORT_ERR_PROP_FIRST_MALFORMED_REQ_MSB 0x5 /* RDONLY */
-#define PORT_ERR_PROP_CLEAR 0x6 /* WRONLY */
-
-int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
- struct feature_prop *prop);
-int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
- struct feature_prop *prop);
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
- struct feature_prop *prop);
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
- struct feature_prop *prop);
-
-/*
- * Retrieve information about the fpga fme.
- * Driver fills the info in provided struct fpga_fme_info.
- */
-struct fpga_fme_info {
- u32 capability; /* The capability of FME device */
-#define FPGA_FME_CAP_ERR_IRQ (1 << 0) /* Support fme error interrupt */
-};
-
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
- struct fpga_fme_info *fme_info);
-
-/* Set eventfd information for ifpga FME error interrupt */
-struct fpga_fme_err_irq_set {
- s32 evtfd; /* Eventfd handler */
-};
-
-int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
- struct fpga_fme_err_irq_set *err_irq_set);
-
-/*
- * Retrieve information about the fpga port.
- * Driver fills the info in provided struct fpga_port_info.
- */
-struct fpga_port_info {
- u32 capability; /* The capability of port device */
-#define FPGA_PORT_CAP_ERR_IRQ (1 << 0) /* Support port error interrupt */
-#define FPGA_PORT_CAP_UAFU_IRQ (1 << 1) /* Support uafu error interrupt */
- u32 num_umsgs; /* The number of allocated umsgs */
- u32 num_uafu_irqs; /* The number of uafu interrupts */
-};
-
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
- struct fpga_port_info *port_info);
-/*
- * Retrieve region information about the fpga port.
- * Driver needs to fill the index of struct fpga_port_region_info.
- */
-struct fpga_port_region_info {
- u32 index;
-#define PORT_REGION_INDEX_STP (1 << 1) /* Signal Tap Region */
- u64 size; /* Region Size */
- u8 *addr; /* Base address of the region */
-};
-
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
- struct fpga_port_region_info *info);
-
-/* Set eventfd information for ifpga port error interrupt */
-struct fpga_port_err_irq_set {
- s32 evtfd; /* Eventfd handler */
-};
-
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
- struct fpga_port_err_irq_set *err_irq_set);
-
-#endif /* _OPAE_IFPGA_HW_API_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_intel_max10.h"
-
-static struct intel_max10_device *g_max10;
-
-int max10_reg_read(unsigned int reg, unsigned int *val)
-{
- if (!g_max10)
- return -ENODEV;
-
- return spi_transaction_read(g_max10->spi_tran_dev,
- reg, 4, (unsigned char *)val);
-}
-
-int max10_reg_write(unsigned int reg, unsigned int val)
-{
- unsigned int tmp = val;
-
- if (!g_max10)
- return -ENODEV;
-
- return spi_transaction_write(g_max10->spi_tran_dev,
- reg, 4, (unsigned char *)&tmp);
-}
-
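-/*
- * Probe the MAX10 device behind the SPI master. The global g_max10
- * pointer is set before the first register access because
- * max10_reg_read() and max10_reg_write() route through it.
- */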
-struct intel_max10_device *
-intel_max10_device_probe(struct altera_spi_device *spi,
- int chipselect)
-{
- struct intel_max10_device *dev;
- int ret;
- unsigned int val;
-
- dev = opae_malloc(sizeof(*dev));
- if (!dev)
- return NULL;
-
- dev->spi_master = spi;
-
- dev->spi_tran_dev = spi_transaction_init(spi, chipselect);
- if (!dev->spi_tran_dev) {
- dev_err(dev, "%s spi tran init fail\n", __func__);
- goto free_dev;
- }
-
-	/* set the global device pointer before the first register access */
- g_max10 = dev;
-
- /* read FPGA loading information */
- ret = max10_reg_read(FPGA_PAGE_INFO_OFF, &val);
- if (ret) {
- dev_err(dev, "fail to get FPGA loading info\n");
- goto spi_tran_fail;
- }
- dev_info(dev, "FPGA loaded from %s Image\n", val ? "User" : "Factory");
-
- return dev;
-
-spi_tran_fail:
- spi_transaction_remove(dev->spi_tran_dev);
-free_dev:
- g_max10 = NULL;
- opae_free(dev);
-
- return NULL;
-}
-
-int intel_max10_device_remove(struct intel_max10_device *dev)
-{
- if (!dev)
- return 0;
-
- if (dev->spi_tran_dev)
- spi_transaction_remove(dev->spi_tran_dev);
-
- g_max10 = NULL;
- opae_free(dev);
-
- return 0;
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _OPAE_INTEL_MAX10_H_
-#define _OPAE_INTEL_MAX10_H_
-
-#include "opae_osdep.h"
-#include "opae_spi.h"
-
-/* max10 capability flags */
-#define MAX10_FLAGS_NO_I2C2 BIT(0)
-#define MAX10_FLAGS_NO_BMCIMG_FLASH BIT(1)
-#define MAX10_FLAGS_DEVICE_TABLE BIT(2)
-#define MAX10_FLAGS_SPI BIT(3)
-#define MAX10_FLGAS_NIOS_SPI BIT(4)
-#define MAX10_FLAGS_PKVL BIT(5)
-
-struct intel_max10_device {
- unsigned int flags; /*max10 hardware capability*/
- struct altera_spi_device *spi_master;
- struct spi_transaction_dev *spi_tran_dev;
-};
-
-/* retimer speed */
-enum retimer_speed {
- MXD_1GB = 1,
- MXD_2_5GB = 2,
- MXD_5GB = 5,
- MXD_10GB = 10,
- MXD_25GB = 25,
- MXD_40GB = 40,
- MXD_100GB = 100,
- MXD_SPEED_UNKNOWN,
-};
-
-/* retimer info */
-struct opae_retimer_info {
- unsigned int nums_retimer;
- unsigned int ports_per_retimer;
- unsigned int nums_fvl;
- unsigned int ports_per_fvl;
- enum retimer_speed support_speed;
-};
-
-/* retimer status */
-struct opae_retimer_status {
- enum retimer_speed speed;
- /*
- * retimer line link status bitmap:
- * bit 0: Retimer0 Port0 link status
- * bit 1: Retimer0 Port1 link status
- * bit 2: Retimer0 Port2 link status
- * bit 3: Retimer0 Port3 link status
- *
- * bit 4: Retimer1 Port0 link status
- * bit 5: Retimer1 Port1 link status
- * bit 6: Retimer1 Port2 link status
- * bit 7: Retimer1 Port3 link status
- */
- unsigned int line_link_bitmap;
-};
-
-#define FLASH_BASE 0x10000000
-#define FLASH_OPTION_BITS 0x10000
-
-#define NIOS2_FW_VERSION_OFF 0x300400
-#define RSU_REG_OFF 0x30042c
-#define FPGA_RP_LOAD BIT(3)
-#define NIOS2_PRERESET BIT(4)
-#define NIOS2_HANG BIT(5)
-#define RSU_ENABLE BIT(6)
-#define NIOS2_RESET BIT(7)
-#define NIOS2_I2C2_POLL_STOP BIT(13)
-#define FPGA_RECONF_REG_OFF 0x300430
-#define COUNTDOWN_START BIT(18)
-#define MAX10_BUILD_VER_OFF 0x300468
-#define PCB_INFO GENMASK(31, 24)
-#define MAX10_BUILD_VERION GENMASK(23, 0)
-#define FPGA_PAGE_INFO_OFF 0x30046c
-#define DT_AVAIL_REG_OFF 0x300490
-#define DT_AVAIL BIT(0)
-#define DT_BASE_ADDR_REG_OFF 0x300494
-#define PKVL_POLLING_CTRL 0x300480
-#define PKVL_LINK_STATUS 0x300564
-
-#define DFT_MAX_SIZE 0x7e0000
-
-int max10_reg_read(unsigned int reg, unsigned int *val);
-int max10_reg_write(unsigned int reg, unsigned int val);
-struct intel_max10_device *
-intel_max10_device_probe(struct altera_spi_device *spi,
- int chipselect);
-int intel_max10_device_remove(struct intel_max10_device *dev);
-
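-/*
- * Illustrative sketch (not from the original sources): read the MAX10
- * build-version register and decode it with GET_FIELD from opae_osdep.h.
- *
- *	unsigned int ver = 0;
- *
- *	if (!max10_reg_read(MAX10_BUILD_VER_OFF, &ver))
- *		dev_info(NULL, "pcb %lu build %lu\n",
- *			 GET_FIELD(PCB_INFO, ver),
- *			 GET_FIELD(MAX10_BUILD_VERION, ver));
- */
-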
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OPAE_OSDEP_H
-#define _OPAE_OSDEP_H
-
-#include <string.h>
-#include <stdbool.h>
-
-#ifdef RTE_LIBRTE_EAL
-#include "osdep_rte/osdep_generic.h"
-#else
-#include "osdep_raw/osdep_generic.h"
-#endif
-
-#define __iomem
-
-typedef uint8_t u8;
-typedef int8_t s8;
-typedef uint16_t u16;
-typedef uint32_t u32;
-typedef int32_t s32;
-typedef uint64_t u64;
-typedef uint64_t dma_addr_t;
-
-struct uuid {
- u8 b[16];
-};
-
-#ifndef LINUX_MACROS
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-#ifndef BIT
-#define BIT(a) (1UL << (a))
-#endif /* BIT */
-#define U64_C(x) x ## ULL
-#ifndef BIT_ULL
-#define BIT_ULL(a) (1ULL << (a))
-#endif /* BIT_ULL */
-#ifndef GENMASK
-#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif /* GENMASK */
-#ifndef GENMASK_ULL
-#define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
-#endif /* GENMASK_ULL */
-#endif /* LINUX_MACROS */
-
-#define SET_FIELD(m, v) (((v) << (__builtin_ffsll(m) - 1)) & (m))
-#define GET_FIELD(m, v) (((v) & (m)) >> (__builtin_ffsll(m) - 1))
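-/*
- * e.g. GET_FIELD(GENMASK(23, 0), 0x12345678) == 0x345678 and
- * SET_FIELD(GENMASK(31, 24), 0x12) == 0x12000000
- */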
-
-#define dev_err(x, args...) dev_printf(ERR, args)
-#define dev_info(x, args...) dev_printf(INFO, args)
-#define dev_warn(x, args...) dev_printf(WARNING, args)
-#define dev_debug(x, args...) dev_printf(DEBUG, args)
-
-#define pr_err(y, args...) dev_err(0, y, ##args)
-#define pr_warn(y, args...) dev_warn(0, y, ##args)
-#define pr_info(y, args...) dev_info(0, y, ##args)
-
-#ifndef WARN_ON
-#define WARN_ON(x) do { \
- int ret = !!(x); \
- if (unlikely(ret)) \
- pr_warn("WARN_ON: \"" #x "\" at %s:%d\n", __func__, __LINE__); \
-} while (0)
-#endif
-
-#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
-#define udelay(x) opae_udelay(x)
-#define msleep(x) opae_udelay(1000 * (x))
-#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
-
-#define time_after(a, b) ((long)((b) - (a)) < 0)
-#define time_before(a, b) time_after(b, a)
-#define opae_memset(a, b, c) memset((a), (b), (c))
-
-#define opae_readq_poll_timeout(addr, val, cond, invl, timeout)\
-({ \
- int wait = 0; \
- for (; wait <= timeout; wait += invl) { \
- (val) = opae_readq(addr); \
- if (cond) \
- break; \
- udelay(invl); \
- } \
- (cond) ? 0 : -ETIMEDOUT; \
-})
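-/*
- * Illustrative use (address and bit are placeholders): poll a DONE bit
- * every 10us for up to 1ms:
- *
- *	u64 v;
- *	int ret = opae_readq_poll_timeout(base + 0x18, v,
- *					  v & BIT_ULL(32), 10, 1000);
- */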
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_osdep.h"
-#include "opae_spi.h"
-
-static int nios_spi_indirect_read(struct altera_spi_device *dev, u32 reg,
- u32 *val)
-{
- u64 ctrl = 0;
- u64 stat = 0;
- int loops = SPI_MAX_RETRY;
-
- ctrl = NIOS_SPI_RD | ((u64)reg << 32);
- opae_writeq(ctrl, dev->regs + NIOS_SPI_CTRL);
-
- stat = opae_readq(dev->regs + NIOS_SPI_STAT);
- while (!(stat & NIOS_SPI_VALID) && --loops)
- stat = opae_readq(dev->regs + NIOS_SPI_STAT);
-
- *val = stat & NIOS_SPI_READ_DATA;
-
- return loops ? 0 : -ETIMEDOUT;
-}
-
-static int nios_spi_indirect_write(struct altera_spi_device *dev, u32 reg,
- u32 value)
-{
-	u64 ctrl = 0;
- u64 stat = 0;
- int loops = SPI_MAX_RETRY;
-
- ctrl |= NIOS_SPI_WR | (u64)reg << 32;
- ctrl |= value & NIOS_SPI_WRITE_DATA;
-
- opae_writeq(ctrl, dev->regs + NIOS_SPI_CTRL);
-
- stat = opae_readq(dev->regs + NIOS_SPI_STAT);
- while (!(stat & NIOS_SPI_VALID) && --loops)
- stat = opae_readq(dev->regs + NIOS_SPI_STAT);
-
- return loops ? 0 : -ETIMEDOUT;
-}
-
-static int spi_indirect_write(struct altera_spi_device *dev, u32 reg,
- u32 value)
-{
- u64 ctrl;
-
- opae_writeq(value & WRITE_DATA_MASK, dev->regs + SPI_WRITE);
-
- ctrl = CTRL_W | (reg >> 2);
- opae_writeq(ctrl, dev->regs + SPI_CTRL);
-
- return 0;
-}
-
-static int spi_indirect_read(struct altera_spi_device *dev, u32 reg,
- u32 *val)
-{
- u64 tmp;
- u64 ctrl;
-
- ctrl = CTRL_R | (reg >> 2);
- opae_writeq(ctrl, dev->regs + SPI_CTRL);
-
- /**
- * FIXME: Read one more time to avoid HW timing issue. This is
- * a short term workaround solution, and must be removed once
- * hardware fixing is done.
- */
- tmp = opae_readq(dev->regs + SPI_READ);
-
- *val = (u32)tmp;
-
- return 0;
-}
-
-int spi_reg_write(struct altera_spi_device *dev, u32 reg,
- u32 value)
-{
- return dev->reg_write(dev, reg, value);
-}
-
-int spi_reg_read(struct altera_spi_device *dev, u32 reg,
- u32 *val)
-{
- return dev->reg_read(dev, reg, val);
-}
-
-void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select)
-{
- spi_reg_write(dev, ALTERA_SPI_SLAVE_SEL, 1 << chip_select);
- spi_reg_write(dev, ALTERA_SPI_CONTROL, ALTERA_SPI_CONTROL_SSO_MSK);
-}
-
-void spi_cs_deactivate(struct altera_spi_device *dev)
-{
- spi_reg_write(dev, ALTERA_SPI_CONTROL, 0);
-}
-
-static int spi_flush_rx(struct altera_spi_device *dev)
-{
- u32 val = 0;
- int ret;
-
- ret = spi_reg_read(dev, ALTERA_SPI_STATUS, &val);
- if (ret)
- return ret;
-
- if (val & ALTERA_SPI_STATUS_RRDY_MSK) {
- ret = spi_reg_read(dev, ALTERA_SPI_RXDATA, &val);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-static unsigned int spi_write_bytes(struct altera_spi_device *dev, int count)
-{
- unsigned int val = 0;
- u16 *p16;
- u32 *p32;
-
- if (dev->txbuf) {
- switch (dev->data_width) {
- case 1:
- val = dev->txbuf[count];
- break;
- case 2:
- p16 = (u16 *)(dev->txbuf + 2*count);
- val = *p16;
- if (dev->endian == SPI_BIG_ENDIAN)
- val = cpu_to_be16(val);
- break;
- case 4:
- p32 = (u32 *)(dev->txbuf + 4*count);
- val = *p32;
- break;
- }
- }
-
- return val;
-}
-
-static void spi_fill_readbuffer(struct altera_spi_device *dev,
- unsigned int value, int count)
-{
- u16 *p16;
- u32 *p32;
-
- if (dev->rxbuf) {
- switch (dev->data_width) {
- case 1:
- dev->rxbuf[count] = value;
- break;
- case 2:
- p16 = (u16 *)(dev->rxbuf + 2*count);
- if (dev->endian == SPI_BIG_ENDIAN)
- *p16 = cpu_to_be16((u16)value);
- else
- *p16 = (u16)value;
- break;
- case 4:
- p32 = (u32 *)(dev->rxbuf + 4*count);
- if (dev->endian == SPI_BIG_ENDIAN)
- *p32 = cpu_to_be32(value);
- else
- *p32 = value;
- break;
- }
- }
-}
-
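-/*
- * Full-duplex transfer loop: for every word, write TXDATA, poll STATUS
- * until RRDY asserts (bounded by SPI_MAX_RETRY), then collect RXDATA.
- */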
-static int spi_txrx(struct altera_spi_device *dev)
-{
- unsigned int count = 0;
- u32 rxd;
- unsigned int tx_data;
- u32 status;
- int retry = 0;
- int ret;
-
- while (count < dev->len) {
- tx_data = spi_write_bytes(dev, count);
- spi_reg_write(dev, ALTERA_SPI_TXDATA, tx_data);
-
- while (1) {
- ret = spi_reg_read(dev, ALTERA_SPI_STATUS, &status);
- if (ret)
- return -EIO;
- if (status & ALTERA_SPI_STATUS_RRDY_MSK)
- break;
- if (retry++ > SPI_MAX_RETRY) {
- dev_err(dev, "%s, read timeout\n", __func__);
- return -EBUSY;
- }
- }
-
- ret = spi_reg_read(dev, ALTERA_SPI_RXDATA, &rxd);
- if (ret)
- return -EIO;
-
- spi_fill_readbuffer(dev, rxd, count);
-
- count++;
- }
-
- return 0;
-}
-
-int spi_command(struct altera_spi_device *dev, unsigned int chip_select,
- unsigned int wlen, void *wdata,
- unsigned int rlen, void *rdata)
-{
- if (((wlen > 0) && !wdata) || ((rlen > 0) && !rdata)) {
- dev_err(dev, "error on spi command checking\n");
- return -EINVAL;
- }
-
- wlen = wlen / dev->data_width;
- rlen = rlen / dev->data_width;
-
- /* flush rx buffer */
- spi_flush_rx(dev);
-
- spi_cs_activate(dev, chip_select);
- if (wlen) {
- dev->txbuf = wdata;
- dev->rxbuf = rdata;
- dev->len = wlen;
- spi_txrx(dev);
- }
- if (rlen) {
- dev->rxbuf = rdata;
- dev->txbuf = NULL;
- dev->len = rlen;
- spi_txrx(dev);
- }
- spi_cs_deactivate(dev);
- return 0;
-}
-
-struct altera_spi_device *altera_spi_alloc(void *base, int type)
-{
- struct altera_spi_device *spi_dev =
- opae_malloc(sizeof(struct altera_spi_device));
-
- if (!spi_dev)
- return NULL;
-
- spi_dev->regs = base;
-
- switch (type) {
- case TYPE_SPI:
- spi_dev->reg_read = spi_indirect_read;
- spi_dev->reg_write = spi_indirect_write;
- break;
- case TYPE_NIOS_SPI:
- spi_dev->reg_read = nios_spi_indirect_read;
- spi_dev->reg_write = nios_spi_indirect_write;
- break;
- default:
- dev_err(dev, "%s: invalid SPI type\n", __func__);
- goto error;
- }
-
- return spi_dev;
-
-error:
- altera_spi_release(spi_dev);
- return NULL;
-}
-
-void altera_spi_init(struct altera_spi_device *spi_dev)
-{
- spi_dev->spi_param.info = opae_readq(spi_dev->regs + SPI_CORE_PARAM);
-
- spi_dev->data_width = spi_dev->spi_param.data_width / 8;
- spi_dev->endian = spi_dev->spi_param.endian;
- spi_dev->num_chipselect = spi_dev->spi_param.num_chipselect;
- dev_info(spi_dev, "spi param: type=%d, data width:%d, endian:%d, clock_polarity=%d, clock=%dMHz, chips=%d, cpha=%d\n",
- spi_dev->spi_param.type,
- spi_dev->data_width, spi_dev->endian,
- spi_dev->spi_param.clock_polarity,
- spi_dev->spi_param.clock,
- spi_dev->num_chipselect,
- spi_dev->spi_param.clock_phase);
-
- /* clear */
- spi_reg_write(spi_dev, ALTERA_SPI_CONTROL, 0);
- spi_reg_write(spi_dev, ALTERA_SPI_STATUS, 0);
- /* flush rxdata */
- spi_flush_rx(spi_dev);
-}
-
-void altera_spi_release(struct altera_spi_device *dev)
-{
- if (dev)
- opae_free(dev);
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#ifndef _OPAE_SPI_H
-#define _OPAE_SPI_H
-
-#include "opae_osdep.h"
-
-#define ALTERA_SPI_RXDATA 0
-#define ALTERA_SPI_TXDATA 4
-#define ALTERA_SPI_STATUS 8
-#define ALTERA_SPI_CONTROL 12
-#define ALTERA_SPI_SLAVE_SEL 20
-
-#define ALTERA_SPI_STATUS_ROE_MSK 0x8
-#define ALTERA_SPI_STATUS_TOE_MSK 0x10
-#define ALTERA_SPI_STATUS_TMT_MSK 0x20
-#define ALTERA_SPI_STATUS_TRDY_MSK 0x40
-#define ALTERA_SPI_STATUS_RRDY_MSK 0x80
-#define ALTERA_SPI_STATUS_E_MSK 0x100
-
-#define ALTERA_SPI_CONTROL_IROE_MSK 0x8
-#define ALTERA_SPI_CONTROL_ITOE_MSK 0x10
-#define ALTERA_SPI_CONTROL_ITRDY_MSK 0x40
-#define ALTERA_SPI_CONTROL_IRRDY_MSK 0x80
-#define ALTERA_SPI_CONTROL_IE_MSK 0x100
-#define ALTERA_SPI_CONTROL_SSO_MSK 0x400
-
-#define SPI_CORE_PARAM 0x8
-#define SPI_CTRL 0x10
-#define CTRL_R BIT_ULL(9)
-#define CTRL_W BIT_ULL(8)
-#define CTRL_ADDR_MASK GENMASK_ULL(2, 0)
-#define SPI_READ 0x18
-#define READ_DATA_VALID BIT_ULL(32)
-#define READ_DATA_MASK GENMASK_ULL(31, 0)
-#define SPI_WRITE 0x20
-#define WRITE_DATA_MASK GENMASK_ULL(31, 0)
-
-#define SPI_MAX_RETRY 100000
-
-#define TYPE_SPI 0
-#define TYPE_NIOS_SPI 1
-
-struct spi_core_param {
- union {
- u64 info;
- struct {
- u8 type:1;
- u8 endian:1;
- u8 data_width:6;
- u8 num_chipselect:6;
- u8 clock_polarity:1;
- u8 clock_phase:1;
- u8 stages:2;
- u8 resvd:4;
- u16 clock:10;
- u16 peripheral_id:16;
- u8 controller_type:1;
- u16 resvd1:15;
- };
- };
-};
-
-struct altera_spi_device {
- u8 *regs;
- struct spi_core_param spi_param;
- int data_width; /* how many bytes for data width */
- int endian;
- #define SPI_BIG_ENDIAN 0
- #define SPI_LITTLE_ENDIAN 1
- int num_chipselect;
- unsigned char *rxbuf;
- unsigned char *txbuf;
- unsigned int len;
- int (*reg_read)(struct altera_spi_device *dev, u32 reg, u32 *val);
- int (*reg_write)(struct altera_spi_device *dev, u32 reg,
- u32 value);
-};
-
-#define HEADER_LEN 8
-#define RESPONSE_LEN 4
-#define SPI_TRANSACTION_MAX_LEN 1024
-#define TRAN_SEND_MAX_LEN (SPI_TRANSACTION_MAX_LEN + HEADER_LEN)
-#define TRAN_RESP_MAX_LEN SPI_TRANSACTION_MAX_LEN
-#define PACKET_SEND_MAX_LEN (2*TRAN_SEND_MAX_LEN + 4)
-#define PACKET_RESP_MAX_LEN (2*TRAN_RESP_MAX_LEN + 4)
-#define BYTES_SEND_MAX_LEN (2*PACKET_SEND_MAX_LEN)
-#define BYTES_RESP_MAX_LEN (2*PACKET_RESP_MAX_LEN)
-
-struct spi_tran_buffer {
- unsigned char tran_send[TRAN_SEND_MAX_LEN];
- unsigned char tran_resp[TRAN_RESP_MAX_LEN];
- unsigned char packet_send[PACKET_SEND_MAX_LEN];
- unsigned char packet_resp[PACKET_RESP_MAX_LEN];
- unsigned char bytes_send[BYTES_SEND_MAX_LEN];
- unsigned char bytes_resp[2*BYTES_RESP_MAX_LEN];
-};
-
-struct spi_transaction_dev {
- struct altera_spi_device *dev;
- int chipselect;
- struct spi_tran_buffer *buffer;
-};
-
-struct spi_tran_header {
- u8 trans_type;
- u8 reserve;
- u16 size;
- u32 addr;
-};
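-
-/*
- * The header travels big-endian on the wire: INIT_SPI_TRAN_HEADER in
- * opae_spi_transaction.c fills size and addr with cpu_to_be16/cpu_to_be32.
- */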
-
-int spi_command(struct altera_spi_device *dev, unsigned int chip_select,
- unsigned int wlen, void *wdata, unsigned int rlen, void *rdata);
-void spi_cs_deactivate(struct altera_spi_device *dev);
-void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select);
-struct altera_spi_device *altera_spi_alloc(void *base, int type);
-void altera_spi_init(struct altera_spi_device *dev);
-void altera_spi_release(struct altera_spi_device *dev);
-int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr,
- unsigned int size, unsigned char *data);
-int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr,
- unsigned int size, unsigned char *data);
-struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev,
- int chipselect);
-void spi_transaction_remove(struct spi_transaction_dev *dev);
-int spi_reg_write(struct altera_spi_device *dev, u32 reg,
- u32 value);
-int spi_reg_read(struct altera_spi_device *dev, u32 reg, u32 *val);
-
-#define NIOS_SPI_PARAM 0x8
-#define CONTROL_TYPE BIT_ULL(48)
-#define PERI_ID GENMASK_ULL(47, 32)
-#define SPI_CLK GENMASK_ULL(31, 22)
-#define SYNC_STAGES GENMASK_ULL(17, 16)
-#define CLOCK_PHASE BIT_ULL(15)
-#define CLOCK_POLARITY BIT_ULL(14)
-#define NUM_SELECT GENMASK_ULL(13, 8)
-#define DATA_WIDTH GENMASK_ULL(7, 2)
-#define SHIFT_DIRECTION BIT_ULL(1)
-#define SPI_TYPE BIT_ULL(0)
-#define NIOS_SPI_CTRL 0x10
-#define NIOS_SPI_RD (0x1ULL << 62)
-#define NIOS_SPI_WR (0x2ULL << 62)
-#define NIOS_SPI_COMMAND GENMASK_ULL(63, 62)
-#define NIOS_SPI_ADDR GENMASK_ULL(44, 32)
-#define NIOS_SPI_WRITE_DATA GENMASK_ULL(31, 0)
-#define NIOS_SPI_STAT 0x18
-#define NIOS_SPI_VALID BIT_ULL(32)
-#define NIOS_SPI_READ_DATA GENMASK_ULL(31, 0)
-#define NIOS_SPI_INIT_DONE 0x1000
-
-#define NIOS_SPI_INIT_STS0 0x1020
-#define NIOS_SPI_INIT_STS1 0x1024
-#define PKVL_STATUS_RESET 0
-#define PKVL_10G_MODE 1
-#define PKVL_25G_MODE 2
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2019 Intel Corporation
- */
-
-#include "opae_spi.h"
-#include "ifpga_compat.h"
-
-/* transaction opcodes */
-#define SPI_TRAN_SEQ_WRITE 0x04 /* SPI transaction sequential write */
-#define SPI_TRAN_SEQ_READ 0x14 /* SPI transaction sequential read */
-#define SPI_TRAN_NON_SEQ_WRITE 0x00 /* SPI transaction non-sequential write */
-#define SPI_TRAN_NON_SEQ_READ 0x10 /* SPI transaction non-sequential read */
-
-/* special packet characters */
-#define SPI_PACKET_SOP 0x7a
-#define SPI_PACKET_EOP 0x7b
-#define SPI_PACKET_CHANNEL 0x7c
-#define SPI_PACKET_ESC 0x7d
-
-/* special byte characters */
-#define SPI_BYTE_IDLE 0x4a
-#define SPI_BYTE_ESC 0x4d
-
-#define SPI_REG_BYTES 4
-
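-/*
- * The transaction protocol is layered: a transaction (header + data) is
- * framed into a packet (SOP/channel/EOP with escaping), which is then
- * byte-stuffed for the SPI core (IDLE/ESC characters, also escaped).
- * Escaping at both layers XORs the protected byte with 0x20.
- */
-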
-#define INIT_SPI_TRAN_HEADER(trans_type, size, addr) \
-({ \
-	header.trans_type = trans_type; \
-	header.reserve = 0; \
-	header.size = cpu_to_be16(size); \
-	header.addr = cpu_to_be32(addr); \
-})
-
-#ifdef OPAE_SPI_DEBUG
-static void print_buffer(const char *string, void *buffer, int len)
-{
- int i;
- unsigned char *p = buffer;
-
- printf("%s print buffer, len=%d\n", string, len);
-
- for (i = 0; i < len; i++)
- printf("%x ", *(p+i));
- printf("\n");
-}
-#else
-static void print_buffer(const char *string, void *buffer, int len)
-{
- UNUSED(string);
- UNUSED(buffer);
- UNUSED(len);
-}
-#endif
-
-static unsigned char xor_20(unsigned char val)
-{
- return val^0x20;
-}
-
-static void reorder_phy_data(u8 bits_per_word,
- void *buf, unsigned int len)
-{
- unsigned int count = len / (bits_per_word/8);
- u32 *p;
-
- if (bits_per_word == 32) {
- p = (u32 *)buf;
- while (count--) {
- *p = cpu_to_be32(*p);
- p++;
- }
- }
-}
-
-enum {
- SPI_FOUND_SOP,
- SPI_FOUND_EOP,
- SPI_NOT_FOUND,
-};
-
-static int resp_find_sop_eop(unsigned char *resp, unsigned int len,
- int flags)
-{
- int ret = SPI_NOT_FOUND;
-
- unsigned char *b = resp;
-
-	/* find SOP */
-	if (flags != SPI_FOUND_SOP) {
-		while (b < resp + len && *b != SPI_PACKET_SOP)
-			b++;
-
-		/* bound check before dereferencing past the buffer */
-		if (b == resp + len)
-			goto done;
-
-		ret = SPI_FOUND_SOP;
-	}
-
-	/* find EOP */
-	while (b < resp + len && *b != SPI_PACKET_EOP)
-		b++;
-
-	if (b == resp + len)
-		goto done;
-
- ret = SPI_FOUND_EOP;
-
-done:
- return ret;
-}
-
-static int byte_to_core_convert(struct spi_transaction_dev *dev,
- unsigned int send_len, unsigned char *send_data,
- unsigned int resp_len, unsigned char *resp_data,
- unsigned int *valid_resp_len)
-{
- unsigned int i;
- int ret = 0;
- unsigned char *send_packet = dev->buffer->bytes_send;
- unsigned char *resp_packet = dev->buffer->bytes_resp;
- unsigned char *p;
- unsigned char current_byte;
- unsigned char *tx_buffer;
- unsigned int tx_len = 0;
- unsigned char *rx_buffer;
- unsigned int rx_len = 0;
- int retry = 0;
- int spi_flags;
- unsigned int resp_max_len = 2 * resp_len;
-
- print_buffer("before bytes:", send_data, send_len);
-
- p = send_packet;
-
- for (i = 0; i < send_len; i++) {
- current_byte = send_data[i];
- switch (current_byte) {
- case SPI_BYTE_IDLE:
- *p++ = SPI_BYTE_IDLE;
- *p++ = xor_20(current_byte);
- break;
- case SPI_BYTE_ESC:
- *p++ = SPI_BYTE_ESC;
- *p++ = xor_20(current_byte);
- break;
- default:
- *p++ = current_byte;
- break;
- }
- }
-
- print_buffer("before spi:", send_packet, p-send_packet);
-
- reorder_phy_data(32, send_packet, p - send_packet);
-
- print_buffer("after order to spi:", send_packet, p-send_packet);
-
- /* call spi */
- tx_buffer = send_packet;
- tx_len = p - send_packet;
- rx_buffer = resp_packet;
- rx_len = resp_max_len;
- spi_flags = SPI_NOT_FOUND;
-
-read_again:
- ret = spi_command(dev->dev, dev->chipselect, tx_len, tx_buffer,
- rx_len, rx_buffer);
- if (ret)
- return -EBUSY;
-
- print_buffer("read from spi:", rx_buffer, rx_len);
-
-	/* look for the SOP first */
- ret = resp_find_sop_eop(rx_buffer, rx_len - 1, spi_flags);
- if (ret != SPI_FOUND_EOP) {
- tx_buffer = NULL;
- tx_len = 0;
- if (retry++ > 10) {
- dev_err(NULL, "cannot found valid data from SPI\n");
- return -EBUSY;
- }
-
- if (ret == SPI_FOUND_SOP) {
- rx_buffer += rx_len;
- resp_max_len += rx_len;
- }
-
- spi_flags = ret;
- goto read_again;
- }
-
- print_buffer("found valid data:", resp_packet, resp_max_len);
-
- /* analyze response packet */
- i = 0;
- p = resp_data;
- while (i < resp_max_len) {
- current_byte = resp_packet[i];
- switch (current_byte) {
- case SPI_BYTE_IDLE:
- i++;
- break;
- case SPI_BYTE_ESC:
- i++;
- current_byte = resp_packet[i];
- *p++ = xor_20(current_byte);
- i++;
- break;
- default:
- *p++ = current_byte;
- i++;
- break;
- }
- }
-
- /* receive "4a" means the SPI is idle, not valid data */
- *valid_resp_len = p - resp_data;
- if (*valid_resp_len == 0) {
- dev_err(NULL, "error: repond package without valid data\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int packet_to_byte_convert(struct spi_transaction_dev *dev,
- unsigned int send_len, unsigned char *send_buf,
- unsigned int resp_len, unsigned char *resp_buf,
- unsigned int *valid)
-{
- int ret = 0;
- unsigned int i;
- unsigned char current_byte;
- unsigned int resp_max_len;
- unsigned char *send_packet = dev->buffer->packet_send;
- unsigned char *resp_packet = dev->buffer->packet_resp;
- unsigned char *p;
- unsigned int valid_resp_len = 0;
-
- print_buffer("before packet:", send_buf, send_len);
-
- resp_max_len = 2 * resp_len + 4;
-
- p = send_packet;
-
- /* SOP header */
- *p++ = SPI_PACKET_SOP;
-
- *p++ = SPI_PACKET_CHANNEL;
- *p++ = 0;
-
- /* append the data into a packet */
- for (i = 0; i < send_len; i++) {
- current_byte = send_buf[i];
-
- /* EOP for last byte */
- if (i == send_len - 1)
- *p++ = SPI_PACKET_EOP;
-
- switch (current_byte) {
- case SPI_PACKET_SOP:
- case SPI_PACKET_EOP:
- case SPI_PACKET_CHANNEL:
- case SPI_PACKET_ESC:
- *p++ = SPI_PACKET_ESC;
- *p++ = xor_20(current_byte);
- break;
- default:
- *p++ = current_byte;
- }
- }
-
- ret = byte_to_core_convert(dev, p - send_packet,
- send_packet, resp_max_len, resp_packet,
- &valid_resp_len);
- if (ret)
- return -EBUSY;
-
- print_buffer("after byte conver:", resp_packet, valid_resp_len);
-
- /* analyze the response packet */
- p = resp_buf;
-
- /* look for SOP */
- for (i = 0; i < valid_resp_len; i++) {
- if (resp_packet[i] == SPI_PACKET_SOP)
- break;
- }
-
-	if (i == valid_resp_len) {
-		dev_err(NULL, "error: no SOP found in response packet\n");
-		return -EINVAL;
-	}
-
- i++;
-
- /* continue parsing data after SOP */
- while (i < valid_resp_len) {
- current_byte = resp_packet[i];
-
- switch (current_byte) {
- case SPI_PACKET_ESC:
- case SPI_PACKET_CHANNEL:
- case SPI_PACKET_SOP:
- i++;
- current_byte = resp_packet[i];
- *p++ = xor_20(current_byte);
- i++;
- break;
- case SPI_PACKET_EOP:
- i++;
- current_byte = resp_packet[i];
- if (current_byte == SPI_PACKET_ESC ||
- current_byte == SPI_PACKET_CHANNEL ||
- current_byte == SPI_PACKET_SOP) {
- i++;
- current_byte = resp_packet[i];
- *p++ = xor_20(current_byte);
- } else
- *p++ = current_byte;
- i = valid_resp_len;
- break;
- default:
- *p++ = current_byte;
- i++;
- }
-
- }
-
- *valid = p - resp_buf;
-
- print_buffer("after packet:", resp_buf, *valid);
-
- return ret;
-}
-
-static int do_transaction(struct spi_transaction_dev *dev, unsigned int addr,
- unsigned int size, unsigned char *data,
- unsigned int trans_type)
-{
-	struct spi_tran_header header;
- unsigned char *transaction = dev->buffer->tran_send;
- unsigned char *response = dev->buffer->tran_resp;
- unsigned char *p;
- int ret = 0;
- unsigned int i;
- unsigned int valid_len = 0;
-
-	/* build the transaction header */
- INIT_SPI_TRAN_HEADER(trans_type, size, addr);
-
- /* fill the header */
- p = transaction;
- opae_memcpy(p, &header, sizeof(struct spi_tran_header));
- p = p + sizeof(struct spi_tran_header);
-
- switch (trans_type) {
- case SPI_TRAN_SEQ_WRITE:
- case SPI_TRAN_NON_SEQ_WRITE:
- for (i = 0; i < size; i++)
- *p++ = *data++;
-
-		ret = packet_to_byte_convert(dev, size + HEADER_LEN,
- transaction, RESPONSE_LEN, response,
- &valid_len);
- if (ret)
- return -EBUSY;
-
-		/* the response echoes the byte count in bytes 2-3
-		 * (big-endian); a mismatch means the write failed
-		 */
-		if (size != ((unsigned int)(response[2] & 0xff) << 8 |
-			(unsigned int)(response[3] & 0xff)))
- ret = -EBUSY;
-
- break;
- case SPI_TRAN_SEQ_READ:
- case SPI_TRAN_NON_SEQ_READ:
-		ret = packet_to_byte_convert(dev, HEADER_LEN,
- transaction, size, response,
- &valid_len);
- if (ret || valid_len != size)
- return -EBUSY;
-
- for (i = 0; i < size; i++)
- *data++ = *response++;
-
- ret = 0;
- break;
- }
-
- return ret;
-}
-
-int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr,
- unsigned int size, unsigned char *data)
-{
- return do_transaction(dev, addr, size, data,
- (size > SPI_REG_BYTES) ?
- SPI_TRAN_SEQ_READ : SPI_TRAN_NON_SEQ_READ);
-}
-
-int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr,
- unsigned int size, unsigned char *data)
-{
- return do_transaction(dev, addr, size, data,
- (size > SPI_REG_BYTES) ?
- SPI_TRAN_SEQ_WRITE : SPI_TRAN_NON_SEQ_WRITE);
-}
-
-struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev,
- int chipselect)
-{
- struct spi_transaction_dev *spi_tran_dev;
-
- spi_tran_dev = opae_malloc(sizeof(struct spi_transaction_dev));
- if (!spi_tran_dev)
- return NULL;
-
- spi_tran_dev->dev = dev;
- spi_tran_dev->chipselect = chipselect;
-
- spi_tran_dev->buffer = opae_malloc(sizeof(struct spi_tran_buffer));
- if (!spi_tran_dev->buffer) {
- opae_free(spi_tran_dev);
- return NULL;
- }
-
- return spi_tran_dev;
-}
-
-void spi_transaction_remove(struct spi_transaction_dev *dev)
-{
- if (dev && dev->buffer)
- opae_free(dev->buffer);
- if (dev)
- opae_free(dev);
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OSDEP_RAW_GENERIC_H
-#define _OSDEP_RAW_GENERIC_H
-
-#define compiler_barrier() (asm volatile ("" : : : "memory"))
-
-#define io_wmb() compiler_barrier()
-#define io_rmb() compiler_barrier()
-
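-/*
- * These accessors only contain compiler barriers; they assume the mapped
- * region is uncached/strongly ordered, so no CPU memory fence is issued.
- */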
-static inline uint8_t opae_readb(const volatile void *addr)
-{
- uint8_t val;
-
- val = *(const volatile uint8_t *)addr;
- io_rmb();
- return val;
-}
-
-static inline uint16_t opae_readw(const volatile void *addr)
-{
- uint16_t val;
-
- val = *(const volatile uint16_t *)addr;
- io_rmb();
- return val;
-}
-
-static inline uint32_t opae_readl(const volatile void *addr)
-{
- uint32_t val;
-
- val = *(const volatile uint32_t *)addr;
- io_rmb();
- return val;
-}
-
-static inline uint64_t opae_readq(const volatile void *addr)
-{
- uint64_t val;
-
- val = *(const volatile uint64_t *)addr;
- io_rmb();
- return val;
-}
-
-static inline void opae_writeb(uint8_t value, volatile void *addr)
-{
- io_wmb();
- *(volatile uint8_t *)addr = value;
-}
-
-static inline void opae_writew(uint16_t value, volatile void *addr)
-{
- io_wmb();
- *(volatile uint16_t *)addr = value;
-}
-
-static inline void opae_writel(uint32_t value, volatile void *addr)
-{
- io_wmb();
- *(volatile uint32_t *)addr = value;
-}
-
-static inline void opae_writeq(uint64_t value, volatile void *addr)
-{
- io_wmb();
- *(volatile uint64_t *)addr = value;
-}
-
-#define opae_free(addr) free(addr)
-#define opae_memcpy(a, b, c) memcpy((a), (b), (c))
-
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _OSDEP_RTE_GENERIC_H
-#define _OSDEP_RTE_GENERIC_H
-
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_spinlock.h>
-#include <rte_log.h>
-#include <rte_io.h>
-#include <rte_malloc.h>
-#include <rte_byteorder.h>
-#include <rte_memcpy.h>
-
-#define dev_printf(level, fmt, args...) \
- RTE_LOG(level, PMD, "osdep_rte: " fmt, ## args)
-
-#define osdep_panic(...) rte_panic(__VA_ARGS__)
-
-#define opae_udelay(x) rte_delay_us(x)
-
-#define opae_readb(addr) rte_read8(addr)
-#define opae_readw(addr) rte_read16(addr)
-#define opae_readl(addr) rte_read32(addr)
-#define opae_readq(addr) rte_read64(addr)
-#define opae_writeb(value, addr) rte_write8(value, addr)
-#define opae_writew(value, addr) rte_write16(value, addr)
-#define opae_writel(value, addr) rte_write32(value, addr)
-#define opae_writeq(value, addr) rte_write64(value, addr)
-
-#define opae_malloc(size) rte_malloc(NULL, size, 0)
-#define opae_zmalloc(size) rte_zmalloc(NULL, size, 0)
-#define opae_free(addr) rte_free(addr)
-
-#define ARRAY_SIZE(arr) RTE_DIM(arr)
-
-#define min(a, b) RTE_MIN(a, b)
-#define max(a, b) RTE_MAX(a, b)
-
-#define spinlock_t rte_spinlock_t
-#define spinlock_init(x) rte_spinlock_init(x)
-#define spinlock_lock(x) rte_spinlock_lock(x)
-#define spinlock_unlock(x) rte_spinlock_unlock(x)
-
-#define cpu_to_be16(o) rte_cpu_to_be_16(o)
-#define cpu_to_be32(o) rte_cpu_to_be_32(o)
-#define cpu_to_be64(o) rte_cpu_to_be_64(o)
-#define cpu_to_le16(o) rte_cpu_to_le_16(o)
-#define cpu_to_le32(o) rte_cpu_to_le_32(o)
-#define cpu_to_le64(o) rte_cpu_to_le_64(o)
-
-#define opae_memcpy(a, b, c) rte_memcpy((a), (b), (c))
-
-static inline unsigned long msecs_to_timer_cycles(unsigned int m)
-{
-	/* multiply before dividing so sub-second values do not truncate */
-	return rte_get_timer_hz() * m / 1000;
-}
-
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include <string.h>
-#include <dirent.h>
-#include <sys/stat.h>
-#include <unistd.h>
-#include <sys/types.h>
-#include <fcntl.h>
-#include <rte_log.h>
-#include <rte_bus.h>
-#include <rte_eal_memconfig.h>
-#include <rte_malloc.h>
-#include <rte_devargs.h>
-#include <rte_memcpy.h>
-#include <rte_pci.h>
-#include <rte_bus_pci.h>
-#include <rte_kvargs.h>
-#include <rte_alarm.h>
-
-#include <rte_errno.h>
-#include <rte_per_lcore.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_eal.h>
-#include <rte_common.h>
-#include <rte_bus_vdev.h>
-
-#include "base/opae_hw_api.h"
-#include "rte_rawdev.h"
-#include "rte_rawdev_pmd.h"
-#include "rte_bus_ifpga.h"
-#include "ifpga_common.h"
-#include "ifpga_logs.h"
-#include "ifpga_rawdev.h"
-#include "ipn3ke_rawdev_api.h"
-
-int ifpga_rawdev_logtype;
-
-#define PCI_VENDOR_ID_INTEL 0x8086
-/* PCI Device ID */
-#define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD
-#define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0
-#define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4
-#define PCIE_DEVICE_ID_PAC_N3000 0x0B30
-/* VF Device */
-#define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF
-#define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1
-#define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5
-#define PCIE_DEVICE_ID_VF_PAC_N3000 0x0B31
-#define RTE_MAX_RAW_DEVICE 10
-
-static const struct rte_pci_id pci_ifpga_map[] = {
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_5_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_6_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_6_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PAC_N3000),},
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_PAC_N3000),},
- { .vendor_id = 0, /* sentinel */ },
-};
-
-static int
-ifpga_fill_afu_dev(struct opae_accelerator *acc,
- struct rte_afu_device *afu_dev)
-{
- struct rte_mem_resource *res = afu_dev->mem_resource;
- struct opae_acc_region_info region_info;
- struct opae_acc_info info;
- unsigned long i;
- int ret;
-
- ret = opae_acc_get_info(acc, &info);
- if (ret)
- return ret;
-
- if (info.num_regions > PCI_MAX_RESOURCE)
- return -EFAULT;
-
- afu_dev->num_region = info.num_regions;
-
- for (i = 0; i < info.num_regions; i++) {
- region_info.index = i;
-		ret = opae_acc_get_region_info(acc, &region_info);
- if (ret)
- return ret;
-
- if ((region_info.flags & ACC_REGION_MMIO) &&
- (region_info.flags & ACC_REGION_READ) &&
- (region_info.flags & ACC_REGION_WRITE)) {
- res[i].phys_addr = region_info.phys_addr;
- res[i].len = region_info.len;
- res[i].addr = region_info.addr;
- } else
- return -EFAULT;
- }
-
- return 0;
-}
-
-static void
-ifpga_rawdev_info_get(struct rte_rawdev *dev,
- rte_rawdev_obj_t dev_info)
-{
- struct opae_adapter *adapter;
- struct opae_accelerator *acc;
- struct rte_afu_device *afu_dev;
- struct opae_manager *mgr = NULL;
- struct opae_eth_group_region_info opae_lside_eth_info;
- struct opae_eth_group_region_info opae_nside_eth_info;
- int lside_bar_idx, nside_bar_idx;
-
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
-
- if (!dev_info) {
- IFPGA_RAWDEV_PMD_ERR("Invalid request");
- return;
- }
-
- adapter = ifpga_rawdev_get_priv(dev);
- if (!adapter)
- return;
-
- afu_dev = dev_info;
- afu_dev->rawdev = dev;
-
- /* find opae_accelerator and fill info into afu_device */
- opae_adapter_for_each_acc(adapter, acc) {
- if (acc->index != afu_dev->id.port)
- continue;
-
- if (ifpga_fill_afu_dev(acc, afu_dev)) {
- IFPGA_RAWDEV_PMD_ERR("cannot get info\n");
- return;
- }
- }
-
- /* get opae_manager to rawdev */
- mgr = opae_adapter_get_mgr(adapter);
- if (mgr) {
- /* get LineSide BAR Index */
- if (opae_manager_get_eth_group_region_info(mgr, 0,
- &opae_lside_eth_info)) {
- return;
- }
- lside_bar_idx = opae_lside_eth_info.mem_idx;
-
- /* get NICSide BAR Index */
- if (opae_manager_get_eth_group_region_info(mgr, 1,
- &opae_nside_eth_info)) {
- return;
- }
- nside_bar_idx = opae_nside_eth_info.mem_idx;
-
- if (lside_bar_idx >= PCI_MAX_RESOURCE ||
- nside_bar_idx >= PCI_MAX_RESOURCE ||
- lside_bar_idx == nside_bar_idx)
- return;
-
- /* fill LineSide BAR Index */
- afu_dev->mem_resource[lside_bar_idx].phys_addr =
- opae_lside_eth_info.phys_addr;
- afu_dev->mem_resource[lside_bar_idx].len =
- opae_lside_eth_info.len;
- afu_dev->mem_resource[lside_bar_idx].addr =
- opae_lside_eth_info.addr;
-
- /* fill NICSide BAR Index */
- afu_dev->mem_resource[nside_bar_idx].phys_addr =
- opae_nside_eth_info.phys_addr;
- afu_dev->mem_resource[nside_bar_idx].len =
- opae_nside_eth_info.len;
- afu_dev->mem_resource[nside_bar_idx].addr =
- opae_nside_eth_info.addr;
- }
-}
-
-static int
-ifpga_rawdev_configure(const struct rte_rawdev *dev,
- rte_rawdev_obj_t config)
-{
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- return config ? 0 : 1;
-}
-
-static int
-ifpga_rawdev_start(struct rte_rawdev *dev)
-{
- int ret = 0;
- struct opae_adapter *adapter;
-
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- adapter = ifpga_rawdev_get_priv(dev);
- if (!adapter)
- return -ENODEV;
-
- return ret;
-}
-
-static void
-ifpga_rawdev_stop(struct rte_rawdev *dev)
-{
- dev->started = 0;
-}
-
-static int
-ifpga_rawdev_close(struct rte_rawdev *dev)
-{
- return dev ? 0:1;
-}
-
-static int
-ifpga_rawdev_reset(struct rte_rawdev *dev)
-{
- return dev ? 0:1;
-}
-
-static int
-fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
- u64 *status)
-{
-
- struct opae_adapter *adapter;
- struct opae_manager *mgr;
- struct opae_accelerator *acc;
- struct opae_bridge *br;
- int ret;
-
- adapter = ifpga_rawdev_get_priv(raw_dev);
- if (!adapter)
- return -ENODEV;
-
- mgr = opae_adapter_get_mgr(adapter);
- if (!mgr)
- return -ENODEV;
-
- acc = opae_adapter_get_acc(adapter, port_id);
- if (!acc)
- return -ENODEV;
-
- br = opae_acc_get_br(acc);
- if (!br)
- return -ENODEV;
-
- ret = opae_manager_flash(mgr, port_id, buffer, size, status);
- if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s pr error %d\n", __func__, ret);
- return ret;
- }
-
- ret = opae_bridge_reset(br);
- if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d\n",
- __func__, port_id, ret);
- return ret;
- }
-
- return ret;
-}
-
-static int
-rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
- const char *file_name)
-{
- struct stat file_stat;
- int file_fd;
- int ret = 0;
- ssize_t buffer_size;
- void *buffer;
- u64 pr_error;
-
- if (!file_name)
- return -EINVAL;
-
- file_fd = open(file_name, O_RDONLY);
- if (file_fd < 0) {
- IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s\n",
- __func__, file_name);
- IFPGA_RAWDEV_PMD_ERR("Message : %s\n", strerror(errno));
- return -EINVAL;
- }
- ret = stat(file_name, &file_stat);
- if (ret) {
- IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s\n",
- file_name);
- ret = -EINVAL;
- goto close_fd;
- }
- buffer_size = file_stat.st_size;
- if (buffer_size <= 0) {
- ret = -EINVAL;
- goto close_fd;
- }
-
- IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu\n", buffer_size);
- buffer = rte_malloc(NULL, buffer_size, 0);
- if (!buffer) {
- ret = -ENOMEM;
- goto close_fd;
- }
-
- /*read the raw data*/
- if (buffer_size != read(file_fd, (void *)buffer, buffer_size)) {
- ret = -EINVAL;
- goto free_buffer;
- }
-
- /*do PR now*/
- ret = fpga_pr(rawdev, port_id, buffer, buffer_size, &pr_error);
- IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.\n", port_id,
- ret ? "failed" : "success");
- if (ret) {
- ret = -EINVAL;
- goto free_buffer;
- }
-
-free_buffer:
- if (buffer)
- rte_free(buffer);
-close_fd:
- close(file_fd);
- file_fd = 0;
- return ret;
-}
-
-static int
-ifpga_rawdev_pr(struct rte_rawdev *dev,
- rte_rawdev_obj_t pr_conf)
-{
- struct opae_adapter *adapter;
- struct rte_afu_pr_conf *afu_pr_conf;
- int ret;
- struct uuid uuid;
- struct opae_accelerator *acc;
-
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
-
- adapter = ifpga_rawdev_get_priv(dev);
- if (!adapter)
- return -ENODEV;
-
- if (!pr_conf)
- return -EINVAL;
-
- afu_pr_conf = pr_conf;
-
- if (afu_pr_conf->pr_enable) {
- ret = rte_fpga_do_pr(dev,
- afu_pr_conf->afu_id.port,
- afu_pr_conf->bs_path);
- if (ret) {
- IFPGA_RAWDEV_PMD_ERR("do pr error %d\n", ret);
- return ret;
- }
- }
-
- acc = opae_adapter_get_acc(adapter, afu_pr_conf->afu_id.port);
- if (!acc)
- return -ENODEV;
-
- ret = opae_acc_get_uuid(acc, &uuid);
- if (ret)
- return ret;
-
- memcpy(&afu_pr_conf->afu_id.uuid.uuid_low, uuid.b, sizeof(u64));
- memcpy(&afu_pr_conf->afu_id.uuid.uuid_high, uuid.b + 8, sizeof(u64));
-
- IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx\n", __func__,
- (unsigned long)afu_pr_conf->afu_id.uuid.uuid_low,
- (unsigned long)afu_pr_conf->afu_id.uuid.uuid_high);
-
- return 0;
-}
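Since ifpga_rawdev_pr() is wired to the firmware_load op (see ifpga_rawdev_ops below), an application triggers partial reconfiguration through the generic rawdev API rather than calling OPAE directly. A hedged sketch of such a call; the port number and bitstream path are placeholders, and the rte_afu_pr_conf fields follow the usage in this file:

#include <stdio.h>
#include <string.h>

#include <rte_rawdev.h>
#include <rte_bus_ifpga.h>	/* struct rte_afu_pr_conf */

/* Illustrative only: request PR of AFU port 0 from a local file. */
static int reprogram_port0(uint16_t dev_id)
{
	struct rte_afu_pr_conf pr;

	memset(&pr, 0, sizeof(pr));
	pr.afu_id.port = 0;
	pr.pr_enable = 1;
	snprintf(pr.bs_path, sizeof(pr.bs_path), "%s", "/path/to/afu.gbs");

	return rte_rawdev_firmware_load(dev_id, &pr);
}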
-
-static int
-ifpga_rawdev_get_attr(struct rte_rawdev *dev,
- const char *attr_name, uint64_t *attr_value)
-{
- struct opae_adapter *adapter;
- struct opae_manager *mgr;
- struct opae_retimer_info opae_rtm_info;
- struct opae_retimer_status opae_rtm_status;
- struct opae_eth_group_info opae_eth_grp_info;
- struct opae_eth_group_region_info opae_eth_grp_reg_info;
- int eth_group_num = 0;
- uint64_t port_link_bitmap = 0, port_link_bit;
- uint32_t i, j, p, q;
-
-#define MAX_PORT_PER_RETIMER 4
-
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
-
- if (!dev || !attr_name || !attr_value) {
- IFPGA_RAWDEV_PMD_ERR("Invalid arguments for getting attributes");
- return -1;
- }
-
- adapter = ifpga_rawdev_get_priv(dev);
- if (!adapter) {
- IFPGA_RAWDEV_PMD_ERR("Adapter of dev %s is NULL", dev->name);
- return -1;
- }
-
- mgr = opae_adapter_get_mgr(adapter);
- if (!mgr) {
- IFPGA_RAWDEV_PMD_ERR("opae_manager of opae_adapter is NULL");
- return -1;
- }
-
- /* currently, eth_group_num is always 2 */
- eth_group_num = opae_manager_get_eth_group_nums(mgr);
- if (eth_group_num < 0)
- return -1;
-
- if (!strcmp(attr_name, "LineSideBaseMAC")) {
- /* Currently FPGA not implement, so just set all zeros*/
- *attr_value = (uint64_t)0;
- return 0;
- }
- if (!strcmp(attr_name, "LineSideMACType")) {
- /* eth_group 0 on FPGA connect to LineSide */
- if (opae_manager_get_eth_group_info(mgr, 0,
- &opae_eth_grp_info))
- return -1;
- switch (opae_eth_grp_info.speed) {
- case ETH_SPEED_10G:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI);
- break;
- case ETH_SPEED_25G:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI);
- break;
- default:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_RETIMER_MAC_TYPE_UNKNOWN);
- break;
- }
- return 0;
- }
- if (!strcmp(attr_name, "LineSideLinkSpeed")) {
- if (opae_manager_get_retimer_status(mgr, &opae_rtm_status))
- return -1;
- switch (opae_rtm_status.speed) {
- case MXD_1GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- case MXD_2_5GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- case MXD_5GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- case MXD_10GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_10GB);
- break;
- case MXD_25GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_25GB);
- break;
- case MXD_40GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_40GB);
- break;
- case MXD_100GB:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- case MXD_SPEED_UNKNOWN:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- default:
- *attr_value =
- (uint64_t)(IFPGA_RAWDEV_LINK_SPEED_UNKNOWN);
- break;
- }
- return 0;
- }
- if (!strcmp(attr_name, "LineSideLinkRetimerNum")) {
- if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
- return -1;
- *attr_value = (uint64_t)(opae_rtm_info.nums_retimer);
- return 0;
- }
- if (!strcmp(attr_name, "LineSideLinkPortNum")) {
- if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
- return -1;
- uint64_t tmp = (uint64_t)opae_rtm_info.ports_per_retimer *
- (uint64_t)opae_rtm_info.nums_retimer;
- *attr_value = tmp;
- return 0;
- }
- if (!strcmp(attr_name, "LineSideLinkStatus")) {
- if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
- return -1;
- if (opae_manager_get_retimer_status(mgr, &opae_rtm_status))
- return -1;
- (*attr_value) = 0;
- q = 0;
- port_link_bitmap = (uint64_t)(opae_rtm_status.line_link_bitmap);
- for (i = 0; i < opae_rtm_info.nums_retimer; i++) {
- p = i * MAX_PORT_PER_RETIMER;
- for (j = 0; j < opae_rtm_info.ports_per_retimer; j++) {
- port_link_bit = 0;
- IFPGA_BIT_SET(port_link_bit, (p+j));
- port_link_bit &= port_link_bitmap;
- if (port_link_bit)
- IFPGA_BIT_SET((*attr_value), q);
- q++;
- }
- }
- return 0;
- }
- if (!strcmp(attr_name, "LineSideBARIndex")) {
- /* eth_group 0 on FPGA connect to LineSide */
- if (opae_manager_get_eth_group_region_info(mgr, 0,
- &opae_eth_grp_reg_info))
- return -1;
- *attr_value = (uint64_t)opae_eth_grp_reg_info.mem_idx;
- return 0;
- }
- if (!strcmp(attr_name, "NICSideMACType")) {
- /* eth_group 1 on FPGA connect to NicSide */
- if (opae_manager_get_eth_group_info(mgr, 1,
- &opae_eth_grp_info))
- return -1;
- *attr_value = (uint64_t)(opae_eth_grp_info.speed);
- return 0;
- }
- if (!strcmp(attr_name, "NICSideLinkSpeed")) {
- /* eth_group 1 on FPGA connect to NicSide */
- if (opae_manager_get_eth_group_info(mgr, 1,
- &opae_eth_grp_info))
- return -1;
- *attr_value = (uint64_t)(opae_eth_grp_info.speed);
- return 0;
- }
- if (!strcmp(attr_name, "NICSideLinkPortNum")) {
- if (opae_manager_get_retimer_info(mgr, &opae_rtm_info))
- return -1;
- uint64_t tmp = (uint64_t)opae_rtm_info.nums_fvl *
- (uint64_t)opae_rtm_info.ports_per_fvl;
- *attr_value = tmp;
- return 0;
- }
- if (!strcmp(attr_name, "NICSideLinkStatus"))
- return 0;
- if (!strcmp(attr_name, "NICSideBARIndex")) {
- /* eth_group 1 on FPGA connect to NicSide */
- if (opae_manager_get_eth_group_region_info(mgr, 1,
- &opae_eth_grp_reg_info))
- return -1;
- *attr_value = (uint64_t)opae_eth_grp_reg_info.mem_idx;
- return 0;
- }
-
- IFPGA_RAWDEV_PMD_ERR("%s not support", attr_name);
- return -1;
-}
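The attribute strings handled above form the driver's query interface; an application reads them through rte_rawdev_get_attr(). For instance (illustrative wrapper, not part of this patch):

#include <inttypes.h>
#include <stdio.h>

#include <rte_rawdev.h>

/* Query one of the attributes matched above by its string name. */
static void print_line_speed(uint16_t dev_id)
{
	uint64_t speed;

	if (rte_rawdev_get_attr(dev_id, "LineSideLinkSpeed", &speed) == 0)
		printf("line side link speed attr: %" PRIu64 "\n", speed);
}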
-
-static const struct rte_rawdev_ops ifpga_rawdev_ops = {
- .dev_info_get = ifpga_rawdev_info_get,
- .dev_configure = ifpga_rawdev_configure,
- .dev_start = ifpga_rawdev_start,
- .dev_stop = ifpga_rawdev_stop,
- .dev_close = ifpga_rawdev_close,
- .dev_reset = ifpga_rawdev_reset,
-
- .queue_def_conf = NULL,
- .queue_setup = NULL,
- .queue_release = NULL,
-
- .attr_get = ifpga_rawdev_get_attr,
- .attr_set = NULL,
-
- .enqueue_bufs = NULL,
- .dequeue_bufs = NULL,
-
- .dump = NULL,
-
- .xstats_get = NULL,
- .xstats_get_names = NULL,
- .xstats_get_by_name = NULL,
- .xstats_reset = NULL,
-
- .firmware_status_get = NULL,
- .firmware_version_get = NULL,
- .firmware_load = ifpga_rawdev_pr,
- .firmware_unload = NULL,
-
- .dev_selftest = NULL,
-};
-
-static int
-ifpga_rawdev_create(struct rte_pci_device *pci_dev,
- int socket_id)
-{
- int ret = 0;
- struct rte_rawdev *rawdev = NULL;
- struct opae_adapter *adapter = NULL;
- struct opae_manager *mgr = NULL;
- struct opae_adapter_data_pci *data = NULL;
- char name[RTE_RAWDEV_NAME_MAX_LEN];
- int i;
-
- if (!pci_dev) {
- IFPGA_RAWDEV_PMD_ERR("Invalid pci_dev of the device!");
- ret = -EINVAL;
- goto cleanup;
- }
-
- memset(name, 0, sizeof(name));
- snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "IFPGA:%x:%02x.%x",
- pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function);
-
- IFPGA_RAWDEV_PMD_INFO("Init %s on NUMA node %d", name, rte_socket_id());
-
- /* Allocate device structure */
- rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct opae_adapter),
- socket_id);
- if (rawdev == NULL) {
- IFPGA_RAWDEV_PMD_ERR("Unable to allocate rawdevice");
- ret = -EINVAL;
- goto cleanup;
- }
-
- /* alloc OPAE_FPGA_PCI data to register to OPAE hardware level API */
- data = opae_adapter_data_alloc(OPAE_FPGA_PCI);
- if (!data) {
- ret = -ENOMEM;
- goto cleanup;
- }
-
- /* init opae_adapter_data_pci for device specific information */
- for (i = 0; i < PCI_MAX_RESOURCE; i++) {
- data->region[i].phys_addr = pci_dev->mem_resource[i].phys_addr;
- data->region[i].len = pci_dev->mem_resource[i].len;
- data->region[i].addr = pci_dev->mem_resource[i].addr;
- }
- data->device_id = pci_dev->id.device_id;
- data->vendor_id = pci_dev->id.vendor_id;
-
- adapter = rawdev->dev_private;
- /* create a opae_adapter based on above device data */
- ret = opae_adapter_init(adapter, pci_dev->device.name, data);
- if (ret) {
- ret = -ENOMEM;
- goto free_adapter_data;
- }
-
- rawdev->dev_ops = &ifpga_rawdev_ops;
- rawdev->device = &pci_dev->device;
- rawdev->driver_name = pci_dev->driver->driver.name;
-
- /* must enumerate the adapter before use it */
- ret = opae_adapter_enumerate(adapter);
- if (ret)
- goto free_adapter_data;
-
- /* get opae_manager to rawdev */
- mgr = opae_adapter_get_mgr(adapter);
- if (mgr) {
- /* PF function */
- IFPGA_RAWDEV_PMD_INFO("this is a PF function");
- }
-
- return ret;
-
-free_adapter_data:
- if (data)
- opae_adapter_data_free(data);
-cleanup:
- if (rawdev)
- rte_rawdev_pmd_release(rawdev);
-
- return ret;
-}
-
-static int
-ifpga_rawdev_destroy(struct rte_pci_device *pci_dev)
-{
- int ret;
- struct rte_rawdev *rawdev;
- char name[RTE_RAWDEV_NAME_MAX_LEN];
- struct opae_adapter *adapter;
-
- if (!pci_dev) {
- IFPGA_RAWDEV_PMD_ERR("Invalid pci_dev of the device!");
- ret = -EINVAL;
- return ret;
- }
-
- memset(name, 0, sizeof(name));
- snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "IFPGA:%x:%02x.%x",
- pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function);
-
- IFPGA_RAWDEV_PMD_INFO("Closing %s on NUMA node %d",
- name, rte_socket_id());
-
- rawdev = rte_rawdev_pmd_get_named_dev(name);
- if (!rawdev) {
- IFPGA_RAWDEV_PMD_ERR("Invalid device name (%s)", name);
- return -EINVAL;
- }
-
- adapter = ifpga_rawdev_get_priv(rawdev);
- if (!adapter)
- return -ENODEV;
-
- opae_adapter_data_free(adapter->data);
- opae_adapter_free(adapter);
-
- /* rte_rawdev_close is called by pmd_release */
- ret = rte_rawdev_pmd_release(rawdev);
- if (ret)
- IFPGA_RAWDEV_PMD_DEBUG("Device cleanup failed");
-
- return ret;
-}
-
-static int
-ifpga_rawdev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- IFPGA_RAWDEV_PMD_FUNC_TRACE();
- return ifpga_rawdev_create(pci_dev, rte_socket_id());
-}
-
-static int
-ifpga_rawdev_pci_remove(struct rte_pci_device *pci_dev)
-{
- return ifpga_rawdev_destroy(pci_dev);
-}
-
-static struct rte_pci_driver rte_ifpga_rawdev_pmd = {
- .id_table = pci_ifpga_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = ifpga_rawdev_pci_probe,
- .remove = ifpga_rawdev_pci_remove,
-};
-
-RTE_PMD_REGISTER_PCI(ifpga_rawdev_pci_driver, rte_ifpga_rawdev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(ifpga_rawdev_pci_driver, rte_ifpga_rawdev_pmd);
-RTE_PMD_REGISTER_KMOD_DEP(ifpga_rawdev_pci_driver, "* igb_uio | uio_pci_generic | vfio-pci");
-
-RTE_INIT(ifpga_rawdev_init_log)
-{
- ifpga_rawdev_logtype = rte_log_register("driver.raw.init");
- if (ifpga_rawdev_logtype >= 0)
- rte_log_set_level(ifpga_rawdev_logtype, RTE_LOG_NOTICE);
-}
-
-static const char * const valid_args[] = {
-#define IFPGA_ARG_NAME "ifpga"
- IFPGA_ARG_NAME,
-#define IFPGA_ARG_PORT "port"
- IFPGA_ARG_PORT,
-#define IFPGA_AFU_BTS "afu_bts"
- IFPGA_AFU_BTS,
- NULL
-};
-
-static int
-ifpga_cfg_probe(struct rte_vdev_device *dev)
-{
- struct rte_devargs *devargs;
- struct rte_kvargs *kvlist = NULL;
- int port;
- char *name = NULL;
- char dev_name[RTE_RAWDEV_NAME_MAX_LEN];
- int ret = -1;
-
- devargs = dev->device.devargs;
-
- kvlist = rte_kvargs_parse(devargs->args, valid_args);
- if (!kvlist) {
- IFPGA_RAWDEV_PMD_LOG(ERR, "error when parsing param");
- goto end;
- }
-
- if (rte_kvargs_count(kvlist, IFPGA_ARG_NAME) == 1) {
- if (rte_kvargs_process(kvlist, IFPGA_ARG_NAME,
- &rte_ifpga_get_string_arg, &name) < 0) {
- IFPGA_RAWDEV_PMD_ERR("error to parse %s",
- IFPGA_ARG_NAME);
- goto end;
- }
- } else {
- IFPGA_RAWDEV_PMD_ERR("arg %s is mandatory for ifpga bus",
- IFPGA_ARG_NAME);
- goto end;
- }
-
- if (rte_kvargs_count(kvlist, IFPGA_ARG_PORT) == 1) {
- if (rte_kvargs_process(kvlist,
- IFPGA_ARG_PORT,
- &rte_ifpga_get_integer32_arg,
- &port) < 0) {
- IFPGA_RAWDEV_PMD_ERR("error to parse %s",
- IFPGA_ARG_PORT);
- goto end;
- }
- } else {
- IFPGA_RAWDEV_PMD_ERR("arg %s is mandatory for ifpga bus",
- IFPGA_ARG_PORT);
- goto end;
- }
-
- memset(dev_name, 0, sizeof(dev_name));
- snprintf(dev_name, RTE_RAWDEV_NAME_MAX_LEN, "%d|%s",
- port, name);
-
- ret = rte_eal_hotplug_add(RTE_STR(IFPGA_BUS_NAME),
- dev_name, devargs->args);
-end:
- if (kvlist)
- rte_kvargs_free(kvlist);
- if (name)
- free(name);
-
- return ret;
-}
-
-static int
-ifpga_cfg_remove(struct rte_vdev_device *vdev)
-{
- IFPGA_RAWDEV_PMD_INFO("Remove ifpga_cfg %p",
- vdev);
-
- return 0;
-}
-
-static struct rte_vdev_driver ifpga_cfg_driver = {
- .probe = ifpga_cfg_probe,
- .remove = ifpga_cfg_remove,
-};
-
-RTE_PMD_REGISTER_VDEV(ifpga_rawdev_cfg, ifpga_cfg_driver);
-RTE_PMD_REGISTER_ALIAS(ifpga_rawdev_cfg, ifpga_cfg);
-RTE_PMD_REGISTER_PARAM_STRING(ifpga_rawdev_cfg,
- "ifpga=<string> "
- "port=<int> "
- "afu_bts=<path>");
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _IFPGA_RAWDEV_H_
-#define _IFPGA_RAWDEV_H_
-
-extern int ifpga_rawdev_logtype;
-
-#define IFPGA_RAWDEV_PMD_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, ifpga_rawdev_logtype, "%s(): " fmt "\n", \
- __func__, ##args)
-
-#define IFPGA_RAWDEV_PMD_FUNC_TRACE() IFPGA_RAWDEV_PMD_LOG(DEBUG, ">>")
-
-#define IFPGA_RAWDEV_PMD_DEBUG(fmt, args...) \
- IFPGA_RAWDEV_PMD_LOG(DEBUG, fmt, ## args)
-#define IFPGA_RAWDEV_PMD_INFO(fmt, args...) \
- IFPGA_RAWDEV_PMD_LOG(INFO, fmt, ## args)
-#define IFPGA_RAWDEV_PMD_ERR(fmt, args...) \
- IFPGA_RAWDEV_PMD_LOG(ERR, fmt, ## args)
-#define IFPGA_RAWDEV_PMD_WARN(fmt, args...) \
- IFPGA_RAWDEV_PMD_LOG(WARNING, fmt, ## args)
-
-enum ifpga_rawdev_device_state {
- IFPGA_IDLE,
- IFPGA_READY,
- IFPGA_ERROR
-};
-
-/** Set a bit in the uint64 variable */
-#define IFPGA_BIT_SET(var, pos) \
- ((var) |= ((uint64_t)1 << ((pos))))
-
-/** Reset the bit in the variable */
-#define IFPGA_BIT_RESET(var, pos) \
- ((var) &= ~((uint64_t)1 << ((pos))))
-
-/** Check the bit is set in the variable */
-#define IFPGA_BIT_ISSET(var, pos) \
- (((var) & ((uint64_t)1 << ((pos)))) ? 1 : 0)
-
-static inline struct opae_adapter *
-ifpga_rawdev_get_priv(const struct rte_rawdev *rawdev)
-{
- return rawdev->dev_private;
-}
-
-#endif /* _IFPGA_RAWDEV_H_ */
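The bit helpers above are what the LineSideLinkStatus handler uses to compact the retimer's sparse lane bitmap into a dense per-port status word. A self-contained illustration with made-up geometry (2 retimers, 2 ports each, 4 lanes per retimer) and a made-up bitmap value:

#include <stdint.h>
#include <stdio.h>

#define IFPGA_BIT_SET(var, pos)   ((var) |= ((uint64_t)1 << ((pos))))
#define IFPGA_BIT_ISSET(var, pos) (((var) & ((uint64_t)1 << ((pos)))) ? 1 : 0)

int main(void)
{
	/* Lanes 0-3 belong to retimer 0, lanes 4-7 to retimer 1. */
	uint64_t line_bitmap = 0x11;	/* lanes 0 and 4 are up */
	uint64_t status = 0;
	unsigned int i, j, q = 0;

	for (i = 0; i < 2; i++)			/* nums_retimer */
		for (j = 0; j < 2; j++, q++)	/* ports_per_retimer */
			if (IFPGA_BIT_ISSET(line_bitmap, i * 4 + j))
				IFPGA_BIT_SET(status, q);

	/* Prints 0x5: dense port 0 (lane 0) and port 2 (lane 4) are up. */
	printf("dense status: 0x%llx\n", (unsigned long long)status);
	return 0;
}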
+++ /dev/null
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-version = 1
-
-subdir('base')
-objs = [base_objs]
-
-dep = dependency('libfdt', required: false)
-if not dep.found()
- build = false
- reason = 'missing dependency, "libfdt"'
-endif
-deps += ['rawdev', 'pci', 'bus_pci', 'kvargs',
- 'bus_vdev', 'bus_ifpga', 'net']
-sources = files('ifpga_rawdev.c')
-
-includes += include_directories('base')
-
-allow_experimental_apis = true
+++ /dev/null
-DPDK_18.05 {
-
- local: *;
-};
# Copyright 2018 NXP
drivers = ['dpaa2_cmdif', 'dpaa2_qdma',
- 'ifpga_rawdev', 'ioat', 'ntb',
+ 'ifpga', 'ioat', 'ntb',
'octeontx2_dma',
- 'skeleton_rawdev']
+ 'skeleton']
std_deps = ['rawdev']
config_flag_fmt = 'RTE_LIBRTE_PMD_@0@_RAWDEV'
driver_name_fmt = 'rte_pmd_@0@'
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2017 NXP
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_skeleton_rawdev.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal
+LDLIBS += -lrte_rawdev
+LDLIBS += -lrte_bus_vdev
+LDLIBS += -lrte_kvargs
+
+EXPORT_MAP := rte_pmd_skeleton_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton_rawdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton_rawdev_test.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
--- /dev/null
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2018 NXP
+
+deps += ['rawdev', 'kvargs', 'mbuf', 'bus_vdev']
+sources = files('skeleton_rawdev.c',
+ 'skeleton_rawdev_test.c')
--- /dev/null
+DPDK_18.02 {
+
+ local: *;
+};
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <string.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_kvargs.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_lcore.h>
+#include <rte_bus_vdev.h>
+
+#include <rte_rawdev.h>
+#include <rte_rawdev_pmd.h>
+
+#include "skeleton_rawdev.h"
+
+/* Dynamic log type identifier */
+int skeleton_pmd_logtype;
+
+/* Count of instances */
+static uint16_t skeldev_init_once;
+
+/**< Rawdev Skeleton dummy driver name */
+#define SKELETON_PMD_RAWDEV_NAME rawdev_skeleton
+
+struct queue_buffers {
+ void *bufs[SKELETON_QUEUE_MAX_DEPTH];
+};
+
+static struct queue_buffers queue_buf[SKELETON_MAX_QUEUES] = {};
+static void clear_queue_bufs(int queue_id);
+
+static void skeleton_rawdev_info_get(struct rte_rawdev *dev,
+ rte_rawdev_obj_t dev_info)
+{
+ struct skeleton_rawdev *skeldev;
+ struct skeleton_rawdev_conf *skeldev_conf;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (!dev_info) {
+ SKELETON_PMD_ERR("Invalid request");
+ return;
+ }
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ skeldev_conf = dev_info;
+
+ skeldev_conf->num_queues = skeldev->num_queues;
+ skeldev_conf->capabilities = skeldev->capabilities;
+ skeldev_conf->device_state = skeldev->device_state;
+ skeldev_conf->firmware_state = skeldev->fw.firmware_state;
+}
+
+static int skeleton_rawdev_configure(const struct rte_rawdev *dev,
+ rte_rawdev_obj_t config)
+{
+ struct skeleton_rawdev *skeldev;
+ struct skeleton_rawdev_conf *skeldev_conf;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ if (!config) {
+ SKELETON_PMD_ERR("Invalid configuration");
+ return -EINVAL;
+ }
+
+ skeldev_conf = config;
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ if (skeldev_conf->num_queues <= SKELETON_MAX_QUEUES)
+ skeldev->num_queues = skeldev_conf->num_queues;
+ else
+ return -EINVAL;
+
+ skeldev->capabilities = skeldev_conf->capabilities;
+
+ return 0;
+}
+
+static int skeleton_rawdev_start(struct rte_rawdev *dev)
+{
+ int ret = 0;
+ struct skeleton_rawdev *skeldev;
+ enum skeleton_firmware_state fw_state;
+ enum skeleton_device_state device_state;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ fw_state = skeldev->fw.firmware_state;
+ device_state = skeldev->device_state;
+
+ if (fw_state == SKELETON_FW_LOADED &&
+ device_state == SKELETON_DEV_STOPPED) {
+ skeldev->device_state = SKELETON_DEV_RUNNING;
+ } else {
+ SKELETON_PMD_ERR("Device not ready for starting");
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static void skeleton_rawdev_stop(struct rte_rawdev *dev)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (dev) {
+ skeldev = skeleton_rawdev_get_priv(dev);
+ skeldev->device_state = SKELETON_DEV_STOPPED;
+ }
+}
+
+static void
+reset_queues(struct skeleton_rawdev *skeldev)
+{
+ int i;
+
+ for (i = 0; i < SKELETON_MAX_QUEUES; i++) {
+ skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH;
+ skeldev->queues[i].state = SKELETON_QUEUE_DETACH;
+ }
+}
+
+static void
+reset_attribute_table(struct skeleton_rawdev *skeldev)
+{
+ int i;
+
+ for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
+ if (skeldev->attr[i].name) {
+ free(skeldev->attr[i].name);
+ skeldev->attr[i].name = NULL;
+ }
+ }
+}
+
+static int skeleton_rawdev_close(struct rte_rawdev *dev)
+{
+ int ret = 0, i;
+ struct skeleton_rawdev *skeldev;
+ enum skeleton_firmware_state fw_state;
+ enum skeleton_device_state device_state;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ fw_state = skeldev->fw.firmware_state;
+ device_state = skeldev->device_state;
+
+ reset_queues(skeldev);
+ reset_attribute_table(skeldev);
+
+ switch (fw_state) {
+ case SKELETON_FW_LOADED:
+ if (device_state == SKELETON_DEV_RUNNING) {
+ SKELETON_PMD_ERR("Cannot close running device");
+ ret = -EINVAL;
+ } else {
+ /* Probably call fw reset here */
+ skeldev->fw.firmware_state = SKELETON_FW_READY;
+ }
+ break;
+ case SKELETON_FW_READY:
+ case SKELETON_FW_ERROR:
+ default:
+ SKELETON_PMD_DEBUG("Device already in stopped state");
+ ret = -EINVAL;
+ break;
+ }
+
+ /* Clear all allocated queues */
+ for (i = 0; i < SKELETON_MAX_QUEUES; i++)
+ clear_queue_bufs(i);
+
+ return ret;
+}
+
+static int skeleton_rawdev_reset(struct rte_rawdev *dev)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ SKELETON_PMD_DEBUG("Resetting device");
+ skeldev->fw.firmware_state = SKELETON_FW_READY;
+
+ return 0;
+}
+
+static void skeleton_rawdev_queue_def_conf(struct rte_rawdev *dev,
+ uint16_t queue_id,
+ rte_rawdev_obj_t queue_conf)
+{
+ struct skeleton_rawdev *skeldev;
+ struct skeleton_rawdev_queue *skelq;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (!dev || !queue_conf)
+ return;
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+ skelq = &skeldev->queues[queue_id];
+
+ if (queue_id < SKELETON_MAX_QUEUES)
+ rte_memcpy(queue_conf, skelq,
+ sizeof(struct skeleton_rawdev_queue));
+}
+
+static void
+clear_queue_bufs(int queue_id)
+{
+ int i;
+
+ /* Clear buffers for queue_id */
+ for (i = 0; i < SKELETON_QUEUE_MAX_DEPTH; i++)
+ queue_buf[queue_id].bufs[i] = NULL;
+}
+
+static int skeleton_rawdev_queue_setup(struct rte_rawdev *dev,
+ uint16_t queue_id,
+ rte_rawdev_obj_t queue_conf)
+{
+ int ret = 0;
+ struct skeleton_rawdev *skeldev;
+ struct skeleton_rawdev_queue *q;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (!dev || !queue_conf)
+ return -EINVAL;
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+ q = &skeldev->queues[queue_id];
+
+ if (skeldev->num_queues > queue_id &&
+ q->depth < SKELETON_QUEUE_MAX_DEPTH) {
+ rte_memcpy(q, queue_conf,
+ sizeof(struct skeleton_rawdev_queue));
+ clear_queue_bufs(queue_id);
+ } else {
+ SKELETON_PMD_ERR("Invalid queue configuration");
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int skeleton_rawdev_queue_release(struct rte_rawdev *dev,
+ uint16_t queue_id)
+{
+ int ret = 0;
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ if (skeldev->num_queues > queue_id) {
+ skeldev->queues[queue_id].state = SKELETON_QUEUE_DETACH;
+ skeldev->queues[queue_id].depth = SKELETON_QUEUE_DEF_DEPTH;
+ clear_queue_bufs(queue_id);
+ } else {
+ SKELETON_PMD_ERR("Invalid queue configuration");
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static uint16_t skeleton_rawdev_queue_count(struct rte_rawdev *dev)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+ return skeldev->num_queues;
+}
+
+static int skeleton_rawdev_get_attr(struct rte_rawdev *dev,
+ const char *attr_name,
+ uint64_t *attr_value)
+{
+ int i;
+ uint8_t done = 0;
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (!dev || !attr_name || !attr_value) {
+ SKELETON_PMD_ERR("Invalid arguments for getting attributes");
+ return -EINVAL;
+ }
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
+ if (!skeldev->attr[i].name)
+ continue;
+
+ if (!strncmp(skeldev->attr[i].name, attr_name,
+ SKELETON_ATTRIBUTE_NAME_MAX)) {
+ *attr_value = skeldev->attr[i].value;
+ done = 1;
+ SKELETON_PMD_DEBUG("Attribute (%s) Value (%" PRIu64 ")",
+ attr_name, *attr_value);
+ break;
+ }
+ }
+
+ if (done)
+ return 0;
+
+ /* Attribute not found */
+ return -EINVAL;
+}
+
+static int skeleton_rawdev_set_attr(struct rte_rawdev *dev,
+ const char *attr_name,
+ const uint64_t attr_value)
+{
+ int i;
+ uint8_t done = 0;
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ if (!dev || !attr_name) {
+ SKELETON_PMD_ERR("Invalid arguments for setting attributes");
+ return -EINVAL;
+ }
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ /* Check if attribute already exists */
+ for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
+ if (!skeldev->attr[i].name)
+ break;
+
+ if (!strncmp(skeldev->attr[i].name, attr_name,
+ SKELETON_ATTRIBUTE_NAME_MAX)) {
+ /* Update value */
+ skeldev->attr[i].value = attr_value;
+ done = 1;
+ break;
+ }
+ }
+
+ if (!done) {
+ if (i < (SKELETON_MAX_ATTRIBUTES - 1)) {
+ /* There is still space to insert one more */
+ skeldev->attr[i].name = strdup(attr_name);
+ if (!skeldev->attr[i].name)
+ return -ENOMEM;
+
+ skeldev->attr[i].value = attr_value;
+ return 0;
+ }
+ }
+
+ return -EINVAL;
+}
+
+static int skeleton_rawdev_enqueue_bufs(struct rte_rawdev *dev,
+ struct rte_rawdev_buf **buffers,
+ unsigned int count,
+ rte_rawdev_obj_t context)
+{
+ unsigned int i;
+ uint16_t q_id;
+ RTE_SET_USED(dev);
+
+ /* context is essentially the queue_id, which is
+ * transferred as an opaque object through the library layer. This can
+ * help in complex implementations which require more information than
+ * just an integer - for example, a queue-pair.
+ */
+ q_id = *((uint16_t *)context);
+
+ for (i = 0; i < count; i++)
+ queue_buf[q_id].bufs[i] = buffers[i]->buf_addr;
+
+ return i;
+}
+
+static int skeleton_rawdev_dequeue_bufs(struct rte_rawdev *dev,
+ struct rte_rawdev_buf **buffers,
+ unsigned int count,
+ rte_rawdev_obj_t context)
+{
+ unsigned int i;
+ uint16_t q_id;
+ RTE_SET_USED(dev);
+
+ /* context is essentially the queue_id, which is
+ * transferred as an opaque object through the library layer. This can
+ * help in complex implementations which require more information than
+ * just an integer - for example, a queue-pair.
+ */
+ q_id = *((uint16_t *)context);
+
+ for (i = 0; i < count; i++)
+ buffers[i]->buf_addr = queue_buf[q_id].bufs[i];
+
+ return i;
+}
+
+static int skeleton_rawdev_dump(struct rte_rawdev *dev, FILE *f)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(f);
+
+ return 0;
+}
+
+static int skeleton_rawdev_firmware_status_get(struct rte_rawdev *dev,
+ rte_rawdev_obj_t status_info)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
+
+ if (status_info)
+ memcpy(status_info, &skeldev->fw.firmware_state,
+ sizeof(enum skeleton_firmware_state));
+
+ return 0;
+}
+
+static int skeleton_rawdev_firmware_version_get(
+ struct rte_rawdev *dev,
+ rte_rawdev_obj_t version_info)
+{
+ struct skeleton_rawdev *skeldev;
+ struct skeleton_firmware_version_info *vi;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+ vi = version_info;
+
+ vi->major = skeldev->fw.firmware_version.major;
+ vi->minor = skeldev->fw.firmware_version.minor;
+ vi->subrel = skeldev->fw.firmware_version.subrel;
+
+ return 0;
+}
+
+static int skeleton_rawdev_firmware_load(struct rte_rawdev *dev,
+ rte_rawdev_obj_t firmware_buf)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ /* firmware_buf would be an mmap'd, possibly DMA-able, buffer. As this
+ * driver is a dummy, all it does is check that firmware_buf is not
+ * NULL and set the state of the firmware.
+ */
+ if (!firmware_buf)
+ return -EINVAL;
+
+ skeldev->fw.firmware_state = SKELETON_FW_LOADED;
+
+ return 0;
+}
+
+static int skeleton_rawdev_firmware_unload(struct rte_rawdev *dev)
+{
+ struct skeleton_rawdev *skeldev;
+
+ SKELETON_PMD_FUNC_TRACE();
+
+ skeldev = skeleton_rawdev_get_priv(dev);
+
+ skeldev->fw.firmware_state = SKELETON_FW_READY;
+
+ return 0;
+}
+
+static const struct rte_rawdev_ops skeleton_rawdev_ops = {
+ .dev_info_get = skeleton_rawdev_info_get,
+ .dev_configure = skeleton_rawdev_configure,
+ .dev_start = skeleton_rawdev_start,
+ .dev_stop = skeleton_rawdev_stop,
+ .dev_close = skeleton_rawdev_close,
+ .dev_reset = skeleton_rawdev_reset,
+
+ .queue_def_conf = skeleton_rawdev_queue_def_conf,
+ .queue_setup = skeleton_rawdev_queue_setup,
+ .queue_release = skeleton_rawdev_queue_release,
+ .queue_count = skeleton_rawdev_queue_count,
+
+ .attr_get = skeleton_rawdev_get_attr,
+ .attr_set = skeleton_rawdev_set_attr,
+
+ .enqueue_bufs = skeleton_rawdev_enqueue_bufs,
+ .dequeue_bufs = skeleton_rawdev_dequeue_bufs,
+
+ .dump = skeleton_rawdev_dump,
+
+ .xstats_get = NULL,
+ .xstats_get_names = NULL,
+ .xstats_get_by_name = NULL,
+ .xstats_reset = NULL,
+
+ .firmware_status_get = skeleton_rawdev_firmware_status_get,
+ .firmware_version_get = skeleton_rawdev_firmware_version_get,
+ .firmware_load = skeleton_rawdev_firmware_load,
+ .firmware_unload = skeleton_rawdev_firmware_unload,
+
+ .dev_selftest = test_rawdev_skeldev,
+};
+
+static int
+skeleton_rawdev_create(const char *name,
+ struct rte_vdev_device *vdev,
+ int socket_id)
+{
+ int ret = 0, i;
+ struct rte_rawdev *rawdev = NULL;
+ struct skeleton_rawdev *skeldev = NULL;
+
+ if (!name) {
+ SKELETON_PMD_ERR("Invalid name of the device!");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /* Allocate device structure */
+ rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct skeleton_rawdev),
+ socket_id);
+ if (rawdev == NULL) {
+ SKELETON_PMD_ERR("Unable to allocate rawdevice");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = rawdev->dev_id; /* return the rawdev id of new device */
+
+ rawdev->dev_ops = &skeleton_rawdev_ops;
+ rawdev->device = &vdev->device;
+
+ skeldev = skeleton_rawdev_get_priv(rawdev);
+
+ skeldev->device_id = SKELETON_DEVICE_ID;
+ skeldev->vendor_id = SKELETON_VENDOR_ID;
+ skeldev->capabilities = SKELETON_DEFAULT_CAPA;
+
+ memset(&skeldev->fw, 0, sizeof(struct skeleton_firmware));
+
+ skeldev->fw.firmware_state = SKELETON_FW_READY;
+ skeldev->fw.firmware_version.major = SKELETON_MAJOR_VER;
+ skeldev->fw.firmware_version.minor = SKELETON_MINOR_VER;
+ skeldev->fw.firmware_version.subrel = SKELETON_SUB_VER;
+
+ skeldev->device_state = SKELETON_DEV_STOPPED;
+
+ /* Reset/set to default queue configuration for this device */
+ for (i = 0; i < SKELETON_MAX_QUEUES; i++) {
+ skeldev->queues[i].state = SKELETON_QUEUE_DETACH;
+ skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH;
+ }
+
+ /* Clear all allocated queue buffers */
+ for (i = 0; i < SKELETON_MAX_QUEUES; i++)
+ clear_queue_bufs(i);
+
+ return ret;
+
+cleanup:
+ if (rawdev)
+ rte_rawdev_pmd_release(rawdev);
+
+ return ret;
+}
+
+static int
+skeleton_rawdev_destroy(const char *name)
+{
+ int ret;
+ struct rte_rawdev *rdev;
+
+ if (!name) {
+ SKELETON_PMD_ERR("Invalid device name");
+ return -EINVAL;
+ }
+
+ rdev = rte_rawdev_pmd_get_named_dev(name);
+ if (!rdev) {
+ SKELETON_PMD_ERR("Invalid device name (%s)", name);
+ return -EINVAL;
+ }
+
+ /* rte_rawdev_close is called by pmd_release */
+ ret = rte_rawdev_pmd_release(rdev);
+ if (ret)
+ SKELETON_PMD_DEBUG("Device cleanup failed");
+
+ return 0;
+}
+
+static int
+skeldev_get_selftest(const char *key __rte_unused,
+ const char *value,
+ void *opaque)
+{
+ int *flag = opaque;
+ *flag = atoi(value);
+ return 0;
+}
+
+static int
+skeldev_parse_vdev_args(struct rte_vdev_device *vdev)
+{
+ int selftest = 0;
+ const char *name;
+ const char *params;
+
+ static const char *const args[] = {
+ SKELETON_SELFTEST_ARG,
+ NULL
+ };
+
+ name = rte_vdev_device_name(vdev);
+
+ params = rte_vdev_device_args(vdev);
+ if (params != NULL && params[0] != '\0') {
+ struct rte_kvargs *kvlist = rte_kvargs_parse(params, args);
+
+ if (!kvlist) {
+ SKELETON_PMD_INFO(
+ "Ignoring unsupported params supplied '%s'",
+ name);
+ } else {
+ int ret = rte_kvargs_process(kvlist,
+ SKELETON_SELFTEST_ARG,
+ skeldev_get_selftest, &selftest);
+ if (ret != 0 || (selftest < 0 || selftest > 1)) {
+ SKELETON_PMD_ERR("%s: Error in parsing args",
+ name);
+ rte_kvargs_free(kvlist);
+ ret = -1; /* enforce if selftest is invalid */
+ return ret;
+ }
+ }
+
+ rte_kvargs_free(kvlist);
+ }
+
+ return selftest;
+}
+
+static int
+skeleton_rawdev_probe(struct rte_vdev_device *vdev)
+{
+ const char *name;
+ int selftest = 0, ret = 0;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+
+ /* More than one instance is not supported */
+ if (skeldev_init_once) {
+ SKELETON_PMD_ERR("Multiple instance not supported for %s",
+ name);
+ return -EINVAL;
+ }
+
+ SKELETON_PMD_INFO("Init %s on NUMA node %d", name, rte_socket_id());
+
+ selftest = skeldev_parse_vdev_args(vdev);
+ /* In case of invalid argument, selftest != 1; ignore other values */
+
+ ret = skeleton_rawdev_create(name, vdev, rte_socket_id());
+ if (ret >= 0) {
+ /* In case command line argument for 'selftest' was passed;
+ * if invalid arguments were passed, execution continues but
+ * without selftest.
+ */
+ if (selftest == 1)
+ test_rawdev_skeldev(ret);
+ }
+
+ /* Device instance created; Second instance not possible */
+ skeldev_init_once = 1;
+
+ return ret < 0 ? ret : 0;
+}
+
+static int
+skeleton_rawdev_remove(struct rte_vdev_device *vdev)
+{
+ const char *name;
+ int ret;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -1;
+
+ SKELETON_PMD_INFO("Closing %s on NUMA node %d", name, rte_socket_id());
+
+ ret = skeleton_rawdev_destroy(name);
+ if (!ret)
+ skeldev_init_once = 0;
+
+ return ret;
+}
+
+static struct rte_vdev_driver skeleton_pmd_drv = {
+ .probe = skeleton_rawdev_probe,
+ .remove = skeleton_rawdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(SKELETON_PMD_RAWDEV_NAME, skeleton_pmd_drv);
+
+RTE_INIT(skeleton_pmd_init_log)
+{
+ skeleton_pmd_logtype = rte_log_register("rawdev.skeleton");
+ if (skeleton_pmd_logtype >= 0)
+ rte_log_set_level(skeleton_pmd_logtype, RTE_LOG_INFO);
+}
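Taken together, probe and configure follow the usual rawdev lifecycle. A minimal application-side sketch, mirroring the calling convention the selftest below uses (error handling trimmed; illustrative, not part of this patch):

#include <rte_bus_vdev.h>
#include <rte_rawdev.h>

#include "skeleton_rawdev.h"	/* struct skeleton_rawdev_conf */

static int skeleton_bringup(void)
{
	struct skeleton_rawdev_conf conf = {0};
	struct rte_rawdev_info info = {0};
	int dev_id;

	if (rte_vdev_init("rawdev_skeleton", NULL))
		return -1;

	dev_id = rte_rawdev_get_dev_id("rawdev_skeleton");
	if (dev_id < 0)
		return dev_id;

	conf.num_queues = 1;
	conf.capabilities = SKELETON_DEFAULT_CAPA;
	info.dev_private = &conf;

	return rte_rawdev_configure(dev_id, (rte_rawdev_obj_t)&info);
}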
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#ifndef __SKELETON_RAWDEV_H__
+#define __SKELETON_RAWDEV_H__
+
+#include <rte_rawdev.h>
+
+extern int skeleton_pmd_logtype;
+
+#define SKELETON_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, skeleton_pmd_logtype, "%s(): " fmt "\n", \
+ __func__, ##args)
+
+#define SKELETON_PMD_FUNC_TRACE() SKELETON_PMD_LOG(DEBUG, ">>")
+
+#define SKELETON_PMD_DEBUG(fmt, args...) \
+ SKELETON_PMD_LOG(DEBUG, fmt, ## args)
+#define SKELETON_PMD_INFO(fmt, args...) \
+ SKELETON_PMD_LOG(INFO, fmt, ## args)
+#define SKELETON_PMD_ERR(fmt, args...) \
+ SKELETON_PMD_LOG(ERR, fmt, ## args)
+#define SKELETON_PMD_WARN(fmt, args...) \
+ SKELETON_PMD_LOG(WARNING, fmt, ## args)
+/* Macros for self test application */
+#define SKELETON_TEST_INFO SKELETON_PMD_INFO
+#define SKELETON_TEST_DEBUG SKELETON_PMD_DEBUG
+#define SKELETON_TEST_ERR SKELETON_PMD_ERR
+#define SKELETON_TEST_WARN SKELETON_PMD_WARN
+
+#define SKELETON_SELFTEST_ARG ("selftest")
+
+#define SKELETON_VENDOR_ID 0x10
+#define SKELETON_DEVICE_ID 0x01
+
+#define SKELETON_MAJOR_VER 1
+#define SKELETON_MINOR_VER 0
+#define SKELETON_SUB_VER 0
+
+#define SKELETON_MAX_QUEUES 1
+
+enum skeleton_firmware_state {
+ SKELETON_FW_READY,
+ SKELETON_FW_LOADED,
+ SKELETON_FW_ERROR
+};
+
+enum skeleton_device_state {
+ SKELETON_DEV_RUNNING,
+ SKELETON_DEV_STOPPED
+};
+
+enum skeleton_queue_state {
+ SKELETON_QUEUE_DETACH,
+ SKELETON_QUEUE_ATTACH
+};
+
+#define SKELETON_QUEUE_DEF_DEPTH 10
+#define SKELETON_QUEUE_MAX_DEPTH 25
+
+struct skeleton_firmware_version_info {
+ uint8_t major;
+ uint8_t minor;
+ uint8_t subrel;
+};
+
+struct skeleton_firmware {
+ /**< Device firmware information */
+ struct skeleton_firmware_version_info firmware_version;
+ /**< Device state */
+ enum skeleton_firmware_state firmware_state;
+
+};
+
+#define SKELETON_MAX_ATTRIBUTES 10
+#define SKELETON_ATTRIBUTE_NAME_MAX 20
+
+struct skeleton_rawdev_attributes {
+ /**< Name of the attribute */
+ char *name;
+ /**< Value or reference of value of attribute */
+ uint64_t value;
+};
+
+/**< Device supports firmware loading/unloading */
+#define SKELETON_CAPA_FW_LOAD 0x0001
+/**< Device supports firmware reset */
+#define SKELETON_CAPA_FW_RESET 0x0002
+/**< Device support queue based communication */
+#define SKELETON_CAPA_QUEUES 0x0004
+/**< Default Capabilities: FW_LOAD, FW_RESET, QUEUES */
+#define SKELETON_DEFAULT_CAPA 0x7
+
+struct skeleton_rawdev_queue {
+ uint8_t state;
+ uint32_t depth;
+};
+
+struct skeleton_rawdev {
+ uint16_t device_id;
+ uint16_t vendor_id;
+ uint16_t num_queues;
+ /**< One of SKELETON_CAPA_* */
+ uint16_t capabilities;
+ /**< State of device; linked to firmware state */
+ enum skeleton_device_state device_state;
+ /**< Firmware configuration */
+ struct skeleton_firmware fw;
+ /**< Collection of all communication channels - which can be referred
+ * to as queues.
+ */
+ struct skeleton_rawdev_queue queues[SKELETON_MAX_QUEUES];
+ /**< Global table containing various pre-defined and user-defined
+ * attributes.
+ */
+ struct skeleton_rawdev_attributes attr[SKELETON_MAX_ATTRIBUTES];
+ struct rte_device *device;
+};
+
+struct skeleton_rawdev_conf {
+ uint16_t num_queues;
+ unsigned int capabilities;
+ enum skeleton_device_state device_state;
+ enum skeleton_firmware_state firmware_state;
+};
+
+static inline struct skeleton_rawdev *
+skeleton_rawdev_get_priv(const struct rte_rawdev *rawdev)
+{
+ return rawdev->dev_private;
+}
+
+int test_rawdev_skeldev(uint16_t dev_id);
+
+#endif /* __SKELETON_RAWDEV_H__ */
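One small cross-check on the constants above: SKELETON_DEFAULT_CAPA (0x7) is simply the OR of the three capability flags, which a unit test could pin down at compile time. A C11 sketch, assuming the header above is in scope:

#include <assert.h>

#include "skeleton_rawdev.h"

_Static_assert((SKELETON_CAPA_FW_LOAD | SKELETON_CAPA_FW_RESET |
		SKELETON_CAPA_QUEUES) == SKELETON_DEFAULT_CAPA,
	       "default capabilities must cover all three flags");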
--- /dev/null
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_dev.h>
+#include <rte_rawdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_test.h>
+
+/* Using relative path as skeleton_rawdev is not part of exported headers */
+#include "skeleton_rawdev.h"
+
+#define TEST_DEV_NAME "rawdev_skeleton"
+
+#define SKELDEV_LOGS(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, skeleton_pmd_logtype, fmt "\n", \
+ ##args)
+
+#define SKELDEV_TEST_INFO(fmt, args...) \
+ SKELDEV_LOGS(INFO, fmt, ## args)
+#define SKELDEV_TEST_DEBUG(fmt, args...) \
+ SKELDEV_LOGS(DEBUG, fmt, ## args)
+
+#define SKELDEV_TEST_RUN(setup, teardown, test) \
+ skeldev_test_run(setup, teardown, test, #test)
+
+#define TEST_SUCCESS 0
+#define TEST_FAILED -1
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static uint16_t test_dev_id;
+
+static int
+testsuite_setup(void)
+{
+ uint8_t count;
+ count = rte_rawdev_count();
+ if (!count) {
+ SKELDEV_TEST_INFO("\tNo existing rawdev; "
+ "Creating 'skeldev_rawdev'");
+ return rte_vdev_init(TEST_DEV_NAME, NULL);
+ }
+
+ return TEST_SUCCESS;
+}
+
+static void local_teardown(void);
+
+static void
+testsuite_teardown(void)
+{
+ local_teardown();
+}
+
+static void
+local_teardown(void)
+{
+ rte_vdev_uninit(TEST_DEV_NAME);
+}
+
+static int
+test_rawdev_count(void)
+{
+ uint8_t count;
+ count = rte_rawdev_count();
+ RTE_TEST_ASSERT(count > 0, "Invalid rawdev count %" PRIu8, count);
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_get_dev_id(void)
+{
+ int ret;
+ ret = rte_rawdev_get_dev_id("invalid_rawdev_device");
+ RTE_TEST_ASSERT_FAIL(ret, "Expected <0 for invalid dev name ret=%d",
+ ret);
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_socket_id(void)
+{
+ int socket_id;
+ socket_id = rte_rawdev_socket_id(test_dev_id);
+ RTE_TEST_ASSERT(socket_id != -EINVAL,
+ "Failed to get socket_id %d", socket_id);
+ socket_id = rte_rawdev_socket_id(RTE_RAWDEV_MAX_DEVS);
+ RTE_TEST_ASSERT(socket_id == -EINVAL,
+ "Expected -EINVAL %d", socket_id);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_info_get(void)
+{
+ int ret;
+ struct rte_rawdev_info rdev_info = {0};
+ struct skeleton_rawdev_conf skel_conf = {0};
+
+ ret = rte_rawdev_info_get(test_dev_id, NULL);
+ RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+ rdev_info.dev_private = &skel_conf;
+
+ ret = rte_rawdev_info_get(test_dev_id, &rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get raw dev info");
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_configure(void)
+{
+ int ret;
+ struct rte_rawdev_info rdev_info = {0};
+ struct skeleton_rawdev_conf rdev_conf_set = {0};
+ struct skeleton_rawdev_conf rdev_conf_get = {0};
+
+ /* Check invalid configuration */
+ ret = rte_rawdev_configure(test_dev_id, NULL);
+ RTE_TEST_ASSERT(ret == -EINVAL,
+ "Null configure; Expected -EINVAL, got %d", ret);
+
+ /* Valid configuration test */
+ rdev_conf_set.num_queues = 1;
+ rdev_conf_set.capabilities = SKELETON_CAPA_FW_LOAD |
+ SKELETON_CAPA_FW_RESET;
+
+ rdev_info.dev_private = &rdev_conf_set;
+ ret = rte_rawdev_configure(test_dev_id,
+ (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure rawdev (%d)", ret);
+
+ rdev_info.dev_private = &rdev_conf_get;
+ ret = rte_rawdev_info_get(test_dev_id,
+ (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret,
+ "Failed to obtain rawdev configuration (%d)",
+ ret);
+
+ RTE_TEST_ASSERT_EQUAL(rdev_conf_set.num_queues,
+ rdev_conf_get.num_queues,
+ "Configuration test failed; num_queues (%d)(%d)",
+ rdev_conf_set.num_queues,
+ rdev_conf_get.num_queues);
+ RTE_TEST_ASSERT_EQUAL(rdev_conf_set.capabilities,
+ rdev_conf_get.capabilities,
+ "Configuration test failed; capabilities");
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_queue_default_conf_get(void)
+{
+ int ret, i;
+ struct rte_rawdev_info rdev_info = {0};
+ struct skeleton_rawdev_conf rdev_conf_get = {0};
+ struct skeleton_rawdev_queue q = {0};
+
+ /* Get the current configuration */
+ rdev_info.dev_private = &rdev_conf_get;
+ ret = rte_rawdev_info_get(test_dev_id,
+ (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain rawdev configuration (%d)",
+ ret);
+
+ /* call to test_rawdev_configure would have set the num_queues = 1 */
+ RTE_TEST_ASSERT_SUCCESS(!(rdev_conf_get.num_queues > 0),
+ "Invalid number of queues (%d). Expected 1",
+ rdev_conf_get.num_queues);
+ /* All queues by default should have state = DETACH and
+ * depth = DEF_DEPTH
+ */
+ for (i = 0; i < rdev_conf_get.num_queues; i++) {
+ rte_rawdev_queue_conf_get(test_dev_id, i, &q);
+ RTE_TEST_ASSERT_EQUAL(q.depth, SKELETON_QUEUE_DEF_DEPTH,
+ "Invalid default depth of queue (%d)",
+ q.depth);
+ RTE_TEST_ASSERT_EQUAL(q.state, SKELETON_QUEUE_DETACH,
+ "Invalid default state of queue (%d)",
+ q.state);
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_queue_count(void)
+{
+ unsigned int q_count;
+
+ /* Get the current configuration */
+ q_count = rte_rawdev_queue_count(test_dev_id);
+ RTE_TEST_ASSERT_EQUAL(q_count, 1, "Invalid queue count (%d)", q_count);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_queue_setup(void)
+{
+ int ret;
+ struct rte_rawdev_info rdev_info = {0};
+ struct skeleton_rawdev_conf rdev_conf_get = {0};
+ struct skeleton_rawdev_queue qset = {0};
+ struct skeleton_rawdev_queue qget = {0};
+
+ /* Get the current configuration */
+ rdev_info.dev_private = &rdev_conf_get;
+ ret = rte_rawdev_info_get(test_dev_id,
+ (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret,
+ "Failed to obtain rawdev configuration (%d)",
+ ret);
+
+ /* call to test_rawdev_configure would have set the num_queues = 1 */
+ RTE_TEST_ASSERT_SUCCESS(!(rdev_conf_get.num_queues > 0),
+ "Invalid number of queues (%d). Expected 1",
+ rdev_conf_get.num_queues);
+
+ /* Modify the queue depth for Queue 0 and attach it */
+ qset.depth = 15;
+ qset.state = SKELETON_QUEUE_ATTACH;
+ ret = rte_rawdev_queue_setup(test_dev_id, 0, &qset);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue (%d)", ret);
+
+ /* Now, fetching the queue 0 should show depth as 15 */
+ ret = rte_rawdev_queue_conf_get(test_dev_id, 0, &qget);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get queue config (%d)", ret);
+
+ RTE_TEST_ASSERT_EQUAL(qset.depth, qget.depth,
+ "Failed to set queue depth: Need(%d), has(%d)",
+ qset.depth, qget.depth);
+
+ return TEST_SUCCESS;
+}
+
+/* After executing test_rawdev_queue_setup, queue_id=0 would have depth as 15.
+ * Releasing should set it back to default. state would set to DETACH
+ */
+static int
+test_rawdev_queue_release(void)
+{
+ int ret;
+ struct skeleton_rawdev_queue qget = {0};
+
+ /* Release queue 0; its depth and state should reset to defaults */
+ ret = rte_rawdev_queue_release(test_dev_id, 0);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to release queue 0; (%d)", ret);
+
+ /* Now, fetching the queue 0 should show depth as default */
+ ret = rte_rawdev_queue_conf_get(test_dev_id, 0, &qget);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get queue config (%d)", ret);
+
+ RTE_TEST_ASSERT_EQUAL(qget.depth, SKELETON_QUEUE_DEF_DEPTH,
+ "Release of Queue 0 failed; (depth)");
+
+ RTE_TEST_ASSERT_EQUAL(qget.state, SKELETON_QUEUE_DETACH,
+ "Release of Queue 0 failed; (state)");
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_attr_set_get(void)
+{
+ int ret;
+ int *dummy_value, set_value;
+ uint64_t ret_value;
+
+ /* Set an attribute and fetch it */
+ ret = rte_rawdev_set_attr(test_dev_id, "Test1", 100);
+ RTE_TEST_ASSERT(!ret, "Unable to set an attribute (Test1)");
+
+ dummy_value = &set_value;
+ *dummy_value = 200;
+ ret = rte_rawdev_set_attr(test_dev_id, "Test2", (uintptr_t)dummy_value);
+
+ /* Check if attributes have been set */
+ ret = rte_rawdev_get_attr(test_dev_id, "Test1", &ret_value);
+ RTE_TEST_ASSERT_EQUAL(ret_value, 100,
+ "Attribute (Test1) not set correctly (%" PRIu64 ")",
+ ret_value);
+
+ ret_value = 0;
+ ret = rte_rawdev_get_attr(test_dev_id, "Test2", &ret_value);
+ RTE_TEST_ASSERT_EQUAL(*((int *)(uintptr_t)ret_value), set_value,
+ "Attribute (Test2) not set correctly (%" PRIu64 ")",
+ ret_value);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_start_stop(void)
+{
+ int ret;
+ struct rte_rawdev_info rdev_info = {0};
+ struct skeleton_rawdev_conf rdev_conf_get = {0};
+ char *dummy_firmware = NULL;
+
+ /* Get the current configuration */
+ rdev_info.dev_private = &rdev_conf_get;
+
+ /* Load a firmware using a dummy address area */
+ dummy_firmware = rte_zmalloc("RAWDEV SKELETON", sizeof(int) * 10, 0);
+ RTE_TEST_ASSERT(dummy_firmware != NULL,
+ "Failed to create firmware memory backing");
+
+ ret = rte_rawdev_firmware_load(test_dev_id, dummy_firmware);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Firmware loading failed (%d)", ret);
+
+ /* Skeleton doesn't do anything with the firmware area - that is dummy
+ * and can be removed.
+ */
+ rte_free(dummy_firmware);
+ dummy_firmware = NULL;
+
+ rte_rawdev_start(test_dev_id);
+ ret = rte_rawdev_info_get(test_dev_id, (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret,
+ "Failed to obtain rawdev configuration (%d)",
+ ret);
+ RTE_TEST_ASSERT_EQUAL(rdev_conf_get.device_state, SKELETON_DEV_RUNNING,
+ "Device start failed. State is (%d)",
+ rdev_conf_get.device_state);
+
+ rte_rawdev_stop(test_dev_id);
+ ret = rte_rawdev_info_get(test_dev_id, (rte_rawdev_obj_t)&rdev_info);
+ RTE_TEST_ASSERT_SUCCESS(ret,
+ "Failed to obtain rawdev configuration (%d)",
+ ret);
+ RTE_TEST_ASSERT_EQUAL(rdev_conf_get.device_state, SKELETON_DEV_STOPPED,
+ "Device stop failed. State is (%d)",
+ rdev_conf_get.device_state);
+
+ /* Unloading the firmware once device is stopped */
+ ret = rte_rawdev_firmware_unload(test_dev_id);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to unload firmware (%d)", ret);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_rawdev_enqdeq(void)
+{
+ int ret;
+ unsigned int count = 1;
+ uint16_t queue_id = 0;
+ struct rte_rawdev_buf buffers[1];
+ struct rte_rawdev_buf *deq_buffers = NULL;
+
+ buffers[0].buf_addr = malloc(strlen(TEST_DEV_NAME) + 3);
+ if (!buffers[0].buf_addr)
+ goto cleanup;
+ snprintf(buffers[0].buf_addr, strlen(TEST_DEV_NAME) + 2, "%s%d",
+ TEST_DEV_NAME, 0);
+
+ ret = rte_rawdev_enqueue_buffers(test_dev_id,
+ (struct rte_rawdev_buf **)&buffers,
+ count, &queue_id);
+ RTE_TEST_ASSERT_EQUAL((unsigned int)ret, count,
+ "Unable to enqueue buffers");
+
+ deq_buffers = malloc(sizeof(struct rte_rawdev_buf) * count);
+ if (!deq_buffers)
+ goto cleanup;
+
+ ret = rte_rawdev_dequeue_buffers(test_dev_id,
+ (struct rte_rawdev_buf **)&deq_buffers,
+ count, &queue_id);
+ RTE_TEST_ASSERT_EQUAL((unsigned int)ret, count,
+ "Unable to dequeue buffers");
+
+ if (deq_buffers)
+ free(deq_buffers);
+
+ return TEST_SUCCESS;
+cleanup:
+ if (buffers[0].buf_addr)
+ free(buffers[0].buf_addr);
+
+ return TEST_FAILED;
+}
+
+static void skeldev_test_run(int (*setup)(void),
+ void (*teardown)(void),
+ int (*test)(void),
+ const char *name)
+{
+ int ret = 0;
+
+ if (setup) {
+ ret = setup();
+ if (ret < 0) {
+ SKELDEV_TEST_INFO("Error setting up test %s", name);
+ unsupported++;
+ }
+ }
+
+ if (test) {
+ ret = test();
+ if (ret < 0) {
+ failed++;
+ SKELDEV_TEST_INFO("%s Failed", name);
+ } else {
+ passed++;
+ SKELDEV_TEST_DEBUG("%s Passed", name);
+ }
+ }
+
+ if (teardown)
+ teardown();
+
+ total++;
+}
+
+int
+test_rawdev_skeldev(uint16_t dev_id)
+{
+ test_dev_id = dev_id;
+ testsuite_setup();
+
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_count);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_get_dev_id);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_socket_id);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_info_get);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_configure);
+ SKELDEV_TEST_RUN(test_rawdev_configure, NULL,
+ test_rawdev_queue_default_conf_get);
+ SKELDEV_TEST_RUN(test_rawdev_configure, NULL, test_rawdev_queue_setup);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_queue_count);
+ SKELDEV_TEST_RUN(test_rawdev_queue_setup, NULL,
+ test_rawdev_queue_release);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_attr_set_get);
+ SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_start_stop);
+ SKELDEV_TEST_RUN(test_rawdev_queue_setup, NULL, test_rawdev_enqdeq);
+
+ testsuite_teardown();
+
+ SKELDEV_TEST_INFO("Total tests : %d", total);
+ SKELDEV_TEST_INFO("Passed : %d", passed);
+ SKELDEV_TEST_INFO("Failed : %d", failed);
+ SKELDEV_TEST_INFO("Not supported : %d", unsupported);
+
+ if (failed)
+ return -1;
+
+ return 0;
+}
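The suite is reachable two ways: via the selftest=1 devargs handled in skeldev_parse_vdev_args(), or through the generic rawdev selftest call that the dev_selftest op is plumbed into. An illustrative wrapper combining both:

#include <rte_bus_vdev.h>
#include <rte_rawdev.h>

static int run_skeleton_selftest(void)
{
	int id;

	/* If the vdev is created here, selftest=1 runs the suite in probe. */
	if (rte_vdev_init("rawdev_skeleton", "selftest=1") == 0)
		return 0;

	/* Otherwise the device already exists: invoke the hook directly. */
	id = rte_rawdev_get_dev_id("rawdev_skeleton");
	if (id < 0)
		return id;
	return rte_rawdev_selftest(id);
}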
+++ /dev/null
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2017 NXP
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-LIB = librte_pmd_skeleton_rawdev.a
-
-CFLAGS += -O3
-CFLAGS += $(WERROR_FLAGS)
-LDLIBS += -lrte_eal
-LDLIBS += -lrte_rawdev
-LDLIBS += -lrte_bus_vdev
-LDLIBS += -lrte_kvargs
-
-EXPORT_MAP := rte_pmd_skeleton_rawdev_version.map
-
-LIBABIVER := 1
-
-#
-# all source are stored in SRCS-y
-#
-SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton_rawdev.c
-SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV) += skeleton_rawdev_test.c
-
-include $(RTE_SDK)/mk/rte.lib.mk
+++ /dev/null
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
-
-deps += ['rawdev', 'kvargs', 'mbuf', 'bus_vdev']
-sources = files('skeleton_rawdev.c',
- 'skeleton_rawdev_test.c')
+++ /dev/null
-DPDK_18.02 {
-
- local: *;
-};
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP
- */
-
-#include <assert.h>
-#include <stdio.h>
-#include <stdbool.h>
-#include <errno.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <string.h>
-
-#include <rte_byteorder.h>
-#include <rte_common.h>
-#include <rte_debug.h>
-#include <rte_dev.h>
-#include <rte_eal.h>
-#include <rte_kvargs.h>
-#include <rte_log.h>
-#include <rte_malloc.h>
-#include <rte_memory.h>
-#include <rte_memcpy.h>
-#include <rte_lcore.h>
-#include <rte_bus_vdev.h>
-
-#include <rte_rawdev.h>
-#include <rte_rawdev_pmd.h>
-
-#include "skeleton_rawdev.h"
-
-/* Dynamic log type identifier */
-int skeleton_pmd_logtype;
-
-/* Count of instances */
-static uint16_t skeldev_init_once;
-
-/**< Rawdev Skeleton dummy driver name */
-#define SKELETON_PMD_RAWDEV_NAME rawdev_skeleton
-
-struct queue_buffers {
- void *bufs[SKELETON_QUEUE_MAX_DEPTH];
-};
-
-static struct queue_buffers queue_buf[SKELETON_MAX_QUEUES] = {};
-static void clear_queue_bufs(int queue_id);
-
-static void skeleton_rawdev_info_get(struct rte_rawdev *dev,
- rte_rawdev_obj_t dev_info)
-{
- struct skeleton_rawdev *skeldev;
- struct skeleton_rawdev_conf *skeldev_conf;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (!dev_info) {
- SKELETON_PMD_ERR("Invalid request");
- return;
- }
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- skeldev_conf = dev_info;
-
- skeldev_conf->num_queues = skeldev->num_queues;
- skeldev_conf->capabilities = skeldev->capabilities;
- skeldev_conf->device_state = skeldev->device_state;
- skeldev_conf->firmware_state = skeldev->fw.firmware_state;
-}
-
-static int skeleton_rawdev_configure(const struct rte_rawdev *dev,
- rte_rawdev_obj_t config)
-{
- struct skeleton_rawdev *skeldev;
- struct skeleton_rawdev_conf *skeldev_conf;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- if (!config) {
- SKELETON_PMD_ERR("Invalid configuration");
- return -EINVAL;
- }
-
- skeldev_conf = config;
- skeldev = skeleton_rawdev_get_priv(dev);
-
-	if (skeldev_conf->num_queues > SKELETON_MAX_QUEUES) {
-		SKELETON_PMD_ERR("Invalid number of queues requested");
-		return -EINVAL;
-	}
-
-	skeldev->num_queues = skeldev_conf->num_queues;
-	skeldev->capabilities = skeldev_conf->capabilities;
-
- return 0;
-}
-
-static int skeleton_rawdev_start(struct rte_rawdev *dev)
-{
- int ret = 0;
- struct skeleton_rawdev *skeldev;
- enum skeleton_firmware_state fw_state;
- enum skeleton_device_state device_state;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- fw_state = skeldev->fw.firmware_state;
- device_state = skeldev->device_state;
-
- if (fw_state == SKELETON_FW_LOADED &&
- device_state == SKELETON_DEV_STOPPED) {
- skeldev->device_state = SKELETON_DEV_RUNNING;
- } else {
- SKELETON_PMD_ERR("Device not ready for starting");
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-static void skeleton_rawdev_stop(struct rte_rawdev *dev)
-{
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (dev) {
- skeldev = skeleton_rawdev_get_priv(dev);
- skeldev->device_state = SKELETON_DEV_STOPPED;
- }
-}
-
-static void
-reset_queues(struct skeleton_rawdev *skeldev)
-{
- int i;
-
- for (i = 0; i < SKELETON_MAX_QUEUES; i++) {
- skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH;
- skeldev->queues[i].state = SKELETON_QUEUE_DETACH;
- }
-}
-
-static void
-reset_attribute_table(struct skeleton_rawdev *skeldev)
-{
- int i;
-
- for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
- if (skeldev->attr[i].name) {
- free(skeldev->attr[i].name);
- skeldev->attr[i].name = NULL;
- }
- }
-}
-
-static int skeleton_rawdev_close(struct rte_rawdev *dev)
-{
- int ret = 0, i;
- struct skeleton_rawdev *skeldev;
- enum skeleton_firmware_state fw_state;
- enum skeleton_device_state device_state;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- fw_state = skeldev->fw.firmware_state;
- device_state = skeldev->device_state;
-
- reset_queues(skeldev);
- reset_attribute_table(skeldev);
-
- switch (fw_state) {
- case SKELETON_FW_LOADED:
- if (device_state == SKELETON_DEV_RUNNING) {
- SKELETON_PMD_ERR("Cannot close running device");
- ret = -EINVAL;
- } else {
- /* Probably call fw reset here */
- skeldev->fw.firmware_state = SKELETON_FW_READY;
- }
- break;
- case SKELETON_FW_READY:
- case SKELETON_FW_ERROR:
- default:
- SKELETON_PMD_DEBUG("Device already in stopped state");
- ret = -EINVAL;
- break;
- }
-
- /* Clear all allocated queues */
- for (i = 0; i < SKELETON_MAX_QUEUES; i++)
- clear_queue_bufs(i);
-
- return ret;
-}
-
-static int skeleton_rawdev_reset(struct rte_rawdev *dev)
-{
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- SKELETON_PMD_DEBUG("Resetting device");
- skeldev->fw.firmware_state = SKELETON_FW_READY;
-
- return 0;
-}
-
-static void skeleton_rawdev_queue_def_conf(struct rte_rawdev *dev,
- uint16_t queue_id,
- rte_rawdev_obj_t queue_conf)
-{
- struct skeleton_rawdev *skeldev;
- struct skeleton_rawdev_queue *skelq;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (!dev || !queue_conf)
- return;
-
- skeldev = skeleton_rawdev_get_priv(dev);
- skelq = &skeldev->queues[queue_id];
-
- if (queue_id < SKELETON_MAX_QUEUES)
- rte_memcpy(queue_conf, skelq,
- sizeof(struct skeleton_rawdev_queue));
-}
-
-static void
-clear_queue_bufs(int queue_id)
-{
- int i;
-
- /* Clear buffers for queue_id */
- for (i = 0; i < SKELETON_QUEUE_MAX_DEPTH; i++)
- queue_buf[queue_id].bufs[i] = NULL;
-}
-
-static int skeleton_rawdev_queue_setup(struct rte_rawdev *dev,
- uint16_t queue_id,
- rte_rawdev_obj_t queue_conf)
-{
- int ret = 0;
- struct skeleton_rawdev *skeldev;
- struct skeleton_rawdev_queue *q;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (!dev || !queue_conf)
- return -EINVAL;
-
- skeldev = skeleton_rawdev_get_priv(dev);
- q = &skeldev->queues[queue_id];
-
- if (skeldev->num_queues > queue_id &&
- q->depth < SKELETON_QUEUE_MAX_DEPTH) {
- rte_memcpy(q, queue_conf,
- sizeof(struct skeleton_rawdev_queue));
- clear_queue_bufs(queue_id);
- } else {
- SKELETON_PMD_ERR("Invalid queue configuration");
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-static int skeleton_rawdev_queue_release(struct rte_rawdev *dev,
- uint16_t queue_id)
-{
- int ret = 0;
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- if (skeldev->num_queues > queue_id) {
- skeldev->queues[queue_id].state = SKELETON_QUEUE_DETACH;
- skeldev->queues[queue_id].depth = SKELETON_QUEUE_DEF_DEPTH;
- clear_queue_bufs(queue_id);
- } else {
- SKELETON_PMD_ERR("Invalid queue configuration");
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-static uint16_t skeleton_rawdev_queue_count(struct rte_rawdev *dev)
-{
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
- skeldev = skeleton_rawdev_get_priv(dev);
- return skeldev->num_queues;
-}
-
-static int skeleton_rawdev_get_attr(struct rte_rawdev *dev,
- const char *attr_name,
- uint64_t *attr_value)
-{
- int i;
- uint8_t done = 0;
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (!dev || !attr_name || !attr_value) {
- SKELETON_PMD_ERR("Invalid arguments for getting attributes");
- return -EINVAL;
- }
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
- if (!skeldev->attr[i].name)
- continue;
-
- if (!strncmp(skeldev->attr[i].name, attr_name,
- SKELETON_ATTRIBUTE_NAME_MAX)) {
- *attr_value = skeldev->attr[i].value;
- done = 1;
- SKELETON_PMD_DEBUG("Attribute (%s) Value (%" PRIu64 ")",
- attr_name, *attr_value);
- break;
- }
- }
-
- if (done)
- return 0;
-
- /* Attribute not found */
- return -EINVAL;
-}
-
-static int skeleton_rawdev_set_attr(struct rte_rawdev *dev,
- const char *attr_name,
- const uint64_t attr_value)
-{
- int i;
- uint8_t done = 0;
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- if (!dev || !attr_name) {
- SKELETON_PMD_ERR("Invalid arguments for setting attributes");
- return -EINVAL;
- }
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- /* Check if attribute already exists */
- for (i = 0; i < SKELETON_MAX_ATTRIBUTES; i++) {
- if (!skeldev->attr[i].name)
- break;
-
- if (!strncmp(skeldev->attr[i].name, attr_name,
- SKELETON_ATTRIBUTE_NAME_MAX)) {
- /* Update value */
- skeldev->attr[i].value = attr_value;
- done = 1;
- break;
- }
- }
-
- if (!done) {
-		if (i < SKELETON_MAX_ATTRIBUTES) {
-			/* There is still a free slot to insert into */
- skeldev->attr[i].name = strdup(attr_name);
- if (!skeldev->attr[i].name)
- return -ENOMEM;
-
- skeldev->attr[i].value = attr_value;
- return 0;
- }
- }
-
- return -EINVAL;
-}
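-
-/* Illustrative sketch, not part of the driver: an attribute is a plain
- * (name, uint64_t) pair, and a pointer can also be smuggled through the
- * 64-bit value, as the self-test later in this patch does. E.g. (dev_id
- * and value are hypothetical placeholders):
- *
- *	rte_rawdev_set_attr(dev_id, "Test1", 100);
- *	rte_rawdev_get_attr(dev_id, "Test1", &value);
- */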
-
-static int skeleton_rawdev_enqueue_bufs(struct rte_rawdev *dev,
- struct rte_rawdev_buf **buffers,
- unsigned int count,
- rte_rawdev_obj_t context)
-{
- unsigned int i;
- uint16_t q_id;
- RTE_SET_USED(dev);
-
-	/* context is essentially the queue_id, passed as an opaque object
-	 * through the library layer. Complex implementations can use it to
-	 * carry more information than just an integer - for example, a
-	 * queue-pair.
-	 */
-	q_id = *((uint16_t *)context);
-
- for (i = 0; i < count; i++)
- queue_buf[q_id].bufs[i] = buffers[i]->buf_addr;
-
- return i;
-}
-
-static int skeleton_rawdev_dequeue_bufs(struct rte_rawdev *dev,
- struct rte_rawdev_buf **buffers,
- unsigned int count,
- rte_rawdev_obj_t context)
-{
- unsigned int i;
- uint16_t q_id;
- RTE_SET_USED(dev);
-
-	/* context is essentially the queue_id, passed as an opaque object
-	 * through the library layer. Complex implementations can use it to
-	 * carry more information than just an integer - for example, a
-	 * queue-pair.
-	 */
-	q_id = *((uint16_t *)context);
-
- for (i = 0; i < count; i++)
- buffers[i]->buf_addr = queue_buf[q_id].bufs[i];
-
- return i;
-}
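-
-/* Illustrative sketch, not part of the driver: callers reach the enqueue
- * and dequeue hooks above through the public rawdev API, passing the
- * queue id as the opaque context. E.g. (dev_id and bufs are hypothetical
- * placeholders):
- *
- *	uint16_t queue_id = 0;
- *
- *	rte_rawdev_enqueue_buffers(dev_id, bufs, 1, &queue_id);
- *	rte_rawdev_dequeue_buffers(dev_id, bufs, 1, &queue_id);
- *
- * The self-test later in this patch exercises exactly this path.
- */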
-
-static int skeleton_rawdev_dump(struct rte_rawdev *dev, FILE *f)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(f);
-
- return 0;
-}
-
-static int skeleton_rawdev_firmware_status_get(struct rte_rawdev *dev,
- rte_rawdev_obj_t status_info)
-{
- struct skeleton_rawdev *skeldev;
-
-	SKELETON_PMD_FUNC_TRACE();
-
-	RTE_FUNC_PTR_OR_ERR_RET(dev, -EINVAL);
-
-	skeldev = skeleton_rawdev_get_priv(dev);
-
- if (status_info)
- memcpy(status_info, &skeldev->fw.firmware_state,
- sizeof(enum skeleton_firmware_state));
-
- return 0;
-}
-
-
-static int skeleton_rawdev_firmware_version_get(
- struct rte_rawdev *dev,
- rte_rawdev_obj_t version_info)
-{
- struct skeleton_rawdev *skeldev;
- struct skeleton_firmware_version_info *vi;
-
- SKELETON_PMD_FUNC_TRACE();
-
- skeldev = skeleton_rawdev_get_priv(dev);
- vi = version_info;
-
- vi->major = skeldev->fw.firmware_version.major;
- vi->minor = skeldev->fw.firmware_version.minor;
- vi->subrel = skeldev->fw.firmware_version.subrel;
-
- return 0;
-}
-
-static int skeleton_rawdev_firmware_load(struct rte_rawdev *dev,
- rte_rawdev_obj_t firmware_buf)
-{
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- /* firmware_buf is a mmaped, possibly DMA'able area, buffer. Being
- * dummy, all this does is check if firmware_buf is not NULL and
- * sets the state of the firmware.
- */
- if (!firmware_buf)
- return -EINVAL;
-
- skeldev->fw.firmware_state = SKELETON_FW_LOADED;
-
- return 0;
-}
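-
-/* Illustrative sketch, not part of the driver: since the skeleton only
- * checks the buffer for NULL, any readable area serves as "firmware",
- * as the self-test later in this patch does (names hypothetical):
- *
- *	void *fw = rte_zmalloc("fw_area", sizeof(int) * 10, 0);
- *	rte_rawdev_firmware_load(dev_id, fw);
- *	rte_free(fw);
- */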
-
-static int skeleton_rawdev_firmware_unload(struct rte_rawdev *dev)
-{
- struct skeleton_rawdev *skeldev;
-
- SKELETON_PMD_FUNC_TRACE();
-
- skeldev = skeleton_rawdev_get_priv(dev);
-
- skeldev->fw.firmware_state = SKELETON_FW_READY;
-
- return 0;
-}
-
-static const struct rte_rawdev_ops skeleton_rawdev_ops = {
- .dev_info_get = skeleton_rawdev_info_get,
- .dev_configure = skeleton_rawdev_configure,
- .dev_start = skeleton_rawdev_start,
- .dev_stop = skeleton_rawdev_stop,
- .dev_close = skeleton_rawdev_close,
- .dev_reset = skeleton_rawdev_reset,
-
- .queue_def_conf = skeleton_rawdev_queue_def_conf,
- .queue_setup = skeleton_rawdev_queue_setup,
- .queue_release = skeleton_rawdev_queue_release,
- .queue_count = skeleton_rawdev_queue_count,
-
- .attr_get = skeleton_rawdev_get_attr,
- .attr_set = skeleton_rawdev_set_attr,
-
- .enqueue_bufs = skeleton_rawdev_enqueue_bufs,
- .dequeue_bufs = skeleton_rawdev_dequeue_bufs,
-
- .dump = skeleton_rawdev_dump,
-
- .xstats_get = NULL,
- .xstats_get_names = NULL,
- .xstats_get_by_name = NULL,
- .xstats_reset = NULL,
-
- .firmware_status_get = skeleton_rawdev_firmware_status_get,
- .firmware_version_get = skeleton_rawdev_firmware_version_get,
- .firmware_load = skeleton_rawdev_firmware_load,
- .firmware_unload = skeleton_rawdev_firmware_unload,
-
- .dev_selftest = test_rawdev_skeldev,
-};
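-
-/* Illustrative sketch, not part of the driver: the rawdev library
- * dispatches rte_rawdev_* calls into the ops table above, so a minimal
- * application flow over this PMD could look like (dev_id and conf are
- * hypothetical placeholders):
- *
- *	struct rte_rawdev_info info = { .dev_private = &conf };
- *
- *	rte_rawdev_configure(dev_id, &info);   // .dev_configure
- *	rte_rawdev_start(dev_id);              // .dev_start
- *	rte_rawdev_stop(dev_id);               // .dev_stop
- *	rte_rawdev_close(dev_id);              // .dev_close
- */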
-
-static int
-skeleton_rawdev_create(const char *name,
- struct rte_vdev_device *vdev,
- int socket_id)
-{
- int ret = 0, i;
- struct rte_rawdev *rawdev = NULL;
- struct skeleton_rawdev *skeldev = NULL;
-
- if (!name) {
- SKELETON_PMD_ERR("Invalid name of the device!");
- ret = -EINVAL;
- goto cleanup;
- }
-
- /* Allocate device structure */
- rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct skeleton_rawdev),
- socket_id);
- if (rawdev == NULL) {
- SKELETON_PMD_ERR("Unable to allocate rawdevice");
- ret = -EINVAL;
- goto cleanup;
- }
-
- ret = rawdev->dev_id; /* return the rawdev id of new device */
-
- rawdev->dev_ops = &skeleton_rawdev_ops;
- rawdev->device = &vdev->device;
-
- skeldev = skeleton_rawdev_get_priv(rawdev);
-
- skeldev->device_id = SKELETON_DEVICE_ID;
- skeldev->vendor_id = SKELETON_VENDOR_ID;
- skeldev->capabilities = SKELETON_DEFAULT_CAPA;
-
- memset(&skeldev->fw, 0, sizeof(struct skeleton_firmware));
-
- skeldev->fw.firmware_state = SKELETON_FW_READY;
- skeldev->fw.firmware_version.major = SKELETON_MAJOR_VER;
- skeldev->fw.firmware_version.minor = SKELETON_MINOR_VER;
- skeldev->fw.firmware_version.subrel = SKELETON_SUB_VER;
-
- skeldev->device_state = SKELETON_DEV_STOPPED;
-
- /* Reset/set to default queue configuration for this device */
- for (i = 0; i < SKELETON_MAX_QUEUES; i++) {
- skeldev->queues[i].state = SKELETON_QUEUE_DETACH;
- skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH;
- }
-
- /* Clear all allocated queue buffers */
- for (i = 0; i < SKELETON_MAX_QUEUES; i++)
- clear_queue_bufs(i);
-
- return ret;
-
-cleanup:
- if (rawdev)
- rte_rawdev_pmd_release(rawdev);
-
- return ret;
-}
-
-static int
-skeleton_rawdev_destroy(const char *name)
-{
- int ret;
- struct rte_rawdev *rdev;
-
- if (!name) {
- SKELETON_PMD_ERR("Invalid device name");
- return -EINVAL;
- }
-
- rdev = rte_rawdev_pmd_get_named_dev(name);
- if (!rdev) {
- SKELETON_PMD_ERR("Invalid device name (%s)", name);
- return -EINVAL;
- }
-
- /* rte_rawdev_close is called by pmd_release */
- ret = rte_rawdev_pmd_release(rdev);
- if (ret)
- SKELETON_PMD_DEBUG("Device cleanup failed");
-
- return 0;
-}
-
-static int
-skeldev_get_selftest(const char *key __rte_unused,
- const char *value,
- void *opaque)
-{
- int *flag = opaque;
- *flag = atoi(value);
- return 0;
-}
-
-static int
-skeldev_parse_vdev_args(struct rte_vdev_device *vdev)
-{
- int selftest = 0;
- const char *name;
- const char *params;
-
- static const char *const args[] = {
- SKELETON_SELFTEST_ARG,
- NULL
- };
-
- name = rte_vdev_device_name(vdev);
-
- params = rte_vdev_device_args(vdev);
- if (params != NULL && params[0] != '\0') {
- struct rte_kvargs *kvlist = rte_kvargs_parse(params, args);
-
- if (!kvlist) {
- SKELETON_PMD_INFO(
- "Ignoring unsupported params supplied '%s'",
- name);
- } else {
- int ret = rte_kvargs_process(kvlist,
- SKELETON_SELFTEST_ARG,
- skeldev_get_selftest, &selftest);
- if (ret != 0 || (selftest < 0 || selftest > 1)) {
- SKELETON_PMD_ERR("%s: Error in parsing args",
- name);
- rte_kvargs_free(kvlist);
-				return -1; /* reject invalid selftest values */
- }
- }
-
- rte_kvargs_free(kvlist);
- }
-
- return selftest;
-}
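-
-/* Illustrative sketch, not part of the driver: the only key accepted
- * above is SKELETON_SELFTEST_ARG ("selftest"), so an instance with the
- * self-test enabled is requested on the EAL command line as:
- *
- *	--vdev="rawdev_skeleton,selftest=1"
- *
- * Any other key makes rte_kvargs_parse() return NULL and is ignored
- * with an informational log.
- */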
-
-static int
-skeleton_rawdev_probe(struct rte_vdev_device *vdev)
-{
- const char *name;
- int selftest = 0, ret = 0;
-
-
- name = rte_vdev_device_name(vdev);
- if (name == NULL)
- return -EINVAL;
-
- /* More than one instance is not supported */
- if (skeldev_init_once) {
- SKELETON_PMD_ERR("Multiple instance not supported for %s",
- name);
- return -EINVAL;
- }
-
- SKELETON_PMD_INFO("Init %s on NUMA node %d", name, rte_socket_id());
-
- selftest = skeldev_parse_vdev_args(vdev);
- /* In case of invalid argument, selftest != 1; ignore other values */
-
- ret = skeleton_rawdev_create(name, vdev, rte_socket_id());
- if (ret >= 0) {
- /* In case command line argument for 'selftest' was passed;
- * if invalid arguments were passed, execution continues but
- * without selftest.
- */
- if (selftest == 1)
- test_rawdev_skeldev(ret);
- }
-
- /* Device instance created; Second instance not possible */
- skeldev_init_once = 1;
-
- return ret < 0 ? ret : 0;
-}
-
-static int
-skeleton_rawdev_remove(struct rte_vdev_device *vdev)
-{
- const char *name;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- if (name == NULL)
- return -1;
-
- SKELETON_PMD_INFO("Closing %s on NUMA node %d", name, rte_socket_id());
-
- ret = skeleton_rawdev_destroy(name);
- if (!ret)
- skeldev_init_once = 0;
-
- return ret;
-}
-
-static struct rte_vdev_driver skeleton_pmd_drv = {
- .probe = skeleton_rawdev_probe,
- .remove = skeleton_rawdev_remove
-};
-
-RTE_PMD_REGISTER_VDEV(SKELETON_PMD_RAWDEV_NAME, skeleton_pmd_drv);
-
-RTE_INIT(skeleton_pmd_init_log)
-{
- skeleton_pmd_logtype = rte_log_register("rawdev.skeleton");
- if (skeleton_pmd_logtype >= 0)
- rte_log_set_level(skeleton_pmd_logtype, RTE_LOG_INFO);
-}
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP
- */
-
-#ifndef __SKELETON_RAWDEV_H__
-#define __SKELETON_RAWDEV_H__
-
-#include <rte_rawdev.h>
-
-extern int skeleton_pmd_logtype;
-
-#define SKELETON_PMD_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, skeleton_pmd_logtype, "%s(): " fmt "\n", \
- __func__, ##args)
-
-#define SKELETON_PMD_FUNC_TRACE() SKELETON_PMD_LOG(DEBUG, ">>")
-
-#define SKELETON_PMD_DEBUG(fmt, args...) \
- SKELETON_PMD_LOG(DEBUG, fmt, ## args)
-#define SKELETON_PMD_INFO(fmt, args...) \
- SKELETON_PMD_LOG(INFO, fmt, ## args)
-#define SKELETON_PMD_ERR(fmt, args...) \
- SKELETON_PMD_LOG(ERR, fmt, ## args)
-#define SKELETON_PMD_WARN(fmt, args...) \
- SKELETON_PMD_LOG(WARNING, fmt, ## args)
-/* Macros for self test application */
-#define SKELETON_TEST_INFO SKELETON_PMD_INFO
-#define SKELETON_TEST_DEBUG SKELETON_PMD_DEBUG
-#define SKELETON_TEST_ERR SKELETON_PMD_ERR
-#define SKELETON_TEST_WARN SKELETON_PMD_WARN
-
-#define SKELETON_SELFTEST_ARG ("selftest")
-
-#define SKELETON_VENDOR_ID 0x10
-#define SKELETON_DEVICE_ID 0x01
-
-#define SKELETON_MAJOR_VER 1
-#define SKELETON_MINOR_VER 0
-#define SKELETON_SUB_VER 0
-
-#define SKELETON_MAX_QUEUES 1
-
-enum skeleton_firmware_state {
- SKELETON_FW_READY,
- SKELETON_FW_LOADED,
- SKELETON_FW_ERROR
-};
-
-enum skeleton_device_state {
- SKELETON_DEV_RUNNING,
- SKELETON_DEV_STOPPED
-};
-
-enum skeleton_queue_state {
- SKELETON_QUEUE_DETACH,
- SKELETON_QUEUE_ATTACH
-};
-
-#define SKELETON_QUEUE_DEF_DEPTH 10
-#define SKELETON_QUEUE_MAX_DEPTH 25
-
-struct skeleton_firmware_version_info {
- uint8_t major;
- uint8_t minor;
- uint8_t subrel;
-};
-
-struct skeleton_firmware {
- /**< Device firmware information */
- struct skeleton_firmware_version_info firmware_version;
- /**< Device state */
- enum skeleton_firmware_state firmware_state;
-
-};
-
-#define SKELETON_MAX_ATTRIBUTES 10
-#define SKELETON_ATTRIBUTE_NAME_MAX 20
-
-struct skeleton_rawdev_attributes {
- /**< Name of the attribute */
- char *name;
- /**< Value or reference of value of attribute */
- uint64_t value;
-};
-
-/**< Device supports firmware loading/unloading */
-#define SKELETON_CAPA_FW_LOAD 0x0001
-/**< Device supports firmware reset */
-#define SKELETON_CAPA_FW_RESET 0x0002
-/**< Device support queue based communication */
-#define SKELETON_CAPA_QUEUES 0x0004
-/**< Default Capabilities: FW_LOAD, FW_RESET, QUEUES */
-#define SKELETON_DEFAULT_CAPA 0x7
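-
-/* For reference, the default mask is just the OR of the three capability
- * bits above: 0x0001 | 0x0002 | 0x0004 == 0x7.
- */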
-
-struct skeleton_rawdev_queue {
- uint8_t state;
- uint32_t depth;
-};
-
-struct skeleton_rawdev {
- uint16_t device_id;
- uint16_t vendor_id;
- uint16_t num_queues;
- /**< One of SKELETON_CAPA_* */
- uint16_t capabilities;
- /**< State of device; linked to firmware state */
- enum skeleton_device_state device_state;
- /**< Firmware configuration */
- struct skeleton_firmware fw;
- /**< Collection of all communication channels - which can be referred
- * to as queues.
- */
- struct skeleton_rawdev_queue queues[SKELETON_MAX_QUEUES];
- /**< Global table containing various pre-defined and user-defined
- * attributes.
- */
- struct skeleton_rawdev_attributes attr[SKELETON_MAX_ATTRIBUTES];
- struct rte_device *device;
-};
-
-struct skeleton_rawdev_conf {
- uint16_t num_queues;
- unsigned int capabilities;
- enum skeleton_device_state device_state;
- enum skeleton_firmware_state firmware_state;
-};
-
-static inline struct skeleton_rawdev *
-skeleton_rawdev_get_priv(const struct rte_rawdev *rawdev)
-{
- return rawdev->dev_private;
-}
-
-int test_rawdev_skeldev(uint16_t dev_id);
-
-#endif /* __SKELETON_RAWDEV_H__ */
+++ /dev/null
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP
- */
-
-#include <rte_common.h>
-#include <rte_mbuf.h>
-#include <rte_malloc.h>
-#include <rte_memcpy.h>
-#include <rte_dev.h>
-#include <rte_rawdev.h>
-#include <rte_bus_vdev.h>
-#include <rte_test.h>
-
-/* Using relative path as skeleton_rawdev is not part of exported headers */
-#include "skeleton_rawdev.h"
-
-#define TEST_DEV_NAME "rawdev_skeleton"
-
-#define SKELDEV_LOGS(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, skeleton_pmd_logtype, fmt "\n", \
- ##args)
-
-#define SKELDEV_TEST_INFO(fmt, args...) \
- SKELDEV_LOGS(INFO, fmt, ## args)
-#define SKELDEV_TEST_DEBUG(fmt, args...) \
- SKELDEV_LOGS(DEBUG, fmt, ## args)
-
-#define SKELDEV_TEST_RUN(setup, teardown, test) \
- skeldev_test_run(setup, teardown, test, #test)
-
-#define TEST_SUCCESS 0
-#define TEST_FAILED -1
-
-static int total;
-static int passed;
-static int failed;
-static int unsupported;
-
-static uint16_t test_dev_id;
-
-static int
-testsuite_setup(void)
-{
- uint8_t count;
- count = rte_rawdev_count();
- if (!count) {
-		SKELDEV_TEST_INFO("\tNo existing rawdev; "
-				  "creating '%s'", TEST_DEV_NAME);
- return rte_vdev_init(TEST_DEV_NAME, NULL);
- }
-
- return TEST_SUCCESS;
-}
-
-static void
-testsuite_teardown(void)
-{
-	rte_vdev_uninit(TEST_DEV_NAME);
-}
-
-static int
-test_rawdev_count(void)
-{
- uint8_t count;
- count = rte_rawdev_count();
- RTE_TEST_ASSERT(count > 0, "Invalid rawdev count %" PRIu8, count);
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_get_dev_id(void)
-{
- int ret;
- ret = rte_rawdev_get_dev_id("invalid_rawdev_device");
- RTE_TEST_ASSERT_FAIL(ret, "Expected <0 for invalid dev name ret=%d",
- ret);
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_socket_id(void)
-{
- int socket_id;
- socket_id = rte_rawdev_socket_id(test_dev_id);
- RTE_TEST_ASSERT(socket_id != -EINVAL,
- "Failed to get socket_id %d", socket_id);
- socket_id = rte_rawdev_socket_id(RTE_RAWDEV_MAX_DEVS);
- RTE_TEST_ASSERT(socket_id == -EINVAL,
- "Expected -EINVAL %d", socket_id);
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_info_get(void)
-{
- int ret;
- struct rte_rawdev_info rdev_info = {0};
- struct skeleton_rawdev_conf skel_conf = {0};
-
- ret = rte_rawdev_info_get(test_dev_id, NULL);
- RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-
- rdev_info.dev_private = &skel_conf;
-
- ret = rte_rawdev_info_get(test_dev_id, &rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get raw dev info");
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_configure(void)
-{
- int ret;
- struct rte_rawdev_info rdev_info = {0};
- struct skeleton_rawdev_conf rdev_conf_set = {0};
- struct skeleton_rawdev_conf rdev_conf_get = {0};
-
- /* Check invalid configuration */
- ret = rte_rawdev_configure(test_dev_id, NULL);
- RTE_TEST_ASSERT(ret == -EINVAL,
- "Null configure; Expected -EINVAL, got %d", ret);
-
- /* Valid configuration test */
- rdev_conf_set.num_queues = 1;
- rdev_conf_set.capabilities = SKELETON_CAPA_FW_LOAD |
- SKELETON_CAPA_FW_RESET;
-
- rdev_info.dev_private = &rdev_conf_set;
- ret = rte_rawdev_configure(test_dev_id,
- (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure rawdev (%d)", ret);
-
- rdev_info.dev_private = &rdev_conf_get;
- ret = rte_rawdev_info_get(test_dev_id,
- (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to obtain rawdev configuration (%d)",
- ret);
-
- RTE_TEST_ASSERT_EQUAL(rdev_conf_set.num_queues,
- rdev_conf_get.num_queues,
- "Configuration test failed; num_queues (%d)(%d)",
- rdev_conf_set.num_queues,
- rdev_conf_get.num_queues);
- RTE_TEST_ASSERT_EQUAL(rdev_conf_set.capabilities,
- rdev_conf_get.capabilities,
- "Configuration test failed; capabilities");
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_queue_default_conf_get(void)
-{
- int ret, i;
- struct rte_rawdev_info rdev_info = {0};
- struct skeleton_rawdev_conf rdev_conf_get = {0};
- struct skeleton_rawdev_queue q = {0};
-
- /* Get the current configuration */
- rdev_info.dev_private = &rdev_conf_get;
- ret = rte_rawdev_info_get(test_dev_id,
- (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain rawdev configuration (%d)",
- ret);
-
- /* call to test_rawdev_configure would have set the num_queues = 1 */
-	RTE_TEST_ASSERT(rdev_conf_get.num_queues > 0,
-			"Invalid number of queues (%d). Expected 1",
-			rdev_conf_get.num_queues);
- /* All queues by default should have state = DETACH and
- * depth = DEF_DEPTH
- */
- for (i = 0; i < rdev_conf_get.num_queues; i++) {
- rte_rawdev_queue_conf_get(test_dev_id, i, &q);
- RTE_TEST_ASSERT_EQUAL(q.depth, SKELETON_QUEUE_DEF_DEPTH,
- "Invalid default depth of queue (%d)",
- q.depth);
- RTE_TEST_ASSERT_EQUAL(q.state, SKELETON_QUEUE_DETACH,
- "Invalid default state of queue (%d)",
- q.state);
- }
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_queue_count(void)
-{
- unsigned int q_count;
-
- /* Get the current configuration */
- q_count = rte_rawdev_queue_count(test_dev_id);
- RTE_TEST_ASSERT_EQUAL(q_count, 1, "Invalid queue count (%d)", q_count);
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_queue_setup(void)
-{
- int ret;
- struct rte_rawdev_info rdev_info = {0};
- struct skeleton_rawdev_conf rdev_conf_get = {0};
- struct skeleton_rawdev_queue qset = {0};
- struct skeleton_rawdev_queue qget = {0};
-
- /* Get the current configuration */
- rdev_info.dev_private = &rdev_conf_get;
- ret = rte_rawdev_info_get(test_dev_id,
- (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to obtain rawdev configuration (%d)",
- ret);
-
- /* call to test_rawdev_configure would have set the num_queues = 1 */
-	RTE_TEST_ASSERT(rdev_conf_get.num_queues > 0,
-			"Invalid number of queues (%d). Expected 1",
-			rdev_conf_get.num_queues);
-
- /* Modify the queue depth for Queue 0 and attach it */
- qset.depth = 15;
- qset.state = SKELETON_QUEUE_ATTACH;
- ret = rte_rawdev_queue_setup(test_dev_id, 0, &qset);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue (%d)", ret);
-
- /* Now, fetching the queue 0 should show depth as 15 */
- ret = rte_rawdev_queue_conf_get(test_dev_id, 0, &qget);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get queue config (%d)", ret);
-
- RTE_TEST_ASSERT_EQUAL(qset.depth, qget.depth,
- "Failed to set queue depth: Need(%d), has(%d)",
- qset.depth, qget.depth);
-
- return TEST_SUCCESS;
-}
-
-/* After executing test_rawdev_queue_setup, queue_id=0 would have depth as 15.
- * Releasing should set it back to default. state would set to DETACH
- */
-static int
-test_rawdev_queue_release(void)
-{
- int ret;
- struct skeleton_rawdev_queue qget = {0};
-
-	/* Release queue 0; its depth and state should reset to defaults */
- ret = rte_rawdev_queue_release(test_dev_id, 0);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to release queue 0; (%d)", ret);
-
- /* Now, fetching the queue 0 should show depth as default */
- ret = rte_rawdev_queue_conf_get(test_dev_id, 0, &qget);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get queue config (%d)", ret);
-
- RTE_TEST_ASSERT_EQUAL(qget.depth, SKELETON_QUEUE_DEF_DEPTH,
- "Release of Queue 0 failed; (depth)");
-
- RTE_TEST_ASSERT_EQUAL(qget.state, SKELETON_QUEUE_DETACH,
- "Release of Queue 0 failed; (state)");
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_attr_set_get(void)
-{
- int ret;
- int *dummy_value, set_value;
- uint64_t ret_value;
-
- /* Set an attribute and fetch it */
- ret = rte_rawdev_set_attr(test_dev_id, "Test1", 100);
- RTE_TEST_ASSERT(!ret, "Unable to set an attribute (Test1)");
-
- dummy_value = &set_value;
- *dummy_value = 200;
-	ret = rte_rawdev_set_attr(test_dev_id, "Test2", (uintptr_t)dummy_value);
-	RTE_TEST_ASSERT(!ret, "Unable to set an attribute (Test2)");
-
- /* Check if attributes have been set */
- ret = rte_rawdev_get_attr(test_dev_id, "Test1", &ret_value);
- RTE_TEST_ASSERT_EQUAL(ret_value, 100,
- "Attribute (Test1) not set correctly (%" PRIu64 ")",
- ret_value);
-
- ret_value = 0;
- ret = rte_rawdev_get_attr(test_dev_id, "Test2", &ret_value);
- RTE_TEST_ASSERT_EQUAL(*((int *)(uintptr_t)ret_value), set_value,
- "Attribute (Test2) not set correctly (%" PRIu64 ")",
- ret_value);
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_start_stop(void)
-{
- int ret;
- struct rte_rawdev_info rdev_info = {0};
- struct skeleton_rawdev_conf rdev_conf_get = {0};
- char *dummy_firmware = NULL;
-
- /* Get the current configuration */
- rdev_info.dev_private = &rdev_conf_get;
-
- /* Load a firmware using a dummy address area */
- dummy_firmware = rte_zmalloc("RAWDEV SKELETON", sizeof(int) * 10, 0);
- RTE_TEST_ASSERT(dummy_firmware != NULL,
- "Failed to create firmware memory backing");
-
- ret = rte_rawdev_firmware_load(test_dev_id, dummy_firmware);
- RTE_TEST_ASSERT_SUCCESS(ret, "Firmware loading failed (%d)", ret);
-
- /* Skeleton doesn't do anything with the firmware area - that is dummy
- * and can be removed.
- */
- rte_free(dummy_firmware);
- dummy_firmware = NULL;
-
- rte_rawdev_start(test_dev_id);
- ret = rte_rawdev_info_get(test_dev_id, (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to obtain rawdev configuration (%d)",
- ret);
- RTE_TEST_ASSERT_EQUAL(rdev_conf_get.device_state, SKELETON_DEV_RUNNING,
- "Device start failed. State is (%d)",
- rdev_conf_get.device_state);
-
- rte_rawdev_stop(test_dev_id);
- ret = rte_rawdev_info_get(test_dev_id, (rte_rawdev_obj_t)&rdev_info);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to obtain rawdev configuration (%d)",
- ret);
- RTE_TEST_ASSERT_EQUAL(rdev_conf_get.device_state, SKELETON_DEV_STOPPED,
- "Device stop failed. State is (%d)",
- rdev_conf_get.device_state);
-
- /* Unloading the firmware once device is stopped */
- ret = rte_rawdev_firmware_unload(test_dev_id);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to unload firmware (%d)", ret);
-
- return TEST_SUCCESS;
-}
-
-static int
-test_rawdev_enqdeq(void)
-{
- int ret;
- unsigned int count = 1;
- uint16_t queue_id = 0;
- struct rte_rawdev_buf buffers[1];
- struct rte_rawdev_buf *deq_buffers = NULL;
-
- buffers[0].buf_addr = malloc(strlen(TEST_DEV_NAME) + 3);
- if (!buffers[0].buf_addr)
- goto cleanup;
- snprintf(buffers[0].buf_addr, strlen(TEST_DEV_NAME) + 2, "%s%d",
- TEST_DEV_NAME, 0);
-
- ret = rte_rawdev_enqueue_buffers(test_dev_id,
- (struct rte_rawdev_buf **)&buffers,
- count, &queue_id);
- RTE_TEST_ASSERT_EQUAL((unsigned int)ret, count,
- "Unable to enqueue buffers");
-
- deq_buffers = malloc(sizeof(struct rte_rawdev_buf) * count);
- if (!deq_buffers)
- goto cleanup;
-
- ret = rte_rawdev_dequeue_buffers(test_dev_id,
- (struct rte_rawdev_buf **)&deq_buffers,
- count, &queue_id);
- RTE_TEST_ASSERT_EQUAL((unsigned int)ret, count,
- "Unable to dequeue buffers");
-
-	free(deq_buffers);
-	/* The enqueued/dequeued area was malloc'd above; release it too */
-	free(buffers[0].buf_addr);
-
-	return TEST_SUCCESS;
-cleanup:
- if (buffers[0].buf_addr)
- free(buffers[0].buf_addr);
-
- return TEST_FAILED;
-}
-
-static void skeldev_test_run(int (*setup)(void),
- void (*teardown)(void),
- int (*test)(void),
- const char *name)
-{
- int ret = 0;
-
-	if (setup) {
-		ret = setup();
-		if (ret < 0) {
-			SKELDEV_TEST_INFO("Error setting up test %s", name);
-			unsupported++;
-			total++;
-			return; /* skip the test if its setup failed */
-		}
-	}
-
- if (test) {
- ret = test();
- if (ret < 0) {
- failed++;
- SKELDEV_TEST_INFO("%s Failed", name);
- } else {
- passed++;
- SKELDEV_TEST_DEBUG("%s Passed", name);
- }
- }
-
- if (teardown)
- teardown();
-
- total++;
-}
-
-int
-test_rawdev_skeldev(uint16_t dev_id)
-{
- test_dev_id = dev_id;
- testsuite_setup();
-
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_count);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_get_dev_id);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_socket_id);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_info_get);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_configure);
- SKELDEV_TEST_RUN(test_rawdev_configure, NULL,
- test_rawdev_queue_default_conf_get);
- SKELDEV_TEST_RUN(test_rawdev_configure, NULL, test_rawdev_queue_setup);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_queue_count);
- SKELDEV_TEST_RUN(test_rawdev_queue_setup, NULL,
- test_rawdev_queue_release);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_attr_set_get);
- SKELDEV_TEST_RUN(NULL, NULL, test_rawdev_start_stop);
- SKELDEV_TEST_RUN(test_rawdev_queue_setup, NULL, test_rawdev_enqdeq);
-
- testsuite_teardown();
-
- SKELDEV_TEST_INFO("Total tests : %d", total);
- SKELDEV_TEST_INFO("Passed : %d", passed);
- SKELDEV_TEST_INFO("Failed : %d", failed);
- SKELDEV_TEST_INFO("Not supported : %d", unsupported);
-
- if (failed)
- return -1;
-
- return 0;
-};