1 .. SPDX-License-Identifier: BSD-3-Clause
8 The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
9 support for the inbuilt NIC found in the **NXP DPAA** SoC family.
11 More information can be found at `NXP Official Website
12 <http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
14 NXP DPAA (Data Path Acceleration Architecture - Gen 1)
15 ------------------------------------------------------
17 This section provides an overview of the NXP DPAA architecture
18 and how it is integrated into the DPDK.
23 - DPAA driver architecture overview
30 Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
32 The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
33 components on specific QorIQ series multicore processors. This architecture
34 provides the infrastructure to support simplified sharing of networking
35 interfaces and accelerators by multiple CPU cores, and the accelerators
41 - Network and packet I/O
42 - Hardware offload accelerators
43 - Infrastructure required to facilitate flow of packets between the components above
45 Infrastructure components are:
47 - The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
48 It allows CPUs and other accelerators connected to the SoC datapath to
49 enqueue and dequeue ethernet frames, thus providing the infrastructure for
50 data exchange among CPUs and datapath accelerators.
51 - The Buffer Manager (BMan) is a hardware buffer pool management block that
52 allows software and accelerators on the datapath to acquire and release
53 buffers in order to build frames.
55 Hardware accelerators are:
57 - SEC - Cryptographic accelerator
58 - PME - Pattern matching engine
60 The Network and packet I/O component:
62 - The Frame Manager (FMan) is a key component in the DPAA and makes use of the
63 DPAA infrastructure (QMan and BMan). FMan is responsible for packet
distribution and policing. Each frame can be parsed, classified and the
results may be attached to the frame. This metadata can be used to select
the particular QMan queue to which the packet is forwarded.
69 DPAA DPDK - Poll Mode Driver Overview
70 -------------------------------------
72 This section provides an overview of the drivers for DPAA:
74 * Bus driver and associated "DPAA infrastructure" drivers
75 * Functional object drivers (such as Ethernet).
A brief description of each driver is provided in the layout below as well
as in the following sections.
80 .. code-block:: console
87 +-----+------+ +---------------+
88 : Ethernet :.......| DPDK DPAA |
89 . . . . . . . . . : (FMAN) : | Mempool driver|
90 . +---+---+----+ | (BMAN) |
91 . ^ | +-----+---------+
96 . . . . . . . . . . .: Portal drv : .
101 +----+------+-------+ +-----+------+ .
102 | DPDK DPAA Bus | | .
103 | driver |....................|.....................
105 +-------------------+ |
107 ========================== HARDWARE =====|========================
109 =========================================|========================
In the above representation, solid lines represent components which interface
with the DPDK RTE Framework and dotted lines represent DPAA internal components.
The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
118 Key functions include:
120 - Scanning and parsing the various objects and adding them to their respective
122 - Performing probe for available drivers against each scanned device
123 - Creating necessary ethernet instance before passing control to the PMD
125 DPAA NIC Driver (PMD)
126 ~~~~~~~~~~~~~~~~~~~~~
The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and the DPAA internal components/drivers.
131 - Once devices have been identified by DPAA Bus, each device is associated
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower-level QMan and FMan blocks.
135 The Ethernet driver is bound to a FMAN port and implements the interfaces
136 needed to connect the DPAA network interface to the network stack.
137 Each FMAN Port corresponds to a DPDK network interface.
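
From the application's point of view, each of these FMAN port backed
interfaces is exposed through the standard ethdev API once ``rte_eal_init()``
has scanned the bus and probed the PMD. The minimal sketch below enumerates
them; it uses only generic ethdev calls and no DPAA-specific API.

.. code-block:: c

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(int argc, char **argv)
    {
        struct rte_eth_dev_info dev_info;
        char name[RTE_ETH_NAME_MAX_LEN];
        uint16_t port_id;

        /* EAL initialization scans the DPAA bus and probes the PMD,
         * creating one ethdev port per FMAN port. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* List the ports created by the bus/PMD pair. */
        RTE_ETH_FOREACH_DEV(port_id) {
            rte_eth_dev_get_name_by_port(port_id, name);
            rte_eth_dev_info_get(port_id, &dev_info);
            printf("port %u: %s (driver %s)\n",
                   port_id, name, dev_info.driver_name);
        }
        return 0;
    }
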
143 Features of the DPAA PMD are:
145 - Multiple queues for TX and RX
146 - Receive Side Scaling (RSS)
147 - Packet type information
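
A minimal sketch of how an application could exercise the multi-queue and RSS
features through the generic ethdev API is shown below. The ``setup_port()``
helper name, queue count, ring sizes and RSS hash fields are illustrative
assumptions, not part of the PMD.

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: bring up one port with 'nb_queues' Rx/Tx queue
     * pairs and IP-based RSS. 'mp' is an mbuf pool created elsewhere. */
    static int
    setup_port(uint16_t port_id, uint16_t nb_queues, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf;
        uint16_t q;
        int ret;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_RSS;            /* spread Rx across queues */
        conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;  /* hash on IP fields */

        ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < nb_queues; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 512,
                    rte_eth_dev_socket_id(port_id), NULL, mp);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port_id, q, 512,
                    rte_eth_dev_socket_id(port_id), NULL);
            if (ret < 0)
                return ret;
        }
        return rte_eth_dev_start(port_id);
    }
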
154 DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service each mempool creation, deletion, buffer allocation and
  deallocation request.
160 - Each FMAN instance has a BMan pool attached to it during initialization.
161 Each Tx frame can be automatically released by hardware, if allocated from
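
A minimal sketch of creating a BMan backed mbuf pool from an application is
shown below. The explicit mempool ops name ``dpaa`` and the pool sizing
values are assumptions; on DPAA platform builds the DPAA ops are normally
selected as the default, in which case a plain ``rte_pktmbuf_pool_create()``
gives the same result.

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Create a packet mbuf pool explicitly backed by the DPAA (BMan)
     * mempool ops; the ops name "dpaa" is assumed here. */
    static struct rte_mempool *
    create_dpaa_pktmbuf_pool(void)
    {
        return rte_pktmbuf_pool_create_by_ops("pktmbuf_pool",
                8192,                        /* number of mbufs */
                256,                         /* per-lcore cache size */
                0,                           /* private data size */
                RTE_MBUF_DEFAULT_BUF_SIZE,   /* data room size */
                rte_socket_id(),
                "dpaa");
    }
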
165 Whitelisting & Blacklisting
166 ---------------------------
For blacklisting a DPAA device, the following command can be used.
170 .. code-block:: console
172 <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
173 e.g. "dpaa_bus:fm1-mac4"
184 See :doc:`../platform/dpaa` for setup information
187 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
188 to setup the basic DPDK environment.
Some parts of the dpaa bus code (the qbman and fman library routines) are
dual licensed (BSD & GPLv2); however, they are used as BSD in userspace DPDK.
195 Pre-Installation Configuration
196 ------------------------------
201 The following options can be modified in the ``config`` file.
202 Please note that enabling debugging options may affect system performance.
204 - ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``y``)
206 Toggle compilation of the ``librte_bus_dpaa`` driver.
208 - ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``y``)
210 Toggle compilation of the ``librte_pmd_dpaa`` driver.
212 - ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)
Toggles display of bus configurations and enables a debugging queue
to fetch error (Rx/Tx) packets to the driver. By default, packets with errors
(such as a wrong checksum) are dropped by the hardware.
218 - ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)
220 Enables debugging of the Queue and Buffer Manager layer which interacts
221 with the DPAA hardware.
224 Environment Variables
225 ~~~~~~~~~~~~~~~~~~~~~
The DPAA drivers use the following environment variables to configure their
state during application initialization:
230 - ``DPAA_NUM_RX_QUEUES`` (default 1)
232 This defines the number of Rx queues configured for an application, per
233 port. Hardware would distribute across these many number of queues on Rx
If the application is configured to use fewer queues than configured above,
it might result in packet loss (because of the distribution).
238 - ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)
This defines the number of high performance queues to be used for ethdev Rx.
These queues use one private HW portal per configured queue, so their number
in the system is limited. The first configured ethdev queues will
automatically be assigned from these high performance PUSH queues; any queue
configuration beyond that will use standard Rx queues. The application can
choose to change their number if HW portals are limited.
The valid values are from '0' to '4'. The value shall be set to '0' if the
application wants to use eventdev with the DPAA device.
Currently these queues are not used for the LS1023/LS1043 platform by default.
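
These variables are normally exported in the shell before launching the
application. The sketch below shows the equivalent done programmatically
before ``rte_eal_init()``; the chosen values are only illustrative.

.. code-block:: c

    #include <stdlib.h>
    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        /* The DPAA driver reads these variables during rte_eal_init(),
         * so they must be set before EAL initialization. */
        setenv("DPAA_NUM_RX_QUEUES", "4", 1);
        setenv("DPAA_PUSH_QUEUES_NUMBER", "0", 1); /* e.g. when using eventdev */

        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* ... application setup ... */
        return 0;
    }
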
251 Driver compilation and testing
252 ------------------------------
254 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
259 Follow instructions available in the document
260 :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
265 .. code-block:: console
267 ./arm64-dpaa-linux-gcc/testpmd -c 0xff -n 1 \
268 -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
271 EAL: Registered [pci] bus.
272 EAL: Registered [dpaa] bus.
273 EAL: Detected 4 lcore(s)
275 EAL: dpaa: Bus scan completed
277 Configuring Port 0 (socket 0)
278 Port 0: 00:00:00:00:00:01
279 Configuring Port 1 (socket 0)
280 Port 1: 00:00:00:00:00:02
282 Checking link statuses...
283 Port 0 Link Up - speed 10000 Mbps - full-duplex
284 Port 1 Link Up - speed 10000 Mbps - full-duplex
The DPAA drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA SoCs``.
297 Maximum packet length
298 ~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family supports a maximum of a 10240 byte jumbo frame. The value
is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
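
A short sketch of requesting jumbo frames through the generic ethdev API is
shown below; the ``configure_jumbo()`` helper name and the single queue count
are illustrative.

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>

    /* Request Rx of jumbo frames up to the DPAA limit of 10240 bytes. */
    static int
    configure_jumbo(uint16_t port_id)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.max_rx_pkt_len = 10240;
        conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }
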
The current version of the DPAA driver does not support multi-process
applications where I/O is performed using secondary processes. This feature
will be implemented in subsequent versions.