.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2017,2020 NXP


DPAA Poll Mode Driver
=====================
The DPAA NIC PMD (**librte_net_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
NXP DPAA (Data Path Acceleration Architecture - Gen 1)
-------------------------------------------------------
This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK. Contents summary:

- DPAA overview
- DPAA driver architecture overview
- FMAN configuration tools and library
DPAA Overview
~~~~~~~~~~~~~

Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.
The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and the accelerators
themselves. DPAA includes:

- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate flow of packets between the components above
Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
  It allows CPUs and other accelerators connected to the SoC datapath to
  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
  data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
  allows software and accelerators on the datapath to acquire and release
  buffers in order to build frames.
Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine
The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
  distribution and policing. Each frame can be parsed and classified, and the
  results may be attached to the frame. This metadata can be used to select
  the particular QMan queue to which the packet is forwarded.
DPAA DPDK - Poll Mode Driver Overview
-------------------------------------
This section provides an overview of the drivers for DPAA:

* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).

A brief description of each driver is provided in the layout below as well as
in the following sections.
.. code-block:: console

                                       +------------+
                                       | DPDK DPAA  |
                                       |    PMD     |
                                       +-----+------+
                                             |
                                       +-----+------+       +---------------+
                                       :  Ethernet  :.......| DPDK DPAA     |
                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
                   .                   +---+---+----+       |  (BMAN)       |
                  .                        ^   |            +-----+---------+
                 .                         |   |<enqueue,         .
                .                          |   | dequeue>         .
               .                           |   |                  .
              .                        +---+---V----+             .
             . . . . . . . . . . . . .:  Portal drv :             .
            .                          :            :             .
           .                           +-----+------+             .
          .                            :    QBMAN   :             .
         .                             :    Driver  :             .
    +----+------+-------+              +-----+------+             .
    |   DPDK DPAA Bus   |                    |                     .
    |   driver          |....................|......................
    |   /bus/dpaa       |                    |
    +-------------------+                    |
                                             |
    ========================== HARDWARE =====|========================
                                            PHY
    =========================================|========================
In the above representation, solid lines represent components which interface
with the DPDK RTE framework and dotted lines represent DPAA internal components.
DPAA bus driver
~~~~~~~~~~~~~~~

The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
Key functions include:

- Scanning and parsing the various objects and adding them to their respective
  device list
- Performing probe for available drivers against each scanned device
- Creating necessary Ethernet instance before passing control to the PMD
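
As a rough illustration of this flow from an application's point of view
(nothing DPAA specific is required; EAL performs the bus scan and probe
described above), the sketch below simply lists the ethdev ports that were
created during probing:

.. code-block:: c

   #include <stdio.h>
   #include <rte_eal.h>
   #include <rte_ethdev.h>

   int
   main(int argc, char **argv)
   {
       uint16_t port_id;
       char name[RTE_ETH_NAME_MAX_LEN];

       /* rte_eal_init() scans the dpaa bus and probes matching PMDs. */
       if (rte_eal_init(argc, argv) < 0)
           return -1;

       /* Every successfully probed FMAN MAC is now a regular ethdev port. */
       RTE_ETH_FOREACH_DEV(port_id) {
           rte_eth_dev_get_name_by_port(port_id, name);
           printf("port %u: %s\n", port_id, name);
       }

       return 0;
   }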
DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~
The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and the DPAA internal components/drivers.

- Once devices have been identified by the DPAA bus, each device is associated
  with its corresponding PMD.
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower level QMan and FMan blocks.
  The Ethernet driver is bound to a FMAN port and implements the interfaces
  needed to connect the DPAA network interface to the network stack.
  Each FMAN port corresponds to a DPDK network interface.
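
Since each FMAN port is exposed as a standard DPDK network interface, an
application can, for example, locate a specific port by name. A minimal
sketch follows; the name ``fm1-mac3`` is only an illustration of the naming
used on DPAA boards:

.. code-block:: c

   #include <rte_ethdev.h>

   /* Look up the ethdev port id of a given FMAN MAC by name,
    * e.g. find_fman_port("fm1-mac3", &pid).  Returns 0 on success. */
   static int
   find_fman_port(const char *name, uint16_t *port_id)
   {
       return rte_eth_dev_get_port_by_name(name, port_id);
   }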
Features
~~~~~~~~

The features of the DPAA PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
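
A minimal sketch of exercising these features through the generic ethdev API
follows; the queue counts, descriptor counts and RSS hash fields chosen below
are illustrative assumptions, not DPAA requirements:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mempool.h>

   /* Configure nb_rxq Rx queues with RSS, one Tx queue, and start the port. */
   static int
   dpaa_port_init(uint16_t port_id, uint16_t nb_rxq, struct rte_mempool *mb_pool)
   {
       struct rte_eth_conf conf = {
           .rxmode.mq_mode = ETH_MQ_RX_RSS,
           .rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
       };
       uint16_t q;
       int ret;

       ret = rte_eth_dev_configure(port_id, nb_rxq, 1, &conf);
       if (ret < 0)
           return ret;

       for (q = 0; q < nb_rxq; q++) {
           ret = rte_eth_rx_queue_setup(port_id, q, 512,
                   rte_eth_dev_socket_id(port_id), NULL, mb_pool);
           if (ret < 0)
               return ret;
       }

       ret = rte_eth_tx_queue_setup(port_id, 0, 512,
               rte_eth_dev_socket_id(port_id), NULL);
       if (ret < 0)
           return ret;

       /* After start, received mbufs carry the RSS hash (m->hash.rss) and
        * the parsed packet type (m->packet_type). */
       return rte_eth_dev_start(port_id);
   }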
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~

DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
Manager.

- Using the standard Mempool operations RTE API, the mempool driver interfaces
  with RTE to service each mempool creation, deletion, buffer allocation and
  deallocation request.
- Each FMAN instance has a BMan pool attached to it during initialization.
  Each Tx frame can be automatically released by hardware, if allocated from
  this pool.
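
For illustration, a packet pool explicitly backed by the hardware pool manager
can be created through the standard mempool API. The ops name ``"dpaa"`` and
the sizes below are assumptions made for this sketch; on DPAA platforms the
default mbuf mempool ops normally already points at the hardware pools:

.. code-block:: c

   #include <rte_mbuf.h>
   #include <rte_lcore.h>

   static struct rte_mempool *
   dpaa_pktmbuf_pool(void)
   {
       /* "dpaa" is assumed to be the BMan-backed mempool ops name;
        * the pool and cache sizes are purely illustrative. */
       return rte_pktmbuf_pool_create_by_ops("rx_pool", 8192, 256, 0,
               RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "dpaa");
   }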
Allowing & Blocking
-------------------

For blocking a DPAA device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"
Prerequisites
-------------

See :doc:`../platform/dpaa` for setup information.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to setup the basic DPDK environment.

.. note::

   Some part of dpaa bus code (qbman and fman - library) routines are
   dual licensed (BSD & GPLv2), however they are used as BSD in DPDK in userspace.
Pre-Installation Configuration
------------------------------

Environment Variables
~~~~~~~~~~~~~~~~~~~~~
The DPAA driver uses the following environment variables to configure its
state during application initialization:
- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. The hardware distributes received packets across this many queues.
  If the application is configured to use fewer queues than configured above,
  it might result in packet loss (because of distribution).
- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high performance queues to be used for ethdev Rx.
  These queues use one private HW portal per queue configured, so they are
  limited in the system. The first configured ethdev queues will automatically
  be assigned from these high performance PUSH queues. Any queue configuration
  beyond that will be standard Rx queues. The application can choose to change
  their number if HW portals are limited.
  The valid values are from '0' to '4'. The value shall be set to '0' if the
  application wants to use eventdev with the DPAA device.
  Currently these queues are not used for the LS1023/LS1043 platform by default.
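
For example, the variables can simply be exported in the shell before the
application is launched (the values shown are illustrative only):

.. code-block:: console

   export DPAA_NUM_RX_QUEUES=4
   export DPAA_PUSH_QUEUES_NUMBER=0   # e.g. when eventdev is to be used
   ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i --rxq=4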
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:
   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 \
        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [dpaa] bus.
      EAL: Detected 4 lcore(s)
      .....
      EAL: dpaa: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>
FMAN Config
-----------

The Frame Manager is also responsible for the parse, classify and distribute
functionality in the DPAA.

FMAN supports:

- Packet parsing at wire speed. It supports standard protocol parsing and
  identification by HW (VLAN/IP/UDP/TCP/SCTP/PPPoE/PPP/MPLS/GRE/IPSec).
  It also supports non-standard UDF header parsing for custom protocols.
- Classification / Distribution: coarse classification based on key generation.
- Hash and exact match lookup.
FMC - FMAN Configuration Tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This tool is available in user space and is used to configure FMAN physical
(MAC) or ephemeral (OH) ports for parse/classify/distribute. The PCDs can be
hash based, where a set of fields are the key input for hash generation within
the FMAN keygen; the hash value is used to generate the FQID for the frame.
There is a provision to set up exact match lookup too, where field values
within a packet drive the corresponding FQID.
Currently it works on XML file inputs.
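
An illustrative invocation is shown below; the file names are placeholders
and the exact options and XML schema are described in the FMC document
referenced after the limitations below:

.. code-block:: console

   fmc -c <config.xml> -p <policy.xml> -a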
Limitations:

1. For dynamic configuration changes, currently no support is available,
   e.g. enabling/disabling a port or an operator (a set of VLANs and
   associated rules).

2. During FMC configuration, the port for which the policy is being configured
   is brought down and the existing policy is flushed from the port before the
   new policy is applied. Support is required to add/append/delete rules.

3. FMC, being a separate user-space application, needs to be invoked from
   user space separately, before the DPDK application is started.

The details can be found in the FMC Doc at:
`Frame Manager Configuration Tool <https://www.nxp.com/docs/en/application-note/AN4760.pdf>`_.
FMLIB
~~~~~

The Frame Manager library provides an API on top of the Frame Manager driver
ioctl calls, that provides a user space application with a simple way to
configure driver parameters and PCD (parse - classify - distribute) rules.

This is an alternative to the FMC based configuration. This library provides
direct ioctl based interfaces for FMAN configuration as used by the FMC tool
as well. This helps in overcoming the main limitation of FMC - i.e. lack
of dynamic configuration.

The location of the fmd driver as used by FMLIB and FMC is:
`NXP QorIQ Linux kernel, sdk_fman
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The storage profiles are a means to provide a virtualized interface. A range
of storage profiles can be associated with Ethernet ports.
They are selected during classification and specify how the frame should be
written to memory and which buffer pool to select for packet storage in
queues. The start and end margins of the buffer can also be configured.
Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA drivers for DPDK can only work on NXP SoCs as listed in the
``Supported DPAA SoCs``.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family supports a maximum packet length of 10240 bytes (jumbo
frame). This value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.
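
A small sketch of how an application might confirm this limit at run time,
using only the generic ethdev info query (purely illustrative):

.. code-block:: c

   #include <rte_ethdev.h>

   /* Query the maximum Rx packet length advertised by the port;
    * on DPAA this is expected to reflect the fixed 10240 byte limit. */
   static uint32_t
   max_rx_pktlen(uint16_t port_id)
   {
       struct rte_eth_dev_info info;

       rte_eth_dev_info_get(port_id, &info);
       return info.max_rx_pktlen;
   }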
Multiprocess Support
~~~~~~~~~~~~~~~~~~~~

The current version of the DPAA driver doesn't support multi-process
applications where I/O is performed using secondary processes. This feature
would be implemented in subsequent versions.