.. SPDX-License-Identifier: BSD-3-Clause

DPAA2 Poll Mode Driver
======================

The DPAA2 NIC PMD (**librte_pmd_dpaa2**) provides poll mode driver
support for the built-in NIC found in the **NXP DPAA2** SoC family.

More information can be found at the `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
NXP DPAA2 (Data Path Acceleration Architecture Gen2)
----------------------------------------------------

This section provides an overview of the NXP DPAA2 architecture
and how it is integrated into the DPDK.

Contents summary:

- Overview of DPAA2 objects
- DPAA2 driver architecture overview
Reference: `FSL MC BUS in Linux Kernel <https://www.kernel.org/doc/readme/drivers-staging-fsl-mc-README.txt>`_.

DPAA2 is a hardware architecture designed for high-speed network
packet processing. DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.
A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources. The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.

The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.

The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.

The diagram below shows an overview of the DPAA2 resource management
architecture:
.. code-block:: console

  +--------------------------------------+
  |                  OS                  |
  |                        DPAA2 drivers |
  |                             |        |
  +-----------------------------|--------+
                                |
                                | (create, discover, connect,
                                |  configure, use, destroy)
                                |
    DPAA2                       |
  +------------------------| mc portal |-+
  |                             |        |
  |   +- - - - - - - - - - - - -V- - -+  |
  |   |                               |  |
  |   |   Management Complex (MC)     |  |
  |   |                               |  |
  |   +- - - - - - - - - - - - - - - -+  |
  |                                      |
  |     Hardware            Hardware     |
  |     Resources           Objects      |
  |     ---------           -------      |
  |     -buffer pools       -DPMCP       |
  |     -Eth MACs/ports     -DPIO        |
  |     -network interface  -DPNI        |
  |     -queue portals      -DPBP        |
  |                                      |
  +--------------------------------------+
The MC mediates operations such as create, discover,
connect, configure, and destroy. Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory-mapped regions in
DPIO objects.
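The control-path pattern described above can be captured in a small model: software writes a command into a memory-mapped portal, then polls a status bit until the MC reports completion. All names and the layout below are invented for this sketch (the real command format belongs to the MC firmware interface, see DPDK's fslmc bus code); it illustrates the pattern, not the DPAA2 API:

```c
#include <stdint.h>

/* Illustrative model of the MC-portal command pattern.  Invented names;
 * the real layout is defined by the MC firmware interface. */

enum { MC_CMD_DONE = 0x1 };          /* hypothetical status bit */

struct mc_portal {                   /* stands in for the MMIO region */
    volatile uint64_t header;        /* command id + status */
    volatile uint64_t params[7];
};

/* The real MC consumes commands asynchronously; to keep this sketch
 * self-contained, completion happens inline. */
static void mock_mc_complete(struct mc_portal *p)
{
    p->header |= MC_CMD_DONE;
}

static int mc_send_command(struct mc_portal *p, uint64_t cmd_id,
                           const uint64_t *params, unsigned int n)
{
    unsigned int i;

    p->header = cmd_id << 1;         /* bit 0 reserved for the status flag */
    for (i = 0; i < n && i < 7; i++)
        p->params[i] = params[i];

    mock_mc_complete(p);             /* hardware would do this */

    while (!(p->header & MC_CMD_DONE))
        ;                            /* poll for completion */
    return 0;
}
```

Applications never touch the portal directly; in DPDK this role is played by the fslmc bus driver on their behalf.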
Overview of DPAA2 Objects
~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described, illustrating the objects involved
in creating a network interface.
DPRC (Datapath Resource Container)

A DPRC is a container object that holds all the other
types of DPAA2 objects. In the example diagram below there
are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
in the container.
.. code-block:: console

  +---------------------------------------------------------+
  |                          DPRC                           |
  |                                                         |
  |  +-------+  +-------+  +-------+  +-------+  +-------+  |
  |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |  |
  |  +-------+  +-------+  +-------+  +---+---+  +---+---+  |
  |  | DPMCP |  | DPIO  |                                   |
  |  +-------+  +-------+                                   |
  |  | DPMCP |                                              |
  |  +-------+                                              |
  |                                                         |
  +---------------------------------------------------------+
From the point of view of an OS, a DPRC behaves similarly to a plug-and-play
bus, like PCI. DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).
.. code-block:: console

     DPRC.2 (bus)
       |
       +--+--------+-------+-------+-------+
          |        |       |       |       |
        DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
        DPMCP.2  DPIO.2
        DPMCP.3
Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.
A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands. It has an interrupt for status events (like
hotplug).
All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.
DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts
it. There is also a Linux user space tool called "restool"
that can be used to create/destroy containers and objects
dynamically.
DPAA2 Objects for an Ethernet Network Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A typical Ethernet NIC is monolithic -- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts. DPAA2 uses a more granular approach
utilizing multiple hardware objects. Each object provides specialized
functions. Groups of these objects are used by software to provide
Ethernet network interface functionality. This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.
The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs:

.. code-block:: console

   +---+---+    +---+---+
   | CPU 0 |    | CPU 1 |
   +---+---+    +---+---+
       |            |
   +---+---+    +---+---+
   | DPIO  |    | DPIO  |
   +---+---+    +---+---+
          \      /
           \    /
            \  /
         +---+---+   +-------+   +-------+
         | DPNI  |---| DPBP  |   | DPMCP |
         +---+---+   +-------+   +-------+
             |
         +---+---+
         | DPMAC |
         +---+---+
             |
          (PHY/link)
The objects are described below. For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of key resources of the object (MMIO regions
and IRQs).
DPMAC (Datapath Ethernet MAC): represents an Ethernet MAC, a
hardware device that connects to an Ethernet PHY and allows
physical transmission and reception of Ethernet frames.

- IRQs: DPNI link change
- commands: set link up/down, link config, get stats, IRQ config, enable, reset
DPNI (Datapath Network Interface): contains TX/RX queues,
network interface configuration, and RX buffer pool configuration
mechanisms. The TX/RX queues are in memory and are identified by
queue number.

- commands: port config, offload config, queue config, parse/classify config, IRQ config, enable, reset
DPIO (Datapath I/O): provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations. The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves. The DPIO provides an MMIO interface to
enqueue/dequeue packets. To enqueue something, a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU. This allows all
CPUs to simultaneously perform enqueue/dequeue operations. DPIOs are
expected to be shared by different DPAA2 drivers.

- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer pool depletion
- commands: IRQ config, enable, reset
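The enqueue model described above -- a descriptor write into a per-CPU portal that names the target queue -- can be sketched as follows. The types and names are invented for illustration and are not the DPAA2 or DPDK API:

```c
#include <stdint.h>

/* Sketch of the DPIO enqueue model: an enqueue is a descriptor write
 * carrying the target queue number, issued through a portal that would
 * typically be assigned one-per-CPU.  Illustrative names only. */

#define NUM_QUEUES  4
#define QUEUE_DEPTH 64

struct enqueue_desc {
    uint32_t target_queue;           /* frame queue the packet goes to */
    uint64_t buf_addr;               /* address of the packet buffer */
};

struct dpio_portal {
    /* Stand-in for hardware frame queues reached through this portal. */
    struct enqueue_desc ring[NUM_QUEUES][QUEUE_DEPTH];
    uint32_t tail[NUM_QUEUES];
};

static int dpio_enqueue(struct dpio_portal *p, uint32_t queue, uint64_t buf)
{
    if (queue >= NUM_QUEUES || p->tail[queue] >= QUEUE_DEPTH)
        return -1;                   /* bad queue or queue full */

    /* "Write the descriptor to the portal": real hardware routes it to
     * the frame queue itself; here it lands in a software ring. */
    p->ring[queue][p->tail[queue]].target_queue = queue;
    p->ring[queue][p->tail[queue]].buf_addr = buf;
    p->tail[queue]++;
    return 0;
}
```

Because each CPU owns its portal, no locking is needed on the enqueue path, which is the reason for the one-DPIO-per-CPU arrangement.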
DPBP (Datapath Buffer Pool): represents a hardware buffer
pool.

- commands: enable, reset
DPMCP (Datapath MC Portal): provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.

- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset
Object Connections
~~~~~~~~~~~~~~~~~~

Some objects have explicit relationships that must
be configured:

- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port

A DPNI must be connected to something such as a DPMAC,
another DPNI, or an L2 switch port. The DPNI connection
is made via a DPRC command.
.. code-block:: console

   +-------+          +-------+
   | DPNI  |<-------->| DPMAC |<---> (PHY/link)
   +-------+ connect  +-------+
A network interface requires a 'buffer pool' (DPBP object) which provides
a list of pointers to memory where received Ethernet data is to be copied.
The Ethernet driver configures the DPBPs associated with the network
interface.
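The buffer-pool contract can be sketched as a free-list of buffer pointers: software seeds the pool, hardware takes a pointer for each received frame, and software returns the buffer after processing. This is illustrative only (the real pool is the DPBP object, driven through a DPIO) and all names are invented:

```c
#include <stddef.h>

/* Sketch of the buffer-pool contract: a pool of pointers to free buffers.
 * Hardware takes one per received frame; software gives it back after
 * the packet has been processed.  Not the DPAA2 API. */

#define POOL_CAP 8

struct buf_pool {
    void *free_bufs[POOL_CAP];
    size_t count;                    /* free buffers currently in the pool */
};

/* Return a buffer to the pool (also used to seed it at startup). */
static int pool_release(struct buf_pool *p, void *buf)
{
    if (p->count == POOL_CAP)
        return -1;                   /* pool full */
    p->free_bufs[p->count++] = buf;
    return 0;
}

/* Take a free buffer out of the pool, as hardware does on packet arrival. */
static void *pool_acquire(struct buf_pool *p)
{
    return p->count ? p->free_bufs[--p->count] : NULL;
}
```

If the pool runs dry, received frames have nowhere to land and are dropped, which is why the DPIO exposes buffer-pool depletion notifications.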
Interrupts
~~~~~~~~~~

All interrupts generated by DPAA2 objects are message
interrupts. At the hardware level, message interrupts
generated by devices will normally have 3 components --
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.
In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.
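The three message-interrupt components listed above can be captured in a small sketch; the field names are illustrative and not a DPAA2 or Linux structure:

```c
#include <stdint.h>

/* Sketch of the three message-interrupt components described above.
 * Illustrative field names only. */

struct msi_message {
    uint32_t device_id;  /* non-spoofable ID driven on the bus; shared by
                            all objects in one DPRC (the stream ID on
                            ARM-based SoCs) */
    uint64_t addr;       /* doorbell address the device writes to */
    uint32_t data;       /* value identifying the interrupt */
};

/* Every object in a container raises MSIs with the container's ID. */
static struct msi_message make_msi(uint32_t dprc_stream_id,
                                   uint64_t doorbell, uint32_t data)
{
    struct msi_message m = { dprc_stream_id, doorbell, data };
    return m;
}
```

Because the device-id is per-container, the IOMMU/interrupt-controller can only distinguish containers, not individual objects -- the same granularity noted earlier for memory isolation.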
DPAA2 DPDK - Poll Mode Driver Overview
--------------------------------------

This section provides an overview of the drivers for
DPAA2 -- 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).
As described previously, a DPRC is a container that holds the other
types of DPAA2 objects. It is functionally similar to a plug-and-play
bus.
Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the DPAA2 drivers involved in a networking
scenario and the objects bound to each driver. A brief description
of each driver follows.
.. code-block:: console

              +------------+       +------------+
              |  Ethernet  |.......|  Mempool   |
              |   (DPNI)   |       |   (DPBP)   |
              +-----+------+       +-----+------+
                    |                    |
                    | <enqueue/          |
                    |  dequeue>          |
              +-----+------+             |
              | DPIO driver|             |
              |   (DPIO)   |             |
              +-----+------+             |
                    |                    |
              +-----+--------------------+-----+
              |        VFIO fslmc-bus          |
              +---------------+----------------+
                              |
   =========================  |  == HARDWARE ===================
              DPNI     DPBP   |  DPIO    MC portal
   ===========================|=================================
DPAA2 bus driver
~~~~~~~~~~~~~~~~

The DPAA2 bus driver is a ``rte_bus`` driver which scans the fsl-mc bus.
Key functions include:

- Reading the container and setting up the VFIO group
- Scanning and parsing the various MC objects and adding them to
  their respective device lists

Additionally, it also provides the object driver for generic MC objects.
DPIO driver
~~~~~~~~~~~

The DPIO driver is bound to DPIO objects and provides services that allow
other drivers, such as the Ethernet driver, to enqueue and dequeue data for
their respective objects.
Key services include:

- Data availability notifications
- Hardware queuing operations (enqueue and dequeue of data)
- Hardware buffer pool management
To transmit a packet the Ethernet driver puts data on a queue and
invokes a DPIO API. For receive, the Ethernet driver registers
a data availability notification callback. To dequeue a packet
a DPIO API is used.
There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.

The DPIO driver operates on behalf of all active DPAA2 drivers --
Ethernet, crypto, compression, etc.
DPBP based Mempool driver
~~~~~~~~~~~~~~~~~~~~~~~~~

The DPBP driver is bound to DPBP objects and provides services to
create a hardware offloaded packet buffer mempool.
DPAA2 NIC driver
~~~~~~~~~~~~~~~~

The Ethernet driver is bound to a DPNI and implements the kernel
interfaces needed to connect the DPAA2 network interface to
the network stack.

Each DPNI corresponds to a DPDK network interface.
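An application drives such an interface through the standard DPDK polling model. The sketch below uses stub functions standing in for DPDK's real ``rte_eth_rx_burst()``/``rte_eth_tx_burst()`` so it stays self-contained; with the DPAA2 PMD, each port would be backed by a DPNI and serviced through the calling core's DPIO:

```c
#include <stdint.h>

/* Sketch of the polling model an application runs on top of the PMD.
 * The stubs stand in for DPDK's rte_eth_rx_burst()/rte_eth_tx_burst();
 * everything here is illustrative and self-contained. */

#define BURST 32

struct pkt { uint64_t data; };

/* Stub receive: pretend exactly one packet arrived on the port. */
static uint16_t stub_rx_burst(uint16_t port, struct pkt **pkts, uint16_t n)
{
    static struct pkt p = { 0xabcd };
    (void)port;
    if (n == 0)
        return 0;
    pkts[0] = &p;
    return 1;
}

/* Stub transmit: pretend every packet was accepted by the hardware. */
static uint16_t stub_tx_burst(uint16_t port, struct pkt **pkts, uint16_t n)
{
    (void)port;
    (void)pkts;
    return n;
}

/* One iteration of the classic forwarding loop: rx port 0, tx port 1. */
static uint16_t forward_once(void)
{
    struct pkt *burst[BURST];
    uint16_t nb_rx = stub_rx_burst(0, burst, BURST);
    return stub_tx_burst(1, burst, nb_rx);
}
```

In a real application this loop runs per lcore, which matches the one-DPIO-per-CPU design described earlier.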
Features
--------

The features of the DPAA2 PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Port hardware statistics
- Scatter and gather for TX and RX
Prerequisites
-------------

See :doc:`../platform/dpaa2` for setup information.

Currently supported by DPDK:

- NXP LSDK **19.08+**.
- MC Firmware version **10.18.0** and higher.
- Supported architectures: **arm64 LE**.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
Some parts of the fslmc bus code (MC flib - object library) routines are
dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.
Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_FSLMC_BUS`` (default ``y``)

  Toggle compilation of the ``librte_bus_fslmc`` driver.

- ``CONFIG_RTE_LIBRTE_DPAA2_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_dpaa2`` driver.

- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER`` (default ``n``)

  Toggle display of debugging messages/logic.

- ``CONFIG_RTE_LIBRTE_DPAA2_USE_PHYS_IOVA`` (default ``n``)

  Toggle to use physical address vs virtual address for hardware accelerators.
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd.

Example output:

.. code-block:: console

   ./testpmd -c 0xff -n 1 -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
   EAL: Registered [pci] bus.
   EAL: Registered [fslmc] bus.
   EAL: Detected 8 lcore(s)
   EAL: Probing VFIO support...
   EAL: VFIO support initialized

   PMD: DPAA2: Processing Container = dprc.2
   EAL: fslmc: DPRC contains = 51 devices
   EAL: fslmc: Bus scan completed

   Configuring Port 0 (socket 0)
   Port 0: 00:00:00:00:00:01
   Configuring Port 1 (socket 0)
   Port 1: 00:00:00:00:00:02

   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Port 1 Link Up - speed 10000 Mbps - full-duplex
* Use dev arg option ``drv_loopback=1`` to loopback packets at
  driver level. Any packet received will be reflected back by the
  driver on the same port. e.g. ``fslmc:dpni.1,drv_loopback=1``

* Use dev arg option ``drv_no_prefetch=1`` to disable prefetching
  of the packet pull command which is issued in the previous cycle.
  e.g. ``fslmc:dpni.1,drv_no_prefetch=1``
Enabling logs
-------------

For enabling logs for the DPAA2 PMD, the following log-level prefix can be used:

.. code-block:: console

   <dpdk app> <EAL args> --log-level=bus.fslmc:<level> -- ...

Using ``bus.fslmc`` as the log matching criterion, all FSLMC bus logs
with a level lower than or equal to ``level`` are enabled.

Or:

.. code-block:: console

   <dpdk app> <EAL args> --log-level=pmd.net.dpaa2:<level> -- ...

Using ``pmd.net.dpaa2`` as the log matching criterion, all PMD logs
with a level lower than or equal to ``level`` are enabled.
Whitelisting & Blacklisting
---------------------------

For blacklisting a DPAA2 device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "fslmc:dpni.x" -- ...

where ``x`` is the device object ID as configured in the resource container.
Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA2 drivers for DPDK can only work on NXP SoCs as listed in the
``Supported DPAA2 SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA2 SoC family supports a maximum jumbo frame size of 10240 bytes.
The value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.
Other Limitations
~~~~~~~~~~~~~~~~~

- RSS hash key cannot be modified.
- RSS RETA cannot be configured.