.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2016,2020-2021 NXP


DPAA2 Poll Mode Driver
======================
The DPAA2 NIC PMD (**librte_net_dpaa2**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA2** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
NXP DPAA2 (Data Path Acceleration Architecture Gen2)
----------------------------------------------------

This section provides an overview of the NXP DPAA2 architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA2 overview
- Overview of DPAA2 objects
- DPAA2 driver architecture overview
DPAA2 Overview
~~~~~~~~~~~~~~

Reference: `FSL MC BUS in Linux Kernel <https://www.kernel.org/doc/readme/drivers-staging-fsl-mc-README.txt>`_.
DPAA2 is a hardware architecture designed for high-speed network
packet processing. DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.

A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources. The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.

The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.

The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.
The diagram below shows an overview of the DPAA2 resource management
architecture:

.. code-block:: console
   +--------------------------------------+
   |                  OS                  |
   |                        DPAA2 drivers |
   |                             |        |
   +-----------------------------|--------+
                                 |
                                 | (create,discover,connect
                                 |  config,use,destroy)
                                 |
                   DPAA2         |
   +------------------------| mc portal |-+
   |                             |        |
   |   +- - - - - - - - - - - - -V- - -+  |
   |   |                               |  |
   |   |   Management Complex (MC)     |  |
   |   |                               |  |
   |   +- - - - - - - - - - - - - - - -+  |
   |                                      |
   |  hardware resources    MC objects    |
   |   -queues               -DPRC        |
   |   -buffer pools         -DPMCP       |
   |   -Eth MACs/ports       -DPIO        |
   |   -network interface    -DPNI        |
   |   -queue portals        -DPBP        |
   |   -MC portals            ...         |
   |    ...                               |
   |                                      |
   +--------------------------------------+
The MC mediates operations such as create, discover,
connect, configuration, and destroy. Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory mapped regions in
DPIO objects.
Overview of DPAA2 Objects
~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described illustrating the objects involved
in creating a network interface.
DPRC (Datapath Resource Container)

 A DPRC is a container object that holds all the other
 types of DPAA2 objects. In the example diagram below there
 are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
 in the container.
.. code-block:: console
   +---------------------------------------------------------+
   | DPRC                                                     |
   |                                                          |
   |  +-------+  +-------+  +-------+  +-------+  +-------+   |
   |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |   |
   |  +-------+  +-------+  +-------+  +---+---+  +---+---+   |
   |  | DPMCP |  | DPIO  |                                    |
   |  +-------+  +-------+                                    |
   |  | DPMCP |                                               |
   |  +-------+                                               |
   |                                                          |
   +---------------------------------------------------------+
From the point of view of an OS, a DPRC behaves similarly to a plug and
play bus, like PCI. DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).
.. code-block:: console

      DPRC.1 (bus)
        |
        +--+--------+-------+-------+-------+
           |        |       |       |       |
         DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
         DPMCP.2  DPIO.2
         DPMCP.3
Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.

A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands. It has an interrupt for status events (like
hotplug).
All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.
DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts
it. There is also a Linux user space tool called "restool"
that can be used to create/destroy containers and objects
dynamically.
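For illustration, a typical restool session might look as follows. This is
a sketch only; the available sub-commands and options depend on the restool
version installed, and ``dprc.2``/``dpni.1`` are example object names:

.. code-block:: console

   # list the available resource containers
   restool dprc list

   # list the objects inside container dprc.2
   restool dprc show dprc.2

   # create a new network interface object and plug it into the container
   restool dpni create
   restool dprc assign dprc.2 --object=dpni.1 --plugged=1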
DPAA2 Objects for an Ethernet Network Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A typical Ethernet NIC is monolithic-- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts. DPAA2 uses a more granular approach
utilizing multiple hardware objects. Each object provides specialized
functions. Groups of these objects are used by software to provide
Ethernet network interface functionality. This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.
The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs.

.. code-block:: console

      +---+---+ +---+---+
         CPU0     CPU1
      +---+---+ +---+---+
          |         |
      +---+---+ +---+---+
      | DPIO  | | DPIO  |
      +---+---+ +---+---+
            \     /
             \   /
              \ /
           +---+---+
           | DPNI  | --- DPBP,DPMCP
           +---+---+
               |
               |
           +---+---+
           | DPMAC |
           +---+---+
               |
            port/PHY
Each object is described below. For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of key resources of the object (MMIO regions
and IRQs).
DPMAC (Datapath Ethernet MAC): represents an Ethernet MAC, a
hardware device that connects to an Ethernet PHY and allows
physical transmission and reception of Ethernet frames.

- MMIO regions: none
- IRQs: DPNI link change
- commands: set link up/down, link config, get stats, IRQ config, enable, reset
DPNI (Datapath Network Interface): contains TX/RX queues,
network interface configuration, and RX buffer pool configuration
mechanisms. The TX/RX queues are in memory and are identified by
queue number.

- MMIO regions: none
- IRQs: link state
- commands: port config, offload config, queue config, parse/classify config, IRQ config, enable, reset
DPIO (Datapath I/O): provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations. The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves. The DPIO provides an MMIO interface to
enqueue/dequeue packets. To enqueue something a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU. This allows all
CPUs to simultaneously perform enqueue/dequeue operations. DPIOs are
expected to be shared by different DPAA2 drivers.

- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer pool depletion
- commands: IRQ config, enable, reset
DPBP (Datapath Buffer Pool): represents a hardware buffer
pool.

- MMIO regions: none
- IRQs: none
- commands: enable, reset
DPMCP (Datapath MC Portal): provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.

- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset
Object Connections
~~~~~~~~~~~~~~~~~~

Some objects have explicit relationships that must
be configured:

- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port

A DPNI must be connected to something such as a DPMAC,
another DPNI, or an L2 switch port. The DPNI connection
is made via a DPRC command.
.. code-block:: console

      +-------+  +-------+
      | DPNI  |  | DPMAC |
      +---+---+  +---+---+
          |          |
          +==========+
A network interface requires a 'buffer pool' (DPBP object) which provides
a list of pointers to memory where received Ethernet data is to be copied.
The Ethernet driver configures the DPBPs associated with the network
interface.
Interrupts
~~~~~~~~~~

All interrupts generated by DPAA2 objects are message
interrupts. At the hardware level message interrupts
generated by devices will normally have 3 components--
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.
In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.
DPAA2 DPDK - Poll Mode Driver Overview
--------------------------------------
This section provides an overview of the drivers for
DPAA2-- 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).
As described previously, a DPRC is a container that holds the other
types of DPAA2 objects. It is functionally similar to a plug-and-play
bus, like PCI.
Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the dpaa2 drivers involved in a networking
scenario and the objects bound to each driver.
.. code-block:: console

                              +------------+       +------------+
                              |  Ethernet  |.......|  Mempool   |
          . . . . . . . . . . |   (DPNI)   |       |   (DPBP)   |
         .                    +---+---+----+       +-----+------+
        .                         ^   |                  .
       .                          |   |<enqueue,         .
      .                           |   | dequeue>         .
     .                            |   |                  .
    .                         +---+---V----+             .
   .    . . . . . . . . . . . | DPIO driver|             .
   .   .                      |  (DPIO)    |             .
   .   .                      +-----+------+             .
   .   .                            |                    .
   +----+------+-------+      +-----+----- |             .
   |                   |      |            V             .
   | VFIO fslmc-bus    |....................|.....................
   |                   |                    |
   +-------------------+                    |
                                            |
   ========================== HARDWARE =====|=======================
                                          DPIO
                                            |
                                          DPNI---DPBP
                                            |
                                          DPMAC
                                            |
                                           PHY
   =========================================|========================
A brief description of each driver is provided below.

DPAA2 bus driver
~~~~~~~~~~~~~~~~

The DPAA2 bus driver is an ``rte_bus`` driver which scans the fsl-mc bus.
Key functions include:
- Reading the container and setting up the VFIO group
- Scanning and parsing the various MC objects and adding them to
  their respective device lists
Additionally, it provides the object driver for generic MC objects.
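Before the scan can succeed, the resource container has to be bound to the
``vfio-fsl-mc`` driver and exported via the ``DPRC`` environment variable.
A minimal sketch, assuming the container is ``dprc.2`` (see
:doc:`../platform/dpaa2` for the complete setup):

.. code-block:: console

   # bind the resource container to the VFIO fsl-mc driver
   echo vfio-fsl-mc > /sys/bus/fsl-mc/devices/dprc.2/driver_override
   echo dprc.2 > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/bind

   # tell the DPDK fslmc bus which container to use
   export DPRC=dprc.2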
DPIO driver
~~~~~~~~~~~

The DPIO driver is bound to DPIO objects and provides services that allow
other drivers such as the Ethernet driver to enqueue and dequeue data for
their respective objects.
Key services include:

- Data availability notifications
- Hardware queuing operations (enqueue and dequeue of data)
- Hardware buffer pool management
To transmit a packet the Ethernet driver puts data on a queue and
invokes a DPIO API. For receive, the Ethernet driver registers
a data availability notification callback. To dequeue a packet
a DPIO API is used.

There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.
The DPIO driver operates on behalf of all active DPAA2
drivers -- Ethernet, crypto, compression, etc.
DPBP based Mempool driver
~~~~~~~~~~~~~~~~~~~~~~~~~

The DPBP driver is bound to DPBP objects and provides services to
create a hardware offloaded packet buffer mempool.
Ethernet driver
~~~~~~~~~~~~~~~

The Ethernet driver is bound to a DPNI and implements the DPDK ethdev
interfaces needed to connect the DPAA2 network interface to
the network stack.

Each DPNI corresponds to a DPDK network interface.
Features
^^^^^^^^

Features of the DPAA2 PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Port hardware statistics
- Scatter and gather for TX and RX
- :ref:`Traffic Management API <dptmapi>`
Supported DPAA2 SoCs
^^^^^^^^^^^^^^^^^^^^

- LX2160A
- LS2084A/LS2044A
- LS2088A/LS2048A
- LS1088A/LS1048A

Prerequisites
-------------

See :doc:`../platform/dpaa2` for setup information.

Currently supported by DPDK:

- NXP LSDK **19.08+**.
- MC Firmware version **10.18.0** and higher.
- Supported architectures: **arm64 LE**.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

.. note::

   Some parts of the fslmc bus code (MC flib - object library) are dual
   licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./dpdk-testpmd -c 0xff -n 1 -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
      .....
      EAL: Registered [pci] bus.
      EAL: Registered [fslmc] bus.
      EAL: Detected 8 lcore(s)
      EAL: Probing VFIO support...
      EAL: VFIO support initialized
      .....
      PMD: DPAA2: Processing Container = dprc.2
      EAL: fslmc: DPRC contains = 51 devices
      EAL: fslmc: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>
* Use dev arg option ``drv_loopback=1`` to loopback packets at
  driver level. Any packet received will be reflected back by the
  driver on the same port. e.g. ``fslmc:dpni.1,drv_loopback=1``

* Use dev arg option ``drv_no_prefetch=1`` to disable prefetching
  of the packet pull command, which is otherwise issued in the
  previous cycle. e.g. ``fslmc:dpni.1,drv_no_prefetch=1``

* Use dev arg option ``drv_tx_conf=1`` to enable TX confirmation mode.
  In this mode TX conf queues need to be polled to free the buffers.
  e.g. ``fslmc:dpni.1,drv_tx_conf=1``

* Use dev arg option ``drv_error_queue=1`` to enable packets in the error
  queue. By default, DPAA2 hardware drops error packets in hardware. This
  option configures the hardware to not drop them and lets the driver dump
  the error packets, so that users can check what is wrong with them.
  e.g. ``fslmc:dpni.1,drv_error_queue=1``
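Dev args are appended to the device specification on the EAL allow list.
For example, an illustrative testpmd invocation with loopback enabled on
``dpni.1`` (adjust cores and object names to your setup):

.. code-block:: console

   ./dpdk-testpmd -c 0x3 -n 1 -a fslmc:dpni.1,drv_loopback=1 -- -i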
Enabling logs
-------------

To enable logging for the DPAA2 PMD, the following log-level prefixes can be used:

.. code-block:: console

   <dpdk app> <EAL args> --log-level=bus.fslmc:<level> -- ...

Using ``bus.fslmc`` as the log matching criterion, all FSLMC bus logs
lower than the given ``level`` can be enabled.

Or

.. code-block:: console

   <dpdk app> <EAL args> --log-level=pmd.net.dpaa2:<level> -- ...

Using ``pmd.net.dpaa2`` as the log matching criterion, all PMD logs
lower than the given ``level`` can be enabled.
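For example, an illustrative invocation enabling debug logs for both the
FSLMC bus and the DPAA2 PMD:

.. code-block:: console

   ./dpdk-testpmd -c 0x3 -n 1 --log-level=bus.fslmc:debug --log-level=pmd.net.dpaa2:debug -- -i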
Allowing & Blocking
-------------------

To block a DPAA2 device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "fslmc:dpni.x" -- ...

where ``x`` is the device object ID as configured in the resource container.
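For example, to hide ``dpni.1`` from an application (an illustrative
invocation):

.. code-block:: console

   ./dpdk-testpmd -c 0x3 -n 1 -b fslmc:dpni.1 -- -i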
Running secondary debug app without blocklist
---------------------------------------------

DPAA2 hardware imposes limits on the number of some hardware access resources,
such as Management Control Ports and hardware portals. This causes issues with
their shared usage in multi-process applications, which can be overcome by
using an allowlist/blocklist in the primary and secondary applications.

In order to ease the usage of standard debugging apps like dpdk-procinfo, the
DPAA2 driver reserves an extra Management Control Port and hardware portal
which can be used by a debug application to debug any existing application
without blocking these devices in the primary process.
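For example, with a primary application already running, a debug tool such as
``dpdk-procinfo`` can be started as a secondary process without any blocklist
(an illustrative session):

.. code-block:: console

   # terminal 1: primary application
   ./dpdk-testpmd -c 0x3 -n 1 -- -i

   # terminal 2: secondary debug application, no blocklist needed
   ./dpdk-procinfo -- -m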
Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA2 drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA2 SoCs``.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA2 SoC family supports a maximum frame size of 10240 bytes (jumbo
frames). The value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.
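As an illustration, even when testpmd is started with a lower maximum packet
length, larger frames may still arrive, so receive buffers must be able to
absorb them (an illustrative invocation using generic testpmd options):

.. code-block:: console

   # a max-pkt-len below 10240 does not stop larger frames from reaching the host
   ./dpdk-testpmd -c 0x3 -n 1 -- -i --max-pkt-len=1518 --enable-scatter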
Other Limitations
~~~~~~~~~~~~~~~~~

- RSS hash key cannot be modified.
- RSS RETA cannot be configured.
.. _dptmapi:

Traffic Management API
----------------------

The DPAA2 PMD supports the generic DPDK Traffic Management API which allows
configuring the following features:
1. Hierarchical scheduling
2. Traffic shaping
Internally, TM is represented by a hierarchy (tree) of nodes.
A node which has a parent is called a leaf, whereas a node without
a parent is called a non-leaf (root).
Nodes hold the following types of settings:

- for egress scheduler configuration: weight
- for egress rate limiter: private shaper
The hierarchy is always constructed from the top, i.e. first a root node is
added, then some number of leaf nodes. The number of leaf nodes cannot
exceed the number of configured TX queues.

After the hierarchy is complete, it can be committed.
For an additional description please refer to DPDK :doc:`Traffic Management API <../prog_guide/traffic_management>`.
Supported Features
~~~~~~~~~~~~~~~~~~

The following capabilities are supported:

- Level0 (root node) and Level1 are supported.
- 1 private shaper at root node (port level) is supported.
- 8 TX queues per port are supported (1 channel per port).
- Both SP and WFQ scheduling mechanisms are supported on all 8 queues.
- Congestion notification is supported. This means that if there is
  congestion on the network, the DPDK driver will not enqueue any packet
  (no taildrop or WRED).

Node and level capabilities can also be checked using testpmd commands,
as shown below.
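These queries are illustrated below (node ID 8 matches the non-leaf node
created in the usage example that follows):

.. code-block:: console

   testpmd> show port tm cap 0
   testpmd> show port tm level cap 0 0
   testpmd> show port tm node cap 0 8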
Usage example
~~~~~~~~~~~~~

For a detailed usage description please refer to the "Traffic Management" section in DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.
1. Run testpmd as follows:

   .. code-block:: console

      ./dpdk-testpmd -c 0xf -n 1 -- -i --portmask 0x3 --nb-cores=1 --txq=4 --rxq=4
2. Stop all ports:

   .. code-block:: console

      testpmd> port stop all
3. Add shaper profile:

   One port level shaper and strict priority on all 4 queues of port 0:

   .. code-block:: console

      add port tm node shaper profile 0 1 104857600 64 100 0 0
      add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0
      add port tm leaf node 0 0 8 0 1 1 -1 0 0 0 0
      add port tm leaf node 0 1 8 1 1 1 -1 0 0 0 0
      add port tm leaf node 0 2 8 2 1 1 -1 0 0 0 0
      add port tm leaf node 0 3 8 3 1 1 -1 0 0 0 0
      port tm hierarchy commit 0 no
   Or, one port level shaper and WFQ on all 4 queues of port 0:

   .. code-block:: console

      add port tm node shaper profile 0 1 104857600 64 100 0 0
      add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0
      add port tm leaf node 0 0 8 0 200 1 -1 0 0 0 0
      add port tm leaf node 0 1 8 0 300 1 -1 0 0 0 0
      add port tm leaf node 0 2 8 0 400 1 -1 0 0 0 0
      add port tm leaf node 0 3 8 0 500 1 -1 0 0 0 0
      port tm hierarchy commit 0 no
4. Create flows as per the source IP addresses:

   .. code-block:: console

      flow create 1 group 0 priority 1 ingress pattern ipv4 src is \
           10.10.10.1 / end actions queue index 0 / end
      flow create 1 group 0 priority 2 ingress pattern ipv4 src is \
           10.10.10.2 / end actions queue index 1 / end
      flow create 1 group 0 priority 3 ingress pattern ipv4 src is \
           10.10.10.3 / end actions queue index 2 / end
      flow create 1 group 0 priority 4 ingress pattern ipv4 src is \
           10.10.10.4 / end actions queue index 3 / end
5. Start all ports:

   .. code-block:: console

      testpmd> port start all

6. Enable forwarding:

   .. code-block:: console

      testpmd> start

7. Inject traffic on port 1 as per the configured flows; you will see shaped
   and scheduled forwarded traffic on port 0.