..  BSD LICENSE
    Copyright (C) NXP. 2016.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of NXP nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
DPAA2 Poll Mode Driver
======================
The DPAA2 NIC PMD (**librte_pmd_dpaa2**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA2** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
NXP DPAA2 (Data Path Acceleration Architecture Gen2)
----------------------------------------------------

This section provides an overview of the NXP DPAA2 architecture
and how it is integrated into the DPDK.

Contents summary

- Overview of DPAA2 objects
- DPAA2 driver architecture overview

DPAA2 Overview
~~~~~~~~~~~~~~

Reference: `FSL MC BUS in Linux Kernel <https://www.kernel.org/doc/readme/drivers-staging-fsl-mc-README.txt>`_.
DPAA2 is a hardware architecture designed for high-speed network
packet processing. DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.
A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources. The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.

The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.
The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.

The diagram below shows an overview of the DPAA2 resource management
architecture:
.. code-block:: console

   +--------------------------------------+
   |                  OS                  |
   |                        DPAA2 drivers |
   |                             |        |
   +-----------------------------|--------+
                                 |
                                 | (create,discover,connect
                                 |  config,use,destroy)
                                 |
                     DPAA2       |
   +------------------------| mc portal |-+
   |                             |        |
   |   +- - - - - - - - - - - - -V- - -+  |
   |   |                               |  |
   |   |    Management Complex (MC)    |  |
   |   |                               |  |
   |   +- - - - - - - - - - - - - - - -+  |
   |                                      |
   | Hardware                  Hardware   |
   | Resources                 Objects    |
   | ---------                 -------    |
   | -frame queues             -DPRC      |
   | -buffer pools             -DPMCP     |
   | -Eth MACs/ports           -DPIO      |
   | -network interface        -DPNI      |
   | -queue portals            -DPBP      |
   | -MC portals               -DPMAC     |
   |  ...                       ...       |
   +--------------------------------------+
The MC mediates operations such as create, discover,
connect, configuration, and destroy. Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory-mapped regions in
DPIO objects.
Overview of DPAA2 Objects
~~~~~~~~~~~~~~~~~~~~~~~~~
This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described, illustrating the objects involved
in creating a network interface.
DPRC (Datapath Resource Container)

A DPRC is a container object that holds all the other
types of DPAA2 objects. In the example diagram below there
are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
in the container.
.. code-block:: console

   +---------------------------------------------------------+
   |                          DPRC                           |
   |                                                         |
   |  +-------+  +-------+  +-------+  +-------+  +-------+  |
   |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |  |
   |  +-------+  +-------+  +-------+  +---+---+  +---+---+  |
   |  | DPMCP |  | DPIO  |                                   |
   |  +-------+  +-------+                                   |
   |  | DPMCP |                                              |
   |  +-------+                                              |
   +---------------------------------------------------------+
From the point of view of an OS, a DPRC behaves similarly to a plug and
play bus, like PCI. DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).
.. code-block:: console

   DPRC.1 (bus)
        |
        +--+--------+-------+-------+-------+
           |        |       |       |       |
         DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
         DPMCP.2  DPIO.2
         DPMCP.3
Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.

A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands. It has an interrupt for status events (like
hotplug).
All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.
DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts
it. There is also a Linux user space tool called "restool"
that can be used to create/destroy containers and objects
dynamically.
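As an illustrative sketch only (the exact ``restool`` command syntax
varies with the restool and MC firmware versions), enumerating and
creating objects from Linux user space might look like:

.. code-block:: console

   # List the objects present in a container (illustrative syntax)
   restool dprc show dprc.1

   # Create a new network interface object in the container
   restool dpni create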
DPAA2 Objects for an Ethernet Network Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A typical Ethernet NIC is monolithic -- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts. DPAA2 uses a more granular approach
utilizing multiple hardware objects. Each object provides specialized
functions. Groups of these objects are used by software to provide
Ethernet network interface functionality. This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.
The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs.

.. code-block:: console

            +---+---+  +---+---+
               CPU0       CPU1
            +---+---+  +---+---+
                |          |
            +---+---+  +---+---+
            | DPIO  |  | DPIO  |
            +---+---+  +---+---+
                 \        /
                  \      /
                   \    /
                 +---+---+
                 | DPNI  |--- DPBP,DPMCP
                 +---+---+
                     |
                     |
                 +---+---+
                 | DPMAC |
                 +---+---+
                     |
                  port/PHY
Below the objects are described. For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of key resources of the object (MMIO regions
and IRQs).
DPMAC (Datapath Ethernet MAC): represents an Ethernet MAC, a
hardware device that connects to an Ethernet PHY and allows
physical transmission and reception of Ethernet frames.

- IRQs: DPNI link change
- commands: set link up/down, link config, get stats, IRQ config, enable, reset
DPNI (Datapath Network Interface): contains TX/RX queues,
network interface configuration, and RX buffer pool configuration
mechanisms. The TX/RX queues are in memory and are identified by
queue number.

- commands: port config, offload config, queue config, parse/classify config, IRQ config, enable, reset
DPIO (Datapath I/O): provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations. The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves. The DPIO provides an MMIO interface to
enqueue/dequeue packets. To enqueue something, a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU. This allows all
CPUs to simultaneously perform enqueue/dequeue operations. DPIOs are
expected to be shared by different DPAA2 drivers.
- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer pool depletion
- commands: IRQ config, enable, reset
DPBP (Datapath Buffer Pool): represents a hardware buffer
pool.

- commands: enable, reset
DPMCP (Datapath MC Portal): provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.

- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset
Object Connections
~~~~~~~~~~~~~~~~~~

Some objects have explicit relationships that must
be configured:

- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port

A DPNI must be connected to something such as a DPMAC,
another DPNI, or an L2 switch port. The DPNI connection
is made via a DPRC command.
.. code-block:: console

   +-------+  +-------+
   | DPNI  |  | DPNI  |
   +---+---+  +---+---+
       |          |
       |          |
   +---+---+  +---+---+
   | DPMAC |  | DPMAC |
   +---+---+  +---+---+
       |          |
    port/PHY   port/PHY
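The connection itself is a single DPRC operation. With the Linux
``restool`` utility it might be issued as follows (illustrative
syntax; option names can differ between restool versions):

.. code-block:: console

   # Connect DPNI.1 to DPMAC.1 (illustrative invocation)
   restool dprc connect dprc.1 --endpoint1=dpni.1 --endpoint2=dpmac.1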
A network interface requires a 'buffer pool' (DPBP object) which provides
a list of pointers to memory where received Ethernet data is to be copied.
The Ethernet driver configures the DPBPs associated with the network
interface.
Interrupts
~~~~~~~~~~

All interrupts generated by DPAA2 objects are message
interrupts. At the hardware level, message interrupts
generated by devices will normally have 3 components:
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.
In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.
DPAA2 DPDK - Poll Mode Driver Overview
--------------------------------------

This section provides an overview of the drivers for
DPAA2: 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).
As described previously, a DPRC is a container that holds the other
types of DPAA2 objects. It is functionally similar to a plug-and-play
bus, like PCI.

Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the DPAA2 drivers involved in a networking
scenario and the objects bound to each driver.
.. code-block:: console

              +------------+       +------------+
              |  Ethernet  |.......|  Mempool   |
       .......|   (DPNI)   |       |   (DPBP)   |
       .      +-----+------+       +-----+------+
       .            |                    |
       .            |                    |
       .      +-----+------+             .
       .......| DPIO driver|..............
       .      |   (DPIO)   |
       .      +-----+------+
       .            |
   +---+------------+------------------------+
   |             VFIO fslmc-bus              |
   +-------------------+---------------------+
                       |
   ==================== | === HARDWARE ======
                       |
        DPNI    DPIO    DPBP    DPMCP    ...
              Management Complex (MC)
   ==========================================
A brief description of each driver is provided below.

DPAA2 bus driver
~~~~~~~~~~~~~~~~
The DPAA2 bus driver is an ``rte_bus`` driver which scans the fsl-mc bus.
Key functions include:

- Reading the container and setting up the VFIO group
- Scanning and parsing the various MC objects and adding them to
  their respective device lists.

Additionally, it provides the object driver for generic MC objects.

DPIO driver
~~~~~~~~~~~
The DPIO driver is bound to DPIO objects and provides services that allow
other drivers, such as the Ethernet driver, to enqueue and dequeue data for
their respective objects.
Key services include:

- Data availability notifications
- Hardware queuing operations (enqueue and dequeue of data)
- Hardware buffer pool management
To transmit a packet, the Ethernet driver puts data on a queue and
invokes a DPIO API. For receive, the Ethernet driver registers
a data availability notification callback. To dequeue a packet,
a DPIO API is used.

There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.
The DPIO driver operates on behalf of all active DPAA2 drivers --
Ethernet, crypto, compression, etc.
DPBP based Mempool driver
~~~~~~~~~~~~~~~~~~~~~~~~~

The DPBP driver is bound to DPBP objects and provides services to
create a hardware-offloaded packet buffer mempool.

Ethernet driver
~~~~~~~~~~~~~~~
The Ethernet driver is bound to a DPNI object and implements the
interfaces needed to connect the DPAA2 network interface to
the DPDK framework.

Each DPNI corresponds to a DPDK network interface.

Features
~~~~~~~~
Features of the DPAA2 PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
Prerequisites
-------------

There are three main prerequisites for executing the DPAA2 PMD on a DPAA2
compatible board:
1. **ARM 64 Tool Chain**

   For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
2. **Linux Kernel**

   It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
3. **Root file system**

   Any *aarch64* supporting filesystem can be used. For example,
   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
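For example, the Ubuntu base image referenced above can be fetched and
unpacked for use as a target root filesystem (the destination
directory below is only an example):

.. code-block:: console

   wget http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz
   mkdir rootfs
   sudo tar -xpf ubuntu-base-16.04.1-base-arm64.tar.gz -C rootfs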
As an alternative method, the DPAA2 PMD can also be executed using images
provided as part of the SDK from NXP. The SDK includes all the above
prerequisites necessary to bring up a DPAA2 board.

The following dependencies are not part of DPDK and must be installed
separately:

- **NXP Linux SDK**
  The NXP Linux software development kit (SDK) includes support for the
  family of QorIQ® ARM-architecture-based system on chip (SoC) processors
  and corresponding boards.

  It includes the Linux board support packages (BSPs) for NXP SoCs,
  a fully operational tool chain, and kernel and board-specific modules.

  The SDK and related information can be obtained from: `NXP QorIQ SDK <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
- **DPDK Helper Scripts**

  DPAA2-based resources can be configured easily with the help of ready-made
  scripts provided in the DPDK helper repository.

  `DPDK Helper Scripts <https://github.com/qoriq-open-source/dpdk-helper>`_.
Currently supported by DPDK:

- MC Firmware version **10.0.0** and higher.
- Supported architectures: **arm64 LE**.
- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
Some of the fslmc bus code (mc flib - object library) routines are
dual licensed (BSD & GPLv2).
Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_FSLMC_BUS`` (default ``n``)

  Toggle compilation of the ``librte_bus_fslmc`` driver. By default it is
  enabled only in the defconfig_arm64-dpaa2-* configs.

- ``CONFIG_RTE_LIBRTE_DPAA2_PMD`` (default ``n``)

  Toggle compilation of the ``librte_pmd_dpaa2`` driver. By default it is
  enabled only in the defconfig_arm64-dpaa2-* configs.
- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_DPAA2_USE_PHYS_IOVA`` (default ``y``)

  Toggle to use physical addresses vs virtual addresses for hardware accelerators.

- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT`` (default ``n``)

  Toggle display of initialization related messages.

- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX`` (default ``n``)

  Toggle display of receive fast path run-time messages.

- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX`` (default ``n``)

  Toggle display of transmit fast path run-time messages.

- ``CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE`` (default ``n``)

  Toggle display of transmit fast path buffer free run-time messages.
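Taken together, a DPAA2 build with all debug messages enabled would carry a
config fragment along these lines (a sketch only; the exact set of lines
depends on the chosen defconfig):

.. code-block:: console

   CONFIG_RTE_LIBRTE_FSLMC_BUS=y
   CONFIG_RTE_LIBRTE_DPAA2_PMD=y
   CONFIG_RTE_LIBRTE_DPAA2_USE_PHYS_IOVA=y
   CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=y
   CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT=y
   CONFIG_RTE_LIBRTE_DPAA2_DEBUG_RX=y
   CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX=y
   CONFIG_RTE_LIBRTE_DPAA2_DEBUG_TX_FREE=y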
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

Follow the instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd. For example:
.. code-block:: console

   ./arm64-dpaa2-linuxapp-gcc/testpmd -c 0xff -n 1 \
     -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

   .....
   EAL: Registered [pci] bus.
   EAL: Registered [fslmc] bus.
   EAL: Detected 8 lcore(s)
   EAL: Probing VFIO support...
   EAL: VFIO support initialized
   .....
   PMD: DPAA2: Processing Container = dprc.2
   EAL: fslmc: DPRC contains = 51 devices
   EAL: fslmc: Bus scan completed
   .....
   Configuring Port 0 (socket 0)
   Port 0: 00:00:00:00:00:01
   Configuring Port 1 (socket 0)
   Port 1: 00:00:00:00:00:02
   .....
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Port 1 Link Up - speed 10000 Mbps - full-duplex
   .....
   testpmd>
Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA2 drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA2 SoCs``.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family supports a maximum packet length of 10240 bytes
(jumbo frames). The value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.