.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(C) 2021 Marvell.

Marvell cnxk platform guide
===========================

This document gives an overview of the **Marvell OCTEON CN9K and CN10K** RVU H/W block,
packet flow and the procedure to build DPDK on the OCTEON cnxk platform.

More information about the CN9K and CN10K SoCs can be found at the `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.

Supported OCTEON cnxk SoCs
--------------------------

- CN9K
- CN10K

Resource Virtualization Unit architecture
-----------------------------------------

The :numref:`figure_cnxk_resource_virtualization` diagram depicts the
RVU architecture and a resource provisioning example.

.. _figure_cnxk_resource_virtualization:

.. figure:: img/cnxk_resource_virtualization.*

   cnxk Resource virtualization architecture and provisioning example

Resource Virtualization Unit (RVU) on Marvell's OCTEON CN9K/CN10K SoC maps HW
resources belonging to the network, crypto and other functional blocks onto
PCI-compatible physical and virtual functions.

Each functional block has multiple local functions (LFs) for
provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
physical functions (PFs) and virtual functions (VFs).

The :numref:`table_cnxk_rvu_dpdk_mapping` table shows the various local
functions (LFs) provided by the RVU and their functional mapping to the
DPDK subsystems.

.. _table_cnxk_rvu_dpdk_mapping:

.. table:: RVU managed functional blocks and their mapping to DPDK subsystems

   +---+-----+--------------------------------------------------------------+
   | # | LF  | DPDK subsystem mapping                                       |
   +===+=====+==============================================================+
   | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
   +---+-----+--------------------------------------------------------------+
   | 2 | NPA | rte_mempool                                                  |
   +---+-----+--------------------------------------------------------------+
   | 3 | NPC | rte_flow                                                     |
   +---+-----+--------------------------------------------------------------+
   | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter                      |
   +---+-----+--------------------------------------------------------------+
   | 5 | SSO | rte_eventdev                                                 |
   +---+-----+--------------------------------------------------------------+
   | 6 | TIM | rte_event_timer_adapter                                      |
   +---+-----+--------------------------------------------------------------+
   | 7 | LBK | rte_ethdev                                                   |
   +---+-----+--------------------------------------------------------------+
   | 8 | DPI | rte_rawdev                                                   |
   +---+-----+--------------------------------------------------------------+
   | 9 | SDP | rte_ethdev                                                   |
   +---+-----+--------------------------------------------------------------+
   | 10| REE | rte_regexdev                                                 |
   +---+-----+--------------------------------------------------------------+

PF0 is called the administrative / admin function (AF) and has exclusive
privileges to provision the RVU functional blocks' LFs to each of the PFs/VFs.

PFs/VFs communicate with the AF via a shared memory region (mailbox). Upon receiving
requests from a PF/VF, the AF does resource provisioning and other HW configuration.

The AF is always attached to the host, but PFs/VFs may be used by the host kernel itself,
or attached to VMs or to userspace applications like DPDK, etc. So, the AF has to
handle provisioning/configuration requests sent by any device from any domain.

The AF driver does not receive or process any data.
It is only a configuration driver used in the control path.

The :numref:`figure_cnxk_resource_virtualization` diagram also shows a
resource provisioning example where,

1. PFx and PFx-VF0 are bound to the Linux netdev driver.
2. PFx-VF1 ethdev driver is bound to the first DPDK application.
3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver and PFm-VF0
   cryptodev driver are bound to the second DPDK application.

LBK HW Access
-------------

Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back to NIX-TX.
The loopback block has N channels and contains data buffering that is shared across
all channels. The LBK HW Unit is abstracted using the ethdev subsystem, where PF0's
VFs are exposed as ethdev devices and odd-even pairs of VFs are tied together,
that is, packets sent on an odd VF end up received on the even VF and vice versa.
This enables a HW accelerated means of communication between two domains,
where the even VF is bound to the first domain and the odd VF to the second domain.
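
The odd-even pairing above can be sketched as a tiny helper (an illustrative
sketch, assuming 0-based VF indices pairing as (0,1), (2,3), ...; the function
name is hypothetical and not part of any SDK):

.. code-block:: shell

   # lbk_peer_vf: print the LBK VF index paired with the given VF index.
   # Pair members differ only in the least-significant bit, so XOR with 1
   # flips between the even and odd member of a pair.
   lbk_peer_vf() {
       echo $(( $1 ^ 1 ))
   }

   lbk_peer_vf 0   # peer of VF0 is VF1
   lbk_peer_vf 3   # peer of VF3 is VF2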
Typical application usage models are,

#. Communication between the Linux kernel and a DPDK application.
#. Exception path to the Linux kernel from a DPDK application as a SW ``KNI`` replacement.
#. Communication between two different DPDK applications.

SDP interface
-------------

System DPI Packet Interface unit (SDP) provides PCIe endpoint support for a remote host
to DMA packets into and out of the cnxk SoC. The SDP interface comes into play only when
the cnxk SoC is connected in PCIe endpoint mode. It can be used to send/receive
packets to/from the remote host machine using input/output queue pairs exposed to it.
The SDP interface receives input packets from the remote host via NIX-RX and sends packets
to the remote host using NIX-TX. The remote host machine needs to use the corresponding driver
(kernel/user mode) to communicate with the SDP interface on the cnxk SoC. SDP supports a
single PCIe SRIOV physical function (PF) and multiple virtual functions (VFs). Users
can bind the PF or VFs to use the SDP interface, and they will be enumerated as ethdev ports.

The primary use case for SDP is to enable the smart NIC use case. Typical usage models are,

#. Communication channel between remote host and cnxk SoC over PCIe.
#. Transfer of packets received from a network interface to the remote host over PCIe and
   vice versa.
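
As a sketch of the binding step (the PCI address below is a placeholder, not a
real enumeration; check ``dpdk-devbind.py --status`` for the actual address),
an SDP PF/VF can be bound to ``vfio-pci`` with the usual DPDK tooling so it is
enumerated as an ethdev port:

.. code-block:: console

   # Placeholder BDF; substitute the address reported on your board
   ./usertools/dpdk-devbind.py --bind vfio-pci 0002:1f:00.1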
OCTEON cnxk packet flow
-----------------------

The :numref:`figure_cnxk_packet_flow_hw_accelerators` diagram depicts
the packet flow on the cnxk SoC in conjunction with use of various HW accelerators.

.. _figure_cnxk_packet_flow_hw_accelerators:

.. figure:: img/cnxk_packet_flow_hw_accelerators.*

   cnxk packet flow in conjunction with use of HW accelerators

HW Offload Drivers
------------------

This section lists the dataplane H/W blocks available in the cnxk SoC.

#. **Mempool Driver**
   See :doc:`../mempool/cnxk` for NPA mempool driver information.

Procedure to Setup Platform
---------------------------

There are three main prerequisites for setting up DPDK on a cnxk
compatible board:

1. **RVU AF Linux kernel driver**

   The dependent kernel drivers can be obtained from
   `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.

   Alternatively, the Marvell SDK also provides the required kernel drivers.

   The Linux kernel should be configured with the following features enabled:

   .. code-block:: console

      # 64K pages enabled for better performance
      CONFIG_ARM64_64K_PAGES=y
      CONFIG_ARM64_VA_BITS_48=y
      # huge pages support enabled
      CONFIG_HUGETLB_PAGE=y
      # VFIO enabled with TYPE1 IOMMU at minimum
      CONFIG_VFIO_IOMMU_TYPE1=y
      CONFIG_VFIO_NOIOMMU=y
      CONFIG_VFIO_PCI_MMAP=y
      # ARMv8.1 LSE atomics
      CONFIG_ARM64_LSE_ATOMICS=y
      CONFIG_OCTEONTX2_MBOX=y
      CONFIG_OCTEONTX2_AF=y
      # Enable if netdev PF driver required
      CONFIG_OCTEONTX2_PF=y
      # Enable if netdev VF driver required
      CONFIG_OCTEONTX2_VF=y
      CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
      # Enable if OCTEONTX2 DMA PF driver required
      CONFIG_OCTEONTX2_DPI_PF=n
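
   The required options can be sanity-checked against a kernel config file with
   a small helper (a hypothetical sketch, not part of the SDK; the function
   name and the option list passed in the example are illustrative):

   .. code-block:: shell

      # check_kconfig: verify that each named option is built in ("=y")
      # in the given kernel config file, e.g. /boot/config-$(uname -r).
      # Prints each missing option and returns non-zero if any is absent.
      check_kconfig() {
          config_file="$1"; shift
          rc=0
          for opt in "$@"; do
              grep -q "^${opt}=y$" "$config_file" \
                  || { echo "missing: $opt"; rc=1; }
          done
          return $rc
      }

      # Example:
      # check_kconfig /boot/config-$(uname -r) \
      #     CONFIG_ARM64_64K_PAGES CONFIG_VFIO_IOMMU_TYPE1 CONFIG_OCTEONTX2_AF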
2. **ARM64 Linux Tool Chain**

   For example, the *aarch64* Linaro Toolchain, which can be obtained from
   `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.

   Alternatively, the Marvell SDK also provides a GNU GCC toolchain, which is
   optimized for the cnxk CPU.

3. **Rootfile system**

   Any *aarch64* supporting filesystem may be used. For example,
   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
   from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.

   Alternatively, the Marvell SDK provides a buildroot based root filesystem.
   The SDK includes all the above prerequisites necessary to bring up the cnxk board.

- Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK environment.
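
As part of that basic environment setup, hugepages must be reserved and
mounted before running DPDK applications (an illustrative sketch; the count
and default hugepage size depend on the kernel page-size configuration and
the application's needs):

.. code-block:: console

   # Reserve hugepages of the default size and mount hugetlbfs
   echo 24 > /proc/sys/vm/nr_hugepages
   mkdir -p /dev/huge
   mount -t hugetlbfs nodev /dev/huge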

Debugging Options
-----------------

.. _table_cnxk_common_debug_options:

.. table:: cnxk common debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | Common     | --log-level='pmd\.cnxk\.base,8'                       |
   +---+------------+-------------------------------------------------------+
   | 2 | Mailbox    | --log-level='pmd\.cnxk\.mbox,8'                       |
   +---+------------+-------------------------------------------------------+
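
For example (the device address and application below are placeholders for
illustration), the flags above are passed directly on the EAL command line:

.. code-block:: console

   ./dpdk-testpmd -a 0002:02:00.0 \
       --log-level='pmd.cnxk.base,8' --log-level='pmd.cnxk.mbox,8' -- -i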

Debugging Utilities
-------------------

The **RVU AF Linux kernel driver** provides support to dump RVU blocks
context or stats using debugfs.

Enable ``debugfs`` by:

1. Compile the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
2. Boot OCTEON CN9K/CN10K with a debugfs supported kernel.
3. Verify that ``debugfs`` is mounted by default using ``mount | grep -i debugfs``, or mount it manually:

.. code-block:: console

   # mount -t debugfs none /sys/kernel/debug

Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
CPT, SSO and RPM.

The file structure under ``/sys/kernel/debug`` is as follows

.. code-block:: console

   |   |-- cpt_engines_info
   |   |-- cpt_engines_sts
   |   |-- ndc_rx_hits_miss
   |   |-- ndc_tx_hits_miss
   |   '-- rx_miss_act_stats
   |-- sso_hwgrp_aq_thresh
   |-- sso_hwgrp_iaq_walk
   |-- sso_hwgrp_free_list_walk
   |-- sso_hwgrp_ient_walk
   '-- sso_hwgrp_taq_walk

RVU block LF allocation:

.. code-block:: console

   cat /sys/kernel/debug/cn10k/rsrc_alloc

   pcifunc    NPA    NIX    SSO GROUP    SSOWS    TIM    CPT

RPM LMAC stats:

.. code-block:: console

   cat /sys/kernel/debug/cn10k/rpm/rpm0/lmac0/stats

   =======Link Status======

   Link is UP 25000 Mbps

   =======NIX RX_STATS(rpm port level)======

   =======NIX TX_STATS(rpm port level)======

   =======rpm RX_STATS======

   Octets of received packets: 0
   Octets of received packets without error: 0
   Received packets with alignment errors: 0
   Control/PAUSE packets received: 0
   Packets received with Frame too long Errors: 0
   Packets received with in-range length Errors: 0
   Packets received with FrameCheckSequenceErrors: 0
   Packets received with VLAN header: 0
   Packets received with unicast DMAC: 0
   Packets received with multicast DMAC: 0
   Packets received with broadcast DMAC: 0
   Total frames received on interface: 0
   Packets received with an octet count < 64: 0
   Packets received with an octet count == 64: 0
   Packets received with an octet count of 65-127: 0
   Packets received with an octet count of 128-255: 0
   Packets received with an octet count of 256-511: 0
   Packets received with an octet count of 512-1023: 0
   Packets received with an octet count of 1024-1518: 0
   Packets received with an octet count of > 1518: 0
   Fragmented Packets: 0
   CBFC(class based flow control) pause frames received for class 0: 0
   CBFC pause frames received for class 1: 0
   CBFC pause frames received for class 2: 0
   CBFC pause frames received for class 3: 0
   CBFC pause frames received for class 4: 0
   CBFC pause frames received for class 5: 0
   CBFC pause frames received for class 6: 0
   CBFC pause frames received for class 7: 0
   CBFC pause frames received for class 8: 0
   CBFC pause frames received for class 9: 0
   CBFC pause frames received for class 10: 0
   CBFC pause frames received for class 11: 0
   CBFC pause frames received for class 12: 0
   CBFC pause frames received for class 13: 0
   CBFC pause frames received for class 14: 0
   CBFC pause frames received for class 15: 0
   MAC control packets received: 0

   =======rpm TX_STATS======

   Total octets sent on the interface: 0
   Total octets transmitted OK: 0
   Control/Pause frames sent: 0
   Total frames transmitted OK: 0
   Total frames sent with VLAN header: 0
   Packets sent to unicast DMAC: 0
   Packets sent to the multicast DMAC: 0
   Packets sent to a broadcast DMAC: 0
   Packets sent with an octet count == 64: 0
   Packets sent with an octet count of 65-127: 0
   Packets sent with an octet count of 128-255: 0
   Packets sent with an octet count of 256-511: 0
   Packets sent with an octet count of 512-1023: 0
   Packets sent with an octet count of 1024-1518: 0
   Packets sent with an octet count of > 1518: 0
   CBFC(class based flow control) pause frames transmitted for class 0: 0
   CBFC pause frames transmitted for class 1: 0
   CBFC pause frames transmitted for class 2: 0
   CBFC pause frames transmitted for class 3: 0
   CBFC pause frames transmitted for class 4: 0
   CBFC pause frames transmitted for class 5: 0
   CBFC pause frames transmitted for class 6: 0
   CBFC pause frames transmitted for class 7: 0
   CBFC pause frames transmitted for class 8: 0
   CBFC pause frames transmitted for class 9: 0
   CBFC pause frames transmitted for class 10: 0
   CBFC pause frames transmitted for class 11: 0
   CBFC pause frames transmitted for class 12: 0
   CBFC pause frames transmitted for class 13: 0
   CBFC pause frames transmitted for class 14: 0
   CBFC pause frames transmitted for class 15: 0
   MAC control packets sent: 0
   Total frames sent on the interface: 0

CPT stats:

.. code-block:: console

   cat /sys/kernel/debug/cn10k/cpt/cpt_pc

   CPT instruction requests                0
   CPT instruction latency                 0
   CPT NCB read requests                   0
   CPT NCB read latency                    0
   CPT read requests caused by UC fills    0
   CPT active cycles pc                    1395642
   CPT clock count pc                      5579867595493

NIX CQ context dump:

.. code-block:: console

   Usage: echo <nixlf> [cq number/all] > /sys/kernel/debug/cn10k/nix/cq_ctx
          cat /sys/kernel/debug/cn10k/nix/cq_ctx
   echo 0 0 > /sys/kernel/debug/cn10k/nix/cq_ctx
   cat /sys/kernel/debug/cn10k/nix/cq_ctx

   =====cq_ctx for nixlf:0 and qidx:0 is=====

   W2: update_time                 31043

NPA pool context dump:

.. code-block:: console

   Usage: echo <npalf> [pool number/all] > /sys/kernel/debug/cn10k/npa/pool_ctx
          cat /sys/kernel/debug/cn10k/npa/pool_ctx
   echo 0 0 > /sys/kernel/debug/cn10k/npa/pool_ctx
   cat /sys/kernel/debug/cn10k/npa/pool_ctx

   ======POOL : 0=======

   W0: Stack base          1375bff00
   W2: stack_max_pages     24315
   W2: stack_pages         24314
   W4: update_time         62993
   W6: ptr_start           1593adf00
   W7: ptr_end             180000000
   W8: thresh_qint_idx     0

NPC MCAM info:

.. code-block:: console

   cat /sys/kernel/debug/cn10k/npc/mcam_info

   RX keywidth    : 224bits
   TX keywidth    : 224bits

SSO HWS info:

.. code-block:: console

   Usage: echo [<hws>/all] > /sys/kernel/debug/cn10k/sso/hws/sso_hws_info
   echo 0 > /sys/kernel/debug/cn10k/sso/hws/sso_hws_info

   ==================================================
   SSOW HWS[0] Arbitration State      0x0
   SSOW HWS[0] Guest Machine Control  0x0
   SSOW HWS[0] SET[0] Group Mask[0]   0xffffffffffffffff
   SSOW HWS[0] SET[0] Group Mask[1]   0xffffffffffffffff
   SSOW HWS[0] SET[0] Group Mask[2]   0xffffffffffffffff
   SSOW HWS[0] SET[0] Group Mask[3]   0xffffffffffffffff
   SSOW HWS[0] SET[1] Group Mask[0]   0xffffffffffffffff
   SSOW HWS[0] SET[1] Group Mask[1]   0xffffffffffffffff
   SSOW HWS[0] SET[1] Group Mask[2]   0xffffffffffffffff
   SSOW HWS[0] SET[1] Group Mask[3]   0xffffffffffffffff
   ==================================================

Compile DPDK
------------

DPDK may be compiled either natively on an OCTEON CN9K/CN10K platform or cross-compiled on
an x86 based platform.

Native Compilation
~~~~~~~~~~~~~~~~~~

.. code-block:: console

   meson build
   ninja -C build

Cross Compilation
~~~~~~~~~~~~~~~~~

Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.

.. code-block:: console

   meson build --cross-file config/arm/arm64_cn10k_linux_gcc
   ninja -C build

By default, meson cross compilation uses the ``aarch64-linux-gnu-gcc`` toolchain.
If the Marvell toolchain is available, it can be used instead by overriding the
c, cpp, ar, strip ``binaries`` attributes to the respective Marvell
toolchain binaries in the ``config/arm/arm64_cn10k_linux_gcc`` file.
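
For illustration, the override in ``config/arm/arm64_cn10k_linux_gcc`` would
look like the fragment below (the Marvell tool names are assumptions; substitute
the actual prefix shipped with the SDK):

.. code-block:: ini

   [binaries]
   c = 'aarch64-marvell-linux-gnu-gcc'
   cpp = 'aarch64-marvell-linux-gnu-g++'
   ar = 'aarch64-marvell-linux-gnu-ar'
   strip = 'aarch64-marvell-linux-gnu-strip'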