.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(C) 2021 Marvell.

CNXK Poll Mode driver
=====================
The CNXK ETHDEV PMD (**librte_net_cnxk**) provides poll mode ethdev driver
support for the inbuilt network device found in the **Marvell OCTEON CN9K/CN10K**
SoC family, as well as for their virtual functions (VF) in SR-IOV context.

More information can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
Features
--------

Features of the CNXK Ethdev PMD are:

- Packet type information
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Inner and Outer Checksum offload
- Link state information
- Scatter-Gather IO support
- Vector Poll mode driver
Prerequisites
-------------

See :doc:`../platform/cnxk` for setup information.
Driver compilation and testing
------------------------------
Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
Follow the instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd.

Example output:
.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -c 0xc -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
   EAL: Detected 4 lcore(s)
   EAL: Detected 1 NUMA nodes
   EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
   EAL: Selected IOVA mode 'VA'
   EAL: No available hugepages reported in hugepages-16777216kB
   EAL: No available hugepages reported in hugepages-2048kB
   EAL: Probing VFIO support...
   EAL: VFIO support initialized
   EAL: using IOMMU type 1 (Type 1)
   [ 2003.202721] vfio-pci 0002:02:00.0: vfio_cap_init: hiding cap 0x14@0x98
   EAL: Probe PCI driver: net_cn10k (177d:a063) device: 0002:02:00.0 (socket 0)
   EAL: No legacy callbacks, legacy socket not created
   testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
   testpmd: preferred mempool ops selected: cn10k_mempool_ops
   Configuring Port 0 (socket 0)
   PMD: Port 0: Link Up - speed 25000 Mbps - full-duplex

   Port 0: link state change event
   Port 0: 96:D4:99:72:A5:BF
   Checking link statuses...
   Done
   No commandline core given, start packet forwarding
   io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
   Logical Core 3 (socket 0) forwards packets on 1 streams:
     RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

     io packet forwarding packets/burst=32
     nb forwarding cores=1 - nb forwarding ports=1
     port 0: RX queue number: 1 Tx queue number: 1
       Rx offloads=0x0 Tx offloads=0x10000
       RX desc=4096 - RX free threshold=0
       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
       TX desc=512 - TX free threshold=0
       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
       TX offloads=0x0 - TX RS bit threshold=0
Runtime Config Options
----------------------
- ``Rx&Tx scalar mode enable`` (default ``0``)

   The PMD supports both scalar and vector modes; the mode may be selected at
   runtime using the ``scalar_enable`` ``devargs`` parameter.
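
   For example, to force the scalar mode (the PCI address below is
   illustrative; replace it with the actual device address)::

      -a 0002:02:00.0,scalar_enable=1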
- ``RSS reta size`` (default ``64``)

   The RSS redirection table size may be configured during runtime using the
   ``reta_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,reta_size=256

   With the above configuration, a RETA table of size 256 is populated.
- ``Flow priority levels`` (default ``3``)

   RTE flow priority levels can be configured during runtime using the
   ``flow_max_priority`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_max_priority=10

   With the above configuration, the number of priority levels is set to 10
   (levels 0-9). The maximum number of priority levels supported is 32.
- ``Reserve Flow entries`` (default ``8``)

   RTE flow entries can be pre-allocated, and the pre-allocation size can be
   selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_prealloc_size=4

   With the above configuration, the pre-allocation size is set to 4. The
   maximum pre-allocation size supported is 32.
- ``Max SQB buffer count`` (default ``512``)

   The send queue descriptor buffer count may be limited during runtime using
   the ``max_sqb_count`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,max_sqb_count=64

   With the above configuration, each send queue's descriptor buffer count is
   limited to a maximum of 64 buffers.
- ``Switch header enable`` (default ``none``)

   A port can be configured to a specific switch header type by using the
   ``switch_header`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,switch_header="higig2"

   With the above configuration, higig2 is enabled on that port and the
   traffic on this port must be higig2 traffic only. Supported switch header
   types are "higig2", "dsa", "chlen90b" and "chlen24b".
- ``RSS tag as XOR`` (default ``0``)

   The HW gives two options to configure the RSS adder, i.e.

   * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``

   * ``rss_adder<7:0> = flow_tag<7:0>``

   The latter aligns with standard NIC behavior, while the former is a legacy
   RSS adder scheme used in OCTEON TX2 products.

   By default, the driver runs in the latter mode.
   Set this flag to 1 to select the legacy (XOR) mode.

   For example, to select the legacy mode (RSS tag adder as XOR)::

      -a 0002:02:00.0,tag_as_xor=1
.. note::

   The above devargs parameters are configurable per device. If the application
   needs to configure all the ethdev ports, the parameters must be passed to
   every PCIe device.
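
For instance, to apply the same RETA table size on two ports, the devargs
parameter would be repeated for each device (the PCI addresses below are
illustrative)::

   -a 0002:02:00.0,reta_size=256 -a 0002:03:00.0,reta_size=256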
Limitations
-----------

``mempool_cnxk`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OCTEON CN9K/CN10K SoC family NIC has an inbuilt HW-assisted external
mempool manager. The ``net_cnxk`` PMD only works with the ``mempool_cnxk``
mempool handler, as it is the most performance-effective way of packet
allocation and Tx buffer recycling on the OCTEON CN9K/CN10K SoC platform.
CRC stripping
~~~~~~~~~~~~~

The OCTEON CN9K/CN10K SoC family NICs strip the CRC for every packet being
received by the host interface irrespective of the offload configuration.
Debugging Options
-----------------

.. _table_cnxk_ethdev_debug_options:

.. table:: cnxk ethdev debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | NIX        | --log-level='pmd\.net.cnxk,8'                         |
   +---+------------+-------------------------------------------------------+
   | 2 | NPC        | --log-level='pmd\.net.cnxk\.flow,8'                   |
   +---+------------+-------------------------------------------------------+
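
For example, to enable verbose NIX driver logs while running testpmd, the EAL
log option from the table above can be added to the command line (the device
address and core mask are illustrative)::

   ./<build_dir>/app/dpdk-testpmd -c 0xc -a 0002:02:00.0 --log-level='pmd\.net.cnxk,8' -- --portmask=0x1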