.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(C) 2019 Marvell International Ltd.

OCTEON TX2 Poll Mode driver
===========================

The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev
driver support for the inbuilt network device found in the **Marvell OCTEON TX2**
SoC family as well as for their virtual functions (VF) in SR-IOV context.

More information can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
Features
--------

Features of the OCTEON TX2 Ethdev PMD are:

- Packet type information
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- Multicast MAC filtering
- Inner and Outer Checksum offload
- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
- Scatter-Gather IO support
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
- Support Rx interrupt
- Inline IPsec processing support
- :ref:`Traffic Management API <otx2_tmapi>`
Prerequisites
-------------

See :doc:`../platform/octeontx2` for setup information.
Compile time Config Options
---------------------------

The following options may be modified in the ``config`` file.

- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_octeontx2`` driver.
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

To compile the OCTEON TX2 PMD for Linux arm64 gcc,
use ``arm64-octeontx2-linux-gcc`` as target.

Follow the instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd. Example output:
.. code-block:: console

   ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
   EAL: Detected 24 lcore(s)
   EAL: Detected 1 NUMA nodes
   EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
   EAL: No available hugepages reported in hugepages-2048kB
   EAL: Probing VFIO support...
   EAL: VFIO support initialized
   EAL: PCI device 0002:02:00.0 on NUMA socket 0
   EAL: probe driver: 177d:a063 net_octeontx2
   EAL: using IOMMU type 1 (Type 1)
   testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
   testpmd: preferred mempool ops selected: octeontx2_npa
   Configuring Port 0 (socket 0)
   PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex

   Port 0: link state change event
   Port 0: 36:10:66:88:7A:57
   Checking link statuses...
   No commandline core given, start packet forwarding
   io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
   Logical Core 9 (socket 0) forwards packets on 1 streams:
     RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

     io packet forwarding packets/burst=32
     nb forwarding cores=1 - nb forwarding ports=1
     port 0: RX queue number: 1 Tx queue number: 1
       Rx offloads=0x0 Tx offloads=0x10000
         RX desc=512 - RX free threshold=0
         RX threshold registers: pthresh=0 hthresh=0 wthresh=0
         TX desc=512 - TX free threshold=0
         TX threshold registers: pthresh=0 hthresh=0 wthresh=0
         TX offloads=0x10000 - TX RS bit threshold=0
Runtime Config Options
----------------------

- ``Rx&Tx scalar mode enable`` (default ``0``)

  The ethdev supports both scalar and vector mode; the mode may be selected at
  runtime using the ``scalar_enable`` ``devargs`` parameter.
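  For example, following the devargs pattern used by the other options in this
  section (the PCI address is illustrative), scalar mode can be selected with::

    -w 0002:02:00.0,scalar_enable=1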
- ``RSS reta size`` (default ``64``)

  The RSS redirection table size may be configured at runtime using the
  ``reta_size`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,reta_size=256

  With the above configuration, a RETA table of size 256 is populated.
- ``Flow priority levels`` (default ``3``)

  The number of RTE flow priority levels can be configured at runtime using
  the ``flow_max_priority`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,flow_max_priority=10

  With the above configuration, the number of priority levels is set to 10
  (priorities 0-9). The maximum number of priority levels supported is 32.
- ``Reserve Flow entries`` (default ``8``)

  RTE flow entries can be preallocated, and the preallocation size can be
  selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,flow_prealloc_size=4

  With the above configuration, the preallocation size is set to 4. The
  maximum preallocation size supported is 32.
- ``Max SQB buffer count`` (default ``512``)

  The send queue descriptor buffer count may be limited at runtime using the
  ``max_sqb_count`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,max_sqb_count=64

  With the above configuration, each send queue's descriptor buffer count is
  limited to a maximum of 64 buffers.
- ``Switch header enable`` (default ``none``)

  A port can be configured to a specific switch header type by using the
  ``switch_header`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,switch_header="higig2"

  With the above configuration, higig2 is enabled on that port and the
  traffic on this port must be higig2 traffic only. Supported switch header
  types are "higig2" and "dsa".
- ``RSS tag as XOR`` (default ``0``)

  From the C0 HW revision onward, the HW gives an option to configure the RSS
  adder as

  * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``

  * ``rss_adder<7:0> = flow_tag<7:0>``

  The latter aligns with standard NIC behavior, whereas the former is the
  legacy RSS adder scheme used in OCTEON TX2 products.

  By default, the driver runs in the latter mode from the C0 HW revision
  onward. Set this flag to 1 to select the legacy mode.

  For example, to select the legacy mode (RSS tag adder as XOR)::

    -w 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)

  The maximum SPI supported for inbound inline IPsec processing can be
  specified with the ``ipsec_in_max_spi`` ``devargs`` parameter.

  For example::

    -w 0002:02:00.0,ipsec_in_max_spi=128

  With the above configuration, the application can enable inline IPsec
  processing on 128 SAs (SPI 0-127).
.. note::

   The above devarg parameters are configurable per device. The user must pass
   the parameters to all the PCIe devices if the application requires them on
   all the ethdev ports.
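Multiple runtime options for the same device are passed as a comma-separated
list within a single ``-w`` argument. For example (the parameter values here
are illustrative)::

   -w 0002:02:00.0,reta_size=256,max_sqb_count=64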
.. _otx2_tmapi:

Traffic Management API
----------------------

The OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which
allows the following features to be configured:

#. Hierarchical scheduling
#. Single rate - Two color, Two rate - Three color shaping

Both DWRR and Static Priority (SP) hierarchical scheduling are supported.

Every parent can have at most 10 SP children and an unlimited number of DWRR
children.

Both PF and VF support the traffic management API, with the PF supporting 6
levels and the VF supporting 5 levels of topology.
Limitations
-----------

``mempool_octeontx2`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool
manager. The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2``
mempool handler, as it is the most effective way, performance-wise, for packet
allocation and Tx buffer recycling on the OCTEON TX2 SoC platform.
CRC stripping
~~~~~~~~~~~~~

The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
the host interface irrespective of the offload configuration.
Multicast MAC filtering
~~~~~~~~~~~~~~~~~~~~~~~

The ``net_octeontx2`` PMD supports the multicast MAC filtering feature only on
physical function devices.
SDP interface support
~~~~~~~~~~~~~~~~~~~~~

OCTEON TX2 SDP interface support is limited to the PF device; there is no VF
support.
Inline Protocol Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``net_octeontx2`` PMD does not support the following features for packets
that are to be inline protocol processed.
Debugging Options
-----------------

.. _table_octeontx2_ethdev_debug_options:

.. table:: OCTEON TX2 ethdev debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | NIX        | --log-level='pmd\.net.octeontx2,8'                    |
   +---+------------+-------------------------------------------------------+
   | 2 | NPC        | --log-level='pmd\.net.octeontx2\.flow,8'              |
   +---+------------+-------------------------------------------------------+
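As a usage sketch, the NIX debug messages can be enabled when launching
testpmd (the command-line values shown are illustrative):

.. code-block:: console

   ./build/app/testpmd --log-level='pmd\.net.octeontx2,8' -c 0x300 -w 0002:02:00.0 -- --rxq=1 --txq=1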
RTE Flow Support
----------------

The OCTEON TX2 SoC family NIC has support for the following patterns and
actions.
.. _table_octeontx2_supported_flow_item_types:

.. table:: Item types

   +----+--------------------------------+
   | #  | Flow Item Type                 |
   +====+================================+
   | 1  | RTE_FLOW_ITEM_TYPE_ETH         |
   +----+--------------------------------+
   | 2  | RTE_FLOW_ITEM_TYPE_VLAN        |
   +----+--------------------------------+
   | 3  | RTE_FLOW_ITEM_TYPE_E_TAG       |
   +----+--------------------------------+
   | 4  | RTE_FLOW_ITEM_TYPE_IPV4        |
   +----+--------------------------------+
   | 5  | RTE_FLOW_ITEM_TYPE_IPV6        |
   +----+--------------------------------+
   | 6  | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
   +----+--------------------------------+
   | 7  | RTE_FLOW_ITEM_TYPE_MPLS        |
   +----+--------------------------------+
   | 8  | RTE_FLOW_ITEM_TYPE_ICMP        |
   +----+--------------------------------+
   | 9  | RTE_FLOW_ITEM_TYPE_UDP         |
   +----+--------------------------------+
   | 10 | RTE_FLOW_ITEM_TYPE_TCP         |
   +----+--------------------------------+
   | 11 | RTE_FLOW_ITEM_TYPE_SCTP        |
   +----+--------------------------------+
   | 12 | RTE_FLOW_ITEM_TYPE_ESP         |
   +----+--------------------------------+
   | 13 | RTE_FLOW_ITEM_TYPE_GRE         |
   +----+--------------------------------+
   | 14 | RTE_FLOW_ITEM_TYPE_NVGRE       |
   +----+--------------------------------+
   | 15 | RTE_FLOW_ITEM_TYPE_VXLAN       |
   +----+--------------------------------+
   | 16 | RTE_FLOW_ITEM_TYPE_GTPC        |
   +----+--------------------------------+
   | 17 | RTE_FLOW_ITEM_TYPE_GTPU        |
   +----+--------------------------------+
   | 18 | RTE_FLOW_ITEM_TYPE_GENEVE      |
   +----+--------------------------------+
   | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE   |
   +----+--------------------------------+
   | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT    |
   +----+--------------------------------+
   | 21 | RTE_FLOW_ITEM_TYPE_VOID        |
   +----+--------------------------------+
   | 22 | RTE_FLOW_ITEM_TYPE_ANY         |
   +----+--------------------------------+
   | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY     |
   +----+--------------------------------+
   | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2      |
   +----+--------------------------------+

.. note::

   ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when the checksum and routing
   bits in the GRE header are equal to 0.
.. _table_octeontx2_supported_ingress_action_types:

.. table:: Ingress action types

   +----+--------------------------------+
   | #  | Action Type                    |
   +====+================================+
   | 1  | RTE_FLOW_ACTION_TYPE_VOID      |
   +----+--------------------------------+
   | 2  | RTE_FLOW_ACTION_TYPE_MARK      |
   +----+--------------------------------+
   | 3  | RTE_FLOW_ACTION_TYPE_FLAG      |
   +----+--------------------------------+
   | 4  | RTE_FLOW_ACTION_TYPE_COUNT     |
   +----+--------------------------------+
   | 5  | RTE_FLOW_ACTION_TYPE_DROP      |
   +----+--------------------------------+
   | 6  | RTE_FLOW_ACTION_TYPE_QUEUE     |
   +----+--------------------------------+
   | 7  | RTE_FLOW_ACTION_TYPE_RSS       |
   +----+--------------------------------+
   | 8  | RTE_FLOW_ACTION_TYPE_SECURITY  |
   +----+--------------------------------+
   | 9  | RTE_FLOW_ACTION_TYPE_PF        |
   +----+--------------------------------+
   | 10 | RTE_FLOW_ACTION_TYPE_VF        |
   +----+--------------------------------+
.. _table_octeontx2_supported_egress_action_types:

.. table:: Egress action types

   +----+--------------------------------+
   | #  | Action Type                    |
   +====+================================+
   | 1  | RTE_FLOW_ACTION_TYPE_COUNT     |
   +----+--------------------------------+
   | 2  | RTE_FLOW_ACTION_TYPE_DROP      |
   +----+--------------------------------+
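As an illustration, a flow combining item and action types from the tables
above can be created from the testpmd prompt (the port and queue numbers are
illustrative):

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions queue index 2 / end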