.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(C) 2019 Marvell International Ltd.

OCTEON TX2 Poll Mode driver
===========================

The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
driver support for the inbuilt network device found in the **Marvell OCTEON
TX2** SoC family, as well as for its virtual functions (VF) in an SR-IOV
context.

More information can be found at the `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.

Features
--------

Features of the OCTEON TX2 Ethdev PMD are:

- Packet type information
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- Multicast MAC filtering
- Inner and Outer Checksum offload
- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
- Scatter-Gather IO support
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
- HW offloaded ``ethdev Rx queue`` to ``eventdev event queue`` packet injection
- Rx interrupt support
- Inline IPsec processing support
- :ref:`Traffic Management API <otx2_tmapi>`

Prerequisites
-------------

See :doc:`../platform/octeontx2` for setup information.

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

To run testpmd, follow the instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.

Example output:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
   EAL: Detected 24 lcore(s)
   EAL: Detected 1 NUMA nodes
   EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
   EAL: No available hugepages reported in hugepages-2048kB
   EAL: Probing VFIO support...
   EAL: VFIO support initialized
   EAL: PCI device 0002:02:00.0 on NUMA socket 0
   EAL:   probe driver: 177d:a063 net_octeontx2
   EAL:   using IOMMU type 1 (Type 1)
   testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
   testpmd: preferred mempool ops selected: octeontx2_npa
   Configuring Port 0 (socket 0)
   PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex

   Port 0: link state change event
   Port 0: 36:10:66:88:7A:57
   Checking link statuses...
   No commandline core given, start packet forwarding
   io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
   Logical Core 9 (socket 0) forwards packets on 1 streams:
     RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

     io packet forwarding packets/burst=32
     nb forwarding cores=1 - nb forwarding ports=1
     port 0: RX queue number: 1 Tx queue number: 1
       Rx offloads=0x0 Tx offloads=0x10000
       RX desc=512 - RX free threshold=0
       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
       TX desc=512 - TX free threshold=0
       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
       TX offloads=0x10000 - TX RS bit threshold=0

Runtime Config Options
----------------------

- ``Rx&Tx scalar mode enable`` (default ``0``)

   The ethdev supports both scalar and vector modes; the mode may be selected
   at runtime using the ``scalar_enable`` ``devargs`` parameter.
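
   For example, to select scalar mode on the port::

      -a 0002:02:00.0,scalar_enable=1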

- ``RSS reta size`` (default ``64``)

   The RSS redirection table size may be configured at runtime using the
   ``reta_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,reta_size=256

   With the above configuration, a RETA table of size 256 is populated.

- ``Flow priority levels`` (default ``3``)

   The number of RTE flow priority levels can be configured at runtime using
   the ``flow_max_priority`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_max_priority=10

   With the above configuration, the number of priority levels is set to 10
   (0-9). The maximum number of priority levels supported is 32.

- ``Reserve Flow entries`` (default ``8``)

   RTE flow entries can be pre-allocated, and the pre-allocation size can be
   selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_prealloc_size=4

   With the above configuration, the pre-allocation size is set to 4. The
   maximum pre-allocation size supported is 32.

- ``Max SQB buffer count`` (default ``512``)

   The send queue descriptor buffer count may be limited at runtime using the
   ``max_sqb_count`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,max_sqb_count=64

   With the above configuration, each send queue's descriptor buffer count is
   limited to a maximum of 64 buffers.

- ``Switch header enable`` (default ``none``)

   A port can be configured for a specific switch header type using the
   ``switch_header`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,switch_header="higig2"

   With the above configuration, higig2 is enabled on that port and the
   traffic on this port must be higig2 traffic only. Supported switch header
   types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".

- ``RSS tag as XOR`` (default ``0``)

   From the C0 HW revision onward, the HW gives an option to configure the RSS
   adder as either of the following:

   * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``

   * ``rss_adder<7:0> = flow_tag<7:0>``

   The latter aligns with standard NIC behavior, whereas the former is a
   legacy RSS adder scheme used in OCTEON TX2 products.

   By default, the driver runs in the latter mode from the C0 HW revision
   onward. Set this flag to 1 to select the legacy mode.

   For example, to select the legacy mode (RSS tag adder as XOR)::

      -a 0002:02:00.0,tag_as_xor=1

- ``Max SPI for inbound inline IPsec`` (default ``1``)

   The maximum SPI supported for inbound inline IPsec processing can be
   specified with the ``ipsec_in_max_spi`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_in_max_spi=128

   With the above configuration, the application can enable inline IPsec
   processing on 128 SAs (SPI 0-127).

- ``Lock Rx contexts in NDC cache``

   Lock Rx contexts in the NDC cache using the ``lock_rx_ctx`` ``devargs``
   parameter.

   For example::

      -a 0002:02:00.0,lock_rx_ctx=1

- ``Lock Tx contexts in NDC cache``

   Lock Tx contexts in the NDC cache using the ``lock_tx_ctx`` ``devargs``
   parameter.

   For example::

      -a 0002:02:00.0,lock_tx_ctx=1

.. note::

   The above devargs parameters are configurable per device. If the
   application needs the configuration on all the ethdev ports, the
   parameters must be passed to every PCIe device.
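
   For example, assuming a hypothetical second device ``0002:03:00.0``, the
   same RETA size would be requested on both ports with::

      -a 0002:02:00.0,reta_size=256 -a 0002:03:00.0,reta_size=256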

- ``Lock NPA contexts in NDC``

   Lock NPA aura and pool contexts in the NDC cache.
   The ``devargs`` takes a hexadecimal bitmask where each bit represents the
   corresponding aura/pool ID.

   For example::

      -a 0002:02:00.0,npa_lock_mask=0xf

   With the above configuration, aura/pool IDs 0 to 3 are locked in the NDC
   cache.

.. _otx2_tmapi:

Traffic Management API
----------------------

The OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which
allows configuring the following features:

#. Hierarchical scheduling
#. Single rate - Two color, Two rate - Three color shaping

Both DWRR and Static Priority (SP) hierarchical scheduling are supported.

Every parent can have at most 10 SP children and an unlimited number of DWRR
children.

Both PF and VF support the traffic management API, with the PF supporting
6 levels and the VF supporting 5 levels of topology.
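
As a minimal sketch of driving this through the generic ``rte_tm`` API (the
node IDs, weights and the two-node shape below are illustrative, not a
requirement of this PMD; a real hierarchy must honor the 6/5 level topology
noted above):

.. code-block:: c

   #include <string.h>
   #include <rte_tm.h>

   static int
   tm_setup(uint16_t port_id)
   {
           struct rte_tm_node_params np;
           struct rte_tm_error err;

           memset(&np, 0, sizeof(np));
           np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
           np.nonleaf.n_sp_priorities = 1; /* root is a non-leaf node */

           /* Non-leaf root node of the scheduling hierarchy. */
           if (rte_tm_node_add(port_id, 100 /* node_id */, RTE_TM_NODE_ID_NULL,
                               0 /* priority */, 1 /* DWRR weight */,
                               RTE_TM_NODE_LEVEL_ID_ANY, &np, &err) < 0)
                   return -1;

           /* Leaf node: leaf node IDs map 1:1 to Tx queue IDs. */
           memset(&np, 0, sizeof(np));
           np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
           if (rte_tm_node_add(port_id, 0 /* Tx queue 0 */, 100,
                               0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err) < 0)
                   return -1;

           /* Freeze the hierarchy; ask the API to clean up on failure. */
           return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
   }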

Limitations
-----------

``mempool_octeontx2`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool
manager. The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2``
mempool handler, as it is the most performance-effective way of allocating
packets and recycling Tx buffers on the OCTEON TX2 SoC platform.
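
Testpmd selects this handler automatically, as the ``preferred mempool ops
selected: octeontx2_npa`` line in the example output above shows. As a sketch,
an application creating its own pool could request the handler explicitly (the
pool geometry below is illustrative):

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mbuf.h>

   /* Create an mbuf pool backed by the octeontx2_npa mempool ops;
    * the pool geometry here is illustrative. */
   static struct rte_mempool *
   make_pool(void)
   {
           return rte_pktmbuf_pool_create_by_ops(
                   "mbuf_pool", 8192 /* mbufs */, 256 /* per-lcore cache */,
                   0 /* priv size */, RTE_MBUF_DEFAULT_BUF_SIZE,
                   rte_socket_id(), "octeontx2_npa");
   }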

CRC stripping
~~~~~~~~~~~~~

The OCTEON TX2 SoC family NICs strip the CRC for every packet being received
by the host interface irrespective of the offload configuration.

Multicast MAC filtering
~~~~~~~~~~~~~~~~~~~~~~~

The ``net_octeontx2`` PMD supports the multicast MAC filtering feature only on
physical function devices.
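
For example, a filter for an illustrative multicast address can be added from
testpmd on a PF port::

   testpmd> mcast_addr add 0 01:00:5E:00:00:01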

SDP interface support
~~~~~~~~~~~~~~~~~~~~~

OCTEON TX2 SDP interface support is limited to the PF device; VFs are not
supported.

Inline Protocol Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``net_octeontx2`` PMD doesn't support the following features for packets
to be inline protocol processed:

- TSO offload
- VLAN/QinQ offload
- Fragmentation

Debugging Options
-----------------

.. _table_octeontx2_ethdev_debug_options:

.. table:: OCTEON TX2 ethdev debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | NIX        | --log-level='pmd\.net.octeontx2,8'                    |
   +---+------------+-------------------------------------------------------+
   | 2 | NPC        | --log-level='pmd\.net.octeontx2\.flow,8'              |
   +---+------------+-------------------------------------------------------+
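
For example, to get verbose NIX logs from testpmd::

   ./<build_dir>/app/dpdk-testpmd --log-level='pmd\.net.octeontx2,8' -a 0002:02:00.0 -- -i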

RTE Flow Support
----------------

The OCTEON TX2 SoC family NIC has support for the following patterns and
actions.

Patterns:

.. _table_octeontx2_supported_flow_item_types:

.. table:: Item types

   +----+--------------------------------+
   | #  | Pattern Type                   |
   +====+================================+
   | 1  | RTE_FLOW_ITEM_TYPE_ETH         |
   +----+--------------------------------+
   | 2  | RTE_FLOW_ITEM_TYPE_VLAN        |
   +----+--------------------------------+
   | 3  | RTE_FLOW_ITEM_TYPE_E_TAG       |
   +----+--------------------------------+
   | 4  | RTE_FLOW_ITEM_TYPE_IPV4        |
   +----+--------------------------------+
   | 5  | RTE_FLOW_ITEM_TYPE_IPV6        |
   +----+--------------------------------+
   | 6  | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
   +----+--------------------------------+
   | 7  | RTE_FLOW_ITEM_TYPE_MPLS        |
   +----+--------------------------------+
   | 8  | RTE_FLOW_ITEM_TYPE_ICMP        |
   +----+--------------------------------+
   | 9  | RTE_FLOW_ITEM_TYPE_UDP         |
   +----+--------------------------------+
   | 10 | RTE_FLOW_ITEM_TYPE_TCP         |
   +----+--------------------------------+
   | 11 | RTE_FLOW_ITEM_TYPE_SCTP        |
   +----+--------------------------------+
   | 12 | RTE_FLOW_ITEM_TYPE_ESP         |
   +----+--------------------------------+
   | 13 | RTE_FLOW_ITEM_TYPE_GRE         |
   +----+--------------------------------+
   | 14 | RTE_FLOW_ITEM_TYPE_NVGRE       |
   +----+--------------------------------+
   | 15 | RTE_FLOW_ITEM_TYPE_VXLAN       |
   +----+--------------------------------+
   | 16 | RTE_FLOW_ITEM_TYPE_GTPC        |
   +----+--------------------------------+
   | 17 | RTE_FLOW_ITEM_TYPE_GTPU        |
   +----+--------------------------------+
   | 18 | RTE_FLOW_ITEM_TYPE_GENEVE      |
   +----+--------------------------------+
   | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE   |
   +----+--------------------------------+
   | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT    |
   +----+--------------------------------+
   | 21 | RTE_FLOW_ITEM_TYPE_VOID        |
   +----+--------------------------------+
   | 22 | RTE_FLOW_ITEM_TYPE_ANY         |
   +----+--------------------------------+
   | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY     |
   +----+--------------------------------+
   | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2      |
   +----+--------------------------------+
   | 25 | RTE_FLOW_ITEM_TYPE_RAW         |
   +----+--------------------------------+

.. note::

   ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
   bits in the GRE header are equal to 0.

Actions:

.. _table_octeontx2_supported_ingress_action_types:

.. table:: Ingress action types

   +----+-----------------------------------------+
   | #  | Action Type                             |
   +====+=========================================+
   | 1  | RTE_FLOW_ACTION_TYPE_VOID               |
   +----+-----------------------------------------+
   | 2  | RTE_FLOW_ACTION_TYPE_MARK               |
   +----+-----------------------------------------+
   | 3  | RTE_FLOW_ACTION_TYPE_FLAG               |
   +----+-----------------------------------------+
   | 4  | RTE_FLOW_ACTION_TYPE_COUNT              |
   +----+-----------------------------------------+
   | 5  | RTE_FLOW_ACTION_TYPE_DROP               |
   +----+-----------------------------------------+
   | 6  | RTE_FLOW_ACTION_TYPE_QUEUE              |
   +----+-----------------------------------------+
   | 7  | RTE_FLOW_ACTION_TYPE_RSS                |
   +----+-----------------------------------------+
   | 8  | RTE_FLOW_ACTION_TYPE_SECURITY           |
   +----+-----------------------------------------+
   | 9  | RTE_FLOW_ACTION_TYPE_PF                 |
   +----+-----------------------------------------+
   | 10 | RTE_FLOW_ACTION_TYPE_VF                 |
   +----+-----------------------------------------+
   | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN        |
   +----+-----------------------------------------+
   | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID            |
   +----+-----------------------------------------+

.. note::

   ``RTE_FLOW_ACTION_TYPE_PORT_ID`` is only supported between PF and its VFs.

.. _table_octeontx2_supported_egress_action_types:

.. table:: Egress action types

   +----+-----------------------------------------+
   | #  | Action Type                             |
   +====+=========================================+
   | 1  | RTE_FLOW_ACTION_TYPE_COUNT              |
   +----+-----------------------------------------+
   | 2  | RTE_FLOW_ACTION_TYPE_DROP               |
   +----+-----------------------------------------+
   | 3  | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN       |
   +----+-----------------------------------------+
   | 4  | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID    |
   +----+-----------------------------------------+
   | 5  | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP    |
   +----+-----------------------------------------+
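
For example, a rule matching UDP-over-IPv4 traffic, counting it and steering
it to Rx queue 1 (queue and port numbers are illustrative) can be created
from testpmd::

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
            actions count / queue index 1 / end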

Custom protocols supported in RTE Flow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``RTE_FLOW_ITEM_TYPE_RAW`` item can be used to parse the custom protocols
below.

* ``vlan_exdsa`` and ``exdsa`` can be parsed at the L2 level.
* ``NGIO`` can be parsed at the L3 level.

For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
respective switch header.

For example::

   -a 0002:02:00.0,switch_header="vlan_exdsa"

The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
pattern.

- ``relative``: Selects the layer at which parsing is done.

  - 0 for ``exdsa`` and ``vlan_exdsa``.
  - 1 for ``NGIO``.

- ``offset``: The offset in the header where the pattern should be matched.
- ``length``: Length of the pattern.
- ``pattern``: Pattern as a byte string.

Example usage in testpmd::

   ./dpdk-testpmd -c 3 -a 0002:02:00.0,switch_header=exdsa -- -i \
         --rx-offloads=0x00080000 --rxq 8 --txq 8
   testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
         spec ab pattern mask ab offset is 4 / end actions queue index 1 / end