+#. Running testpmd:
+
+ Follow instructions available in the document
+ :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+ to run testpmd.
+
+ Example output:
+
+ .. code-block:: console
+
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ EAL: Detected 24 lcore(s)
+ EAL: Detected 1 NUMA nodes
+ EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+ EAL: No available hugepages reported in hugepages-2048kB
+ EAL: Probing VFIO support...
+ EAL: VFIO support initialized
+ EAL: PCI device 0002:02:00.0 on NUMA socket 0
+ EAL: probe driver: 177d:a063 net_octeontx2
+ EAL: using IOMMU type 1 (Type 1)
+ testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
+ testpmd: preferred mempool ops selected: octeontx2_npa
+ Configuring Port 0 (socket 0)
+ PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
+
+ Port 0: link state change event
+ Port 0: 36:10:66:88:7A:57
+ Checking link statuses...
+ Done
+ No commandline core given, start packet forwarding
+ io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
+ Logical Core 9 (socket 0) forwards packets on 1 streams:
+ RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
+
+ io packet forwarding packets/burst=32
+ nb forwarding cores=1 - nb forwarding ports=1
+ port 0: RX queue number: 1 Tx queue number: 1
+ Rx offloads=0x0 Tx offloads=0x10000
+ RX queue: 0
+ RX desc=512 - RX free threshold=0
+ RX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ RX Offloads=0x0
+ TX queue: 0
+ TX desc=512 - TX free threshold=0
+ TX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ TX offloads=0x10000 - TX RS bit threshold=0
+ Press enter to exit
+
+Runtime Config Options
+----------------------
+
+- ``Rx&Tx scalar mode enable`` (default ``0``)
+
+ Ethdev supports both scalar and vector modes; the mode may be selected during
+ runtime using the ``scalar_enable`` ``devargs`` parameter.
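+
+ For example, to select scalar mode::
+
+ -a 0002:02:00.0,scalar_enable=1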
+
+- ``RSS reta size`` (default ``64``)
+
+ RSS redirection table size may be configured during runtime using ``reta_size``
+ ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,reta_size=256
+
+ With the above configuration, an RSS redirection table of size 256 is
+ populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+ RTE Flow priority levels can be configured during runtime using
+ ``flow_max_priority`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,flow_max_priority=10
+
+ With the above configuration, the number of priority levels is set to 10
+ (levels 0-9). The maximum number of priority levels supported is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+ RTE flow entries can be pre-allocated, and the pre-allocation size can be
+ selected during runtime using the ``flow_prealloc_size`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,flow_prealloc_size=4
+
+ With the above configuration, the pre-allocation size is set to 4. The maximum
+ pre-allocation size supported is 32.
+
+- ``Max SQB buffer count`` (default ``512``)
+
+ Send queue descriptor buffer count may be limited during runtime using
+ ``max_sqb_count`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,max_sqb_count=64
+
+ With the above configuration, each send queue's descriptor buffer count is
+ limited to a maximum of 64 buffers.
+
+- ``Switch header enable`` (default ``none``)
+
+ A port can be configured to a specific switch header type by using
+ ``switch_header`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,switch_header="higig2"
+
+ With the above configuration, higig2 will be enabled on that port and the
+ traffic on this port should be higig2 traffic only. Supported switch header
+ types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
+
+- ``RSS tag as XOR`` (default ``0``)
+
+ From the C0 HW revision onward, the HW gives an option to configure the RSS
+ adder as either of the following:
+
+ * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
+
+ * ``rss_adder<7:0> = flow_tag<7:0>``
+
+ The latter aligns with standard NIC behavior, whereas the former is a legacy
+ RSS adder scheme used in OCTEON TX2 products.
+
+ By default, the driver runs in the latter mode from the C0 HW revision onward.
+ Set this flag to 1 to select the legacy mode.
+
+ For example, to select the legacy mode (RSS tag adder as XOR)::
+
+ -a 0002:02:00.0,tag_as_xor=1
+
+- ``Max SPI for inbound inline IPsec`` (default ``1``)
+
+ Max SPI supported for inbound inline IPsec processing can be specified by
+ ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,ipsec_in_max_spi=128
+
+ With the above configuration, the application can enable inline IPsec
+ processing for 128 SAs (SPI 0-127).
+
+- ``Lock Rx contexts in NDC cache``
+
+ Rx contexts can be locked in the NDC cache by using the ``lock_rx_ctx``
+ ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Tx contexts can be locked in the NDC cache by using the ``lock_tx_ctx``
+ ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:02:00.0,lock_tx_ctx=1
+
+.. note::
+
+ The above devargs parameters are configurable per device. The user needs to
+ pass the parameters to all the PCIe devices if the application requires them
+ to be configured on all the ethdev ports.
+
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in the NDC cache by using the
+ ``npa_lock_mask`` ``devargs`` parameter. It takes a hexadecimal bitmask where
+ each bit represents the corresponding aura/pool id.
+
+ For example::
+
+ -a 0002:02:00.0,npa_lock_mask=0xf
+
+ With the above configuration, the aura/pool contexts for ids 0 to 3 are
+ locked in the NDC cache.
+
+.. _otx2_tmapi:
+
+Traffic Management API
+----------------------
+
+OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which allows
+configuring the following features:
+
+#. Hierarchical scheduling
+#. Single rate - Two color, Two rate - Three color shaping
+
+Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
+
+Every parent can have at most 10 SP children and an unlimited number of DWRR
+children.
+
+Both the PF and the VF support the traffic management API, with the PF
+supporting 6 levels and the VF supporting 5 levels of topology.
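+
+Before building a hierarchy, the TM capabilities reported by the PMD can be
+inspected with testpmd's generic TM commands (shown below for port 0 and
+level 0; exact command availability depends on the DPDK version)::
+
+ testpmd> show port tm cap 0
+ testpmd> show port tm level cap 0 0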
+
+Limitations
+-----------
+
+``mempool_octeontx2`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool
+manager. The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2``
+mempool handler, as it is the most performance-effective way for packet
+allocation and Tx buffer recycling on the OCTEON TX2 SoC platform.
+
+CRC stripping
+~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
+the host interface irrespective of the offload configuration.
+
+Multicast MAC filtering
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``net_octeontx2`` PMD supports the multicast MAC filtering feature only on
+physical function devices.
+
+SDP interface support
+~~~~~~~~~~~~~~~~~~~~~
+OCTEON TX2 SDP interface support is limited to the PF device; there is no VF
+support.
+
+Inline Protocol Processing
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``net_octeontx2`` PMD doesn't support the following features for packets
+to be inline protocol processed:
+
+- TSO offload
+- VLAN/QinQ offload
+- Fragmentation
+
+Debugging Options
+-----------------
+
+.. _table_octeontx2_ethdev_debug_options:
+
+.. table:: OCTEON TX2 ethdev debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
+ +---+------------+-------------------------------------------------------+
+ | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
+ +---+------------+-------------------------------------------------------+
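+
+For example, NIX debug logs can be enabled while launching testpmd as below::
+
+ ./<build_dir>/app/dpdk-testpmd --log-level='pmd\.net.octeontx2,8' -c 0x300 -a 0002:02:00.0 -- --portmask=0x1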
+
+RTE Flow Support
+----------------
+
+The OCTEON TX2 SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_octeontx2_supported_flow_item_types:
+
+.. table:: Item types
+
+ +----+--------------------------------+
+ | # | Pattern Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ITEM_TYPE_ETH |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
+ +----+--------------------------------+
+ | 9 | RTE_FLOW_ITEM_TYPE_UDP |
+ +----+--------------------------------+
+ | 10 | RTE_FLOW_ITEM_TYPE_TCP |
+ +----+--------------------------------+
+ | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
+ +----+--------------------------------+
+ | 12 | RTE_FLOW_ITEM_TYPE_ESP |
+ +----+--------------------------------+
+ | 13 | RTE_FLOW_ITEM_TYPE_GRE |
+ +----+--------------------------------+
+ | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
+ +----+--------------------------------+
+ | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
+ +----+--------------------------------+
+ | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
+ +----+--------------------------------+
+ | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
+ +----+--------------------------------+
+ | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
+ +----+--------------------------------+
+ | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
+ +----+--------------------------------+
+ | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT |
+ +----+--------------------------------+
+ | 21 | RTE_FLOW_ITEM_TYPE_VOID |
+ +----+--------------------------------+
+ | 22 | RTE_FLOW_ITEM_TYPE_ANY |
+ +----+--------------------------------+
+ | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
+ +----+--------------------------------+
+ | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 |
+ +----+--------------------------------+
+ | 25 | RTE_FLOW_ITEM_TYPE_RAW |
+ +----+--------------------------------+
+
+.. note::
+
+ ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
+ bits in the GRE header are equal to 0.
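+
+For example, a rule matching on the GRE key can be created from testpmd as
+below (the key value is illustrative)::
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / gre / gre_key value is 0x12345678 / end \
+ actions queue index 0 / end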
+
+Actions:
+
+.. _table_octeontx2_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+ +----+-----------------------------------------+
+ | # | Action Type |
+ +====+=========================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_VOID |
+ +----+-----------------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_MARK |
+ +----+-----------------------------------------+
+ | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
+ +----+-----------------------------------------+
+ | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+-----------------------------------------+
+ | 5 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+-----------------------------------------+
+ | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
+ +----+-----------------------------------------+
+ | 7 | RTE_FLOW_ACTION_TYPE_RSS |
+ +----+-----------------------------------------+
+ | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
+ +----+-----------------------------------------+
+ | 9 | RTE_FLOW_ACTION_TYPE_PF |
+ +----+-----------------------------------------+
+ | 10 | RTE_FLOW_ACTION_TYPE_VF |
+ +----+-----------------------------------------+
+ | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN |
+ +----+-----------------------------------------+
+ | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID |
+ +----+-----------------------------------------+
+
+.. note::
+
+ ``RTE_FLOW_ACTION_TYPE_PORT_ID`` is only supported between PF and its VFs.
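+
+For example, an ingress rule built from the above items and actions can be
+created from testpmd as below (the address and queue id are illustrative)::
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.0.0.1 / udp / end \
+ actions queue index 2 / end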
+
+.. _table_octeontx2_supported_egress_action_types:
+
+.. table:: Egress action types
+
+ +----+-----------------------------------------+
+ | # | Action Type |
+ +====+=========================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+-----------------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+-----------------------------------------+
+ | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN |
+ +----+-----------------------------------------+
+ | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID |
+ +----+-----------------------------------------+
+ | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP |
+ +----+-----------------------------------------+
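+
+For example, an egress rule using the VLAN push actions may be created from
+testpmd as below (the VLAN values are illustrative)::
+
+ testpmd> flow create 0 egress pattern eth / end actions of_push_vlan ethertype 0x8100 / \
+ of_set_vlan_vid vlan_vid 2 / of_set_vlan_pcp vlan_pcp 3 / end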
+
+Custom protocols supported in RTE Flow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols.
+
+* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
+* ``NGIO`` can be parsed at L3 level.
+
+For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
+respective switch header.
+
+For example::
+
+ -a 0002:02:00.0,switch_header="vlan_exdsa"
+
+The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
+pattern.
+
+- ``relative`` Selects the layer at which parsing is done.
+
+ - 0 for ``exdsa`` and ``vlan_exdsa``.
+
+ - 1 for ``NGIO``.
+
+- ``offset`` The offset in the header where the pattern should be matched.
+- ``length`` Length of the pattern.
+- ``pattern`` Pattern as a byte string.
+
+Example usage in testpmd::
+
+ ./dpdk-testpmd -c 3 -a 0002:02:00.0,switch_header=exdsa -- -i \
+ --rx-offloads=0x00080000 --rxq 8 --txq 8
+ testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
+ spec ab pattern mask ab offset is 4 / end actions queue index 1 / end