OCTEON TX2 Poll Mode driver
===========================
The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
driver support for the inbuilt network device found in **Marvell OCTEON TX2**
SoC family as well as for their virtual functions (VF) in SR-IOV context.
See :doc:`../platform/octeontx2` for setup information.
Driver compilation and testing
------------------------------
Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
      EAL: Detected 24 lcore(s)
      EAL: Detected 1 NUMA nodes
      EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
For example::

   -a 0002:02:00.0,reta_size=256

With the above configuration, a RETA table of size 256 is populated.
For example::

   -a 0002:02:00.0,flow_max_priority=10

With the above configuration, the priority level is set to 10 (0-9). The
maximum supported priority level is 32.
For example::

   -a 0002:02:00.0,flow_prealloc_size=4

With the above configuration, the pre-allocation size is set to 4. The
maximum supported pre-allocation size is 32.
For example::

   -a 0002:02:00.0,max_sqb_count=64

With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
- ``Switch header enable`` (default ``none``)

   For example::

      -a 0002:02:00.0,switch_header="higig2"

   With the above configuration, higig2 will be enabled on that port and the
   traffic on this port should be higig2 traffic only. Supported switch header
   types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
- ``RSS tag as XOR`` (default ``0``)

   For example, to select the legacy mode (RSS tag adder as XOR)::

      -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)

   For example::

      -a 0002:02:00.0,ipsec_in_max_spi=128

   With the above configuration, the application can enable inline IPsec
   processing for 128 SAs (SPI 0-127).
For example::

   -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``

   For example::

      -a 0002:02:00.0,lock_tx_ctx=1
.. note::
For example::

   -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
#. Hierarchical scheduling
#. Single rate - Two color, Two rate - Three color shaping
Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
Every parent can have at most 10 SP children and unlimited DWRR children.
+----+--------------------------------+
| 24 | RTE_FLOW_ITEM_TYPE_HIGIG2      |
+----+--------------------------------+
| 25 | RTE_FLOW_ITEM_TYPE_RAW         |
+----+--------------------------------+
.. note::
+----+-----------------------------------------+
| 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN        |
+----+-----------------------------------------+
| 12 | RTE_FLOW_ACTION_TYPE_PORT_ID            |
+----+-----------------------------------------+

.. note::

   ``RTE_FLOW_ACTION_TYPE_PORT_ID`` is only supported between PF and its VFs.
.. _table_octeontx2_supported_egress_action_types:
+----+-----------------------------------------+
| 5  | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP    |
+----+-----------------------------------------+

Custom protocols supported in RTE Flow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols.

* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
* ``NGIO`` can be parsed at L3 level.

For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
respective switch header.

For example::

   -a 0002:02:00.0,switch_header="vlan_exdsa"

The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
pattern.

- ``relative`` Selects the layer at which parsing is done.

  - 0 for ``exdsa`` and ``vlan_exdsa``.
  - 1 for ``NGIO``.

- ``offset`` The offset in the header where the pattern should be matched.
- ``length`` Length of the pattern.
- ``pattern`` Pattern as a byte string.

Example usage in testpmd::

   ./dpdk-testpmd -c 3 -a 0002:02:00.0,switch_header=exdsa -- -i \
         --rx-offloads=0x00080000 --rxq 8 --txq 8
   testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
         spec ab pattern mask ab offset is 4 / end actions queue index 1 / end