X-Git-Url: http://git.droids-corp.org/?a=blobdiff_plain;f=doc%2Fguides%2Fnics%2Fice.rst;h=8af32dabf641ab2d5945c71f3b5d0466942cf52e;hb=af3f83032b457771e3268a3d310f4890672ab442;hp=641f34840b1cbb6bb3b7ea1d296c53a76eb5de09;hpb=7e124ff12c85a438010e49f0cee7eb10593b0ed6;p=dpdk.git

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 641f34840b..8af32dabf6 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -5,7 +5,7 @@ ICE Poll Mode Driver
 ======================
 
 The ice PMD (librte_pmd_ice) provides poll mode driver support for
-10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+10/25/50/100 Gbps Intel® Ethernet 810 Series Network Adapters based on
 the Intel Ethernet Controller E810.
 
 
@@ -20,6 +20,22 @@ Prerequisites
 - To get better performance on Intel platforms, please follow the "How
   to get best performance with NICs on Intel platforms" section of the
   :ref:`Getting Started Guide for Linux <linux_gsg>`.
 
+Recommended Matching List
+-------------------------
+
+It is highly recommended to upgrade the ice kernel driver, firmware, and DDP
+packages to avoid compatibility issues with the ice PMD. Here is the
+suggested matching list.
+
+   +----------------------+-----------------------+------------------+----------------+-------------------+
+   | DPDK version         | Kernel driver version | Firmware version | DDP OS Package | DDP COMMS Package |
+   +======================+=======================+==================+================+===================+
+   | 19.11                | 0.12.25               | 1.1.16.39        | 1.3.4          | 1.3.10            |
+   +----------------------+-----------------------+------------------+----------------+-------------------+
+   | 19.08 (experimental) | 0.10.1                | 1.1.12.7         | 1.2.0          | N/A               |
+   +----------------------+-----------------------+------------------+----------------+-------------------+
+   | 19.05 (experimental) | 0.9.4                 | 1.1.10.16        | 1.1.0          | N/A               |
+   +----------------------+-----------------------+------------------+----------------+-------------------+
 
 Pre-Installation Configuration
 ------------------------------
@@ -38,10 +54,6 @@ Please note that enabling debugging options may affect system performance.
 
   Toggle display of generic debugging messages.
 
-- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
-
-  Toggle bulk allocation for RX.
-
 - ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
 
   Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
@@ -61,10 +73,41 @@ Runtime Config Options
   NOTE: In Safe mode, only very limited features are available,
   features like RSS, checksum, fdir, tunneling ... are all disabled.
 
+- ``Generic Flow Pipeline Mode Support`` (default ``0``)
+
+  In pipeline mode, a flow can be set at one specific stage by setting the parameter
+  ``priority``. Currently, we support two stages: priority = 0 or !0. Flows with priority 0
+  are located at the first pipeline stage, which is typically used as a firewall to drop
+  packets on a blacklist (we call it the permission stage). At this stage, flow rules are
+  created for the device's exact match engine: switch. Flows with priority !0 are located
+  at the second stage, where packets are typically classified and steered to a specific
+  queue or queue group (we call it the distribution stage). At this stage, flow rules
+  are created for the device's flow director engine.
+  In non-pipeline mode, ``priority`` is ignored; a flow rule can be created as either a flow
+  director rule or a switch rule, depending on its pattern/action and the resource
+  allocation situation, and all flows are virtually at the same pipeline stage.
+  By default, the generic flow API works in non-pipeline mode; the user can enable
+  pipeline mode by setting the ``devargs`` parameter ``pipeline-mode-support``,
+  for example::
+
+    -w 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+  This is a hint to the driver to select the data path that supports flow mark extraction
+  by default.
+  NOTE: This is an experimental devarg; it will be removed when either of the conditions
+  below is met:
+  1) all data paths support flow mark (currently vPMD does not)
+  2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+  Example::
+
+    -w 80:00.0,flow-mark-support=1
+
 - ``Protocol extraction for per queue``
 
-  Configure the RX queues to do protocol extraction into ``rte_mbuf::udata64``
-  for protocol handling acceleration, like checking the TCP SYN packets quickly.
+  Configure the RX queues to do protocol extraction into mbuf for protocol
+  handling acceleration, such as checking TCP SYN packets quickly.
 
   The argument format is::
 
@@ -92,7 +135,8 @@ Runtime Config Options
   This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
   IPv6 extraction, other queues use the default VLAN extraction.
 
-  The extraction will be copied into the lower 32 bit of ``rte_mbuf::udata64``.
+  The extraction metadata is copied into the registered dynamic mbuf field, and
+  the related dynamic mbuf flag is set.
 
   .. table:: Protocol extraction : ``vlan``
 
@@ -156,10 +200,11 @@ Runtime Config Options
       TCPHDR2 - Reserved
 
-  Use ``get_proto_xtr_flds(struct rte_mbuf *mb)`` to access the protocol
-  extraction, do not use ``rte_mbuf::udata64`` directly.
+  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
+  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to check the
+  metadata type in ``struct rte_mbuf::ol_flags``.
 
-  The ``dump_proto_xtr_flds(struct rte_mbuf *mb)`` routine shows how to
+  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
   access the protocol extraction result in ``struct rte_mbuf``.
 
 Driver compilation and testing
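
The hunks above move the protocol extraction result from ``rte_mbuf::udata64`` to a
registered dynamic mbuf field, read through the accessors named in the new text. The
sketch below is a minimal illustration of that usage, assuming the accessors and the
``RTE_PKT_RX_DYNF_PROTO_XTR_*`` flags are exposed through the driver's ``rte_pmd_ice.h``
header and that the RX queue was configured with ``proto_xtr=tcp``; the helper function
itself is hypothetical, not part of the patch::

    #include <inttypes.h>
    #include <stdio.h>

    #include <rte_mbuf.h>
    #include <rte_pmd_ice.h>   /* assumed header exposing the proto_xtr helpers */

    /* Inspect one received mbuf (hypothetical helper, not from the patch). */
    static void
    show_tcp_proto_xtr(struct rte_mbuf *m)
    {
        /* The dynamic flag is only set when the ice PMD filled the metadata. */
        if (m->ol_flags & RTE_PKT_RX_DYNF_PROTO_XTR_TCP) {
            uint32_t meta = rte_net_ice_dynf_proto_xtr_metadata_get(m);

            printf("TCP extraction metadata: 0x%08" PRIx32 "\n", meta);

            /* Or let the driver helper pretty-print the whole result. */
            rte_net_ice_dump_proto_xtr_metadata(m);
        }
    }

An application would typically call such a helper from its RX loop after
``rte_eth_rx_burst()``; since these PMD symbols are experimental, building against them
may also require enabling ``ALLOW_EXPERIMENTAL_API``.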