.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2018 Intel Corporation.
ICE Poll Mode Driver
======================

The ice PMD (librte_pmd_ice) provides poll mode driver support for
10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
the Intel Ethernet Controller E810.
Prerequisites
-------------

- Identifying your adapter using `Intel Support
  <http://www.intel.com/support>`_ and getting the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
Pre-Installation Configuration
------------------------------

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_ice`` driver.

- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)

  Toggle bulk allocation for RX.

- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)

  Toggle the use of a 16-byte RX descriptor; by default the RX descriptor is 32 bytes.
Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~
- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, its initialization fails by default.
  If the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be set, for example::

    -w 80:00.0,safe-mode-support=1

  Then the driver will initialize successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features like RSS,
  checksum, fdir and tunneling are all disabled.
- ``Generic Flow Pipeline Mode Support`` (default ``0``)

  In pipeline mode, a flow can be assigned to a specific stage by setting the
  parameter ``priority``. Currently, two stages are supported: priority = 0 or != 0.
  Flows with priority 0 are located at the first pipeline stage, which is typically
  used as a firewall to drop packets on a blacklist (called the permission stage).
  At this stage, flow rules are created for the device's exact match engine: switch.
  Flows with priority != 0 are located at the second stage, where packets are
  typically classified and steered to a specific queue or queue group (called the
  distribution stage). At this stage, flow rules are created for the device's flow
  director engine.
  In non-pipeline mode, ``priority`` is ignored; a flow rule can be created as a
  flow director rule or a switch rule depending on its pattern/action and the
  resource allocation situation, and all flows are virtually at the same pipeline
  stage.
  By default, the generic flow API is enabled in non-pipeline mode. The user can
  choose to use pipeline mode by setting the ``devargs`` parameter
  ``pipeline-mode-support``, for example::

    -w 80:00.0,pipeline-mode-support=1
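With pipeline mode enabled, for instance, a drop rule can be placed at the permission stage (priority 0) and a queue-steering rule at the distribution stage (priority != 0). The ``testpmd`` flow commands below are only an illustration; the port number, addresses and queue index are placeholders:

```console
testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 1.1.1.1 / end actions drop / end
testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 2.2.2.2 / end actions queue index 3 / end
```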
- ``Protocol extraction for per queue``

  Configure the RX queues to do protocol extraction into mbuf for protocol
  handling acceleration, such as checking TCP SYN packets quickly.

  The argument format is::

    -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
    -w 18:00.0,proto_xtr=<protocol>

  Queues are grouped by ``(`` and ``)``. Within a group, the ``-`` character
  is used as a range separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single-element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.

  Protocol is one of: ``vlan``, ``ipv4``, ``ipv6``, ``ipv6_flow``, ``tcp``.
  .. code-block:: console

    testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
  VLAN extraction, and other queues run with no protocol extraction.

  .. code-block:: console

    testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
  IPv6 extraction, and other queues use the default VLAN extraction.
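The queue-group syntax above can be illustrated with a small parser sketch. This is plain Python for illustration only, not PMD code, and it handles only the bracketed ``<queues:protocol>`` form (not a bare default protocol):

```python
import re

def parse_proto_xtr(value):
    """Parse a proto_xtr devargs value such as '[(1,2-3,8-9):tcp,10-13:vlan]'
    into a {queue: protocol} dict. Illustrative sketch only."""
    mapping = {}
    # Each entry is either '(queue-list):proto' or 'single-or-range:proto'.
    for grouped, single, proto in re.findall(
            r'(?:\(([^)]*)\)|(\d+(?:-\d+)?)):(\w+)', value.strip('[]')):
        for part in (grouped or single).split(','):
            if '-' in part:                      # '-' is the range separator
                lo, hi = map(int, part.split('-'))
                for q in range(lo, hi + 1):
                    mapping[q] = proto
            else:                                # ',' separates single numbers
                mapping[int(part)] = proto
    return mapping
```

For example, ``parse_proto_xtr("[(1,2-3,8-9):tcp,10-13:vlan]")`` maps queues 1, 2, 3, 8, 9 to ``tcp`` and queues 10 through 13 to ``vlan``, matching the first example above.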
  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.
  .. table:: Protocol extraction : ``vlan``

    +----------------------------+----------------------------+
    |           VLAN2            |           VLAN1            |
    +======+===+=================+======+===+=================+
    |  PCP | D |       VID       |  PCP | D |       VID       |
    +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).
  .. table:: Protocol extraction : ``ipv4``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +======+=======+=============+==============+=============+
    |  Ver |Hdr Len|     ToS     |      TTL     |   Protocol  |
    +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.
  .. table:: Protocol extraction : ``ipv6``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+=============+==============+
    | Ver |Traffic class|  Flow  | Next Header |  Hop Limit   |
    +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.
  .. table:: Protocol extraction : ``ipv6_flow``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+============================+
    | Ver |Traffic class|             Flow Label              |
    +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.
  .. table:: Protocol extraction : ``tcp``

    +----------------------------+----------------------------+
    |          TCPHDR2           |          TCPHDR1           |
    +============================+======+======+==============+
    |          Reserved          |Offset| RSV  |    Flags     |
    +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved.
  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to get the
  metadata type from ``struct rte_mbuf::ol_flags``.

  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
  access the protocol extraction result in ``struct rte_mbuf``.
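As a rough illustration of what the ``tcp`` extraction word carries, the sketch below decodes a 16-bit value laid out like ``TCPHDR1`` (TCP header word 6) from the table above. It is plain Python, not the DPDK API, and the helper name is invented for the example:

```python
def decode_tcphdr1(word):
    """Decode a 16-bit TCPHDR1 value (TCP header word 6): the high 4 bits
    carry the Data Offset, the low 9 bits carry the TCP flags."""
    return {
        "data_offset": (word >> 12) & 0xF,  # TCP header length in 32-bit words
        "flags": word & 0x1FF,              # NS..FIN flag bits
        "syn": bool(word & 0x002),          # quick SYN check
        "ack": bool(word & 0x010),
    }

# Example: a SYN segment with a 40-byte TCP header (Data Offset = 10)
info = decode_tcphdr1((10 << 12) | 0x002)
```

This kind of check is what makes the extraction useful for "checking TCP SYN packets quickly" without parsing the full header.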
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
Features
--------

Vector PMD
~~~~~~~~~~

The vector PMD for the RX and TX paths is selected automatically. The paths
are chosen based on two conditions:

- ``CPU``

  On the X86 platform, the driver checks whether the CPU supports AVX2.
  If it is supported, the AVX2 paths are chosen; if not, SSE is chosen.
- ``Offload features``

  The supported HW offload features are described in the document ``ice_vec.ini``.
  If any unsupported features are used, the ICE vector PMD is disabled and the
  normal paths are chosen.
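The two conditions above can be summarized in a small decision sketch. This is illustrative Python pseudologic, not driver code, and the predicate names are invented:

```python
def select_path(cpu_has_avx2, cpu_has_sse, offloads_vec_capable):
    """Pick the RX/TX path as the section above describes: vector paths
    only if all requested offloads are vector-capable; AVX2 preferred
    over SSE on x86; otherwise fall back to the normal (scalar) path."""
    if not offloads_vec_capable:
        return "normal"   # unsupported offload requested: vector PMD disabled
    if cpu_has_avx2:
        return "avx2"
    if cpu_has_sse:
        return "sse"
    return "normal"
```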
Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is not appropriate to send a packet whose destination MAC address is the
sending port's own MAC address. If SW tries to send such packets, HW will
report an MDD event and drop the packets.

Applications based on DPDK should avoid generating such packets.
Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.

To start ``testpmd``, and add VLAN 10 to port 0:

.. code-block:: console

    ./app/testpmd -l 0-15 -n 4 -- -i

    testpmd> rx_vlan add 10 0
Limitations or Known issues
---------------------------

The Intel E810 requires a programmable pipeline package to be downloaded
by the driver to support normal operations. The E810 has limited
functionality built in to allow PXE boot and other use cases, but the
driver must download a package file during the driver initialization
stage.
The default DDP package file name is ice.pkg. For a specific NIC, the
DDP package to be loaded can have a filename of the form ice-xxxxxx.pkg,
where 'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC. For
example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ice-00ccbbffffaa0568.pkg
(in hex and all lowercase). During initialization, the driver searches
the following paths in order: /lib/firmware/updates/intel/ice/ddp
and /lib/firmware/intel/ice/ddp. The corresponding device-specific DDP
package will be downloaded first if the file exists. If not, the
driver tries to load the default package. The type of the loaded package
is stored in ``ice_adapter->active_pkg_type``.
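The serial-number-to-filename mapping described above can be sketched as follows. This is plain Python for illustration, not driver code; the search paths are the ones listed in this section, and the exact search order within them is an assumption (device-specific file first, then the default):

```python
import os

# Search paths from this section, in the order the driver tries them.
DDP_SEARCH_PATHS = [
    "/lib/firmware/updates/intel/ice/ddp",
    "/lib/firmware/intel/ice/ddp",
]

def ddp_pkg_name(serial):
    """Build the device-specific DDP package filename from a PCIe Device
    Serial Number such as '00-CC-BB-FF-FF-AA-05-68' (hex, all lowercase)."""
    return "ice-%s.pkg" % serial.replace("-", "").lower()

def ddp_candidates(serial):
    """Yield candidate package paths: the device-specific package in each
    search path first, then the default ice.pkg."""
    for name in (ddp_pkg_name(serial), "ice.pkg"):
        for path in DDP_SEARCH_PATHS:
            yield os.path.join(path, name)
```

For the example serial number above, ``ddp_pkg_name("00-CC-BB-FF-FF-AA-05-68")`` yields ``ice-00ccbbffffaa0568.pkg``.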
A symbolic link to the DDP package file is also ok. The same package
file is used by both the kernel driver and the DPDK PMD.
19.02 limitation
~~~~~~~~~~~~~~~~

Ice code released in 19.02 is for evaluation only.