..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.
ICE Poll Mode Driver
====================

The ice PMD (librte_pmd_ice) provides poll mode driver support for
10/25/50/100 Gbps Intel® Ethernet 810 Series Network Adapters based on
the Intel Ethernet Controller E810.

Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

Recommended Matching List
-------------------------

It is highly recommended to upgrade the ice kernel driver, firmware and
DDP packages to avoid compatibility issues with the ice PMD. The
suggested matching list is below.

+----------------------+-----------------------+------------------+----------------+-------------------+
| DPDK version         | Kernel driver version | Firmware version | DDP OS Package | DDP COMMS Package |
+======================+=======================+==================+================+===================+
| 20.02                | 0.12.25               | 1.1.16.39        | 1.3.4          | 1.3.10            |
+----------------------+-----------------------+------------------+----------------+-------------------+
| 19.11                | 0.12.25               | 1.1.16.39        | 1.3.4          | 1.3.10            |
+----------------------+-----------------------+------------------+----------------+-------------------+
| 19.08 (experimental) | 0.10.1                | 1.1.12.7         | 1.2.0          | N/A               |
+----------------------+-----------------------+------------------+----------------+-------------------+
| 19.05 (experimental) | 0.9.4                 | 1.1.10.16        | 1.1.0          | N/A               |
+----------------------+-----------------------+------------------+----------------+-------------------+

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_ice`` driver.

- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages (see the example after this list).

- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)

  Toggle to use a 16-byte RX descriptor; by default the RX descriptor is 32 bytes.
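
For example, to turn on one of the debug options with the legacy make-based
build system, flip the corresponding flag in ``config/common_base`` (a hedged
sketch; the exact flag name and config path depend on your DPDK version)::

   sed -i 's/CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n/CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=y/' config/common_base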

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, its initialization fails by default.
  But if the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be used, for example::

    -w 80:00.0,safe-mode-support=1

  Then the driver will be initialized successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features like RSS,
  checksum, fdir, tunneling ... are all disabled.

- ``Generic Flow Pipeline Mode Support`` (default ``0``)

  In pipeline mode, a flow can be set at one specific stage by setting the parameter
  ``priority``. Currently, two stages are supported: priority = 0 or !0. Flows with
  priority 0 are located at the first pipeline stage, which is typically used as a
  firewall to drop packets on a blacklist (we call it the permission stage). At this
  stage, flow rules are created for the device's exact match engine: switch. Flows
  with priority !0 are located at the second stage, where packets are typically
  classified and steered to a specific queue or queue group (we call it the
  distribution stage). At this stage, flow rules are created for the device's flow
  director engine.
  In non-pipeline mode, ``priority`` is ignored; a flow rule can be created as a flow
  director rule or a switch rule depending on its pattern/action and the resource
  allocation situation, and all flows are virtually at the same pipeline stage.
  By default, the generic flow API works in non-pipeline mode; the user can choose
  pipeline mode by setting the ``devargs`` parameter ``pipeline-mode-support``,
  for example::

    -w 80:00.0,pipeline-mode-support=1
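
  As an illustration, a sketch of the two stages using testpmd flow rules (the
  addresses and queue index are example values, not from the original document)::

    testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.1 / end actions drop / end
    testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 dst is 192.168.0.2 / end actions queue index 3 / end

  The first rule lands in the switch (permission) stage, the second in the flow
  director (distribution) stage.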

- ``Flow Mark Support`` (default ``0``)

  This is a hint to the driver to select the data path that supports flow mark extraction.
  NOTE: This is an experimental devarg; it will be removed when either of the below
  conditions is met:

  1) all data paths support flow mark (currently vPMD does not)
  2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.

  Example::

    -w 80:00.0,flow-mark-support=1
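
  As an illustration (a hedged sketch; the rule below is an example, not from
  the original document), a flow rule that attaches a mark the application can
  read back from the mbuf::

    testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 / end actions mark id 3 / queue index 1 / end

  Matching packets then carry the mark in ``rte_mbuf::hash.fdir.hi`` with the
  ``PKT_RX_FDIR_ID`` flag set in ``ol_flags``.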

- ``Protocol extraction for per queue``

  Configure the RX queues to do protocol extraction into the mbuf for protocol
  handling acceleration, like checking the TCP SYN packets quickly.

  The argument format is::

    -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
    -w 18:00.0,proto_xtr=<protocol>

  Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
  is used as a range separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.

  Protocol is one of: ``vlan, ipv4, ipv6, ipv6_flow, tcp``.

  .. code-block:: console

    testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
  VLAN extraction, and the other queues run with no protocol extraction.

  .. code-block:: console

    testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
  IPv6 extraction, and the other queues use the default VLAN extraction.

  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.

  .. table:: Protocol extraction : ``vlan``

    +----------------------------+----------------------------+
    |           VLAN2            |           VLAN1            |
    +======+===+=================+======+===+=================+
    |  PCP | D |       VID       |  PCP | D |       VID       |
    +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).

  .. table:: Protocol extraction : ``ipv4``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +======+=======+=============+==============+=============+
    |  Ver |Hdr Len|    ToS      |      TTL     |   Protocol  |
    +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.

  .. table:: Protocol extraction : ``ipv6``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+=============+==============+
    | Ver |Traffic class|  Flow  | Next Header |   Hop Limit  |
    +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``ipv6_flow``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+============================+
    | Ver |Traffic class|            Flow Label               |
    +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``tcp``

    +----------------------------+----------------------------+
    |           TCPHDR2          |           TCPHDR1          |
    +============================+======+======+==============+
    |          Reserved          |Offset|  RSV |     Flags    |
    +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved.

  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to check the
  metadata type via ``struct rte_mbuf::ol_flags``.

  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
  access the protocol extraction result in ``struct rte_mbuf``.
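
  As an illustration, a minimal sketch of checking a received mbuf for TCP
  extraction metadata, using only the experimental accessors named above (the
  handler name is hypothetical):

  .. code-block:: c

     #include <stdint.h>
     #include <rte_mbuf.h>
     #include <rte_pmd_ice.h>

     /* Inspect one received mbuf for TCP protocol extraction metadata. */
     static void
     handle_proto_xtr(struct rte_mbuf *m)
     {
         /* The dynamic flag is only set when proto_xtr=tcp was enabled
          * on the queue and the HW extracted the TCP header words. */
         if (m->ol_flags & RTE_PKT_RX_DYNF_PROTO_XTR_TCP) {
             /* Raw 32-bit metadata, here TCP "Data Offset" and "Flags". */
             uint32_t metadata = rte_net_ice_dynf_proto_xtr_metadata_get(m);

             rte_net_ice_dump_proto_xtr_metadata(m);
             (void)metadata; /* application-specific handling goes here */
         }
     }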

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

Features
--------

Vector PMD
~~~~~~~~~~

Vector PMD for RX and TX paths is selected automatically. The paths
are chosen based on 2 conditions:

- ``CPU``

  On the X86 platform, the driver checks if the CPU supports AVX2.
  If it's supported, AVX2 paths will be chosen. If not, SSE is chosen.

- ``Offload features``

  The supported HW offload features are described in the document ice_vec.ini.
  If any unsupported features are used, the ICE vector PMD is disabled and the
  normal paths are chosen.

Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is not appropriate to send a packet whose destination MAC address is the
sending port's own MAC address. If software tries to send such packets, the HW
will report an MDD event and drop them.

Applications based on DPDK should avoid sending such packets (see the sketch below).
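
As an illustration (a hedged sketch, not part of the driver API; the helper
name is hypothetical and the ``d_addr`` field name assumes a DPDK 20.02-era
``rte_ether_hdr``), an application can screen a packet before transmitting:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_ether.h>
   #include <rte_mbuf.h>

   /* Return non-zero if sending this mbuf from port_id would trigger an
    * MDD event, i.e. its destination MAC is the port's own MAC address. */
   static int
   tx_would_trigger_mdd(uint16_t port_id, struct rte_mbuf *m)
   {
       struct rte_ether_addr port_mac;
       const struct rte_ether_hdr *eth;

       rte_eth_macaddr_get(port_id, &port_mac);
       eth = rte_pktmbuf_mtod(m, const struct rte_ether_hdr *);

       return rte_is_same_ether_addr(&eth->d_addr, &port_mac);
   }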

Device Config Function (DCF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section demonstrates the ICE DCF PMD, which shares the core module with
the ICE PMD.

A DCF (Device Config Function) PMD binds to the device's trusted VF with ID 0;
it can act as a sole controlling entity to exercise advanced functionality
(such as switch, ACL) for the rest of the VFs.

The DCF PMD needs to advertise and acquire the DCF capability, which allows the
DCF to send AdminQ commands that it would like to execute over to the PF and
receive responses for the same from the PF.

.. figure:: img/ice_dcf.*

   DCF Communication flow.

#. Create the VFs::

     echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs

#. Enable the VF0 trust on::

     ip link set dev enp24s0f0 vf 0 trust on

#. Bind the VF0, and run testpmd with the ``cap=dcf`` devarg::

     testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i

#. Monitor the VF2 interface network traffic::

     tcpdump -e -nn -i enp24s1f2

#. Create one flow to redirect the traffic to VF2 by DCF::

     flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 \
     dst is 192.168.0.3 / end actions vf id 2 / end

#. Send the packet, and it should be displayed on tcpdump::

     sendp(Ether(src='3c:fd:fe:aa:bb:78', dst='00:00:00:01:02:03') /
           IP(src='192.168.0.2', dst='192.168.0.3') /
           TCP(flags='S') / Raw(load='XXXXXXXXXX'),
           iface='enp24s0f0', count=10)

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd``, and add vlan 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i
   ...

   testpmd> rx_vlan add 10 0

Limitations or Known issues
---------------------------

The Intel E810 requires a programmable pipeline package be downloaded
by the driver to support normal operations. The E810 has limited
functionality built in to allow PXE boot and other use cases, but the
driver must download a package file during the driver initialization
stage.

The default DDP package file name is ice.pkg. For a specific NIC, the
DDP package that is supposed to be loaded can have a filename: ice-xxxxxx.pkg,
where 'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC. For
example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ice-00ccbbffffaa0568.pkg
(in hex and all lowercase). During initialization, the driver searches
the following paths in order: /lib/firmware/updates/intel/ice/ddp
and /lib/firmware/intel/ice/ddp. The corresponding device-specific DDP
package will be downloaded first if the file exists. If not, then the
driver tries to load the default package. The type of the loaded package
is stored in ``ice_adapter->active_pkg_type``.

A symbolic link to the DDP package file also works. The same package
file is used by both the kernel driver and the DPDK PMD.
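
As an illustration (a hedged sketch; the PCI address and serial number below
are example values), the Device Serial Number can be read with ``lspci`` and
used to create a device-specific symlink to the default package::

   lspci -s 18:00.0 -vv | grep 'Device Serial Number'
   ln -s /lib/firmware/intel/ice/ddp/ice.pkg \
         /lib/firmware/intel/ice/ddp/ice-00ccbbffffaa0568.pkg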

19.02 limitation
~~~~~~~~~~~~~~~~

Ice code released in 19.02 is for evaluation only.