..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.

ICE Poll Mode Driver
======================
The ice PMD (librte_pmd_ice) provides poll mode driver support for
10/25/50/100 Gbps Intel® Ethernet 810 Series Network Adapters based on
the Intel Ethernet Controller E810.
Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
Recommended Matching List
-------------------------
It is highly recommended to upgrade the ice kernel driver, firmware and
DDP packages to avoid compatibility issues with the ice PMD. Here is the
suggested matching list.
+----------------------+-----------------------+------------------+----------------+-------------------+
| DPDK version         | Kernel driver version | Firmware version | DDP OS Package | DDP COMMS Package |
+======================+=======================+==================+================+===================+
| 19.11                | 0.12.25               | 1.1.16.39        | 1.3.4          | 1.3.10            |
+----------------------+-----------------------+------------------+----------------+-------------------+
| 19.08 (experimental) | 0.10.1                | 1.1.12.7         | 1.2.0          | N/A               |
+----------------------+-----------------------+------------------+----------------+-------------------+
| 19.05 (experimental) | 0.9.4                 | 1.1.10.16        | 1.1.0          | N/A               |
+----------------------+-----------------------+------------------+----------------+-------------------+
Pre-Installation Configuration
------------------------------
Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_ice`` driver.

- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)

  Toggle to use a 16-byte RX descriptor; by default the RX descriptor is 32 bytes.
Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~
- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, its initialization fails by default.
  But if the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be used, for example::

    -w 80:00.0,safe-mode-support=1

  Then the driver will be initialized successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features like RSS,
  checksum, fdir, tunneling ... are all disabled.
- ``Generic Flow Pipeline Mode Support`` (default ``0``)

  In pipeline mode, a flow can be set at one specific stage by setting the
  parameter ``priority``. Currently, two stages are supported: priority 0 or
  non-zero. Flows with priority 0 are located at the first pipeline stage, which
  is typically used as a firewall to drop packets on a blacklist (we call it the
  permission stage). At this stage, flow rules are created for the device's exact
  match engine: switch. Flows with a non-zero priority are located at the second
  stage, where packets are typically classified and steered to a specific queue
  or queue group (we call it the distribution stage). At this stage, flow rules
  are created for the device's flow director engine.
  In non-pipeline mode, ``priority`` is ignored; a flow rule can be created as
  either a flow director rule or a switch rule depending on its pattern/action
  and the resource allocation situation, and all flows are virtually at the same
  pipeline stage.
  By default, the generic flow API is enabled in non-pipeline mode; the user can
  switch to pipeline mode by setting the ``devargs`` parameter
  ``pipeline-mode-support``, for example::

    -w 80:00.0,pipeline-mode-support=1
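
  As an illustration, the hypothetical sketch below (the port, addresses and
  queue index are made-up values, not from this guide) shows how the two stages
  map to ``rte_flow`` rules that differ only in ``priority``:

  .. code-block:: c

     #include <rte_flow.h>
     #include <rte_ip.h>
     #include <rte_byteorder.h>

     /* Sketch: one rule per pipeline stage on a port started with
      * pipeline-mode-support=1. */
     static int
     add_pipeline_rules(uint16_t port_id)
     {
         struct rte_flow_error error;

         /* Permission stage (priority 0): drop a blacklisted source;
          * such rules go to the switch (exact match) engine. */
         struct rte_flow_attr perm_attr = { .ingress = 1, .priority = 0 };
         struct rte_flow_item_ipv4 bad_src = {
             .hdr.src_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 99)),
         };
         struct rte_flow_item_ipv4 src_mask = {
             .hdr.src_addr = RTE_BE32(0xffffffff),
         };
         struct rte_flow_item perm_pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_IPV4,
               .spec = &bad_src, .mask = &src_mask },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_action drop[] = {
             { .type = RTE_FLOW_ACTION_TYPE_DROP },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };

         if (rte_flow_create(port_id, &perm_attr, perm_pattern,
                             drop, &error) == NULL)
             return -1;

         /* Distribution stage (non-zero priority): steer TCP packets to
          * queue 3; such rules go to the flow director engine. */
         struct rte_flow_attr dist_attr = { .ingress = 1, .priority = 1 };
         struct rte_flow_item dist_pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
             { .type = RTE_FLOW_ITEM_TYPE_TCP },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_action_queue queue = { .index = 3 };
         struct rte_flow_action steer[] = {
             { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };

         if (rte_flow_create(port_id, &dist_attr, dist_pattern,
                             steer, &error) == NULL)
             return -1;

         return 0;
     }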
- ``Flow Mark Support`` (default ``0``)

  This is a hint to the driver to select a data path that supports flow mark extraction.
  NOTE: This is an experimental devarg; it will be removed when either of the
  conditions below is met:

  1) all data paths support flow mark (currently vPMD does not)
  2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.

  An example::

    -w 80:00.0,flow-mark-support=1
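
  On the RX side, the mark set by an ``rte_flow`` ``MARK`` action can then be
  read from the mbuf. A minimal sketch, assuming such a rule is already
  installed:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_mbuf.h>

     /* Sketch: read the flow mark delivered with received packets. */
     static void
     handle_marked_pkts(struct rte_mbuf **pkts, uint16_t nb_pkts)
     {
         uint16_t i;

         for (i = 0; i < nb_pkts; i++) {
             /* PKT_RX_FDIR_ID signals a valid 32-bit mark in hash.fdir.hi. */
             if (pkts[i]->ol_flags & PKT_RX_FDIR_ID)
                 printf("pkt %u carries flow mark %u\n",
                        i, pkts[i]->hash.fdir.hi);
         }
     }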
- ``Protocol extraction per queue``

  Configure the RX queues to do protocol extraction into the mbuf for protocol
  handling acceleration, like checking TCP SYN packets quickly.

  The argument format is::

    -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
    -w 18:00.0,proto_xtr=<protocol>

  Queues are grouped by ``(`` and ``)``. The ``-`` character is used as a range
  separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single-element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.

  The protocol is one of: ``vlan, ipv4, ipv6, ipv6_flow, tcp``.

  .. code-block:: console

     testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'

  This setting means queues 1, 2-3, 8-9 use TCP extraction, queues 10-13 use
  VLAN extraction, and other queues run with no protocol extraction.

  .. code-block:: console

     testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'

  This setting means queues 1, 2-3, 8-9 use TCP extraction, queues 10-23 use
  IPv6 extraction, and other queues use the default VLAN extraction.

  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.
  .. table:: Protocol extraction : ``vlan``

     +----------------------------+----------------------------+
     |           VLAN2            |           VLAN1            |
     +======+===+=================+======+===+=================+
     |  PCP | D |       VID       |  PCP | D |       VID       |
     +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).
  .. table:: Protocol extraction : ``ipv4``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +======+=======+=============+==============+=============+
     | Ver  |Hdr Len|     ToS     |     TTL      |  Protocol   |
     +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.
  .. table:: Protocol extraction : ``ipv6``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +=====+=============+========+=============+==============+
     | Ver |Traffic class|  Flow  | Next Header |  Hop Limit   |
     +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" field.
  .. table:: Protocol extraction : ``ipv6_flow``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +=====+=============+========+============================+
     | Ver |Traffic class|             Flow Label              |
     +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" field.
  .. table:: Protocol extraction : ``tcp``

     +----------------------------+----------------------------+
     |          TCPHDR2           |          TCPHDR1           |
     +============================+======+======+==============+
     |          Reserved          |Offset| RSV  |    Flags     |
     +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved.
  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to check the
  metadata type in ``struct rte_mbuf::ol_flags``.

  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
  access the protocol extraction result in ``struct rte_mbuf``.
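
  For instance, here is a minimal sketch of a fast TCP SYN check, assuming the
  queue was configured with ``proto_xtr=tcp`` and that TCPHDR1 (TCP header
  word 6) sits in the low 16 bits of the metadata:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_mbuf.h>
     #include <rte_tcp.h>
     #include <rte_pmd_ice.h>

     /* Sketch: detect SYN packets from the extraction metadata alone. */
     static void
     check_tcp_syn(struct rte_mbuf *m)
     {
         uint32_t metadata;
         uint16_t tcphdr1;

         /* The dynamic field exists only when proto_xtr is enabled. */
         if (!rte_net_ice_dynf_proto_xtr_metadata_avail())
             return;

         if (m->ol_flags & RTE_PKT_RX_DYNF_PROTO_XTR_TCP) {
             metadata = rte_net_ice_dynf_proto_xtr_metadata_get(m);
             tcphdr1 = (uint16_t)(metadata & 0xffff);
             if (tcphdr1 & RTE_TCP_SYN_FLAG) /* SYN bit of the "Flags" field */
                 printf("TCP SYN packet detected\n");
         }
     }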
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
Features
--------

Vector PMD
~~~~~~~~~~

Vector PMDs for the RX and TX paths are selected automatically. The paths
are chosen based on two conditions:

- ``CPU``
  On the X86 platform, the driver checks whether the CPU supports AVX2.
  If it is supported, the AVX2 paths are chosen; if not, the SSE paths are chosen.

- ``Offload features``
  The supported HW offload features are described in the document ice_vec.ini.
  If any unsupported features are used, the ICE vector PMD is disabled and the
  normal paths are chosen.
Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is not appropriate to send a packet whose destination MAC address is the
sending port's own MAC address. If SW tries to send such packets, HW will
report an MDD event and drop the packets.

Applications based on DPDK should avoid providing such packets.
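
As an illustration (the helper name and the pre-TX filtering approach are our
own, not mandated by the PMD), an application might compare each packet's
destination MAC with the port's own address before transmitting:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_ether.h>
   #include <rte_mbuf.h>

   /* Sketch: drop TX packets whose destination MAC equals the port's own
    * MAC address, so HW never raises an MDD event for them. */
   static uint16_t
   filter_self_addressed(uint16_t port_id, struct rte_mbuf **pkts,
                         uint16_t nb_pkts)
   {
       struct rte_ether_addr port_mac;
       uint16_t i, kept = 0;

       rte_eth_macaddr_get(port_id, &port_mac);

       for (i = 0; i < nb_pkts; i++) {
           struct rte_ether_hdr *eth =
               rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);

           if (rte_is_same_ether_addr(&eth->d_addr, &port_mac))
               rte_pktmbuf_free(pkts[i]); /* would trigger an MDD event */
           else
               pkts[kept++] = pkts[i];
       }
       return kept; /* pass the surviving packets to rte_eth_tx_burst() */
   }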
Device Config Function (DCF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section demonstrates the ICE DCF PMD, which shares the core module with
the ICE PMD.

A DCF (Device Config Function) PMD binds to the device's trusted VF with ID 0,
and it can act as a sole controlling entity to exercise advanced functionality
(such as switch and ACL) for the rest of the VFs.

The DCF PMD needs to advertise and acquire the DCF capability, which allows the
DCF to send AdminQ commands that it would like to execute over to the PF and to
receive responses for the same from the PF.
.. figure:: img/ice_dcf.*

   DCF Communication flow.
#. Create the VFs::

      echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
#. Enable the VF0 trust on::

      ip link set dev enp24s0f0 vf 0 trust on

#. Bind the VF0, and run testpmd with the 'cap=dcf' devarg::

      testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i

#. Monitor the VF2 interface network traffic::

      tcpdump -e -nn -i enp24s1f2

#. Create one flow to redirect the traffic to VF2 by DCF::

      flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 \
          dst is 192.168.0.3 / end actions vf id 2 / end

#. Send the packet, and it should be displayed on tcpdump::

      sendp(Ether(src='3c:fd:fe:aa:bb:78', dst='00:00:00:01:02:03') /
          IP(src='192.168.0.2', dst='192.168.0.3') / TCP(flags='S') /
          Raw(load='XXXXXXXXXX'), iface="enp24s0f0", count=10)
Sample Application Notes
------------------------
Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd`` and add vlan 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i

   ...

   testpmd> rx_vlan add 10 0
Limitations or Known issues
---------------------------
The Intel E810 requires a programmable pipeline package to be downloaded
by the driver to support normal operations. The E810 has limited
functionality built in to allow PXE boot and other use cases, but the
driver must download a package file during the driver initialization
stage.
The default DDP package file name is ice.pkg. For a specific NIC, the
DDP package to be loaded can have a filename: ice-xxxxxx.pkg, where
'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC. For
example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ice-00ccbbffffaa0568.pkg
(in hex and all lower case). During initialization, the driver searches
the following paths in order: /lib/firmware/updates/intel/ice/ddp
and /lib/firmware/intel/ice/ddp. The corresponding device-specific DDP
package will be downloaded first if the file exists. If not, then the
driver tries to load the default package. The type of the loaded package
is stored in ``ice_adapter->active_pkg_type``.
A symbolic link to the DDP package file is also ok. The same package
file is used by both the kernel driver and the DPDK PMD.
19.02 limitation
~~~~~~~~~~~~~~~~

Ice code released in 19.02 is for evaluation only.