..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.

ICE Poll Mode Driver
====================

The ice PMD (**librte_net_ice**) provides poll mode driver support for
10/25/50/100 Gbps Intel® Ethernet 800 Series Network Adapters based on
the Intel Ethernet Controller E810 and Intel Ethernet Connection E822/E823.

Prerequisites
-------------

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Please follow the matching list to download the specific kernel driver, firmware and DDP package from
  `https://www.intel.com/content/www/us/en/search.html?ws=text#q=e810&t=Downloads&layout=table`.

- To understand what a DDP package is and how it works, please review the `Intel® Ethernet Controller E810 Dynamic
  Device Personalization (DDP) for Telecommunications Technology Guide <https://cdrdv2.intel.com/v1/dl/getContent/617015>`_.

- To understand DDP for COMMs usage with DPDK, please review the `Intel® Ethernet 800 Series Telecommunication (Comms)
  Dynamic Device Personalization (DDP) Package <https://cdrdv2.intel.com/v1/dl/getContent/618651>`_.

Windows Prerequisites
---------------------

- Follow the DPDK `Getting Started Guide for Windows <https://doc.dpdk.org/guides/windows_gsg/index.html>`_ to set up the basic DPDK environment.

- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

- Loading of a private Dynamic Device Personalization (DDP) package is not supported on Windows.

Recommended Matching List
-------------------------

It is highly recommended to upgrade the ice kernel driver, firmware and DDP package
to avoid compatibility issues with the ice PMD.
Here is the suggested matching list, which has been tested and verified.
For detailed information, refer to the Tested Platforms/Tested NICs chapters in the release notes.

   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    DPDK   | Kernel Driver | OS Default DDP  | COMMS DDP | Wireless DDP |  Firmware |
   +===========+===============+=================+===========+==============+===========+
   |    20.11  |     1.3.2     |      1.3.20     |  1.3.24   |      N/A     |    2.3    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.02  |    1.4.11     |      1.3.24     |  1.3.28   |     1.3.4    |    2.4    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.05  |     1.6.5     |      1.3.26     |  1.3.30   |     1.3.6    |    3.0    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+

Pre-Installation Configuration
------------------------------

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, its initialization fails by default.
  If the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be used, for example::

    -a 80:00.0,safe-mode-support=1

  Then the driver will be initialized successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features such as RSS,
  checksum offload, flow director and tunneling are all disabled.

- ``Generic Flow Pipeline Mode Support`` (default ``0``)

  In pipeline mode, a flow can be set at one specific stage by setting the
  parameter ``priority``. Currently, two stages are supported: priority = 0 or !0.
  Flows with priority 0 are located at the first pipeline stage, which is typically
  used as a firewall to drop packets on a blocklist (we call it the permission
  stage). At this stage, flow rules are created for the device's exact match
  engine: switch. Flows with priority !0 are located at the second stage, where
  packets are typically classified and steered to a specific queue or queue group
  (we call it the distribution stage). At this stage, flow rules are created for
  the device's flow director engine.
  In non-pipeline mode, ``priority`` is ignored, and a flow rule can be created as
  either a flow director rule or a switch rule depending on its pattern/action and
  the resource allocation situation; all flows are virtually at the same pipeline
  stage.
  By default, the generic flow API is enabled in non-pipeline mode; the user can
  choose pipeline mode by setting the ``devargs`` parameter
  ``pipeline-mode-support``, for example::

    -a 80:00.0,pipeline-mode-support=1

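  As an illustration, a rule could then be placed at the permission stage with
  ``priority 0`` (a hypothetical ``testpmd`` session; port and addresses are
  examples only)::

    testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 / end actions drop / end
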
- ``Protocol extraction for per queue``

  Configure the RX queues to do protocol extraction into mbuf for protocol
  handling acceleration, like checking the TCP SYN packets quickly.

  The argument format is::

    -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
    -a 18:00.0,proto_xtr=<protocol>

  Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
  is used as a range separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.

  Protocol is one of: ``vlan``, ``ipv4``, ``ipv6``, ``ipv6_flow``, ``tcp``, ``ip_offset``.

  .. code-block:: console

      dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
  VLAN extraction, and other queues run with no protocol extraction.

  .. code-block:: console

      dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
  IPv6 extraction, and other queues use the default VLAN extraction.

  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.

  .. table:: Protocol extraction : ``vlan``

    +----------------------------+----------------------------+
    |           VLAN2            |           VLAN1            |
    +======+===+=================+======+===+=================+
    |  PCP | D |       VID       |  PCP | D |       VID       |
    +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).

  .. table:: Protocol extraction : ``ipv4``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +======+=======+=============+==============+=============+
    |  Ver |Hdr Len|     ToS     |      TTL     |   Protocol  |
    +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.

  .. table:: Protocol extraction : ``ipv6``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+=============+==============+
    | Ver |Traffic class|  Flow  | Next Header |   Hop Limit  |
    +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``ipv6_flow``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +=====+=============+========+============================+
    | Ver |Traffic class|              Flow Label             |
    +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``tcp``

    +----------------------------+----------------------------+
    |          TCPHDR2           |          TCPHDR1           |
    +============================+======+======+==============+
    |          Reserved          |Offset|  RSV |     Flags    |
    +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved

  .. table:: Protocol extraction : ``ip_offset``

    +----------------------------+----------------------------+
    |           IPHDR2           |           IPHDR1           |
    +============================+============================+
    |       IPv6 HDR Offset      |       IPv4 HDR Offset      |
    +----------------------------+----------------------------+

  IPHDR1 - Outer/Single IPv4 Header offset.

  IPHDR2 - Outer/Single IPv6 Header offset.

  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to check the
  metadata type from ``struct rte_mbuf::ol_flags``.

  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
  access the protocol extraction result in ``struct rte_mbuf``.

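  A minimal sketch of how an application might consume this metadata on the RX
  path, assuming the helpers named above from ``rte_pmd_ice.h`` and TCP
  extraction enabled on the queue (error handling omitted):

  .. code-block:: c

     #include <stdio.h>
     #include <inttypes.h>
     #include <rte_mbuf.h>
     #include <rte_pmd_ice.h>

     static void
     inspect_proto_xtr(struct rte_mbuf *m)
     {
         /* The dynamic field/flags only exist when proto_xtr was enabled
          * via devargs and registration succeeded. */
         if (!rte_net_ice_dynf_proto_xtr_metadata_avail())
             return;

         /* Was TCP metadata extracted for this packet? */
         if (m->ol_flags & RTE_PKT_RX_DYNF_PROTO_XTR_TCP) {
             uint32_t meta = rte_net_ice_dynf_proto_xtr_metadata_get(m);
             /* Low 16 bits hold TCPHDR1 ("Data Offset" and "Flags"). */
             printf("tcp proto_xtr metadata: 0x%08" PRIx32 "\n", meta);
         }
     }
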
- ``Hardware debug mask log support`` (default ``0``)

  The user can enable the related hardware debug mask, such as ICE_DBG_NVM::

    -a 0000:88:00.0,hw_debug_mask=0x80 --log-level=pmd.net.ice.driver:8

  These ICE_DBG_XXX are defined in ``drivers/net/ice/base/ice_type.h``.

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

Features
--------

Vector PMD
~~~~~~~~~~

Vector PMDs for the RX and TX paths are selected automatically. The paths
are chosen based on two conditions:

- ``CPU``

  On the x86 platform, the driver checks if the CPU supports AVX2.
  If it is supported, AVX2 paths will be chosen; if not, SSE is chosen.
  If the CPU supports AVX512 and the EAL argument ``--force-max-simd-bitwidth``
  is set to 512, AVX512 paths will be chosen (see the example after this list).

- ``Offload features``

  The supported HW offload features are described in the document ice.ini.
  A value "P" means the offload feature is not supported by the vector path.
  If any unsupported feature is used, the ICE vector PMD is disabled and the
  normal paths are chosen.

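For example, a ``testpmd`` invocation forcing the AVX512 paths on a capable
CPU (the PCI address is illustrative)::

    dpdk-testpmd -l 1-4 -n 4 --force-max-simd-bitwidth=512 -a 18:00.0 -- -i
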
Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is not appropriate to send a packet whose destination MAC address is the
sending port's own MAC address. If software tries to send such packets, the
hardware will report an MDD event and drop the packets.

Applications based on DPDK should avoid providing such packets, for example
by filtering them before transmit, as in the sketch below.

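A minimal sketch of such a pre-transmit filter; it assumes the ``dst_addr``
field name used by recent DPDK releases (older releases name it ``d_addr``),
and the helper name is illustrative:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_ether.h>
   #include <rte_mbuf.h>

   /* Drop any packet addressed to the port's own MAC before calling
    * rte_eth_tx_burst(), so the hardware never raises an MDD event for it.
    * Returns the number of packets kept in pkts[]. */
   static uint16_t
   drop_pkts_to_own_mac(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
   {
       struct rte_ether_addr own;
       uint16_t i, keep = 0;

       if (rte_eth_macaddr_get(port_id, &own) != 0)
           return n; /* cannot check; leave the burst untouched */

       for (i = 0; i < n; i++) {
           const struct rte_ether_hdr *eth =
               rte_pktmbuf_mtod(pkts[i], const struct rte_ether_hdr *);

           if (rte_is_same_ether_addr(&eth->dst_addr, &own))
               rte_pktmbuf_free(pkts[i]); /* would trigger an MDD event */
           else
               pkts[keep++] = pkts[i];
       }
       return keep;
   }
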
Device Config Function (DCF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section demonstrates the ICE DCF PMD, which shares the core module with
the ICE PMD and the iAVF PMD.

A DCF (Device Config Function) PMD binds to the device's trusted VF with ID 0;
it can act as a sole controlling entity to exercise advanced functionality
(such as switch and ACL) for the rest of the VFs.

The DCF PMD needs to advertise and acquire the DCF capability, which allows the
DCF to send AdminQ commands that it would like to execute over to the PF, and
to receive responses for the same from the PF.

.. figure:: img/ice_dcf.*

   DCF Communication flow.

#. Create the VFs::

     echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs

#. Enable the VF0 trust on::

     ip link set dev enp24s0f0 vf 0 trust on

#. Bind the VF0, and run testpmd with the ``cap=dcf`` devarg::

     dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i

#. Monitor the VF2 interface network traffic::

     tcpdump -e -nn -i enp24s1f2

#. Create one flow to redirect the traffic to VF2 by DCF::

     flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 \
          dst is 192.168.0.3 / end actions vf id 2 / end

#. Send the packet, and it should be displayed on tcpdump::

     sendp(Ether(src='3c:fd:fe:aa:bb:78', dst='00:00:00:01:02:03')/IP(src='192.168.0.2',
           dst='192.168.0.3')/TCP(flags='S')/Raw(load='XXXXXXXXXX'),
           iface='enp24s0f0', count=10)

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd`` and add vlan 10 to port 0:

.. code-block:: console

    ./app/dpdk-testpmd -l 0-15 -n 4 -- -i
    ...

    testpmd> rx_vlan add 10 0

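Since VLAN filtering requires promiscuous mode to be off, it can be disabled
first (a standard ``testpmd`` command, shown here for completeness):

.. code-block:: console

    testpmd> set promisc 0 off
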
Limitations or Known issues
---------------------------

The Intel E810 requires a programmable pipeline package be downloaded
by the driver to support normal operations. The E810 has limited
functionality built in to allow PXE boot and other use cases, but the
driver must download a package file during the driver initialization
stage.

The default DDP package file name is ice.pkg. For a specific NIC, the
DDP package to be loaded can have a filename: ice-xxxxxx.pkg,
where 'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC. For
example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ice-00ccbbffffaa0568.pkg
(in hex and all lower case). During initialization, the driver searches
the following paths in order: /lib/firmware/updates/intel/ice/ddp
and /lib/firmware/intel/ice/ddp. The corresponding device-specific DDP
package will be downloaded first if the file exists. If not, then the
driver tries to load the default package. The type of loaded package
is stored in ``ice_adapter->active_pkg_type``.

A symbolic link to the DDP package file is also acceptable. The same package
file is used by both the kernel driver and the DPDK PMD.

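As an example of wiring this up by hand (a sketch: the PCI address and the
package version below are illustrative), the device serial number can be read
with ``lspci`` and used to name a symbolic link to an existing package::

    # Read the 64-bit PCIe Device Serial Number of the NIC.
    lspci -vv -s 18:00.0 | grep 'Device Serial Number'

    # Install a device-specific DDP package as a symbolic link,
    # e.g. for serial number 00-cc-bb-ff-ff-aa-05-68.
    ln -s /lib/firmware/updates/intel/ice/ddp/ice-1.3.26.0.pkg \
          /lib/firmware/updates/intel/ice/ddp/ice-00ccbbffffaa0568.pkg
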
Windows support: The DDP package is not supported on Windows, so
loading of the package is disabled on Windows.