..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.

ICE Poll Mode Driver
====================
The ice PMD (**librte_net_ice**) provides poll mode driver support for
10/25/50/100 Gbps Intel® Ethernet 800 Series Network Adapters based on
the Intel Ethernet Controller E810 and Intel Ethernet Connection E822/E823.

Prerequisites
-------------
- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Please follow the matching list to download the specific kernel driver, firmware and DDP package from
  `https://www.intel.com/content/www/us/en/search.html?ws=text#q=e810&t=Downloads&layout=table`.

- To understand what a DDP package is and how it works, please review the `Intel® Ethernet Controller E810 Dynamic
  Device Personalization (DDP) for Telecommunications Technology Guide <https://cdrdv2.intel.com/v1/dl/getContent/617015>`_.

- To understand DDP for COMMs usage with DPDK, please review the `Intel® Ethernet 800 Series Telecommunication (Comms)
  Dynamic Device Personalization (DDP) Package <https://cdrdv2.intel.com/v1/dl/getContent/618651>`_.
Windows Prerequisites
---------------------

- Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
  to set up the basic DPDK environment.
- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

- Loading of a private Dynamic Device Personalization (DDP) package is not supported on Windows.
Recommended Matching List
-------------------------

It is highly recommended to upgrade the ice kernel driver, firmware and DDP package
to avoid compatibility issues with the ice PMD.
Here is the suggested matching list, which has been tested and verified.
For detailed information, refer to the Tested Platforms/Tested NICs chapter in the release notes.
+-----------+---------------+-----------------+-----------+--------------+-----------+
|   DPDK    | Kernel Driver | OS Default DDP  | COMMS DDP | Wireless DDP | Firmware  |
+===========+===============+=================+===========+==============+===========+
|   20.11   |     1.3.2     |     1.3.20      |  1.3.24   |     N/A      |    2.3    |
+-----------+---------------+-----------------+-----------+--------------+-----------+
|   21.02   |    1.4.11     |     1.3.24      |  1.3.28   |    1.3.4     |    2.4    |
+-----------+---------------+-----------------+-----------+--------------+-----------+
|   21.05   |     1.6.5     |     1.3.26      |  1.3.30   |    1.3.6     |    3.0    |
+-----------+---------------+-----------------+-----------+--------------+-----------+
Pre-Installation Configuration
------------------------------

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~
- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, its initialization fails by default.
  But if the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be used, for example::

    -a 80:00.0,safe-mode-support=1

  Then the driver will be initialized successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features such as RSS,
  checksum, fdir and tunneling are all disabled.
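
  The same devargs can also be supplied programmatically through the hotplug
  API; a minimal sketch (assuming hotplug fits the deployment; the PCI address
  is a placeholder):

  .. code-block:: c

     #include <rte_dev.h>
     #include <rte_debug.h>

     /* Attach the port with safe-mode-support=1, the programmatic
      * equivalent of the -a EAL option shown above. */
     static void
     probe_in_safe_mode(void)
     {
         if (rte_dev_probe("0000:80:00.0,safe-mode-support=1") != 0)
             rte_panic("cannot probe device in safe mode\n");
     }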
- ``Generic Flow Pipeline Mode Support`` (default ``0``)

  In pipeline mode, a flow can be set at one specific stage by setting the
  ``priority`` parameter. Currently, two stages are supported: priority = 0 or !0.
  Flows with priority 0 are located at the first pipeline stage, which is
  typically used as a firewall to drop packets on a blocklist (we call it the
  permission stage). At this stage, flow rules are created for the device's
  exact match engine: switch. Flows with priority !0 are located at the second
  stage, where packets are typically classified and steered to a specific queue
  or queue group (we call it the distribution stage). At this stage, flow rules
  are created for the device's flow director engine.
  In non-pipeline mode, ``priority`` is ignored; a flow rule can be created as a
  flow director rule or a switch rule depending on its pattern/action and the
  resource allocation situation, and all flows are virtually at the same
  pipeline stage.
  By default, the generic flow API is enabled in non-pipeline mode; the user can
  choose pipeline mode by setting the ``devargs`` parameter
  ``pipeline-mode-support``, for example::

    -a 80:00.0,pipeline-mode-support=1
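
  In terms of the generic flow API, the stage is selected through the
  ``priority`` field of ``struct rte_flow_attr``; a minimal sketch (error
  handling elided, pattern and actions supplied by the caller):

  .. code-block:: c

     #include <rte_flow.h>

     /* Create a rule at the chosen pipeline stage: priority 0 targets the
      * permission stage (switch engine), non-zero priority targets the
      * distribution stage (flow director engine). */
     static struct rte_flow *
     create_rule_at_stage(uint16_t port_id, uint32_t priority,
                          const struct rte_flow_item pattern[],
                          const struct rte_flow_action actions[])
     {
         struct rte_flow_error err;
         struct rte_flow_attr attr = {
             .ingress = 1,
             .priority = priority,
         };

         return rte_flow_create(port_id, &attr, pattern, actions, &err);
     }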
- ``Protocol extraction per queue``

  Configure the Rx queues to do protocol extraction into the mbuf for protocol
  handling acceleration, such as checking TCP SYN packets quickly.

  The argument format is::

    -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
    -a 18:00.0,proto_xtr=<protocol>

  Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
  is used as a range separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single-element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.

  Protocol is one of: ``vlan, ipv4, ipv6, ipv6_flow, tcp, ip_offset``.
  .. code-block:: console

     dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
  VLAN extraction, and other queues run with no protocol extraction.

  .. code-block:: console

     dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'

  This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
  IPv6 extraction, and other queues use the default VLAN extraction.

  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.
  .. table:: Protocol extraction : ``vlan``

     +----------------------------+----------------------------+
     |           VLAN2            |           VLAN1            |
     +======+===+=================+======+===+=================+
     |  PCP | D |       VID       |  PCP | D |       VID       |
     +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).
  .. table:: Protocol extraction : ``ipv4``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +======+=======+=============+==============+=============+
     |  Ver |Hdr Len|     ToS     |      TTL     |   Protocol  |
     +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.
  .. table:: Protocol extraction : ``ipv6``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +=====+=============+========+=============+==============+
     | Ver |Traffic class|  Flow  | Next Header |   Hop Limit  |
     +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.
  .. table:: Protocol extraction : ``ipv6_flow``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +=====+=============+========+============================+
     | Ver |Traffic class|          Flow Label                 |
     +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.
  .. table:: Protocol extraction : ``tcp``

     +----------------------------+----------------------------+
     |           TCPHDR2          |           TCPHDR1          |
     +============================+======+======+==============+
     |          Reserved          |Offset|  RSV |     Flags    |
     +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved.
  .. table:: Protocol extraction : ``ip_offset``

     +----------------------------+----------------------------+
     |           IPHDR2           |           IPHDR1           |
     +============================+============================+
     |       IPv6 HDR Offset      |       IPv4 HDR Offset      |
     +----------------------------+----------------------------+

  IPHDR1 - Outer/Single IPv4 Header offset.

  IPHDR2 - Outer/Single IPv6 Header offset.
  Use ``rte_net_ice_dynf_proto_xtr_metadata_get`` to access the protocol
  extraction metadata, and use ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` to check the
  metadata type in ``struct rte_mbuf::ol_flags``.

  The ``rte_net_ice_dump_proto_xtr_metadata`` routine shows how to
  access the protocol extraction result in ``struct rte_mbuf``.
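
  As an illustration, here is a minimal sketch of consuming the ``vlan``
  extraction result in an application; the accessors below are declared in
  ``rte_pmd_ice.h``, and the field layout follows the ``vlan`` table above:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_mbuf.h>
     #include <rte_pmd_ice.h>

     /* Dump the VLAN extraction metadata of one received mbuf, assuming
      * its queue was configured with proto_xtr=vlan. */
     static void
     dump_vlan_xtr(struct rte_mbuf *m)
     {
         /* The dynamic field exists only if the PMD registered it. */
         if (!rte_net_ice_dynf_proto_xtr_metadata_avail())
             return;

         if (m->ol_flags & RTE_PKT_RX_DYNF_PROTO_XTR_VLAN) {
             uint32_t meta = rte_net_ice_dynf_proto_xtr_metadata_get(m);

             /* Per the ``vlan`` table: low 16 bits hold VLAN1,
              * high 16 bits hold VLAN2. */
             printf("vlan1=0x%04x vlan2=0x%04x\n",
                    meta & 0xffff, meta >> 16);
         }
     }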
- ``Hardware debug mask log support`` (default ``0``)

  Users can enable the related hardware debug mask such as ICE_DBG_NVM::

    -a 0000:88:00.0,hw_debug_mask=0x80 --log-level=pmd.net.ice.driver:8

  These ICE_DBG_XXX are defined in ``drivers/net/ice/base/ice_type.h``.
- ``1PPS out support``

  The E810 supports four single-ended GPIO signals (SDP[20:23]). The 1PPS
  signal outputs via SDP[20:23]. Users can select the GPIO pin index flexibly.
  Pin index 0 means SDP20, 1 means SDP21 and so on. For example::

    -a af:00.0,pps_out='[pin:0]'
- ``Low Rx latency`` (default ``0``)

  vRAN workloads require a low latency DPDK interface for the fronthaul
  interface connection to the radio. By specifying ``1`` for the parameter
  ``rx_low_latency``, each completed Rx descriptor can be written immediately
  to host memory and the Rx interrupt latency can be reduced to 2us::

    -a 0000:88:00.0,rx_low_latency=1

  As a trade-off, this configuration may cause packet processing performance
  degradation due to the PCI bandwidth limitation.
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
Features
--------

Vector PMD
~~~~~~~~~~

The vector PMD for the Rx and Tx paths is selected automatically. The paths
are chosen based on two conditions:

- ``CPU``

  On the x86 platform, the driver checks if the CPU supports AVX2.
  If it's supported, AVX2 paths will be chosen. If not, SSE is chosen.
  If the CPU supports AVX512 and the EAL argument ``--force-max-simd-bitwidth``
  is set to 512, AVX512 paths will be chosen.
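
  The SIMD width cap can also be set from the application; a minimal sketch
  (assuming DPDK 20.11 or later, where this EAL API exists; it must run during
  application init, before the PMD selects its data path):

  .. code-block:: c

     #include <rte_vect.h>

     /* Allow the AVX512 Rx/Tx paths, equivalent to passing
      * --force-max-simd-bitwidth=512 on the EAL command line. */
     static int
     allow_avx512_paths(void)
     {
         return rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_512);
     }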
- ``Offload features``

  The supported HW offload features are described in the document ice.ini.
  A value "P" means the offload feature is not supported by the vector path.
  If any unsupported features are used, the ICE vector PMD is disabled and the
  normal paths are chosen.
Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It's not appropriate to send a packet whose destination MAC address is
this port's own MAC address. If SW tries to send such packets, HW will
report an MDD event and drop the packets.

Applications based on DPDK should avoid sending such packets.
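
A minimal sketch of such a guard on the transmit side (the helper name is
ours; note that ``dst_addr`` is named ``d_addr`` in DPDK releases before
21.11):

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_ether.h>
   #include <rte_mbuf.h>

   /* Return non-zero if sending this mbuf on port_id would trigger an
    * MDD event, i.e. its destination MAC is the port's own MAC. */
   static int
   tx_would_trigger_mdd(uint16_t port_id, const struct rte_mbuf *m)
   {
       const struct rte_ether_hdr *eth =
           rte_pktmbuf_mtod(m, const struct rte_ether_hdr *);
       struct rte_ether_addr port_mac;

       if (rte_eth_macaddr_get(port_id, &port_mac) != 0)
           return 0;

       return rte_is_same_ether_addr(&eth->dst_addr, &port_mac);
   }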
Device Config Function (DCF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section demonstrates the ICE DCF PMD, which shares the core module with
the ICE PMD and the iAVF PMD.

A DCF (Device Config Function) PMD binds to the device's trusted VF with ID 0,
and it can act as a sole controlling entity to exercise advanced functionality
(such as switch or ACL) for the rest of the VFs.

The DCF PMD needs to advertise and acquire the DCF capability, which allows the
DCF to send AdminQ commands that it would like to execute over to the PF and
receive responses for the same from the PF.
.. figure:: img/ice_dcf.*

   DCF Communication flow.
#. Create the VFs::

     echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs

#. Enable trust on VF0::

     ip link set dev enp24s0f0 vf 0 trust on

#. Bind VF0, and run testpmd with the 'cap=dcf' devarg::

     dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i

#. Monitor the VF2 interface network traffic::

     tcpdump -e -nn -i enp24s1f2

#. Create one flow to redirect the traffic to VF2 by DCF::

     flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 \
          dst is 192.168.0.3 / end actions vf id 2 / end

#. Send the packet, and it should be displayed on tcpdump::

     sendp(Ether(src='3c:fd:fe:aa:bb:78', dst='00:00:00:01:02:03') /
           IP(src='192.168.0.2', dst='192.168.0.3') / TCP(flags='S') /
           Raw(load='XXXXXXXXXX'), iface='enp24s0f0', count=10)
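
For reference, a sketch of the same redirect rule built directly with the
rte_flow C API on the DCF port (error handling reduced to a NULL check):

.. code-block:: c

   #include <stdio.h>
   #include <rte_flow.h>
   #include <rte_ip.h>
   #include <rte_byteorder.h>

   /* Redirect IPv4 192.168.0.2 -> 192.168.0.3 traffic to VF 2,
    * the C equivalent of the testpmd rule above. */
   static struct rte_flow *
   redirect_to_vf2(uint16_t dcf_port_id)
   {
       struct rte_flow_error err;
       struct rte_flow_attr attr = { .ingress = 1, .priority = 0 };

       struct rte_flow_item_ipv4 ip_spec = {
           .hdr = {
               .src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 2)),
               .dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 3)),
           },
       };
       struct rte_flow_item_ipv4 ip_mask = {
           .hdr = {
               .src_addr = RTE_BE32(0xffffffff),
               .dst_addr = RTE_BE32(0xffffffff),
           },
       };
       struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_IPV4,
             .spec = &ip_spec, .mask = &ip_mask },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };

       struct rte_flow_action_vf vf = { .id = 2 };
       struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };
       struct rte_flow *flow;

       flow = rte_flow_create(dcf_port_id, &attr, pattern, actions, &err);
       if (flow == NULL)
           printf("flow create failed: %s\n",
                  err.message ? err.message : "unknown");
       return flow;
   }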
Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd``, and add vlan 10 to port 0:

.. code-block:: console

   ./app/dpdk-testpmd -l 0-15 -n 4 -- -i

   testpmd> rx_vlan add 10 0
Limitations or Known issues
---------------------------

The Intel E810 requires a programmable pipeline package to be downloaded
by the driver to support normal operations. The E810 has limited
functionality built in to allow PXE boot and other use cases, but the
driver must download a package file during the driver initialization
stage.

The default DDP package file name is ice.pkg. For a specific NIC, the
DDP package to be loaded can have a filename ice-xxxxxx.pkg,
where 'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC. For
example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ice-00ccbbffffaa0568.pkg
(in hex and all lower case). During initialization, the driver searches
the following paths in order: /lib/firmware/updates/intel/ice/ddp
and /lib/firmware/intel/ice/ddp. The corresponding device-specific DDP
package will be downloaded first if the file exists. If not, then the
driver tries to load the default package. The type of the loaded package
is stored in ``ice_adapter->active_pkg_type``.
A symbolic link to the DDP package file also works. The same package
file is used by both the kernel driver and the DPDK PMD.

.. note::

   Windows support: The DDP package is not supported on Windows, so
   loading of the package is disabled on Windows.