1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright(c) 2016 Intel Corporation.
The i40e PMD (**librte_net_i40e**) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and the Intel Ethernet
Connection X722 (which supports only a subset of the features listed below).
16 Features of the i40e PMD are:
18 - Multiple queues for TX and RX
19 - Receiver Side Scaling (RSS)
21 - Packet type information
25 - VLAN/QinQ stripping and inserting
29 - Port hardware statistics
31 - Link state information
33 - Mirror on port, VLAN and VSI
34 - Interrupt mode for RX
- Scatter/gather for TX and RX
36 - Vector Poll mode driver
41 - IEEE1588/802.1AS timestamping
42 - VF Daemon (VFD) - EXPERIMENTAL
43 - Dynamic Device Personalization (DDP)
44 - Queue region configuration
45 - Virtual Function Port Representors
- Malicious Device Driver event detection and notification
- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
55 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
57 - To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
58 section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
60 - Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
61 <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.
63 - For information about supported media, please refer to this document: `Intel® Ethernet Controller X710/XXV710/XL710 Feature Support Matrix
64 <http://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf>`_.
68 * Some adapters based on the Intel(R) Ethernet Controller 700 Series only
69 support Intel Ethernet Optics modules. On these adapters, other modules are not
70 supported and will not function.
72 * For connections based on Intel(R) Ethernet Controller 700 Series,
73 support is dependent on your system board. Please see your vendor for details.
75 * In all cases Intel recommends using Intel Ethernet Optics; other modules
76 may function but are not validated by Intel. Contact Intel for supported media types.
81 - Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
82 to setup the basic DPDK environment.
84 - Identify the Intel® Ethernet adapter and get the latest NVM/FW version.
- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.
- To load the NetUIO driver, follow the steps mentioned in `dpdk-kmods repository
89 <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.
91 Recommended Matching List
92 -------------------------
It is highly recommended to upgrade the i40e kernel driver and firmware to
avoid compatibility issues with the i40e PMD. The suggested matching list below
has been tested and verified. For detailed information, refer to the Tested
Platforms/Tested NICs chapter in the release notes.
99 For X710/XL710/XXV710,
101 +--------------+-----------------------+------------------+
102 | DPDK version | Kernel driver version | Firmware version |
103 +==============+=======================+==================+
104 | 22.03 | 2.17.15 | 8.30 |
105 +--------------+-----------------------+------------------+
106 | 21.11 | 2.17.4 | 8.30 |
107 +--------------+-----------------------+------------------+
108 | 21.08 | 2.15.9 | 8.30 |
109 +--------------+-----------------------+------------------+
110 | 21.05 | 2.15.9 | 8.30 |
111 +--------------+-----------------------+------------------+
112 | 21.02 | 2.14.13 | 8.00 |
113 +--------------+-----------------------+------------------+
114 | 20.11 | 2.14.13 | 8.00 |
115 +--------------+-----------------------+------------------+
116 | 20.08 | 2.12.6 | 7.30 |
117 +--------------+-----------------------+------------------+
118 | 20.05 | 2.11.27 | 7.30 |
119 +--------------+-----------------------+------------------+
120 | 20.02 | 2.10.19 | 7.20 |
121 +--------------+-----------------------+------------------+
122 | 19.11 | 2.9.21 | 7.00 |
123 +--------------+-----------------------+------------------+
124 | 19.08 | 2.8.43 | 7.00 |
125 +--------------+-----------------------+------------------+
126 | 19.05 | 2.7.29 | 6.80 |
127 +--------------+-----------------------+------------------+
128 | 19.02 | 2.7.26 | 6.80 |
129 +--------------+-----------------------+------------------+
130 | 18.11 | 2.4.6 | 6.01 |
131 +--------------+-----------------------+------------------+
132 | 18.08 | 2.4.6 | 6.01 |
133 +--------------+-----------------------+------------------+
134 | 18.05 | 2.4.6 | 6.01 |
135 +--------------+-----------------------+------------------+
136 | 18.02 | 2.4.3 | 6.01 |
137 +--------------+-----------------------+------------------+
138 | 17.11 | 2.1.26 | 6.01 |
139 +--------------+-----------------------+------------------+
140 | 17.08 | 2.0.19 | 6.01 |
141 +--------------+-----------------------+------------------+
142 | 17.05 | 1.5.23 | 5.05 |
143 +--------------+-----------------------+------------------+
144 | 17.02 | 1.5.23 | 5.05 |
145 +--------------+-----------------------+------------------+
146 | 16.11 | 1.5.23 | 5.05 |
147 +--------------+-----------------------+------------------+
148 | 16.07 | 1.4.25 | 5.04 |
149 +--------------+-----------------------+------------------+
150 | 16.04 | 1.4.25 | 5.02 |
151 +--------------+-----------------------+------------------+
For X722,

+--------------+-----------------------+------------------+
157 | DPDK version | Kernel driver version | Firmware version |
158 +==============+=======================+==================+
159 | 22.03 | 2.17.15 | 5.50 |
160 +--------------+-----------------------+------------------+
161 | 21.11 | 2.17.4 | 5.30 |
162 +--------------+-----------------------+------------------+
163 | 21.08 | 2.15.9 | 5.30 |
164 +--------------+-----------------------+------------------+
165 | 21.05 | 2.15.9 | 5.30 |
166 +--------------+-----------------------+------------------+
167 | 21.02 | 2.14.13 | 5.00 |
168 +--------------+-----------------------+------------------+
169 | 20.11 | 2.13.10 | 5.00 |
170 +--------------+-----------------------+------------------+
171 | 20.08 | 2.12.6 | 4.11 |
172 +--------------+-----------------------+------------------+
173 | 20.05 | 2.11.27 | 4.11 |
174 +--------------+-----------------------+------------------+
175 | 20.02 | 2.10.19 | 4.11 |
176 +--------------+-----------------------+------------------+
177 | 19.11 | 2.9.21 | 4.10 |
178 +--------------+-----------------------+------------------+
179 | 19.08 | 2.9.21 | 4.10 |
180 +--------------+-----------------------+------------------+
181 | 19.05 | 2.7.29 | 3.33 |
182 +--------------+-----------------------+------------------+
183 | 19.02 | 2.7.26 | 3.33 |
184 +--------------+-----------------------+------------------+
185 | 18.11 | 2.4.6 | 3.33 |
186 +--------------+-----------------------+------------------+
189 Pre-Installation Configuration
190 ------------------------------
195 The following options can be modified in the ``config/rte_config.h`` file.
197 - ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)
199 Number of queues reserved for PF.
201 - ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)
203 Number of queues reserved for each VMDQ Pool.
205 Runtime Config Options
206 ~~~~~~~~~~~~~~~~~~~~~~
208 - ``Reserved number of Queues per VF`` (default ``4``)
The number of reserved queues per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
VF can be configured with an EAL parameter like ``-a aaaa:bb.cc,queue-num-per-vf=n``.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If a VF requests more than
the reserved number of queues, the PF can allocate up to 16 queues to it after a VF
reset.
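
For example, a minimal, illustrative ``testpmd`` invocation that reserves 8 queues
per VF on the PF at a placeholder PCI address:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a aaaa:bb.cc,queue-num-per-vf=8 -- -i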
219 - ``Support multiple driver`` (default ``disable``)
There was a multiple driver support issue when a 700 Series Ethernet
Adapter was used by both the Linux kernel driver and the DPDK PMD. To fix this
issue, the ``devargs`` parameter ``support-multi-driver`` is introduced, for example::
225 -a 84:00.0,support-multi-driver=1
227 With the above configuration, DPDK PMD will not change global registers, and
228 will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
229 DPDK and Linux Kernel.
231 - ``Support VF Port Representor`` (default ``not enabled``)
233 The i40e PF PMD supports the creation of VF port representors for the control
234 and monitoring of i40e virtual function devices. Each port representor
235 corresponds to a single virtual function of that device. Using the ``devargs``
236 option ``representor`` the user can specify which virtual functions to create
237 port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required::
240 -a DBDF,representor=[0,1,4]
242 Currently hot-plugging of representor ports is not supported so all required
243 representors must be specified on the creation of the PF.
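
For example, an illustrative ``testpmd`` invocation that creates representors for
VF 0 and VF 1 of the PF at a placeholder PCI address:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:84:00.0,representor=[0,1] -- -i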
245 - ``Enable validation for VF message`` (default ``not enabled``)
The PF counts the messages from each VF. If, within any period of ``period-seconds``
seconds, the number of messages from a VF exceeds ``maximal-message``, the PF will
ignore any new message from that VF for ``ignore-seconds`` seconds.
The format is "maximal-message@period-seconds:ignore-seconds", for example::
253 -a 84:00.0,vf_msg_cfg=80@120:180
255 Vector RX Pre-conditions
256 ~~~~~~~~~~~~~~~~~~~~~~~~
For Vector RX it is assumed that the number of descriptors per ring is a power
of 2. With this pre-condition, the ring index can simply wrap back to the
head after reaching the tail, without a conditional check. In addition, Vector RX
can compute the index with a bit mask of ``ring_size - 1``.
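
For example, power-of-two ring sizes can be requested when starting ``testpmd``
(illustrative values; ``--rxd``/``--txd`` set the number of Rx/Tx descriptors per queue):

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -- -i --rxd=1024 --txd=1024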
262 Driver compilation and testing
263 ------------------------------
265 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
269 SR-IOV: Prerequisites and sample Application Notes
270 --------------------------------------------------
272 #. Load the kernel module:
.. code-block:: console

   modprobe i40e

Check the output in dmesg:
280 .. code-block:: console
282 i40e 0000:83:00.1 ens802f0: renamed from eth0
284 #. Bring up the PF ports:
.. code-block:: console

   ifconfig ens802f0 up
290 #. Create VF device(s):
Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry of the parent PF:
297 .. code-block:: console
299 echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
302 #. Assign VF MAC address:
Assign a MAC address to the VF using the iproute2 utility. The syntax is:
306 .. code-block:: console
308 ip link set <PF netdev id> vf <VF id> mac <macaddr>
For example:

.. code-block:: console
314 ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0
316 #. Assign VF to VM, and bring up the VM.
317 Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.
Follow the instructions in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run ``testpmd``. Example output:
327 .. code-block:: console
330 EAL: PCI device 0000:83:00.0 on NUMA socket 1
331 EAL: probe driver: 8086:1572 rte_i40e_pmd
332 EAL: PCI memory mapped at 0x7f7f80000000
333 EAL: PCI memory mapped at 0x7f7f80800000
334 PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
335 Interactive-mode selected
336 Configuring Port 0 (socket 0)
339 PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
340 satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.
343 Port 0: 68:05:CA:26:85:84
344 Checking link statuses...
345 Port 0 Link Up - speed 10000 Mbps - full-duplex
351 Sample Application Notes
352 ------------------------
Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.
359 To start ``testpmd``, and add vlan 10 to port 0:
361 .. code-block:: console
363 ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
366 testpmd> set promisc 0 off
367 testpmd> rx_vlan add 10 0
Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields for different types of packets: the flow type, a specific input set per flow type and the flexible payload.
376 The default input set of each flow type is::
378 ipv4-other : src_ip_address, dst_ip_address
379 ipv4-frag : src_ip_address, dst_ip_address
380 ipv4-tcp : src_ip_address, dst_ip_address, src_port, dst_port
381 ipv4-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv4-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
            verification_tag
384 ipv6-other : src_ip_address, dst_ip_address
385 ipv6-frag : src_ip_address, dst_ip_address
386 ipv6-tcp : src_ip_address, dst_ip_address, src_port, dst_port
387 ipv6-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv6-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
            verification_tag
390 l2_payload : ether_type
The flex payload is selected from offset 0 to 15 of the packet's payload by default, while it is masked out from matching.
394 Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:
396 .. code-block:: console
398 ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --disable-rss \
--pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8 \
--nb-ports=1
402 Add a rule to direct ``ipv4-udp`` packet whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:
404 .. code-block:: console
406 testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 \
407 dst is 2.2.2.5 / udp src is 32 dst is 32 / end \
408 actions mark id 1 / queue index 1 / end
410 Check the flow director status:
412 .. code-block:: console
414 testpmd> show port fdir 0
416 ######################## FDIR infos for port 0 ####################
418 SUPPORTED FLOW TYPE: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
419 ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
422 max_len: 16 payload_limit: 480
423 payload_unit: 2 payload_seg: 3
424 bitmask_unit: 2 bitmask_num: 2
427 src_ipv4: 0x00000000,
428 dst_ipv4: 0x00000000,
431 src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
432 dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
433 FLEX PAYLOAD SRC OFFSET:
434 L2_PAYLOAD: 0 1 2 3 4 5 6 ...
435 L3_PAYLOAD: 0 1 2 3 4 5 6 ...
436 L4_PAYLOAD: 0 1 2 3 4 5 6 ...
438 ipv4-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
439 ipv4-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
440 ipv4-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
441 ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
442 ipv4-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
443 ipv6-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
444 ipv6-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
445 ipv6-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
446 ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
447 ipv6-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
448 l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
449 guarant_count: 1 best_count: 0
450 guarant_space: 512 best_space: 7168
Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature called "Floating VEB".
463 A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
464 for functionality that allows local switching between virtual endpoints within
465 a physical endpoint and also with an external bridge/network.
467 A "Floating" VEB doesn't have an uplink connection to the outside world so all
468 switching is done internally and remains within the host. As such, this
469 feature provides security benefits.
471 In addition, a Floating VEB overcomes a limitation of normal VEBs where they
472 cannot forward packets when the physical link is down. Floating VEBs don't need
473 to connect to the NIC port so they can still forward traffic from VF to VF
474 even when the physical link is down.
476 Therefore, with this feature enabled VFs can be limited to communicating with
477 each other but not an outside network, and they can do so even when there is
478 no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the EAL, for example::
483 -a 84:00.0,enable_floating_veb=1
485 In this configuration the PMD will use the floating VEB feature for all the
486 VFs created by this PF device.
488 Alternatively, the user can specify which VFs need to connect to this floating
489 VEB using the ``floating_veb_list`` argument::
491 -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
493 In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
494 while other VFs connect to the normal VEB.
496 The current implementation only supports one floating VEB and one regular
497 VEB. VFs can connect to a floating VEB or a regular VEB according to the
498 configuration passed on the EAL command line.
The floating VEB functionality requires a NIC firmware version of 5.0 or greater.
503 Dynamic Device Personalization (DDP)
504 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Intel® Ethernet 700 Series, except for the Intel Ethernet Connection
X722, supports a feature called "Dynamic Device Personalization (DDP)",
508 which is used to configure hardware by downloading a profile to support
509 protocols/filters which are not supported by default. The DDP
510 functionality requires a NIC firmware version of 6.0 or greater.
The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP/ESP;
steering for these protocols can be configured with the rte_flow API.
515 GTPv1 package is released, and it can be downloaded from
516 https://downloadcenter.intel.com/download/27587.
518 PPPoE package is released, and it can be downloaded from
519 https://downloadcenter.intel.com/download/28040.
521 ESP-AH package is released, and it can be downloaded from
522 https://downloadcenter.intel.com/download/29446.
524 Load a profile which supports GTP and store backup profile:
526 .. code-block:: console
528 testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo
530 Delete a GTP profile and restore backup profile:
532 .. code-block:: console
534 testpmd> ddp del 0 ./backup.pkgo
536 Get loaded DDP package info list:
538 .. code-block:: console
540 testpmd> ddp get list 0
542 Display information about a GTP profile:
544 .. code-block:: console
546 testpmd> ddp get info ./gtp.pkgo
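
As an illustrative example (assuming the GTP profile above has been loaded), a
GTP-U flow rule can then be created through the rte_flow API in ``testpmd``; the
TEID value and queue index below are placeholders:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 10 / end \
            actions queue index 2 / end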
548 Input set configuration
549 ~~~~~~~~~~~~~~~~~~~~~~~
The input set for any PCTYPE can be configured with a user-defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:
553 .. code-block:: console
555 testpmd> port config 0 pctype 43 hash_inset clear all
556 testpmd> port config 0 pctype 43 hash_inset set field 13
557 testpmd> port config 0 pctype 43 hash_inset set field 14
558 testpmd> port config 0 pctype 43 hash_inset set field 15
560 Queue region configuration
561 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
562 The Intel® Ethernet 700 Series supports a feature of queue regions
563 configuration for RSS in the PF, so that different traffic classes or
564 different packet classification types can be separated to different
565 queues in different queue regions. There is an API for configuration
566 of queue regions in RSS with a command line. It can parse the parameters
567 of the region index, queue number, queue start index, user priority, traffic
568 classes and so on. Depending on commands from the command line, it will call
569 i40e private APIs and start the process of setting or flushing the queue
region configuration. As this feature is specific to i40e, only private
APIs are used. The new ``testpmd`` commands are shown below. For
details please refer to :doc:`../testpmd_app_ug/index`.
574 .. code-block:: console
576 testpmd> set port (port_id) queue-region region_id (value) \
577 queue_start_index (value) queue_num (value)
578 testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
579 testpmd> set port (port_id) queue-region UP (value) region_id (value)
580 testpmd> set port (port_id) queue-region flush (on|off)
581 testpmd> show port (port_id) queue-region
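
For example, an illustrative configuration that puts 4 queues starting at queue 0
into region 0, maps flow type 2 and user priority 3 to that region, and then
flushes the configuration to the hardware:

.. code-block:: console

   testpmd> set port 0 queue-region region_id 0 queue_start_index 0 queue_num 4
   testpmd> set port 0 queue-region region_id 0 flowtype 2
   testpmd> set port 0 queue-region UP 3 region_id 0
   testpmd> set port 0 queue-region flush on
   testpmd> show port 0 queue-region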
RSS Flow supports setting the hash input set and hash function, enabling hashing,
and configuring queues.
591 Configure queues as queue 0, 1, 2, 3.
593 .. code-block:: console
595 testpmd> flow create 0 ingress pattern end actions rss types end \
596 queues 0 1 2 3 end / end
598 Enable hash and set input set for ipv4-tcp.
600 .. code-block:: console
602 testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
603 actions rss types ipv4-tcp l3-src-only end queues end / end
Enable symmetric hashing for flow type ipv4-tcp.
607 .. code-block:: console
609 testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
610 actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
Set the hash function to simple XOR.
614 .. code-block:: console
616 testpmd> flow create 0 ingress pattern end actions rss types end \
617 queues end func simple_xor / end
619 Limitations or Known issues
620 ---------------------------
622 MPLS packet classification
623 ~~~~~~~~~~~~~~~~~~~~~~~~~~
625 For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::
629 testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
630 0x8847 flexbytes () fwd pf queue <N> fd_id <M>
With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skipping is implemented, while no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director can no longer be used to classify MPLS
packets as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::
639 testpmd> flow create 0 ingress pattern eth type is 0x8847 / end \
640 actions queue index <M> / end
642 16 Byte RX Descriptor setting on DPDK VF
643 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request an RX descriptor mode, and no interface to notify
the VF of its own RX descriptor mode.
No available version of the i40e kernel driver supports the 16 byte RX descriptor.
If the Linux i40e kernel driver is used as the host driver while the DPDK i40e PMD
is used as the VF driver, DPDK cannot choose the 16 byte receive descriptor,
because the RX descriptor is already set to 32 bytes by the i40e kernel driver.
In the future, if the Linux i40e driver supports the 16 byte RX descriptor, the
user should make sure the DPDK VF uses the same RX descriptor mode, 16 byte or
32 byte, as the PF driver.

The same rule applies to a DPDK PF + DPDK VF setup: the PF and VF must use the
same RX descriptor mode, otherwise VF RX will not work.
660 Receive packets with Ethertype 0x88A8
661 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Due to a FW limitation, the PF can receive packets with Ethertype 0x88A8
only when floating VEB is disabled.
666 Incorrect Rx statistics when packet is oversize
667 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by `rte_eth_stats_get` incorrectly
show it as received.
673 RX/TX statistics may be incorrect when register overflowed
674 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The rx_bytes/tx_bytes statistics register is 48 bits wide.
Although this limit is extended to 64 bits on the software side, there is no
way to detect whether the register overflowed more than once between two
updates. So the rx_bytes/tx_bytes statistics are only correct when they are
updated at least once between two overflows.
682 VF & TC max bandwidth setting
683 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
685 The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
686 The behavior is different when handling per VF and per TC max bandwidth setting.
When enabling per VF max bandwidth, SW will check if per TC max bandwidth is
enabled. If so, it returns failure.
When enabling per TC max bandwidth, SW will check if per VF max bandwidth
is enabled. If so, it disables per VF max bandwidth and continues with the
per TC max bandwidth setting.
693 TC TX scheduling mode setting
694 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
696 There are 2 TX scheduling modes for TCs, round robin and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, it has
no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.
703 VF performance is impacted by PCI extended tag setting
704 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
706 To reach maximum NIC performance in the VF the PCI extended tag must be
707 enabled. The DPDK i40e PF driver will set this feature during initialization,
708 but the kernel PF driver does not. So when running traffic on a VF which is
709 managed by the kernel PF driver, a significant NIC performance downgrade has
710 been observed (for 64 byte packets, there is about 25% line-rate downgrade for
711 a 25GbE device and about 35% for a 40GbE device).
For kernel versions >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it, so by default this is not an
issue. For older kernels, or when the PCI extended tag is disabled, it can be
enabled using the steps below.
718 #. Get the current value of the PCI configure register::
720 setpci -s <XX:XX.X> a8.w
#. Set bit 8::

      value = value | 0x100
726 #. Set the PCI configure register with new value::
728 setpci -s <XX:XX.X> a8.w=<value>
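
For example, if the register reads ``0040`` (an illustrative value, with the PCI
address also a placeholder), OR-ing in bit 8 gives ``0140``, which is then written back:

.. code-block:: console

   setpci -s 0000:83:00.0 a8.w
   0040
   setpci -s 0000:83:00.0 a8.w=0140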
VF VLAN strip
~~~~~~~~~~~~~

The VF vlan strip function is only supported in the i40e kernel driver >= 2.1.26.
DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.
740 Global configuration warning
741 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The i40e PMD sets some global registers to enable certain functions or
configuration. When different ports of the same NIC are used by the Linux kernel
driver and by DPDK, the port driven by the Linux kernel is impacted by the port
driven by DPDK. For example, the register I40E_GL_SWT_L2TAGCTRL controls the L2
tag, and the i40e PMD uses it to set the VLAN TPID. If the TPID is set on port A
with DPDK, the configuration also impacts port B on the same NIC, which is driven
by the kernel and may not want that TPID.
So the PMD reports a warning to clarify what is changed by writing a global register.
Cloud filters
~~~~~~~~~~~~~

When programming cloud filters for IPv4/6_UDP/TCP/SCTP with source port only or
destination port only, any cloud filter using inner_vlan or a tunnel key becomes
invalid. The default configuration is recovered only by a NIC core reset.
759 Mirror rule limitation for X722
760 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
762 Due to firmware restriction of X722, the same VSI cannot have more than one mirror rule.
764 High Performance of Small Packets on 40GbE NIC
765 ----------------------------------------------
As the latest firmware image might contain fixes that enhance performance, a
firmware update might be needed to achieve high performance.
Check the Intel support website for the latest firmware updates.
770 Users should consult the release notes specific to a DPDK release to identify
771 the validated firmware version for a NIC using the i40e driver.
773 Use 16 Bytes RX Descriptor Size
774 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The i40e PMD supports both 16 and 32 byte RX descriptor sizes; the 16 byte size helps achieve high performance with small packets.
777 In ``config/rte_config.h`` set the following to use 16 bytes size RX descriptors::
779 #define RTE_LIBRTE_I40E_16BYTE_RX_DESC 1
781 Input set requirement of each pctype for FDIR
782 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
784 Each PCTYPE can only have one specific FDIR input set at one time.
For example, if you create two rte_flow rules with different input sets for one PCTYPE,
the second rule will fail and return the message "Conflict with the first rule's input set",
which means the current rule's input set conflicts with the first rule's.
Remove the first rule if you want to change the input set of that PCTYPE.
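
For example (illustrative rules), the second rule below uses a different input set
(L4 ports only) for the same ``ipv4-udp`` PCTYPE and is therefore rejected with the
message above while the first rule exists:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 dst is 2.2.2.5 / \
            udp src is 32 dst is 32 / end actions queue index 1 / end
   testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 32 dst is 32 / end \
            actions queue index 2 / end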
790 PF reset fail after QinQ set with FW >= 8.4
791 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With FW version 8.4 or higher, after setting a MAC VLAN filter and configuring an
outer VLAN on the PF, killing the DPDK process can cause the card to crash.
797 Example of getting best performance with l3fwd example
798 ------------------------------------------------------
800 The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with a
801 server with Intel Xeon processors and Intel Ethernet CNA XL710.
803 The example scenario is to get best performance with two Intel Ethernet CNA XL710 40GbE ports.
804 See :numref:`figure_intel_perf_test_setup` for the performance test setup.
806 .. _figure_intel_perf_test_setup:
808 .. figure:: img/intel_perf_test_setup.*
810 Performance Test Setup
813 1. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
The reason for using two NICs is to overcome a PCIe v3.0 limitation: a single slot cannot provide 80GbE bandwidth
for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
816 Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
818 82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
819 85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
821 2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
3. Check the NUMA node (socket ID) of the PCI devices and get the core numbers on that socket.
In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, so cores on socket 1 of the referenced platform
should be used.
Note: Don't use 2 logical cores on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead, use 2 logical
cores from different physical cores (e.g. core18 and core19).
829 4. Bind these two ports to igb_uio.
5. For an Intel Ethernet CNA XL710 40GbE port, at least two queue pairs are needed to achieve the best performance,
so two queues per port are required, and each queue pair needs a dedicated CPU core for receiving/transmitting packets.
6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
Compile the ``l3fwd`` sample with the default LPM mode.
837 7. The command line of running l3fwd would be something like the following::
839 ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
840 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
842 This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
843 core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
845 8. Configure the traffic at a traffic generator.
847 * Start creating a stream on packet generator.
849 * Set the Ethernet II type to 0x0800.
851 Tx bytes affected by the link status change
852 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For firmware versions prior to 6.01 for the X710 series and 3.33 for the X722 series, the tx_bytes statistics are affected by
the link down event. Each time the link status changes to down, tx_bytes decreases by 110 bytes.