1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright(c) 2016 Intel Corporation.
7 The i40e PMD (librte_pmd_i40e) provides poll mode driver support for
8 10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
9 the Intel Ethernet Controller X710/XL710/XXV710 and Intel Ethernet
Connection X722 (which supports only a subset of these features).
16 Features of the i40e PMD are:
18 - Multiple queues for TX and RX
19 - Receiver Side Scaling (RSS)
21 - Packet type information
25 - VLAN/QinQ stripping and inserting
29 - Port hardware statistics
31 - Link state information
33 - Mirror on port, VLAN and VSI
34 - Interrupt mode for RX
- Scatter and gather for TX and RX
36 - Vector Poll mode driver
41 - IEEE1588/802.1AS timestamping
42 - VF Daemon (VFD) - EXPERIMENTAL
43 - Dynamic Device Personalization (DDP)
44 - Queue region configuration
45 - Virtual Function Port Representors
- Malicious device driver event detection and notification
- Identify your adapter using `Intel Support
<http://www.intel.com/support>`_ and get the latest NVM/FW images.
54 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
56 - To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
57 section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
59 - Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
60 <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.
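To check the NVM/FW and kernel driver versions currently in use, ``ethtool`` can be queried on the kernel netdev (the interface name below is an example); the output includes the ``driver``, ``version`` and ``firmware-version`` fields:

.. code-block:: console

   ethtool -i ens802f0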
62 Recommended Matching List
63 -------------------------
It is highly recommended to upgrade the i40e kernel driver and firmware to
avoid compatibility issues with the i40e PMD. The following matching list has
been tested and verified. For more details, refer to the Tested Platforms/Tested
NICs chapter in the release notes.
70 +--------------+-----------------------+------------------+
71 | DPDK version | Kernel driver version | Firmware version |
72 +==============+=======================+==================+
73 | 19.11 | 2.9.21 | 7.00 |
74 +--------------+-----------------------+------------------+
75 | 19.08 | 2.8.43 | 7.00 |
76 +--------------+-----------------------+------------------+
77 | 19.05 | 2.7.29 | 6.80 |
78 +--------------+-----------------------+------------------+
79 | 19.02 | 2.7.26 | 6.80 |
80 +--------------+-----------------------+------------------+
81 | 18.11 | 2.4.6 | 6.01 |
82 +--------------+-----------------------+------------------+
83 | 18.08 | 2.4.6 | 6.01 |
84 +--------------+-----------------------+------------------+
85 | 18.05 | 2.4.6 | 6.01 |
86 +--------------+-----------------------+------------------+
87 | 18.02 | 2.4.3 | 6.01 |
88 +--------------+-----------------------+------------------+
89 | 17.11 | 2.1.26 | 6.01 |
90 +--------------+-----------------------+------------------+
91 | 17.08 | 2.0.19 | 6.01 |
92 +--------------+-----------------------+------------------+
93 | 17.05 | 1.5.23 | 5.05 |
94 +--------------+-----------------------+------------------+
95 | 17.02 | 1.5.23 | 5.05 |
96 +--------------+-----------------------+------------------+
97 | 16.11 | 1.5.23 | 5.05 |
98 +--------------+-----------------------+------------------+
99 | 16.07 | 1.4.25 | 5.04 |
100 +--------------+-----------------------+------------------+
101 | 16.04 | 1.4.25 | 5.02 |
102 +--------------+-----------------------+------------------+
104 Pre-Installation Configuration
105 ------------------------------
110 The following options can be modified in the ``config`` file.
111 Please note that enabling debugging options may affect system performance.
113 - ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)
115 Toggle compilation of the ``librte_pmd_i40e`` driver.
117 - ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)
119 Toggle display of generic debugging messages.
121 - ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)
123 Toggle bulk allocation for RX.
125 - ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)
127 Toggle the use of Vector PMD instead of normal RX/TX path.
128 To enable vPMD for RX, bulk allocation for Rx must be allowed.
130 - ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)
Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
134 - ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)
136 Number of queues reserved for PF.
138 - ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)
140 Number of queues reserved for each VMDQ Pool.
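As a minimal sketch of how a compile-time option is changed (this assumes the legacy make-based build system; the file name ``config/common_base`` and the build target may differ on your setup), a debug option could be enabled and DPDK rebuilt as follows:

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n/CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=y/' config/common_base
   make config T=x86_64-native-linux-gcc && make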
142 Runtime Config Options
143 ~~~~~~~~~~~~~~~~~~~~~~
145 - ``Reserved number of Queues per VF`` (default ``4``)
The number of reserved queues per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
VF can be configured with an EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If a VF requests more than
the reserved number of queues, the PF will be able to allocate at most 16 queues
to it after a VF reset.
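For example (the PCI address and queue count are illustrative), ``testpmd`` could be launched with 8 queues reserved per VF:

.. code-block:: console

   ./app/testpmd -l 0-3 -n 4 -w 84:00.0,queue-num-per-vf=8 -- -i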
156 - ``Support multiple driver`` (default ``disable``)
There was a multiple driver support issue when a 700 Series Ethernet Adapter
is used with both the Linux kernel driver and the DPDK PMD. To fix this issue,
the ``devargs`` parameter ``support-multi-driver`` is introduced, for example::
162 -w 84:00.0,support-multi-driver=1
164 With the above configuration, DPDK PMD will not change global registers, and
165 will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
166 DPDK and Linux Kernel.
168 - ``Support VF Port Representor`` (default ``not enabled``)
170 The i40e PF PMD supports the creation of VF port representors for the control
171 and monitoring of i40e virtual function devices. Each port representor
172 corresponds to a single virtual function of that device. Using the ``devargs``
173 option ``representor`` the user can specify which virtual functions to create
174 port representors for on initialization of the PF PMD by passing the VF IDs of
175 the VFs which are required.::
177 -w DBDF,representor=[0,1,4]
179 Currently hot-plugging of representor ports is not supported so all required
180 representors must be specified on the creation of the PF.
182 - ``Use latest supported vector`` (default ``disable``)
The latest supported vector path may not always give the best performance, so by
default it is only recommended on newer platforms. However, users may want the
latest vector path since it can give better performance in some real workloads.
The ``devargs`` parameter ``use-latest-supported-vec`` is therefore introduced,
for example::
189 -w 84:00.0,use-latest-supported-vec=1
191 - ``Enable validation for VF message`` (default ``not enabled``)
The PF counts messages from each VF. If, within any period of seconds, the number
of messages from a VF exceeds a maximum limit, the PF will ignore any new message
from that VF for some seconds.
Format -- ``maximal-message@period-seconds:ignore-seconds``, for example::
199 -w 84:00.0,vf_msg_cfg=80@120:180
201 Vector RX Pre-conditions
202 ~~~~~~~~~~~~~~~~~~~~~~~~
203 For Vector RX it is assumed that the number of descriptor rings will be a power
204 of 2. With this pre-condition, the ring pointer can easily scroll back to the
205 head after hitting the tail without a conditional check. In addition Vector RX
206 can use this assumption to do a bit mask using ``ring_size - 1``.
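As an illustration (ring sizes are examples), the descriptor ring size can be kept at a power of two when launching ``testpmd`` via its ``--rxd``/``--txd`` options:

.. code-block:: console

   ./app/testpmd -l 0-3 -n 4 -- -i --rxd=1024 --txd=1024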
208 Driver compilation and testing
209 ------------------------------
211 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
215 SR-IOV: Prerequisites and sample Application Notes
216 --------------------------------------------------
218 #. Load the kernel module:
220 .. code-block:: console
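   modprobe i40e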
224 Check the output in dmesg:
226 .. code-block:: console
228 i40e 0000:83:00.1 ens802f0: renamed from eth0
230 #. Bring up the PF ports:
232 .. code-block:: console
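   ip link set ens802f0 up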
236 #. Create VF device(s):
Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry of the parent PF, for example:
243 .. code-block:: console
245 echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
248 #. Assign VF MAC address:
Assign a MAC address to the VF using the iproute2 utility. The syntax is:
252 .. code-block:: console
254 ip link set <PF netdev id> vf <VF id> mac <macaddr>
258 .. code-block:: console
260 ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0
262 #. Assign VF to VM, and bring up the VM.
263 Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.
267 Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to launch ``testpmd``. Example output:
273 .. code-block:: console
276 EAL: PCI device 0000:83:00.0 on NUMA socket 1
277 EAL: probe driver: 8086:1572 rte_i40e_pmd
278 EAL: PCI memory mapped at 0x7f7f80000000
279 EAL: PCI memory mapped at 0x7f7f80800000
280 PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
281 Interactive-mode selected
282 Configuring Port 0 (socket 0)
285 PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
286 satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.
289 Port 0: 68:05:CA:26:85:84
290 Checking link statuses...
291 Port 0 Link Up - speed 10000 Mbps - full-duplex
297 Sample Application Notes
298 ------------------------
Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.
305 To start ``testpmd``, and add vlan 10 to port 0:
307 .. code-block:: console
309 ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
312 testpmd> set promisc 0 off
313 testpmd> rx_vlan add 10 0
Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields of different packet types: the flow type, a specific input set per flow type, and the flexible payload.
322 The default input set of each flow type is::
324 ipv4-other : src_ip_address, dst_ip_address
325 ipv4-frag : src_ip_address, dst_ip_address
326 ipv4-tcp : src_ip_address, dst_ip_address, src_port, dst_port
327 ipv4-udp : src_ip_address, dst_ip_address, src_port, dst_port
328 ipv4-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
330 ipv6-other : src_ip_address, dst_ip_address
331 ipv6-frag : src_ip_address, dst_ip_address
332 ipv6-tcp : src_ip_address, dst_ip_address, src_port, dst_port
333 ipv6-udp : src_ip_address, dst_ip_address, src_port, dst_port
334 ipv6-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
336 l2_payload : ether_type
By default the flexible payload is selected from offsets 0 to 15 of the packet's payload, but it is masked out from matching.
340 Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:
342 .. code-block:: console
344 ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
345 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1
347 Add a rule to direct ``ipv4-udp`` packet whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:
349 .. code-block:: console
351 testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
352 src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
353 fwd pf queue 1 fd_id 1
355 Check the flow director status:
357 .. code-block:: console
359 testpmd> show port fdir 0
361 ######################## FDIR infos for port 0 ####################
363 SUPPORTED FLOW TYPE: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
364 ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
367 max_len: 16 payload_limit: 480
368 payload_unit: 2 payload_seg: 3
369 bitmask_unit: 2 bitmask_num: 2
372 src_ipv4: 0x00000000,
373 dst_ipv4: 0x00000000,
376 src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
377 dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
378 FLEX PAYLOAD SRC OFFSET:
379 L2_PAYLOAD: 0 1 2 3 4 5 6 ...
380 L3_PAYLOAD: 0 1 2 3 4 5 6 ...
381 L4_PAYLOAD: 0 1 2 3 4 5 6 ...
383 ipv4-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
384 ipv4-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
385 ipv4-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
386 ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
387 ipv4-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
388 ipv6-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
389 ipv6-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
390 ipv6-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
391 ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
392 ipv6-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
393 l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
394 guarant_count: 1 best_count: 0
395 guarant_space: 512 best_space: 7168
402 Delete all flow director rules on a port:
404 .. code-block:: console
406 testpmd> flush_flow_director 0
Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series support a feature called "Floating VEB".
414 A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
415 for functionality that allows local switching between virtual endpoints within
416 a physical endpoint and also with an external bridge/network.
418 A "Floating" VEB doesn't have an uplink connection to the outside world so all
419 switching is done internally and remains within the host. As such, this
420 feature provides security benefits.
422 In addition, a Floating VEB overcomes a limitation of normal VEBs where they
423 cannot forward packets when the physical link is down. Floating VEBs don't need
424 to connect to the NIC port so they can still forward traffic from VF to VF
425 even when the physical link is down.
427 Therefore, with this feature enabled VFs can be limited to communicating with
428 each other but not an outside network, and they can do so even when there is
429 no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the EAL, for example::
434 -w 84:00.0,enable_floating_veb=1
436 In this configuration the PMD will use the floating VEB feature for all the
437 VFs created by this PF device.
439 Alternatively, the user can specify which VFs need to connect to this floating
440 VEB using the ``floating_veb_list`` argument::
442 -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
444 In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
445 while other VFs connect to the normal VEB.
447 The current implementation only supports one floating VEB and one regular
448 VEB. VFs can connect to a floating VEB or a regular VEB according to the
449 configuration passed on the EAL command line.
The floating VEB functionality requires a NIC firmware version of 5.0 or greater.
454 Dynamic Device Personalization (DDP)
455 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
457 The Intel® Ethernet 700 Series except for the Intel Ethernet Connection
458 X722 support a feature called "Dynamic Device Personalization (DDP)",
459 which is used to configure hardware by downloading a profile to support
460 protocols/filters which are not supported by default. The DDP
461 functionality requires a NIC firmware version of 6.0 or greater.
The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP/ESP; steering
for these protocols can be done with the rte_flow API (see the example flow rule
after the DDP commands below).
466 GTPv1 package is released, and it can be downloaded from
467 https://downloadcenter.intel.com/download/27587.
469 PPPoE package is released, and it can be downloaded from
470 https://downloadcenter.intel.com/download/28040.
472 ESP-AH package is not released yet.
474 Load a profile which supports GTP and store backup profile:
476 .. code-block:: console
478 testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo
480 Delete a GTP profile and restore backup profile:
482 .. code-block:: console
484 testpmd> ddp del 0 ./backup.pkgo
486 Get loaded DDP package info list:
488 .. code-block:: console
490 testpmd> ddp get list 0
492 Display information about a GTP profile:
494 .. code-block:: console
496 testpmd> ddp get info ./gtp.pkgo
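As a hypothetical example of rte_flow based steering once a GTP profile has been loaded (the TEID and queue values are illustrative, and the exact pattern accepted depends on the loaded profile and the DPDK version), a GTP-U flow could be directed to a queue with:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 10 / end actions queue index 3 / end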
498 Input set configuration
499 ~~~~~~~~~~~~~~~~~~~~~~~
The input set for any PCTYPE can be configured with a user-defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:
503 .. code-block:: console
505 testpmd> port config 0 pctype 43 hash_inset clear all
506 testpmd> port config 0 pctype 43 hash_inset set field 13
507 testpmd> port config 0 pctype 43 hash_inset set field 14
508 testpmd> port config 0 pctype 43 hash_inset set field 15
510 Queue region configuration
511 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
512 The Intel® Ethernet 700 Series supports a feature of queue regions
513 configuration for RSS in the PF, so that different traffic classes or
514 different packet classification types can be separated to different
queues in different queue regions. An API is provided to configure queue
regions for RSS from the command line. It parses parameters such as the region
index, queue number, queue start index, user priority and traffic class, and,
depending on the commands given, calls i40e private APIs to set or flush the
queue region configuration. As this feature is specific to i40e, only private
APIs are used. The new ``testpmd`` commands are shown below. For details
please refer to :doc:`../testpmd_app_ug/index`.
524 .. code-block:: console
526 testpmd> set port (port_id) queue-region region_id (value) \
527 queue_start_index (value) queue_num (value)
528 testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
529 testpmd> set port (port_id) queue-region UP (value) region_id (value)
530 testpmd> set port (port_id) queue-region flush (on|off)
531 testpmd> show port (port_id) queue-region
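For instance (all values are illustrative), a region of two queues starting at queue 1 could be created on port 0, mapped to a flow type and a user priority, and then committed:

.. code-block:: console

   testpmd> set port 0 queue-region region_id 0 queue_start_index 1 queue_num 2
   testpmd> set port 0 queue-region region_id 0 flowtype 31
   testpmd> set port 0 queue-region UP 3 region_id 0
   testpmd> set port 0 queue-region flush on
   testpmd> show port 0 queue-region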
533 Limitations or Known issues
534 ---------------------------
536 MPLS packet classification
537 ~~~~~~~~~~~~~~~~~~~~~~~~~~
539 For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like:
543 testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
544 0x8847 flexbytes () fwd pf queue <N> fd_id <M>
With NIC firmware version 5.0 or greater, some limited MPLS support is added:
skipping over native MPLS (MPLS in Ethernet) headers is implemented, while no
new packet type, classification or offload is possible. With this change, the
L2 Payload flow type in flow director can no longer be used to classify MPLS
packets as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like:

testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
         0x8847 fwd queue <M>
556 16 Byte RX Descriptor setting on DPDK VF
557 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request an RX descriptor mode, and no interface to notify
the VF of its own RX descriptor mode.
None of the currently available i40e kernel driver versions support the 16-byte
RX descriptor. If the Linux i40e kernel driver is used as the host driver while
the DPDK i40e PMD is used as the VF driver, DPDK cannot choose the 16-byte RX
descriptor, because the RX descriptor is already set to 32 bytes by the i40e
kernel driver. That is to say, the user should keep
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the config file.
In the future, if the Linux i40e driver supports the 16-byte RX descriptor, the
user should make sure the DPDK VF uses the same RX descriptor mode, 16-byte or
32-byte, as the PF driver.
The same rule applies to DPDK PF + DPDK VF: the PF and VF should use the same RX
descriptor mode, otherwise VF RX will not work.
575 Receive packets with Ethertype 0x88A8
576 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
578 Due to the FW limitation, PF can receive packets with Ethertype 0x88A8
579 only when floating VEB is disabled.
581 Incorrect Rx statistics when packet is oversize
582 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by `rte_eth_stats_get` incorrectly
show it as received.
588 VF & TC max bandwidth setting
589 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
591 The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
592 The behavior is different when handling per VF and per TC max bandwidth setting.
593 When enabling per VF max bandwidth, SW will check if per TC max bandwidth is
594 enabled. If so, return failure.
When enabling per TC max bandwidth, SW will check if per VF max bandwidth
is enabled. If so, disable per VF max bandwidth and continue with the per TC
max bandwidth setting.
599 TC TX scheduling mode setting
600 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
602 There are 2 TX scheduling modes for TCs, round robin and strict priority mode.
603 If a TC is set to strict priority mode, it can consume unlimited bandwidth.
It means that if the application has set the max bandwidth for that TC, it has
no effect.
It's suggested to set the strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.
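As an illustration (the port and TC bitmap are examples; this relies on the i40e-specific ``testpmd`` commands described in the testpmd guide), strict priority mode could be enabled for TC0 and TC1 on port 0 with:

.. code-block:: console

   testpmd> set tx strict-link-priority 0 0x3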
609 VF performance is impacted by PCI extended tag setting
610 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
612 To reach maximum NIC performance in the VF the PCI extended tag must be
613 enabled. The DPDK i40e PF driver will set this feature during initialization,
614 but the kernel PF driver does not. So when running traffic on a VF which is
615 managed by the kernel PF driver, a significant NIC performance downgrade has
616 been observed (for 64 byte packets, there is about 25% line-rate downgrade for
617 a 25GbE device and about 35% for a 40GbE device).
619 For kernel version >= 4.11, the kernel's PCI driver will enable the extended
620 tag if it detects that the device supports it. So by default, this is not an
621 issue. For kernels <= 4.11 or when the PCI extended tag is disabled it can be
622 enabled using the steps below.
624 #. Get the current value of the PCI configure register::
626 setpci -s <XX:XX.X> a8.w
#. Set bit 8::

value = value | 0x100
632 #. Set the PCI configure register with new value::
634 setpci -s <XX:XX.X> a8.w=<value>
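A worked example (the PCI address and the value ``2940`` read back are hypothetical; ``0x2940 | 0x100 = 0x2a40``):

.. code-block:: console

   setpci -s 82:00.0 a8.w
   2940
   setpci -s 82:00.0 a8.w=2a40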
VF VLAN strip
~~~~~~~~~~~~~

The VF vlan strip function is only supported in the i40e kernel driver >= 2.1.26.
DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.
646 Global configuration warning
647 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The i40e PMD will set some global registers to enable certain functions or
configurations. Then, when different ports of the same NIC are used with the
Linux kernel and DPDK, the port used by the Linux kernel will be impacted by
the port used by DPDK.
For example, the register I40E_GL_SWT_L2TAGCTRL is used to control the L2 tag,
and the i40e PMD uses I40E_GL_SWT_L2TAGCTRL to set the VLAN TPID. If the TPID
is set on port A with DPDK, the configuration will also impact port B on the
same NIC, which is bound to the kernel driver and may not want that TPID.
So the PMD reports a warning to clarify what is changed by writing a global
register.
658 High Performance of Small Packets on 40GbE NIC
659 ----------------------------------------------
As the latest firmware image might contain performance-related fixes, a firmware
update might be needed to get high performance.
663 Check the Intel support website for the latest firmware updates.
664 Users should consult the release notes specific to a DPDK release to identify
665 the validated firmware version for a NIC using the i40e driver.
667 Use 16 Bytes RX Descriptor Size
668 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The i40e PMD supports both 16-byte and 32-byte RX descriptors, and the 16-byte size can help achieve high performance with small packets.
The ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` option in the config files can be changed to use 16-byte RX descriptors.
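A minimal sketch of switching to 16-byte descriptors with the make-based build (the file name ``config/common_base`` and the build target assume the legacy build system):

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_base
   make config T=x86_64-native-linux-gcc && make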
673 Example of getting best performance with l3fwd example
674 ------------------------------------------------------
676 The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with a
677 server with Intel Xeon processors and Intel Ethernet CNA XL710.
679 The example scenario is to get best performance with two Intel Ethernet CNA XL710 40GbE ports.
680 See :numref:`figure_intel_perf_test_setup` for the performance test setup.
682 .. _figure_intel_perf_test_setup:
684 .. figure:: img/intel_perf_test_setup.*
686 Performance Test Setup
689 1. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
690 The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
Refer to the sample NIC output above; we can then select ``82:00.0`` and ``85:00.0`` as test ports::
694 82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
695 85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
697 2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
3. Check the PCI devices' NUMA node (socket id) and get the core numbers on that socket.
In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, so cores from socket 1 should be used for the test.
Note: Don't use 2 logical cores on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead, use 2 logical
cores from different physical cores (e.g. core18 and core19).
705 4. Bind these two ports to igb_uio.
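For example, the ports can be bound using the ``dpdk-devbind.py`` tool shipped with DPDK (after loading the ``igb_uio`` module; PCI addresses as selected above)::

   ./usertools/dpdk-devbind.py --bind=igb_uio 82:00.0 85:00.0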
5. For an Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve the best performance, so two queues per port
will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
Compile the ``l3fwd`` sample with the default LPM mode.
713 7. The command line of running l3fwd would be something like the following::
715 ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
716 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
718 This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
719 core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
721 8. Configure the traffic at a traffic generator.
* Start creating a stream on the packet generator.
725 * Set the Ethernet II type to 0x0800.
727 Tx bytes affected by the link status change
728 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
730 For firmware versions prior to 6.01 for X710 series and 3.33 for X722 series, the tx_bytes statistics data is affected by
the link down event. Each time the link status changes to down, the tx_bytes statistic decreases by 110 bytes.