1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright(c) 2016 Intel Corporation.
7 The I40E PMD (librte_pmd_i40e) provides poll mode driver support
8 for the Intel X710/XL710/X722 10/40 Gbps family of adapters.
14 Features of the I40E PMD are:
16 - Multiple queues for TX and RX
- Receive Side Scaling (RSS)
19 - Packet type information
23 - VLAN/QinQ stripping and inserting
27 - Port hardware statistics
29 - Link state information
31 - Mirror on port, VLAN and VSI
32 - Interrupt mode for RX
- Scatter and gather for TX and RX
34 - Vector Poll mode driver
39 - IEEE1588/802.1AS timestamping
40 - VF Daemon (VFD) - EXPERIMENTAL
41 - Dynamic Device Personalization (DDP)
42 - Queue region configuration
- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
50 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
52 - To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
53 section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
56 <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ if needed.
58 Pre-Installation Configuration
59 ------------------------------
The following options can be modified in the ``config`` file, as shown in the example after this list.
Please note that enabling debugging options may affect system performance.
67 - ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)
69 Toggle compilation of the ``librte_pmd_i40e`` driver.
71 - ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)
73 Toggle display of generic debugging messages.
75 - ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)
77 Toggle bulk allocation for RX.
79 - ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)
81 Toggle the use of Vector PMD instead of normal RX/TX path.
82 To enable vPMD for RX, bulk allocation for Rx must be allowed.
84 - ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)
Toggle the use of 16-byte RX descriptors; by default, 32-byte RX descriptors are used.
88 - ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)
90 Number of queues reserved for PF.
92 - ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)
94 Number of queues reserved for each VMDQ Pool.
96 - ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` (default ``-1``)
98 Interrupt Throttling interval.
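For example, with the make-based build system these options live in ``config/common_base``
(the build target below is only an example); change the value there and rebuild:

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' \
       config/common_base
   make install T=x86_64-native-linuxapp-gcc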
101 Runtime Config Options
102 ~~~~~~~~~~~~~~~~~~~~~~
104 - ``Number of Queues per VF`` (default ``4``)
The number of queues per VF is determined by its host PF. If the PCI address
of an i40e PF is aaaa:bb.cc, the number of queues per VF can be configured
with the EAL parameter ``-w aaaa:bb.cc,queue-num-per-vf=n``. The value n can be
1, 2, 4, 8 or 16. If no such parameter is configured, the number of queues
per VF is 4 by default.
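For example, to reserve 8 queues per VF on a PF whose (purely illustrative) PCI address
is ``84:00.0``, ``testpmd`` could be started as:

.. code-block:: console

   ./app/testpmd -l 0-3 -n 4 -w 84:00.0,queue-num-per-vf=8 -- -i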
113 Driver compilation and testing
114 ------------------------------
Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
120 SR-IOV: Prerequisites and sample Application Notes
121 --------------------------------------------------
123 #. Load the kernel module:
125 .. code-block:: console
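
modprobe i40e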
129 Check the output in dmesg:
131 .. code-block:: console
133 i40e 0000:83:00.1 ens802f0: renamed from eth0
135 #. Bring up the PF ports:
137 .. code-block:: console
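
ip link set ens802f0 up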
141 #. Create VF device(s):
Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry of
the parent PF. For example, to create two VFs:
148 .. code-block:: console
150 echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
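
# optionally verify that the VFs were created
lspci | grep -i "Virtual Function"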
153 #. Assign VF MAC address:
Assign a MAC address to the VF using the iproute2 utility. The syntax is:
157 .. code-block:: console
159 ip link set <PF netdev id> vf <VF id> mac <macaddr>
For example:

.. code-block:: console
165 ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0
167 #. Assign VF to VM, and bring up the VM.
168 Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.
Follow the instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to launch
``testpmd``, which gives output similar to the following:
178 .. code-block:: console
181 EAL: PCI device 0000:83:00.0 on NUMA socket 1
182 EAL: probe driver: 8086:1572 rte_i40e_pmd
183 EAL: PCI memory mapped at 0x7f7f80000000
184 EAL: PCI memory mapped at 0x7f7f80800000
185 PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
186 Interactive-mode selected
187 Configuring Port 0 (socket 0)
190 PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
191 satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.
194 Port 0: 68:05:CA:26:85:84
195 Checking link statuses...
196 Port 0 Link Up - speed 10000 Mbps - full-duplex
202 Sample Application Notes
203 ------------------------
Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.
Start ``testpmd``, and add VLAN 10 to port 0:
212 .. code-block:: console
214 ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
217 testpmd> set promisc 0 off
218 testpmd> rx_vlan add 10 0
Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields for different types of packets: flow type, the specific input set per flow type, and the flexible payload.
227 The default input set of each flow type is::
229 ipv4-other : src_ip_address, dst_ip_address
230 ipv4-frag : src_ip_address, dst_ip_address
231 ipv4-tcp : src_ip_address, dst_ip_address, src_port, dst_port
232 ipv4-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv4-sctp : src_ip_address, dst_ip_address, src_port, dst_port, verification_tag
235 ipv6-other : src_ip_address, dst_ip_address
236 ipv6-frag : src_ip_address, dst_ip_address
237 ipv6-tcp : src_ip_address, dst_ip_address, src_port, dst_port
238 ipv6-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv6-sctp : src_ip_address, dst_ip_address, src_port, dst_port, verification_tag
241 l2_payload : ether_type
By default the flexible payload is taken from offset 0 to 15 of the packet's payload, but it is masked out from matching.
245 Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:
247 .. code-block:: console
249 ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
250 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1
Add a rule to direct ``ipv4-udp`` packets whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:
254 .. code-block:: console
256 testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
257 src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
258 fwd pf queue 1 fd_id 1
260 Check the flow director status:
262 .. code-block:: console
264 testpmd> show port fdir 0
266 ######################## FDIR infos for port 0 ####################
268 SUPPORTED FLOW TYPE: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
269 ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
272 max_len: 16 payload_limit: 480
273 payload_unit: 2 payload_seg: 3
274 bitmask_unit: 2 bitmask_num: 2
277 src_ipv4: 0x00000000,
278 dst_ipv4: 0x00000000,
281 src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
282 dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
283 FLEX PAYLOAD SRC OFFSET:
284 L2_PAYLOAD: 0 1 2 3 4 5 6 ...
285 L3_PAYLOAD: 0 1 2 3 4 5 6 ...
286 L4_PAYLOAD: 0 1 2 3 4 5 6 ...
288 ipv4-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
289 ipv4-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
290 ipv4-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
291 ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
292 ipv4-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
293 ipv6-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
294 ipv6-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
295 ipv6-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
296 ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
297 ipv6-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
298 l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
299 guarant_count: 1 best_count: 0
300 guarant_space: 512 best_space: 7168
307 Delete all flow director rules on a port:
309 .. code-block:: console
311 testpmd> flush_flow_director 0
Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet Controller X710 and XL710 Family support a feature called
"Floating VEB".
319 A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
320 for functionality that allows local switching between virtual endpoints within
321 a physical endpoint and also with an external bridge/network.
323 A "Floating" VEB doesn't have an uplink connection to the outside world so all
324 switching is done internally and remains within the host. As such, this
325 feature provides security benefits.
327 In addition, a Floating VEB overcomes a limitation of normal VEBs where they
328 cannot forward packets when the physical link is down. Floating VEBs don't need
329 to connect to the NIC port so they can still forward traffic from VF to VF
330 even when the physical link is down.
332 Therefore, with this feature enabled VFs can be limited to communicating with
333 each other but not an outside network, and they can do so even when there is
334 no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
339 -w 84:00.0,enable_floating_veb=1
341 In this configuration the PMD will use the floating VEB feature for all the
342 VFs created by this PF device.
344 Alternatively, the user can specify which VFs need to connect to this floating
345 VEB using the ``floating_veb_list`` argument::
347 -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
349 In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
350 while other VFs connect to the normal VEB.
352 The current implementation only supports one floating VEB and one regular
353 VEB. VFs can connect to a floating VEB or a regular VEB according to the
354 configuration passed on the EAL command line.
The floating VEB functionality requires a NIC firmware version of 5.0
or greater.
359 Dynamic Device Personalization (DDP)
360 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Intel® Ethernet Controller X*710 family supports a feature called "Dynamic Device
363 Personalization (DDP)", which is used to configure hardware by downloading
364 a profile to support protocols/filters which are not supported by default.
365 The DDP functionality requires a NIC firmware version of 6.0 or greater.
The current implementation supports MPLSoUDP/MPLSoGRE/GTP-C/GTP-U/PPPoE/PPPoL2TP;
steering for these protocols can be done with the rte_flow API (see the example at
the end of this section).
370 Load a profile which supports MPLSoUDP/MPLSoGRE and store backup profile:
372 .. code-block:: console
374 testpmd> ddp add 0 ./mpls.pkgo,./backup.pkgo
Delete an MPLS profile and restore the backup profile:
378 .. code-block:: console
380 testpmd> ddp del 0 ./backup.pkgo
382 Get loaded DDP package info list:
384 .. code-block:: console
386 testpmd> ddp get list 0
Display information about an MPLS profile:
390 .. code-block:: console
392 testpmd> ddp get info ./mpls.pkgo
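After a suitable profile has been loaded, flows for the newly supported protocols can be
steered with the rte_flow API. As a rough sketch only (the rule below assumes a GTP
profile has been loaded on port 0 and uses an arbitrary TEID and queue), a GTP-U flow
could be directed to a specific queue from ``testpmd``:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 10 / end \
            actions queue index 3 / end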
394 Input set configuration
395 ~~~~~~~~~~~~~~~~~~~~~~~
The input set for any PCTYPE can be changed from the default with a user defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:
399 .. code-block:: console
401 testpmd> port config 0 pctype 43 hash_inset clear all
402 testpmd> port config 0 pctype 43 hash_inset set field 13
403 testpmd> port config 0 pctype 43 hash_inset set field 14
404 testpmd> port config 0 pctype 43 hash_inset set field 15
406 Queue region configuration
407 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Ethernet Controller X710/XL710 supports a queue regions configuration feature
for RSS in the PF, so that different traffic classes or different packet
classification types can be separated into different queues in different queue
regions. There is an API to configure queue regions for RSS from the command
line. It can parse the parameters of the region index, queue number, queue
start index, user priority, traffic classes and so on. Depending on the
commands from the command line, it will call i40e private APIs and start the
process of setting or flushing the queue region configuration. As this feature
is specific to i40e, only private APIs are used. The new ``testpmd`` commands
are shown below; for details please refer to :doc:`../testpmd_app_ug/index`.
420 .. code-block:: console
422 testpmd> set port (port_id) queue-region region_id (value) \
423 queue_start_index (value) queue_num (value)
424 testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
425 testpmd> set port (port_id) queue-region UP (value) region_id (value)
426 testpmd> set port (port_id) queue-region flush (on|off)
427 testpmd> show port (port_id) queue-region
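For example, the following sequence (all values are illustrative only) defines a
queue region of 4 queues starting at queue 8, maps a flow type and a user priority
to it, flushes the configuration, and then displays it:

.. code-block:: console

   testpmd> set port 0 queue-region region_id 0 queue_start_index 8 queue_num 4
   testpmd> set port 0 queue-region region_id 0 flowtype 31
   testpmd> set port 0 queue-region UP 3 region_id 0
   testpmd> set port 0 queue-region flush on
   testpmd> show port 0 queue-region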
429 Limitations or Known issues
430 ---------------------------
432 MPLS packet classification on X710/XL710
433 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
435 For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like:
439 testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
440 0x8847 flexbytes () fwd pf queue <N> fd_id <M>
With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skipping is implemented, but no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director can no longer be used to classify
MPLS packets as with previous firmware versions. Instead, the Ethertype filter
can be used to classify MPLS packets by using a command in testpmd like:
testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
0x8847 fwd queue <M>
452 16 Byte RX Descriptor setting on DPDK VF
453 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request a RX descriptor mode, and no interface to notify
the VF of its own RX descriptor mode.
All available versions of the Linux i40e kernel driver do not support the 16 byte
RX descriptor. Therefore, if the Linux i40e kernel driver is used as the host driver
while the DPDK i40e PMD is used as the VF driver, the DPDK VF cannot choose the 16
byte receive descriptor, because the RX descriptor is already set to 32 byte by the
i40e kernel driver. That is to say, the user should keep
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the config file.
In the future, if the Linux i40e driver supports the 16 byte RX descriptor, the user
should make sure the DPDK VF uses the same RX descriptor mode, 16 byte or 32
byte, as the PF driver.

The same rule applies to DPDK PF + DPDK VF: the PF and the VF should use the same
RX descriptor mode, or VF RX will not work.
471 Receive packets with Ethertype 0x88A8
472 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Due to an FW limitation, the PF can receive packets with Ethertype 0x88A8
only when floating VEB is disabled.
477 Incorrect Rx statistics when packet is oversize
478 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a packet is larger than the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.
484 VF & TC max bandwidth setting
485 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
The behavior differs when handling per VF and per TC max bandwidth settings.
When enabling per VF max bandwidth, SW will check if per TC max bandwidth is
enabled. If so, it returns failure.
When enabling per TC max bandwidth, SW will check if per VF max bandwidth
is enabled. If so, it disables per VF max bandwidth and continues with the per TC
max bandwidth setting.
495 TC TX scheduling mode setting
496 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are two TX scheduling modes for TCs, round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, the setting
has no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.
505 VF performance is impacted by PCI extended tag setting
506 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
508 To reach maximum NIC performance in the VF the PCI extended tag must be
509 enabled. The DPDK I40E PF driver will set this feature during initialization,
510 but the kernel PF driver does not. So when running traffic on a VF which is
511 managed by the kernel PF driver, a significant NIC performance downgrade has
512 been observed (for 64 byte packets, there is about 25% linerate downgrade for
513 a 25G device and about 35% for a 40G device).
For kernel versions >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it, so by default this is not an
issue. For kernels older than 4.11, or when the PCI extended tag is otherwise
disabled, it can be enabled using the steps below.
520 #. Get the current value of the PCI configure register::
522 setpci -s <XX:XX.X> a8.w
#. Set bit 8::

value = value | 0x100
528 #. Set the PCI configure register with new value::
530 setpci -s <XX:XX.X> a8.w=<value>
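For example (the PCI address and register value are purely illustrative), if the
register currently reads ``0x2810``, then ``0x2810 | 0x0100 = 0x2910`` and the
sequence would be::

setpci -s 82:00.0 a8.w
2810
setpci -s 82:00.0 a8.w=2910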
Vlan strip of VF
~~~~~~~~~~~~~~~~

The VF VLAN strip function is only supported by i40e kernel driver version 2.1.26 or greater.
DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.
542 Global configuration warning
543 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The I40E PMD will set some global registers to enable certain functions or change
some settings. As a result, when different ports of the same NIC are used with the
Linux kernel driver and with DPDK, the port handled by the Linux kernel driver will
be affected by the port handled by DPDK.
For example, the register I40E_GL_SWT_L2TAGCTRL is used to control the L2 tag, and
the i40e PMD uses it to set the VLAN TPID. If the TPID is set on port A with DPDK,
the configuration also affects port B on the same NIC, which uses the kernel driver
and does not want that TPID.
So the PMD reports a warning to clarify what is changed by writing a global register.
554 High Performance of Small Packets on 40G NIC
555 --------------------------------------------
As there might be firmware fixes for performance enhancement in the latest version
of the firmware image, a firmware update might be needed to get the best performance.
Check with the local Intel Network Division application engineers for firmware updates.
560 Users should consult the release notes specific to a DPDK release to identify
561 the validated firmware version for a NIC using the i40e driver.
563 Use 16 Bytes RX Descriptor Size
564 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte size can help improve performance for small packets.
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` can be changed in the config files to select the 16 byte RX descriptor size.
569 High Performance and per Packet Latency Tradeoff
570 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Due to the hardware design, an interrupt signal inside the NIC is needed for per
packet descriptor write-back. The minimum interval of interrupts can be set
at compile time by ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in the configuration files.
Although there is a default value, the interval can be tuned with that configuration
item, depending on whether the user cares more about throughput or per packet latency.
579 Example of getting best performance with l3fwd example
580 ------------------------------------------------------
582 The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
583 Intel server platform and Intel XL710 NICs.
585 The example scenario is to get best performance with two Intel XL710 40GbE ports.
586 See :numref:`figure_intel_perf_test_setup` for the performance test setup.
588 .. _figure_intel_perf_test_setup:
590 .. figure:: img/intel_perf_test_setup.*
592 Performance Test Setup
595 1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
The reason for using two NICs is to overcome a PCIe Gen3 limitation: one slot cannot provide 80G bandwidth
for two 40G ports, but two different PCIe Gen3 x8 slots can.
598 Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
600 82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
601 85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
603 2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
3. Check the PCI devices' NUMA node (socket id) and get the core numbers on that socket.
In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, so cores from socket 1 should be used for forwarding.

Note: Don't use 2 logical cores on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead, use 2 logical
cores from different physical cores (e.g. core18 and core19).
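The NUMA node of a port can be checked through sysfs, for example (using one of the
PCI addresses from this setup):

.. code-block:: console

   cat /sys/bus/pci/devices/0000:82:00.0/numa_node
   1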
611 4. Bind these two ports to igb_uio.
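For example, with the ``dpdk-devbind.py`` script shipped with DPDK (the script
location may vary between DPDK versions):

.. code-block:: console

   ./usertools/dpdk-devbind.py --bind=igb_uio 82:00.0 85:00.0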
5. For an XL710 40G port, at least two queue pairs are needed to achieve the best performance, so two queues per port
will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
Compile the ``l3fwd`` sample with the default LPM mode.
619 7. The command line of running l3fwd would be something like the following::
621 ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
622 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
624 This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
625 core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
627 8. Configure the traffic at a traffic generator.
* Create a stream on the packet generator.
631 * Set the Ethernet II type to 0x0800.