..  BSD LICENSE
    Copyright(c) 2016 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

I40E Poll Mode Driver
======================

The I40E PMD (librte_pmd_i40e) provides poll mode driver support
for the Intel X710/XL710/X722 10/40 Gbps family of adapters.

Features
--------

Features of the I40E PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- VLAN/QinQ stripping and inserting
- Port hardware statistics
- Link state information
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- IEEE 1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL
- Dynamic Device Personalization (DDP)

Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ if needed.

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_i40e`` driver.

- ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)

  Toggle bulk allocation for RX.

- ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)

  Toggle the use of Vector PMD instead of the normal RX/TX path.
  To enable vPMD for RX, bulk allocation for RX must be allowed.

- ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)

  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for PF.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ Pool.

- ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` (default ``-1``)

  Interrupt Throttling interval.

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Number of Queues per VF`` (default ``4``)

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is ``aaaa:bb.cc``, the number of queues per VF can be configured
  with the EAL parameter ``-w aaaa:bb.cc,queue-num-per-vf=n``. The value n can be
  1, 2, 4, 8 or 16. If no such parameter is configured, the number of queues
  per VF is 4 by default.
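
  For example, a minimal sketch of launching ``testpmd`` with this devarg (the PCI
  address ``0000:83:00.0`` is only an assumed example):

  .. code-block:: console

     ./app/testpmd -l 0-3 -n 4 -w 0000:83:00.0,queue-num-per-vf=8 -- -i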

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF, for example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
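
   The created VFs should then show up as new PCI devices; as a quick check (the
   exact device name reported by ``lspci`` may differ):

   .. code-block:: console

      lspci | grep "Virtual Function"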

#. Assign VF MAC address:

   Assign a MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   For example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0

#. Assign the VF to a VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run testpmd.

   Example output:

   .. code-block:: console

      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)

      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.

      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd``, and add vlan 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac

   testpmd> set promisc 0 off
   testpmd> rx_vlan add 10 0

Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields for different types of packet: the flow type, the specific input set per flow type and the flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verify_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verify_tag
   l2_payload : ether_type

The flexible payload is selected from offset 0 to 15 of the packet's payload by default, while it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
                 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1

Add a rule to direct ``ipv4-udp`` packets whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
            src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
            fwd pf queue 1 fd_id 1

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0 ####################
     SUPPORTED FLOW TYPE: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                          ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
     max_len:       16          payload_limit: 480
     payload_unit:  2           payload_seg:   3
     bitmask_unit:  2           bitmask_num:   2
     src_ipv4: 0x00000000,
     dst_ipv4: 0x00000000,
     src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
     dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
       L2_PAYLOAD: 0  1  2  3  4  5  6 ...
       L3_PAYLOAD: 0  1  2  3  4  5  6 ...
       L4_PAYLOAD: 0  1  2  3  4  5  6 ...
     ipv4-udp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-tcp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-sctp:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-frag:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-udp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-tcp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-sctp:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-frag:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1           best_count:    0
     guarant_space: 512         best_space:    7168

Delete all flow director rules on a port:

.. code-block:: console

   testpmd> flush_flow_director 0

Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet Controller X710 and XL710 Family support a feature called
"Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world, so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs where they
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port, so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled, VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

    -w 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

    -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
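
For example, a sketch of a ``testpmd`` launch using these devargs (the PCI address is
assumed; note that the ``;`` in ``floating_veb_list`` must be quoted or escaped so the
shell does not interpret it):

.. code-block:: console

   ./app/testpmd -l 0-3 -n 4 -w '84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4' -- -i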

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Dynamic Device Personalization (DDP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet Controller X*710 family supports a feature called "Dynamic
Device Personalization (DDP)", which is used to configure hardware by downloading
a profile to support protocols/filters which are not supported by default.
The DDP functionality requires a NIC firmware version of 6.0 or greater.

The current implementation supports MPLSoUDP/MPLSoGRE/GTP-C/GTP-U/PPPoE/PPPoL2TP;
steering for these protocols can be used with the rte_flow API.
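
As a sketch only (the exact pattern items depend on the loaded profile and on the
testpmd flow syntax of the DPDK version in use), steering GTP-U packets by TEID to a
queue could look like:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 10 / end \
            actions queue index 2 / end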

Load a profile which supports MPLSoUDP/MPLSoGRE:

.. code-block:: console

   testpmd> ddp add 0 ./mpls.pkgo

Delete an MPLS profile:

.. code-block:: console

   testpmd> ddp del 0 ./mpls.pkgo

Get the list of loaded DDP packages:

.. code-block:: console

   testpmd> ddp get list 0

Display information about an MPLS profile:

.. code-block:: console

   testpmd> ddp get info ./mpls.pkgo

Input set configuration
~~~~~~~~~~~~~~~~~~~~~~~

The input set for any PCTYPE can be configured with a user-defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:

.. code-block:: console

   testpmd> port config 0 pctype 43 hash_inset clear all
   testpmd> port config 0 pctype 43 hash_inset set field 13
   testpmd> port config 0 pctype 43 hash_inset set field 14
   testpmd> port config 0 pctype 43 hash_inset set field 15

Limitations or Known issues
---------------------------

MPLS packet classification on X710/XL710
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, while no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director cannot be used to classify MPLS packets
as with previous firmware versions. Meanwhile, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::

   testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
            0x8847 fwd queue <M>

16 Byte RX Descriptor setting on DPDK VF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request a specific RX descriptor mode, and no interface
to notify the VF of its own RX descriptor mode.
All available versions of the Linux i40e kernel driver do not support the 16
byte RX descriptor. So if the Linux i40e kernel driver is used as the host driver,
while the DPDK i40e PMD is used as the VF driver, DPDK cannot choose the 16 byte
receive descriptor: the RX descriptor is already set to 32 bytes by the i40e
kernel driver. That is to say, the user should keep
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the config file.
In the future, if the Linux i40e driver supports the 16 byte RX descriptor, the user
should make sure the DPDK VF uses the same RX descriptor mode, 16 byte or 32
byte, as the PF driver.

The same rule applies to DPDK PF + DPDK VF. The PF and VF should use the same RX
descriptor mode, otherwise VF RX will not work.

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a firmware limitation, the PF can receive packets with Ethertype 0x88A8
only when floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get()`` incorrectly
show it as received.
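
For example, the effect can be observed through the standard port statistics reported
in ``testpmd`` (which relies on ``rte_eth_stats_get()`` internally):

.. code-block:: console

   testpmd> show port stats 0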

VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
The behavior differs when handling per VF and per TC max bandwidth settings.
When enabling per VF max bandwidth, the software will check if per TC max bandwidth is
enabled. If so, it returns failure.
When enabling per TC max bandwidth, the software will check if per VF max bandwidth
is enabled. If so, it disables per VF max bandwidth and continues with the per TC max
bandwidth setting.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 2 TX scheduling modes for TCs: round robin and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, it has no
effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.

VF performance is impacted by PCI extended tag setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To reach maximum NIC performance in the VF, the PCI extended tag must be
enabled. The DPDK i40e PF driver will set this feature during initialization,
but the kernel PF driver does not. So when running traffic on a VF which is
managed by the kernel PF driver, a significant NIC performance downgrade has
been observed (for 64 byte packets, there is about 25% line-rate downgrade for
a 25G device and about 35% for a 40G device).

For kernel versions >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it. So by default, this is not an
issue. For kernels <= 4.11 or when the PCI extended tag is disabled, it can be
enabled using the steps below.

#. Get the current value of the PCI configure register::

      setpci -s <XX:XX.X> a8.w

#. Set bit 8::

      value = value | 0x100

#. Set the PCI configure register with the new value::

      setpci -s <XX:XX.X> a8.w=<value>
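
For example, a hypothetical walk-through for a device at ``82:00.0`` (the read-back
value ``0910`` is only illustrative; ``0910 | 0100 = 0a10``):

.. code-block:: console

   setpci -s 82:00.0 a8.w
   0910
   setpci -s 82:00.0 a8.w=0a10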

Vlan strip of VF
~~~~~~~~~~~~~~~~

The VF vlan strip function is only supported in the i40e kernel driver >= 2.1.26.

DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.

High Performance of Small Packets on 40G NIC
--------------------------------------------

As there might be firmware fixes for performance enhancement in the latest version
of the firmware image, a firmware update might be needed to get the best performance.
Check with the local Intel Network Division application engineers for firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte size can
help improve performance for small packets.
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` can be changed in the config file to use
16 byte RX descriptors.
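
For example, for the make-based build the option can be switched in the base config
(assuming the default ``config/common_base`` file is being used):

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_base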

High Performance and per Packet Latency Tradeoff
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to the hardware design, an interrupt signal inside the NIC is needed for per
packet descriptor write-back. The minimum interval of interrupts can be set
at compile time with ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in the configuration files.
Although there is a default configuration, the interval can be tuned by the
user with that configuration item, depending on whether the user cares more about
throughput or per packet latency.

Example of getting best performance with l3fwd example
------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
Intel server platform and Intel XL710 NICs.

The example scenario is to get best performance with two Intel XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup

1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
   The reason for using two NICs is to overcome a PCIe Gen3 limitation: a single slot cannot provide 80 Gbps of bandwidth
   for two 40G ports, but two different PCIe Gen3 x8 slots can.
   Refer to the sample NICs output below, from which we can select ``82:00.0`` and ``85:00.0`` as the test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

3. Check the PCI devices' numa node (socket id) and get the core numbers on that exact socket id.
   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, so cores from socket 1 should be used.

   Note: Don't use 2 logical cores on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead, use 2 logical
   cores from different physical cores (e.g. core18 and core19).
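
   For example, the socket of each port and the CPU layout can be checked with
   (sysfs path and DPDK script location assumed):

   .. code-block:: console

      cat /sys/bus/pci/devices/0000:82:00.0/numa_node
      ./usertools/cpu_layout.py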

4. Bind these two ports to igb_uio.
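
   For example, a sketch using the ``dpdk-devbind.py`` script (assuming it lives in
   ``usertools/`` in the DPDK tree):

   .. code-block:: console

      ./usertools/dpdk-devbind.py --bind=igb_uio 82:00.0 85:00.0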

5. As to the XL710 40G port, we need at least two queue pairs to achieve best performance, therefore two queues per port
   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.

6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default lpm mode.

7. The command line for running ``l3fwd`` would be something like the following::

      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

8. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.