..  BSD LICENSE

    Copyright(c) 2016 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

I40E Poll Mode Driver
=====================

The I40E PMD (librte_pmd_i40e) provides poll mode driver support
for the Intel X710/XL710/X722 10/40 Gbps family of adapters.

Features of the I40E PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- VLAN/QinQ stripping and inserting
- Port hardware statistics
- Link state information
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- IEEE1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL

Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ if needed.

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file;
an example snippet is shown after the list.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_i40e`` driver.

- ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)

  Toggle bulk allocation for RX.

- ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)

  Toggle the use of the Vector PMD instead of the normal RX/TX path.
  To enable vPMD for RX, bulk allocation for RX must be allowed.

- ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)

  Toggle the use of a 16 byte RX descriptor; by default the RX descriptor is 32 bytes.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for the PF.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` (default ``4``)

  Number of queues reserved for each SR-IOV VF.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ pool.

- ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` (default ``-1``)

  Interrupt Throttling interval.
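
As a minimal sketch (assuming the make-based build system, where these options live in
``config/common_base``), enabling the vector PMD together with RX bulk allocation and
16 byte RX descriptors could look like::

   CONFIG_RTE_LIBRTE_I40E_PMD=y
   CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
   CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
   CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y

Keep ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` if the DPDK i40e PMD will run as a VF
under the Linux i40e kernel PF driver (see the limitations section below).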

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF, for example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
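
   To confirm that the VF devices were created, they can be listed with ``lspci``
   (the exact device description string varies with the adapter):

   .. code-block:: console

      lspci | grep "Virtual Function"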

#. Assign VF MAC address:

   Assign a MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   For example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0

#. Assign VF to VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.
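
   As one possible sketch (assuming a QEMU/KVM guest, and that the VF, shown here with a
   hypothetical PCI address of ``0000:81:02.0``, has already been bound to ``vfio-pci``
   on the host), the VF can be passed through on the QEMU command line:

   .. code-block:: console

      # hypothetical VF address; use the address reported by lspci on your host
      qemu-system-x86_64 ... -device vfio-pci,host=0000:81:02.0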

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to launch ``testpmd``.

   Example output:

   .. code-block:: console

      ...
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...
      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.
      ...
      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd``, and add vlan 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
   ...

   testpmd> set promisc 0 off
   testpmd> rx_vlan add 10 0

Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields for different types of packets: flow type, a specific input set per flow type, and flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   l2_payload : ether_type

The flex payload is selected from offset 0 to 15 of the packet's payload by default, while it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
                 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1

Add a rule to direct ``ipv4-udp`` packets whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
            src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
            fwd pf queue 1 fd_id 1

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0 ####################
     SUPPORTED FLOW TYPE:  ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                           ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
     max_len:       16          payload_limit: 480
     payload_unit:  2           payload_seg:   3
     bitmask_unit:  2           bitmask_num:   2
     src_ipv4: 0x00000000,
     dst_ipv4: 0x00000000,
     src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
     dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
       L2_PAYLOAD:    0      1      2      3      4      5      6 ...
       L3_PAYLOAD:    0      1      2      3      4      5      6 ...
       L4_PAYLOAD:    0      1      2      3      4      5      6 ...
     ipv4-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     l2_payload:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1           best_count:    0
     guarant_space: 512         best_space:    7168

Delete all flow director rules on a port:

.. code-block:: console

   testpmd> flush_flow_director 0

Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet Controller X710 and XL710 Family support a feature called
"Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs where they
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

    -w 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

    -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
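
Note that ``;`` is a shell metacharacter, so when the ``devargs`` string is passed on a
shell command line the whole ``-w`` option usually needs to be quoted. A sketch, assuming
``testpmd`` and the same PCI address as above:

.. code-block:: console

   ./app/testpmd -l 0-3 -n 4 -w "84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4" -- -i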

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Limitations or Known issues
---------------------------

MPLS packet classification on X710/XL710
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, but no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director can no longer be used to classify MPLS
packets as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::

   testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
            0x8847 fwd queue <M>

16 Byte Descriptor cannot be used on DPDK VF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the Linux i40e kernel driver is used as the host driver, while the DPDK i40e PMD
is used as the VF driver, DPDK cannot choose the 16 byte receive descriptor. That
is to say, the user should keep ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the
config file.

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to an FW limitation, the PF can receive packets with Ethertype 0x88A8
only when floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.

VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
The behavior is different when handling per VF and per TC max bandwidth settings.
When enabling per VF max bandwidth, SW will check if per TC max bandwidth is
enabled. If so, it returns failure.
When enabling per TC max bandwidth, SW will check if per VF max bandwidth
is enabled. If so, it disables per VF max bandwidth and continues with the per TC
max bandwidth setting.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are two TX scheduling modes for TCs: round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, the setting
has no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.

VF performance is impacted by PCI extended tag setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To reach maximum NIC performance in the VF the PCI extended tag must be
enabled. The DPDK i40e PF driver will set this feature during initialization,
but the kernel PF driver does not. So when running traffic on a VF which is
managed by the kernel PF driver, a significant NIC performance downgrade has
been observed (for 64 byte packets, there is about a 25% line-rate downgrade for
a 25G device and about 35% for a 40G device).

For kernel versions >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it, so by default this is not an
issue. For kernels < 4.11, or when the PCI extended tag is otherwise disabled, it can
be enabled using the steps below; a worked example follows the steps.

#. Get the current value of the PCI configure register::

      setpci -s <XX:XX.X> a8.w

#. Set bit 8::

      value = value | 0x100

#. Set the PCI configure register with the new value::

      setpci -s <XX:XX.X> a8.w=<value>
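
As a worked sketch (assuming a hypothetical device address ``82:00.0`` whose register
currently reads ``2830``), OR-ing with ``0x100`` gives ``2930``, which is then written back:

.. code-block:: console

   setpci -s 82:00.0 a8.w
   2830
   setpci -s 82:00.0 a8.w=2930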

High Performance of Small Packets on 40G NIC
--------------------------------------------

As there might be firmware fixes for performance enhancement in the latest version
of the firmware image, a firmware update might be needed to get high performance.
Check with the local Intel Network Division application engineers for firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As the i40e PMD supports both 16 and 32 byte RX descriptors, and the 16 byte size can help improve performance for small packets,
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` can be changed to ``y`` in the config file to use 16 byte RX descriptors.

High Performance and per Packet Latency Tradeoff
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to the hardware design, an interrupt signal inside the NIC is needed for per
packet descriptor write-back. The minimum interval of interrupts can be set
at compile time by ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in the configuration files.
Although there is a default configuration, the interval can be tuned by the
user with that configuration item, depending on whether the user cares more about
performance or per packet latency.

Example of getting best performance with l3fwd example
-------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
Intel server platform and Intel XL710 NICs.

The example scenario is to get best performance with two Intel XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup

1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
   The reason for using two NICs is to overcome a PCIe Gen3 limitation since it cannot provide 80 Gbps of bandwidth
   for two 40 Gbps ports, but two different PCIe Gen3 x8 slots can.
   Refer to the sample NICs output below, from which we select ``82:00.0`` and ``85:00.0`` as test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

3. Check the PCI devices' NUMA node (socket id) and get the core numbers on that same socket.
   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, so cores on socket 1 of the referenced platform
   are used for forwarding.

   Note: Don't use 2 logical cores on the same physical core (e.g. physical core 18 hosts the logical cores core18 and core54); instead, use 2 logical
   cores from different physical cores (e.g. core18 and core19).
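
   A quick way to check the NUMA node of a port (using the PCI addresses from this example)
   is to read its sysfs attribute:

   .. code-block:: console

      cat /sys/bus/pci/devices/0000:82:00.0/numa_node
      1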

4. Bind these two ports to igb_uio.
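
   For example, using the ``dpdk-devbind.py`` script (found under ``usertools/`` in recent
   DPDK releases, ``tools/`` in older ones):

   .. code-block:: console

      ./usertools/dpdk-devbind.py --bind=igb_uio 82:00.0 85:00.0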

5. For an XL710 40GbE port, at least two queue pairs are needed to achieve best performance, so two queues per port
   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.

6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default LPM mode.

7. The command line for running l3fwd would be something like the following::

      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

8. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.