I40E Poll Mode Driver
======================
The i40e PMD (**librte_net_i40e**) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and Intel Ethernet
Connection X722 (which supports only a subset of these features).
- Malicious Device Driver event catch and notify
- Generic flow API
Linux Prerequisites
-------------------
- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
* In all cases Intel recommends using Intel Ethernet Optics; other modules
may function but are not validated by Intel. Contact Intel for supported media types.
Windows Prerequisites
---------------------

- Follow the DPDK `Getting Started Guide for Windows <https://doc.dpdk.org/guides/windows_gsg/index.html>`_ to set up the basic DPDK environment.

- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.
Recommended Matching List
-------------------------
+--------------+-----------------------+------------------+
| DPDK version | Kernel driver version | Firmware version |
+==============+=======================+==================+
|    20.11     |        2.13.10        |       8.00       |
+--------------+-----------------------+------------------+
|    20.08     |        2.12.6         |       7.30       |
+--------------+-----------------------+------------------+
|    20.05     |        2.11.27        |       7.30       |
+--------------+-----------------------+------------------+

+--------------+-----------------------+------------------+
| DPDK version | Kernel driver version | Firmware version |
+==============+=======================+==================+
|    20.11     |        2.13.10        |       5.00       |
+--------------+-----------------------+------------------+
|    20.08     |        2.12.6         |       4.11       |
+--------------+-----------------------+------------------+
|    20.05     |        2.11.27        |       4.11       |
+--------------+-----------------------+------------------+
The number of reserved queues per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
VF can be configured with an EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If a VF requests more than
the reserved queues per VF, the PF will be able to allocate a maximum of 16 queues after a VF
reset.
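As a sketch of how this devarg is passed in practice (the PCI address, core list and queue count below are illustrative, not taken from this guide), launching testpmd with 8 reserved queues per VF could look like:

```shell
# Reserve 8 queues per VF on the PF at 0000:84:00.0 before creating VFs.
# The PCI address and the -l/-n values are illustrative.
./dpdk-testpmd -l 0-3 -n 4 -a 0000:84:00.0,queue-num-per-vf=8 -- -i
```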
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::

   -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
port representors on initialization of the PF PMD by passing the VF IDs of
the VFs which are required::

   -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
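As an illustrative sketch (the PCI address, core list and VF IDs are hypothetical), a full testpmd invocation requesting representors for two VFs might be:

```shell
# Create port representors for VF 0 and VF 1 of the PF at 0000:84:00.0.
# All required representors must be listed here, since hot-plugging of
# representor ports is not supported.
./dpdk-testpmd -l 0-3 -n 4 -a 0000:84:00.0,representor=[0,1] -- -i
```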
- ``Enable validation for VF message`` (default ``not enabled``)
The PF counts messages from each VF. If in any period of seconds the message
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::

   -a 84:00.0,vf_msg_cfg=80@120:180

Here, if a VF sends more than 80 messages within any 120-second period, its
messages will be ignored for 180 seconds.
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 \
            dst is 2.2.2.5 / udp src is 32 dst is 32 / end \
            actions mark id 1 / queue index 1 / end
Check the flow director status:
f_add: 0 f_remove: 0
Floating VEB
~~~~~~~~~~~~~
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

   -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

   -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
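Note that ``;`` is a command separator in most shells, so when such a list is passed on a real command line the devargs should be quoted or escaped. A sketch (the PCI address and core list are illustrative):

```shell
# Quote the devargs so the shell does not split the command at ';'.
./dpdk-testpmd -l 0-3 -n 4 \
    -a '84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4' -- -i
```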
- ``RSS Flow``
RSS Flow supports setting the hash input set and hash function, enabling
hash, and configuring queues.
For example:
- Configure queues as queue 0, 1, 2, 3.
.. code-block:: console
as with previous firmware versions. Meanwhile, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth type is 0x8847 / end \
            actions queue index <M> / end
16 Byte RX Descriptor setting on DPDK VF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
it will make any cloud filter using inner_vlan or tunnel key invalid. The default configuration will be
recovered only by a NIC core reset.
Mirror rule limitation for X722
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a firmware restriction of the X722, the same VSI cannot have more than one mirror rule.
High Performance of Small Packets on 40GbE NIC
----------------------------------------------
7. The command line for running l3fwd would be something like the following::

      ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,