..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.

.. include:: <isonum.txt>

IXGBE Driver
============

Vector PMD for IXGBE
--------------------

Vector PMD uses Intel\ |reg| SIMD instructions to optimize packet I/O.
It improves load/store bandwidth efficiency of the L1 data cache by using a wider SSE/AVX register (1).
The wider register provides space to hold multiple packet buffers, which reduces the number of instructions needed when processing packets in bulk.

There is no change to the PMD API. The RX/TX handlers are the only two entry points for vPMD packet I/O.
They are transparently registered at runtime for RX/TX execution if all condition checks pass.

1. To date, only an SSE version of the ixgbe vPMD is available.

Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.

RX Constraints
~~~~~~~~~~~~~~

Linux Prerequisites and Pre-conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following prerequisites apply:

* To enable vPMD to work for RX, bulk allocation for Rx must be allowed.

Ensure that the following pre-conditions are satisfied:

* rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST

* rxq->rx_free_thresh < rxq->nb_rx_desc

* (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0

* rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)

These conditions are checked in the code.
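
As an illustration, a minimal RX queue setup that satisfies these pre-conditions might look as follows. This is a sketch, not driver code: ``port_id`` and ``mbuf_pool`` are assumed to exist, and the descriptor count and threshold are example values.

.. code-block:: c

   #include <rte_ethdev.h>

   /* Example values chosen to satisfy the vPMD RX pre-conditions:
    * rx_free_thresh = 32 (>= RTE_PMD_IXGBE_RX_MAX_BURST) and
    * nb_rx_desc = 1024 (a multiple of 32, below 4096 - 32). */
   struct rte_eth_dev_info dev_info;
   struct rte_eth_rxconf rxconf;

   rte_eth_dev_info_get(port_id, &dev_info);
   rxconf = dev_info.default_rxconf;
   rxconf.rx_free_thresh = 32;

   rte_eth_rx_queue_setup(port_id, 0 /* queue id */, 1024 /* nb_rx_desc */,
                          rte_eth_dev_socket_id(port_id), &rxconf, mbuf_pool);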

Scattered packets are not supported in this mode.
If an incoming packet is greater than the maximum acceptable length of one "mbuf" data size (by default, the size is 2 KB),
vPMD for RX is disabled.

By default, IXGBE_MAX_RING_DESC is set to 4096 and RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.

Windows Prerequisites and Pre-conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
  to set up the basic DPDK environment.

- Identify the Intel\ |reg| Ethernet adapter and get the latest NVM/FW version.

- To access any Intel\ |reg| Ethernet hardware,
  load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

- Loading a private Dynamic Device Personalization (DDP) package
  is not supported on Windows.

Feature not Supported by RX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some features are not supported when trying to increase the throughput in vPMD.
These include:

* RX checksum offload

Other features are supported using optional MACRO configuration. They include:

* HW VLAN strip

* HW extend dual VLAN

To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads`` will be checked:

* RTE_ETH_RX_OFFLOAD_VLAN_STRIP

* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND

* RTE_ETH_RX_OFFLOAD_CHECKSUM

* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT

``fdir_conf->mode`` will also be checked.
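
For example, a port configuration that keeps the RX vector path available can simply leave these offload bits unset. This is an illustrative sketch (``port_id`` is assumed; error handling omitted), not part of the driver:

.. code-block:: c

   #include <string.h>
   #include <rte_ethdev.h>

   struct rte_eth_conf port_conf;

   /* A zeroed configuration leaves all of the checked RX offload bits
    * unset and fdir_conf.mode at its disabled default. */
   memset(&port_conf, 0, sizeof(port_conf));

   rte_eth_dev_configure(port_id, 1 /* nb_rx_q */, 1 /* nb_tx_q */, &port_conf);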

VF Runtime Options
^^^^^^^^^^^^^^^^^^

The following ``devargs`` options can be enabled at runtime. They must
be passed as part of EAL arguments. For example,

.. code-block:: console

   dpdk-testpmd -a af:10.0,pflink_fullchk=1 -- -i

- ``pflink_fullchk`` (default **0**)

  When calling ``rte_eth_link_get_nowait()`` to get VF link status,
  this option controls how the VF synchronizes its status with the PF's.
  If set, the VF checks not only the PF's physical link status
  (by reading the related register) but also the mailbox status;
  this behavior is called fully checking, and checking the mailbox
  triggers generation of a PF mailbox interrupt. If unset, the
  application can get the VF's link status quickly by reading only the
  PF's link status register, which avoids mailbox interrupt generation
  across the whole system.

  ``rte_eth_link_get()`` will still use the mailbox method regardless
  of the ``pflink_fullchk`` setting.
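
For instance, a VF application taking the fast path might query link status as follows. This is a sketch (``vf_port_id`` is an assumed variable), not a required pattern:

.. code-block:: c

   #include <rte_ethdev.h>

   struct rte_eth_link link;

   /* With pflink_fullchk unset, this reads only the PF's link status
    * register and does not trigger a PF mailbox interrupt. */
   rte_eth_link_get_nowait(vf_port_id, &link);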

RX Burst Size
^^^^^^^^^^^^^

As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater than 32 per burst.
The receive handler returns zero if fewer than 32 packets (nb_pkts < 32) are requested.
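
The sketch below shows a receive call that respects this constraint; ``port_id`` is an assumed variable.

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mbuf.h>

   #define BURST_SIZE 32   /* must be >= 32 for the vPMD receive handler */

   struct rte_mbuf *pkts[BURST_SIZE];
   uint16_t nb_rx;

   /* Requesting fewer than 32 packets would make the vPMD handler return 0. */
   nb_rx = rte_eth_rx_burst(port_id, 0 /* queue id */, pkts, BURST_SIZE);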

TX Constraint
~~~~~~~~~~~~~

Prerequisite
^^^^^^^^^^^^

The only prerequisite is related to ``tx_rs_thresh``.
The ``tx_rs_thresh`` value must be greater than or equal to ``RTE_PMD_IXGBE_TX_MAX_BURST``,
but less than or equal to ``RTE_IXGBE_TX_MAX_FREE_BUF_SZ``.
Consequently, by default the ``tx_rs_thresh`` value is in the range 32 to 64.
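
A TX queue setup satisfying this constraint might look like the following sketch (``port_id`` is assumed; the threshold and descriptor count are example values):

.. code-block:: c

   #include <rte_ethdev.h>

   struct rte_eth_dev_info dev_info;
   struct rte_eth_txconf txconf;

   rte_eth_dev_info_get(port_id, &dev_info);
   txconf = dev_info.default_txconf;
   txconf.tx_rs_thresh = 32;   /* within the default 32..64 range */
   txconf.offloads = 0;        /* the TX vector path requires no offloads,
                                * see the next section */

   rte_eth_tx_queue_setup(port_id, 0 /* queue id */, 512 /* nb_tx_desc */,
                          rte_eth_dev_socket_id(port_id), &txconf);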

Feature not Supported by TX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TX vPMD only works when ``offloads`` is set to 0.

This means that it does not support any TX offload.

Application Programming Interface
---------------------------------

In DPDK release v16.11, an API for ixgbe-specific functions was added to the ixgbe PMD.
The declarations for the API functions are in the header ``rte_pmd_ixgbe.h``.
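
As one illustration of using this API, the sketch below assigns a MAC address to a VF from the PF port. ``pf_port_id`` is an assumed variable, and exact type names can vary between DPDK releases.

.. code-block:: c

   #include <rte_pmd_ixgbe.h>

   /* Locally administered example address for VF 0. */
   struct rte_ether_addr vf_mac = {
       .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
   };

   rte_pmd_ixgbe_set_vf_mac_addr(pf_port_id, 0 /* vf id */, &vf_mac);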

Sample Application Notes
------------------------

l3fwd
~~~~~

When running l3fwd with vPMD, there is one thing to note.
In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.

load_balancer
~~~~~~~~~~~~~

As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.

Limitations or Known issues
---------------------------

Malicious Driver Detection not Supported
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel x550 series NICs support a feature called MDD (Malicious
Driver Detection) which checks the behavior of the VF driver.
If this feature is enabled, the VF must use the advanced context descriptor
correctly and set the CC (Check Context) bit.
The DPDK PF doesn't support MDD, but the kernel PF does, so a problem can
arise in the kernel PF + DPDK VF scenario: if the user enables MDD in the
kernel PF, the DPDK VF will not work, because the kernel PF considers the
VF malicious when it does not act as MDD requires.
Supporting MDD would have a significant performance impact: DPDK would have
to determine whether the advanced context descriptor should be set and set it,
and it would have to obtain the header length from the upper layer, because
parsing the packet itself is not acceptable. Supporting MDD is therefore too
expensive.
When using kernel PF + DPDK VF on x550, please make sure to use a kernel
PF driver that disables MDD or can disable MDD.

Some kernel drivers disable MDD by default, while others can disable it with
the command ``insmod ixgbe.ko MDD=0,0``. Each "0" in the
command refers to a port. For example, if there are 6 ixgbe ports, the command
should be changed to ``insmod ixgbe.ko MDD=0,0,0,0,0,0``.

Statistics
~~~~~~~~~~

The statistics of ixgbe hardware must be polled regularly in order for them to
remain consistent. Running a DPDK application without polling the statistics will
cause registers on hardware to count to the maximum value, and "stick" at
that value.

In order to avoid statistic registers ever reaching the maximum value,
read the statistics from the hardware using ``rte_eth_stats_get()`` or
``rte_eth_xstats_get()``.

The maximum time between statistics polls that ensures consistent results can
be calculated as follows:

.. code-block:: c

   max_read_interval = UINT_MAX / max_packets_per_second
   max_read_interval = 4294967295 / 14880952
   max_read_interval = 288.6218096127183 (seconds)
   max_read_interval = ~4 mins 48 sec.

In order to ensure valid results, it is recommended to poll every 4 minutes.
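
A minimal polling loop that stays within this window could look like the following sketch (``port_id`` is assumed; a real application would typically poll from a service core or timer):

.. code-block:: c

   #include <unistd.h>
   #include <rte_ethdev.h>

   struct rte_eth_stats stats;

   for (;;) {
       /* Reading the counters keeps the hardware statistics registers
        * from "sticking" at their maximum value. */
       rte_eth_stats_get(port_id, &stats);
       sleep(240);   /* 4 minutes, within the ~4 min 48 s limit */
   }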

MTU setting
~~~~~~~~~~~

Although the user can set the MTU separately on PF and VF ports, the ixgbe NIC
only supports one global MTU per physical port.
So when the user sets different MTUs on PF and VF ports in one physical port,
the real MTU for all these PF and VF ports is the largest value set.
This behavior is based on the kernel driver behavior.
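
For example (a sketch with assumed ``pf_port_id`` and ``vf_port_id`` variables on the same physical port):

.. code-block:: c

   #include <rte_ethdev.h>

   rte_eth_dev_set_mtu(pf_port_id, 1500);
   rte_eth_dev_set_mtu(vf_port_id, 9000);
   /* The effective MTU for both ports is the largest value set: 9000. */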

VF MAC address setting
~~~~~~~~~~~~~~~~~~~~~~

On ixgbe, the concept of "pool" can be used for different things depending on
the mode. In VMDq mode, "pool" means a VMDq pool. In IOV mode, "pool" means a
VF.

There is no RTE API to add a VF's MAC address from the PF. On ixgbe, the
``rte_eth_dev_mac_addr_add()`` function can be used to add a VF's MAC address,
as a workaround.
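
The sketch below illustrates this workaround; ``pf_port_id`` and the pool index are assumed example values, with the pool interpreted as a VF in IOV mode.

.. code-block:: c

   #include <rte_ethdev.h>

   /* Locally administered example address for the VF behind pool 1. */
   struct rte_ether_addr mac = {
       .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 }
   };

   rte_eth_dev_mac_addr_add(pf_port_id, &mac, 1 /* pool */);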

X550 does not support legacy interrupt mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Description
^^^^^^^^^^^

X550 cannot get interrupts if using the ``uio_pci_generic`` module or the legacy
interrupt mode of ``igb_uio`` or ``vfio``, because the X550 errata states
that the Interrupt Status bit is not implemented. The errata is item #22
from the `X550 spec update <https://www.intel.com/content/dam/www/public/us/en/
documents/specification-updates/ethernet-x550-spec-update.pdf>`_.

Implication
^^^^^^^^^^^

When using the ``uio_pci_generic`` module or the legacy interrupt mode of
``igb_uio`` or ``vfio``, the Interrupt Status bit is checked to determine whether
an interrupt has arrived. Since the bit is not implemented in X550, the IRQ
cannot be handled correctly and the event fd cannot be reported to DPDK apps.
The apps then cannot get interrupts, and ``dmesg`` will show messages like
``irq #No.: nobody cared``.

Workaround
^^^^^^^^^^

Do not bind the ``uio_pci_generic`` module to X550 NICs.
Do not bind ``igb_uio`` with legacy mode to X550 NICs.
Before binding ``vfio`` with legacy mode to X550 NICs, use
``modprobe vfio nointxmask=1`` to load the ``vfio`` module if the INTx is
not shared with other devices.

RSS isn't supported when QinQ is enabled
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a firmware limitation, ixgbe currently does not support RSS when QinQ is enabled.

UDP with zero checksum is reported as error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Intel 82599 10 Gigabit Ethernet Controller Specification Update (Revision 2.87)
Errata: 44 Integrity Error Reported for IPv4/UDP Packets With Zero Checksum

To support UDP zero checksum, a packet with a zero or bad UDP checksum is marked as
``RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN``, so the application needs to recompute the checksum to
validate it.
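
A software validation step might look like the following sketch for an untagged IPv4/UDP packet without IP options; ``m`` is an assumed ``struct rte_mbuf *``.

.. code-block:: c

   #include <rte_ether.h>
   #include <rte_ip.h>
   #include <rte_mbuf.h>
   #include <rte_udp.h>

   if ((m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
           RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN) {
       struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
               struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
       struct rte_udp_hdr *udp = (struct rte_udp_hdr *)(ip + 1);
       uint16_t rx_cksum = udp->dgram_cksum;

       if (rx_cksum == 0) {
           /* RFC 768: zero means no checksum was transmitted - accept. */
       } else {
           /* Recompute over the pseudo-header and payload and compare. */
           udp->dgram_cksum = 0;
           uint16_t sw_cksum = rte_ipv4_udptcp_cksum(ip, udp);
           udp->dgram_cksum = rx_cksum;
           if (sw_cksum != rx_cksum) {
               /* genuinely corrupted packet */
           }
       }
   }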

Inline crypto processing support
--------------------------------

Inline IPsec processing is supported for ``RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
mode for ESP packets only:

- ESP authentication only: AES-128-GMAC (128-bit key)
- ESP encryption and authentication: AES-128-GCM (128-bit key)

The IPsec Security Gateway Sample Application supports inline IPsec processing
for the ixgbe PMD.

For more details see the IPsec Security Gateway Sample Application and Security
library documentation.

Virtual Function Port Representors
----------------------------------

The IXGBE PF PMD supports the creation of VF port representors for the control
and monitoring of IXGBE virtual function devices. Each port representor
corresponds to a single virtual function of that device. Using the ``devargs``
option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required::

   -a DBDF,representor=[0,1,4]

Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.

Supported Chipsets and NICs
---------------------------

- Intel 82599EB 10 Gigabit Ethernet Controller
- Intel 82598EB 10 Gigabit Ethernet Controller
- Intel 82599ES 10 Gigabit Ethernet Controller
- Intel 82599EN 10 Gigabit Ethernet Controller
- Intel Ethernet Controller X540-AT2
- Intel Ethernet Controller X550-BT2
- Intel Ethernet Controller X550-AT2
- Intel Ethernet Controller X550-AT
- Intel Ethernet Converged Network Adapter X520-SR1
- Intel Ethernet Converged Network Adapter X520-SR2
- Intel Ethernet Converged Network Adapter X520-LR1
- Intel Ethernet Converged Network Adapter X520-DA1
- Intel Ethernet Converged Network Adapter X520-DA2
- Intel Ethernet Converged Network Adapter X520-DA4
- Intel Ethernet Converged Network Adapter X520-QDA1
- Intel Ethernet Converged Network Adapter X520-T2
- Intel 10 Gigabit AF DA Dual Port Server Adapter
- Intel 10 Gigabit AT Server Adapter
- Intel 10 Gigabit AT2 Server Adapter
- Intel 10 Gigabit CX4 Dual Port Server Adapter
- Intel 10 Gigabit XF LR Server Adapter
- Intel 10 Gigabit XF SR Dual Port Server Adapter
- Intel 10 Gigabit XF SR Server Adapter
- Intel Ethernet Converged Network Adapter X540-T1
- Intel Ethernet Converged Network Adapter X540-T2
- Intel Ethernet Converged Network Adapter X550-T1
- Intel Ethernet Converged Network Adapter X550-T2