..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.
Vector PMD uses Intel® SIMD instructions to optimize packet I/O.
It improves load/store bandwidth efficiency of the L1 data cache by using a wider SSE/AVX register (1).
The wider register gives space to hold multiple packet buffers so as to save the instruction number when processing a bulk of packets.

There is no change to the PMD API. The RX/TX handlers are the only two entries for vPMD packet I/O.
They are transparently registered at runtime for RX/TX execution if all condition checks pass.

1. To date, only an SSE version of IXGBE vPMD is available.
   To ensure that vPMD is in the binary code, ensure that the option ``CONFIG_RTE_IXGBE_INC_VECTOR=y`` is in the configure file.

Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.

Prerequisites and Pre-conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following prerequisites apply:

* To enable vPMD to work for RX, bulk allocation for Rx must be allowed.

Ensure that the following pre-conditions are satisfied:

* rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST

* rxq->rx_free_thresh < rxq->nb_rx_desc

* (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0

* rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)

These conditions are checked in the code.

Scattered packets are not supported in this mode.
If an incoming packet is greater than the maximum acceptable length of one "mbuf" data size (by default, the size is 2 KB),
vPMD for RX would be disabled.

By default, IXGBE_MAX_RING_DESC is set to 4096 and RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.
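
As an illustration, the following sketch sets up an RX queue whose descriptor count and ``rx_free_thresh`` satisfy the pre-conditions above. The function name and the descriptor count of 512 are hypothetical choices, not values required by the driver:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Illustrative values chosen to satisfy the vPMD RX pre-conditions:
     * rx_free_thresh = 32 >= RTE_PMD_IXGBE_RX_MAX_BURST, 512 % 32 == 0,
     * 32 < 512 and 512 < IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST. */
    static int
    setup_vpmd_rx_queue(uint16_t port_id, uint16_t queue_id,
                        struct rte_mempool *mb_pool)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rx_conf;

        rte_eth_dev_info_get(port_id, &dev_info);
        rx_conf = dev_info.default_rxconf;   /* start from driver defaults */
        rx_conf.rx_free_thresh = 32;

        return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rx_conf, mb_pool);
    }
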

Features not Supported by RX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some features are not supported when trying to increase the throughput in vPMD.
These include:

* RX checksum offload

Other features are supported using optional MACRO configuration.

To guarantee the constraint, capabilities in ``dev_conf.rxmode.offloads`` will be checked:

* DEV_RX_OFFLOAD_VLAN_STRIP

* DEV_RX_OFFLOAD_VLAN_EXTEND

* DEV_RX_OFFLOAD_CHECKSUM

* DEV_RX_OFFLOAD_HEADER_SPLIT

``fdir_conf->mode`` will also be checked.
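
As a sketch only (the exact set of offloads that disables vPMD depends on the DPDK release), a port configuration that leaves the vector RX path eligible clears the RX offloads and disables the flow director:

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative configuration: no RX offloads and flow director disabled,
     * so the offload and fdir_conf checks described above pass. */
    static int
    configure_port_for_vpmd(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));
        port_conf.rxmode.offloads = 0;
        port_conf.fdir_conf.mode = RTE_FDIR_MODE_NONE;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    }
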
As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater than 32 per burst.
The receive handler returns zero if it is called with an expected packet number (nb_pkts) smaller than 32.
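
A minimal receive loop sketch that respects this constraint (assuming the port and queue have already been set up) requests at least 32 packets per call:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define VPMD_BURST_SIZE 32   /* must be >= 32 for the vector RX path */

    static void
    rx_loop(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[VPMD_BURST_SIZE];
        uint16_t i, nb_rx;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, VPMD_BURST_SIZE);
            for (i = 0; i < nb_rx; i++)
                rte_pktmbuf_free(pkts[i]);   /* placeholder for real processing */
        }
    }
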

For TX, the only prerequisite is related to tx_rs_thresh.
The tx_rs_thresh value must be greater than or equal to RTE_PMD_IXGBE_TX_MAX_BURST,
but less than or equal to RTE_IXGBE_TX_MAX_FREE_BUF_SZ.
Consequently, by default the tx_rs_thresh value is in the range 32 to 64.

Features not Supported by TX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TX vPMD only works when ``offloads`` is set to 0.

This means that it does not support any TX offload.
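
Putting the TX constraints together, a hedged sketch of a TX queue setup that keeps the vector TX path eligible (no TX offloads and tx_rs_thresh within the default 32 to 64 range) could look as follows; the descriptor count of 512 is only an example:

.. code-block:: c

    #include <rte_ethdev.h>

    static int
    setup_vpmd_tx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf tx_conf;

        rte_eth_dev_info_get(port_id, &dev_info);
        tx_conf = dev_info.default_txconf;
        tx_conf.offloads = 0;       /* TX vPMD requires offloads == 0 */
        tx_conf.tx_rs_thresh = 32;  /* within the 32..64 range noted above */

        return rte_eth_tx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &tx_conf);
    }
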

Application Programming Interface
---------------------------------

In DPDK release v16.11 an API for ixgbe-specific functions has been added to the ixgbe PMD.
The declarations for the API functions are in the header ``rte_pmd_ixgbe.h``.
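
As a hedged illustration (the set of available functions and the MAC address type vary between DPDK releases), an application links against the ixgbe PMD, includes this header and calls one of the ixgbe-specific functions, for example to set a VF's MAC address from the PF:

.. code-block:: c

    #include <rte_ether.h>
    #include <rte_pmd_ixgbe.h>

    /* Illustrative only: overwrite the MAC address of VF 0 from the PF port.
     * Older releases use struct ether_addr instead of struct rte_ether_addr. */
    static int
    set_vf0_mac(uint16_t pf_port_id)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
        };

        return rte_pmd_ixgbe_set_vf_mac_addr(pf_port_id, 0, &mac);
    }
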

Sample Application Notes
------------------------

When running l3fwd with vPMD, there is one thing to note.
In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.

As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.

Limitations or Known issues
---------------------------

Malicious Driver Detection not Supported
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel x550 series NICs support a feature called MDD (Malicious
Driver Detection) which checks the behavior of the VF driver.
If this feature is enabled, the VF must use the advanced context descriptor
correctly and set the CC (Check Context) bit.
The DPDK PF does not support MDD, but the kernel PF does. A problem can therefore
arise when a kernel PF is combined with a DPDK VF: if the user enables MDD in the
kernel PF, the DPDK VF will not work, because the kernel PF considers the VF
malicious. In reality the VF is not malicious; it simply does not behave as MDD
requires. Supporting MDD would have a significant performance impact: DPDK would
have to decide whether the advanced context descriptor should be set and set it
accordingly, and it would have to obtain the header length from the upper layer,
because parsing the packet itself is not acceptable. Supporting MDD is therefore
too expensive. When using a kernel PF with a DPDK VF on x550, please make sure
to use a kernel PF driver that disables MDD or can disable MDD.

Some kernel drivers already disable MDD by default, while for others MDD can be
disabled with the command ``insmod ixgbe.ko MDD=0,0``. Each "0" in the
command refers to a port. For example, if there are 6 ixgbe ports, the command
should be changed to ``insmod ixgbe.ko MDD=0,0,0,0,0,0``.

The statistics of ixgbe hardware must be polled regularly in order for them to
remain consistent. Running a DPDK application without polling the statistics will
cause registers on hardware to count to the maximum value, and "stick" at
that value.

In order to avoid the statistic registers ever reaching the maximum value,
read the statistics from the hardware using ``rte_eth_stats_get()`` or
``rte_eth_xstats_get()``.

The maximum time between statistics polls that ensures consistent results can
be calculated as follows:

.. code-block:: c

  max_read_interval = UINT_MAX / max_packets_per_second
  max_read_interval = 4294967295 / 14880952
  max_read_interval = 288.6218096127183 (seconds)
  max_read_interval = ~4 mins 48 sec.

In order to ensure valid results, it is recommended to poll every 4 minutes.
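
A minimal polling sketch (the 60 second period is an arbitrary choice comfortably below that limit) reads the statistics in a loop so that the counters never "stick":

.. code-block:: c

    #include <inttypes.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <rte_ethdev.h>

    static void
    stats_poll_loop(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        for (;;) {
            if (rte_eth_stats_get(port_id, &stats) == 0)
                printf("port %u: %" PRIu64 " packets received\n",
                       port_id, stats.ipackets);
            sleep(60);   /* well within the ~4 min 48 sec limit */
        }
    }
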

Although the user can set the MTU separately on PF and VF ports, the ixgbe NIC
only supports one global MTU per physical port.
So when the user sets different MTUs on PF and VF ports in one physical port,
the real MTU for all these PF and VF ports is the largest value set.
This behavior is based on the kernel driver behavior.
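
For example (a sketch with hypothetical port IDs for a PF and a VF sharing one physical port), setting two different MTUs results in both ports effectively using the larger value:

.. code-block:: c

    #include <rte_ethdev.h>

    static void
    set_mtus(uint16_t pf_port_id, uint16_t vf_port_id)
    {
        rte_eth_dev_set_mtu(pf_port_id, 1500);
        rte_eth_dev_set_mtu(vf_port_id, 9000);
        /* One global MTU applies per physical port, so both the PF and
         * the VF effectively operate with the larger value, 9000. */
    }
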

VF MAC address setting
~~~~~~~~~~~~~~~~~~~~~~

On ixgbe, the concept of "pool" can be used for different things depending on
the mode. In VMDq mode, "pool" means a VMDq pool. In IOV mode, "pool" means a
VF.

There is no RTE API to add a VF's MAC address from the PF. On ixgbe, the
``rte_eth_dev_mac_addr_add()`` function can be used to add a VF's MAC address,
as a workaround.
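
A sketch of this workaround follows; the pool index that corresponds to a given VF depends on the mode and configuration, so the value passed here is only an example:

.. code-block:: c

    #include <rte_ether.h>
    #include <rte_ethdev.h>

    /* Illustrative workaround: add a MAC address for a VF from the PF by
     * passing the VF's pool index to rte_eth_dev_mac_addr_add(). */
    static int
    add_vf_mac(uint16_t pf_port_id, uint32_t vf_pool)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 }
        };

        return rte_eth_dev_mac_addr_add(pf_port_id, &mac, vf_pool);
    }
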

Inline crypto processing support
--------------------------------

Inline IPsec processing is supported for ``RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
mode for ESP packets only:

- ESP authentication only: AES-128-GMAC (128-bit key)
- ESP encryption and authentication: AES-128-GCM (128-bit key)

The IPsec Security Gateway Sample Application supports inline IPsec processing for
the ixgbe PMD.

For more details see the IPsec Security Gateway Sample Application and Security
library documentation.

Virtual Function Port Representors
----------------------------------

The IXGBE PF PMD supports the creation of VF port representors for the control
and monitoring of IXGBE virtual function devices. Each port representor
corresponds to a single virtual function of that device. Using the ``devargs``
option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required::

    -w DBDF,representor=[0,1,4]

Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.

Supported Chipsets and NICs
---------------------------

- Intel 82599EB 10 Gigabit Ethernet Controller
- Intel 82598EB 10 Gigabit Ethernet Controller
- Intel 82599ES 10 Gigabit Ethernet Controller
- Intel 82599EN 10 Gigabit Ethernet Controller
- Intel Ethernet Controller X540-AT2
- Intel Ethernet Controller X550-BT2
- Intel Ethernet Controller X550-AT2
- Intel Ethernet Controller X550-AT
- Intel Ethernet Converged Network Adapter X520-SR1
- Intel Ethernet Converged Network Adapter X520-SR2
- Intel Ethernet Converged Network Adapter X520-LR1
- Intel Ethernet Converged Network Adapter X520-DA1
- Intel Ethernet Converged Network Adapter X520-DA2
- Intel Ethernet Converged Network Adapter X520-DA4
- Intel Ethernet Converged Network Adapter X520-QDA1
- Intel Ethernet Converged Network Adapter X520-T2
- Intel 10 Gigabit AF DA Dual Port Server Adapter
- Intel 10 Gigabit AT Server Adapter
- Intel 10 Gigabit AT2 Server Adapter
- Intel 10 Gigabit CX4 Dual Port Server Adapter
- Intel 10 Gigabit XF LR Server Adapter
- Intel 10 Gigabit XF SR Dual Port Server Adapter
- Intel 10 Gigabit XF SR Server Adapter
- Intel Ethernet Converged Network Adapter X540-T1
- Intel Ethernet Converged Network Adapter X540-T2
- Intel Ethernet Converged Network Adapter X550-T1
- Intel Ethernet Converged Network Adapter X550-T2