Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in
  the documentation and/or other materials provided with the
  distribution.
* Neither the name of Intel Corporation nor the names of its
  contributors may be used to endorse or promote products derived
  from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Vector PMD uses Intel® SIMD instructions to optimize packet I/O.
It improves the load/store bandwidth efficiency of the L1 data cache by using wider SSE/AVX registers (1).
The wider registers provide space to hold multiple packet buffers, reducing the instruction count when processing packets in bulk.
There is no change to the PMD API. The RX/TX handlers are the only two entry points for vPMD packet I/O.
They are transparently registered for RX/TX execution at runtime if all condition checks pass.

1. To date, only an SSE version of the IXGBE vPMD is available.
   To ensure that vPMD is included in the binary, make sure that the option CONFIG_RTE_IXGBE_INC_VECTOR=y is set in the configuration file.
Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.
RX Constraints
~~~~~~~~~~~~~~

Prerequisites and Pre-conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following prerequisites apply:
* To enable vPMD to work for RX, bulk allocation for Rx must be allowed.

Ensure that the following pre-conditions are satisfied:

* rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST

* rxq->rx_free_thresh < rxq->nb_rx_desc

* (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0

* rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)
These conditions are checked in the code.
Scattered packets are not supported in this mode.
If an incoming packet is larger than the maximum acceptable length of one mbuf data buffer (2 KB by default),
RX vPMD is disabled.

By default, IXGBE_MAX_RING_DESC is set to 4096 and RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.
Feature not Supported by RX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some features are not supported when trying to increase the throughput in vPMD. They are:

* RX checksum off load
Other features are supported using optional MACRO configuration. They include:

* Enabled by RX_OLFLAGS (RTE_IXGBE_RX_OLFLAGS_ENABLE=y)

To guarantee the constraint, the configuration flags in dev_conf.rxmode will be checked.

fdir_conf->mode will also be checked.
RX Burst Size
^^^^^^^^^^^^^

As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater than 32 per burst.
The receive handler returns zero if nb_pkts < 32 is used as the expected packet number.
TX Constraint
~~~~~~~~~~~~~

Prerequisite
^^^^^^^^^^^^

The only prerequisite is related to tx_rs_thresh.
The tx_rs_thresh value must be greater than or equal to RTE_PMD_IXGBE_TX_MAX_BURST,
but less than or equal to RTE_IXGBE_TX_MAX_FREE_BUF_SZ.
Consequently, by default the tx_rs_thresh value is in the range 32 to 64.
Feature not Supported by TX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TX vPMD only works when txq_flags is set to IXGBE_SIMPLE_FLAGS.

This means that it does not support TX multi-segment, VLAN offload or TX csum offload.
The following MACROs are used for these three features:
* ETH_TXQ_FLAGS_NOMULTSEGS

* ETH_TXQ_FLAGS_NOVLANOFFL

* ETH_TXQ_FLAGS_NOXSUMSCTP

* ETH_TXQ_FLAGS_NOXSUMUDP

* ETH_TXQ_FLAGS_NOXSUMTCP
Sample Application Notes
~~~~~~~~~~~~~~~~~~~~~~~~

testpmd
^^^^^^^

By default, using CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y:

.. code-block:: console

   ./x86_64-native-linuxapp-gcc/app/testpmd -c 300 -n 4 -- -i --burst=32 --rxfreet=32 --mbcache=250 --txpt=32 --rxht=8 --rxwt=0 --txfreet=32 --txrst=32 --txqflags=0xf01
When CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=n, better performance can be achieved:

.. code-block:: console

   ./x86_64-native-linuxapp-gcc/app/testpmd -c 300 -n 4 -- -i --burst=32 --rxfreet=32 --mbcache=250 --txpt=32 --rxht=8 --rxwt=0 --txfreet=32 --txrst=32 --txqflags=0xf01 --disable-hw-vlan
l3fwd
^^^^^

When running l3fwd with vPMD, there is one thing to note.
In the configuration, ensure that port_conf.rxmode.hw_ip_checksum=0.
Otherwise, by default, RX vPMD is disabled.
load_balancer
^^^^^^^^^^^^^

As in the case of l3fwd, set port_conf.rxmode.hw_ip_checksum=0 to enable vPMD.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid the default burst size of 144.
Malicious Driver Detection not Supported
----------------------------------------

The Intel x550 series NICs support a feature called MDD (Malicious
Driver Detection) which checks the behavior of the VF driver.
If this feature is enabled, the VF must use the advanced context descriptor
correctly and set the CC (Check Context) bit.
The DPDK PF does not support MDD, but the kernel PF does, which can cause
problems in the kernel PF + DPDK VF scenario. If the user enables MDD in the
kernel PF, the DPDK VF will not work, because the kernel PF considers the VF
malicious. In fact it is not; it simply does not behave as MDD requires.
Supporting MDD would have a significant performance impact: DPDK would need to
check whether the advanced context descriptor should be set and set it, and it
would have to obtain the header length from the upper layer, because parsing
the packet itself is not acceptable. Supporting MDD is therefore too expensive.
When using kernel PF + DPDK VF on x550, make sure to use a kernel driver that
disables MDD or allows MDD to be disabled (some kernel drivers accept the
command ``insmod ixgbe.ko MDD=0,0`` to disable MDD; others disable MDD by
default).
Statistics
----------

The statistics of ixgbe hardware must be polled regularly in order for them to
remain consistent. Running a DPDK application without polling the statistics will
cause registers on the hardware to count up to their maximum value, and "stick" at
that value.

In order to avoid the statistics registers ever reaching their maximum value,
read the statistics from the hardware using ``rte_eth_stats_get()`` or
``rte_eth_xstats_get()``.

The maximum time between statistics polls that ensures consistent results can
be calculated as follows:
.. code-block:: c

  max_read_interval = UINT_MAX / max_packets_per_second
  max_read_interval = 4294967295 / 14880952
  max_read_interval = 288.6218096127183 (seconds)
  max_read_interval = ~4 mins 48 sec.

In order to ensure valid results, it is recommended to poll every 4 minutes.