dpdk.git
2 years ago  eal: fix crash when allocating memory on a control thread  [tmp_20211026]
Ilyes Ben Hamouda [Tue, 17 Aug 2021 12:49:09 +0000 (14:49 +0200)]
eal: fix crash when allocating memory on a control thread

When using rte_malloc() with SOCKET_ID_ANY, NUMA socket 0 is
chosen. This may lead to a control thread crash in alloc_seg()
when NUMA node 0 is not used.

Let's choose the first NUMA socket where memory is available.

Note:
malloc_get_numa_socket() is no longer small (it contains a loop).
Hence, the inline keyword was removed.
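
A minimal illustration of the selection logic (not the patch itself;
rte_malloc_get_socket_stats() stands in here for the internal heap
lookup the real code uses):

    #include <rte_lcore.h>
    #include <rte_malloc.h>

    /* Resolve SOCKET_ID_ANY to the first NUMA socket that actually
     * has heap memory, instead of hard-coding socket 0. */
    static int
    first_socket_with_memory(void)
    {
        struct rte_malloc_socket_stats stats;
        unsigned int idx;

        for (idx = 0; idx < rte_socket_count(); idx++) {
            int socket = rte_socket_id_by_idx(idx);

            if (rte_malloc_get_socket_stats(socket, &stats) == 0 &&
                stats.heap_totalsz_bytes > 0)
                return socket;
        }
        return 0; /* fall back to socket 0 if nothing better is found */
    }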

PR=74909
Fixes: b94580d6887e ("malloc: avoid unknown socket id")
Signed-off-by: Ilyes Ben Hamouda <ilyes.ben_hamouda@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2 years ago  bus/pci: fix selection of default device NUMA node
Houssem Bouhlel [Mon, 25 Oct 2021 14:24:14 +0000 (16:24 +0200)]
bus/pci: fix selection of default device NUMA node

There can be a device binding issue when no hugepages
are allocated for socket 0.
To avoid this, set the device NUMA node value based on
the first lcore instead of 0.

Signed-off-by: Houssem Bouhlel <houssem.bouhlel@6wind.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2 years ago  pipeline: support action annotations
Yogesh Jangra [Mon, 18 Oct 2021 01:22:53 +0000 (21:22 -0400)]
pipeline: support action annotations

Enable restricting the scope of an action to regular table entries or
to the table default entry in order to support the P4 language
tableonly or defaultonly annotations.

Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2 years ago  port: configure loop count for source port
Yogesh Jangra [Fri, 17 Sep 2021 10:32:05 +0000 (06:32 -0400)]
port: configure loop count for source port

Add support for a configurable number of loops through the input PCAP
file for the source port. An additional parameter is added to the
source port CLI command.

Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2 years ago  pipeline: fix instruction label check
Yogesh Jangra [Thu, 21 Oct 2021 03:23:32 +0000 (23:23 -0400)]
pipeline: fix instruction label check

The instruction_data array was incorrectly indexed, which resulted in
out-of-bounds accesses and occasional segfaults.

Fixes: a1711f ("pipeline: add SWX Rx and extract instructions")
Cc: stable@dpdk.org
Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2 years ago  test/event: fix timer adapter creation test
Shijith Thotton [Mon, 30 Aug 2021 20:12:59 +0000 (01:42 +0530)]
test/event: fix timer adapter creation test

Remove the freeing of an unallocated mempool in the event timer
adapter creation unit test.

Fixes: d1f3385d0076 ("test: add event timer adapter auto-test")
Cc: stable@dpdk.org
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
2 years ago  test/devargs: fix memory leak
Xueming Li [Sat, 23 Oct 2021 12:17:55 +0000 (20:17 +0800)]
test/devargs: fix memory leak

In the layer argument test function, kvargs are parsed and checked but
never freed. This patch calls rte_kvargs_free() to avoid the memory leak.

Coverity issue: 373631
Fixes: a4975cd20dca ("test: add devargs test cases")

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2 years ago  net: fix build with pedantic for L2TPv2 definitions
David Marchand [Sun, 24 Oct 2021 10:04:11 +0000 (12:04 +0200)]
net: fix build with pedantic for L2TPv2 definitions

Build is broken on RHEL7 following the introduction of this new protocol.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP protocol")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
2 years ago  mbuf: add namespace to offload flags
Olivier Matz [Fri, 15 Oct 2021 19:24:08 +0000 (21:24 +0200)]
mbuf: add namespace to offload flags

Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
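
For example, an application requesting checksum offload now writes the
following (a sketch using the renamed flags; the old PKT_TX_* names
still compile, with a warning):

    #include <rte_mbuf.h>

    /* Request IPv4 + TCP checksum offload with the RTE_-prefixed
     * names; PKT_TX_IP_CKSUM etc. remain as deprecated aliases. */
    static void
    request_cksum_offload(struct rte_mbuf *m)
    {
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                       RTE_MBUF_F_TX_TCP_CKSUM;
    }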

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2 years ago  devtools: add cocci script to rename mbuf offload flags
Olivier Matz [Fri, 15 Oct 2021 19:24:07 +0000 (21:24 +0200)]
devtools: add cocci script to rename mbuf offload flags

The mbuf offload flags do not match the DPDK namespace (they are not
prefixed with RTE_). This Coccinelle script is used in the next commit
to do the replacement in the code.

A draft script was initially submitted [1] with commit d7595795b760 ("doc:
announce renaming of mbuf offload flags"), but was dropped by mistake
when the commit was applied.

1: http://inbox.dpdk.org/dev/20210730155700.32574-1-olivier.matz@6wind.com

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2 years ago  mbuf: mark old VLAN offload flags as deprecated
Olivier Matz [Fri, 15 Oct 2021 19:24:06 +0000 (21:24 +0200)]
mbuf: mark old VLAN offload flags as deprecated

The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT have been
marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
VLAN flags") (2017). But they were not using the RTE_DEPRECATED
macro, because it did not exist at the time. Add it, and replace
usages of these flags.
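
The mechanism, roughly (a sketch, bit position illustrative):
RTE_DEPRECATED() from rte_common.h expands to a compile-time warning
pragma, so every use of the old name warns but still resolves to the
new flag:

    #include <rte_common.h>

    #define PKT_TX_VLAN     (1ULL << 57)
    /* Using the old name emits "PKT_TX_VLAN_PKT is deprecated". */
    #define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) PKT_TX_VLAN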

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  mbuf: remove duplicate definition of cksum offload flags
Olivier Matz [Fri, 15 Oct 2021 19:24:05 +0000 (21:24 +0200)]
mbuf: remove duplicate definition of cksum offload flags

The flags PKT_RX_L4_CKSUM_BAD and PKT_RX_IP_CKSUM_BAD are defined
twice with the same value. Remove one of the occurrences, the one
marked as "deprecated".

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years ago  compress/mlx5: support partial transformation
Raja Zidane [Wed, 15 Sep 2021 00:12:23 +0000 (00:12 +0000)]
compress/mlx5: support partial transformation

Currently, compress, decompress and DMA are allowed
only when all three capabilities are on.
A case where the user wants decompress offload, with the
decompress capability on but compress or DMA off,
is not allowed.
Split the compress/decompress/DMA support check to allow
partial transformations.

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  crypto/cnxk: allow different cores in pending queue
Anoob Joseph [Mon, 18 Oct 2021 07:51:40 +0000 (13:21 +0530)]
crypto/cnxk: allow different cores in pending queue

Rework the pending queue to allow producer and consumer cores to be
different.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
2 years ago  common/cnxk: align CPT queue depth to power of 2
Anoob Joseph [Mon, 18 Oct 2021 07:51:39 +0000 (13:21 +0530)]
common/cnxk: align CPT queue depth to power of 2

Use a CPT LF queue depth that is a power of 2 to allow masked
(instead of modulo) index checks for the pending queue.
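
Why a power-of-2 depth helps, as a standalone sketch: wrapping a ring
index becomes a single AND with (depth - 1) instead of a modulo:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t depth = 1u << 10; /* must be a power of 2 */
        const uint32_t mask = depth - 1;
        uint32_t idx = 0;
        uint32_t i;

        for (i = 0; i < 3 * depth + 5; i++)
            idx = (idx + 1) & mask; /* same as (idx + 1) % depth */

        assert(idx == (3 * depth + 5) % depth);
        return 0;
    }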

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  ipsec: fix telemetry text
Radu Nicolau [Tue, 19 Oct 2021 15:15:28 +0000 (16:15 +0100)]
ipsec: fix telemetry text

Set the correct tunnel type telemetry text: the tunnel type
was wrongly reported as IPv4-UDP for all types.

Fixes: bf5b65a8e781 ("ipsec: support SA telemetry")

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  cryptodev: move device-specific structures
Akhil Goyal [Wed, 20 Oct 2021 11:27:54 +0000 (16:57 +0530)]
cryptodev: move device-specific structures

The device-specific structures rte_cryptodev
and rte_cryptodev_data are moved to cryptodev_pmd.h
to hide them from applications.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  cryptodev: use new flat array in fast path API
Akhil Goyal [Wed, 20 Oct 2021 11:27:53 +0000 (16:57 +0530)]
cryptodev: use new flat array in fast path API

Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are
required) and PMD developers (no changes in PMDs are required).

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  drivers/crypto: invoke probing finish function
Akhil Goyal [Wed, 20 Oct 2021 11:27:52 +0000 (16:57 +0530)]
drivers/crypto: invoke probing finish function

Invoke the rte_cryptodev_pmd_probing_finish() function at the end of
probing; this function sets the function pointers in the fp_ops flat
array in case of a secondary process.
For the primary process, fp_ops is updated in rte_cryptodev_start().

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  cryptodev: add device probing finish function
Akhil Goyal [Wed, 20 Oct 2021 11:27:51 +0000 (16:57 +0530)]
cryptodev: add device probing finish function

Add a rte_cryptodev_pmd_probing_finish() API which needs
to be called by the PMD after the device is completely
initialized. This will set the fast-path function pointers
in the flat array for the secondary process. For the primary process,
these are set in rte_cryptodev_start().

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2 years ago  crypto/scheduler: use proper API for device start/stop
Akhil Goyal [Wed, 20 Oct 2021 11:27:50 +0000 (16:57 +0530)]
crypto/scheduler: use proper API for device start/stop

The worker PMDs were using direct device start/stop
functions rather than rte_cryptodev_start(),
so rte_crypto_fp_ops never got set. This patch calls
the rte_cryptodev_start() and stop APIs, which start and
stop devices properly so that fp_ops gets set.

Reported-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  cryptodev: move inline APIs into separate structure
Akhil Goyal [Wed, 20 Oct 2021 11:27:49 +0000 (16:57 +0530)]
cryptodev: move inline APIs into separate structure

Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  cryptodev: allocate max space for internal queue array
Akhil Goyal [Wed, 20 Oct 2021 11:27:48 +0000 (16:57 +0530)]
cryptodev: allocate max space for internal queue array

At the queue pair configuration stage, allocate memory for the maximum
number of queue pair pointers that a device can support.

This allows the fast-path APIs (enqueue_burst/dequeue_burst) to
refer to the internal QP data pointer without checking for the
currently configured QPs.
This is required to hide the rte_cryptodev and rte_cryptodev_data
structures from the user.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  cryptodev: separate out internal structures
Akhil Goyal [Wed, 20 Oct 2021 11:27:47 +0000 (16:57 +0530)]
cryptodev: separate out internal structures

A new header file, rte_cryptodev_core.h, is added, and all
internal data structures which need not be exposed directly to
applications are moved to this file. These structures are mostly
used by drivers, but they need to be in a public header file
as they are accessed by datapath inline functions for
performance reasons.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2 years ago  test/crypto: enable chacha_poly PMD
Kai Ji [Fri, 15 Oct 2021 14:39:57 +0000 (14:39 +0000)]
test/crypto: enable chacha_poly PMD

An autotest is added for the new chacha20_poly1305 PMD.
A new test case is also added for the SGL path.

Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: add chacha_poly PMD
Kai Ji [Fri, 15 Oct 2021 14:39:56 +0000 (14:39 +0000)]
crypto/ipsec_mb: add chacha_poly PMD

Add a new chacha20_poly1305 PMD to the ipsec_mb framework.

Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: move zuc PMD
Piotr Bronowski [Fri, 15 Oct 2021 14:39:55 +0000 (14:39 +0000)]
crypto/ipsec_mb: move zuc PMD

This patch removes the crypto/zuc folder and gathers all zuc PMD
implementation specific details into two files,
pmd_zuc.c and pmd_zuc_priv.h in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: support snow3g digest appended ops
Piotr Bronowski [Fri, 15 Oct 2021 14:39:54 +0000 (14:39 +0000)]
crypto/ipsec_mb: support snow3g digest appended ops

This patch enables out-of-place auth-cipher operations where
the digest should be encrypted along with the rest of the raw data.
It also adds support for a partially encrypted digest when using
auth-cipher operations.

Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: move snow3g PMD
Piotr Bronowski [Fri, 15 Oct 2021 14:39:53 +0000 (14:39 +0000)]
crypto/ipsec_mb: move snow3g PMD

This patch removes the crypto/snow3g folder and gathers all snow3g PMD
implementation specific details into a single file,
pmd_snow3g.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: move kasumi PMD
Piotr Bronowski [Fri, 15 Oct 2021 14:39:52 +0000 (14:39 +0000)]
crypto/ipsec_mb: move kasumi PMD

This patch removes the crypto/kasumi folder and gathers all kasumi PMD
implementation specific details into a single file,
pmd_kasumi.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: move aesni_gcm PMD
Piotr Bronowski [Fri, 15 Oct 2021 14:39:51 +0000 (14:39 +0000)]
crypto/ipsec_mb: move aesni_gcm PMD

This patch removes the crypto/aesni_gcm folder and gathers all
aesni-gcm PMD implementation specific details into a single file,
pmd_aesni_gcm.c in the crypto/ipsec_mb folder.
A redundant check for iv length is removed.

GCM ops are stored in the queue pair for multi-process support; they
are updated during queue pair setup for both primary and secondary
processes.

GCM ops are also set per lcore for the CPU crypto mode.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  test/crypto: add ZUC-256 vectors
Pablo de Lara [Fri, 15 Oct 2021 14:39:50 +0000 (14:39 +0000)]
test/crypto: add ZUC-256 vectors

Add extra ZUC-EIA3-256 and ZUC-EEA3-256 test vectors.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  test/crypto: check auth parameters
Pablo de Lara [Fri, 15 Oct 2021 14:39:49 +0000 (14:39 +0000)]
test/crypto: check auth parameters

Check for auth parameters in the transform to verify if a test case is
supported by the crypto device under test.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  test/crypto: check cipher parameters
Pablo de Lara [Fri, 15 Oct 2021 14:39:48 +0000 (14:39 +0000)]
test/crypto: check cipher parameters

Check for cipher parameters in the transform to verify if a test case
is supported by the crypto device under test.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: support ZUC-256 for aesni_mb
Pablo de Lara [Fri, 15 Oct 2021 14:39:47 +0000 (14:39 +0000)]
crypto/ipsec_mb: support ZUC-256 for aesni_mb

Add support for ZUC-EEA3-256 and ZUC-EIA3-256.
Only 4-byte tags are supported for now.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: move aesni_mb PMD
Piotr Bronowski [Fri, 15 Oct 2021 14:39:46 +0000 (14:39 +0000)]
crypto/ipsec_mb: move aesni_mb PMD

This patch removes the crypto/aesni_mb folder and gathers all
aesni-mb PMD implementation specific details into a single file,
pmd_aesni_mb.c in crypto/ipsec_mb.

Now that intel-ipsec-mb v1.0 is the minimum supported version, old
macros can be replaced with the newer macros supported by this version.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: support multi-process
Ciara Power [Fri, 15 Oct 2021 14:39:45 +0000 (14:39 +0000)]
crypto/ipsec_mb: support multi-process

The ipsec_mb SW PMD now has multi-process support.
When v1.1 of the Intel IPsec MB library is used, the queue-pair
IMB_MGR is stored in a memzone instead of being allocated externally
by the library. If v1.0 is used, multi-process is not supported, and
allocation is done as before.
The secondary process needs to reconfigure the queue pair to allow the
IMB_MGR function pointers to be updated.

Intel IPsec MB library version 1.1 is required for this support.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/ipsec_mb: introduce IPsec_mb framework
Fan Zhang [Fri, 15 Oct 2021 14:39:44 +0000 (14:39 +0000)]
crypto/ipsec_mb: introduce IPsec_mb framework

This patch introduces the new framework to share common code between
the SW crypto PMDs that depend on the intel-ipsec-mb library.
This change helps reduce future effort on code maintenance and
feature updates.

The PMDs that will be added to this framework in subsequent patches are:
  - AESNI MB
  - AESNI GCM
  - CHACHA20_POLY1305
  - KASUMI
  - SNOW3G
  - ZUC

The use of these PMDs will not change: they will still be supported on
x86, and will use the same EAL arguments as before.

The minimum required version of the intel-ipsec-mb library is now v1.0.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  ethdev: avoid usage of ULL for 64-bit unsigned constants
Andrew Rybchenko [Fri, 22 Oct 2021 11:20:16 +0000 (14:20 +0300)]
ethdev: avoid usage of ULL for 64-bit unsigned constants

Use the UINT64_C() macro instead.
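
The pattern being applied, on an illustrative flag name:

    #include <stdint.h>

    /* Before: compiler-suffix style. */
    #define EXAMPLE_FLAG_OLD (1ULL << 33)
    /* After: portable <stdint.h> macro, same value. */
    #define EXAMPLE_FLAG_NEW (UINT64_C(1) << 33)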

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: replace single bit masks with macros
Andrew Rybchenko [Fri, 22 Oct 2021 07:30:02 +0000 (10:30 +0300)]
ethdev: replace single bit masks with macros

The macros RTE_BIT32 and RTE_BIT64 are used to replace single bit masks.

The VLAN offload flags are not switched since their type is not a
fixed-size one.
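
For reference, the macros come from rte_bitops.h; the kind of
replacement made is shown below on an illustrative mask name:

    #include <rte_bitops.h>

    /* rte_bitops.h defines:
     *   #define RTE_BIT32(nr) (UINT32_C(1) << (nr))
     *   #define RTE_BIT64(nr) (UINT64_C(1) << (nr))
     * so a literal single-bit mask such as (1ULL << 7) becomes: */
    #define EXAMPLE_MASK RTE_BIT64(7)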

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: add namespace
Ferruh Yigit [Fri, 22 Oct 2021 11:03:12 +0000 (12:03 +0100)]
ethdev: add namespace

Add the 'RTE_ETH' namespace to all enums & macros in a backward
compatible way. The backward compatibility macros can be removed in
the next LTS release.
Some struct names are also updated to have the 'rte_eth' prefix.

All internal components are switched to the new names.

Syntax is fixed on the lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2 years ago  test/bonding: fix after hiding ethdev internal structures
Konstantin Ananyev [Fri, 22 Oct 2021 13:26:42 +0000 (14:26 +0100)]
test/bonding: fix after hiding ethdev internal structures

The link bonding auto-test internally creates an emulated ethdev.
Some tests change the Rx/Tx functions of this emulated device on the
fly: by directly modifying rte_eth_dev fields and without doing
stop/start for these devices.
As ethdev now uses rte_eth_fp_ops[] for fast-path functions, these
direct changes don't take the expected effect.
Fix the problem by guarding fast-path function changes with
rte_eth_dev_stop()/rte_eth_dev_start().

Fixes: 7a0935239b9e ("ethdev: make fast-path functions to use new flat array")

Reported-by: Lewei Yang <leweix.yang@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  drivers/net: fix removing jumbo offload flag
Ferruh Yigit [Fri, 22 Oct 2021 12:57:15 +0000 (13:57 +0100)]
drivers/net: fix removing jumbo offload flag

After the DEV_RX_OFFLOAD_JUMBO_FRAME flag was removed, drivers make
jumbo frame decisions based on MTU value checks, but some of the checks
were wrong by mistake, causing device initialization to fail. Fix them.

Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
2 years ago  doc: remove jumbo offload feature
Ferruh Yigit [Fri, 22 Oct 2021 12:57:14 +0000 (13:57 +0100)]
doc: remove jumbo offload feature

Jumbo offload is no longer announced as a capability, and the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag is removed.

This patch also removes the 'Jumbo frame' feature from the documentation.

Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years ago  net/af_xdp: fix max Rx packet length
Ciara Loftus [Fri, 22 Oct 2021 14:07:17 +0000 (14:07 +0000)]
net/af_xdp: fix max Rx packet length

Commit 1bb4a528c41f ("ethdev: fix max Rx packet length") clarified the
expected usage of the max_rx_pktlen and max_mtu values and implemented
some extra checks on these values to ensure they are sane. After this,
the AF_XDP PMD fails to initialise. The value for max_rx_pktlen, which
represents the max size of the Ethernet frame, was set to ETH_FRAME_LEN
(1514), and the max_mtu, which represents the size of the payload, was
set to the max size of the Ethernet frame. This did not make sense, as
naturally the maximum frame size should be greater than the payload
size.

Fix this by setting the max_rx_pktlen equal to the max size of the
Ethernet frame as expected, and the max MTU equal to the max_rx_pktlen
less the overhead which is set to the size of an Ethernet header plus
CRC.

Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: forbid MTU set before device configure
Ivan Ilchenko [Fri, 22 Oct 2021 10:18:28 +0000 (13:18 +0300)]
ethdev: forbid MTU set before device configure

rte_eth_dev_configure() always sets the MTU to either dev_conf.rxmode.mtu
or RTE_ETHER_MTU if the application doesn't provide a value.
So, there is no point in allowing rte_eth_dev_set_mtu() before, since the
set value will be overwritten on configure anyway.

Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: remove unused L2 tunnel mask defines
Andrew Rybchenko [Fri, 22 Oct 2021 07:22:14 +0000 (10:22 +0300)]
ethdev: remove unused L2 tunnel mask defines

Fixes: cf47acc0f9ba ("ethdev: remove L2 tunnel offload control API")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  app/testpmd: fix packet burst spreading stats
Eli Britstein [Thu, 21 Oct 2021 13:20:02 +0000 (16:20 +0300)]
app/testpmd: fix packet burst spreading stats

The Rx/Tx functions (rte_eth_rx_burst/rte_eth_tx_burst) take an
'nb_pkts' argument, which specifies the maximum number of packets to
receive/transmit. The return value can be 0..nb_pkts, meaning nb_pkts+1
possible outcomes.
Testpmd can provide statistics on the burst sizes ('set
record-burst-stats on') by incrementing the array cell at index
<burst-size>. This array is mistakenly of size [MAX_PKT_BURST], so
receiving a maximum-size burst causes an out-of-bounds write.
Enlarge the spread stats array by one cell to fix it.
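
The off-by-one in reduced form (MAX_PKT_BURST value illustrative):

    #include <stdint.h>

    #define MAX_PKT_BURST 512

    /* A burst of nb_pkts can return 0..nb_pkts packets, i.e.
     * nb_pkts + 1 distinct values, so one extra cell is needed. */
    static uint64_t burst_spread[MAX_PKT_BURST + 1]; /* was [MAX_PKT_BURST] */

    static void
    record_burst(uint16_t nb_rx)
    {
        burst_spread[nb_rx]++; /* nb_rx == MAX_PKT_BURST is now in bounds */
    }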

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  net/hns3: add runtime config for mailbox limit time
Chengchang Tang [Fri, 22 Oct 2021 01:38:40 +0000 (09:38 +0800)]
net/hns3: add runtime config for mailbox limit time

Currently, the maximum waiting time for an MBX response is 500 ms,
but in some scenarios this is not enough, since it depends on the
response of the kernel-mode driver, whose response time is related
to the scheduling of the system. In one such scenario, most of the
cores are isolated, and only a few cores are used for system
scheduling. When a large number of services are started, the
scheduling of the system becomes very busy, the reply to the
mbx message times out, and this causes our PMD initialization
to fail.

This patch adds a runtime config option to set the maximum wait time.
For the above scenario, users can adjust the waiting time to a suitable
value by themselves.

Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2 years ago  app/testpmd: add forwarding engine for shared Rx queue
Xueming Li [Thu, 21 Oct 2021 10:41:42 +0000 (18:41 +0800)]
app/testpmd: add forwarding engine for shared Rx queue

To support shared Rx queues, this patch introduces a dedicated
forwarding engine. The engine groups received packets by mbuf->port
into sub-groups, updates stream statistics and simply frees the packets.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
2 years ago  app/testpmd: force shared Rx queue polled on same core
Xueming Li [Thu, 21 Oct 2021 10:41:41 +0000 (18:41 +0800)]
app/testpmd: force shared Rx queue polled on same core

A shared Rx queue must be polled on the same core. This patch checks for
this and stops forwarding if a shared RxQ is scheduled on multiple
cores.

It is suggested to use the same number of Rx queues and polling cores.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2 years ago  app/testpmd: dump port info for shared Rx queue
Xueming Li [Thu, 21 Oct 2021 10:41:40 +0000 (18:41 +0800)]
app/testpmd: dump port info for shared Rx queue

In case of a shared Rx queue, the source port of an mbuf from the
polling result isn't the Rx port of the forwarding stream. To provide
the original port ID, this patch dumps mbuf->port for each packet in
verbose mode if shared Rx queue is enabled.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  app/testpmd: add parameter for shared Rx queue
Xueming Li [Thu, 21 Oct 2021 10:41:39 +0000 (18:41 +0800)]
app/testpmd: add parameter for shared Rx queue

Adds "--rxq-share=X" parameter to enable shared RxQ.

Rx queue is shared if device supports, otherwise fallback to standard
RxQ.

Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  app/testpmd: dump device capability and Rx domain info
Xueming Li [Thu, 21 Oct 2021 10:41:38 +0000 (18:41 +0800)]
app/testpmd: dump device capability and Rx domain info

Dump the device capability and Rx domain ID if the shared Rx queue is
supported by the device.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  ethdev: get device capability name as string
Xueming Li [Thu, 21 Oct 2021 10:41:37 +0000 (18:41 +0800)]
ethdev: get device capability name as string

This patch adds an API to return the name of a device capability.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2 years ago  ethdev: introduce shared Rx queue
Xueming Li [Thu, 21 Oct 2021 10:41:36 +0000 (18:41 +0800)]
ethdev: introduce shared Rx queue

In the current DPDK framework, each Rx queue is pre-loaded with mbufs
to store incoming packets. For some PMDs, when the number of
representors scales up in a switch domain, the memory consumption
becomes significant. Polling all ports also leads to high cache miss
rates, high latency and low throughput.

This patch introduces the shared Rx queue. Ports in the same Rx domain
and switch domain can share an Rx queue set by specifying a non-zero
share group in the Rx queue configuration.

A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with port B RxQ Y by using
the same shared Rx queue ID.

No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the port is identified by mbuf->port. The
PMD is responsible for resolving the shared Rx queue from the device
and queue data.

A shared Rx queue must be polled in the same thread or on the same
core; polling via the queue ID of any member port is essentially the
same.

Multiple share groups are supported. A PMD should support mixed
configuration, allowing multiple share groups and non-shared Rx queues
on one port.

Example grouping and polling model to reflect service priority:
 Group1, 2 shared Rx queues per port: PF, rep0, rep1
 Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
 Core0: poll PF queue0
 Core1: poll PF queue1
 Core2: poll rep2 queue0

The PMD advertises the shared Rx queue capability via
RTE_ETH_DEV_CAPA_RXQ_SHARE.

The PMD is responsible for shared Rx queue consistency checks, to avoid
member ports' configurations contradicting each other.
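
A minimal sketch of joining queue 0 of two ports into share group 1,
assuming the device reports RTE_ETH_DEV_CAPA_RXQ_SHARE; the
rte_eth_rxconf field is shown as share_group (the text above calls it
share_rxq, so the exact field name at this snapshot may differ):

    #include <rte_ethdev.h>

    static int
    setup_shared_rxq(uint16_t port_a, uint16_t port_b,
                     struct rte_mempool *mp)
    {
        struct rte_eth_rxconf rxconf = { .share_group = 1 }; /* non-zero */
        int ret;

        ret = rte_eth_rx_queue_setup(port_a, 0, 512, 0, &rxconf, mp);
        if (ret != 0)
            return ret;
        /* Same group and queue ID: port_b RxQ 0 shares port_a RxQ 0. */
        return rte_eth_rx_queue_setup(port_b, 0, 512, 0, &rxconf, mp);
    }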

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  ethdev: fix PCI device release in secondary process
Huisong Li [Thu, 21 Oct 2021 02:24:21 +0000 (10:24 +0800)]
ethdev: fix PCI device release in secondary process

In a secondary process, rte_eth_dev_close() doesn't clear eth_dev->data.
If rte_dev_remove() is called after rte_eth_dev_close(), the released
eth device can still be found by its name in shared memory in the
rte_eth_dev_pci_generic_remove() function. As a result, the eth device
will be released repeatedly. The state of the eth device is set to
RTE_ETH_DEV_UNUSED after rte_eth_dev_close(), so this state can be used
to avoid the problem.

Fixes: dcd5c8112bc3 ("ethdev: add PCI driver helpers")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  net/cnxk: support port ID flow action
Satheesh Paul [Thu, 21 Oct 2021 05:11:15 +0000 (10:41 +0530)]
net/cnxk: support port ID flow action

This patch adds support for the rte_flow action type port_id, to
enable directing packets from an input port PF to an output
port which is a VF of the input port PF.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: support port ID action
Satheesh Paul [Thu, 21 Oct 2021 05:11:14 +0000 (10:41 +0530)]
common/cnxk: support port ID action

This patch adds a ROC API to support the flow port ID action type.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/virtio: fix avail descriptor ID
Xuan Ding [Thu, 21 Oct 2021 14:25:40 +0000 (14:25 +0000)]
net/virtio: fix avail descriptor ID

Vhost advances the descriptor's Buffer ID to the next used descriptor
when the VIRTIO_F_IN_ORDER feature is negotiated. When virtio reuses
the descriptor, the Buffer ID should be restored even when the
VIRTQ_DESC_F_INDIRECT feature is negotiated.

Fixes: b473061b0e1d ("net/virtio: fix indirect descriptors in packed datapaths")
Cc: stable@dpdk.org
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yong Liu <yong.liu@intel.com>
Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  net/vhost: merge stats loop in datapath
Gaoxiang Liu [Sun, 17 Oct 2021 23:19:52 +0000 (07:19 +0800)]
net/vhost: merge stats loop in datapath

To improve performance in vhost Tx/Rx, merge the vhost stats loops.
eth_vhost_tx has two loops iterating over the number of sent packets;
they can be merged into one.
eth_vhost_rx has the same issue as Tx.

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  vhost: enable IOMMU for async vhost
Xuan Ding [Mon, 11 Oct 2021 07:59:42 +0000 (07:59 +0000)]
vhost: enable IOMMU for async vhost

The use of an IOMMU has many advantages, such as isolation and address
translation. This patch extends the capability of the DMA engine to use
the IOMMU if the DMA engine is bound to vfio.

When the memory table is set, the guest memory will be mapped
into the default container of DPDK.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  vfio: allow partially unmapping adjacent memory
Xuan Ding [Mon, 11 Oct 2021 07:59:41 +0000 (07:59 +0000)]
vfio: allow partially unmapping adjacent memory

Currently, if we map a memory area A, then map a separate memory area B
that by coincidence happens to be adjacent to A, the current
implementation will merge these two segments into one, and if partial
unmapping is not supported, these segments will then only be allowed to
be unmapped in one go. In other words, given adjacent segments A and B,
it is currently not possible to map A, then map B, then unmap A.

Fix this by adding a notion of "chunk size", which will allow
subdividing segments into equally sized segments whenever we are dealing
with an IOMMU that does not support partial unmapping. With this change,
we will still be able to merge adjacent segments, but only if they are
of the same size. If we keep with our above example, adjacent segments A
and B will be stored as separate segments if they are of different
sizes.
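
Not the patch itself (the fix records a chunk size in the internal
user-mem-map list), but a sketch of the idea: registering an area as
equally sized chunks lets any chunk-aligned sub-range be unmapped on
its own later:

    #include <stdint.h>
    #include <rte_vfio.h>

    static int
    map_in_chunks(int container_fd, uint64_t vaddr, uint64_t iova,
                  uint64_t len, uint64_t chunk)
    {
        uint64_t off;

        for (off = 0; off < len; off += chunk) {
            int ret = rte_vfio_container_dma_map(container_fd,
                                                 vaddr + off,
                                                 iova + off, chunk);
            if (ret != 0)
                return ret; /* caller unmaps what was mapped so far */
        }
        return 0;
    }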

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  net/virtio: fix indirect descriptor reconnection
Xuan Ding [Wed, 13 Oct 2021 01:36:40 +0000 (01:36 +0000)]
net/virtio: fix indirect descriptor reconnection

Add initialization for packed ring indirect descriptors
in the reconnection path.

Fixes: 381f39ebb78a ("net/virtio: fix packed ring indirect descricptors setup")
Cc: stable@dpdk.org
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  vhost: add sanity check on inflight last index
Li Feng [Thu, 14 Oct 2021 12:40:08 +0000 (20:40 +0800)]
vhost: add sanity check on inflight last index

The index in rte_vhost_set_last_inflight_io_split comes from
the frontend driver; check that it is within the virtqueue range.

Fixes: bb0c2de9602b ("vhost: add APIs to operate inflight ring")
Cc: stable@dpdk.org
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  net/virtio: fix Tx checksum for tunnel packets
Ivan Malov [Thu, 16 Sep 2021 18:49:55 +0000 (21:49 +0300)]
net/virtio: fix Tx checksum for tunnel packets

The Tx prepare method calls rte_net_intel_cksum_prepare(), which
handles tunnel packets correctly, but the Tx burst path does not
take tunnel presence into account when computing the offsets.
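
The accounting involved, sketched with the mbuf length fields (flag
name per the renamed mbuf namespace above):

    #include <rte_mbuf.h>

    /* For a tunnelled packet, the inner L4 header starts after the
     * outer headers too, not just after l2_len + l3_len. */
    static inline uint64_t
    inner_l4_offset(const struct rte_mbuf *m)
    {
        uint64_t off = m->l2_len + m->l3_len;

        if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
            off += m->outer_l2_len + m->outer_l3_len;
        return off;
    }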

Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
2 years ago  examples/vhost: fix use after free on drain
Wenwu Ma [Fri, 24 Sep 2021 17:23:00 +0000 (17:23 +0000)]
examples/vhost: fix use after free on drain

When a vdev is removed in the destroy_device function,
the corresponding vhost TX buffer will also be freed,
but the vhost TX buffer may still be used in the
drain_vhost function, which causes a
heap use-after-free error. Therefore, before accessing
the vhost TX buffer, we need to check whether the vdev
has been removed, and if so, skip this vdev.

Fixes: a68ba8e0a6b6 ("examples/vhost: refactor vhost data path")
Cc: stable@dpdk.org
Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2 years ago  net/virtio: fix oversized packets in vectorized Rx
Marvin Liu [Sun, 26 Sep 2021 09:28:42 +0000 (17:28 +0800)]
net/virtio: fix oversized packets in vectorized Rx

If the packed ring size is not a power of two, it is possible that the
remaining number of descriptors is less than one batch while the batch
operation still passes. This causes an incorrect remaining-number
calculation and then leads to receiving oversized packets. The patch
fixes the issue by adding a remaining-number check before the batch
operation.

Fixes: 77d66da83834 ("net/virtio: add vectorized packed ring Rx")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  vdpa/mlx5: retry VAR allocation during vDPA restart
Xueming Li [Fri, 15 Oct 2021 15:05:45 +0000 (23:05 +0800)]
vdpa/mlx5: retry VAR allocation during vDPA restart

VAR is the device memory space for the virtio queue doorbells;
Qemu can mmap it directly to speed up doorbell pushes.

On a busy system, Qemu takes time to release VAR resources during driver
shutdown. If vDPA is restarted quickly, the VAR allocation fails with
error 28, since the VAR is a singleton resource per device.

This patch adds a retry mechanism for VAR allocation.

Fixes: 4cae722c1b06 ("vdpa/mlx5: move virtual doorbell alloc to probe")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  vdpa/mlx5: workaround FW first completion in start
Xueming Li [Fri, 15 Oct 2021 15:05:44 +0000 (23:05 +0800)]
vdpa/mlx5: workaround FW first completion in start

After a vDPA application restart, Qemu restores the VQ with its used and
available indexes, and a new incoming packet triggers the virtio driver
to handle buffers. Under heavy traffic, there is no available buffer for
the firmware to receive new packets, no Rx interrupts are generated, and
the driver is stuck endlessly waiting for interrupts.

As a firmware workaround, this patch sends a notification after
VQ setup to ask the driver to handle buffers and fill in new buffers.

Fixes: bff735011078 ("vdpa/mlx5: prepare virtio queues")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  net/virtio: fix check scatter on all Rx queues
Zhihong Peng [Fri, 8 Oct 2021 05:49:45 +0000 (05:49 +0000)]
net/virtio: fix check scatter on all Rx queues

This patch fixes the wrong way of obtaining the virtqueues.
The end of the virtqueue array cannot be determined based on
whether an entry is NULL.

Fixes: 4e8169eb0d2d ("net/virtio: fix Rx scatter offload")
Cc: stable@dpdk.org
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2 years ago  net/ice: fix TM hierarchy commit flag reset
Ting Xu [Thu, 21 Oct 2021 05:54:07 +0000 (05:54 +0000)]
net/ice: fix TM hierarchy commit flag reset

After DCF commits the TM hierarchy configuration, the commit flag is set
to avoid a duplicated commit. But the flag is not reset after device
stop, which prevents updating the hierarchy configuration unless the
device is closed. That is not reasonable. This patch resets the commit
flag after device stop. Then users can delete and add nodes to commit a
new TM hierarchy configuration.

Fixes: 3a6bfc37eaf4 ("net/ice: support QoS config VF bandwidth in DCF")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2 years ago  net/e1000: build on Windows
William Tu [Wed, 20 Oct 2021 03:47:49 +0000 (20:47 -0700)]
net/e1000: build on Windows

This patch enables building the e1000 driver for Windows.
I tested it using two Windows VMs on top of VMware Fusion,
creating two e1000 devices with device ID 0x10D3 (82574L),
and verified that Rx/Tx works correctly using dpdk-testpmd.exe
in rxonly and txonly mode.

Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Tested-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Pallavi Kadam <pallavi.kadam@intel.com>
2 years ago  net/ixgbe: fix port initialization if MTU config fails
Tudor Cornea [Wed, 20 Oct 2021 18:13:46 +0000 (21:13 +0300)]
net/ixgbe: fix port initialization if MTU config fails

On a VMware ESXi 6.0 setup with an Intel 82599 NIC, the ports don't
seem to initialize anymore while running testpmd.

Configuring Port 0 (socket 0)
ixgbevf_dev_rx_init(): Set max packet length to 1518 failed.
ixgbevf_dev_start(): Unable to initialize RX hardware (-22)
Fail to start port 0: Invalid argument
Configuring Port 1 (socket 0)
ixgbevf_dev_rx_init(): Set max packet length to 1518 failed.
ixgbevf_dev_start(): Unable to initialize RX hardware (-22)
Fail to start port 1: Invalid argument
Please stop the ports first

If the call to ixgbevf_rlpml_set_vf fails and we return prematurely,
we will not be able to initialize the ports correctly.

The behavior seems to have changed since the following commit:

Fixes: c77866a16904 ("net/ixgbe: detect failed VF MTU set")
Cc: stable@dpdk.org
We can make this particular use case work correctly if we don't
return an error, which seems to be consistent with the overall
kernel ixgbevf implementation.

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c?h=v5.14#n2015

Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2 years ago  net/mlx5: set Tx queue affinity in round-robin
Rongwei Liu [Thu, 21 Oct 2021 08:56:36 +0000 (11:56 +0300)]
net/mlx5: set Tx queue affinity in round-robin

Previously, we set the txq affinity to 0 and let the firmware
perform round-robin when bonding. The firmware uses a
global counter to assign txq affinity to different
physical ports according to the remainder after division.

There are three disadvantages:
1. The global counter is shared between the kernel and DPDK.
2. After restarting the PMD or port, the previous counter value
is reused, so the new affinity is unpredictable.
3. There is no way to get what affinity is set by the firmware.

In this update, we create several TISs, up to the number of
bonded ports, and bind each TIS to one PF port.

Each port starts to pick up a TIS using its own port
index. An upper-layer application can quickly calculate each txq's
affinity without querying.

At DPDK layer, when creating txq with 2 bonding ports, the
affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2

Note: this is only applicable to the DevX API.
This affinity is subject to the HW hash.
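
The per-port pick shown above reduces to a simple modulo; a sketch
with 1-based affinity values over num_ports bonded physical ports
(0 is reserved for "let firmware pick"):

    static unsigned int
    txq_affinity(unsigned int port_idx, unsigned int txq_idx,
                 unsigned int num_ports)
    {
        return (port_idx + txq_idx) % num_ports + 1;
    }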

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  common/mlx5: add LAG context query
Rongwei Liu [Thu, 21 Oct 2021 08:56:35 +0000 (11:56 +0300)]
common/mlx5: add LAG context query

Add a new function, mlx5_devx_cmd_query_lag(), to query LAG
properties from the firmware, including state/affinity/mode etc.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: close tools socket with last device
Dmitry Kozlyuk [Thu, 14 Oct 2021 08:55:28 +0000 (11:55 +0300)]
net/mlx5: close tools socket with last device

MLX5 PMD exposes a socket for external tools to dump port state.
Socket events are listened using an interrupt source of EXT type.
The socket was closed and the interrupt callback was unregistered
at program exit, which is incorrect because DPDK could be already
shut down at this point. Move actions performed at program exit
to the moment the last MLX5 port is closed. The socket will be opened
again if later a new MLX5 device is plugged in and probed.
Also fix comments that mistakenly talked
about secondary processes instead of external tools.

Fixes: e6cdc54cc0ef ("net/mlx5: add socket server for external tools")
Cc: stable@dpdk.org
Reported-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2 years ago  net/mlx5: fix Rx queue resource cleanup
Dmitry Kozlyuk [Mon, 18 Oct 2021 17:24:56 +0000 (20:24 +0300)]
net/mlx5: fix Rx queue resource cleanup

mlx5_rxq_start() allocates rxq_ctrl->obj and frees it on failure,
but did not set it to NULL. Later, mlx5_rxq_release() could not
recognize that this object was already freed and attempted to release
its resources, resulting in a crash:

    Configuring Port 0 (socket 0)
    mlx5_common: Failed to create RQ using DevX
    mlx5_common: Can't create DevX RQ object.
    mlx5_net: Port 0 Rx queue 0 RQ creation failure.
    Segmentation fault

Set rxq_ctrl->obj to NULL after it is freed to skip resource release.
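
The generic shape of the fix, in a reduced example:

    #include <stdlib.h>

    struct rxq_ctrl { void *obj; };

    static void
    rxq_obj_free(struct rxq_ctrl *ctrl)
    {
        free(ctrl->obj);
        ctrl->obj = NULL; /* later release paths now skip this object */
    }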

Fixes: 1260a87b2889 ("net/mlx5: share Rx control code")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: fix meter yellow policy with RSS action
Bing Zhao [Mon, 18 Oct 2021 13:53:05 +0000 (16:53 +0300)]
net/mlx5: fix meter yellow policy with RSS action

The RSS configuration in a policy action container was a pointer
inside a union, and the pointer area could be used by another fate
action. In the current implementation, the RSS of the green color
took precedence over that of the yellow color. There was a high
probability that the pointer would be treated as the RSS and result
in an erroneous flow expansion when only the yellow color had the
RSS action.

The fate action type should also be checked to get rid of the
misjudgment.

Fixes: b38a12272b3a ("net/mlx5: split meter color policy handling")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: check DevX to support more Verbs ports
Xueming Li [Tue, 19 Oct 2021 10:35:01 +0000 (18:35 +0800)]
net/mlx5: check DevX to support more Verbs ports

The Verbs API doesn't support device port numbers larger than 255 by
design.

To support more VF or SubFunction port representors, force the DevX API
check when the maximum number of Verbs device link ports is larger
than 255.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: enable DevX Tx queue creation
Xueming Li [Tue, 19 Oct 2021 10:35:00 +0000 (18:35 +0800)]
net/mlx5: enable DevX Tx queue creation

The Verbs API does not support InfiniBand device port numbers larger
than 255 by design. To support more representors on a single InfiniBand
device, the DevX API should be engaged.

While creating a Send Queue (SQ) object with the Verbs API, the PMD
assigned the IB device port attribute and the kernel created the
default miss flows in the FDB domain, to redirect egress traffic from
the queue being created to the representor's appropriate peer (wire,
HPF, VF or SF).

With the DevX API there is no IB device port attribute (it is merely a
kernel one; DevX operates in PRM terms) and the PMD must create the
default miss flows in the FDB explicitly. The PMD did not provide this,
so using the DevX API for E-Switch configurations was disabled.

The default miss FDB flow matches the E-Switch manager vport (to make
sure the source is some representor) and the SQn (Send Queue number, a
device-internal queue index). The root flow table is managed by the
kernel/firmware and does not support the vport redirect action, so we
have to split the default miss flow into two:

- a flow with the lowest priority in the root table that matches the
E-Switch manager vport ID and jumps to group 1.
- a flow in group 1 that matches the E-Switch manager vport ID and SQn
and forwards the packet to the peer vport.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: fix internal root table flow priority
Xueming Li [Tue, 19 Oct 2021 10:34:59 +0000 (18:34 +0800)]
net/mlx5: fix internal root table flow priority

When creating internal transfer flow on root table with lowest
priority, the flow was created with max UINT32_MAX priority. It is wrong
since the flow is created in kernel and  max priority supported is 16.

This patch fixes this by adding internal flow check.

Fixes: 5f8ae44dd454 ("net/mlx5: enlarge maximal flow priority")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: support flow item of normal Tx queue
Xueming Li [Tue, 19 Oct 2021 10:34:58 +0000 (18:34 +0800)]
net/mlx5: support flow item of normal Tx queue

Extend the txq flow pattern to support both hairpin and regular txq.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: support E-Switch manager egress traffic match
Xueming Li [Tue, 19 Oct 2021 10:34:57 +0000 (18:34 +0800)]
net/mlx5: support E-Switch manager egress traffic match

For an egress packet on a representor, the vport ID in the transport
domain is the E-Switch manager vport ID, since the representor shares
the resources of the E-Switch manager. The E-Switch manager vport ID
and the Tx queue internal device index are used to match representor
egress packets.

This patch adds a flow item port ID match on the E-Switch manager.

The E-Switch manager vport ID is 0xfffe on BlueField, and 0 otherwise.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: improve Verbs flow priority discovery
Xueming Li [Tue, 19 Oct 2021 10:34:56 +0000 (18:34 +0800)]
net/mlx5: improve Verbs flow priority discovery

To detect the number of Verbs flow priorities, the PMD tries to create
Verbs flows at different priorities. However, Verbs is not designed to
support port numbers larger than 255.

When DevX is supported by the kernel driver, 16 Verbs priorities must
be supported, so there is no need to create Verbs flows.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: use Netlink when IB port greater than 255
Xueming Li [Tue, 19 Oct 2021 10:34:55 +0000 (18:34 +0800)]
net/mlx5: use Netlink when IB port greater than 255

The IB spec doesn't allow more than 255 ports on a single HCA; a port
number of 256 was cast to the u8 value 0, which is invalid for
ibv_query_port().

This patch invokes the Netlink API to query the port state when the
port number is greater than 255.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  common/mlx5: get RDMA port state via Netlink
Xueming Li [Tue, 19 Oct 2021 10:34:54 +0000 (18:34 +0800)]
common/mlx5: get RDMA port state via Netlink

Introduce a Netlink API to get the RDMA port state.

The port state is retrieved based on the RDMA device name and port
index.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  maintainers: update for AMD axgbe
Chandubabu Namburu [Thu, 21 Oct 2021 04:58:38 +0000 (04:58 +0000)]
maintainers: update for AMD axgbe

Update the AMD axgbe maintainer.

Signed-off-by: Chandubabu Namburu <chandu@amd.com>
Acked-by: Somalapuram Amaranath <asomalap@amd.com>
2 years ago  app/testpmd: support L2TPv2 and PPP protocol pattern
Jie Wang [Thu, 21 Oct 2021 10:49:24 +0000 (18:49 +0800)]
app/testpmd: support L2TPv2 and PPP protocol pattern

Add support for testpmd to parse the L2TPv2 and PPP protocol patterns.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  net/iavf: support PPPoL2TPv2oUDP RSS Hash
Jie Wang [Thu, 21 Oct 2021 10:49:23 +0000 (18:49 +0800)]
net/iavf: support PPPoL2TPv2oUDP RSS Hash

Add support for PPP over L2TPv2 over UDP protocol RSS Hash based
on inner IP src/dst address and TCP/UDP src/dst port.

Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: support L2TPv2 and PPP protocol
Jie Wang [Thu, 21 Oct 2021 10:49:22 +0000 (18:49 +0800)]
ethdev: support L2TPv2 and PPP protocol

Add flow pattern items and header formats for L2TPv2 and PPP.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  common/cnxk: add new PCI IDs to supported devices
Tomasz Duszynski [Fri, 1 Oct 2021 20:41:53 +0000 (22:41 +0200)]
common/cnxk: add new PCI IDs to supported devices

CNF10KA does not differ it terms of RVU resources from
CN10KA platform hence add it to list of devices respective
drivers support.

Otherwise devices on CNF10KA are not probed even though
compatible drivers exist.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  ethdev: remove full stop after short comments
Andrew Rybchenko [Wed, 20 Oct 2021 12:52:21 +0000 (15:52 +0300)]
ethdev: remove full stop after short comments

A full stop at the end of a short comment just makes the line longer. It
should be either everywhere or nowhere, for consistency.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: make device and data structures readable
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:27 +0000 (15:47 +0300)]
ethdev: make device and data structures readable

Add empty lines to separate fields commented using different styles.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: remove reserved fields from internal structures
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:26 +0000 (15:47 +0300)]
ethdev: remove reserved fields from internal structures

Fixes: f9bdee267ab8 ("ethdev: hide internal structures")

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: fix EEPROM spelling
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:25 +0000 (15:47 +0300)]
ethdev: fix EEPROM spelling

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: fix ID spelling in comments and log messages
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:24 +0000 (15:47 +0300)]
ethdev: fix ID spelling in comments and log messages

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: fix VLAN spelling including VLAN ID case
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:23 +0000 (15:47 +0300)]
ethdev: fix VLAN spelling including VLAN ID case

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: fix DCB and VMDq spelling
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:22 +0000 (15:47 +0300)]
ethdev: fix DCB and VMDq spelling

Fix both in one changeset since they share a line in a number of cases.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  ethdev: fix Ethernet spelling
Andrew Rybchenko [Wed, 20 Oct 2021 12:47:21 +0000 (15:47 +0300)]
ethdev: fix Ethernet spelling

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>