dpdk.git
2 years ago  net/iavf: fix function pointer in multi-process
Steve Yang [Mon, 28 Feb 2022 09:48:59 +0000 (09:48 +0000)]
net/iavf: fix function pointer in multi-process

This patch saves the selection of the Receive Flex Descriptor
profile ID as an index value instead of a function pointer, and uses
the index to select the function at call time.

Otherwise the secondary process would run with a wrong function
address inherited from the primary process.
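
An illustrative sketch of the pattern (hypothetical names, not the
exact iAVF code): a function pointer stored in memory shared between
processes is only valid in the process that wrote it, because ASLR
maps the PMD at different addresses in primary and secondary
processes. Storing a plain index and resolving it per process avoids
that:

    #include <stdint.h>

    struct rte_mbuf;

    typedef void (*flex_desc_fn)(struct rte_mbuf *mb, volatile void *desc);

    static void
    parse_flex_profile_a(struct rte_mbuf *mb, volatile void *desc)
    {
        (void)mb; (void)desc; /* parse profile A fields here */
    }

    static void
    parse_flex_profile_b(struct rte_mbuf *mb, volatile void *desc)
    {
        (void)mb; (void)desc; /* parse profile B fields here */
    }

    /* Per-process table, indexed identically in every process. */
    static const flex_desc_fn flex_desc_fns[] = {
        parse_flex_profile_a,
        parse_flex_profile_b,
    };

    struct shared_rxq_cfg {
        uint8_t flex_fn_idx; /* safe to share: an index, not an address */
    };

    static inline void
    parse_flex_desc(const struct shared_rxq_cfg *cfg,
                    struct rte_mbuf *mb, volatile void *desc)
    {
        flex_desc_fns[cfg->flex_fn_idx](mb, desc); /* resolved locally */
    }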

Fixes: 12b435bf8f2f ("net/iavf: support flex desc metadata extraction")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2 years ago  net/iavf: support NAT-T / UDP encapsulation
Radu Nicolau [Mon, 28 Feb 2022 15:00:22 +0000 (15:00 +0000)]
net/iavf: support NAT-T / UDP encapsulation

Add support for NAT-T / UDP encapsulated ESP.
This fixes the inline crypto feature for iAVF, which would not
function properly without setting the UDP encapsulation options.

Fixes: 6bc987ecb860 ("net/iavf: support IPsec inline crypto")
Cc: stable@dpdk.org
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
2 years ago  net/ixgbe: fix FSP check for X550EM devices
Stephen Douthit [Mon, 28 Feb 2022 15:29:35 +0000 (10:29 -0500)]
net/ixgbe: fix FSP check for X550EM devices

Currently all X550EM* MAC types fall through to the default case and
get reported as non-SFP regardless of media type, which isn't correct.

Fixes: 0790adeb5675 ("ixgbe/base: support X550em_a device")
Cc: stable@dpdk.org
Signed-off-by: Stephen Douthit <stephend@silicom-usa.com>
Signed-off-by: Jeff Daly <jeffd@silicom-usa.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2 years ago  net/hns3: increase time waiting for PF reset completion
Huisong Li [Wed, 2 Mar 2022 00:35:01 +0000 (08:35 +0800)]
net/hns3: increase time waiting for PF reset completion

In the case where both the PF and VF need to be reset, the VF has to
wait after the hardware reset completes before restoring its
configuration, so that recovery does not fail because the PF reset is
still in progress. But the estimated 1-second wait is not sufficient.
This patch increases it to 5 seconds.

Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
2 years ago  net/hns3: fix VF RSS TC mode entry
Huisong Li [Mon, 28 Feb 2022 03:21:46 +0000 (11:21 +0800)]
net/hns3: fix VF RSS TC mode entry

For packets with VLAN priorities destined for the VF, hardware still
assigns the Rx queue based on the UP-to-TC mapping configured by the
PF. But the VF has only one TC. If the other TCs are not enabled,
priority packets that are not mapped to TC0 are not spread by the
RSS hash but are steered to queue 0. So the driver has to enable the
unused TCs using the TC0 queue mapping configuration.

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
2 years ago  net/hns3: fix RSS TC mode entry
Huisong Li [Mon, 28 Feb 2022 03:21:45 +0000 (11:21 +0800)]
net/hns3: fix RSS TC mode entry

The driver allocates queues only to valid TCs, but it also
configures queues for invalid TCs, which is unreasonable.

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
2 years ago  net/hns3: remove duplicate macro definition
Jie Hai [Mon, 28 Feb 2022 03:21:41 +0000 (11:21 +0800)]
net/hns3: remove duplicate macro definition

This patch fixes duplicate macro definition of HNS3_RSS_CFG_TBL_SIZE.

Fixes: 737f30e1c3ab ("net/hns3: support command interface with firmware")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
2 years ago  app/crypto-perf: add IPsec operations population routine
Anoob Joseph [Fri, 4 Mar 2022 10:40:38 +0000 (16:10 +0530)]
app/crypto-perf: add IPsec operations population routine

Ops population functions are called in the datapath. Keeping them
common for PDCP & DOCSIS would mean ops population would carry
additional conditional checks, causing the reported throughput to be
lower than what the PMD is capable of.

Separate out a routine for the IPsec cases and split vector
population and op preparation into two loops, to allow two
rte_rdtsc_precise() calls to capture the cycles consumed for
memcpying the vector. Checking the cycle count inside the loop would
mean more calls to the same API.
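
A hedged sketch of the measurement pattern described above (the
function name and parameters are illustrative, not the exact app
code): bracket the whole vector-population loop with two
rte_rdtsc_precise() calls instead of reading the TSC inside the loop.

    #include <stdint.h>
    #include <string.h>
    #include <rte_cycles.h>

    static uint64_t
    populate_vectors(uint8_t *dst[], const uint8_t *src[],
                     const size_t len[], unsigned int nb)
    {
        uint64_t start = rte_rdtsc_precise();
        unsigned int i;

        for (i = 0; i < nb; i++)
            memcpy(dst[i], src[i], len[i]); /* vector population only */

        /* A single pair of TSC reads covers all the copies. */
        return rte_rdtsc_precise() - start;
    }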

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2 years ago  crypto/qat: fix process type handling
Kai Ji [Tue, 1 Mar 2022 15:02:54 +0000 (23:02 +0800)]
crypto/qat: fix process type handling

This patch fixes the memory corruption issue reported by Coverity
in the process type handling of the QAT PMD, where only primary and
secondary processes are supported in the QAT build request.

Coverity issue: 376551, 376570, 376534
Fixes: fb3b9f492205 ("crypto/qat: rework burst data path")

Signed-off-by: Kai Ji <kai.ji@intel.com>
2 years ago  crypto/qat: fix smaller modulus cases for mod exp
Arek Kusztal [Tue, 1 Mar 2022 14:16:16 +0000 (14:16 +0000)]
crypto/qat: fix smaller modulus cases for mod exp

This patch fixes the non-working cases when the modulus is
smaller than the other arguments.

Fixes: 3b78aa7b2317 ("crypto/qat: refactor asymmetric crypto functions")

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2 years ago  crypto/qat: fix RSA clearing
Arek Kusztal [Mon, 28 Feb 2022 16:05:55 +0000 (16:05 +0000)]
crypto/qat: fix RSA clearing

This patch fixes structurally dead code in the QAT asym PMD.

Coverity issue: 376563
Fixes: 002486db239e ("crypto/qat: refactor asymmetric session")

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2 years ago  compressdev: fix socket ID type
Raja Zidane [Tue, 1 Mar 2022 14:15:02 +0000 (16:15 +0200)]
compressdev: fix socket ID type

The socket ID is used and interpreted as an integer; one of its
possible values is -1 (SOCKET_ID_ANY). Here socket_id is defined as
an unsigned 8-bit integer, so when -1 is stored, it is interpreted
as 255, which causes allocation errors when trying to allocate from
socket 255.

Change socket_id from an unsigned 8-bit integer to an int.
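
A minimal standalone demonstration of the pitfall (not the
compressdev code itself); SOCKET_ID_ANY is defined to -1 by DPDK:

    #include <stdio.h>
    #include <stdint.h>

    #define SOCKET_ID_ANY -1 /* as in DPDK */

    int main(void)
    {
        uint8_t bad_socket_id = SOCKET_ID_ANY; /* wraps to 255 */
        int good_socket_id = SOCKET_ID_ANY;    /* stays -1 */

        printf("uint8_t: %d, int: %d\n", bad_socket_id, good_socket_id);
        /* prints "uint8_t: 255, int: -1" */
        return 0;
    }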

Fixes: ed7dd94f7f66 ("compressdev: add basic device management")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  app/compress-perf: fix number of queue pairs to setup
Raja Zidane [Wed, 2 Mar 2022 08:41:31 +0000 (10:41 +0200)]
app/compress-perf: fix number of queue pairs to setup

The number of QPs is limited by the number of cores, such that in
case the user requests more QPs than possible, the number of QPs
actually configured on the device is equal to the number of cores,
but the app tries to set up the original number of QPs.

Align the number of QPs set up to the limited number.

Fixes: 424dd6c8c1a8 ("app/compress-perf: add weak functions for multicore test")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  app/compress-perf: fix socket ID type during init
Raja Zidane [Wed, 2 Mar 2022 08:39:27 +0000 (10:39 +0200)]
app/compress-perf: fix socket ID type during init

The socket ID is obtained from the rte_compressdev_socket_id()
function, which returns it as an int, but it was stored and
interpreted as an unsigned byte.

Change the type from uint8_t to int.

Fixes: ed7dd94f7f66 ("compressdev: add basic device management")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  compress/mlx5: support out-of-space status
Raja Zidane [Sun, 27 Feb 2022 14:00:52 +0000 (16:00 +0200)]
compress/mlx5: support out-of-space status

When trying to dequeue, an op may fail due to insufficient space
for the op output; the compressdev API defines an out-of-space
status for such ops. The driver can detect out-of-space errors and
report them to the user: check whether hw_error_syndrome specifies
out-of-space and set the op status accordingly.
Also add an error message for the case of a missing B-final flag.

Fixes: f8c97babc9f4 ("compress/mlx5: add data-path functions")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  app/compress-perf: optimize operations pool allocation
Raja Zidane [Wed, 23 Feb 2022 13:33:07 +0000 (15:33 +0200)]
app/compress-perf: optimize operations pool allocation

An array sized for the total number of operations needed for the
de/compression is reserved for ops while enqueueing, although only
the first burst_size entries of the array are used.

Reduce the size of the allocated array.

Fixes: b68a82425da4 ("app/compress-perf: add performance measurement")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2 years ago  app/compress-perf: fix cycle count operations allocation
Raja Zidane [Wed, 23 Feb 2022 13:32:17 +0000 (15:32 +0200)]
app/compress-perf: fix cycle count operations allocation

In the cyclecount main_loop function, each iteration tries to
enqueue X ops. In case Y < X ops were enqueued, the remaining X-Y
ops are moved to the beginning of the ops array to preserve op
order, and the next Y ops are allocated for the next enqueue
action. The allocation currently occurs on the first Y entries of
the array, when it should skip the first X-Y entries and allocate
the following Y entries.

Fix the allocation by adding the correct offset.
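
A sketch of the indexing fix, with illustrative names (the actual
loop lives in the app's cyclecount main_loop): after a partial
enqueue, the X-Y leftover ops occupy the front of the array, so
fresh ops must be allocated starting at index X-Y rather than 0.

    #include <rte_comp.h>
    #include <rte_mempool.h>

    /* leftover ops were already moved to ops[0..leftover-1] */
    static int
    refill_ops(struct rte_mempool *op_pool, struct rte_comp_op **ops,
               uint16_t total_burst, uint16_t enqueued)
    {
        uint16_t leftover = total_burst - enqueued;

        /* allocate 'enqueued' fresh ops after the leftover region */
        return rte_comp_op_bulk_alloc(op_pool, &ops[leftover], enqueued);
    }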

Fixes: 2695db95a147 ("test/compress: add cycle-count mode to perf tool")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  event/dlb2: add shift value check in sparse dequeue
Timothy McDaniel [Wed, 2 Mar 2022 15:12:08 +0000 (09:12 -0600)]
event/dlb2: add shift value check in sparse dequeue

Add a check to ensure that all shift counts are valid.
Shifting by more than 63 bits may result in undefined behavior, as
noted during a Coverity scan.

Coverity issue: 376527
Fixes: e697f35dbdd1 ("event/dlb2: update rolling mask used for dequeue")
Cc: stable@dpdk.org
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
2 years ago  event/cnxk: fix Rx adapter config check
Pavan Nikhilesh [Tue, 1 Mar 2022 22:53:09 +0000 (04:23 +0530)]
event/cnxk: fix Rx adapter config check

The rx_queue_flags should be checked against the
RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID flag.

Fixes: cb4bfd6e7bdf ("event/cnxk: support Rx adapter")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2 years ago  event/cnxk: fix sub-event clearing mask length
Pavan Nikhilesh [Tue, 1 Mar 2022 22:53:08 +0000 (04:23 +0530)]
event/cnxk: fix sub-event clearing mask length

Fix the incorrect mask used when clearing the sub-event type, which
was masking out useful data.

Fixes: e239e0d3faf7 ("event/cnxk: add SSO HW device operations")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2 years ago  ci: remove redundant drivers enabling
Thomas Monjalon [Sat, 26 Feb 2022 18:36:51 +0000 (19:36 +0100)]
ci: remove redundant drivers enabling

No need to explicitly enable drivers bus/vdev and mempool/ring.

bus/vdev is always enabled since
commit 2e33309ebe03 ("config: enable/disable drivers in Arm builds")

mempool/ring is always enabled since
commit 81c2337e044d ("build: make ring mempool driver mandatory")

The driver net/null is kept to allow running test-null.sh.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2 years ago  ci: remove outdated default versions for ABI check
Thomas Monjalon [Tue, 8 Feb 2022 13:47:15 +0000 (14:47 +0100)]
ci: remove outdated default versions for ABI check

The variables REF_GIT_TAG and LIBABIGAIL_VERSION are set
in the CI configuration like .travis.yml or .github/workflows/build.yml.
The default values are outdated and probably unused.

The default values are removed completely
to avoid forgetting an update in future.

The use of the variables is quoted to make sure
a missing value will trigger an appropriate failure.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2 years ago  version: 22.03-rc2
Thomas Monjalon [Sun, 27 Feb 2022 20:52:48 +0000 (21:52 +0100)]
version: 22.03-rc2

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2 years ago  add missing newline at EOF
Stephen Hemminger [Fri, 25 Feb 2022 17:47:03 +0000 (09:47 -0800)]
add missing newline at EOF

The text files did not end with a newline.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2 years ago  remove extra blank line at EOF
Stephen Hemminger [Fri, 25 Feb 2022 17:47:02 +0000 (09:47 -0800)]
remove extra blank line at EOF

These source files all had an unnecessary blank line at the end of
the file.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2 years ago  kni: fix freeing order in device release
Huisong Li [Wed, 9 Feb 2022 07:35:25 +0000 (15:35 +0800)]
kni: fix freeing order in device release

The "kni_dev" is the private data of the "net_device" in kni, and allocated
with the "net_device" by calling "alloc_netdev()". The "net_device" is
freed by calling "free_netdev()" when kni release. The freed memory
includes the "kni_dev". So after "kni_dev" should not be accessed after
"net_device" is released.

Fixes: e77fec694936 ("kni: fix possible mbuf leaks and speed up port release")
Cc: stable@dpdk.org
KASAN trace:

[   85.263717] ==========================================================
[   85.264418] BUG: KASAN: use-after-free in kni_net_release_fifo_phy+
0x30/0x84 [rte_kni]
[   85.265139] Read of size 8 at addr ffff000260668d60 by task kni/341
[   85.265703]
[   85.265857] CPU: 0 PID: 341 Comm: kni Tainted: G     U     O
5.15.0-rc4+ #1
[   85.266525] Hardware name: linux,dummy-virt (DT)
[   85.266968] Call trace:
[   85.267220]  dump_backtrace+0x0/0x2d0
[   85.267591]  show_stack+0x24/0x30
[   85.267924]  dump_stack_lvl+0x8c/0xb8
[   85.268294]  print_address_description.constprop.0+0x74/0x2b8
[   85.268855]  kasan_report+0x1e4/0x200
[   85.269224]  __asan_load8+0x98/0xd4
[   85.269577]  kni_net_release_fifo_phy+0x30/0x84 [rte_kni]
[   85.270116]  kni_dev_remove.isra.0+0x50/0x64 [rte_kni]
[   85.270630]  kni_ioctl_release+0x254/0x320 [rte_kni]
[   85.271136]  kni_ioctl+0x64/0xb0 [rte_kni]
[   85.271553]  __arm64_sys_ioctl+0xdc/0x120
[   85.271955]  invoke_syscall+0x68/0x1a0
[   85.272332]  el0_svc_common.constprop.0+0x90/0x200
[   85.272807]  do_el0_svc+0x94/0xa4
[   85.273144]  el0_svc+0x78/0x240
[   85.273463]  el0t_64_sync_handler+0x1a8/0x1b0
[   85.273895]  el0t_64_sync+0x1a0/0x1a4
[   85.274264]
[   85.274427] Allocated by task 341:
[   85.274767]  kasan_save_stack+0x2c/0x60
[   85.275157]  __kasan_kmalloc+0x90/0xb4
[   85.275533]  __kmalloc_node+0x230/0x594
[   85.275917]  kvmalloc_node+0x8c/0x190
[   85.276286]  alloc_netdev_mqs+0x70/0x6b0
[   85.276678]  kni_ioctl_create+0x224/0xf40 [rte_kni]
[   85.277166]  kni_ioctl+0x9c/0xb0 [rte_kni]
[   85.277581]  __arm64_sys_ioctl+0xdc/0x120
[   85.277980]  invoke_syscall+0x68/0x1a0
[   85.278357]  el0_svc_common.constprop.0+0x90/0x200
[   85.278830]  do_el0_svc+0x94/0xa4
[   85.279172]  el0_svc+0x78/0x240
[   85.279491]  el0t_64_sync_handler+0x1a8/0x1b0
[   85.279925]  el0t_64_sync+0x1a0/0x1a4
[   85.280292]
[   85.280454] Freed by task 341:
[   85.280763]  kasan_save_stack+0x2c/0x60
[   85.281147]  kasan_set_track+0x2c/0x40
[   85.281522]  kasan_set_free_info+0x2c/0x50
[   85.281930]  __kasan_slab_free+0xdc/0x140
[   85.282331]  slab_free_freelist_hook+0x90/0x250
[   85.282782]  kfree+0x128/0x580
[   85.283099]  kvfree+0x48/0x60
[   85.283402]  netdev_freemem+0x34/0x44
[   85.283770]  netdev_release+0x50/0x64
[   85.284138]  device_release+0xa0/0x120
[   85.284516]  kobject_put+0xf8/0x160
[   85.284867]  put_device+0x20/0x30
[   85.285204]  free_netdev+0x22c/0x310
[   85.285562]  kni_dev_remove.isra.0+0x48/0x64 [rte_kni]
[   85.286076]  kni_ioctl_release+0x254/0x320 [rte_kni]
[   85.286573]  kni_ioctl+0x64/0xb0 [rte_kni]
[   85.286992]  __arm64_sys_ioctl+0xdc/0x120
[   85.287392]  invoke_syscall+0x68/0x1a0
[   85.287769]  el0_svc_common.constprop.0+0x90/0x200
[   85.288243]  do_el0_svc+0x94/0xa4
[   85.288579]  el0_svc+0x78/0x240
[   85.288899]  el0t_64_sync_handler+0x1a8/0x1b0
[   85.289332]  el0t_64_sync+0x1a0/0x1a4
[   85.289699]
[   85.289862] The buggy address belongs to the object at ffff000260668000
[   85.289862]  which belongs to the cache kmalloc-cg-8k of size 8192
[   85.291079] The buggy address is located 3424 bytes inside of
[   85.291079]  8192-byte region [ffff000260668000, ffff00026066a000)
[   85.292213] The buggy address belongs to the page:
[   85.292684] page:(____ptrval____) refcount:1 mapcount:0 mapping:
0000000000000000 index:0x0 pfn:0x2a0668
[   85.293585] head:(____ptrval____) order:3 compound_mapcount:0
compound_pincount:0
[   85.294305] flags: 0xbfff80000010200(slab|head|node=0|zone=2|
lastcpupid=0x7fff)
[   85.295020] raw: 0bfff80000010200 0000000000000000 dead000000000122
ffff0000c000d680
[   85.295767] raw: 0000000000000000 0000000080020002 00000001ffffffff
0000000000000000
[   85.296512] page dumped because: kasan: bad access detected
[   85.297054]
[   85.297217] Memory state around the buggy address:
[   85.297688]  ffff000260668c00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb
[   85.298384]  ffff000260668c80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb
[   85.299088] >ffff000260668d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb
[   85.299781]                                                        ^
[   85.300396]  ffff000260668d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb
[   85.301092]  ffff000260668e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb
[   85.301787] ===========================================================

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  bus/pci: assign driver pointer before mapping
Michal Krawczyk [Wed, 19 Jan 2022 14:50:37 +0000 (15:50 +0100)]
bus/pci: assign driver pointer before mapping

The patch changing the way of accessing the interrupt handle also
changed the order of the rte_pci_map_device() call and the
rte_pci_device::driver assignment. This caused issues with Write
Combine mapping on the Linux platform when used with the igb_uio
module.

Linux implementation of pci_uio_map_resource_by_index(), which is called
by rte_pci_map_device(), needs access to the device's driver. Otherwise
it won't be able to check the driver's flags and won't respect them.

Fixes: d61138d4f0e2 ("drivers: remove direct access to interrupt handle")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2 years ago  devargs: fix crash with uninitialized parsing
Madhuker Mythri [Mon, 14 Feb 2022 17:08:11 +0000 (22:38 +0530)]
devargs: fix crash with uninitialized parsing

The function rte_devargs_parse() was previously safe to call with a
non-initialized devargs structure as parameter.

When adding the support for the global device syntax,
this assumption was broken.
Restore it by forcing a memset as part of the call itself.
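
A simplified sketch of the restored contract (the exact fix may
differ in detail): clear the output structure at the top of
rte_devargs_parse() so callers may again pass it in uninitialized.

    #include <errno.h>
    #include <string.h>
    #include <rte_devargs.h>

    int
    rte_devargs_parse(struct rte_devargs *da, const char *dev)
    {
        if (da == NULL)
            return -EINVAL;
        /* make stack-allocated, uninitialized devargs safe again */
        memset(da, 0, sizeof(*da));
        /* ... existing parsing of 'dev' follows ... */
        return 0;
    }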

Bugzilla ID: 933
Fixes: b344eb5d941a ("devargs: parse global device syntax")
Cc: stable@dpdk.org
Signed-off-by: Madhuker Mythri <madhuker.mythri@oracle.com>
Signed-off-by: Gaetan Rivet <grive@u256.net>
2 years ago  eal/linux: fix illegal memory access in uevent handler
Steve Yang [Wed, 23 Feb 2022 08:49:50 +0000 (08:49 +0000)]
eal/linux: fix illegal memory access in uevent handler

'recv()' fills 'buf', and 'strlcpy()' is later used to copy from
this buffer. But as Coverity warns, 'recv()' doesn't guarantee that
'buf' is null-terminated, while 'strlcpy()' requires it.

Enlarge 'buf' to 'EAL_UEV_MSG_LEN + 1' bytes and ensure the last
byte can be set to 0 when the received size is EAL_UEV_MSG_LEN.
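
The general pattern being applied, as a simplified sketch (the real
handler does more; the EAL_UEV_MSG_LEN value here is assumed for
illustration): reserve one extra byte so the buffer can always be
null-terminated before string functions such as strlcpy() see it.

    #include <sys/types.h>
    #include <sys/socket.h>

    #define EAL_UEV_MSG_LEN 4096 /* assumed size for illustration */

    static void handle_uevent(int fd)
    {
        char buf[EAL_UEV_MSG_LEN + 1];
        ssize_t n;

        n = recv(fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
        if (n < 0)
            return;
        buf[n] = '\0'; /* index is at most EAL_UEV_MSG_LEN: in bounds */
        /* dev_uev_parse(buf, ...) may now rely on termination */
    }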

CID 375864:  Memory - illegal accesses  (STRING_NULL)
Passing unterminated string "buf" to "dev_uev_parse", which expects
a null-terminated string.

Coverity issue: 375864
Fixes: 0d0f478d0483 ("eal/linux: add uevent parse and process")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  distributor: fix potential overflow
Bruce Richardson [Thu, 17 Feb 2022 15:02:39 +0000 (15:02 +0000)]
distributor: fix potential overflow

Coverity flags the fact that the tag values used in distributor are
32-bit, which means that when we use bit-manipulation to convert a tag
match/no-match to a bit in an array, we need to typecast to a 64-bit
type before shifting past 32 bits.

Coverity issue: 375808
Fixes: 08ccf3faa6a9 ("distributor: new packet distributor library")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2 years ago  efd: fix uninitialized structure
Pablo de Lara [Fri, 25 Feb 2022 09:27:45 +0000 (09:27 +0000)]
efd: fix uninitialized structure

Coverity flags that both elements of efd_online_group_entry
are used uninitialized. This is OK because this structure
is initially used for starting values, so any value is OK.

Coverity ID: 375868
Fixes: 56b6ef874f80 ("efd: new Elastic Flow Distributor library")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
2 years ago  test/efd: fix sockets mask size
Pablo de Lara [Fri, 25 Feb 2022 09:27:44 +0000 (09:27 +0000)]
test/efd: fix sockets mask size

Constant value 1 has a size of 32 bits, and shifting it more than 32 bits
to the left overflows. 1ULL is needed to be able to get a 64-bit value.
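
The same widening pitfall appears in the distributor fix above and
the i40e fix below; a minimal illustration:

    #include <stdint.h>

    uint64_t set_socket_bit(unsigned int socket)
    {
        /* Wrong: the literal 1 is a 32-bit int, so the shift is done
         * in 32-bit arithmetic and overflows for socket >= 32:
         *
         *     uint64_t mask = 1 << socket;
         */

        /* Right: force 64-bit arithmetic before shifting. */
        return 1ULL << socket;
    }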

Coverity ID: 375846
Fixes: 8751a7e9832b ("efd: allow more CPU sockets in table creation")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
2 years ago  gpu/cuda: map GPU memory with GDRCopy
Elena Agostini [Fri, 25 Feb 2022 03:12:26 +0000 (03:12 +0000)]
gpu/cuda: map GPU memory with GDRCopy

To enable the gpudev rte_gpu_mem_cpu_map feature to expose
GPU memory to the CPU, the GPU CUDA driver library needs
the GDRCopy library and driver.

If DPDK is built without GDRCopy, the GPU CUDA driver returns an
error if rte_gpu_mem_cpu_map is invoked.

All the other GPU CUDA driver functionalities are not affected by
the absence of GDRCopy; thus, this is an optional functionality
that can be enabled in the GPU CUDA driver.

CUDA driver documentation has been updated accordingly.
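
A hedged usage sketch, assuming the rte_gpu_mem_cpu_map() /
rte_gpu_mem_cpu_unmap() signatures from rte_gpudev.h; error handling
is trimmed:

    #include <rte_gpudev.h>

    int expose_gpu_mem(int16_t dev_id, void *gpu_ptr, size_t len)
    {
        void *cpu_ptr;

        cpu_ptr = rte_gpu_mem_cpu_map(dev_id, len, gpu_ptr);
        if (cpu_ptr == NULL)
            return -1; /* e.g. built without GDRCopy */

        /* the CPU can now read/write the GPU buffer via cpu_ptr */

        return rte_gpu_mem_cpu_unmap(dev_id, gpu_ptr);
    }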

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
2 years ago  doc: add CUDA driver features
Elena Agostini [Fri, 25 Feb 2022 03:12:25 +0000 (03:12 +0000)]
doc: add CUDA driver features

The features list was missed when introducing the driver.

Fixes: 1306a73b1958 ("gpu/cuda: introduce CUDA driver")
Cc: stable@dpdk.org
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
2 years ago  raw/cnxk_gpio: stop device once tests are complete
Tomasz Duszynski [Fri, 25 Feb 2022 14:55:46 +0000 (15:55 +0100)]
raw/cnxk_gpio: stop device once tests are complete

A started device should eventually be stopped.

Fixes: 0e6557b448fa ("raw/cnxk_gpio: add self test")

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/mlx5: support queue/RSS actions for external Rx queue
Michael Baum [Thu, 24 Feb 2022 23:25:11 +0000 (01:25 +0200)]
net/mlx5: support queue/RSS actions for external Rx queue

Add support for the queue/RSS actions on external Rx queues.
In indirection table creation, the queue index will be taken from
the mapping array.

This feature supports neither LRO nor Hairpin.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: add external Rx queue mapping API
Michael Baum [Thu, 24 Feb 2022 23:25:10 +0000 (01:25 +0200)]
net/mlx5: add external Rx queue mapping API

An external queue is a queue that has been created and managed
outside the PMD. The queue's owner might use the PMD to generate
flow rules using these external queues.

When the queue is created in hardware, it is given an ID represented
by 32 bits. In contrast, the index of the queues in the PMD is
represented by 16 bits. To enable the use of the PMD to generate
flow rules, the queue owner must provide a mapping between the HW
index and a 16-bit index corresponding to the ethdev API.

This patch adds an API enabling insertion/cancellation of a mapping
between the HW queue ID and the ethdev queue ID.
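
A hedged usage sketch, assuming the PMD-specific functions this
patch adds to rte_pmd_mlx5.h (map a 32-bit HW queue ID to a 16-bit
ethdev index before using it in flow rules, and unmap it when done):

    #include <rte_pmd_mlx5.h>

    int use_external_rxq(uint16_t port_id, uint16_t dpdk_idx,
                         uint32_t hw_idx)
    {
        int ret;

        ret = rte_pmd_mlx5_external_rx_queue_id_map(port_id,
                                                    dpdk_idx, hw_idx);
        if (ret != 0)
            return ret;

        /* ... create QUEUE/RSS flow rules referencing dpdk_idx ... */

        return rte_pmd_mlx5_external_rx_queue_id_unmap(port_id,
                                                       dpdk_idx);
    }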

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: optimize queue type checks
Michael Baum [Thu, 24 Feb 2022 23:25:09 +0000 (01:25 +0200)]
net/mlx5: optimize queue type checks

The RxQ/TxQ control structure has a field named type. This type is
an enum with values for standard and hairpin. The use of this field
is to check whether the queue is of the hairpin type or standard.

This patch replaces it with a boolean variable that saves whether it
is a hairpin queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  common/mlx5: support remote PD and CTX
Michael Baum [Thu, 24 Feb 2022 23:25:08 +0000 (01:25 +0200)]
common/mlx5: support remote PD and CTX

Add option to probe common device using import CTX/PD functions instead
of create functions.
This option requires accepting the context FD and the PD handle as
devargs.

This sharing can be useful for applications that use PMD for only some
operations. For example, an app that generates queues itself and uses
PMD just to configure flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  common/mlx5: glue device and PD import
Michael Baum [Thu, 24 Feb 2022 23:25:07 +0000 (01:25 +0200)]
common/mlx5: glue device and PD import

Add support for rdma-core API to import device.
The API gets ibv_context file descriptor and returns an ibv_context
pointer that is associated with the given file descriptor.
Add also support for rdma-core API to import PD.
The API gets ibv_context and PD handle and returns a protection domain
(PD) that is associated with the given handle in the given context.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  common/mlx5: consider local functions as internal
Michael Baum [Thu, 24 Feb 2022 23:25:06 +0000 (01:25 +0200)]
common/mlx5: consider local functions as internal

The functions which are not explicitly marked as internal
were exported because the local catch-all rule was missing in the
version script.
After adding the missing rule, all local functions are hidden.
The function mlx5_get_device_guid is used in another library,
so it needs to be exported (as internal).

Because the local functions were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
An ABI exception is added for this library, considering that all
functions are either local or internal.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2 years ago  net/mlx5: support matching GRE optional fields
Sean Zhang [Fri, 25 Feb 2022 01:14:17 +0000 (03:14 +0200)]
net/mlx5: support matching GRE optional fields

This patch adds matching on the optional fields (checksum/key/sequence)
of the GRE header. Matching on the checksum and sequence fields
requires support from rdma-core with the misc5 capability and
tunnel_header 0-3.

For patterns without checksum and sequence specified, keep using misc for
matching as before, but for patterns with checksum or sequence, validate
capability first and then use misc5 for the matching.

Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  doc: add GRE option flow item to feature list
Ferruh Yigit [Fri, 25 Feb 2022 18:27:59 +0000 (18:27 +0000)]
doc: add GRE option flow item to feature list

'gre_option' flow item was missing in the feature list, adding it.

Fixes: f61490bdf218 ("ethdev: support GRE optional fields")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2 years ago  app/testpmd: fix build without drivers
Thomas Monjalon [Fri, 25 Feb 2022 15:26:53 +0000 (16:26 +0100)]
app/testpmd: fix build without drivers

When ixgbe and bnxt are disabled, compilation was failing:

app/test-pmd/cmdline.c:9396:11: error:
variable 'vf_rxmode' set but not used

Fixes: 4cfe399f6550 ("net/bnxt: support to set VF rxmode")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2 years ago  app/testpmd: fix raw encap of GENEVE option
Bing Zhao [Thu, 24 Feb 2022 07:02:14 +0000 (09:02 +0200)]
app/testpmd: fix raw encap of GENEVE option

The structure "rte_flow_item_geneve_opt" is not a protocol header of
geneve tunnel option from rfc8926. The field "data" is a pointer
which points to the actual variable-length option data. So the
structure is not packed.

There is 4 bytes hole before the pointer in a 64-bit system. The
option header is just 4 bytes. When using offsetof() to get the
fixed part's size of option header, the wrong value 8 was got. When
constructing the encap header, a wrong size and offset was used due
to this hole.

With this commit, the fixed part's size is calculated explicitly
based on all fields.
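
A self-contained illustration of the hole (mirroring the layout of
rte_flow_item_geneve_opt, not the testpmd code itself):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct geneve_opt_like {
        uint16_t option_class;
        uint8_t option_type;
        uint8_t option_len;
        uint32_t *data;  /* a 4-byte hole precedes this on 64-bit ABIs */
    };

    int main(void)
    {
        /* offsetof() counts the padding: prints 8, not 4 */
        printf("offsetof(data) = %zu\n",
               offsetof(struct geneve_opt_like, data));

        /* summing the fixed fields explicitly gives the wire size: 4 */
        printf("fixed header = %zu\n",
               sizeof(uint16_t) + 2 * sizeof(uint8_t));
        return 0;
    }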

Fixes: 55c074f3ba1d ("app/testpmd: support GENEVE option item")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/i40e: fix unintentional integer overflow
Steve Yang [Fri, 25 Feb 2022 02:39:47 +0000 (02:39 +0000)]
net/i40e: fix unintentional integer overflow

Cast 1 to type uint64_t to avoid overflow.

CID 375812 (#1 of 1):
Unintentional integer overflow (OVERFLOW_BEFORE_WIDEN)
overflow_before_widen: Potentially overflowing expression 1 << 2 * i + 1
with type int (32 bits, signed) is evaluated using 32-bit arithmetic, and
then used in a context that expects an expression of type uint64_t
(64 bits, unsigned).

Coverity issue: 375812
Fixes: 5fec01c35c49 ("net/i40e: support Linux VF to configure IRQ link list")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  net/ice/base: support E824S and E825 devices
Robin Zhang [Fri, 25 Feb 2022 02:00:14 +0000 (02:00 +0000)]
net/ice/base: support E824S and E825 devices

Add support for E824S and E825 family devices.

This will be documented later in the release notes, since the
devices are not yet mature enough to announce to users.

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2 years ago  common/cnxk: support CNF95xx B0 variant
Tomasz Duszynski [Thu, 24 Feb 2022 10:42:36 +0000 (11:42 +0100)]
common/cnxk: support CNF95xx B0 variant

Add CNF95xx B0 variant to the list of supported models.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: make inline inbound device usage as default
Vamsi Attunuru [Fri, 25 Feb 2022 06:54:45 +0000 (12:24 +0530)]
net/cnxk: make inline inbound device usage as default

Currently inline inbound device usage is not the default for
eventdev. This patch renames the force_inl_dev devarg to no_inl_dev
and enables the inline inbound device by default.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: check SQ node before setting BP config
Satha Rao [Fri, 25 Feb 2022 04:59:26 +0000 (23:59 -0500)]
common/cnxk: check SQ node before setting BP config

Validate sq_node and its parent before accessing their fields.
An SQ created without any associated TM node is a valid negative
case, so return success when stopping TM without an SQ node.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: enable packet marking callbacks
Satha Rao [Fri, 25 Feb 2022 04:59:25 +0000 (23:59 -0500)]
net/cnxk: enable packet marking callbacks

The cnxk platform supports red/yellow packet marking based on the
TM configuration. This patch adds hooks to enable/disable packet
marking for VLAN DEI, IP DSCP and IP ECN. Marking is enabled only
in scalar mode.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: enable packet marking
Satha Rao [Fri, 25 Feb 2022 04:59:24 +0000 (23:59 -0500)]
common/cnxk: enable packet marking

cnxk platforms support packet marking when TM is enabled with
valid shaper rates. VLAN DEI, IP ECN, or IP DSCP inside the
packet will be updated based on the selected mark flags.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: support IP reassembly
Vidya Sagar Velumuri [Thu, 24 Feb 2022 18:29:01 +0000 (23:59 +0530)]
net/cnxk: support IP reassembly

Added capability and support for inline inbound IP reassembly
in the cnxk driver. The IP reassembly offload is supported only
when the inline IPsec security offload is enabled.

In case the IP reassembly is incomplete, the mbufs are attached
in the mbuf dynamic field and a dynamic flag is set accordingly.

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: configure reassembly parameters
Vidya Sagar Velumuri [Thu, 24 Feb 2022 18:29:00 +0000 (23:59 +0530)]
common/cnxk: configure reassembly parameters

When reassembly is enabled by application, set corresponding
flags in SA during creation.

Provide ROC API to configure reassembly unit with active and
zombie limits and step size.

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: align prefetches to CN10K cache model
Pavan Nikhilesh [Thu, 24 Feb 2022 16:10:12 +0000 (21:40 +0530)]
net/cnxk: align prefetches to CN10K cache model

Align prefetches to the CN10K cache model for vWQE in Rx and Tx.
Move the mbuf->next NULL assignment to the Tx path and enable it
only when multi-segment offload is enabled, to reduce L1 pressure.
Add macros to detect corrupted mbuf->next values when
MEMPOOL_DEBUG is set.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: optimize Rx packet size extraction
Pavan Nikhilesh [Thu, 24 Feb 2022 16:10:11 +0000 (21:40 +0530)]
net/cnxk: optimize Rx packet size extraction

In vWQE mode, the mbuf address is calculated without using the
IOVA list.

Packet length can also be calculated by using NIX_PARSE_S by
which we can completely eliminate reading 2nd cache line
depending on the offloads enabled.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: support outbound soft expiry notification
Vamsi Attunuru [Thu, 24 Feb 2022 09:49:31 +0000 (15:19 +0530)]
net/cnxk: support outbound soft expiry notification

Add support for soft expiry notification mechanism in outbound
path by creating required number of ring buffers and a common poll
thread which polls for soft expiry events enqueued by microcode.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: extend log on model mismatch
Tomasz Duszynski [Thu, 24 Feb 2022 10:34:22 +0000 (11:34 +0100)]
common/cnxk: extend log on model mismatch

Model is uniquely identified by 4 numbers. Print them all in case
model being populated is not on a list of known models. This makes
debugging a bit easier.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jakub Palider <jpalider@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: remove unused files after template rework
Nithin Dabilpuram [Thu, 24 Feb 2022 10:13:45 +0000 (15:43 +0530)]
net/cnxk: remove unused files after template rework

Remove unused files that were left over after Rx and Tx template
function rework.

Fixes: 5169508a68fa ("net/cnxk: add cn9k template Rx functions to build")
Fixes: dd8c20eee472 ("net/cnxk: add cn9k template Tx functions to build")
Fixes: be294749a12a ("net/cnxk: add cn10k template Rx functions to build")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/cnxk: fix RSS RETA table update
Rakesh Kudurumalla [Thu, 24 Feb 2022 08:35:28 +0000 (14:05 +0530)]
net/cnxk: fix RSS RETA table update

The RSS RETA table is corrupted during rte_eth_dev_rss_reta_update().
Fix it by restoring the previous table entries before updating.

Fixes: 00242a687de6 ("net/cnxk: support RETA and RSS hash")
Cc: stable@dpdk.org
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/mlx5: add header reformat HW steering action
Suanming Mou [Thu, 24 Feb 2022 13:40:51 +0000 (15:40 +0200)]
net/mlx5: add header reformat HW steering action

The HW steering header reformat action can work in bulk mode. In
this case, when the table is created, a bulk of header reformat
actions is allocated at the low level. Afterwards, when creating a
flow, simply specifying the action index in the bulk and the
encapsulation data for the action is enough.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add indirect HW steering action
Suanming Mou [Thu, 24 Feb 2022 13:40:50 +0000 (15:40 +0200)]
net/mlx5: add indirect HW steering action

HW steering can support indirect actions as well. With an indirect
action, the flow can be created with a more flexible shared RSS
action selection. This allows one action template to be reused with
different RSS actions.

This commit adds the flow queue operation callbacks for:
rte_flow_async_action_handle_create();
rte_flow_async_action_handle_destroy();
rte_flow_async_action_handle_update();

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add HW mark action
Suanming Mou [Thu, 24 Feb 2022 13:40:49 +0000 (15:40 +0200)]
net/mlx5: add HW mark action

The mark action is covered by the tag action internally. When it is
added, the HW will add a tag to the packet. The mark value can be
set as fixed or dynamic, as the action mask indicates.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add queue and RSS HW steering action
Suanming Mou [Thu, 24 Feb 2022 13:40:48 +0000 (15:40 +0200)]
net/mlx5: add queue and RSS HW steering action

This commit adds the queue and RSS actions. Similar to the jump
action, dynamic ones will be added to the action construct list.

Since the queue and RSS actions in a template should not be
destroyed during port restart, the actions are created with a
standalone indirect table, as an indirect action does. When the
port stops, the indirect table is detached from the action; when
the port starts, it is attached back to the action.

One more change is made to accelerate action creation. Currently
the mlx5_hrxq_get() function returns the object index instead of
the object pointer. This introduces an extra index-to-object
conversion by calling mlx5_ipool_get() in most cases, and that
extra conversion hurts multi-thread performance since
mlx5_ipool_get() takes a global lock internally. As the hash Rx
queue object itself also contains the index, returning the object
directly achieves better performance without the global lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add flow jump action
Suanming Mou [Thu, 24 Feb 2022 13:40:47 +0000 (15:40 +0200)]
net/mlx5: add flow jump action

The jump action connects different levels of flow tables and allows
packet handling in a chain of flows.

A new action construct data struct is also added in this commit to
help handle not only the dynamic jump action but also the other
generic dynamic actions. An action with an empty mask configuration
is a dynamic action, and the dedicated action will be created with
the flow action configuration during flow creation. In that case,
the action will be appended to the table template's action list
during table creation. When creating the flows, traverse the action
list, pick the dynamic action configuration details from the flow
actions as the action construct data struct describes, then create
the dedicated dynamic actions.

This commit adds the jump action and the generic dynamic action
construct mechanism.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add flow flush
Suanming Mou [Thu, 24 Feb 2022 13:40:46 +0000 (15:40 +0200)]
net/mlx5: add flow flush

In case the port is being stopped, all created flows should be
flushed. This commit adds the flow flush helper function.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add basic flow queue operation
Suanming Mou [Thu, 24 Feb 2022 13:40:45 +0000 (15:40 +0200)]
net/mlx5: add basic flow queue operation

The HW steering uses an async queue-based flow rules management
mechanism. The matcher and part of the actions have been prepared
during flow table creation. Some remaining actions will be
constructed during flow creation if needed.

A flow postpone attribute bit describes if flow management
should be applied to the HW directly. An extra push function
is provided to force push all the cached flows to the HW.

Once the flow has been applied to the HW, the pull function
will be called to get the queued creation/destruction flows.

The DR rule flow memory is represented in the PMD layer instead of
being allocated from the HW steering layer. While destroying the
flow, the flow rule memory can only be freed after the CQE is
received.

The HW queue job descriptor is introduced to convey the flow
information and operation type between flow insertion/destruction
and the pull function.

This commit adds the basic flow queue operation for:
rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add table management
Suanming Mou [Thu, 24 Feb 2022 13:40:44 +0000 (15:40 +0200)]
net/mlx5: add table management

Flow table is a group of flows with the same matching criteria
and the same actions defined for them. The table defines rules
that have the same matching fields but with different matching
values. For example, matching on 5 tuple, the table will be
(IPv4 source + IPv4 dest + s_port + d_port + next_proto)
while the values for each rule will be different.

The templates' relevant matching criteria and action instances
will be created in the table creation and saved in the table.
As table attributes indicate the supported flow number, the flow
memory will also be allocated at the same time.

This commit adds the table management functions.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add action template management
Suanming Mou [Thu, 24 Feb 2022 13:40:43 +0000 (15:40 +0200)]
net/mlx5: add action template management

The action template holds a list of action types that will be used
together on the same rule. The template's action instances will be
created only when the template is bound to the dedicated group, and
the created actions will be saved to each individual group for best
performance. The actions in a group will not be shared with each
other unless shared actions are specified.

This commit adds the action template management which stores the
flow action template.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add pattern template management
Suanming Mou [Thu, 24 Feb 2022 13:40:42 +0000 (15:40 +0200)]
net/mlx5: add pattern template management

The pattern template defines flows that have the same matching
fields but with different matching values.
For example, matching on 5 tuple TCP flow, the template will be
(eth(null) + IPv4(source + dest) + TCP(s_port + d_port) while
the values for each rule will be different.

Since a pattern template can be used in different domains, the items
are only cached at the pattern template creation stage; when the
template is bound to a dedicated table, the HW criteria will be
created and saved to the table. The pattern templates can be used by
multiple tables, but different tables create the same criteria and
will not share the matcher between each other, in order to have
better performance.

This commit adds pattern template management.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add port flow configuration
Suanming Mou [Thu, 24 Feb 2022 13:40:41 +0000 (15:40 +0200)]
net/mlx5: add port flow configuration

Hardware steering is the backend supporting the rte_flow_async API
in the mlx5 PMD. The port configuration function creates the queues
and the needed flow management resources.

The PMD layer configuration function allocates the queues' context
and per-queue job descriptor pool. The job descriptor pool size
is equal to the queue size, and the job descriptors will be popped
from pool with LIFO strategy to convey the flow information during
flow insertion/destruction. Then, while polling the queued operation
result, the flow information will be extracted from the job descriptor
and the descriptor will be pushed back to the LIFO pool.

The commit creates the flow port queues and the job descriptor pools.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: introduce hardware steering enable routine
Suanming Mou [Thu, 24 Feb 2022 13:40:40 +0000 (15:40 +0200)]
net/mlx5: introduce hardware steering enable routine

The new hardware steering engine relies on using dedicated steering WQEs
instead of writing to the low-level steering table entries directly.
In the first implementation the hardware steering engine supports the
new queue based Flow API, the existing synchronous non-queue based Flow
API is not supported.

A new dv_flow_en value 2 is added to manage mlx5 PMD steering engine:

dv_flow_en    rte_flow API    rte_flow_async API
------------------------------------------------
    0         support         not support
    1         support         not support
    2         not support     support

This commit introduces the extra dv_flow_en = 2 to select the new
flow initialization and management operation routines.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: add HW steering low-level abstract stub
Suanming Mou [Thu, 24 Feb 2022 13:40:39 +0000 (15:40 +0200)]
net/mlx5: add HW steering low-level abstract stub

The HW steering low-level implementation will be added later in
another patch series. To avoid linkage issues, an abstract stub
replacement is provided for now.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: introduce hardware steering operation
Suanming Mou [Thu, 24 Feb 2022 13:40:38 +0000 (15:40 +0200)]
net/mlx5: introduce hardware steering operation

The Connect-X steering is a lookup hardware mechanism that accesses flow
tables, matches packets to the rules, and performs specified actions.
Historically, mlx5 PMD implements several software engines to manage
steering hardware facility:

   - FW Steering - Verbs/Direct Verbs, uses FW calls to manage flows
   - SW Steering - DevX/mlx5dv, uses WQEs to access table memory directly

However, there are still some disadvantages:

   - performance is limited: we must invoke firmware either to
     manage the entire flow, or to handle some internal steering objects

   - organizing and preparing the flow infrastructure (actions, matchers,
     groups, etc.) at flow insertion time is sure to cause slow flow
     insertion

   - security: exposing the low-level steering entries directly to
     userspace may cause security risks

A new hardware WQE based steering operation with codename "HW Steering"
is going to be introduced to get rid of the security risks. And it will
take advantage of the recently new introduced async queue-based rte_flow
APIs to prepare everything in advance to achieve high insertion rate.

In this new HW steering engine, the original SW steering rte_flow
API will not be supported in the first implementation; only the new
async queue-based flow operations are going to be supported. A new
steering mode parameter for dv_flow_en will be introduced, and the
user will be able to engage the new steering engine.

This commit adds the basic driver operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/cnxk: add SoC specific PTP timestamp read
Rakesh Kudurumalla [Thu, 24 Feb 2022 08:02:16 +0000 (13:32 +0530)]
net/cnxk: add SoC specific PTP timestamp read

Timestamp resolution for incoming and outgoing packets is different
on CN10K and CN9K. Added a SoC-specific callback to retrieve the
timestamp in the correct format when read by the application.

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  common/cnxk: add CN10K specific xstats
Rahul Bhansali [Wed, 23 Feb 2022 12:51:04 +0000 (18:21 +0530)]
common/cnxk: add CN10K specific xstats

Add cn10k specific Rx xstats of bandwidth profile,
CPT and IPsec counters.

Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years ago  net/mlx5: support wait on time in Tx
Viacheslav Ovsiienko [Thu, 24 Feb 2022 10:55:01 +0000 (12:55 +0200)]
net/mlx5: support wait on time in Tx

The hardware since ConnectX-7 supports waiting on a specified moment
of time with the newly introduced wait descriptor. A timestamp can
be placed directly into the descriptor and pushed to the send queue.
Once the hardware encounters the wait descriptor, the queue
operation is suspended till the specified moment of time. This patch
updates the Tx datapath to handle this new hardware wait capability.

PMD documentation and release notes updated accordingly.
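
A hedged application-side sketch of scheduling a packet with the
generic mbuf dynamic timestamp field and flag from rte_mbuf_dyn.h
(their registration by the PMD is assumed to be enabled by the
wait-on-time/tx_pp setup):

    #include <rte_bitops.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int ts_offset;         /* dynamic field offset */
    static uint64_t ts_flag_mask; /* dynamic flag mask */

    int init_tx_scheduling(void)
    {
        int flag = rte_mbuf_dynflag_lookup(
                RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
        ts_offset = rte_mbuf_dynfield_lookup(
                RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
        if (flag < 0 || ts_offset < 0)
            return -1; /* field/flag not registered */
        ts_flag_mask = RTE_BIT64(flag);
        return 0;
    }

    void schedule_tx(struct rte_mbuf *m, uint64_t when)
    {
        *RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = when;
        m->ol_flags |= ts_flag_mask; /* ask HW to wait until 'when' */
    }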

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  net/mlx5: configure Tx queue with send on time offload
Viacheslav Ovsiienko [Thu, 24 Feb 2022 10:55:00 +0000 (12:55 +0200)]
net/mlx5: configure Tx queue with send on time offload

The wait-on-time configuration flag is copied to the Tx queue
structure for performance considerations. The timestamp mask is
prepared and stored in the queue structure as well.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  common/mlx5: check send on time capability
Viacheslav Ovsiienko [Thu, 24 Feb 2022 10:54:59 +0000 (12:54 +0200)]
common/mlx5: check send on time capability

The patch provides a check for the send-scheduling-on-time hardware
capability. With this capability enabled, the hardware is able to
handle Wait WQEs with directly specified timestamp values. No Clock
Queue is needed anymore to handle send scheduling.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2 years ago  app/testpmd: add async indirect actions operations
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:40 +0000 (05:02 +0200)]
app/testpmd: add async indirect actions operations

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for operations dequeue.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add flow queue pull operation
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:39 +0000 (05:02 +0200)]
app/testpmd: add flow queue pull operation

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add flow queue push operation
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:38 +0000 (05:02 +0200)]
app/testpmd: add flow queue push operation

Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add async flow create/destroy operations
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:37 +0000 (05:02 +0200)]
app/testpmd: add async flow create/destroy operations

Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add flow table management
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:36 +0000 (05:02 +0200)]
app/testpmd: add flow table management

Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add flow template management
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:35 +0000 (05:02 +0200)]
app/testpmd: add flow template management

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  app/testpmd: add flow engine configuration
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:34 +0000 (05:02 +0200)]
app/testpmd: add flow engine configuration

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for the Flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2 years ago  ethdev: bring in async indirect actions operations
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:33 +0000 (05:02 +0200)]
ethdev: bring in async indirect actions operations

Queue-based flow rules management mechanism is suitable
not only for flow rules creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years ago  ethdev: bring in async queue-based flow rules operations
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:32 +0000 (05:02 +0200)]
ethdev: bring in async queue-based flow rules operations

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue must be issued from the same thread. It is the
application's responsibility to synchronize access if multiple threads
use the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare the CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.
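
As a minimal sketch, assuming the rte_flow_async_create()/
rte_flow_pull() signatures introduced by this change (the helper name
and the busy-wait loop are illustrative only):

  #include <rte_flow.h>

  /* Enqueue a rule on flow queue 0 and poll for its completion. */
  static int
  insert_rule_async(uint16_t port_id, struct rte_flow_template_table *tbl,
                    const struct rte_flow_item pattern[],
                    const struct rte_flow_action actions[])
  {
      const struct rte_flow_op_attr op_attr = { .postpone = 0 };
      struct rte_flow_op_result res[1];
      struct rte_flow_error error;
      struct rte_flow *flow;
      int n;

      flow = rte_flow_async_create(port_id, 0, &op_attr, tbl,
                                   pattern, 0,  /* pattern template idx */
                                   actions, 0,  /* actions template idx */
                                   NULL, &error);
      if (flow == NULL)
          return -1;

      /* The create call returned immediately; collect the operation
       * status from the same queue. A real application would keep
       * 'flow' for a later rte_flow_async_destroy(). */
      do {
          n = rte_flow_pull(port_id, 0, res, 1, &error);
      } while (n == 0);

      return (n == 1 && res[0].status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
  }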

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years agoethdev: add flow item/action templates
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:31 +0000 (05:02 +0200)]
ethdev: add flow item/action templates

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
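
A minimal sketch of this three-step setup, assuming the template API
signatures introduced by this change (the match-any ETH pattern, the
DROP action and all attribute values are illustrative):

  #include <rte_flow.h>

  /* Create one pattern template, one actions template, and a table
   * bound to both, sized for 64 rules in group 9. */
  static struct rte_flow_template_table *
  create_drop_table(uint16_t port_id, struct rte_flow_error *err)
  {
      /* Attributes left at defaults for brevity. */
      const struct rte_flow_pattern_template_attr pt_attr = { 0 };
      const struct rte_flow_item pattern[] = {  /* mask: any ETH frame */
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      const struct rte_flow_actions_template_attr at_attr = { 0 };
      const struct rte_flow_action actions[] = {  /* action list: drop */
          { .type = RTE_FLOW_ACTION_TYPE_DROP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      const struct rte_flow_template_table_attr tbl_attr = {
          .flow_attr = { .group = 9, .priority = 4, .ingress = 1 },
          .nb_flows = 64,  /* max rules, fixed at creation time */
      };
      struct rte_flow_pattern_template *pt;
      struct rte_flow_actions_template *at;

      pt = rte_flow_pattern_template_create(port_id, &pt_attr,
                                            pattern, err);
      /* For a bare DROP the action list doubles as its own mask. */
      at = rte_flow_actions_template_create(port_id, &at_attr,
                                            actions, actions, err);
      if (pt == NULL || at == NULL)
          return NULL;
      return rte_flow_template_table_create(port_id, &tbl_attr,
                                            &pt, 1, &at, 1, err);
  }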

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years agoethdev: introduce flow engine configuration
Alexander Kozyrev [Wed, 23 Feb 2022 03:02:30 +0000 (05:02 +0200)]
ethdev: introduce flow engine configuration

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, a PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand, so they can be used at a later stage without costly
allocations. A PMD may honor only a subset of the hints, ignore unused
ones, or fail if the requested configuration is not supported.

rte_flow_info_get() is available to retrieve information about the
supported pre-configurable resources. Both functions must be called
before any other use of the flow API engine, as sketched below.
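
A minimal sketch of this startup sequence, assuming the signatures of
recent DPDK releases, where info retrieval fills both a port info and
a queue info structure (the queue count and sizes are illustrative):

  #include <rte_common.h>
  #include <rte_flow.h>

  /* Pre-allocate flow engine resources before any other Flow API call. */
  static int
  configure_flow_engine(uint16_t port_id)
  {
      struct rte_flow_port_info port_info;
      struct rte_flow_queue_info queue_info;
      struct rte_flow_error error;
      const struct rte_flow_port_attr port_attr = { 0 };
      const struct rte_flow_queue_attr *attr_list[8];
      struct rte_flow_queue_attr queue_attr;
      uint16_t nb_queue, i;
      int ret;

      /* Query the pre-configurable resources the PMD supports. */
      ret = rte_flow_info_get(port_id, &port_info, &queue_info, &error);
      if (ret < 0)
          return ret;

      /* Request 8 queues of 256 entries, capped to the PMD limits. */
      queue_attr.size = RTE_MIN(256u, queue_info.max_size);
      nb_queue = RTE_MIN(8u, port_info.max_nb_queues);
      for (i = 0; i < nb_queue; i++)
          attr_list[i] = &queue_attr;

      return rte_flow_configure(port_id, &port_attr, nb_queue,
                                attr_list, &error);
  }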

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2 years agonet/cnxk: fix build with GCC 12
Rakesh Kudurumalla [Wed, 23 Feb 2022 09:55:40 +0000 (15:25 +0530)]
net/cnxk: fix build with GCC 12

Resolve the following compilation error with GCC 12:
error: storing the address of local variable message in *error.message
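
The error class is a local buffer's address escaping through an out
parameter. A hypothetical reduction (not the actual cnxk code; the
struct and function names are invented for illustration):

  #include <string.h>

  struct err_info {
      const char *message;
  };

  /* Buggy: 'message' lives on the stack, so the stored pointer
   * dangles as soon as validate() returns. GCC 12 diagnoses this. */
  static int
  validate(struct err_info *error)
  {
      char message[64];
      strcpy(message, "invalid meter policy");
      error->message = message;
      return -1;
  }

  /* Fixed: point at a string literal with static storage duration. */
  static int
  validate_fixed(struct err_info *error)
  {
      error->message = "invalid meter policy";
      return -1;
  }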

Fixes: 26b034f78ca7 ("net/cnxk: support to validate meter policy")
Cc: stable@dpdk.org
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: add option to override outbound inline SA IV
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:11 +0000 (01:05 +0530)]
net/cnxk: add option to override outbound inline SA IV

Add an option to override the outbound inline SA IV, for debug
purposes, via an environment variable. The user can set it as:
export CN10K_ETH_SEC_IV_OVR="0x0, 0x0,..."
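
A hypothetical sketch of how such a comma-separated list could be
parsed from the environment (the helper name and buffer handling are
this example's, not the driver's):

  #include <stdint.h>
  #include <stdlib.h>

  /* Read CN10K_ETH_SEC_IV_OVR into 'iv'; returns bytes parsed. */
  static int
  parse_iv_override(uint8_t *iv, size_t max_len)
  {
      const char *env = getenv("CN10K_ETH_SEC_IV_OVR");
      char *end;
      size_t i = 0;

      if (env == NULL)
          return 0;  /* no override requested */
      while (*env != '\0' && i < max_len) {
          iv[i++] = (uint8_t)strtoul(env, &end, 0);
          if (*end != ',')
              break;
          env = end + 1;
      }
      return (int)i;
  }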

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: add devargs for min-max SPI
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:10 +0000 (01:05 +0530)]
net/cnxk: add devargs for min-max SPI

Add support for specifying the inline inbound SPI range via devargs,
instead of only the maximum SPI value with an implied range of 0..max.
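
For illustration only, and assuming the devargs are named
ipsec_in_min_spi and ipsec_in_max_spi (see the cnxk guide for the
authoritative names), an invocation might look like:

  dpdk-testpmd -a 0002:02:00.0,ipsec_in_min_spi=256,ipsec_in_max_spi=4096 -- -i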

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: enable flow control by default on device configure
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:09 +0000 (01:05 +0530)]
net/cnxk: enable flow control by default on device configure

Enable flow control by default at device configuration time
instead of inheriting the kernel's behaviour.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: enable packet pool tail drop
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:08 +0000 (01:05 +0530)]
net/cnxk: enable packet pool tail drop

Enable packet pool tail drop on the RQ when inbound security is not
enabled. This is only part of the configuration: it is a no-op if
tail drop is not enabled on NPA_AURA_CTX_S, and tail drop on the
packet pool AURA is enabled only when that aura is used by the
inline device RQ.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: use NPA batch burst free for meta buffers
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:07 +0000 (01:05 +0530)]
net/cnxk: use NPA batch burst free for meta buffers

Currently meta buffers are freed in bursts of one LMT line,
i.e. 15 pointers. Instead, free them in bursts of 16 LMT lines
(16 x 15 = 240 pointers) for better performance.

Also mark mempool objects as get/put in the places this was missing.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: fix inline IPsec security error handling
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:06 +0000 (01:05 +0530)]
net/cnxk: fix inline IPsec security error handling

Use a raw mbuf free on inline security errors to simulate the
HW NPA free instead of calling rte_pktmbuf_free(). This is
needed because the callback will not be invoked from a
DPDK lcore.

Fixes: 69daa9e5022b ("net/cnxk: support inline security setup for cn10k")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: realloc inline dev XAQ for security
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:05 +0000 (01:05 +0530)]
net/cnxk: realloc inline dev XAQ for security

Reallocate the inline device XAQ with the new packet pool when Rx/Tx
security is enabled, as the XAQ must be large enough to hold all mbufs
should inline outbound report errors for all of them.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: register callback early to handle initial packets
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:04 +0000 (01:05 +0530)]
net/cnxk: register callback early to handle initial packets

Register the callback early to handle initial error packets from
the inline device.

Fixes: 69daa9e5022b ("net/cnxk: support inline security setup for cn10k")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2 years agonet/cnxk: fix inline device RQ tag mask
Nithin Dabilpuram [Tue, 22 Feb 2022 19:35:03 +0000 (01:05 +0530)]
net/cnxk: fix inline device RQ tag mask

Fix the inline device RQ tag mask so that packets with receive errors
reach the callback handler as ETHDEV-type packets and their buffers
can be freed. Currently only IPsec-denied packets get the right
tag mask.

Fixes: ee48f711f3b0 ("common/cnxk: support NIX inline inbound and outbound setup")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>