dpdk.git
3 years ago  common/sfc_efx/base: add ingress m-port RxQ flag
Igor Romanov [Fri, 2 Jul 2021 08:39:35 +0000 (11:39 +0300)]
common/sfc_efx/base: add ingress m-port RxQ flag

Add a flag to request support for ingress m-port on an RxQ.
Implement it only for Riverhead; other families return an error
if the flag is set.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
3 years ago  net/sfc: prepare for internal Tx queue
Igor Romanov [Fri, 2 Jul 2021 08:39:34 +0000 (11:39 +0300)]
net/sfc: prepare for internal Tx queue

Make the software index of a Tx queue separate from the ethdev queue
index. When an ethdev TxQ is accessed in ethdev callbacks, an explicit
ethdev queue index is used.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
3 years ago  net/sfc: explicitly control IRQ used for Rx queues
Andrew Rybchenko [Fri, 2 Jul 2021 08:39:33 +0000 (11:39 +0300)]
net/sfc: explicitly control IRQ used for Rx queues

Interrupt support makes assumptions about the interrupt numbers used
for LSC and Rx queues: the first interrupt is used for LSC, and
subsequent interrupts are used for Rx queues.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
3 years ago  common/sfc_efx/base: support custom EvQ to IRQ mapping
Andrew Rybchenko [Fri, 2 Jul 2021 08:39:32 +0000 (11:39 +0300)]
common/sfc_efx/base: support custom EvQ to IRQ mapping

Custom mapping is actually supported for EF10 and EF100 families only.

A driver (e.g. a DPDK PMD) may need to customize the mapping of EvQs
to interrupts if, for example, extra EvQs are used for housekeeping
in polling or wake-up (via another EvQ) mode.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
3 years ago  common/sfc_efx/base: separate target EvQ and IRQ config
Andrew Rybchenko [Fri, 2 Jul 2021 08:39:31 +0000 (11:39 +0300)]
common/sfc_efx/base: separate target EvQ and IRQ config

The target EvQ and the IRQ number are specified in the same location
in the MCDI request. The value is treated as an IRQ number if the
event queue is interrupting (the corresponding flag is set) and as a
target event queue otherwise.

However, it is clearer to separate the two at the helper API level.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
3 years ago  net/sfc: do not enable interrupts on internal Rx queues
Andrew Rybchenko [Fri, 2 Jul 2021 08:39:30 +0000 (11:39 +0300)]
net/sfc: do not enable interrupts on internal Rx queues

The rxq_intr flag requests interrupt mode support for ethdev Rx queues.
There are no internal Rx queues yet.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
3 years ago  net/sfc: prepare for internal Rx queue
Igor Romanov [Fri, 2 Jul 2021 08:39:29 +0000 (11:39 +0300)]
net/sfc: prepare for internal Rx queue

Make the software index of an Rx queue separate from the ethdev queue
index. When an ethdev RxQ is accessed in ethdev callbacks, an explicit
ethdev queue index is used.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
3 years ago  examples/pipeline: fix build
Ali Alnubani [Mon, 12 Jul 2021 07:46:36 +0000 (10:46 +0300)]
examples/pipeline: fix build

This patch fixes the following build failures seen on Ubuntu 16.04
with gcc 5.4.0 because of uninitialized variables:
...
examples/pipeline/cli.c:1559:11: error: 'weight_val' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:1545:13: error: 'member_id_val' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:1538:12: error: 'group_id_val' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2189:2: error: 'idx1' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2179:43: error: 'idx0' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2265:2: error: 'idx1' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2248:43: error: 'idx0' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2358:2: error: 'idx1' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
...
examples/pipeline/cli.c:2325:43: error: 'idx0' may be used
  uninitialized in this function [-Werror=maybe-uninitialized]
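
A typical remedy for this class of warning, sketched below with the
variable names taken from the messages above, is to give the values a
defined initial state before the conditional parsing paths that may
leave them unset (the actual patch may instead restructure the parsing
logic):

/* hedged sketch: initialize before the conditional parsing paths */
uint32_t group_id_val = 0, member_id_val = 0, weight_val = 0;
uint32_t idx0 = 0, idx1 = 0;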

Fixes: 598fe0dd0d8e ("examples/pipeline: support selector table")

Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
3 years ago  event/cnxk: support vectorized Tx event fast path
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:07 +0000 (14:32 +0530)]
event/cnxk: support vectorized Tx event fast path

Add Tx event vector fastpath, integrate event vector Tx routine
into Tx burst.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  event/cnxk: support vectorized Rx event fast path
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:06 +0000 (14:32 +0530)]
event/cnxk: support vectorized Rx event fast path

Add Rx event vector fastpath to convert HW defined metadata into
rte_mbuf and rte_event_vector.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  event/cnxk: support vectorized Rx adapter
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:05 +0000 (14:32 +0530)]
event/cnxk: support vectorized Rx adapter

Add event vector support for cnxk event Rx adapter, add control path
APIs to get vector limits and ability to configure event vectorization
on a given Rx queue.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  event/cnxk: support Tx adapter fast path
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:04 +0000 (14:32 +0530)]
event/cnxk: support Tx adapter fast path

Add support for event eth Tx adapter fastpath operations.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  event/cnxk: support Tx adapter
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:03 +0000 (14:32 +0530)]
event/cnxk: support Tx adapter

Add support for event eth Tx adapter.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
3 years ago  event/cnxk: support Rx adapter fast path
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:02 +0000 (14:32 +0530)]
event/cnxk: support Rx adapter fast path

Add support for event eth Rx adapter fastpath operations.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  event/cnxk: support Rx adapter
Pavan Nikhilesh [Wed, 14 Jul 2021 09:02:01 +0000 (14:32 +0530)]
event/cnxk: support Rx adapter

Add support for event eth Rx adapter.
Resize the cn10k workslot fastpath structure to fit in a 64B cacheline.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/iavf: fix bandwidth unit in TM capability query
Ting Xu [Thu, 15 Jul 2021 10:36:06 +0000 (18:36 +0800)]
net/iavf: fix bandwidth unit in TM capability query

In IAVF TM node capability querying, the unit of bandwidth is Kbps,
which is not correct according to the TM specification. Change the unit
to bytes per second. Refine some unclear comments as well.
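
For reference, a rate given in kilobits per second converts to bytes
per second as:

rate[Bps] = rate[Kbps] * 1000 / 8

so, for example, 8000 Kbps corresponds to 1,000,000 bytes per second.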

Fixes: 44d0a720a538 ("net/iavf: query QoS capabilities and set queue TC mapping")

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/ice/base: support MPLS ethertype switch filter
Alvin Zhang [Tue, 13 Jul 2021 02:35:30 +0000 (10:35 +0800)]
net/ice/base: support MPLS ethertype switch filter

Add MPLS training packet and offsets.
Add check to identify MPLS ethertype filters.

For example:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 \
         type is 0x8847 / end actions queue index 2 / end

This flow will result in all matched ingress packets being
forwarded to queue 2.

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/ice: fix ESP flow director with SPI as input set
Simei Su [Tue, 13 Jul 2021 02:10:24 +0000 (10:10 +0800)]
net/ice: fix ESP flow director with SPI as input set

FDIR can't work when SPI is used as the input set for both ESP over IP
and ESP over UDP flows. This patch fixes the issue by adding the
corresponding input set for ESP over IP and ESP over UDP when parsing
the input set. It also adds an input set bit for NAT_T_ESP to
distinguish ESP over IP from ESP over UDP.

Fixes: 70feafc1a3f2 ("net/ice: support ESP/NATT flow director to match outer IP")

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/ice/base: revert change of first profile mask
Wenjun Wu [Tue, 13 Jul 2021 01:51:04 +0000 (09:51 +0800)]
net/ice/base: revert change of first profile mask

The segmentation fault mentioned in the commit below is related to
another root cause that is still under investigation.
This reverts the patch below, since it may carry potential risk and
side effects if the first profile mask is set to 0.

Fixes: 148fdf2d3537 ("net/ice/base: fix first profile mask")
Cc: stable@dpdk.org
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/i40e: replace SMP barrier with thread fence in Rx
Joyce Kong [Tue, 6 Jul 2021 06:54:04 +0000 (01:54 -0500)]
net/i40e: replace SMP barrier with thread fence in Rx

Replace the SMP barrier with an atomic thread fence for the i40e
HW ring scan, where there is no synchronization point.
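
A rough illustration of the kind of change (not a quote of the patch):

/* before: SMP read barrier after scanning the descriptor DD bits */
rte_smp_rmb();
/* after: equivalent ordering expressed as a C11-style acquire fence */
rte_atomic_thread_fence(__ATOMIC_ACQUIRE);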

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/iavf: support default RSS for IP fragment
Wenjun Wu [Mon, 12 Jul 2021 08:27:30 +0000 (16:27 +0800)]
net/iavf: support default RSS for IP fragment

This patch adds default RSS support for IPv4 and IPv6 fragment packets.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/iavf: support RSS for GTPoGRE
Lingyu Liu [Wed, 7 Jul 2021 12:57:52 +0000 (12:57 +0000)]
net/iavf: support RSS for GTPoGRE

Support AVF RSS for the innermost header of GTPoGRE packets. RSS is
supported based on the innermost IP src + dst address and TCP/UDP
src + dst port.

Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/iavf: support flow director for GTPoGRE
Lingyu Liu [Wed, 7 Jul 2021 12:57:51 +0000 (12:57 +0000)]
net/iavf: support flow director for GTPoGRE

Support AVF FDIR for the inner header of GTPoGRE tunnel packets.
Only patterns without an innermost L3/L4 header support FDIR on the
outer L3 src/dst and TEID/QFI.

+------------------------------------+-------------------------------+
|                Pattern             |            Input Set          |
+------------------------------------+-------------------------------+
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv4    |inner: src/dst ip              |
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv4/udp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv4/tcp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv6    |inner: src/dst ip              |
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv6/udp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv4/gtpu/(eh/)ipv6/tcp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv4    |inner: src/dst ip              |
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv4/udp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv4/tcp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv6    |inner: src/dst ip              |
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv6/udp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv6/gtpu/(eh/)ipv6/tcp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv4    |inner: src/dst ip              |
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv4/udp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv4/tcp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv6    |inner: src/dst ip              |
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv6/udp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv4/gtpu/(eh/)ipv6/tcp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv4    |inner: src/dst ip              |
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv4/udp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv4/tcp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv6    |inner: src/dst ip              |
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv6/udp|inner: src/dst ip, src/dst port|
|eth/ipv6/gre/ipv6/gtpu/(eh/)ipv6/tcp|inner: src/dst ip, src/dst port|
|eth/ipv4/gre/ipv4/gtpu(/eh)         |outer: src/dst ip, teid(,qfi)  |
|eth/ipv4/gre/ipv6/gtpu(/eh)         |outer: src/dst ip, teid(,qfi)  |
|eth/ipv6/gre/ipv4/gtpu(/eh)         |outer: src/dst ip, teid(,qfi)  |
|eth/ipv6/gre/ipv6/gtpu(/eh)         |outer: src/dst ip, teid(,qfi)  |
+------------------------------------+-------------------------------+

Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/iavf: support flow pattern for GTPoGRE
Lingyu Liu [Wed, 7 Jul 2021 12:57:50 +0000 (12:57 +0000)]
net/iavf: support flow pattern for GTPoGRE

Add GTPoGRE pattern support for AVF FDIR and RSS.

Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
3 years ago  net/bnxt: update CFA resource types
Ajit Khaparde [Thu, 15 Jul 2021 19:29:08 +0000 (12:29 -0700)]
net/bnxt: update CFA resource types

Update cfa_resource_types.h to add a new entry for compatibility with FW.

Signed-off-by: Shuanglin Wang <shuanglin.wang@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
3 years ago  net/bnxt: clear cached statistics
Kalesh AP [Tue, 13 Jul 2021 13:34:13 +0000 (19:04 +0530)]
net/bnxt: clear cached statistics

As part of the workaround introduced in commit 219842b9990c, the
driver caches the last stats values read from the hardware. But the
cache is not cleared during the clear-stats operation, which results
in stale stats values being reported when reading the stats after the
clear operation.
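
Illustratively (structure and field names are hypothetical, not taken
from the driver), the stats-reset path would also have to drop the
cached snapshot:

/* in the stats reset handler: clear the cached HW readings as well */
memset(bp->prev_rx_ring_stats, 0,
       sizeof(*bp->prev_rx_ring_stats) * bp->rx_cp_nr_rings);
memset(bp->prev_tx_ring_stats, 0,
       sizeof(*bp->prev_tx_ring_stats) * bp->tx_cp_nr_rings);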

Fixes: 219842b9990c ("net/bnxt: workaround spurious zero stats in Thor")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
3 years ago  net/bnxt: handle pause storm event
Somnath Kotur [Mon, 12 Jul 2021 08:04:35 +0000 (13:34 +0530)]
net/bnxt: handle pause storm event

FW has been modified to send a new async event when it detects
a pause storm. Register for this new event and log it upon receipt.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
3 years ago  net/bnxt: refactor async event handling
Somnath Kotur [Mon, 12 Jul 2021 08:04:34 +0000 (13:34 +0530)]
net/bnxt: refactor async event handling

Store the async event completion data1 and data2 in separate variables
at the start of the function before the switch case for the different
events so they can be used by any of the event handlers.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
3 years ago  net/bnxt: inform firmware about host MTU
Kalesh AP [Mon, 12 Jul 2021 08:04:33 +0000 (13:34 +0530)]
net/bnxt: inform firmware about host MTU

This enables device firmware to respond appropriately to BMC queries
about the driver's configured MTU.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
3 years ago  net/bnxt: update HSI structure
Kalesh AP [Mon, 12 Jul 2021 08:04:32 +0000 (13:34 +0530)]
net/bnxt: update HSI structure

- HWRM version updated to 1.10.2.44
- Added corresponding driver changes for the Admin MTU field name change.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
3 years ago  net/bnxt: fix nested lock during bonding
Weifeng Li [Sat, 3 Jul 2021 10:20:42 +0000 (06:20 -0400)]
net/bnxt: fix nested lock during bonding

The bnxt PMD registers an LSC callback (bond_ethdev_lsc_event_callback)
when working in bonding mode. This callback deadlocks when an LSC
interrupt is triggered:

lsc interrupt ->
bnxt_handle_async_event ->
bnxt_link_update_op ->
bond_ethdev_lsc_event_callback (lsc_lock) ->
bnxt_link_update_op ->
bond_ethdev_lsc_event_callback (lsc_lock dead lock)

Fixes: c2faa1d1969e ("net/bnxt: add support for LSC interrupt event")
Cc: stable@dpdk.org
Signed-off-by: Weifeng Li <liweifeng96@126.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
3 years ago  net/bnxt: fix missing barriers in completion handling
Lance Richardson [Fri, 9 Jul 2021 16:38:48 +0000 (12:38 -0400)]
net/bnxt: fix missing barriers in completion handling

Ensure that Rx/Tx/Async completion entry fields are accessed
only after the completion's valid flag has been loaded and
verified. This is needed for correct operation on systems that
use relaxed memory consistency models.
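
A minimal sketch of the required ordering, with illustrative names
rather than the driver's actual helpers:

if (!completion_valid(cmpl, raw_cons)) /* load and test the valid bit first */
        break;
rte_io_rmb();          /* no completion field may be read before this point */
len = cmpl->len;       /* now safe, also on weakly ordered CPUs */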

Fixes: 2eb53b134aae ("net/bnxt: add initial Rx code")
Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
3 years ago  net/cnxk: support raw flow pattern
Satheesh Paul [Mon, 12 Jul 2021 06:49:01 +0000 (12:19 +0530)]
net/cnxk: support raw flow pattern

Add support for rte_flow_item_raw to parse custom L2 and L3
protocols.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  common/cnxk: support custom L2/L3 protocols parsing
Satheesh Paul [Mon, 12 Jul 2021 06:49:00 +0000 (12:19 +0530)]
common/cnxk: support custom L2/L3 protocols parsing

Add roc API for parsing custom L2 and L3 protocols.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Reviewed-by: Kiran Kumar K <kirankumark@marvell.com>
3 years ago  net/cnxk: update link status when device stopped
Satha Rao [Wed, 7 Jul 2021 16:49:17 +0000 (12:49 -0400)]
net/cnxk: update link status when device stopped

Set the link status to down and don't fetch the link status from the
kernel when the device is in the stopped state.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/octeontx2: fix TM node statistics query
Satha Rao [Wed, 7 Jul 2021 16:49:16 +0000 (12:49 -0400)]
net/octeontx2: fix TM node statistics query

TM hardware resources are not allocated for a node until the hierarchy
is committed.
This patch checks the status of the HW resources before reading
statistics.

Fixes: 1e25d57fae38 ("net/octeontx2: add TM stats and shaper profile")
Cc: stable@dpdk.org
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/octeontx2: handle link status when device stopped
Satha Rao [Wed, 7 Jul 2021 16:49:15 +0000 (12:49 -0400)]
net/octeontx2: handle link status when device stopped

Set the link status to down and don't fetch the link status from the
kernel when the device is in the stopped state.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/cnxk: fix default MCAM allocation size
Satheesh Paul [Tue, 6 Jul 2021 08:19:18 +0000 (13:49 +0530)]
net/cnxk: fix default MCAM allocation size

Preallocation of MCAM entries is not valid anymore since the
AF side MCAM allocation scheme has changed. This patch disables
preallocation by changing the default MCAM preallocation size
from 8 to 1.

Fixes: 168c59cfe42 ("net/octeontx2: add flow MCAM utility functions")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/octeontx2: support non-ethernet L2 header
Anoob Joseph [Thu, 1 Jul 2021 09:29:29 +0000 (14:59 +0530)]
net/octeontx2: support non-ethernet L2 header

In the inline inbound path, a custom header is present at L3 which
carries the sequence number & SPI. L2 needs to be adjusted such that
the eventual packet has L3 right after L2. Remove the assumption about
the L2 type in this handling.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
3 years ago  net/mvpp2: fix not supported VLAN operations status
Meir Levi [Sun, 11 Jul 2021 13:13:14 +0000 (16:13 +0300)]
net/mvpp2: fix not supported VLAN operations status

The vlan_strip and vlan_extend features need to return an
"unsupported" error value.

Fixes: ff0b8b10dc4 ("net/mvpp2: support VLAN offload")
Cc: stable@dpdk.org
Signed-off-by: Meir Levi <mlevi4@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
3 years ago  net/mvpp2: fix configured state dependency
Dana Vardi [Sun, 11 Jul 2021 13:12:49 +0000 (16:12 +0300)]
net/mvpp2: fix configured state dependency

Set the configured flag to allow creating and committing the mrvl TM
hierarchy tree. The TM configuration depends on parameters that are set
at the port configure stage, e.g. nb_tx_queues.
This is also aligned with the TM API description.

Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org
Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
3 years ago  net/mvpp2: fix port speed overflow
Dana Vardi [Sun, 11 Jul 2021 13:11:43 +0000 (16:11 +0300)]
net/mvpp2: fix port speed overflow

ethtool_cmd_speed() returns a uint32 value, and after the arithmetic
operation in the mrvl_get_max_rate function the result is out of range.
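
A minimal sketch of the kind of fix implied (variable names are
illustrative): widen the operand before the multiplication so the
intermediate value cannot wrap a 32-bit type.

uint32_t speed_mbps = ethtool_cmd_speed(&edata);
/* e.g. 10000 Mbps * 1000 * 1000 / 8 does not fit in uint32_t */
uint64_t rate_bytes_per_sec = (uint64_t)speed_mbps * 1000 * 1000 / 8;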

Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org
Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
3 years ago  net/mlx5: fix typo in vectorized Rx comments
Sarosh Arif [Tue, 8 Jun 2021 11:08:50 +0000 (16:08 +0500)]
net/mlx5: fix typo in vectorized Rx comments

Change "returing" to "returning".

Fixes: 2e542da70937 ("net/mlx5: add Altivec Rx")
Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Fixes: 3c2ddbd413e3 ("net/mlx5: separate shareable vector functions")
Cc: stable@dpdk.org
Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
3 years ago  net/mlx5: fix threshold for mbuf replenishment in MPRQ
Alexander Kozyrev [Tue, 13 Jul 2021 15:21:12 +0000 (18:21 +0300)]
net/mlx5: fix threshold for mbuf replenishment in MPRQ

The replenishment scheme for the vectorized MPRQ Rx burst aims
to improve the cache locality by allocating new mbufs only when
there are almost no mbufs left: one burst gap between allocated
and consumed indexes.

This gap is not big enough to accommodate a corner case when we
have a very aggressive CQE compression with multiple regular CQEs
at the beginning and 64 zipped CQEs at the end.

This case needs to be kept in mind, and the replenishment threshold
extended by MLX5_VPMD_RX_MAX_BURST (64), to avoid mbuf overflow.

Fixes: 5fc2e5c27d6 ("net/mlx5: fix mbuf overflow in vectorized MPRQ")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
3 years ago  net/mlx5: fix missing RSS expansion of IPv6 frag
Xiaoyu Min [Wed, 7 Jul 2021 02:32:47 +0000 (10:32 +0800)]
net/mlx5: fix missing RSS expansion of IPv6 frag

The IPV6_FRAG_EXT item is missing from RSS expansion, which causes
wrongly expanded flows:
flow create 0 ingress pattern eth / ipv6 / udp dst is 250 / vxlan-gpe /
ipv6 / ipv6_frag_ext / end actions rss level 2 types ip end / end

Unlike other items, IPV6_FRAG_EXT has no next field because the HW
only supports hashing UDP/TCP for non-fragmented packets.

The MLX5_EXPANSION_IPV6_FRAG_EXT node in the RSS expansion graph only
helps the RSS expansion function locate the right node in the graph
from which to start expanding.

Fixes: 0e5a0d8f7556 ("net/mlx5: support match on IPv6 fragment extension")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: fix missing RSS expandable items
Xiaoyu Min [Wed, 7 Jul 2021 02:32:46 +0000 (10:32 +0800)]
net/mlx5: fix missing RSS expandable items

Some RSS expandable items are missing, which leads to expanded
rte_flow rules with wrong patterns.

Fix by adding the missing items.

Fixes: d91093b9a2af ("net/mlx5: fix RSS pattern expansion")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: support flow matching on IPv4 IHL
Gregory Etelson [Mon, 5 Jul 2021 11:40:35 +0000 (14:40 +0300)]
net/mlx5: support flow matching on IPv4 IHL

Query the MLX5 port hardware for whether it is capable of offloading
the IPv4 IHL field.

Provide flow rules with the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL is
xx_30_2000.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
3 years ago  doc: add multi-thread flow rate optimizations for mlx5
Suanming Mou [Tue, 13 Jul 2021 08:45:00 +0000 (11:45 +0300)]
doc: add multi-thread flow rate optimizations for mlx5

This commit adds the multiple-thread flow insertion optimization
description.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: optimize Rx queue match
Suanming Mou [Tue, 13 Jul 2021 08:44:59 +0000 (11:44 +0300)]
net/mlx5: optimize Rx queue match

As the hrxq struct has the indirect table pointer, when matching an
hrxq it is better to use the hrxq's indirect table instead of searching
the list.

This commit optimizes the hrxq indirect table matching.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: change memory release configuration
Suanming Mou [Tue, 13 Jul 2021 08:44:58 +0000 (11:44 +0300)]
net/mlx5: change memory release configuration

This commit changes the index pool memory release configuration
to 0 when memory reclaim mode is not required.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: optimize hash list table allocate on demand
Suanming Mou [Tue, 13 Jul 2021 08:44:57 +0000 (11:44 +0300)]
net/mlx5: optimize hash list table allocate on demand

Currently, all the hash list tables are allocated during start up.
Since different applications may only use a limited set of actions,
allocating the hash list tables on demand saves initial memory.

This commit optimizes the hash list tables to be allocated on demand.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: enable indexed pool per-core cache
Suanming Mou [Tue, 13 Jul 2021 08:44:56 +0000 (11:44 +0300)]
net/mlx5: enable indexed pool per-core cache

This commit enables the tag and header modify action indexed
pool per-core cache in non-reclaim memory mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: adjust hash bucket size
Suanming Mou [Tue, 13 Jul 2021 08:44:55 +0000 (11:44 +0300)]
net/mlx5: adjust hash bucket size

With the new per-core optimization of the list, the hash bucket size
can be tuned to a more accurate number.

This commit adjusts the hash bucket size.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: move header modify allocator to ipool
Matan Azrad [Tue, 13 Jul 2021 08:44:54 +0000 (11:44 +0300)]
net/mlx5: move header modify allocator to ipool

Modify header actions are allocated by mlx5_malloc which has a big
overhead of memory and allocation time.

One of the action types under the modify header object is SET_TAG.

The SET_TAG action is commonly not reused by flows, and each flow has
its own value.

Hence, the mlx5_malloc becomes a bottleneck in flow insertion rate in
the common cases of SET_TAG.

Use ipool allocator for SET_TAG action.

Ipool allocator has less overhead of memory and insertion rate and has
better synchronization mechanism in multithread cases.

A different ipool is created for each possible size of modify header
handler.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  common/mlx5: support list non-lcore operations
Suanming Mou [Tue, 13 Jul 2021 08:44:53 +0000 (11:44 +0300)]
common/mlx5: support list non-lcore operations

This commit supports the list non-lcore operations with
an extra sub-list and lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  common/mlx5: optimize cache list object memory
Suanming Mou [Tue, 13 Jul 2021 08:44:52 +0000 (11:44 +0300)]
common/mlx5: optimize cache list object memory

Currently, the hash list uses the cache list as its bucket list. The
lists in the buckets have the same name, ctx and callbacks, which
wastes memory.

This commit abstracts the name, ctx and callback members of the list
into a constant struct and the others into a non-constant struct, and
uses wrapper functions so that both the hash list and the cache list
can set the constant and non-constant structs individually.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  common/mlx5: allocate cache list memory individually
Suanming Mou [Tue, 13 Jul 2021 08:44:51 +0000 (11:44 +0300)]
common/mlx5: allocate cache list memory individually

Currently, the list's local cache instance memory is allocated with
the list. The local cache instance array size is RTE_MAX_LCORE, while
in most cases the system only has a limited number of cores, so
allocating the instance memory individually per core is more
economical.

This commit changes the instance array to a pointer array and allocates
the local cache memory only when the core is to be used.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  common/mlx5: add per-lcore cache to hash list utility
Matan Azrad [Tue, 13 Jul 2021 08:44:50 +0000 (11:44 +0300)]
common/mlx5: add per-lcore cache to hash list utility

Use the mlx5 list utility object in the hlist buckets.

This patch moves the list utility object to the common utility and
creates all the clone operations for all the hlist instances in the
driver.

Also adjust all the utility callbacks to be generic for both list and
hlist.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  common/mlx5: call list callbacks with context
Suanming Mou [Tue, 13 Jul 2021 08:44:49 +0000 (11:44 +0300)]
common/mlx5: call list callbacks with context

This commit optimizes the list to call the callback functions with the
global context directly.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  common/mlx5: add per-lcore sharing flag in object list
Suanming Mou [Tue, 13 Jul 2021 08:44:48 +0000 (11:44 +0300)]
common/mlx5: add per-lcore sharing flag in object list

Without the lcores_share flag, the mlx5 PMD was sharing the rdma-core
objects between all lcores.

Having the lcores_share flag disabled means that each lcore will have
its own objects, which will eventually lead to increased
insertion/deletion rates.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  common/mlx5: move list utility from net driver
Suanming Mou [Tue, 13 Jul 2021 08:44:47 +0000 (11:44 +0300)]
common/mlx5: move list utility from net driver

The hash list is planned to be implemented with the cache list code.

This commit moves the list utility to the common directory.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: allocate list memory in create function
Matan Azrad [Tue, 13 Jul 2021 08:44:46 +0000 (11:44 +0300)]
net/mlx5: allocate list memory in create function

Currently, the list memory is allocated by the list API caller.

Move the allocation into the create API in order to keep consistency
with the hlist utility.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: relax list utility atomic operations
Matan Azrad [Tue, 13 Jul 2021 08:44:45 +0000 (11:44 +0300)]
net/mlx5: relax list utility atomic operations

The atomic operations in the list utility do not need barriers because
the critical parts are managed by an RW lock.

Relax them.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: manage list cache entries release
Matan Azrad [Tue, 13 Jul 2021 08:44:44 +0000 (11:44 +0300)]
net/mlx5: manage list cache entries release

When a cache entry is allocated by lcore A and released by lcore B,
the driver should synchronize the cache list access of lcore A.

The design decision is to manage a counter per lcore cache that is
increased atomically when a non-original lcore decreases the reference
counter of a cache entry to 0.

In the list register operation, before the running lcore starts a
lookup in its cache, it checks the counter in order to free invalid
entries in its cache.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: minimize list critical sections
Matan Azrad [Tue, 13 Jul 2021 08:44:43 +0000 (11:44 +0300)]
net/mlx5: minimize list critical sections

The mlx5 internal list utility is thread safe.

In order to synchronize list access between the threads, an RW lock is
taken for the critical sections.

The create/remove/clone/clone_free operations are in the critical
sections.

These operations are heavy and make the critical sections heavy because
they are used for memory and other resource allocations/deallocations.

Move the operations out of the critical sections and use a generation
counter in order to detect parallel allocations.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: add per-lcore cache to the list utility
Matan Azrad [Tue, 13 Jul 2021 08:44:42 +0000 (11:44 +0300)]
net/mlx5: add per-lcore cache to the list utility

When an mlx5 list object is accessed by multiple cores, the list lock
counter is written by all the cores all the time, which increases cache
misses in the memory caches.

In addition, when one thread accesses the list for an add/remove/lookup
operation, all the other threads coming to do an operation on the list
are stuck on the lock.

Add a per-lcore cache to allow thread manipulations to be lockless when
the list objects are mostly reused.

Synchronization with atomic operations should be done in order to
allow threads to unregister an entry from another thread's cache.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: remove cache term from the list utility
Matan Azrad [Tue, 13 Jul 2021 08:44:41 +0000 (11:44 +0300)]
net/mlx5: remove cache term from the list utility

The internal mlx5 list tool is used mainly when the list objects need
to be synchronized between multiple threads.

The "cache" term is used in the internal mlx5 list API.

Upcoming enhancements to this tool will use the "cache" term for
per-thread cache management.

To prevent confusion, remove the current "cache" term from the API's
names.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: optimize header modify action memory
Matan Azrad [Tue, 13 Jul 2021 08:44:40 +0000 (11:44 +0300)]
net/mlx5: optimize header modify action memory

Define the types of the modify header action fields with the minimum
size needed for the range of possible values.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
3 years ago  net/mlx5: replace flow list with indexed pool
Suanming Mou [Tue, 13 Jul 2021 08:44:39 +0000 (11:44 +0300)]
net/mlx5: replace flow list with indexed pool

The flow list is used to save the created flows so that, when the port
closes, all the flows can be flushed.

This commit takes advantage of the index pool foreach operation to
flush all the allocated flows.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: support indexed pool non-lcore operations
Suanming Mou [Tue, 13 Jul 2021 08:44:38 +0000 (11:44 +0300)]
net/mlx5: support indexed pool non-lcore operations

This commit supports the index pool non-lcore operations with
an extra cache and lcore lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: add indexed pool iterator
Suanming Mou [Tue, 13 Jul 2021 08:44:37 +0000 (11:44 +0300)]
net/mlx5: add indexed pool iterator

In some cases, an application may want to know all the allocated
indexes in order to apply some operations to them.

This commit adds indexed pool functions to support a foreach
operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: add indexed pool local cache
Suanming Mou [Tue, 13 Jul 2021 08:44:36 +0000 (11:44 +0300)]
net/mlx5: add indexed pool local cache

For objects that need efficient index allocation and free, a local
cache is very helpful.

A two-level cache is introduced to allocate and free indexes more
efficiently: one level is local and the other is global. The global
cache is able to save all the allocated indexes, which means the
allocated indexes will not be freed back to the system. Once the local
cache is full, the extra indexes are flushed to the global cache. Once
the local cache is empty, it first tries to fetch more indexes from the
global cache; if the global cache is also empty, a new trunk with more
indexes is allocated.

This commit adds the new local cache mechanism for the indexed pool.
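
A compact, self-contained illustration of the two-level scheme described
above (this is not the driver code; names, sizes and locking are
deliberately simplified):

#include <stdint.h>

#define LOCAL_SZ  16
#define GLOBAL_SZ 1024

struct local_cache  { uint32_t idx[LOCAL_SZ];  uint32_t len; };
struct global_cache { uint32_t idx[GLOBAL_SZ]; uint32_t len; };

static struct global_cache gcache;     /* shared between lcores; lock omitted */
static uint32_t next_trunk_index = 1;  /* next never-used index */

static uint32_t
index_alloc(struct local_cache *lc)
{
        if (lc->len == 0) {
                /* Local cache empty: refill it from the global cache first. */
                while (lc->len < LOCAL_SZ && gcache.len > 0)
                        lc->idx[lc->len++] = gcache.idx[--gcache.len];
                /* Both caches empty: take an index from a new trunk. */
                if (lc->len == 0)
                        return next_trunk_index++;
        }
        return lc->idx[--lc->len];
}

static void
index_free(struct local_cache *lc, uint32_t idx)
{
        if (lc->len == LOCAL_SZ) {
                /* Local cache full: flush the excess to the global cache. */
                while (lc->len > LOCAL_SZ / 2 && gcache.len < GLOBAL_SZ)
                        gcache.idx[gcache.len++] = lc->idx[--lc->len];
        }
        if (lc->len < LOCAL_SZ)
                lc->idx[lc->len++] = idx;
        /* else: both caches are full; a real pool would grow the global cache */
}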

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: allow limiting the indexed pool maximum index
Suanming Mou [Tue, 13 Jul 2021 08:44:35 +0000 (11:44 +0300)]
net/mlx5: allow limiting the indexed pool maximum index

Some ipool instances in the driver are used as ID/index allocators and
needed extra logic in order to work with limited index values.

Add a new configuration for ipool to specify the maximum index value.
The ipool will ensure that no index bigger than the maximum value is
provided.

Use this configuration in the ID allocator cases instead of the current
logic. This patch makes the maximum ID configurable for the index pool.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
3 years ago  net/mlx5: reduce unnecessary memory access in Rx
Ruifeng Wang [Wed, 7 Jul 2021 09:03:07 +0000 (17:03 +0800)]
net/mlx5: reduce unnecessary memory access in Rx

The MR btree length is a constant during Rx replenish.
Retrieval of the value is moved out of the loop to reduce data loads.
A slight performance uplift was measured on both N1SDP and x86.

Suggested-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
3 years ago  net/mlx5: remove redundant operations in NEON Rx
Ruifeng Wang [Wed, 7 Jul 2021 09:03:06 +0000 (17:03 +0800)]
net/mlx5: remove redundant operations in NEON Rx

The mask of entries after the compressed CQE is covered by the invalid
mask of non-compressed valid CQEs. Hence, remove the redundant
calculation of the mask.
The change showed a slight performance uplift on N1SDP.

Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
3 years ago  app/testpmd: support matching on VXLAN reserved field
Rongwei Liu [Tue, 13 Jul 2021 12:09:20 +0000 (15:09 +0300)]
app/testpmd: support matching on VXLAN reserved field

Add a new testpmd pattern field 'last_rsvd' that supports matching
the last 8 bits of the VXLAN header.

The examples for the "last_rsvd" pattern field are as below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow will match only packets whose last 8 bits are exactly 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow will match only packets where the MSB of the last 8 bits is 1.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
3 years ago  net/mlx5: support matching on VXLAN reserved field
Rongwei Liu [Tue, 13 Jul 2021 12:09:19 +0000 (15:09 +0300)]
net/mlx5: support matching on VXLAN reserved field

This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability is detected from
rdma-core by creating a dummy matcher using misc5 when the
device is probed.

For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group zero it
relies on the HCA_CAP from FW.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
3 years ago  app/testpmd: add flow matching on IPv4 version and IHL
Gregory Etelson [Tue, 13 Jul 2021 07:29:25 +0000 (10:29 +0300)]
app/testpmd: add flow matching on IPv4 version and IHL

The new flow item allows the PMD to offload the IPv4 IHL field for
matching, if the hardware supports that operation.
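
Assuming the testpmd token for the new field is 'version_ihl' (an
assumption about the exact syntax, to be checked against the testpmd
flow guide), a rule matching the standard 20-byte IPv4 header
(version 4, IHL 5) would look something like:

testpmd> flow create 0 ingress pattern eth / ipv4 version_ihl is 0x45 / end \
         actions queue index 1 / end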

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
3 years ago  app/testpmd: fix offloads for newly attached port
Viacheslav Ovsiienko [Mon, 12 Jul 2021 12:40:53 +0000 (15:40 +0300)]
app/testpmd: fix offloads for newly attached port

For newly attached ports (with the "port attach" command), the
default offload settings configured from the application command
line were not applied, causing port start failures following
the attach.

For example, if the scattering offload was configured on the command
line and rxpkts was configured for multiple segments, starting the
newly attached port failed because the scattering offload was not
enabled in the new port settings. The missing code to apply
the offloads to the new device and its queues is added.

The new local routine init_config_port_offloads() is introduced,
embracing the shared part of port offloads initialization code.

Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
3 years ago  net/hns3: support multiple TC MAC pause
Huisong Li [Sat, 10 Jul 2021 01:58:34 +0000 (09:58 +0800)]
net/hns3: support multiple TC MAC pause

MAC PAUSE can take effect on a single TC or multiple TCs, depending on the
hardware. For example, the Kunpeng 920 supports MAC pause in a single TC,
and the Kunpeng 930 supports MAC pause in multiple TCs. This patch
adds support for MAC PAUSE in multiple TCs for such hardware.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
3 years ago  net/hns3: support VLAN filter state modify for VF
Chengchang Tang [Sat, 10 Jul 2021 01:58:33 +0000 (09:58 +0800)]
net/hns3: support VLAN filter state modify for VF

Due to a HW limitation for the VF, the VLAN filter is enabled by
default and is not allowed to be disabled. Now, this limitation has
been removed in the Kunpeng 930 network engine, so this patch adds
support for the VF to modify the VLAN filter state.

A capability bit is added to differentiate between different platforms
and achieve compatibility. When the VF runs on an incompatible platform
or an incompatible kernel-mode driver version is used, the VF behavior
is the same as before.

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
3 years ago  net/hns3: query basic info for VF
Chengchang Tang [Sat, 10 Jul 2021 01:58:32 +0000 (09:58 +0800)]
net/hns3: query basic info for VF

Some VF features depend on the PF, so it is necessary for the VF to
know whether the current PF supports them. Therefore, the final
capability set of the VF is composed of the capability set of the
hardware and the capability set of the PF.

For compatibility reasons, the mailbox HNS3_MBX_GET_TCINFO has been
modified to obtain more basic information about the current PF,
including the communication interface version and the current PF
capability set.

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
3 years ago  net/softnic: fix connection memory leak
Dapeng Yu [Fri, 9 Jul 2021 06:00:56 +0000 (14:00 +0800)]
net/softnic: fix connection memory leak

In the function softnic_conn_init(), a block of memory is allocated
as the connection buffer, but it is never freed in softnic_conn_free(),
which causes a memory leak.
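
A minimal sketch of the corresponding fix (the buffer field name is
illustrative): release the receive buffer together with the connection
object.

void
softnic_conn_free(struct softnic_conn *conn)
{
        if (conn == NULL)
                return;
        free(conn->buf);   /* buffer allocated in softnic_conn_init() */
        free(conn);
}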

Fixes: 7709a63bf178 ("net/softnic: add connection agent")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
3 years ago  net/vmxnet3: support MSI-X interrupt
Jochen Behrens [Thu, 8 Jul 2021 14:02:25 +0000 (07:02 -0700)]
net/vmxnet3: support MSI-X interrupt

Add support for MSI-X interrupt vectors to the vmxnet3 driver.
This will allow more efficient deployments in cloud environments.

By default it will try to allocate 1 vector (0) for link
event and one MSI-X vector for each Rx queue.  To simplify
things, it will only be enabled if the number of Tx and Rx
queues is equal (so that Tx/Rx share the same vector).
If for any reason vmxnet3 cannot enable intr mode, it will
fall back to the LSC only mode.

Signed-off-by: Yong Wang <yongwang@vmware.com>
Signed-off-by: Jochen Behrens <jbehrens@vmware.com>
3 years ago  net/bonding: check flow setting
Martin Havlik [Tue, 22 Jun 2021 09:25:29 +0000 (11:25 +0200)]
net/bonding: check flow setting

The return value from bond_ethdev_8023ad_flow_set() is now checked,
and an appropriate message is logged on error.
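
An illustrative sketch of the added check (the log macro and its
arguments are assumptions, not quoted from the patch):

errval = bond_ethdev_8023ad_flow_set(bond_dev, slave_port_id);
if (errval != 0)
        RTE_BOND_LOG(ERR,
                "bond_ethdev_8023ad_flow_set: port=%u, err (%d)",
                slave_port_id, errval);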

Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
3 years ago  net/bonding: fix error message on flow verify
Martin Havlik [Tue, 22 Jun 2021 09:25:28 +0000 (11:25 +0200)]
net/bonding: fix error message on flow verify

The return value is now saved to errval, and the log message on error
reports the correct function name, doesn't use q_id (which was out of
context), and uses the up-to-date errval.

Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
3 years ago  net/ngbe: support close and reset device
Jiawen Wu [Thu, 8 Jul 2021 09:32:39 +0000 (17:32 +0800)]
net/ngbe: support close and reset device

Support to close and reset device.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: add simple Tx flow
Jiawen Wu [Thu, 8 Jul 2021 09:32:38 +0000 (17:32 +0800)]
net/ngbe: add simple Tx flow

Initialize device with the simplest transmit functions.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: add simple Rx flow
Jiawen Wu [Thu, 8 Jul 2021 09:32:37 +0000 (17:32 +0800)]
net/ngbe: add simple Rx flow

Initialize device with the simplest receive function.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support Rx queue start/stop
Jiawen Wu [Thu, 8 Jul 2021 09:32:36 +0000 (17:32 +0800)]
net/ngbe: support Rx queue start/stop

Initialize the receive unit, and support starting and stopping the
receive unit for specified queues.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support Tx queue start/stop
Jiawen Wu [Thu, 8 Jul 2021 09:32:35 +0000 (17:32 +0800)]
net/ngbe: support Tx queue start/stop

Initialize the transmit unit, and support starting and stopping the
transmit unit for specified queues.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support device start/stop
Jiawen Wu [Thu, 8 Jul 2021 09:32:34 +0000 (17:32 +0800)]
net/ngbe: support device start/stop

Set up the MSI-X interrupt, complete the PHY configuration and set the
device link speed to start the device. Disable the interrupt, stop the
hardware and clear the queues to stop the device.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support Tx queue setup/release
Jiawen Wu [Thu, 8 Jul 2021 09:32:33 +0000 (17:32 +0800)]
net/ngbe: support Tx queue setup/release

Setup device Tx queue and release Tx queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support Rx queue setup/release
Jiawen Wu [Thu, 8 Jul 2021 09:32:32 +0000 (17:32 +0800)]
net/ngbe: support Rx queue setup/release

Setup device Rx queue and release Rx queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: setup PHY link
Jiawen Wu [Thu, 8 Jul 2021 09:32:31 +0000 (17:32 +0800)]
net/ngbe: setup PHY link

Setup PHY, determine link and speed status from PHY.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: support link update
Jiawen Wu [Thu, 8 Jul 2021 09:32:30 +0000 (17:32 +0800)]
net/ngbe: support link update

Register to handle device interrupt.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: store MAC address
Jiawen Wu [Thu, 8 Jul 2021 09:32:29 +0000 (17:32 +0800)]
net/ngbe: store MAC address

Store MAC addresses and init receive address filters.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: identify and reset PHY
Jiawen Wu [Thu, 8 Jul 2021 09:32:28 +0000 (17:32 +0800)]
net/ngbe: identify and reset PHY

Identify PHY to get the PHY type, and perform a PHY reset.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: add HW initialization
Jiawen Wu [Thu, 8 Jul 2021 09:32:27 +0000 (17:32 +0800)]
net/ngbe: add HW initialization

Initialize the hardware by resetting the hardware in base code.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
3 years ago  net/ngbe: initialize and validate EEPROM
Jiawen Wu [Thu, 8 Jul 2021 09:32:26 +0000 (17:32 +0800)]
net/ngbe: initialize and validate EEPROM

Reset swfw lock before NVM access, init EEPROM and validate the
checksum.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>