dpdk.git
Jiawen Wu [Mon, 19 Oct 2020 08:54:02 +0000 (16:54 +0800)]
net/txgbe: support FC auto negotiation

Add flow control negotiation with link partner.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:54:01 +0000 (16:54 +0800)]
net/txgbe: support flow control

Add flow control support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:54:00 +0000 (16:54 +0800)]
net/txgbe: support DCB

Add DCB transmit and receive mode configurations,
and allocate DCB packet buffer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:59 +0000 (16:53 +0800)]
net/txgbe: support RSS

Add RSS configuration, and support RSS hash and RETA operations for the PF.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:58 +0000 (16:53 +0800)]
net/txgbe: add VMDq configure

Add multiple queue setting with VMDq.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:57 +0000 (16:53 +0800)]
net/txgbe: add PF module configure for SRIOV

Add PF module configuration for SR-IOV.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:56 +0000 (16:53 +0800)]
net/txgbe: add process mailbox operation

Add checks for VF function-level reset, mailbox messages and ACKs
from the VF, and wait to process the messages.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:55 +0000 (16:53 +0800)]
net/txgbe: add PF module init and uninit for SRIOV

Add PF module init and uninit operations with mailbox.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:54 +0000 (16:53 +0800)]
net/txgbe: add SWFW semaphore and lock

Add semaphore between software and firmware.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:53 +0000 (16:53 +0800)]
net/txgbe: support VLAN

Add support for setting the VLAN filter, TPID, offload and strip.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:52 +0000 (16:53 +0800)]
net/txgbe: add queue stats mapping

Add queue stats mapping set, and clear hardware counters.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:51 +0000 (16:53 +0800)]
net/txgbe: support device xstats

Add device extended stats get by reading hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:50 +0000 (16:53 +0800)]
net/txgbe: support device statistics

Add device stats get by reading hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:49 +0000 (16:53 +0800)]
net/txgbe: add Rx and Tx queue info get

Add Rx and Tx queue information get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:48 +0000 (16:53 +0800)]
net/txgbe: support Rx interrupt

Support Rx queue interrupt.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:47 +0000 (16:53 +0800)]
net/txgbe: support device stop and close

Add device stop, close and reset operations.
And support hardware thermal sensor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:46 +0000 (16:53 +0800)]
net/txgbe: add Rx and Tx data path start and stop

Add receive and transmit data path start and stop.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:45 +0000 (16:53 +0800)]
net/txgbe: support device start

Add device start operation with hardware start and reset.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:44 +0000 (16:53 +0800)]
net/txgbe: support Rx

Fill receive functions and define receive descriptor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:43 +0000 (16:53 +0800)]
net/txgbe: support Tx prepare

Fill transmit prepare function.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:42 +0000 (16:53 +0800)]
net/txgbe: support Tx with hardware offload

Fill transmit function with hardware offload.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:41 +0000 (16:53 +0800)]
net/txgbe: support simple Tx

Fill simple transmit function and define transmit descriptor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:40 +0000 (16:53 +0800)]
net/txgbe: support packet type

Add packet type macro definitions and convert ptype to ptid.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:39 +0000 (16:53 +0800)]
net/txgbe: add Rx and Tx start and stop

Add receive and transmit units start and stop for specified queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:38 +0000 (16:53 +0800)]
net/txgbe: add Rx and Tx queues setup and release

Add receive and transmit queues setup and release.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:37 +0000 (16:53 +0800)]
net/txgbe: add Rx and Tx init

Add receive and transmit unit initialization.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:36 +0000 (16:53 +0800)]
net/txgbe: add unicast hash bitmap

Add unicast hash bitmap.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:35 +0000 (16:53 +0800)]
net/txgbe: add MAC address operations

Add MAC address related operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:34 +0000 (16:53 +0800)]
net/txgbe: add autoneg control read and write

Add autoc read and write for kr/kx/kx4/sfi link.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:33 +0000 (16:53 +0800)]
net/txgbe: add multi-speed link setup

Add multi-speed fiber link setup and laser control.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:32 +0000 (16:53 +0800)]
net/txgbe: add link status change

Add the ethdev link interrupt handler and MAC link setup,
and check link status and get capabilities.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:31 +0000 (16:53 +0800)]
net/txgbe: add device configuration

Add device configure operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:30 +0000 (16:53 +0800)]
net/txgbe: add interrupt operation

Add device interrupt handler and set up MSI-X interrupt.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:29 +0000 (16:53 +0800)]
net/txgbe: support getting device info

Add device information get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:28 +0000 (16:53 +0800)]
net/txgbe: add PHY reset

Add a PHY reset function, and support reading and writing PHY registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:27 +0000 (16:53 +0800)]
net/txgbe: add module identify

Add SFP and QSFP module identification, and I2C start and stop.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:26 +0000 (16:53 +0800)]
net/txgbe: add PHY init

Add PHY init functions to get and identify the PHY type.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:25 +0000 (16:53 +0800)]
net/txgbe: add HW init and reset operation

Add a hardware init function and reset operation in the MAC layer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:24 +0000 (16:53 +0800)]
net/txgbe: add EEPROM functions

Add EEPROM functions.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:23 +0000 (16:53 +0800)]
net/txgbe: add HW infrastructure and dummy function

Add hardware infrastructure and dummy function.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:22 +0000 (16:53 +0800)]
net/txgbe: add MAC type and bus LAN id

Add base driver shared code.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:21 +0000 (16:53 +0800)]
net/txgbe: add error types and registers

Add error types and registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:20 +0000 (16:53 +0800)]
net/txgbe: add device init and uninit

Add basic init and uninit functions, and some macro definitions
to prepare for the hardware infrastructure.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:19 +0000 (16:53 +0800)]
net/txgbe: support probe and remove

Add basic PCIe ethdev probe and remove.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Mon, 19 Oct 2020 08:53:18 +0000 (16:53 +0800)]
net/txgbe: add build and doc infrastructure

Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the txgbe PMD.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Lijun Ou [Wed, 21 Oct 2020 10:07:10 +0000 (18:07 +0800)]
app/testpmd: fix RSS key for flow API RSS rule

When a flow API RSS rule is issued in testpmd, the device RSS key is
unexpectedly changed to the testpmd default RSS key.

Consider the following usage with testpmd:
1. first, startup testpmd:
 testpmd> show port 0 rss-hash key
 RSS functions: all ipv4-frag ipv4-other ipv6-frag ipv6-other ip
 RSS key: 6D5A56DA255B0EC24167253D43A38FB0D0CA2BCBAE7B30B477CB2DA38030F
          20C6A42B73BBEAC01FA
2. create a rss rule
 testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
          actions rss types ipv4-udp end queues end / end

3. show rss-hash key
 testpmd> show port 0 rss-hash key
 RSS functions: all ipv4-udp udp
 RSS key: 74657374706D6427732064656661756C74205253532068617368206B65792
          C206F76657272696465

This is because testpmd always sends a key with the RSS rule: if the
user provides a key as part of the rule, that key is used; if the user
doesn't provide a key, the testpmd default key is sent to the PMDs,
which changes the RSS key programmed in the device.

There was a previous attempt to fix the same issue [1], but it has been
reverted back [2] because of the crash when 'key_len' is provided
without 'key'.

This patch follows the same approach with the initial fix [1] but also
addresses the crash.

After this change, the testpmd RSS key is NULL by default; if the user
provides a key as part of the rule it is used, otherwise no key is sent
to the PMDs at all.
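
The crash described above ('key_len' given without 'key') is avoided by
never dereferencing an absent key. A minimal sketch of that defensive
handling, with illustrative names rather than testpmd's real structures:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: a key is copied into the rule only when the user
 * actually supplied one; a bare length with no key is not dereferenced. */
struct rss_conf_sketch {
	const uint8_t *key;	/* NULL => PMD keeps its programmed key */
	uint32_t key_len;
};

static int
fill_rss_key(struct rss_conf_sketch *conf, const uint8_t *user_key,
	     uint32_t user_key_len, uint8_t *buf, size_t buf_sz)
{
	if (user_key == NULL) {
		conf->key = NULL;	/* no key is sent to the PMD at all */
		conf->key_len = 0;
		return 0;
	}
	if (user_key_len == 0 || user_key_len > buf_sz)
		return -1;		/* reject inconsistent length */
	memcpy(buf, user_key, user_key_len);
	conf->key = buf;
	conf->key_len = user_key_len;
	return 0;
}
```

A length without a key is simply ignored, which is the behaviour the fix
settles on instead of the earlier crash.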

[1]
Commit a4391f8bae85 ("app/testpmd: set default RSS key as null")

[2]
Commit f3698c3d09a6 ("app/testpmd: revert setting default RSS")

Fixes: d0ad8648b1c5 ("app/testpmd: fix RSS flow action configuration")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Long Li [Thu, 22 Oct 2020 00:26:07 +0000 (17:26 -0700)]
net/netvsc: allocate contiguous physical memory for RNDIS

When sending data, netvsc assumes the tx_rndis buffer is contiguous and
calculates physical addresses based on this assumption.

Use memzone to allocate tx_rndis so it's guaranteed that this buffer is
physically contiguous.

Cc: stable@dpdk.org
Signed-off-by: Long Li <longli@microsoft.com>
Yunjian Wang [Thu, 22 Oct 2020 04:25:27 +0000 (12:25 +0800)]
net/mvpp2: fix memory leak in error path

In mrvl_create(), the memory allocated for 'mtr' is not freed when
getting the profile fails, which leads to a memory leak.

Fix this by getting the profile at the beginning of the function,
before calling mtr = rte_zmalloc_socket().
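
The fix reorders the failable step before the allocation so the error
path has nothing to leak. A generic sketch of the pattern (fake lookup
and illustrative names, not the mvpp2 code itself):

```c
#include <stdlib.h>

static int lookups_should_fail;	/* knob for the fake lookup below */

/* Stand-in for the profile lookup that can fail. */
static void *
find_profile(int id)
{
	(void)id;
	return lookups_should_fail ? NULL : (void *)&lookups_should_fail;
}

/* Do the failable lookup *first*; allocate only after it succeeds,
 * so the error return has nothing to free. */
static void *
create_object(int profile_id)
{
	void *profile = find_profile(profile_id);

	if (profile == NULL)
		return NULL;		/* nothing allocated yet: no leak */

	return calloc(1, 64);		/* allocation happens last */
}
```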

Fixes: cdb53f8da628 ("net/mvpp2: support metering")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Liron Himi <lironh@marvell.com>
David Marchand [Fri, 23 Oct 2020 08:43:51 +0000 (10:43 +0200)]
net/ena: remove unused macro

This assert macro is not called anymore.
This also fixes an invalid reference to RTE_LOGTYPE_ERR that does not
exist.

Fixes: 3adcba9a8987 ("net/ena: update HAL to the newer version")
Fixes: 6f1c9df9e9cc ("net/ena: use dynamic log type for debug logging")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Cheng Jiang [Thu, 22 Oct 2020 08:59:07 +0000 (08:59 +0000)]
examples/vhost: support vhost async data path

This patch implements the vhost DMA operation callbacks for the CBDMA
PMD and adds the vhost async data path to the vhost sample. With the
callback implementation for CBDMA, the vswitch can leverage IOAT to
accelerate the vhost async data path.

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cheng Jiang [Thu, 22 Oct 2020 08:59:06 +0000 (08:59 +0000)]
examples/vhost: add async vhost args parsing

This patch adds the async vhost driver's argument parsing function for
the CBDMA channel, the DMA initiation function, and the argument
descriptions. The meson build file is changed to fix a dependency
problem. With these arguments, a vhost device can be set to use CBDMA
or the CPU for enqueue operations, and can be bound to a specific CBDMA
channel to accelerate data copy.

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Patrick Fu [Wed, 21 Oct 2020 05:44:25 +0000 (13:44 +0800)]
vhost: remove fallback in async enqueue API

By design, the async enqueue API should return directly if no async
device is registered. This patch removes the broken fallback from async
mode to sync mode in the enqueue path.

Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")
Cc: stable@dpdk.org
Signed-off-by: Patrick Fu <patrick.fu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:15 +0000 (19:34 +0200)]
vhost: check virtqueue metadata pointer

This patch checks whether the virtqueue metadata pointer
is valid before dereferencing it. It is not considered
a fix as earlier patch ensures there are no holes in the
array of virtqueue metadata pointers.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:14 +0000 (19:34 +0200)]
vhost: validate index in async API

This patch validates the queue index parameter, in order
to ensure no out-of-bound accesses happen.
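
The validation added across this series follows one pattern: reject an
out-of-range index before indexing the array, then let the caller
NULL-check the slot. A generic sketch under illustrative names (not the
vhost library's real structures):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_VRING 256

struct vq_sketch { int enabled; };

struct dev_sketch { struct vq_sketch *vqs[MAX_VRING]; };

/* Return the virtqueue only when the index is in range; a NULL result
 * covers both an invalid index and an unallocated slot, so the caller
 * never dereferences out-of-bound or missing metadata. */
static struct vq_sketch *
get_vq_checked(struct dev_sketch *dev, uint32_t idx)
{
	if (dev == NULL || idx >= MAX_VRING)
		return NULL;
	return dev->vqs[idx];
}
```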

Fixes: 9eed6bfd2efb ("vhost: allow to enable or disable features")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:13 +0000 (19:34 +0200)]
vhost: validate index in inflight API

This patch validates the queue index parameter, in order
to ensure neither out-of-bound accesses nor NULL pointer
dereferencing happen.

Fixes: 4d891f77ddfa ("vhost: add APIs to get inflight ring")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:12 +0000 (19:34 +0200)]
vhost: validate index in live-migration API

This patch validates the queue index parameter, in order
to ensure no out-of-bound accesses happen.

Fixes: bd2e0c3fe5ac ("vhost: add APIs for live migration")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:11 +0000 (19:34 +0200)]
vhost: validate index in guest notification API

This patch validates the queue index parameter, in order
to ensure neither out-of-bound accesses nor NULL pointer
dereferencing happen.

Fixes: 9eed6bfd2efb ("vhost: allow to enable or disable features")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:10 +0000 (19:34 +0200)]
vhost: validate index in available entries API

This patch validates the queue index parameter, in order
to ensure neither out-of-bound accesses nor NULL pointer
dereferencing happen.

Fixes: a67f286a6596 ("vhost: export queue free entries")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Mon, 19 Oct 2020 17:34:09 +0000 (19:34 +0200)]
vhost: fix virtqueues metadata allocation

The Vhost-user backend implementation assumes there will be
no holes in the device's array of virtqueues metadata
pointers.

It can happen though, and would cause segmentation faults,
memory leaks or undefined behaviour.

This patch keeps the assumption that there are no holes in this
array, and allocates all uninitialized virtqueue metadata up to the
requested index.
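
Keeping the array hole-free amounts to allocating every missing slot up
to and including the requested index. A simplified sketch of that loop,
with illustrative names:

```c
#include <stdint.h>
#include <stdlib.h>

#define NUM_VQ 64

struct vq_meta { int inited; };

/* Allocate metadata for vq_idx and for every lower, still-unallocated
 * index, so no allocated entry ever sits above a hole. */
static int
alloc_vq_up_to(struct vq_meta *vqs[], uint32_t vq_idx)
{
	uint32_t i;

	if (vq_idx >= NUM_VQ)
		return -1;		/* index out of range */
	for (i = 0; i <= vq_idx; i++) {
		if (vqs[i] != NULL)
			continue;	/* already allocated */
		vqs[i] = calloc(1, sizeof(**vqs));
		if (vqs[i] == NULL)
			return -1;
	}
	return 0;
}
```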

Fixes: 160cbc815b41 ("vhost: remove a hack on queue allocation")
Cc: stable@dpdk.org
Suggested-by: Adrian Moreno <amorenoz@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
David Christensen [Thu, 15 Oct 2020 17:49:37 +0000 (10:49 -0700)]
net/vhost: fix xstats after clearing stats

The PMD API allows stats and xstats values to be cleared separately.
This is a problem for the vhost PMD since some of the xstats values are
derived from existing stats values.  For example:

testpmd> show port xstats all
...
tx_unicast_packets: 17562959
...
testpmd> clear port stats all
...
show port xstats all
...
tx_unicast_packets: 18446744073709551615
...

Modify the driver so that stats and xstats values are stored, updated,
and cleared separately.
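
One way to store and clear the two views separately, as the patch
describes, is a per-view offset over a monotonically increasing raw
counter. The sketch below illustrates the underflow and this fix; it is
not the vhost PMD's actual code:

```c
#include <stdint.h>

/* The raw counter only ever increases on the data path; each reporting
 * view subtracts its own offset, recorded at clear time. */
struct view_sketch {
	uint64_t raw;		/* updated on the data path */
	uint64_t stats_off;	/* recorded when stats are cleared */
	uint64_t xstats_off;	/* recorded when xstats are cleared */
};

static uint64_t
stats_get(const struct view_sketch *v)
{
	return v->raw - v->stats_off;
}

static uint64_t
xstats_get(const struct view_sketch *v)
{
	return v->raw - v->xstats_off;
}

/* Clearing stats records an offset instead of zeroing the shared
 * counter, so the xstats view is untouched and cannot wrap. */
static void
stats_clear(struct view_sketch *v)
{
	v->stats_off = v->raw;
}
```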

Fixes: 4d6cf2ac93dc ("net/vhost: add extended statistics")
Cc: stable@dpdk.org
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jeff Guo [Fri, 16 Oct 2020 09:44:31 +0000 (17:44 +0800)]
net/iavf: fix vector Rx

The limitation on burst size in vector Rx was removed, since it should
retrieve as many received packets as possible. Also, the scattered
receive path should use a wrapper function to maximize the burst.
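
The wrapper in question repeatedly invokes the fixed-burst inner routine
until the caller's burst is satisfied or the ring runs dry. A simplified
model of that loop (the "ring" here is just a counter; real drivers pass
mbuf arrays):

```c
#include <stdint.h>

#define VEC_BURST 32	/* the inner vector routine handles at most this */

static uint32_t ring_avail;	/* simulated packets waiting in the ring */

/* Stand-in for the fixed-burst vector receive routine. */
static uint16_t
recv_fixed_burst(uint16_t nb)
{
	uint16_t n = nb > VEC_BURST ? VEC_BURST : nb;

	if (n > ring_avail)
		n = (uint16_t)ring_avail;
	ring_avail -= n;
	return n;
}

/* Wrapper: keep calling the inner routine so one application-level
 * burst is no longer capped at VEC_BURST packets. */
static uint16_t
recv_burst(uint16_t nb_pkts)
{
	uint16_t received = 0;

	while (received < nb_pkts) {
		uint16_t n = recv_fixed_burst((uint16_t)(nb_pkts - received));

		received += n;
		if (n < VEC_BURST)	/* ring drained: stop early */
			break;
	}
	return received;
}
```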

Bugzilla ID: 516
Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")
Fixes: 1162f5a0ef31 ("net/iavf: support flexible Rx descriptor in SSE path")
Fixes: 5b6e8859081d ("net/iavf: support flexible Rx descriptor in AVX path")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jeff Guo [Fri, 16 Oct 2020 09:44:30 +0000 (17:44 +0800)]
net/fm10k: fix vector Rx

The scattered receive path should use a wrapper function to maximize
the burst.

Bugzilla ID: 516
Fixes: fe65e1e1ce61 ("fm10k: add vector scatter Rx")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jeff Guo [Fri, 16 Oct 2020 09:44:29 +0000 (17:44 +0800)]
net/ice: fix vector Rx

The limitation on burst size in vector Rx was removed, since it should
retrieve as many received packets as possible. Also, the scattered
receive path should use a wrapper function to maximize the burst.

Bugzilla ID: 516
Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Tested-by: Yingya Han <yingyax.han@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jeff Guo [Fri, 16 Oct 2020 09:44:28 +0000 (17:44 +0800)]
net/i40e: fix vector Rx

The limitation on burst size in vector Rx was removed, since it should
retrieve as many received packets as possible. Also, the scattered
receive path should use a wrapper function to maximize the burst.

Bugzilla ID: 516
Fixes: 5b463eda8d26 ("net/i40e: make vector driver filenames consistent")
Fixes: ae0eb310f253 ("net/i40e: implement vector PMD for ARM")
Fixes: c3def6a8724c ("net/i40e: implement vector PMD for altivec")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jeff Guo [Fri, 16 Oct 2020 09:44:27 +0000 (17:44 +0800)]
net/ixgbe: fix vector Rx

The limitation on burst size in vector Rx was removed, since it should
retrieve as many received packets as possible. Also, the scattered
receive path should use a wrapper function to maximize the burst.

Bugzilla ID: 516
Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
Fixes: 0e51f9dc4860 ("net/ixgbe: rename x86 vector driver file")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Padraig Connolly [Thu, 15 Oct 2020 09:28:58 +0000 (10:28 +0100)]
net/i40e: fix QinQ flow pattern to allow non full mask

A customer reported that only a full mask was allowed on the inner and
outer VLAN tags, preventing a mask that filters on the VLAN ID only.
Remove the check that enforces the inner and outer VLAN masks to equal
I40E_TCI_MASK (full mask 0xffff).

Fixes: d37705068ee8 ("net/i40e: parse QinQ pattern")
Cc: stable@dpdk.org
Signed-off-by: Padraig Connolly <padraig.j.connolly@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Leyi Rong [Fri, 23 Oct 2020 04:14:07 +0000 (12:14 +0800)]
net/ice: optimize Tx by using AVX512

Optimize the Tx path by using AVX512 instructions, and vectorize the
Tx free-bufs processing.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Leyi Rong [Fri, 23 Oct 2020 04:14:06 +0000 (12:14 +0800)]
net/ice: add RSS hash parsing in AVX512 path

Support RSS hash parsing in the AVX512 data path. As the default RXDID
is set to #22, the RSS hash field is located in the second 16B of each
Flex Rx descriptor.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Leyi Rong [Fri, 23 Oct 2020 04:14:05 +0000 (12:14 +0800)]
net/ice: add AVX512 vector path

Add AVX512 support for the ice PMD. This patch adds
ice_rxtx_vec_avx512.c to support the ice AVX512 vPMD.

Compared with the AVX2 vPMD, the main changes focus on the Rx path.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Conor Walsh [Tue, 20 Oct 2020 10:02:48 +0000 (10:02 +0000)]
net/ixgbe: prevent driver forcing application to exit

Remove the usage of rte_panic() within ixgbe_pf_host_init().

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Conor Walsh [Tue, 20 Oct 2020 10:02:47 +0000 (10:02 +0000)]
net/ixgbe: check switch domain allocation result

The return value of rte_eth_switch_domain_alloc() was not checked
within ixgbe_pf_host_init(), which caused a Coverity issue. If the call
fails, a warning is logged using PMD_INIT_LOG() and *vfinfo is freed.
ixgbe_pf_host_init() now has a return value, which is checked in
eth_ixgbe_dev_init().

Coverity issue: 362795
Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Ajit Khaparde [Tue, 20 Oct 2020 23:24:28 +0000 (16:24 -0700)]
net/bnxt: fix resource leak

Fix a potential resource leak in case of errors during dev args
parsing at device probe.

Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Yuying Zhang [Mon, 19 Oct 2020 02:20:25 +0000 (02:20 +0000)]
net/i40e: fix virtual channel conflict

i40evf_execute_vf_cmd() uses _atomic_set_cmd() to execute virtual
channel commands safely in multi-process and multi-thread mode.
However, it returns an error when another process or thread has a
command pending. Add rte_spinlock_trylock() to handle this issue in
concurrent scenarios.
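
The non-blocking pattern can be sketched with a C11 atomic_flag standing
in for rte_spinlock_trylock (the real driver uses the DPDK spinlock
API): a second caller is refused immediately instead of corrupting the
pending command.

```c
#include <errno.h>
#include <stdatomic.h>

static atomic_flag cmd_lock = ATOMIC_FLAG_INIT;

/* Return -EBUSY instead of issuing a second command while another
 * process/thread still has one pending. */
static int
execute_cmd_sketch(void)
{
	if (atomic_flag_test_and_set(&cmd_lock))
		return -EBUSY;		/* someone else holds the lock */

	/* ... issue the virtchnl command and wait for completion ... */

	atomic_flag_clear(&cmd_lock);
	return 0;
}
```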

Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Ting Xu [Thu, 22 Oct 2020 06:49:02 +0000 (14:49 +0800)]
net/iavf: add enable/disable queues for large VF

The current virtchnl structure for enabling/disabling queues only
supports a maximum of 32 queue pairs. Use a new opcode and structure to
indicate up to 256 queue pairs, in order to enable/disable queues in
the large VF case.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Ting Xu [Thu, 22 Oct 2020 06:49:01 +0000 (14:49 +0800)]
net/iavf: enable IRQ mapping configuration for large VF

The current IRQ mapping configuration only supports a maximum of 16
queues and 16 MSI-X vectors. Change the queue vector mapping structure
to indicate up to 256 queues. A new opcode is used to handle the case
with a large number of queues. To avoid the adminq buffer size
limitation, we support sending the virtchnl message multiple times if
needed.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Ting Xu [Thu, 22 Oct 2020 06:49:00 +0000 (14:49 +0800)]
net/iavf: enable multiple queues configuration for large VF

Since the adminq buffer has a 4K size limitation, the current virtchnl
command VIRTCHNL_OP_CONFIG_VSI_QUEUES cannot configure up to 256 queues
in a single message. In this patch, we send the message multiple times,
making sure that the buffer size is less than 4K each time.
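
The splitting reduces to computing how many per-queue records fit under
the 4K cap and sending ceil(n / per_msg) messages. A sketch of the
arithmetic with illustrative sizes (the header and record sizes are
assumptions, not the real virtchnl layout):

```c
#include <stdint.h>

#define ADMINQ_BUF_MAX	4096	/* adminq buffer cap */
#define MSG_HDR_SIZE	64	/* illustrative message header size */
#define PER_QUEUE_SIZE	48	/* illustrative per-queue-pair record size */

/* Number of messages needed to configure nb_qp queue pairs while
 * keeping every message under ADMINQ_BUF_MAX bytes. */
static uint32_t
num_cfg_msgs(uint32_t nb_qp)
{
	uint32_t per_msg = (ADMINQ_BUF_MAX - MSG_HDR_SIZE) / PER_QUEUE_SIZE;

	return (nb_qp + per_msg - 1) / per_msg;	/* ceiling division */
}
```

With these sizes, 84 records fit per message, so 256 queue pairs take
four messages while the old 16-queue case still needs only one.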

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Ting Xu [Thu, 22 Oct 2020 06:48:59 +0000 (14:48 +0800)]
net/iavf: negotiate large VF and request more queues

Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF and marks the state that large
VF is supported.

If the number of allocated queues is larger than 16, the max RSS queue
region can no longer be 16. Add a function to query the max RSS queue
region from the PF, and use it in RSS initialization and future filter
configuration.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years agonet/iavf: support requesting additional queues from PF
Ting Xu [Thu, 22 Oct 2020 06:48:58 +0000 (14:48 +0800)]
net/iavf: support requesting additional queues from PF

Add a new virtchnl function to request additional queues from the PF.
The current default number of queue pairs when creating a VF is 16. In
order to support up to 256 queue pairs per VF, enable this
request-queues function.

When the queue request succeeds, the PF returns an event message. If it
is handled by the interrupt first, the request-queues command cannot
receive the correct PF response and waits until timeout. Therefore,
disable the interrupt before requesting queues in order to handle the
event message asynchronously.
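
The resulting synchronous wait can be sketched like this (a simulated
mailbox stands in for the adminq; the real driver's polling and timeout
handling differ):

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated mailbox: the PF "response" event becomes visible after a
 * few polls; stands in for reading the adminq with interrupts off. */
struct mailbox {
    int polls_until_event;
    bool event_pending;
};

static bool poll_mailbox(struct mailbox *mb)
{
    if (!mb->event_pending && --mb->polls_until_event <= 0)
        mb->event_pending = true;
    return mb->event_pending;
}

/* Poll up to max_polls times for the PF event message.
 * Returns 0 on success, -1 on timeout. */
static int wait_for_pf_event(struct mailbox *mb, int max_polls)
{
    for (int i = 0; i < max_polls; i++)
        if (poll_mailbox(mb))
            return 0;
    return -1;
}
```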

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years agonet/iavf: handle virtchnl event message without interrupt
Ting Xu [Thu, 22 Oct 2020 06:48:57 +0000 (14:48 +0800)]
net/iavf: handle virtchnl event message without interrupt

Currently, the VF can only handle virtchnl event messages in the
interrupt handler.
This is not possible in two cases:
1. If an event message arrives during VF initialization before the
   interrupt is enabled, the message will not be handled correctly.
2. Some virtchnl commands need to receive the event message and handle
   it with the interrupt disabled.

To solve this issue, we add virtchnl event message handling to the
process of reading virtchnl messages from the PF adminq.
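
The idea, reduced to a sketch with a fake message queue, is that event
messages are consumed inline while reading, so none is lost while
interrupts are disabled:

```c
#include <assert.h>

enum msg_opcode { OP_NONE, OP_EVENT, OP_CMD_REPLY };

/* Events handled inline; stands in for the driver's event callback. */
static int events_handled;

/* Read messages from a fake adminq: async events are handled on the
 * spot, and the first command reply is returned to the caller. */
static enum msg_opcode read_msgs(const enum msg_opcode *queue, int len)
{
    for (int i = 0; i < len; i++) {
        if (queue[i] == OP_EVENT) {
            events_handled++; /* dispatch the event immediately */
            continue;
        }
        if (queue[i] != OP_NONE)
            return queue[i];
    }
    return OP_NONE;
}
```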

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years agonet/mlx5: implement vectorized MPRQ burst
Alexander Kozyrev [Wed, 21 Oct 2020 20:30:30 +0000 (20:30 +0000)]
net/mlx5: implement vectorized MPRQ burst

MPRQ (Multi-Packet Rx Queue) processes one packet at a time using
simple scalar instructions. MPRQ works by posting a single large buffer
(consisting of multiple fixed-size strides) in order to receive multiple
packets at once into this buffer. An Rx packet is then either copied to
a user-provided mbuf, or the PMD attaches it to the mbuf via a pointer
to an external buffer.

There is an opportunity to speed up the packet receiving by processing
4 packets simultaneously using SIMD (single instruction, multiple data)
extensions. Allocate mbufs in batches for every MPRQ buffer and process
the packets in groups of 4 until all the strides are exhausted. Then
switch to another MPRQ buffer and repeat the process over again.
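
The group-of-4 bookkeeping can be sketched as follows (illustrative
only; the real routine also handles stride attachment and buffer
switching):

```c
#include <assert.h>
#include <stdint.h>

#define MPRQ_BATCH 4u /* packets processed per SIMD step */

/* How many packets one call of the vectorized loop handles: bounded by
 * the strides left in the current MPRQ buffer and the mbufs allocated
 * in advance, rounded down to whole groups of 4. */
static uint32_t mprq_burst_size(uint32_t strides_left, uint32_t mbufs_avail)
{
    uint32_t n = strides_left < mbufs_avail ? strides_left : mbufs_avail;
    return n - n % MPRQ_BATCH;
}
```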

The vectorized MPRQ burst routine is engaged automatically when the
mprq_en=1 devarg is specified and vectorization is not explicitly
disabled with the rx_vec_en=0 devarg. There is a limitation: LRO is not
supported, and scalar MPRQ is selected if it is enabled.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
4 years agonet/mlx5: refactor vectorized Rx
Alexander Kozyrev [Wed, 21 Oct 2020 20:30:29 +0000 (20:30 +0000)]
net/mlx5: refactor vectorized Rx

Move the main processing cycle into a separate function:
rxq_cq_process_v. Put the regular rxq_burst_v function
into a non-arch-specific file. Having all SIMD instructions
in a single reusable block is the first preparatory step
towards implementing a vectorized Rx burst for the MPRQ feature.

Pass a pointer to the mbuf storage directly to
rxq_copy_mbuf_v instead of calculating the pointer inside
this function. This is needed for the future vectorized Rx
routine, which is going to pass a different pointer here.

Calculate the number of packets to replenish inside
mlx5_rx_replenish_bulk_mbuf. Keeping this logic in one
place allows us to do the same for the MPRQ case.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
4 years agonet/mlx5: fix port shared data reference count
Xueming Li [Wed, 21 Oct 2020 11:15:23 +0000 (11:15 +0000)]
net/mlx5: fix port shared data reference count

When probing a representor, the tag cache hash table and the
modification cache hash table allocated memory for each port,
overwriting the previously existing cache in the shared context data.

This patch moves the reference check of the shared data before the hash
table allocation to avoid this issue.

Fixes: 6801116688fe ("net/mlx5: fix multiple flow table hash list")
Fixes: 1ef4cdef2682 ("net/mlx5: fix flow tag hash list conversion")
Cc: stable@dpdk.org
Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
4 years agonet/mlx5: fix xstats reset reinitialization
Shiri Kuzin [Mon, 19 Oct 2020 06:36:50 +0000 (09:36 +0300)]
net/mlx5: fix xstats reset reinitialization

The mlx5_xstats_reset() callback clears the device's extended
statistics. In this function the driver may reinitialize the structures
that are used to read device counters.

In the case of reinitialization, the number of counters may change,
which would not be taken into account by the reset API callback and
can cause a segmentation fault.

This issue is fixed by allocating the counter array after the
reinitialization.

Fixes: a4193ae3bc4f ("net/mlx5: support extended statistics")
Cc: stable@dpdk.org
Reported-by: Ralf Hoffmann <ralf.hoffmann@allegro-packets.com>
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: optimize counter extend memory
Suanming Mou [Tue, 20 Oct 2020 03:02:28 +0000 (11:02 +0800)]
net/mlx5: optimize counter extend memory

Counter extend memory was allocated for non-batch counters to hold the
extra DevX object. Currently, for a non-batch counter which does not
support aging, the entry in the generic counter struct is used only
while the counter sits in the free list, and the bytes field in the
struct is used only while the counter is allocated and in use.

In this case, the DevX object can be saved in the generic counter
struct, in a union with the entry memory while the counter is allocated
and in a union with the bytes field while the counter is free.
The pool type field is also not needed: since non-fallback mode only
has generic counters and aging counters, a single bit indicating
whether the pool is aged is enough.

This eliminates the counter extend info struct and saves the memory.
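
The space saving comes from union sharing; a simplified sketch with
illustrative field names (not the real mlx5 struct) is:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout: the free-list link is only needed while the
 * counter is free, and the DevX object handle only while it is
 * allocated, so the two can share storage in a union instead of a
 * separate "extend" struct. */
struct counter {
    union {
        uint64_t free_next;   /* valid while the counter is free */
        uint64_t devx_obj_id; /* valid while the counter is in use */
    } u;
    uint64_t bytes;           /* byte count, valid while in use */
    uint8_t aged;             /* one bit replaces a full pool-type field */
};
```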

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: rename flow counter macro
Suanming Mou [Tue, 20 Oct 2020 03:02:27 +0000 (11:02 +0800)]
net/mlx5: rename flow counter macro

Add the MLX5_ prefix to the defined counter macro names.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: make shared counters thread safe
Suanming Mou [Tue, 20 Oct 2020 03:02:26 +0000 (11:02 +0800)]
net/mlx5: make shared counters thread safe

The shared counters save the counter index in a three-level table. As
the three-level table now supports multi-thread operations, the shared
counters can take advantage of it to become thread safe.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: make three level table thread safe
Suanming Mou [Tue, 20 Oct 2020 03:02:25 +0000 (11:02 +0800)]
net/mlx5: make three level table thread safe

This commit adds thread safety support to the three-level table, using
a spinlock and a reference counter for each table entry.

A new mlx5_l3t_prepare_entry() function is added in order to support
multi-thread operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: synchronize flow counter pool creation
Suanming Mou [Tue, 20 Oct 2020 03:02:24 +0000 (11:02 +0800)]
net/mlx5: synchronize flow counter pool creation

Currently, counter operations are not thread safe, as the counter
pools' array resize is not protected.

This commit protects the container pools' array resize with a spinlock.
The original counter pool statistic memory allocation is moved to the
host thread in order to minimize the critical section, since the pool
statistic memory is required only at query time. The container pools'
array is resized by the user threads, and the new pool may be used by
other rte_flow APIs before the host-thread resize is done; if the pool
were not saved to the pools' array, the specified counter memory would
not be found, as the pool would be missing from the counter management
pool array. The pool raw statistic memory is filled in the host thread.

The shared counters will be protected in another commit.
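
The protected resize step can be sketched like this (a single-threaded
demo of the critical section's body; the spinlock itself is elided):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct pool { int id; };

struct container {
    struct pool **pools;  /* resizable array of pool pointers */
    unsigned int n;       /* valid entries */
    unsigned int n_alloc; /* capacity */
};

/* Grow the pools array and append a pool. In the driver this whole
 * body runs under the container spinlock so concurrent counter
 * allocation never sees a half-updated array. */
static int container_add_pool(struct container *c, struct pool *p)
{
    if (c->n == c->n_alloc) {
        unsigned int cap = c->n_alloc ? c->n_alloc * 2 : 4;
        struct pool **np = malloc(cap * sizeof(*np));
        if (np == NULL)
            return -1;
        if (c->pools != NULL)
            memcpy(np, c->pools, c->n * sizeof(*np));
        free(c->pools);
        c->pools = np; /* publish only after the copy completes */
        c->n_alloc = cap;
    }
    c->pools[c->n++] = p;
    return 0;
}
```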

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: remove single counter container
Suanming Mou [Tue, 20 Oct 2020 03:02:23 +0000 (11:02 +0800)]
net/mlx5: remove single counter container

A flow counter which was allocated by the batch API could not be
assigned to a flow in the root table (group 0) with old rdma-core
versions. Hence, a root table flow counter required a PMD mechanism to
manage counters which were allocated singly.

Currently, batch counters are supported in the root table with a new
rdma-core version that includes the MLX5_FLOW_ACTION_COUNTER_OFFSET
enum and with a kernel driver that includes the
MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum.

When the PMD uses the rdma-core API to assign a batch counter to a root
table flow using an invalid counter offset, it gets an error only if
batch counter assignment for the root table is supported. Performing
this trial at initialization time can therefore detect the support.

Using the above trial, if the support is present, remove the management
of the single counter container from the fast counter mechanism.
Otherwise, move the counter mechanism to fallback mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: optimize shared counter memory
Suanming Mou [Tue, 20 Oct 2020 03:02:22 +0000 (11:02 +0800)]
net/mlx5: optimize shared counter memory

Instead of using special memory to indicate a shared counter, this
patch uses the reserved memory of the counter handler to indicate it.
A counter index with MLX5_CNT_SHARED_OFFSET set means a shared counter.

This patch also prepares for a new adjustment to use batch counters as
shared counters.
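
Encoding "shared" as a bit in the counter index can be sketched as
follows; the constant's value here is illustrative, not the real
MLX5_CNT_SHARED_OFFSET:

```c
#include <assert.h>
#include <stdint.h>

/* A high bit in the counter index marks a shared counter, so no extra
 * per-counter memory is needed. */
#define CNT_SHARED_BIT (UINT32_C(1) << 29)

static uint32_t cnt_make_shared(uint32_t idx) { return idx | CNT_SHARED_BIT; }
static int cnt_is_shared(uint32_t idx) { return (idx & CNT_SHARED_BIT) != 0; }
static uint32_t cnt_plain_index(uint32_t idx) { return idx & ~CNT_SHARED_BIT; }
```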

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/mlx5: locate aging pools in the general container
Suanming Mou [Tue, 20 Oct 2020 03:02:21 +0000 (11:02 +0800)]
net/mlx5: locate aging pools in the general container

Commit [1] introduced a different container for the aging counter
pools. In order to save container memory, the aging counter pools can
be located in the general pool container.

This patch locates the aging counter pools in the general pool
container and removes the aging container management.

[1] commit fd143711a6ea ("net/mlx5: separate aging counter pool range")

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years agonet/bnxt: fix xstats by id
Ferruh Yigit [Tue, 16 Jun 2020 15:36:13 +0000 (16:36 +0100)]
net/bnxt: fix xstats by id

The xstats by id device operations seem wrong: they fill the 'xstats'
struct via a 'bnxt_dev_xstats_get_op()' call, but the retrieved values
are not transferred to the user-provided 'values' array.

The ethdev layer's 'rte_eth_xstats_get_by_id()' and
'rte_eth_xstats_get_names_by_id()' already provide "by id" support when
the device operations are missing.
It is good for a PMD to provide these device operations if it has a
more performant way to get the values by id, but the current
implementation in the PMD already does the same thing as the ethdev
APIs, so removing them keeps the same functionality.
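
The generic fallback that the ethdev layer provides can be sketched
as: fetch all xstats, then pick out the requested ids (fake stat
values stand in for the driver callback):

```c
#include <assert.h>
#include <stdint.h>

#define N_XSTATS 16

/* Fake "get all xstats" driver callback: stat i has value i * 100. */
static int get_all_xstats(uint64_t *vals, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        vals[i] = (uint64_t)i * 100;
    return (int)n;
}

/* Generic "by id" fallback: fetch everything, then copy out only the
 * requested ids into the user-provided values array. */
static int xstats_get_by_id(const uint64_t *ids, uint64_t *values,
                            unsigned int n_ids)
{
    uint64_t all[N_XSTATS];
    int total = get_all_xstats(all, N_XSTATS);

    for (unsigned int i = 0; i < n_ids; i++) {
        if (ids[i] >= (uint64_t)total)
            return -1; /* invalid id */
        values[i] = all[ids[i]];
    }
    return (int)n_ids;
}
```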

Fixes: 88920136688c ("net/bnxt: support xstats get by id")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
4 years agonet/bnxt: fix queue release
Somnath Kotur [Tue, 20 Oct 2020 04:11:18 +0000 (09:41 +0530)]
net/bnxt: fix queue release

Some of the ring-related memory was not being freed in both release
ops. Fix to free it now.
Add some more NULL pointer checks in the corresponding
queue_release_mbufs() and queue_release_op() respectively.
Also call queue_release_op() in the error path of the corresponding
queue_setup_op().

Fixes: 6133f207970c ("net/bnxt: add Rx queue create/destroy")
Fixes: 51c87ebafc7d ("net/bnxt: add Tx queue create/destroy")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
4 years agodoc: advertise flow API transfer rules support in sfc
Andrew Rybchenko [Tue, 20 Oct 2020 09:13:42 +0000 (10:13 +0100)]
doc: advertise flow API transfer rules support in sfc

Transfer rules support matching on various inner and outer packet
headers and on traffic source items like PORT_ID, PHY_PORT, PF and VF,
as well as actions to route traffic to a destination (PORT_ID,
PHY_PORT, PF, VF or DROP), MARK, FLAG, and VLAN push/pop
transformations.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
4 years agonet/sfc: support encap flow items in transfer rules
Ivan Malov [Tue, 20 Oct 2020 09:13:41 +0000 (10:13 +0100)]
net/sfc: support encap flow items in transfer rules

Add support for flow items VXLAN, Geneve and NVGRE to
MAE-specific RTE flow implementation.

Having support for these items implies the ability to insert
so-called outer MAE rules and refer to them in MAE action rules.
The patch takes care of all necessary facilities to do that.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
4 years agocommon/sfc_efx/base: support outer rule provisioning
Ivan Malov [Tue, 20 Oct 2020 09:13:40 +0000 (10:13 +0100)]
common/sfc_efx/base: support outer rule provisioning

Let the client insert / remove outer rules.
Let the client refer to an inserted outer rule in a match
specification of type ACTION.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
4 years agocommon/sfc_efx/base: validate and compare outer match specs
Ivan Malov [Tue, 20 Oct 2020 09:13:39 +0000 (10:13 +0100)]
common/sfc_efx/base: validate and compare outer match specs

Let the client validate an outer match specification.
Let the client compare the classes of two outer match specifications.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
4 years agocommon/sfc_efx/base: add API to compare match specs
Ivan Malov [Tue, 20 Oct 2020 09:13:38 +0000 (10:13 +0100)]
common/sfc_efx/base: add API to compare match specs

Match specification format and its size are not exposed to clients.
Provide an API to compare two match specifications.

A client would typically use this API to compare a match specification
of an outer rule being validated with match specifications of already
active outer rules (to make sure that rule class is supported).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
4 years agocommon/sfc_efx/base: add MAE match field VNET ID for tunnels
Ivan Malov [Tue, 20 Oct 2020 09:13:37 +0000 (10:13 +0100)]
common/sfc_efx/base: add MAE match field VNET ID for tunnels

Add an MCDI-compatible enumeration for this field and
provide the necessary mappings for it to be inserted
directly into the mask-value pairs buffer.

VNET_ID can be used to serve the following match fields:
rte_flow_item_vxlan.vni, rte_flow_item_geneve.vni,
rte_flow_item_nvgre.tni

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
4 years agocommon/sfc_efx/base: add MAE encap match fields
Ivan Malov [Tue, 20 Oct 2020 09:13:36 +0000 (10:13 +0100)]
common/sfc_efx/base: add MAE encap match fields

Add MCDI-compatible enumerations for these fields and
provide the necessary mappings for them to be inserted
directly into the mask-value pairs buffer.

These fields are meant to comprise a so-called outer
match specification; provide the necessary definitions.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>