dpdk.git
4 years ago  net/ice/base: filter for GTPU outer IP without inner IP
Qi Zhang [Wed, 26 Aug 2020 12:21:19 +0000 (20:21 +0800)]
net/ice/base: filter for GTPU outer IP without inner IP

Add ptype MAC_IPV4_GTPU into
ice_ptypes_ipv4_ofos, ice_ptypes_ipv4_ofos_all and ice_ipv4_ofos_no_l4

Add ptype MAC_IPV6_GTPU into
ice_ptypes_ipv6_ofos, ice_ptypes_ipv6_ofos_all and ice_ipv6_ofos_no_l4

Add ptype MAC_IPV4_GTPU and MAC_IPV6_GTPU into
the new ice_ptypes_gtpu_no_ip

So the outer IP can be configured as an input set for GTPU packets that
have no inner IP layer.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: support outer IP filter for GTPC
Qi Zhang [Wed, 26 Aug 2020 12:15:52 +0000 (20:15 +0800)]
net/ice/base: support outer IP filter for GTPC

Add ptype MAC_IPV4_GTPC_TEID and MAC_IPV4_GTPC into
ice_ptypes_ipv4_ofos, ice_ptypes_ipv4_ofos_all and ice_ipv4_ofos_no_l4

Add ptype MAC_IPV6_GTPC_TEID and MAC_IPV6_GTPC into
ice_ptypes_ipv6_ofos, ice_ptypes_ipv6_ofos_all and ice_ipv6_ofos_no_l4

So the outer IP can be configured as an input set for GTPC packets.

Also add MAC_IPV4_GTPC_TEID and MAC_IPV6_GTPC_TEID into
ice_ptypes_gtpc, so when ICE_FLOW_SEG_HDR_GTPC is requested, it can
take effect on all GTPC packets (with or without TEID).

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: refactor DCB related variables
Qi Zhang [Wed, 26 Aug 2020 12:05:49 +0000 (20:05 +0800)]
net/ice/base: refactor DCB related variables

In this patch, the DCB-related variables are refactored out of the
ice_port_info structure. The goal is to make the ice_port_info struct
cleaner.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: reduce profile to recip info get from firmware
Qi Zhang [Wed, 26 Aug 2020 10:39:47 +0000 (18:39 +0800)]
net/ice/base: reduce profile to recip info get from firmware

Only get the profile_to_recip info from firmware for the profiles used
by the switch; there is no need to do so for the other free profiles.
This reduces the time consumed when downloading a switch rule.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: introduce Tx rate limiting on port level
Qi Zhang [Wed, 26 Aug 2020 10:30:23 +0000 (18:30 +0800)]
net/ice/base: introduce Tx rate limiting on port level

The PSM configuration has a rate limiter for each associated switch
port, based on its relative speed out of the total bandwidth of the
switch ports connected to the LAN controller. The rate limiters get
dynamically readjusted if switch port speeds are changed, at the root
node layer of the scheduler tree. Add a function to directly modify
the EIR of the root node.

Signed-off-by: Shibin Koikkara Reeny <shibin.koikkara.reeny@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: join format strings to same line
Qi Zhang [Wed, 26 Aug 2020 10:24:26 +0000 (18:24 +0800)]
net/ice/base: join format strings to same line

When printing messages with ice_debug, align the printed string to the
origin line of the message in order to ease debugging and tracking
messages back to their source.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: support GTP-U type switch rule
Qi Zhang [Wed, 26 Aug 2020 10:16:19 +0000 (18:16 +0800)]
net/ice/base: support GTP-U type switch rule

This patch adds support for the GTP-U type of switch rule.
It enables all GTP-U related ptypes.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: add AQ LLDP filter control command
Qi Zhang [Wed, 26 Aug 2020 10:09:07 +0000 (18:09 +0800)]
net/ice/base: add AQ LLDP filter control command

As of NVM version 1.7.1 there is a new AQ command to add and remove
LLDP filters for the Rx flow. This patch adds the supporting
structures that implement this functionality.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: fix abbreviations
Qi Zhang [Wed, 26 Aug 2020 09:52:43 +0000 (17:52 +0800)]
net/ice/base: fix abbreviations

Correct abbreviations as identified by abbrevcheck

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: introduce and use for each bit iterator
Qi Zhang [Wed, 26 Aug 2020 09:34:31 +0000 (17:34 +0800)]
net/ice/base: introduce and use for each bit iterator

A number of code flows iterate over a block of memory to do something
for every bit set in that memory. Use existing bit operations in a new
iterator macro to make those code flows cleaner.
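
For illustration, a minimal self-contained sketch of what such a
for-each-set-bit iterator macro can look like; the helper and macro names
below are made up for this example and are not the ones used in the ice
base code:

    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Index of the next set bit at or after 'start', or 'size' if none. */
    static unsigned int next_set_bit(const unsigned long *bitmap,
                                     unsigned int size, unsigned int start)
    {
        unsigned int i;

        for (i = start; i < size; i++)
            if (bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
                return i;
        return size;
    }

    /* Run the loop body once for every set bit in the bitmap. */
    #define for_each_set_bit(bit, bitmap, size) \
        for ((bit) = next_set_bit((bitmap), (size), 0); \
             (bit) < (size); \
             (bit) = next_set_bit((bitmap), (size), (bit) + 1))

    int main(void)
    {
        unsigned long bm[1] = { 0x29 }; /* bits 0, 3 and 5 set */
        unsigned int bit;

        for_each_set_bit(bit, bm, 8 * sizeof(bm))
            printf("bit %u is set\n", bit);
        return 0;
    }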

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: add function header
Qi Zhang [Wed, 26 Aug 2020 08:57:20 +0000 (16:57 +0800)]
net/ice/base: add function header

Add a function header for ice_cfg_phy_fc()

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: introduce and use bitmap hamming weight API
Qi Zhang [Wed, 26 Aug 2020 08:49:14 +0000 (16:49 +0800)]
net/ice/base: introduce and use bitmap hamming weight API

Introduce ice_bitmap_hweight() and use it instead of open-coding that
functionality.
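
A rough sketch of the idea behind a bitmap hamming weight helper
(illustrative names, not the actual base-code implementation):

    #include <stdio.h>

    /* Portable popcount of one word (__builtin_popcountl would also do). */
    static unsigned int popcount_long(unsigned long w)
    {
        unsigned int n = 0;

        while (w) {
            w &= w - 1; /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    /* Hamming weight of a bitmap spanning 'nwords' unsigned longs. */
    static unsigned int bitmap_hweight(const unsigned long *bitmap,
                                       unsigned int nwords)
    {
        unsigned int i, total = 0;

        for (i = 0; i < nwords; i++)
            total += popcount_long(bitmap[i]);
        return total;
    }

    int main(void)
    {
        unsigned long bm[2] = { 0xF0UL, 0x3UL }; /* 4 + 2 = 6 bits set */

        printf("hweight = %u\n", bitmap_hweight(bm, 2));
        return 0;
    }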

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: introduce and use bitmap set API
Qi Zhang [Wed, 26 Aug 2020 07:04:26 +0000 (15:04 +0800)]
net/ice/base: introduce and use bitmap set API

Introduce ice_bitmap_set() and use it instead of open-coding that
functionality.
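
A rough sketch of the idea (illustrative names, not the actual base-code
implementation): set a contiguous range of bits in one helper instead of
open-coding the loop at each call site:

    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Set 'count' bits starting at 'pos' (simple bit-by-bit version). */
    static void bitmap_set(unsigned long *bitmap, unsigned int pos,
                           unsigned int count)
    {
        unsigned int i;

        for (i = pos; i < pos + count; i++)
            bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
    }

    int main(void)
    {
        unsigned long bm[1] = { 0 };

        bitmap_set(bm, 4, 3);     /* sets bits 4, 5 and 6 */
        printf("0x%lx\n", bm[0]); /* prints 0x70 */
        return 0;
    }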

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: replace single-element array hack
Qi Zhang [Wed, 26 Aug 2020 06:57:07 +0000 (14:57 +0800)]
net/ice/base: replace single-element array hack

Convert the pre-C90-extension "C struct hack" method (using a single-
element array at the end of a structure for implementing variable-length
types) to the preferred use of C99 flexible array member.
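
For illustration only (a generic example, not the driver's actual
structures), the conversion looks roughly like this:

    #include <stdlib.h>

    /* Old "struct hack": one-element array at the end of the struct. */
    struct item_list_old {
        unsigned int count;
        unsigned int items[1]; /* really 'count' entries */
    };

    /* Preferred C99 flexible array member. */
    struct item_list_new {
        unsigned int count;
        unsigned int items[];  /* flexible array member */
    };

    int main(void)
    {
        unsigned int n = 8;
        /* No "- 1" fudge factor is needed in the size calculation. */
        struct item_list_new *l = malloc(sizeof(*l) + n * sizeof(l->items[0]));

        if (l == NULL)
            return 1;
        l->count = n;
        free(l);
        return 0;
    }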

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: silence static analysis warning
Qi Zhang [Wed, 26 Aug 2020 06:44:33 +0000 (14:44 +0800)]
net/ice/base: silence static analysis warning

Sparse warns about these casts to/from restricted types which are not
actual problems; silence the warnings.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: cleanup misleading comment
Qi Zhang [Wed, 26 Aug 2020 06:42:19 +0000 (14:42 +0800)]
net/ice/base: cleanup misleading comment

The maximum Admin Queue buffer size and NVM shadow RAM sector size are
both 4 Kilobytes. Some comments refer to those as 4Kb which can be
confused with 4 Kilobits.
Update the comments to use the commonly used KB symbol instead.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: clean code wrapping
Qi Zhang [Wed, 26 Aug 2020 06:34:04 +0000 (14:34 +0800)]
net/ice/base: clean code wrapping

To make the wrapping a little cleaner, move the variables only applicable
to ICE_FC_AUTO into that case. Also move caching of the value to only occur
on success.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: cleanup stack hog
Qi Zhang [Wed, 26 Aug 2020 05:51:55 +0000 (13:51 +0800)]
net/ice/base: cleanup stack hog

In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size hogging stack space when allocated there.
Hogging stack space should be avoided. Change allocation to be on the
heap when needed.
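
A hedged before/after sketch of the pattern with generic names (not the
ice code itself):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    struct big_params { char blob[8192]; }; /* stand-in for a large struct */

    /* Before: the large structure lives on the caller's stack. */
    static int do_work_on_stack(void)
    {
        struct big_params params; /* hogs 8 KB of stack */

        memset(&params, 0, sizeof(params));
        return 0;
    }

    /* After: allocate the structure on the heap only when needed. */
    static int do_work_on_heap(void)
    {
        struct big_params *params = calloc(1, sizeof(*params));

        if (params == NULL)
            return -ENOMEM;
        /* ... use *params ... */
        free(params);
        return 0;
    }

    int main(void)
    {
        return do_work_on_stack() | do_work_on_heap();
    }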

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: fix issues around move nodes
Qi Zhang [Wed, 26 Aug 2020 05:39:35 +0000 (13:39 +0800)]
net/ice/base: fix issues around move nodes

1. Fix the max children check when moving the last (8th) child. This
   allows the parent node to hold 8 children instead of 7.
2. Check whether the VSI is already part of the given aggregator subtree
   before moving it.

Fixes: 29a0c11489ef ("net/ice/base: clean code")
Cc: stable@dpdk.org
Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: avoid single-member variable-length structs
Qi Zhang [Wed, 26 Aug 2020 05:23:48 +0000 (13:23 +0800)]
net/ice/base: avoid single-member variable-length structs

There are a number of structures that consist of a one-element array as the
only struct member.  Some of those are unused (ice_aqc_add_get_recipe_data,
ice_aqc_get_port_options_data, ice_aqc_dis_txq, etc.) so remove them.
Others are used to index into a buffer/array consisting of a variable
number of a different data or structure type.  Those are unnecessary since
we can use simple pointer arithmetic or index directly into the buffer to
access individual elements of the buffer/array.

Additional code cleanups were done near areas affected by this change.
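
An illustrative sketch with generic types (not the actual ice structures)
of indexing a response buffer directly instead of going through a wrapper
struct whose only member is a one-element array:

    #include <stdio.h>

    struct entry {
        unsigned int id;
        unsigned int flags;
    };

    /* Old style (removed): a wrapper such as
     *     struct entry_buf { struct entry entries[1]; };
     * accessed as ((struct entry_buf *)buf)->entries[i].
     */

    /* New style: index the buffer directly. */
    static void dump_entries(const void *buf, unsigned int count)
    {
        const struct entry *e = (const struct entry *)buf;
        unsigned int i;

        for (i = 0; i < count; i++)
            printf("entry %u: id=%u flags=0x%x\n", i, e[i].id, e[i].flags);
    }

    int main(void)
    {
        struct entry entries[2] = { { 1, 0x1 }, { 2, 0x2 } };

        dump_entries(entries, 2);
        return 0;
    }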

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: split capabilities discovering
Qi Zhang [Wed, 26 Aug 2020 03:20:01 +0000 (11:20 +0800)]
net/ice/base: split capabilities discovering

Using the new ice_aq_list_caps and ice_parse_(dev|func)_caps functions,
replace ice_discover_caps with two functions that each take a pointer to
the dev_caps and func_caps structures respectively.

This makes the side effect of updating the hw->dev_caps and
hw->func_caps obvious from reading the implementation of the function.
Additionally, it opens the way for enabling reading of device
capabilities outside of the initialization flow. By passing in
a pointer, another caller will be able to read the capabilities without
modifying the hw capabilities structures.

As there are no other callers, it is safe to now remove
ice_aq_discover_caps and ice_parse_caps.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/ice/base: handle error gracefully in HW table calloc
Qi Zhang [Wed, 26 Aug 2020 03:14:07 +0000 (11:14 +0800)]
net/ice/base: handle error gracefully in HW table calloc

In the ice_init_hw_tbls API, if the ice_calloc for es->written
fails, catch that error and bail out gracefully, instead of
continuing with a NULL pointer.

Signed-off-by: Surabhi Boob <surabhi.boob@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/bnxt: improve vector Tx
Lance Richardson [Wed, 9 Sep 2020 15:57:30 +0000 (11:57 -0400)]
net/bnxt: improve vector Tx

Improve performance of vector burst transmit function by processing
multiple packets per inner loop iteration.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: handle multiple packets per loop in vector Rx
Lance Richardson [Wed, 9 Sep 2020 15:57:17 +0000 (11:57 -0400)]
net/bnxt: handle multiple packets per loop in vector Rx

Process four receive descriptors per inner loop in vector mode
burst receive functions.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: optimize vector path mbuf allocation
Lance Richardson [Wed, 9 Sep 2020 15:57:00 +0000 (11:57 -0400)]
net/bnxt: optimize vector path mbuf allocation

Simplify and optimize receive mbuf allocation function used
by the vector mode PMDs.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: use table based mbuf flags handling
Lance Richardson [Wed, 9 Sep 2020 15:53:02 +0000 (11:53 -0400)]
net/bnxt: use table based mbuf flags handling

Use table to translate receive descriptor status flags to
rte_mbuf ol_flags values.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: use table based packet type translation
Lance Richardson [Wed, 9 Sep 2020 15:53:01 +0000 (11:53 -0400)]
net/bnxt: use table based packet type translation

Use table-based method for translating receive packet descriptor
flags into rte_mbuf packet type values.
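
A hedged sketch of the table-based approach with a made-up flag layout
(the real bnxt descriptor format differs): index a precomputed table with
the relevant descriptor bits instead of walking a chain of conditionals:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 2-bit field in the Rx descriptor flags. */
    #define DESC_PTYPE_MASK 0x3u

    /* Placeholder packet-type codes standing in for RTE_PTYPE_* values. */
    enum { PT_UNKNOWN, PT_IPV4_TCP, PT_IPV4_UDP, PT_IPV6_TCP };

    static const uint32_t ptype_table[4] = {
        [0] = PT_UNKNOWN,
        [1] = PT_IPV4_TCP,
        [2] = PT_IPV4_UDP,
        [3] = PT_IPV6_TCP,
    };

    static uint32_t desc_flags_to_ptype(uint32_t desc_flags)
    {
        return ptype_table[desc_flags & DESC_PTYPE_MASK];
    }

    int main(void)
    {
        /* prints 2 (PT_IPV4_UDP) */
        printf("%u\n", (unsigned int)desc_flags_to_ptype(0x2));
        return 0;
    }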

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: increase max burst size for vector path
Lance Richardson [Wed, 9 Sep 2020 15:53:00 +0000 (11:53 -0400)]
net/bnxt: increase max burst size for vector path

Increase the maximum supported burst size for the bnxt vector
mode PMD from 32 to 64.

With larger burst sizes, per-burst overhead is amortized over more
packets, improving overall performance. For small packets this has
been measured to provide a 4-10% increase in single-core throughput
with testpmd iofwd.

Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: reduce CQ queue size without aggregation ring
Lance Richardson [Wed, 9 Sep 2020 15:52:59 +0000 (11:52 -0400)]
net/bnxt: reduce CQ queue size without aggregation ring

Don't allocate extra completion queue entries for aggregation
ring when aggregation ring will not be used.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: improve small ring sizes support
Lance Richardson [Wed, 9 Sep 2020 15:52:58 +0000 (11:52 -0400)]
net/bnxt: improve small ring sizes support

Improve support for small ring sizes:
   - Ensure that transmit free threshold is no more than 1/4 ring size.
   - Ensure that receive free threshold is no more than 1/4 ring size.
   - Validate requested ring sizes against minimum supported size.
   - Use rxq receive free threshold instead of fixed maximum burst
     size to trigger bulk receive buffer allocation.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: require async completion ring for vector path
Lance Richardson [Wed, 9 Sep 2020 15:52:57 +0000 (11:52 -0400)]
net/bnxt: require async completion ring for vector path

Disable support for vector mode when async completions can be placed
in a receive completion ring and change the default for all platforms
to use a dedicated async completion ring.

Simplify completion handling in vector mode receive paths now that
it no longer needs to handle async completions.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: use appropriate type for Rx ring
Lance Richardson [Wed, 9 Sep 2020 15:52:56 +0000 (11:52 -0400)]
net/bnxt: use appropriate type for Rx ring

Change the type of the software receive mbuf ring from an array
of structures containing an mbuf pointer to an array of pointers
to struct rte_mbuf for consistency with how this ring is currently
used by the vector mode receive function.

Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: fix getting burst mode for Arm
Lance Richardson [Wed, 9 Sep 2020 15:52:54 +0000 (11:52 -0400)]
net/bnxt: fix getting burst mode for Arm

The transmit and receive burst mode get operations incorrectly return
"Vector SSE" on ARM64 platforms; change them to return "Vector Neon"
instead.

Fixes: 398358341419 ("net/bnxt: support NEON")
Cc: stable@dpdk.org
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
4 years ago  net/bnxt: fix freeing mbuf
Yunjian Wang [Sat, 5 Sep 2020 09:36:53 +0000 (17:36 +0800)]
net/bnxt: fix freeing mbuf

We should use rte_pktmbuf_free() instead of rte_free() to free the mbuf.

Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
4 years ago  net/bnxt: remove logically dead code
Yunjian Wang [Sat, 5 Sep 2020 09:36:42 +0000 (17:36 +0800)]
net/bnxt: remove logically dead code

This patch removes logically dead code reported by coverity.

Coverity issue: 360824
Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
4 years ago  net/iavf: fix command after PF reset
Jeff Guo [Fri, 11 Sep 2020 10:18:48 +0000 (18:18 +0800)]
net/iavf: fix command after PF reset

If the PF reset is finished but the VF reset is still pending, the VF
should not send any invalid commands to the PF. This avoids many
unexpected behaviors that affect robustness.

Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Fixes: 9e03acd726cf ("net/iavf: fix flow access")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Tested-by: Hailin Xu <hailinx.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
4 years ago  common/iavf: add IPv6 prefix protocol header fields
Qi Zhang [Fri, 11 Sep 2020 01:30:38 +0000 (09:30 +0800)]
common/iavf: add IPv6 prefix protocol header fields

Some IPv6 prefix related protocol header fields are defined in this
patch, so that we can use prefix instead of full IPv6 address for RSS.
Ref https://tools.ietf.org/html/rfc6052.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  common/iavf: support GTPC
Qi Zhang [Fri, 11 Sep 2020 01:30:37 +0000 (09:30 +0800)]
common/iavf: support GTPC

Add GTPC header and its field selector.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  common/iavf: save max MTU received from PF
Qi Zhang [Fri, 11 Sep 2020 01:30:36 +0000 (09:30 +0800)]
common/iavf: save max MTU received from PF

Most values from the VIRTCHNL_OP_GET_VF_RESOURCES are stored in the
iavf_hw_capabilities structure. Unfortunately, it seems that
max_mtu was missed. Add this member to the structure and save it when
parsing hw config.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  common/iavf: cleanup virtual channel code
Qi Zhang [Fri, 11 Sep 2020 01:30:35 +0000 (09:30 +0800)]
common/iavf: cleanup virtual channel code

1. Use BIT() to replace <<.
2. Move VIRTCHNL_VF_CAP_DCF to keep the ordering.
3. Align the VC message validation.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  common/iavf: use pad byte to specify MAC type
Qi Zhang [Fri, 11 Sep 2020 01:30:34 +0000 (09:30 +0800)]
common/iavf: use pad byte to specify MAC type

Currently, there is no way for a VF driver to specify that it wants to
change its device/primary unicast MAC address. This makes it
difficult/impossible for the PF driver to track the VF's device/primary
unicast MAC address, which is used for VM/VF reboot and displaying on
the host. Fix this by using 2 bits of a pad byte in the
virtchnl_ether_addr structure so the VF can specify what type of MAC
it's adding/deleting.

Below are the values that should be used by all VF drivers going
forward.

VIRTCHNL_ETHER_ADDR_LEGACY(0):
- The type should only ever be 0 for legacy AVF drivers (i.e.
  drivers that don't support the new type bits). The PF drivers
  will track the VF's device/primary unicast MAC on a best-effort
  basis.

VIRTCHNL_ETHER_ADDR_PRIMARY(1):
- This type should only be used when the VF is changing their
  device/primary unicast MAC. It should be used for both delete
  and add cases related to the device/primary unicast MAC.

VIRTCHNL_ETHER_ADDR_EXTRA(2):
- This type should be used when the VF is adding and/or deleting
  MAC addresses that are not the device/primary unicast MAC. For
  example, extra unicast addresses and multicast addresses
  assuming the PF supports "extra" addresses at all.

If a PF is parsing the type field of the virtchnl_ether_addr, then it
should use the VIRTCHNL_ETHER_ADDR_TYPE_MASK to mask the first two bits
of the type field since 0, 1, and 2 are the only valid values.
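
A simplified sketch of how a PF might read the type bits described above;
the struct layout and the mask value here are illustrative, not the exact
virtchnl definition:

    #include <stdint.h>
    #include <stdio.h>

    #define VIRTCHNL_ETHER_ADDR_LEGACY    0
    #define VIRTCHNL_ETHER_ADDR_PRIMARY   1
    #define VIRTCHNL_ETHER_ADDR_EXTRA     2
    #define VIRTCHNL_ETHER_ADDR_TYPE_MASK 3 /* first two bits of the pad byte */

    struct vc_ether_addr { /* illustrative, not the exact layout */
        uint8_t addr[6];
        uint8_t type;      /* former pad byte */
        uint8_t pad;
    };

    static void pf_handle_addr(const struct vc_ether_addr *a)
    {
        switch (a->type & VIRTCHNL_ETHER_ADDR_TYPE_MASK) {
        case VIRTCHNL_ETHER_ADDR_PRIMARY:
            printf("VF is changing its device/primary unicast MAC\n");
            break;
        case VIRTCHNL_ETHER_ADDR_EXTRA:
            printf("extra unicast or multicast address\n");
            break;
        default: /* VIRTCHNL_ETHER_ADDR_LEGACY */
            printf("legacy VF driver, track the primary MAC best-effort\n");
            break;
        }
    }

    int main(void)
    {
        struct vc_ether_addr a = { { 0 }, VIRTCHNL_ETHER_ADDR_PRIMARY, 0 };

        pf_handle_addr(&a);
        return 0;
    }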

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  net/ice: fix firmware version output
Shougang Wang [Fri, 11 Sep 2020 02:40:41 +0000 (02:40 +0000)]
net/ice: fix firmware version output

The kernel driver shows the firmware version as hex but the ice PMD
shows it as decimal. This patch fixes the issue to make it consistent
with the kernel driver.

Fixes: f9204d8a23c3 ("net/ice: fix firmware version result of ethtool")
Cc: stable@dpdk.org
Signed-off-by: Shougang Wang <shougangx.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
4 years ago  net/netvsc: replace compiler builtin overflow check
Ferruh Yigit [Tue, 8 Sep 2020 10:06:42 +0000 (11:06 +0100)]
net/netvsc: replace compiler builtin overflow check

'__builtin_add_overflow' was added to gcc in version 5; earlier
versions, like gcc 4.8.5 in RHEL7, cause a build error.

Replace the compiler builtin check with an arithmetic check.
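
A hedged sketch of such an arithmetic check (not the exact netvsc code):
instead of __builtin_add_overflow(a, b, &sum), test whether the addition
would wrap before performing it:

    #include <stdint.h>
    #include <stdio.h>

    /* Return non-zero if a + b would overflow a uint32_t, otherwise store
     * the sum.  Same spirit as __builtin_add_overflow(), but it also works
     * on old compilers such as gcc 4.8.5.
     */
    static int add_overflow_u32(uint32_t a, uint32_t b, uint32_t *sum)
    {
        if (a > UINT32_MAX - b)
            return 1;
        *sum = a + b;
        return 0;
    }

    int main(void)
    {
        uint32_t s;

        printf("%d\n", add_overflow_u32(UINT32_MAX, 1, &s)); /* 1: overflow */
        printf("%d\n", add_overflow_u32(40, 2, &s));         /* 0, s == 42 */
        return 0;
    }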

Fixes: 7838d3a6ae7a ("net/netvsc: check for overflow on packet info from host")

Reported-by: Raslan Darawsheh <rasland@mellanox.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
Tested-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
4 years ago  app/testpmd: count empty polls in 5-tuple swap engine
Dharmik Thakkar [Tue, 14 Jul 2020 21:51:08 +0000 (16:51 -0500)]
app/testpmd: count empty polls in 5-tuple swap engine

Enable empty polls in burst stats within 5tswap.c

Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  app/testpmd: enable burst stats for noisy VNF mode
Phil Yang [Tue, 14 Jul 2020 21:51:07 +0000 (16:51 -0500)]
app/testpmd: enable burst stats for noisy VNF mode

Add burst stats for noisy VNF mode.

Fixes: 3c156061b938 ("app/testpmd: add noisy neighbour forwarding mode")
Cc: stable@dpdk.org
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  app/testpmd: add record-burst-stats runtime config
Dharmik Thakkar [Tue, 14 Jul 2020 21:51:05 +0000 (16:51 -0500)]
app/testpmd: add record-burst-stats runtime config

Convert CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS to a
runtime configuration.

Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Tested-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  app/testpmd: add record-core-cycles runtime config
Dharmik Thakkar [Tue, 14 Jul 2020 21:51:03 +0000 (16:51 -0500)]
app/testpmd: add record-core-cycles runtime config

Convert CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES to a
runtime configuration.

Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Tested-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  ethdev: remove underscore prefix from internal API
Ferruh Yigit [Wed, 9 Sep 2020 13:01:48 +0000 (14:01 +0100)]
ethdev: remove underscore prefix from internal API

'_rte_eth_dev_callback_process()' & '_rte_eth_dev_reset()' internal APIs
have an unconventional underscore ('_') prefix.
Although this is not documented, most probably it is to mark them as
internal. Since we have the '__rte_internal' flag to mark this, remove
the '_' from the API names.

For '_rte_eth_dev_reset()', there is already a public API named
'rte_eth_dev_reset()', so renaming '_rte_eth_dev_reset()' to
'rte_eth_dev_internal_reset'.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
4 years ago  ethdev: use hairpin helper functions
Ferruh Yigit [Wed, 9 Sep 2020 13:01:46 +0000 (14:01 +0100)]
ethdev: use hairpin helper functions

Hairpin helper functions were not used by drivers; they were used only
locally in ethdev. They are:
'rte_eth_dev_is_rx_hairpin_queue()'
'rte_eth_dev_is_tx_hairpin_queue()'

Expose them as internal APIs and update the mlx5 driver (the only user
of hairpin) to use them.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
4 years ago  ethdev: mark internal functions
Ferruh Yigit [Wed, 9 Sep 2020 13:01:45 +0000 (14:01 +0100)]
ethdev: mark internal functions

Some ethdev functions are for drivers only, not for applications.

Since the '__rte_internal' tag is available now, mark internal
functions with it and move them to the INTERNAL section in the linker
script.
This is also good for documenting the internal functions.

Some internal APIs seem to be marked as experimental, but it doesn't
make sense to have internal APIs as experimental, so update their tag
and doxygen comments.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
4 years ago  ethdev: make device operations struct private
Ferruh Yigit [Wed, 9 Sep 2020 13:01:44 +0000 (14:01 +0100)]
ethdev: make device operations struct private

Hiding the 'struct eth_dev_ops' from applications.

Removing relevant deprecation notice.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
4 years ago  ethdev: move inline device operations
Ferruh Yigit [Wed, 9 Sep 2020 13:01:43 +0000 (14:01 +0100)]
ethdev: move inline device operations

This patch is a preparation to hide the 'struct eth_dev_ops' from
applications by moving some device operations from 'struct eth_dev_ops'
to 'struct rte_eth_dev'.

Mentioned ethdev APIs are in the data path and implemented as inline
because of performance reasons.

Exposing 'struct eth_dev_ops' to applications is bad because it is a
contract between ethdev and PMDs that does not really need to be known
by applications, and because changes in the struct cause ABI breakages
which they shouldn't.

To be able to both keep the APIs inline and hide the 'struct
eth_dev_ops', move the device operations used in the ethdev inline APIs
into 'struct rte_eth_dev', at the same level as the Rx/Tx burst
functions.

The list of dev_ops moved:
eth_rx_queue_count_t       rx_queue_count;
eth_rx_descriptor_done_t   rx_descriptor_done;
eth_rx_descriptor_status_t rx_descriptor_status;
eth_tx_descriptor_status_t tx_descriptor_status;
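
A rough sketch of the idea with simplified types (not the real ethdev
definitions): the fast-path callback lives directly in the device
structure, so the inline API never has to dereference the private ops
struct:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t (*rx_queue_count_t)(void *rxq);

    /* Simplified stand-in for struct rte_eth_dev after the change: the
     * fast-path callback sits next to the Rx/Tx burst pointers.
     */
    struct eth_dev {
        rx_queue_count_t rx_queue_count;
        void *rx_queues[4];
    };

    /* The API can stay inline without exposing the full ops struct. */
    static inline uint32_t eth_rx_queue_count(struct eth_dev *dev,
                                              uint16_t queue_id)
    {
        return dev->rx_queue_count(dev->rx_queues[queue_id]);
    }

    /* Dummy driver callback for the sketch. */
    static uint32_t dummy_rx_queue_count(void *rxq)
    {
        (void)rxq;
        return 7;
    }

    int main(void)
    {
        struct eth_dev dev = { dummy_rx_queue_count, { 0 } };

        printf("%u\n", (unsigned int)eth_rx_queue_count(&dev, 0));
        return 0;
    }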

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
4 years ago  ethdev: deprecate descriptor status check API
Ferruh Yigit [Wed, 9 Sep 2020 13:01:42 +0000 (14:01 +0100)]
ethdev: deprecate descriptor status check API

Mark the 'rte_eth_rx_descriptor_done()' API as deprecated.
``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
APIs can be used as replacements.

The plan is to remove the API in the 21.11 release.
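
For reference, a hedged usage sketch of the replacement Rx API (assuming
the standard rte_ethdev.h signature; the port/queue/offset handling is
illustrative):

    #include <rte_ethdev.h>

    /* Check how far the NIC has progressed in the given Rx queue. */
    static void check_rx_progress(uint16_t port_id, uint16_t queue_id,
                                  uint16_t offset)
    {
        int status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

        switch (status) {
        case RTE_ETH_RX_DESC_DONE:
            /* the descriptor is done, a packet is waiting at this offset */
            break;
        case RTE_ETH_RX_DESC_AVAIL:
            /* the descriptor is available to the HW, nothing received yet */
            break;
        case RTE_ETH_RX_DESC_UNAVAIL:
        default:
            /* offset out of bounds, held by the driver, or a negative errno */
            break;
        }
    }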

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
4 years ago  ethdev: remove redundant license text
Ferruh Yigit [Thu, 10 Sep 2020 11:08:48 +0000 (12:08 +0100)]
ethdev: remove redundant license text

Redundant BSD-3 license text is removed; the licensing is already
documented by the "SPDX-License-Identifier: BSD-3-Clause" SPDX tag.

Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
4 years ago  ethdev: mark all traffic manager API as experimental
Nithin Dabilpuram [Thu, 10 Sep 2020 10:09:29 +0000 (15:39 +0530)]
ethdev: mark all traffic manager API as experimental

This patch marks all traffic manager API as experimental as
per deprecation notice[1] and discussion[2] mentioned in following
threads.

[1] https://mails.dpdk.org/archives/dev/2020-May/166221.html
[2] https://mails.dpdk.org/archives/dev/2020-April/165364.html

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  net/mlx5: share Rx queue drop action code
Michael Baum [Thu, 3 Sep 2020 10:13:49 +0000 (10:13 +0000)]
net/mlx5: share Rx queue drop action code

Move the Rx queue drop action's similar resource allocations from the
Verbs module to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx queue drop
Michael Baum [Thu, 3 Sep 2020 10:13:48 +0000 (10:13 +0000)]
net/mlx5: separate Rx queue drop

Separate Rx queue drop creation into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: share Rx hash queue code
Michael Baum [Thu, 3 Sep 2020 10:13:47 +0000 (10:13 +0000)]
net/mlx5: share Rx hash queue code

Move the Rx hash queue object's similar resource allocations from the
DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: share Rx queue indirection table code
Michael Baum [Thu, 3 Sep 2020 10:13:46 +0000 (10:13 +0000)]
net/mlx5: share Rx queue indirection table code

Move the Rx indirection table object's similar resource allocations
from the DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: remove indirection table type field
Michael Baum [Thu, 3 Sep 2020 10:13:45 +0000 (10:13 +0000)]
net/mlx5: remove indirection table type field

Once the separation between Verbs and DevX is done using function
pointers, the type field of the indirection table structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx hash queue creation
Michael Baum [Thu, 3 Sep 2020 10:13:44 +0000 (10:13 +0000)]
net/mlx5: separate Rx hash queue creation

Separate Rx hash queue creation into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx indirection table object creation
Michael Baum [Thu, 3 Sep 2020 10:13:43 +0000 (10:13 +0000)]
net/mlx5: separate Rx indirection table object creation

Separate Rx indirection table object creation into both Verbs and DevX
modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: share Rx queue object modification
Michael Baum [Thu, 3 Sep 2020 10:13:42 +0000 (10:13 +0000)]
net/mlx5: share Rx queue object modification

Use new modify_wq functions for Rx object creation in DevX and Verbs
modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx queue object modification
Michael Baum [Thu, 3 Sep 2020 10:13:41 +0000 (10:13 +0000)]
net/mlx5: separate Rx queue object modification

Separate Rx object modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: rearrange creation of WQ and CQ object
Michael Baum [Thu, 3 Sep 2020 10:13:40 +0000 (10:13 +0000)]
net/mlx5: rearrange creation of WQ and CQ object

Rearrangement of WQ and CQ creation for Verbs Rx queue:
1. Rename the allocation function.
2. Reduce the number of arguments that the creation functions receive.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: rearrange creation of RQ and CQ resources
Michael Baum [Thu, 3 Sep 2020 10:13:39 +0000 (10:13 +0000)]
net/mlx5: rearrange creation of RQ and CQ resources

Rearrangement of RQ and CQ resource handling for DevX Rx queue:
1. Rename the allocation function so that it is understood that it
allocates all resources and not just the CQ or RQ.
2. Move the allocation and release of the doorbell into creation and
release functions.
3. Reduce the number of arguments that the creation functions receive.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: share Rx control code
Michael Baum [Thu, 3 Sep 2020 10:13:38 +0000 (10:13 +0000)]
net/mlx5: share Rx control code

Move the Rx object's similar resource allocations and debug logs from
the DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx interrupt handling
Michael Baum [Thu, 3 Sep 2020 10:13:37 +0000 (10:13 +0000)]
net/mlx5: separate Rx interrupt handling

Separate interrupt event handler into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: separate Rx queue object creations
Michael Baum [Thu, 3 Sep 2020 10:13:36 +0000 (10:13 +0000)]
net/mlx5: separate Rx queue object creations

In preparation for Windows OS support, the Verbs operations should be
separated into another file.
This way, the build can easily cut the unsupported Verbs APIs from
the compilation process.

Define operation structure and DevX module in addition to the existing
linux Verbs module.
Separate Rx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: mitigate Rx queue reference counters
Michael Baum [Thu, 3 Sep 2020 10:13:35 +0000 (10:13 +0000)]
net/mlx5: mitigate Rx queue reference counters

The Rx queue structures manage 2 different reference counters per
queue: the rxq_ctrl reference counter and the rxq_obj reference counter.

There is no real need to use two different counters, it just complicates
the release functions.
Remove the rxq_obj counter and use only the rxq_ctrl counter.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: fix types differentiation in Rx queue create
Michael Baum [Thu, 3 Sep 2020 10:13:34 +0000 (10:13 +0000)]
net/mlx5: fix types differentiation in Rx queue create

Rx HW objects can be created by both Verbs and DevX operations.
The management of the 2 types of operations is done directly in the
main flow of the object's creation.

Some arrangements and validations were wrongly done to the irrelevant
type:

1. LRO related validations were done for Verbs type where LRO is not
supported at all.
2. Verbs allocation arrangements were done for DevX operations where it
is not needed.
3. Doorbell destroy was considered for Verbs types where it is
irrelevant.

Adjust the aforementioned points only for the relevant types.

Fixes: e79c9be91515 ("net/mlx5: support Rx hairpin queues")
Fixes: 08d1838f645a ("net/mlx5: implement CQ for Rx using DevX API")
Fixes: 17ed314c6c0b ("net/mlx5: allow LRO per Rx queue")
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: fix Rx queue state update
Michael Baum [Thu, 3 Sep 2020 10:13:33 +0000 (10:13 +0000)]
net/mlx5: fix Rx queue state update

In order to support DevX Rx queue stop and start operations, the state
of the queue should be updated in FW.
The state update PRM command requires to set both the current state and
the new requested state.

The settings of the current state and the new requested state fields
were wrongly switched.

Switch them back to the correct setting.

Fixes: 161d103b231c ("net/mlx5: add queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/mlx5: fix Rx hash queue creation error flow
Michael Baum [Thu, 3 Sep 2020 10:13:32 +0000 (10:13 +0000)]
net/mlx5: fix Rx hash queue creation error flow

The mlx5_hrxq_new function allocates several resources and if one of the
allocations fails, the function jumps to an error label where it
releases all the allocated resources.

When the TIR action creation fails, the hrxq memory is not released,
which can cause a resource leak.

Add an appropriate release to the hrxq pointer in the error flow.

Fixes: 772dc0eb83d3 ("net/mlx5: convert hrxq to indexed")
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
4 years ago  net/ark: remove Tx padding configuration macro
Ed Czeck [Tue, 8 Sep 2020 19:20:18 +0000 (15:20 -0400)]
net/ark: remove Tx padding configuration macro

Replace its behavior with RTE_LIBRTE_ARK_MIN_TX_PKTLEN,
with a default value of 0.
Update documentation as needed.

Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  net/ark: replace compile time log config with runtime
Ed Czeck [Tue, 8 Sep 2020 19:20:17 +0000 (15:20 -0400)]
net/ark: replace compile time log config with runtime

Use ARK_PMD_LOG in place of PMD_DRV_LOG, PMD_DEBUG_LOG, PMD_FUNC_LOG,
PMD_STATS_LOG, PMD_RX_LOG, and PMD_TX_LOG.
Review and adjust log levels and messages as needed.

Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  net/ice: fix flow validation for unsupported patterns
Guinan Sun [Tue, 8 Sep 2020 03:15:05 +0000 (03:15 +0000)]
net/ice: fix flow validation for unsupported patterns

When the OS default package is loaded and the pipeline mode is enabled
by the "pipeline-mode-support=1" option, the wrong parser is selected
for processing, which causes the unsupported patterns
(pppoes/pfcp/l2tpv3/esp/ah) to be validated successfully.
This patch corrects the parser selection issue.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
4 years ago  net/iavf: refactor RSS
Qi Zhang [Fri, 4 Sep 2020 03:33:12 +0000 (11:33 +0800)]
net/iavf: refactor RSS

The current RSS implementation is not easy to scale and maintain.
The patch refactors the code based on the design below:

1. iavf_pattern_match_item->input_set_mask is the superset of
   ETH_RSS_xxx.
2. iavf_pattern_match_item->meta is the virtchnl_proto_hdrs template.
3. iavf_hash_parse_pattern will generate pattern hint.
4. iavf_hash_parse_action will refine the virtchnl_proto_hdrs base on
   pattern hint and ETH_RSS_xxx.
5. The refine process includes
   1) refine field selector of l2, l3, l4.
   2) insert gtpu proto_hdr at the beginning base on pattern hint.
   3) refine field selector for gtpu header.

The patch reduces the code from 4000+ lines to fewer than 1000.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
4 years ago  net/ice/base: fix outer IPv6 packet type table
Qi Zhang [Sun, 6 Sep 2020 13:01:45 +0000 (21:01 +0800)]
net/ice/base: fix outer IPv6 packet type table

ptype 264, 265, 266, 267, 275 should not be set
in ice_ptypes_ipv6_ofos_all.

Fixes: 88824213be8a ("net/ice/base: enable RSS for PFCP/L2TP/ESP/AH")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
4 years ago  net/dpaa: support configuring RSS on runtime
Sachin Saxena [Fri, 4 Sep 2020 08:39:30 +0000 (14:09 +0530)]
net/dpaa: support configuring RSS on runtime

With fmlib (FMCLESS) mode, RSS can now be modified at runtime.
This patch adds support for the RSS update functions.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
4 years ago  net/dpaa: support FMC parser for VSP
Jun Yang [Fri, 4 Sep 2020 08:39:29 +0000 (14:09 +0530)]
net/dpaa: support FMC parser for VSP

The FMC tool generates and saves the setup in a file.
This patch helps parse the /tmp/fmc.bin generated by FMC in order to
set up RXQs for each port in FMC mode.
The parser gets the fqids and vspids from fmc.bin.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  net/dpaa: support virtual storage profile
Jun Yang [Fri, 4 Sep 2020 08:39:28 +0000 (14:09 +0530)]
net/dpaa: support virtual storage profile

This patch adds support for the Virtual Storage Profile (VSP) feature.
With VSP support, when a memory pool is created, the HW buffer pool id,
i.e. the bpid, is not allocated; the bpid is identified by the dpaa
flow create API.
The memory pool of an RX queue is attached to a specific BMan pool
according to the VSP ID when the RX queue is set up.
For fmlib based hash queues, a VSP base ID is assigned to each queue.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  bus/dpaa: add virtual storage profile port init
Hemant Agrawal [Fri, 4 Sep 2020 08:39:27 +0000 (14:09 +0530)]
bus/dpaa: add virtual storage profile port init

This patch adds support to initialize the VSP ports
in the FMAN library.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  bus/dpaa: support shared MAC
Radu Bulie [Fri, 4 Sep 2020 08:39:26 +0000 (14:09 +0530)]
bus/dpaa: support shared MAC

A shared MAC interface is an interface which can be used by both the
kernel and userspace, based on the classification configuration.
It is defined in the DTS with the compatible string
"fsl,dpa-ethernet-shared"; its bpool will be seeded by the DPDK
partition, and it is configured as a netdev by the dpaa Linux eth
driver. User space buffers from the bpool will be kmapped by the kernel.

Signed-off-by: Radu Bulie <radu-andrei.bulie@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  net/dpaa: support FMCless mode
Sachin Saxena [Fri, 4 Sep 2020 08:39:25 +0000 (14:09 +0530)]
net/dpaa: support FMCless mode

This patch uses fmlib to configure the FMAN HW for flow
and distribution configuration, thus optionally avoiding the need
for static FMC tool execution.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  net/dpaa: support VSP in fmlib
Jun Yang [Fri, 4 Sep 2020 08:39:24 +0000 (14:09 +0530)]
net/dpaa: support VSP in fmlib

This patch adds support for VSP (Virtual Storage Profile)
in the fmlib routines.
VSP allows a network interface to be divided into physical
and virtual instance(s).
The concept is very similar to SR-IOV.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  net/dpaa: support fmlib
Hemant Agrawal [Fri, 4 Sep 2020 08:39:23 +0000 (14:09 +0530)]
net/dpaa: support fmlib

The DPAA platform MAC interface is known as FMAN, i.e. the Frame Manager.
There are two ways to control it:
1. Statically configure the queues and classification rules before the
start of the application, using the FMC tool.
2. Dynamically configure it within the application by making fmlib API
calls.

The fmlib or Frame Manager library provides an API on top of the
Frame Manager driver ioctl calls, that provides a user space application
with a simple way to configure driver parameters and PCD
(parse - classify - distribute) rules.

This patch integrates the base fmlib so that various queue config, RSS
and classification related features can be supported on DPAA platform.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
4 years ago  net/hns3: fix out of bounds access
Yunjian Wang [Mon, 7 Sep 2020 01:46:33 +0000 (09:46 +0800)]
net/hns3: fix out of bounds access

This patch fixes (out-of-bounds access) coverity issue.

Coverity issue: 349932
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
4 years ago  net/iavf: downgrade error log
Steve Yang [Fri, 4 Sep 2020 07:29:07 +0000 (07:29 +0000)]
net/iavf: downgrade error log

When receiving unsupported AQ messages, it's taken as an error.
This is not appropriate and triggers too many unnecessary prints.

Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  net/iavf: fix setting of MAC address
Steve Yang [Fri, 4 Sep 2020 07:29:05 +0000 (07:29 +0000)]
net/iavf: fix setting of MAC address

When setting the MAC address, the ethdev layer copies the new mac
address in dev->data->mac_addrs[0] before calling the dev_ops.

Therefore, is_same_ether_addr(mac_addr, dev->data->mac_addrs) was
always true, and the MAC was never set. Remove this test to fix the
issue.

Fixes: 538da7a1cad2 ("net: add rte prefix to ether functions")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  net/iavf: fix port start during configuration restore
Steve Yang [Fri, 4 Sep 2020 07:29:04 +0000 (07:29 +0000)]
net/iavf: fix port start during configuration restore

If configuring VF promiscuous mode is not supported,
return -ENOTSUP error code in .promiscuous_enable/disable dev_ops.
This is to fix the port start during configuration restore,
where if .promiscuous_enable/disable dev_ops exists
and return any value other than -ENOTSUP, start will fail.

Same is done for .allmulticast_enable/disable dev_ops.

Fixes: ca041cd44fcc ("ethdev: change allmulticast callbacks to return status")
Fixes: 9039c8125730 ("ethdev: change promiscuous callbacks to return status")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  net/iavf: fix scattered Rx enabling
Steve Yang [Fri, 4 Sep 2020 07:29:02 +0000 (07:29 +0000)]
net/iavf: fix scattered Rx enabling

No need to add additional vlan tag size for max packet size,
the queue's Rx Max Frame Size (rxq->max_pkt_len) already
includes the vlan header size in iavf.

Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
4 years ago  net/i40e: fix link status
Guinan Sun [Fri, 4 Sep 2020 06:21:54 +0000 (06:21 +0000)]
net/i40e: fix link status

If the PF driver supports the new speed reporting capabilities
then use link_event_adv instead of link_event to get the speed.

Fixes: 2a73125b7041 ("i40evf: fix link info update")
Cc: stable@dpdk.org
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Tested-by: Jiaqi Min <jiaqix.min@intel.com>
4 years ago  net/ice: return unknown speed in status
Ivan Dyukov [Tue, 11 Aug 2020 08:52:25 +0000 (11:52 +0300)]
net/ice: return unknown speed in status

rte_ethdev has declared a new NUM_UNKNOWN speed which
can be used when no speed information is available and the
link is up. NUM_NONE should be returned if the link is down.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  net/i40e: return unknown speed in status
Ivan Dyukov [Tue, 11 Aug 2020 08:52:24 +0000 (11:52 +0300)]
net/i40e: return unknown speed in status

rte_ethdev has declared a new NUM_UNKNOWN speed which
can be used when no speed information is available and the
link is up. NUM_NONE should be returned if the link is down.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
4 years ago  net/ixgbe: return unknown speed in status
Ivan Dyukov [Tue, 11 Aug 2020 08:52:23 +0000 (11:52 +0300)]
net/ixgbe: return unknown speed in status

rte_ethdev has declared a new NUM_UNKNOWN speed which
can be used when no speed information is available.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Wei Zhao <wei.zhao1@intel.com>
4 years ago  ethdev: allow unknown link speed
Thomas Monjalon [Tue, 11 Aug 2020 08:52:20 +0000 (11:52 +0300)]
ethdev: allow unknown link speed

When querying the link information, the link status is
a mandatory major information.
Other boolean values are supposed to be accurate:
- duplex mode (half/full)
- negotiation (auto/fixed)

This API update is making explicit that the link speed information
is optional.
The value ETH_SPEED_NUM_NONE (0) was already part of the API.
The value ETH_SPEED_NUM_UNKNOWN (infinite) is added to cover
two different cases:
- speed is not known by the driver
- device is virtual
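
A hedged sketch of how a driver's link update path can use the two
values; the field names follow struct rte_eth_link, while hw_link_up and
hw_speed_mbps are made-up placeholders for whatever the hardware reports:

    #include <rte_ethdev.h>

    /* Fill in rte_eth_link for a device that may not know its speed. */
    static void fill_link_status(struct rte_eth_link *link, int hw_link_up,
                                 uint32_t hw_speed_mbps)
    {
        if (!hw_link_up) {
            link->link_status = ETH_LINK_DOWN;
            link->link_speed = ETH_SPEED_NUM_NONE;    /* link down: NONE */
            return;
        }
        link->link_status = ETH_LINK_UP;
        link->link_speed = hw_speed_mbps ? hw_speed_mbps
                                         : ETH_SPEED_NUM_UNKNOWN;
    }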

Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Suggested-by: Benoit Ganne <bganne@cisco.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
4 years ago  net/hns3: fix some incomplete command structures
Huisong Li [Tue, 25 Aug 2020 11:53:05 +0000 (19:53 +0800)]
net/hns3: fix some incomplete command structures

The descriptor of the command between firmware and driver consists of
8-byte header and 24-byte data field. The contents sent to firmware are
packaged into a command structure as the data field of command
descriptor.

There are some command structures in the hns3_dcb.h file that are less
than 24 bytes. This patch fixes these incomplete command structures.

Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
4 years ago  net/hns3: fix default MAC address from firmware
Huisong Li [Tue, 25 Aug 2020 11:53:03 +0000 (19:53 +0800)]
net/hns3: fix default MAC address from firmware

Currently, the default MAC address obtained from firmware in the PF
driver is directly used by the .mac_addr_set ops implementation function
when the rte_eth_dev_start API function is executed. At this point, if
the default MAC address isn't a unicast address, setting the default
MAC address to hardware will fail.

So this patch adds the validity check of default MAC addr in PF driver.
We will use a random unicast address, if the default MAC address
obtained from firmware is not a valid unicast address.

In addition, this patch also adjusts the location of processing default
MAC addr in VF driver so as to increase relevance and readability of the
code.

Fixes: eab21776717e ("net/hns3: support setting VF MAC address by PF driver")
Fixes: d51867db65c1 ("net/hns3: add initialization")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
4 years ago  net/hns3: replace max private macro
Huisong Li [Tue, 25 Aug 2020 11:53:01 +0000 (19:53 +0800)]
net/hns3: replace max private macro

This patch uses RTE_MAX function in DPDK lib to replace the private
macro named max_t in driver.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
4 years ago  net/hns3: support maximum 256 flow director counter
Wei Hu (Xavier) [Tue, 25 Aug 2020 11:53:00 +0000 (19:53 +0800)]
net/hns3: support maximum 256 flow director counter

The FDIR counter is used to count the number of FDIR hits; the maximum
number of counters is 128 on Kunpeng 920 and 256 on Kunpeng 930.

The firmware is responsible to allocate counters for different PF
devices, so the available counter number of one PF may be bigger than
128.

Currently, there are two places using the counter in driver:
1. Configure the counter. Driver uses the command whose opcode is
   HNS3_OPC_FD_AD_OP, now we extend one bit to hold the high bit of
   counter-id in the command format.
2. Query the statistic information of the counter. Driver uses the
   command whose opcode is HNS3_OPC_FD_COUNTER_OP, now the command
   already support 16-bit counter-id.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>