'xtrct' and 'xtract' are currently used in the code to shorten 'extract'.
Rename ice_prgm_acl_prof_extrt() to ice_prgm_acl_prof_xtrct() so we don't
have yet another variation of 'extract'.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
The ice flash contains two copies of each of the NVM, Option ROM, and
Netlist modules. Each bank has a pointer word and a size word. In order
to correctly read from the active flash bank, the driver must calculate
the offset manually.
During NVM initialization, read the Shadow RAM control word and
determine which bank is active for each NVM module. Additionally, cache
the size and pointer values for use in calculating the correct offset.
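As a rough illustration only (not the actual driver code), the active-bank
offset for a module could be computed from the cached pointer and size words
along the following lines; the helper name and the assumption that the second
copy follows the first in flash are illustrative:
    /* Hypothetical sketch: byte offset of a module in the active bank.
     * 'bank_ptr' and 'bank_size' are the cached pointer/size words.
     */
    static u32 flash_bank_offset(u32 bank_ptr, u32 bank_size,
                                 bool second_bank_active)
    {
            return second_bank_active ? bank_ptr + bank_size : bank_ptr;
    }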
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 13:35:30 +0000 (21:35 +0800)]
net/ice/base: separate NVM version struct
The ice_nvm_info structure has become somewhat of a dumping ground for
all of the fields related to flash version. It holds the NVM version and
EETRACK id, the OptionROM info structure, the flash size, the ShadowRAM
size, and more.
A future change is going to add the ability to read the NVM version and
EETRACK ID from the inactive NVM bank. To make this simpler, it is
useful to have these NVM version info fields extracted to their own
structure.
Rename ice_nvm_info to ice_flash_info, and create a separate
ice_nvm_info structure that will contain the EETRACK and NVM map
version. Move the netlist_ver structure into ice_flash_info and rename it
to ice_netlist_info for consistency.
Modify the static ice_get_orom_ver_info to take the option rom structure
as a pointer. This makes it more obvious what portion of the hw struct
is being modified. Do the same for ice_get_netlist_ver_info.
Introduce a new ice_get_nvm_ver_info function, which will be similar to
ice_get_orom_ver_info and ice_get_netlist_ver_info, used to keep the NVM
version extraction code co-located.
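An illustrative sketch of the resulting layout (field names are assumptions,
not the exact definitions):
    struct ice_nvm_info {
            u32 eetrack;    /* EETRACK id */
            u8 major;       /* NVM map major version */
            u8 minor;       /* NVM map minor version */
    };

    struct ice_flash_info {
            struct ice_orom_info orom;       /* Option ROM version info */
            struct ice_nvm_info nvm;         /* NVM version and EETRACK */
            struct ice_netlist_info netlist; /* netlist version info */
            u32 flash_size;                  /* available flash size */
    };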
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 13:25:46 +0000 (21:25 +0800)]
net/ice/base: enable QinQ filter for switch advanced rule
Enable the QinQ type filter for switch advanced rules. It supports using
the outer and inner VLAN IDs as input set for rules on both tunnel and
non-tunnel packets, and also supports using the session ID as input set
for PPPoE rules with the QinQ flag in the packet.
Qi Zhang [Wed, 26 Aug 2020 13:06:56 +0000 (21:06 +0800)]
net/ice/base: change misc ACL style
This is a collection of minor ACL style changes including:
- When there is nothing to unroll, return a value directly.
- Return ICE_SUCCESS (0) in cases where an error was previously checked,
so ICE_SUCCESS is the only possible return.
- Remove unnecessary parentheses and newlines.
- Move the unroll of allocations to the end of the function and use goto
on errors to free.
- Fix function header comment style.
- Remove 'else' from an 'if else' condition where both branches return
a value, to reduce indentation.
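A hedged sketch of the unroll-via-goto style described above (function and
variable names are illustrative only, not the actual code):
    static enum ice_status example_alloc(struct ice_hw *hw)
    {
            enum ice_status status;
            void *buf;

            buf = ice_malloc(hw, 64);
            if (!buf)
                    return ICE_ERR_NO_MEMORY; /* nothing to unroll yet */

            /* hypothetical helper that takes ownership of buf on success */
            status = example_attach(hw, buf);
            if (status)
                    goto err_free; /* unroll at the end of the function */

            return ICE_SUCCESS;

    err_free:
            ice_free(hw, buf);
            return status;
    }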
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 13:03:24 +0000 (21:03 +0800)]
net/ice/base: preserve NVM capabilities in safe mode
If the driver initializes in safe mode, it will call
ice_set_safe_mode_caps. This results in clearing the capabilities
structures, in order to set them up for operating in safe mode, ensuring
many features are disabled.
This has a side effect of also clearing the capability bits that relate
to NVM update. The result is that the device driver will not indicate
support for unified update, even if the firmware is capable.
Fix this by adding the relevant capability fields to the list of values
we preserve. To simplify the code, use a common_cap structure instead of
a handful of local variables. To reduce some duplication of the
capability name, introduce a couple of macros used to restore the
capabilities values from the cached copy.
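The restore macros could look roughly like this (purely illustrative; the
real macro names and fields may differ):
    /* Hypothetical: restore one capability field from the cached copy
     * kept across the safe-mode capability reset.
     */
    #define ICE_RESTORE_CAP(dst_caps, cached_caps, field) \
            ((dst_caps)->field = (cached_caps)->field)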
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 12:43:40 +0000 (20:43 +0800)]
net/ice/base: check failed acts allocation
There is no check for failed allocation of 'acts'. Add a check and
return if memory was not successfully allocated. Also, since all 'goto out'
statements occur after this check, there is no need to check 'acts' later,
as we will have returned if it was not set.
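A minimal sketch of the added check (variable names are illustrative):
    acts = ice_calloc(hw, num_acts, sizeof(*acts));
    if (!acts)
            return ICE_ERR_NO_MEMORY; /* no later 'goto out' can run */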
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 12:37:36 +0000 (20:37 +0800)]
net/ice/base: move a function
Move ice_flow_get_hw_prof. This is not necessary for DPDK; it only
syncs the code with other compile options in which ice_flow_get_hw_prof
is declared as a static function.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 12:15:52 +0000 (20:15 +0800)]
net/ice/base: support outer IP filter for GTPC
Add the ptypes MAC_IPV4_GTPC_TEID and MAC_IPV4_GTPC into
ice_ptypes_ipv4_ofos, ice_ptypes_ipv4_ofos_all and ice_ipv4_ofos_no_l4,
and add the ptypes MAC_IPV6_GTPC_TEID and MAC_IPV6_GTPC into
ice_ptypes_ipv6_ofos, ice_ptypes_ipv6_ofos_all and ice_ipv6_ofos_no_l4,
so that the outer IP can be configured as input set for GTPC packets.
Also add MAC_IPV4_GTPC_TEID and MAC_IPV6_GTPC_TEID into
ice_ptypes_gtpc, so when ICE_FLOW_SEG_HDR_GTPC is requested, it can
take effect on all GTPC packets (with or without TEID).
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 10:39:47 +0000 (18:39 +0800)]
net/ice/base: reduce profile to recip info get from firmware
Only get the profile_to_recip info from firmware for profiles used by
the switch; there is no need to do so for the other free profiles.
This reduces the time consumed when downloading a switch rule.
Qi Zhang [Wed, 26 Aug 2020 10:30:23 +0000 (18:30 +0800)]
net/ice/base: introduce Tx rate limiting on port level
The PSM configuration has a rate limiter for each associated switch
port, based on its relative speed within the total BW of the switch
ports connected to the LAN controller. The rate limiters are
dynamically readjusted if switch port speeds are changed at the root
node layer of the scheduler tree. Add a function to directly modify
the EIR of the root node.
Qi Zhang [Wed, 26 Aug 2020 10:24:26 +0000 (18:24 +0800)]
net/ice/base: join format strings to same line
When printing messages with ice_debug, keep the format string on the same
line as the originating call in order to ease debugging and tracking
messages back to their source.
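Illustrative example of the style (the message text is made up); keeping the
whole format string on the line of the ice_debug call lets it be found with a
simple grep:
    ice_debug(hw, ICE_DBG_INIT, "Failed to initialize control queue %d\n", i);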
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 10:09:07 +0000 (18:09 +0800)]
net/ice/base: add AQ LLDP filter control command
As of NVM version 1.7.1 there is a new AQ command to add and remove
LLDP filters for the Rx flow. This patch adds the supporting
structures to implement this functionality.
Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 09:34:31 +0000 (17:34 +0800)]
net/ice/base: introduce and use for each bit iterator
A number of code flows iterate over a block of memory to do something
for every bit set in that memory. Use existing bit operations in a new
iterator macro to make those code flows cleaner.
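A hedged sketch of what such an iterator macro could look like, assuming
helpers along the lines of ice_find_first_bit()/ice_find_next_bit() (the
actual macro and helper names may differ):
    #define ice_for_each_set_bit(bit, bitmap, size) \
            for ((bit) = ice_find_first_bit((bitmap), (size)); \
                 (bit) < (size); \
                 (bit) = ice_find_next_bit((bitmap), (size), (bit) + 1))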
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 06:57:07 +0000 (14:57 +0800)]
net/ice/base: replace single-element array hack
Convert the pre-C90-extension "C struct hack" method (using a single-
element array at the end of a structure for implementing variable-length
types) to the preferred use of C99 flexible array member.
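Illustrative before/after of the conversion (struct and field names are
made up):
    struct example_buf_old {
            __le16 num_elems;
            u16 elems[1];   /* pre-C90-extension "struct hack" */
    };

    struct example_buf_new {
            __le16 num_elems;
            u16 elems[];    /* C99 flexible array member */
    };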
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 06:42:19 +0000 (14:42 +0800)]
net/ice/base: cleanup misleading comment
The maximum Admin Queue buffer size and NVM shadow RAM sector size are
both 4 Kilobytes. Some comments refer to those as 4Kb which can be
confused with 4 Kilobits.
Update the comments to use the commonly used KB symbol instead.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 06:34:04 +0000 (14:34 +0800)]
net/ice/base: clean code wrapping
To make the wrapping a little cleaner, move the variables only applicable
to ICE_FC_AUTO into that case. Also move caching of the value to only occur
on success.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 05:51:55 +0000 (13:51 +0800)]
net/ice/base: cleanup stack hog
In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size hogging stack space when allocated there.
Hogging stack space should be avoided. Change allocation to be on the
heap when needed.
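A hedged sketch of the change (error handling abbreviated; the exact code
may differ):
    struct ice_flow_prof_params *params;

    /* allocate on the heap instead of hogging stack space */
    params = (struct ice_flow_prof_params *)ice_malloc(hw, sizeof(*params));
    if (!params)
            return ICE_ERR_NO_MEMORY;
    /* ... build the profile using *params ... */
    ice_free(hw, params);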
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 05:39:35 +0000 (13:39 +0800)]
net/ice/base: fix issues around move nodes
1. Fix the max children check when moving the last (8th) child. This
allows the parent node to hold 8 children instead of 7.
2. Check whether the VSI is already part of the given aggregator subtree
before moving it.
Fixes: 29a0c11489ef ("net/ice/base: clean code") Cc: stable@dpdk.org Signed-off-by: Victor Raj <victor.raj@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
There are a number of structures that consist of a one-element array as the
only struct member. Some of those are unused (ice_aqc_add_get_recipe_data,
ice_aqc_get_port_options_data, ice_aqc_dis_txq, etc.) so remove them.
Others are used to index into a buffer/array consisting of a variable
number of a different data or structure type. Those are unnecessary since
we can use simple pointer arithmetic or index directly into the buffer to
access individual elements of the buffer/array.
Additional code cleanups were done near areas affected by this change.
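Illustrative example of the replacement pattern (types and names are made
up): instead of wrapping a variable-length response in a one-element-array
struct, index the buffer directly:
    /* 'buf' holds 'count' consecutive response entries */
    struct example_entry *entries = (struct example_entry *)buf;
    u16 i;

    for (i = 0; i < count; i++)
            example_process(&entries[i]); /* hypothetical per-entry handler */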
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 03:20:01 +0000 (11:20 +0800)]
net/ice/base: split capabilities discovering
Using the new ice_aq_list_caps and ice_parse_(dev|func)_caps functions,
replace ice_discover_caps with two functions that each take a pointer to
the dev_caps and func_caps structures respectively.
This makes the side effect of updating the hw->dev_caps and
hw->func_caps obvious from reading the implementation of the function.
Additionally, it opens the way for enabling reading of device
capabilities outside of the initialization flow. By passing in
a pointer, another caller will be able to read the capabilities without
modifying the hw capabilities structures.
As there are no other callers, it is safe to now remove
ice_aq_discover_caps and ice_parse_caps.
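Presumed usage after the split, assuming the new functions are named along
the lines of ice_discover_dev_caps() (exact prototypes may differ): a caller
can read device capabilities into its own structure without modifying
hw->dev_caps:
    struct ice_hw_dev_caps dev_caps;
    enum ice_status status;

    status = ice_discover_dev_caps(hw, &dev_caps);
    if (status)
            return status;
    /* hw->dev_caps is left untouched */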
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 26 Aug 2020 03:14:07 +0000 (11:14 +0800)]
net/ice/base: handle error gracefully in HW table calloc
In the ice_init_hw_tbls API, if the ice_calloc for es->written
fails, catch that error and bail out gracefully, instead of
continuing with a NULL pointer.
Increase the maximum supported burst size for the bnxt vector
mode PMD from 32 to 64.
With larger burst sizes, per-burst overhead is amortized over more
packets, improving overall performance. For small packets this has
been measured to provide a 4-10% increase in single-core throughput
with testpmd iofwd.
Improve support for small ring sizes:
- Ensure that transmit free threshold is no more than 1/4 ring size.
- Ensure that receive free threshold is no more than 1/4 ring size.
- Validate requested ring sizes against minimum supported size.
- Use the rxq receive free threshold instead of a fixed maximum burst
size to trigger bulk receive buffer allocation.
net/bnxt: require async completion ring for vector path
Disable support for vector mode when async completions can be placed
in a receive completion ring and change the default for all platforms
to use a dedicated async completion ring.
Simplify completion handling in vector mode receive paths now that
it no longer needs to handle async completions.
Change the type of the software receive mbuf ring from an array
of structures containing an mbuf pointer to an array of pointers
to struct rte_mbuf for consistency with how this ring is currently
used by the vector mode receive function.
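Illustrative sketch of the software ring type change (names are approximate,
not the actual driver definitions):
    /* before: array of wrapper structs, each holding one mbuf pointer */
    struct example_sw_rx_bd {
            struct rte_mbuf *mbuf;
    };
    struct example_sw_rx_bd *sw_ring_before;

    /* after: plain array of mbuf pointers */
    struct rte_mbuf **sw_ring_after;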
Yunjian Wang [Sat, 5 Sep 2020 09:36:53 +0000 (17:36 +0800)]
net/bnxt: fix freeing mbuf
We should use rte_pktmbuf_free() instead of rte_free() to free the mbuf.
Fixes: 6dc83230b43b ("net/bnxt: support port representor data path") Cc: stable@dpdk.org Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Jeff Guo [Fri, 11 Sep 2020 10:18:48 +0000 (18:18 +0800)]
net/iavf: fix command after PF reset
If the PF reset is finished but the VF reset is still pending, the VF
should not send any invalid commands to the PF. This avoids a mass of
unexpected behaviors affecting robustness.
Some IPv6 prefix-related protocol header fields are defined in this
patch, so that we can use a prefix instead of the full IPv6 address for RSS.
Ref: https://tools.ietf.org/html/rfc6052.
Most values from the VIRTCHNL_OP_GET_VF_RESOURCES are stored in the
iavf_hw_capabilities structure. Unfortunately, it seems that
max_mtu was missed. Add this member to the structure and save it when
parsing hw config.
Currently, there is no way for a VF driver to specify that it wants to
change its device/primary unicast MAC address. This makes it
difficult/impossible for the PF driver to track the VF's device/primary
unicast MAC address, which is used for VM/VF reboot and displaying on
the host. Fix this by using 2 bits of a pad byte in the
virtchnl_ether_addr structure so the VF can specify what type of MAC
it's adding/deleting.
Below are the values that should be used by all VF drivers going
forward.
VIRTCHNL_ETHER_ADDR_LEGACY(0):
- The type should only ever be 0 for legacy AVF drivers (i.e.
drivers that don't support the new type bits). The PF drivers
will track the VF's device/primary unicast MAC on a best-effort
basis.
VIRTCHNL_ETHER_ADDR_PRIMARY(1):
- This type should only be used when the VF is changing their
device/primary unicast MAC. It should be used for both delete
and add cases related to the device/primary unicast MAC.
VIRTCHNL_ETHER_ADDR_EXTRA(2):
- This type should be used when the VF is adding and/or deleting
MAC addresses that are not the device/primary unicast MAC. For
example, extra unicast addresses and multicast addresses
assuming the PF supports "extra" addresses at all.
If a PF is parsing the type field of the virtchnl_ether_addr, then it
should use the VIRTCHNL_ETHER_ADDR_TYPE_MASK to mask the first two bits
of the type field since 0, 1, and 2 are the only valid values.
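Illustrative PF-side use of the mask (variable names are made up):
    u8 type = addr->type & VIRTCHNL_ETHER_ADDR_TYPE_MASK;

    if (type == VIRTCHNL_ETHER_ADDR_PRIMARY) {
            /* VF is changing its device/primary unicast MAC: update
             * the tracked address used for reboot/host display.
             */
    }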
Shougang Wang [Fri, 11 Sep 2020 02:40:41 +0000 (02:40 +0000)]
net/ice: fix firmware version output
The kernel driver shows the firmware version as hex, but the ice PMD shows
it as decimal. This patch fixes the issue to make the output consistent
with the kernel driver.
Fixes: f9204d8a23c3 ("net/ice: fix firmware version result of ethtool") Cc: stable@dpdk.org Signed-off-by: Shougang Wang <shougangx.wang@intel.com> Acked-by: Qiming Yang <qiming.yang@intel.com>
ethdev: remove underscore prefix from internal API
The '_rte_eth_dev_callback_process()' & '_rte_eth_dev_reset()' internal APIs
have an unconventional underscore ('_') prefix.
Although this is not documented, most probably it was meant to mark them as
internal. Since we have the '__rte_internal' tag to mark this, remove the
'_' prefix from the API names.
For '_rte_eth_dev_reset()', there is already a public API named
'rte_eth_dev_reset()', so rename '_rte_eth_dev_reset()' to
'rte_eth_dev_internal_reset'.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> Acked-by: David Marchand <david.marchand@redhat.com> Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
The hairpin helper functions were not used by drivers; they were used only
locally in ethdev. They are:
'rte_eth_dev_is_rx_hairpin_queue()'
'rte_eth_dev_is_tx_hairpin_queue()'
Expose them as internal APIs and update the mlx5 driver (the only user of
hairpin) to use them.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> Acked-by: David Marchand <david.marchand@redhat.com>
Some ethdev functions are for drivers only, not for applications.
Since the '__rte_internal' tag is available now, mark internal
functions with it and move them to the INTERNAL section in the linker
script.
This is also good for documenting the internal functions.
Some internal APIs seem to be marked as experimental, but it doesn't make
sense to have internal APIs as experimental, so update their tags and
doxygen comments.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com> Acked-by: David Marchand <david.marchand@redhat.com>
This patch is a preparation to hide the 'struct eth_dev_ops' from
applications by moving some device operations from 'struct eth_dev_ops'
to 'struct rte_eth_dev'.
The mentioned ethdev APIs are in the data path and are implemented as
inline functions for performance reasons.
Exposing 'struct eth_dev_ops' to applications is undesirable because it is
a contract between ethdev and the PMDs that does not really need to be
known by applications; also, changes in the struct cause ABI breakages
which shouldn't happen.
To be able to both keep the APIs inline and hide 'struct eth_dev_ops',
move the device operations used in the ethdev inline APIs to 'struct
rte_eth_dev', at the same level as the Rx/Tx burst functions (a short
sketch follows the list of moved dev_ops below).
The list of dev_ops moved:
eth_rx_queue_count_t rx_queue_count;
eth_rx_descriptor_done_t rx_descriptor_done;
eth_rx_descriptor_status_t rx_descriptor_status;
eth_tx_descriptor_status_t tx_descriptor_status;
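A hedged sketch of the resulting layout in 'struct rte_eth_dev' (abridged;
not the exact definition):
    struct rte_eth_dev {
            eth_rx_burst_t rx_pkt_burst; /* PMD receive function */
            eth_tx_burst_t tx_pkt_burst; /* PMD transmit function */
            /* dev_ops moved next to the burst functions: */
            eth_rx_queue_count_t rx_queue_count;
            eth_rx_descriptor_done_t rx_descriptor_done;
            eth_rx_descriptor_status_t rx_descriptor_status;
            eth_tx_descriptor_status_t tx_descriptor_status;
            /* ... remaining members unchanged ... */
    };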
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com> Acked-by: David Marchand <david.marchand@redhat.com> Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
Mark the 'rte_eth_rx_descriptor_done()' API as deprecated.
The ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
APIs can be used as replacements.
The plan is to remove the API in the 21.11 release.
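Replacement usage example:
    uint16_t port_id = 0, queue_id = 0, offset = 0; /* example values */
    int status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

    if (status == RTE_ETH_RX_DESC_DONE) {
            /* the descriptor is done; the mbuf is ready to be processed */
    }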
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Acked-by: David Marchand <david.marchand@redhat.com>
Michael Baum [Thu, 3 Sep 2020 10:13:45 +0000 (10:13 +0000)]
net/mlx5: remove indirection table type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the indirection table structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:40 +0000 (10:13 +0000)]
net/mlx5: rearrange creation of WQ and CQ object
Rearrangement of WQ and CQ creation for Verbs Rx queue:
1. Rename the allocation function.
2. Reduce the number of arguments that the creation functions receive.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:39 +0000 (10:13 +0000)]
net/mlx5: rearrange creation of RQ and CQ resources
Rearrangement of RQ and CQ resource handling for DevX Rx queue:
1. Rename the allocation function so that it is understood that it
allocates all resources and not just the CQ or RQ.
2. Move the allocation and release of the doorbell into creation and
release functions.
3. Reduce the number of arguments that the creation functions receive.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:36 +0000 (10:13 +0000)]
net/mlx5: separate Rx queue object creations
In preparation for Windows OS support, the Verbs operations should be
separated into another file.
This way, the build can easily exclude the unsupported Verbs APIs from
the compilation process.
Define an operation structure and a DevX module in addition to the existing
Linux Verbs module.
Separate the Rx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.
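A hedged sketch of such a per-OS operation structure (member names are
illustrative; the real structure contains more callbacks):
    struct mlx5_obj_ops {
            int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx);
            void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj);
    };
    /* selected at startup: Verbs ops on Linux, DevX ops when configured */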
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:35 +0000 (10:13 +0000)]
net/mlx5: mitigate Rx queue reference counters
The Rx queue structures manage two different reference counters per queue:
the rxq_ctrl reference counter and the rxq_obj reference counter.
There is no real need to use two different counters; it just complicates
the release functions.
Remove the rxq_obj counter and use only the rxq_ctrl counter.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:34 +0000 (10:13 +0000)]
net/mlx5: fix types differentiation in Rx queue create
Rx HW objects can be created by both Verbs and DevX operations.
The management of the two types of operations is done directly in the
main flow of the object creation.
Some arrangements and validations were wrongly done for the irrelevant
type:
1. LRO related validations were done for the Verbs type, where LRO is not
supported at all.
2. Verbs allocation arrangements were done for DevX operations, where they
are not needed.
3. Doorbell destroy was considered for the Verbs type, where it is
irrelevant.
Adjust the aforementioned points only for the relevant types.
Fixes: e79c9be91515 ("net/mlx5: support Rx hairpin queues") Fixes: 08d1838f645a ("net/mlx5: implement CQ for Rx using DevX API") Fixes: 17ed314c6c0b ("net/mlx5: allow LRO per Rx queue") Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX") Cc: stable@dpdk.org Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:33 +0000 (10:13 +0000)]
net/mlx5: fix Rx queue state update
In order to support DevX Rx queue stop and start operations, the state
of the queue should be updated in FW.
The state update PRM command requires setting both the current state and
the new requested state.
The current state and new requested state fields were wrongly switched
when being set.
Switch them back to the correct setting.
Fixes: 161d103b231c ("net/mlx5: add queue start and stop") Cc: stable@dpdk.org Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 3 Sep 2020 10:13:32 +0000 (10:13 +0000)]
net/mlx5: fix Rx hash queue creation error flow
The mlx5_hrxq_new function allocates several resources and if one of the
allocations fails, the function jumps to an error label where it
releases all the allocated resources.
When the TIR action creation fails, the hrxq memory is not released, which
can cause a resource leak.
Add an appropriate release of the hrxq pointer to the error flow.
Fixes: 772dc0eb83d3 ("net/mlx5: convert hrxq to indexed") Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX") Cc: stable@dpdk.org Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Ed Czeck [Tue, 8 Sep 2020 19:20:17 +0000 (15:20 -0400)]
net/ark: replace compile time log config with runtime
Use ARK_PMD_LOG in place of PMD_DRV_LOG, PMD_DEBUG_LOG, PMD_FUNC_LOG,
PMD_STATS_LOG, PMD_RX_LOG, and PMD_TX_LOG.
Review and adjust log levels and messages as needed.
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Guinan Sun [Tue, 8 Sep 2020 03:15:05 +0000 (03:15 +0000)]
net/ice: fix flow validation for unsupported patterns
When the OS default package is loaded and pipeline mode is enabled with
the "pipeline-mode-support=1" option, the wrong parser is selected for
processing, which causes unsupported patterns
(pppoes/pfcp/l2tpv3/esp/ah) to be validated successfully.
This patch corrects the parser selection issue.
The current RSS implementation is not easy to scale and maintain.
This patch refactors the code based on the design below:
1. iavf_pattern_match_item->input_set_mask is the superset of
ETH_RSS_xxx.
2. iavf_pattern_match_item->meta is the virtchnl_proto_hdrs template.
3. iavf_hash_parse_pattern generates the pattern hint.
4. iavf_hash_parse_action refines the virtchnl_proto_hdrs based on the
pattern hint and ETH_RSS_xxx.
5. The refine process includes:
1) refining the field selectors of l2, l3 and l4;
2) inserting the gtpu proto_hdr at the beginning based on the pattern hint;
3) refining the field selector for the gtpu header.
The patch reduces the code from 4000+ lines to fewer than 1000.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Jeff Guo <jia.guo@intel.com>
Jun Yang [Fri, 4 Sep 2020 08:39:29 +0000 (14:09 +0530)]
net/dpaa: support FMC parser for VSP
The FMC tool generates and saves the setup in a file.
This patch parses the /tmp/fmc.bin generated by FMC to set up the RXQs
for each port in FMC mode.
The parser gets the fqids and vspids from fmc.bin.
Signed-off-by: Jun Yang <jun.yang@nxp.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Jun Yang [Fri, 4 Sep 2020 08:39:28 +0000 (14:09 +0530)]
net/dpaa: support virtual storage profile
This patch adds support for the Virtual Storage Profile (VSP) feature.
With VSP support, when a memory pool is created, the HW buffer pool id
(bpid) is not allocated; the bpid is identified by the dpaa flow
create API.
The memory pool of an RX queue is attached to a specific BMan pool
according to the VSP ID when the RX queue is set up.
For fmlib based hash queues, the VSP base ID is assigned to each queue.
Signed-off-by: Jun Yang <jun.yang@nxp.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Radu Bulie [Fri, 4 Sep 2020 08:39:26 +0000 (14:09 +0530)]
bus/dpaa: support shared MAC
A shared MAC interface is an interface which can be used by both kernel
and userspace, based on the classification configuration.
It is defined in the dts with the compatible string
"fsl,dpa-ethernet-shared"; its bpool is seeded by the DPDK partition,
and the interface is configured as a netdev by the dpaa Linux Ethernet
driver. User space buffers from the bpool will be kmapped by the kernel.
Signed-off-by: Radu Bulie <radu-andrei.bulie@nxp.com> Signed-off-by: Jun Yang <jun.yang@nxp.com> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch uses fmlib to configure the FMAN HW for flow
and distribution configuration, thus optionally avoiding the need
for static FMC tool execution.
Jun Yang [Fri, 4 Sep 2020 08:39:24 +0000 (14:09 +0530)]
net/dpaa: support VSP in fmlib
This patch adds support for VSP (Virtual Storage Profile)
in fmlib routines.
VSP allows a network interface to be divided into physical
and virtual instance(s).
The concept is very similar to SR-IOV.
Signed-off-by: Jun Yang <jun.yang@nxp.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>