Tal Shnaiderman [Mon, 28 Dec 2020 09:54:23 +0000 (11:54 +0200)]
common/mlx5: add glue functions on Windows
Windows glue functions are added to file mlx5/windows/mlx5_glue.c.
The following APIs are supported:
get_device_list, free_device_list, open_device, close_device,
query_device, query_hca_iseg, devx_obj_create, devx_obj_destroy,
devx_obj_query, devx_obj_modify, devx_general_cmd, devx_umem_reg,
devx_umem_dereg, devx_alloc_uar, devx_free_uar, devx_fs_rule_add,
devx_fs_rule_del, devx_query_eqn
Newly added files:
mlx5_win_defs.h - imports definitions missing on Windows from the Linux
rdma-core library and the Linux OS.
mlx5_win_ext.h - contains structs that enable a unified Linux/Windows
API. Each struct has an equivalent (but different) Linux struct. Since
callers pass 'void *' pointers, the Linux and Windows APIs are
identical.
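As an illustration of the pattern (with hypothetical names, not the actual
mlx5_win_ext.h contents), a Windows-specific struct plays the role of its
Linux counterpart while shared code only ever handles an opaque pointer:

/* Hypothetical sketch of the unified-API pattern, not the real file. */
struct mlx5_win_context {       /* Windows-specific contents */
        void *devx_ctx;         /* DevX device handle */
};

/* Shared prototype used by both OSes; callers never dereference the
 * returned pointer, they only pass it back to other glue calls.
 */
void *mlx5_glue_open_device(void *device);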
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 09:54:20 +0000 (11:54 +0200)]
common/mlx5: add Windows exports file
File drivers/common/mlx5/rte_common_mlx5_exports.def contains the mlx5
symbols exported on Windows under the common/mlx5 directory (DLL file
name librte_common_mlx5*.dll). It is the equivalent of the Linux map
file version.map, but the list of symbols may
differ between the two operating systems.
Tal Shnaiderman [Mon, 28 Dec 2020 09:54:19 +0000 (11:54 +0200)]
common/mlx5: wrap event channel functions per OS
Wrap the APIs that create/destroy an event channel and subscribe to an
event with OS-specific calls. On Linux those calls are implemented by
glue functions, while on Windows they are not supported.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Ophir Munk [Mon, 28 Dec 2020 09:54:18 +0000 (11:54 +0200)]
common/mlx5: wrap memory allocation on Windows
This commit is the Windows equivalent of the Linux implementation. The
APIs included in this commit are mlx5_os_malloc() and mlx5_os_free(). For
memory allocations (with or without alignment) _aligned_malloc() is
always called. Even if zero alignment was requested in the first
place, a minimal alignment value is selected, so that when the
memory is freed it is always safe to call _aligned_free().
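A minimal sketch of the approach (simplified; the real parameter order and
the alignment constant in the mlx5 sources may differ):

#include <stddef.h>
#include <malloc.h>                 /* _aligned_malloc(), _aligned_free() */

#define MLX5_OS_MIN_ALIGN sizeof(void *)   /* assumed minimal alignment */

static inline void *
mlx5_os_malloc(size_t size, size_t align)
{
        if (align < MLX5_OS_MIN_ALIGN)
                align = MLX5_OS_MIN_ALIGN;  /* never fall back to malloc() */
        return _aligned_malloc(size, align);
}

static inline void
mlx5_os_free(void *addr)
{
        _aligned_free(addr);    /* safe: every allocation above was aligned */
}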
Ophir Munk [Mon, 28 Dec 2020 09:54:17 +0000 (11:54 +0200)]
common/mlx5: wrap memory allocation on Linux
The mlx5_malloc() API has an alignment parameter for system memory
allocations: malloc() is called for non-aligned allocations and
posix_memalign() for aligned ones. When calling
mlx5_free() there is no distinction between memory originally
allocated with or without alignment, and freeing memory may be handled
differently by different operating systems. Therefore this commit wraps
these calls with OS-specific APIs: mlx5_os_malloc() and mlx5_os_free().
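On Linux the wrappers are roughly the following (simplified sketch):

#include <stdlib.h>     /* malloc(), free(), posix_memalign() */

static inline void *
mlx5_os_malloc(size_t size, size_t align)
{
        void *addr;

        if (align) {
                if (posix_memalign(&addr, align, size) != 0)
                        return NULL;
                return addr;
        }
        return malloc(size);
}

static inline void
mlx5_os_free(void *addr)
{
        free(addr);     /* valid for both malloc() and posix_memalign() memory */
}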
Ophir Munk [Mon, 28 Dec 2020 09:54:16 +0000 (11:54 +0200)]
common/mlx5: add Verbs usage flag
Add a Verbs file presence indication. On Linux the file
infiniband/verbs.h must be installed to build DPDK, while other
operating systems (e.g. Windows) ignore Verbs completely. This commit
adds the definition HAVE_INFINIBAND_VERBS_H (in file mlx5_autoconf.h) to
indicate whether DPDK is compiled with Verbs or not.
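Shared code can then be guarded in the usual way (illustrative usage only,
not a specific file in the tree):

#include "mlx5_autoconf.h"

#ifdef HAVE_INFINIBAND_VERBS_H
#include <infiniband/verbs.h>   /* Verbs-based path (Linux) */
#else
/* Verbs headers are not available (e.g. Windows); compile the DevX-only
 * path or provide stubs here.
 */
#endif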
Ophir Munk [Mon, 28 Dec 2020 09:54:13 +0000 (11:54 +0200)]
net/mlx5: wrap glue alloc/dealloc PD per OS
Wrap the glue calls alloc_pd() and dealloc_pd() with generic OS calls. On
Linux, protection domain allocation is implemented by the Verbs glue API,
while on Windows it is done by the DevX API.
Ophir Munk [Mon, 28 Dec 2020 09:54:12 +0000 (11:54 +0200)]
net/mlx5: move static asserts to global scope
Some Windows compilers treat static_assert() as a call to another
function rather than as a compiler directive that checks type
information at compile time. This only occurs if the static_assert call
appears inside a function scope. To solve it, move the
static_assert calls to global scope in the files where they are used.
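For example (illustrative types and function name), the assertion is moved
from function scope to file scope:

#include <assert.h>     /* static_assert */

struct a { int x; };    /* illustrative types */
struct b { int y; };

/* At global scope every compiler treats this as a compile-time check. */
static_assert(sizeof(struct a) == sizeof(struct b), "size mismatch");

void
mlx5_example(void)      /* hypothetical function that used to hold the assert */
{
}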
Ophir Munk [Mon, 28 Dec 2020 09:54:10 +0000 (11:54 +0200)]
net/mlx5: define MPRQ functions as static inline
Functions mlx5_check_mprq_support(), mlx5_rxq_mprq_enabled(),
mlx5_mprq_enabled() are moved from source file mlx5_rxq.c to header file
mlx5_rxtx.h and their type is updated to 'static __rte_always_inline'.
Previously the functions were declared as 'inline' in the source file,
which some Windows linkers reported as an 'unresolved external symbol'
error.
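The resulting pattern in the header is roughly as follows (sketch only, with
a hypothetical helper rather than the real function bodies):

#include <rte_common.h>         /* __rte_always_inline */

/* Defining the helper in the header with internal linkage means every
 * translation unit gets its own copy and no external symbol is needed
 * at link time.
 */
static __rte_always_inline int
mprq_enabled_example(unsigned int strd_num)     /* hypothetical helper */
{
        return strd_num > 0;
}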
Ophir Munk [Mon, 28 Dec 2020 09:54:09 +0000 (11:54 +0200)]
net/mlx5: replace Linux sleep
Replace Linux API usleep() and nanosleep() with rte_delay_us_sleep().
The replacement occurs in shared files compiled under different
operating systems.
Ophir Munk [Mon, 28 Dec 2020 09:54:08 +0000 (11:54 +0200)]
net/mlx5: fix freeing packet pacing
Packet pacing is allocated under condition #ifdef HAVE_MLX5DV_PP_ALLOC.
In a similar way, free the packet pacing index under the same condition.
This update is required to compile successfully on operating systems
which do not support packet pacing.
Tal Shnaiderman [Mon, 28 Dec 2020 09:54:05 +0000 (11:54 +0200)]
net/mlx5: fix constant array size
Before this commit the PMD used:
const int elt_n = 8;
const int *stack[elt_n];
On Windows the clang compiler complains:
net/mlx5/mlx5_flow.c:215:19: error: variable length array folded
to constant array as an extension [-Werror,-Wgnu-folding-constant]
Fix it by using a constant macro definition instead of a variable:
#define MLX5_RSS_EXP_ELT_N 8
const int *stack[MLX5_RSS_EXP_ELT_N];
Gregory Etelson [Fri, 11 Dec 2020 14:46:14 +0000 (16:46 +0200)]
net/mlx5: fix tunnel rules validation on VF representor
The MLX5 PMD implicitly adds the vxlan_decap flow action to tunnel offload
match-type rules. However, the VXLAN decap action is not
supported on VF representors by the MLX5 hardware.
The patch rejects attempts to create tunnel offload flow rules on VF
representors.
Refer to commit 9c4971e5231d ("net/mlx5: update VLAN and encap actions validation")
Yuying Zhang [Wed, 6 Jan 2021 10:49:13 +0000 (10:49 +0000)]
net/iavf: support TCP/UDP flow item without input set
This patch adds an input set refinement function to support outer
and inner TCP/UDP patterns without an input set for the flow director filter.
For example:
1. flow create 0 ingress pattern eth / ipv4 / udp / end
actions rss queues 0 1 2 3 end / end
2. flow create 0 ingress pattern eth / ipv6 / tcp / end
actions queue index 3 / end
This patch refines the input set when it is empty and generates
a dummy protocol type as the input set in the L3 header, which is required
by the hardware.
Gregory Etelson [Thu, 26 Nov 2020 16:43:02 +0000 (18:43 +0200)]
app/testpmd: release flows left before port stop
According to the RTE flow user guide, a PMD will not keep flow rules after
port stop. Application resources that refer to flow rules become
obsolete after port stop and must not be used.
Testpmd maintains a linked list of active flows for each port. Entries in
that list are allocated dynamically and must be explicitly released to
prevent a memory leak.
The patch releases the testpmd port flow_list that holds the remaining
flows before the port is stopped.
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Lance Richardson [Fri, 18 Dec 2020 20:28:37 +0000 (15:28 -0500)]
net/bnxt: set correct checksum status in mbuf
The setting of the mbuf ol_flags field for tunneled packets
should be different depending upon whether DEV_RX_OFFLOAD_OUTER_*
offloads are enabled. Initialize ol_flags mappings based on
the receive offload configuration when the receive ring is
initialized.
Ajit Khaparde [Fri, 18 Dec 2020 01:10:54 +0000 (17:10 -0800)]
net/bnxt: remove support for some PCI IDs
As announced in the deprecation notice during the 20.11 release,
remove support for NetXtreme devices belonging to the BCM573xx and
BCM5740x families. Specifically, support for the following Broadcom
PCI device IDs has been removed: 0x16c8, 0x16c9, 0x16ca, 0x16ce, 0x16cf,
0x16df, 0x16d0, 0x16d1, 0x16d2, 0x16d4, 0x16d5, 0x16e7, 0x16e8, 0x16e9.
The deprecation notice has been removed and the 21.02 release notes have
been updated accordingly.
Rx outer UDP checksum offload has been supported for
some time, but it has not been advertised in the offload
capability flags. Fix this, and allow vector mode
receive to be enabled when DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
is requested.
Lance Richardson [Wed, 16 Dec 2020 15:06:18 +0000 (10:06 -0500)]
net/bnxt: fix fallback mbuf allocation logic
Fixes for fallback mbuf allocation logic.
- Preserve raw (unmasked) producer index.
- Iterate over all processed descriptors (representor and
non-representor) when checking allocation status.
- Invoke fallback allocation logic when an allocation
failure has occurred for any received packet, not
just the last.
Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Fixes: d9dd0b29ed31 ("net/bnxt: fix Rx handling and buffer allocation logic")
Fixes: c7de4195cc4c ("net/bnxt: modify ring index logic")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Lance Richardson [Mon, 14 Dec 2020 18:56:38 +0000 (13:56 -0500)]
net/bnxt: fix doorbell write ordering
Write the completion queue doorbell before the receive descriptor
doorbell to avoid the possibility of completion queue overflow
when the completion queue size is equal to the receive descriptor
ring size. Remove unnecessary compiler barriers (the doorbell write
functions already have the necessary barriers).
Lance Richardson [Mon, 14 Dec 2020 18:53:52 +0000 (13:53 -0500)]
net/bnxt: limit Rx representor packets per poll
Without some limit on the number of packets transferred from the
HW ring to the representor ring per burst receive call, an entire ring's
worth of packets can be transferred. This can break assumptions
about ring indices (index on return could be identical to the index
on entry, which is assumed to mean that no packets were processed),
and can result in representor packets being dropped unnecessarily
due to representor ring overflow.
Fix by limiting the number of representor packets transferred per
poll to the requested burst size.
Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Beilei Xing [Tue, 5 Jan 2021 03:12:56 +0000 (11:12 +0800)]
net/i40e: fix flex payload rule conflict
With the following commands, the second flow can't
be created successfully.
1. flow create 0 ingress pattern eth / ipv4 / udp /
raw relative is 1 pattern is 0102030405 / end
actions drop / end
2. flow destroy 0 rule 0
3. flow create 0 ingress pattern eth / ipv4 / udp /
raw relative is 1 pattern is 010203040506 / end
actions drop / end
The root cause is that a flag for flex pit isn't reset.
Fixes: 6ced3dd72f5f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org
Reported-by: Chenmin Sun <chenmin.sun@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Simon Ellmann [Thu, 17 Dec 2020 17:14:52 +0000 (18:14 +0100)]
net/ixgbe: clear all queues on VF reset
ixgbe devices support up to 8 Rx and Tx queues per virtual function.
Currently, the registers of only seven queues are set to default when
resetting a VF.
Signed-off-by: Simon Ellmann <simon.ellmann@tum.de>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 05:27:45 +0000 (13:27 +0800)]
net/ice/base: remove unused struct member
The only time rq_last_status can be set is when a firmware event
reports a status on the receive queue, which generally carries
firmware-initiated events or mailbox messages from a VF. This struct
member was mostly unused.
Fix this by still printing the value of the field in a debug
print, but not storing it forever in a struct, which could create
opportunities for callers to use the wrong struct member.
Qi Zhang [Tue, 15 Dec 2020 05:08:20 +0000 (13:08 +0800)]
net/ice/base: align macro names to specification
For the get PHY abilities AQ command, the specification defines "report
modes" as "with media", "without media" and "active configuration". For
clarity, rename the macros to align with the specification.
Qi Zhang [Tue, 15 Dec 2020 04:58:27 +0000 (12:58 +0800)]
net/ice/base: use report default config to get PHY capa
In the new link establishment flow, Report Default Configuration
should be used if the FW AQ API version supports it.
This patch adds a check function for Report Default Configuration
support and updates ice_set_fc(), ice_cfg_phy_fec() and
ice_aq_get_phy_caps() accordingly.
Qi Zhang [Tue, 15 Dec 2020 04:42:46 +0000 (12:42 +0800)]
net/ice/base: fix null pointer dereference
Add handling of an allocation failure for ice_vsi_list_map_info.
Also check for a NULL pointer dereference of the filter's VSI list
information only for the FWD_TO_VSI_LIST type; otherwise the FWD_TO_VSI
type filters for the given VSI cannot be located.
In addition, the pointer *pi should never be NULL since it is a reference
to a raw data field, so remove this variable and use the reference
directly.
Fixes: c7dd15931183 ("net/ice/base: add virtual switch code")
Cc: stable@dpdk.org
Signed-off-by: Jacek Bułatek <jacekx.bulatek@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 04:33:53 +0000 (12:33 +0800)]
net/ice/base: modify recursive way of adding nodes
Remove the recursive way of adding the nodes to the layer in order
to reduce the stack usage. Instead the algorithm is modified to use
a while loop.
The previous code was scanning recursively the nodes horizontally.
The total stack consumption will be based on number of nodes present
on that layer. In some cases it can consume more stack.
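A generic sketch of the transformation (not the actual ice scheduler code):

struct node {
        struct node *sibling;
};

void add_to_layer(struct node *n);      /* hypothetical helper */

/* Before: recursive sibling walk -- stack depth grows with the number
 * of nodes on the layer.
 */
static void
add_nodes_recursive(struct node *n)
{
        if (n == NULL)
                return;
        add_to_layer(n);
        add_nodes_recursive(n->sibling);
}

/* After: iterative walk -- constant stack usage. */
static void
add_nodes_iterative(struct node *n)
{
        while (n != NULL) {
                add_to_layer(n);
                n = n->sibling;
        }
}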
Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 04:30:32 +0000 (12:30 +0800)]
net/ice/base: change get PHY capability log level
As the user may be expected to take action on this issue, change the
message to a warning so that it is more easily noticed than
a debug message. Also, add the error code to further aid in identifying
the problem.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 04:26:12 +0000 (12:26 +0800)]
net/ice/base: resend some AQ commands when EBUSY
Retry sending some AQ commands when they fail with an EBUSY AQ error.
This change follows the latest guidance from HW: it is better
to retry the same AQ command several times on EBUSY
instead of returning an error to the caller right away.
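The retry pattern looks roughly like this (illustrative sketch against the
ice base driver types; the helper name, retry bound, delay value and delay
call are assumptions, not the exact ice base code):

#include "ice_common.h"         /* ice_aq_send_cmd() and related types */

#define AQ_SEND_MAX_RETRIES 10  /* assumed bound */
#define AQ_SEND_DELAY_MS    10  /* assumed delay between attempts */

static enum ice_status
aq_send_cmd_retry(struct ice_hw *hw, struct ice_aq_desc *desc,
                  void *buf, u16 buf_size, struct ice_sq_cd *cd)
{
        enum ice_status status;
        unsigned int i = 0;

        do {
                status = ice_aq_send_cmd(hw, desc, buf, buf_size, cd);
                /* Retry only when FW reported that it was busy. */
                if (!(status == ICE_ERR_AQ_ERROR &&
                      hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY))
                        break;
                ice_msec_delay(AQ_SEND_DELAY_MS, false); /* assumed helper */
        } while (++i < AQ_SEND_MAX_RETRIES);

        return status;
}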
Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 04:21:21 +0000 (12:21 +0800)]
net/ice/base: support checking double VLAN mode
If a driver wants to configure double VLAN mode (DVM) it needs to
first check if the DDP supports DVM. To do this the driver needs to read
the package metadata section via the upload section AQ (0x04C1).
If the DDP doesn't support configuring double VLAN mode (DVM), then
there is nothing to do regarding configuring the VLAN mode of the
device.
The set_svm() or set_dvm() ops should only be called if the current
configuration supports configuring the VLAN mode of the device.
Suggested-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 04:13:57 +0000 (12:13 +0800)]
net/ice/base: fix tunnel destroy
The TCAM information in the AQ command buffer is not correct when
destroying tunnel entries: the TCAM count was always ONE even when
multiple entries are destroyed, and the offset of the TCAM memory was
also incorrect. This patch fixes the issue.
Qi Zhang [Tue, 15 Dec 2020 04:00:55 +0000 (12:00 +0800)]
net/ice/base: add interface to support configuring VLAN mode
The VLAN mode of the device has to be configured while the global
configuration lock is held during DDP download, specifically right after
the DDP has been downloaded. In order to support this, a VLAN mode
interface was added. By default the device stays in single VLAN
mode (SVM), which is the current implementation. However, this can be
changed by implementing the .set_dvm op.
Qi Zhang [Tue, 15 Dec 2020 02:44:39 +0000 (10:44 +0800)]
net/ice/base: implement inactive NVM version get
Similar to ice_get_inactive_orom_ver, add a function to read the NVM
version data from the inactive section of flash. The primary motivation
of this function is to allow the driver to report the version of
a pending update that has not yet been activated.
To do this, refactor ice_get_nvm_ver_info to allow it to take a bank
parameter. Read from the copy of the Shadow RAM in the NVM bank, rather
than reading from the RAM copy that is loaded by the device. This
ensures we get the accurate value when reading the inactive section.
Note that the start of the Shadow RAM copy does not directly follow the
CSS header, but is aligned to the next 64-byte boundary, so the
correct word offset must be rounded up to the next 32-word (64-byte)
boundary.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 02:39:28 +0000 (10:39 +0800)]
net/ice/base: read option ROM combo version from CIVD
The driver currently reads the combo image version data from within the
Boot Configuration TLV block of the PFA area of the NVM. This allows
access to the active Option ROM version data, assuming that it has been
properly copied into this section.
There is no equivalent method for reading the Option ROM version data
from a pending Option ROM update, as it will not yet have been copied
into the PFA boot configuration block. Instead, replace this
implementation with one which scans for the CIVD data section of the
Option ROM image data.
This CIVD data is stored in a packed structured format within the Option
ROM. It is always aligned to a 512 byte boundary, and starts with
a special '$CIV' 4-byte signature. Data integrity is checked using
a simple modulo 256 sum of the structure bytes.
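A sketch of the validity check (the structure layout here is hypothetical
and abbreviated; the real ice_get_orom_civd_data and its structure
definition differ):

#include <stdint.h>
#include <string.h>

struct civd_info {              /* hypothetical, abbreviated layout */
        uint8_t signature[4];   /* must be "$CIV" */
        uint8_t checksum;       /* makes the byte sum 0 modulo 256 */
        uint32_t combo_ver;     /* combo image version */
} __attribute__((packed));

/* Verify the '$CIV' signature and the simple modulo-256 sum. */
static int
civd_is_valid(const struct civd_info *civd)
{
        const uint8_t *p = (const uint8_t *)civd;
        uint8_t sum = 0;
        size_t i;

        if (memcmp(civd->signature, "$CIV", 4) != 0)
                return 0;
        for (i = 0; i < sizeof(*civd); i++)
                sum += p[i];
        return sum == 0;
}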
Implement a new ice_get_orom_civd_data function which allows reading
from the selected flash bank (active or inactive), and scans for valid
CIVD data. Use this instead of the boot configuration TLV in order to
report the combo version data of precisely what is in the Option ROM
data.
To allow reading the inactive Option ROM bank, introduce a new
ice_get_inactive_orom_ver function. A new function is used in order to
avoid leaking the bank selection abstraction outside of ice_nvm.c.
With this new function, the driver can now read and display the version
of the to-be-activated Option ROM when an update has been initiated but
not yet finalized.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 02:29:16 +0000 (10:29 +0800)]
net/ice/base: allow flash read with arbitrary size
Refactor ice_read_flash_module so that it takes a size and a length
value, rather than always reading in 2-byte increments. The
ice_read_nvm_module and ice_read_orom_module wrapper functions will
still read a u16 with the byte-swapping enabled.
This will be used in a future change to implement reading of the CIVD
data from the Option ROM module.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Modify ice_get_nvm_srev and ice_get_orom_srev to take the
ice_flash_bank enumeration that specifies whether to read from the
active or the inactive flash module. Rename and refactor the
ice_read_active_nvm_module and ice_read_active_orom_module functions to
take the bank enum value as well.
With this change, ice_get_nvm_srev and ice_get_orom_srev will be usable
in a future change to implement reading the version data for a pending
flash image.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Tue, 15 Dec 2020 01:45:41 +0000 (09:45 +0800)]
net/ice/base: refactor interface for flash read
The ice_read_flash_module interface for reading from the various NVM
modules was introduced.
Its purpose is twofold. First, it enables reading data from the CSS
header, which allows access to the image security revisions. Second, it
allows reading from either the 1st or the 2nd NVM bank. This interface
was necessary because the device has two copies of each module. Only one
bank is active at a time, but it could be different for each module. The
driver had to determine which bank was active and then use that to
calculate the offset into the flash to read.
Future plans include allowing access to read not just from the active
flash bank, but also the inactive bank. This will be useful for enabling
display of the version information for a pending flash update.
The current abstraction in ice_read_flash_module is to specify the exact
bank to read. This requires callers to know whether to read from the 1st
or 2nd flash bank. This is the wrong abstraction level, since in most
cases the decision point from a caller's perspective is whether to read
from the active bank or the inactive bank.
Add a new ice_bank_select enumeration, used to indicate whether a flow
wants to read from the active, or inactive flash bank. Refactor
ice_read_flash_module to take this new enumeration instead of a raw
flash bank.
Have ice_read_flash_module select which bank to read from based on the
cached data loaded during NVM initialization. With this change, it will
become easier to implement reading version data from the inactive flash
banks in a future change.
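Conceptually the new abstraction looks like this (simplified sketch with
hypothetical field names; the real enum and cached bank data in the ice
code differ in detail):

enum ice_bank_select {
        ICE_ACTIVE_FLASH_BANK,
        ICE_INACTIVE_FLASH_BANK,
};

struct bank_info {
        unsigned int nvm_bank;  /* physical bank (0 = 1st, 1 = 2nd) that is
                                 * currently active, cached at NVM init time
                                 */
};

/* Map the caller's intent (active/inactive) onto the physical bank. */
static unsigned int
pick_nvm_bank(const struct bank_info *banks, enum ice_bank_select bank)
{
        if (bank == ICE_ACTIVE_FLASH_BANK)
                return banks->nvm_bank;
        return banks->nvm_bank ? 0 : 1; /* the other bank */
}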
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Dapeng Yu [Wed, 23 Dec 2020 05:30:18 +0000 (13:30 +0800)]
net/ice: check Rx queue number on RSS init
When RSS is initialized, the Rx queue number is used as the denominator
to set default values in the RSS lookup table. If it is zero, a
division-by-zero error occurs, so add a value check to avoid the error.
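A minimal sketch of the added guard (simplified, not the exact ice PMD code):

#include <errno.h>
#include <stdint.h>

static int
rss_lut_default_init(uint8_t *lut, uint16_t lut_size, uint16_t nb_rx_queues)
{
        uint16_t i;

        if (nb_rx_queues == 0)          /* avoid division by zero below */
                return -EINVAL;
        for (i = 0; i < lut_size; i++)
                lut[i] = i % nb_rx_queues;
        return 0;
}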
Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Xuan Ding [Wed, 23 Dec 2020 12:52:28 +0000 (12:52 +0000)]
net/iavf: improve default RSS
Add support for actively configuring RSS through the port config.
Any default RSS enabled by the kernel PF will be disabled during
initialization.
In addition, default RSS will be configured based on
rte_eth_rss_conf->rss_hf.
Currently supported default rss_type values: ipv4[6], ipv4[6]_udp,
ipv4[6]_tcp, ipv4[6]_sctp.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Junfeng Guo [Mon, 14 Dec 2020 06:49:09 +0000 (14:49 +0800)]
common/iavf: support eCPRI protocol header fields
Add the eCPRI header and its field selectors, including MSG_TYPE, PCID
and RTCID. Since the offset of PCID is the same as RTCID, only one
macro is added for these two fields. For MSG Type 0, the
ecpriRtcid/ecpriPcid field within the eCPRI header will be extracted
into the Field Vector for FDIR and RSS.
SPEC for eCPRI:
http://www.cpri.info/downloads/eCPRI_v_2.0_2019_05_10c.pdf
Dapeng Yu [Tue, 15 Dec 2020 10:10:31 +0000 (18:10 +0800)]
net/ixgbe: fix flex bytes flow director rule
When a flex bytes flow director rule is created, the FDIRCTRL.FLEX_OFFSET
register is set, and it keeps its effect even after the flow director
flex bytes rule is destroyed, causing packets to be delivered to the
wrong place.
According to the datasheet, setting FDIRCTRL is only permitted during the
Flow Director initialization flow or when clearing the Flow Director
table; otherwise the device may behave unexpectedly.
To work around this limitation, simulate the Flow Director
initialization flow or the clearing of the Flow Director table by setting
FDIRCMD.CLEARHT to 0x1B and then clearing it back to 0x0B.
Fixes: f35fec63dde1 ("net/ixgbe: enable flex bytes for generic flow API")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Tested-by: Jun W Zhou <junx.w.zhou@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Souvik Dey [Tue, 15 Dec 2020 13:28:15 +0000 (08:28 -0500)]
net/i40e: fix VLAN stripping in VF
When a VF adds a VLAN, the Linux PF driver enables VLAN stripping by
default; this might cause issues if the app configured
DEV_RX_OFFLOAD_VLAN_STRIP. This behavior of the Linux driver causes
confusion for a DPDK app using the i40e PMD, so it is better to
reconfigure the VLAN offload, which checks for the
DEV_RX_OFFLOAD_VLAN_STRIP flag in dev_conf and enables or disables VLAN
stripping in the PF accordingly.
The application cannot use rte_eth_dev_set_vlan_offload() to set
VLAN_STRIP, as this only works the first time, when the original and
current configurations mismatch; all subsequent calls are ignored.
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Souvik Dey <sodey@rbbn.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Igor Ryzhov [Tue, 17 Nov 2020 08:56:39 +0000 (11:56 +0300)]
net/i40e: fix stats counters
When low and high registers are read separately, this opens the door to
a race condition:
- low register is read
- NIC updates the registers
- high register is read
Because of this, we may end up with an incorrect counter value.
Let's read the registers in one shot, as is done in the Linux kernel
since the introduction of the i40e driver.
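Conceptually (illustrative only; the real i40e code uses its own register
access helpers and register macros):

#include <stdint.h>

/* Racy: the NIC may update the counter between the two 32-bit reads. */
static uint64_t
read_counter_split(volatile uint32_t *lo, volatile uint32_t *hi)
{
        uint64_t l = *lo;
        uint64_t h = *hi;
        return (h << 32) | l;
}

/* Fixed: a single 64-bit read returns a consistent snapshot. */
static uint64_t
read_counter_one_shot(volatile uint64_t *reg)
{
        return *reg;
}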
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Liron Himi [Wed, 16 Dec 2020 21:36:52 +0000 (23:36 +0200)]
build: update meson for Marvell Armada drivers
With pkg-config support available in the musdk library
(from version musdk-release-SDK-10.3.5.0-PR2),
the meson option 'lib_musdk_dir' can be removed.
The PKG_CONFIG_PATH environment variable should be set appropriately
to use the musdk library.
Docs are updated with the new musdk version and meson instructions.
net/bonding: fix PCI address comparison on non-PCI ports
The bonding PMD iterates over all available Ethernet ports and, for each,
compares a chunk of bytes at an offset that would correspond to the PCI
address in an rte_pci_device.
This is incorrect and unsafe. Also, the rte_device using this PCI
address has already been found, so there is no need to compare the PCI
address of all Ethernet devices again.
The code is refactored to fix this; the initial check to find the PCI bus
is left out of scope.
Fixes: c848b518bbc7 ("net/bonding: support bifurcated driver in eal")
Cc: stable@dpdk.org
Signed-off-by: Gaetan Rivet <grive@u256.net>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
The buffer split Rx offload is not compatible with the Multi-Packet
Receive Queue (MPRQ) Rx offload; hence, the buffer split
offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT and other related
values should be advertised only if MPRQ is not engaged.
A wrong index is used to find the mbufs belonging to the application in
the rxq_free_elts_sprq() function in the case of vectorized MPRQ.
In this case elts_ci, not rq_ci, points to the last allocated mbuf.
Use this field to avoid a double free of mbufs and a segmentation fault.
Gregory Etelson [Tue, 8 Dec 2020 08:17:05 +0000 (10:17 +0200)]
net/mlx5: fix Direct Verbs flow descriptor allocation
Initialize flow descriptor tunnel member during flow creation.
Prevent access to stale data and pointers when flow descriptor is
reallocated after release.
Fix flow index validation.
Fixes: e7bfa3596a0a ("net/mlx5: separate the flow handle resource")
Fixes: 8bb81f2649b1 ("net/mlx5: use thread specific flow workspace")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Murphy Yang [Tue, 15 Dec 2020 08:10:52 +0000 (08:10 +0000)]
net/ice: fix outer checksum flags
When tunneled packets are received, the testpmd output log always shows
the 'ol_flags' value as 'PKT_RX_OUTER_L4_CKSUM_UNKNOWN', but the expected
value is 'PKT_RX_OUTER_L4_CKSUM_GOOD' or 'PKT_RX_OUTER_L4_CKSUM_BAD'.
Add 'PKT_RX_OUTER_L4_CKSUM_GOOD' and 'PKT_RX_OUTER_L4_CKSUM_BAD' to
'flags' for the normal path, to 'l3_l4_flags_shuf' for the AVX2 and
AVX512 vector paths and to 'cksum_flags' for the SSE vector path to
ensure that 'ol_flags' matches the correct flags.
Fixes: dbf3c0e77a22 ("net/ice: handle Rx flex descriptor")
Fixes: 4ab7dbb0a0f6 ("net/ice: switch to Rx flexible descriptor in AVX path")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Ting Xu [Mon, 14 Dec 2020 06:04:10 +0000 (14:04 +0800)]
net/iavf: fix memory leak in large VF
This patch fixes the issue that the memory allocated for the structure
virtchnl_del_ena_dis_queues was not released at the end of the functions
iavf_enable_queues_lv, iavf_disable_queues_lv and iavf_switch_queue_lv.
Fixes: 9cf9c02bf6ee ("net/iavf: add enable/disable queues for large VF")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Ivan Malov [Fri, 11 Dec 2020 15:34:21 +0000 (18:34 +0300)]
common/sfc_efx/base: check for MAE privilege
VFs can't control MAE, so it's important to override the general
MAE capability bit by taking MAE privilege into account. Reorder
the code slightly to have the privileges queried before datapath
capabilities are discovered and add required MAE privilege check.
Fixes: eb4e80085fae ("common/sfc_efx/base: indicate support for MAE")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Ivan Malov [Fri, 11 Dec 2020 15:34:20 +0000 (18:34 +0300)]
common/sfc_efx/base: update MCDI headers for MAE privilege
VFs and unprivileged PFs should not be able to control MAE.
Add MAE privilege to MCDI headers in order to reflect that.
Fixes: 84d3fb7d7e1e ("common/sfc_efx/base: add MAE definitions to MCDI")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Somnath Kotur [Thu, 3 Dec 2020 06:38:47 +0000 (12:08 +0530)]
net/bnxt: fix PF resource query
This command should be called by every driver after 'hwrm_func_cfg'
to get the actual number of resources allocated by the HWRM.
The values returned in the command are the max values for that PF.
Also, now that the max values for the PF are computed in probe itself,
there is no need to invoke FUNC_QCAPS or any other command in
dev_configure_op(), as that would just override the actual max values
obtained above.
The current max_rings computation does not take into account the case
when max_nq_rings is <= num_async_cpr. This results in a wrong value,
such as 0 when max_nq_rings is 1. Fix this by subtracting num_async_cpr
only when max_cp_rings > num_async_cpr.
Apart from this, the entire logic is currently spread across a few
macros, making it hard to read and debug this code. Move this code
into an inline function.
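The guard at the heart of the fix is roughly the following (hypothetical
helper name; only the relevant part of the inline function is sketched):

#include <stdint.h>

static inline uint16_t
usable_cp_rings(uint16_t max_cp_rings, uint16_t num_async_cpr)
{
        /* Reserve the async completion ring only when there is room for
         * it; otherwise a small value such as max_nq_rings == 1 would be
         * reduced to 0.
         */
        if (max_cp_rings > num_async_cpr)
                return max_cp_rings - num_async_cpr;
        return max_cp_rings;
}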
max_msix is not used in the max_rings calculation.
Apparently the max_msix field returned in HWRM_RESC_QCAPS is only valid
for Thor and newer chips. On Wh+ it will be equal to min_compl_rings.
Also, when a function reset is performed on an application quit, FW
will not reset the VF resource pool, as per design.
This can lead to a strange condition wherein the max_msix field
on Wh+ keeps changing on each application reload, thereby throwing
off the max_rings computation.
Ajit Khaparde [Tue, 1 Dec 2020 19:15:23 +0000 (11:15 -0800)]
net/bnxt: remove references to Thor
Refactor code to remove references to Thor.
Instead use P5, as in phase 5 of the development cycle, since it is
applicable to boards other than Thor as well.
Kalesh AP [Tue, 17 Nov 2020 07:10:24 +0000 (12:40 +0530)]
net/bnxt: release HWRM lock in error
In __bnxt_hwrm_func_qcaps, when a memory allocation fails, the
driver does not release the HWRM lock. This patch fixes it
by calling hwrm_unlock in that error case.
Fixes: b7778e8a1c00 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Samik Gupta [Fri, 6 Nov 2020 21:41:21 +0000 (16:41 -0500)]
net/bnxt: fix VNIC config on Rx queue stop
Reconfigure a VNIC's default ring if the current default ring is stopped
by the application. The fix picks the lowest-numbered ring that is
currently active as the new default and issues the hwrm_vnic_cfg command
to update the configuration. This applies to adapters that are not
Thor-based.
Samik Gupta [Thu, 12 Nov 2020 21:28:25 +0000 (13:28 -0800)]
net/bnxt: fix Rx rings in RSS redirection table
This commit limits the number of Rx rings included in
the RSS redirection table to a value no larger than the size supported
by Thor, as defined by BNXT_RSS_TBL_SIZE_THOR.
Beilei Xing [Fri, 20 Nov 2020 08:49:47 +0000 (16:49 +0800)]
net/i40e: fix global register recovery
The PMD configures the global register I40E_GLINT_CTL during
device initialization to work around the Rx write-back
issue. But when a device is rebound from DPDK to the kernel,
the global register is not restored to its original
state, which causes a performance drop with the kernel driver.
This patch fixes this issue.
Fixes: be6c228d4da3 ("i40e: support Rx interrupt")
Fixes: 4ab831449a1c ("net/i40e: fix interrupt conflict with multi-driver")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>