Stephen Hemminger [Thu, 14 Jan 2021 16:48:37 +0000 (08:48 -0800)]
test/rwlock: fix spelling and missing whitespace
Trivial fix for spelling errors and incorrect spacing.
No change to any built code.
Fixes: 7a61fc5d1b09 ("test/rwlock: add new test-cases")
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Tyler Retzlaff [Fri, 15 Jan 2021 19:38:21 +0000 (11:38 -0800)]
eal/windows: fix C++ compatibility
Explicitly cast void * to type * so that EAL headers may be compiled
as C or C++.
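As a minimal illustration of the pattern (not the actual EAL code), an
explicit cast keeps an inline helper valid in both C and C++, since C++
forbids the implicit conversion from void *:

    #include <stdlib.h>

    struct foo { int x; };

    static inline struct foo *
    foo_alloc(void)
    {
        /* The explicit cast is a no-op in C but required in C++. */
        return (struct foo *)malloc(sizeof(struct foo));
    }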
Fixes: e8428a9d89f1 ("eal/windows: add some basic functions and macros")
Cc: stable@dpdk.org
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Lijun Ou [Sat, 16 Jan 2021 09:02:09 +0000 (17:02 +0800)]
maintainers: update for hns3
Because Wei Hu has changed to a new job and his
email address (xavier.huwei@huawei.com) has expired,
remove him from the hns3 maintainer list.
All patches signed off by Wei Hu will be copied to Lijun Ou.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Thomas Monjalon [Wed, 2 Dec 2020 16:47:03 +0000 (17:47 +0100)]
devtools: reduce ABI checks and static binaries
When testing compilation and checking ABI compatibility,
there is no real need of static binaries eating disk space.
The static linkage of applications was already well tested,
though the static examples tested with meson were limited to "l3fwd" only.
The static build test with make is limited to the "helloworld" example.
The ABI compatibility is checked on shared libraries,
and there is no need to test again on similar builds.
A new parameter is added to the function "build",
so the ABI check is enabled only for native gcc and clang shared builds,
32-bit builds, and generic armv8 and ppc cross-compilations.
In other words, it is disabled for some static builds and some Arm ones.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Olivier Matz [Wed, 4 Nov 2020 17:04:25 +0000 (18:04 +0100)]
test/mcslock: remove unneeded per lcore copy
Each lcore already comes with its own local storage for the mcslock (on its
stack), therefore there is no need to define an additional per-lcore
mcslock.
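As a sketch of the pattern (assuming the rte_mcslock API, not the test code
itself), the MCS queue node can simply live on the stack of each lcore's
worker function:

    #include <rte_mcslock.h>

    static rte_mcslock_t *p_ml;        /* shared lock (tail pointer) */

    static int
    lcore_worker(void *arg)
    {
        rte_mcslock_t ml_me;           /* queue node on this lcore's stack */

        (void)arg;
        rte_mcslock_lock(&p_ml, &ml_me);
        /* critical section */
        rte_mcslock_unlock(&p_ml, &ml_me);
        return 0;
    }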
Fixes: 32dcb9fd2a22 ("test/mcslock: add MCS queued lock unit test")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Olivier Matz [Wed, 13 Jan 2021 08:28:06 +0000 (09:28 +0100)]
service: propagate init error in EAL
Currently, when rte_service_init() fails at initialization, the
application always gets an ENOEXEC error code. For example, with testpmd,
this is displayed as:
Cannot init EAL: Exec format error
This error code does not describe the real issue. Instead, use the error
code returned by the function.
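A minimal sketch of the idea (not the actual EAL code): propagate the
callee's error code instead of replacing it with a fixed ENOEXEC.

    static int
    init_subsystem(int (*init_fn)(void))
    {
        int ret = init_fn();

        if (ret < 0)
            return ret;    /* previously this was collapsed to -ENOEXEC */
        return 0;
    }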
Fixes: e39824500825 ("service: initialize with EAL")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Tyler Retzlaff [Mon, 11 Jan 2021 23:16:36 +0000 (15:16 -0800)]
eal/windows: build reciprocal division functions
Build rte_reciprocal.c and export the following functions on Windows:
* rte_reciprocal_value
* rte_reciprocal_value_u64
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tyler Retzlaff [Thu, 14 Jan 2021 21:22:35 +0000 (13:22 -0800)]
bus/pci: fix build with Windows SDK >= 10.0.20253
NetUIO device class and interface GUIDs are defined in system
headers starting from platform SDK v10.0.20253. Inspect SDK
version to avoid redefinition.
Pre-release SDKs do not promise compatibility and a narrow
subset of SDKs may still be subject to redefinition.
Fixes: c76ec01b4591 ("bus/pci: support netuio on Windows")
Cc: stable@dpdk.org
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
Thomas Monjalon [Wed, 6 Jan 2021 09:19:43 +0000 (10:19 +0100)]
doc: fix figure numbering in graph guide
Some figures had a title inside the picture but not in the RST file.
As a consequence, some versions of Sphinx are emitting a warning.
Warning, treated as error:
doc/guides/prog_guide/graph_lib.rst:64:
no number is assigned for figure: figure-anatomy-of-a-node
The titles are moved from SVG to RST,
except for graph_mem_layout.svg where the in-picture title must be kept.
Fixes: 4dc6d8e63c16 ("doc: add graph library guide")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Gaetan Rivet [Mon, 9 Nov 2020 13:37:55 +0000 (14:37 +0100)]
bus/dpaa: optimize device name parsing
Device name parsing is done on all buses during device iteration at
either the EAL or ethdev level.
When a bus implements device name parsing slowly, all iterations are
impacted, so an efficient implementation is important.
The DPAA bus device name parsing has two issues: it allocates dynamic
memory and uses snprintf without a real need for it. Both can be
avoided, which improves the parsing performance.
The function is also simpler and shorter.
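As an illustration of the idea only (the real DPAA name format and parser
are not shown here, and the "fmX-macY" pattern is an assumption), parsing
can be done with no dynamic allocation and no snprintf:

    #include <stdio.h>
    #include <errno.h>

    static int
    parse_name(const char *name, unsigned int *fman_id, unsigned int *mac_id)
    {
        /* Scan directly into the caller's integers; nothing is allocated. */
        if (sscanf(name, "fm%u-mac%u", fman_id, mac_id) != 2)
            return -EINVAL;
        return 0;
    }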
Signed-off-by: Gaetan Rivet <grive@u256.net>
Yicai Lu [Wed, 16 Dec 2020 13:36:30 +0000 (21:36 +0800)]
ip_frag: remove padding length of fragment
In some situations, we can get several IP fragments whose total
data length is less than min_ip_len (64) and which are padded with zeros.
We simulated intermediate fragments by modifying the MTU.
To illustrate the problem, we simplify the packet format and
ignore the impact of the packet header. In namespace ns2,
a packet whose data length is 1520 is sent.
When the packet passes tap2, the packet is divided into two
fragments: fragment A and B, similar to (1520 = 1510 + 10).
When the packet passes tap3, the larger fragment packet A is
divided into two fragments A1 and A2, similar to (1510 = 1500 + 10).
Finally, the bond interface receives three fragments:
A1, A2, and B (1520 = 1500 + 10 + 10).
The fragment A2 is smaller than the minimum Ethernet
frame length, so it needs to be padded.
|---------------------------------------------------|
| HOST |
| |--------------| |----------------------------| |
| | ns2 | | |--------------| | |
| | |--------| | | |--------| |--------| | |
| | | tap1 | | | | tap2 | ns1| tap3 | | |
| | |mtu=1510| | | |mtu=1510| |mtu=1500| | |
| |--|1.1.1.1 |--| |--|1.1.1.2 |----|2.1.1.1 |--| |
| |--------| |--------| |--------| |
| | | | |
| |-----------------| | |
| | |
| |--------| |
| | bond | |
|--------------------------------------|mtu=1500|---|
|--------|
When processing the preceding packets,
DPDK would reassemble fragments A2 and B,
generating erroneous packets in which the zero
padding appears in the middle of the packet.
A2 + B:
0000 fa 16 3e 9f fb 82 fa 47 b2 57 dc 20 08 00 45 00
0010 00 33 b4 66 00 ba 3f 01 c1 a5 01 01 01 01 02 01
0020 01 02 c0 c1 c2 c3 c4 c5 c6 c7 00 00 00 00 00 00
0030 00 00 00 00 00 00 00 00 00 00 00 00 c8 c9 ca cb
0040 cc cd ce cf d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 da db
0050 dc dd de df e0 e1 e2 e3 e4 e5 e6
So, calculate the padding length and remove the padding from
pkt_len and data_len before reassembly.
The fix covers both IPv4 and IPv6.
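A simplified sketch of the idea (single-segment case, not the actual
library code): compare the mbuf length against the IP total length and
trim the excess before reassembly.

    #include <rte_mbuf.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    static void
    trim_fragment_padding(struct rte_mbuf *m, const struct rte_ipv4_hdr *ip)
    {
        uint32_t ip_len = rte_be_to_cpu_16(ip->total_length);
        uint32_t l2_len = m->l2_len;

        /* Anything beyond L2 header + IP total length is padding. */
        if (m->pkt_len > l2_len + ip_len)
            rte_pktmbuf_trim(m, m->pkt_len - (l2_len + ip_len));
    }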
Fixes: 7f0983ee331c ("ip_frag: check fragment length of incoming packet")
Cc: stable@dpdk.org
Signed-off-by: Yicai Lu <luyicai@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Yi Yang [Thu, 19 Nov 2020 06:43:31 +0000 (14:43 +0800)]
gso: support VXLAN UDP/IPv4
As most NICs do not support segmentation for VXLAN-encapsulated
UDP/IPv4 packets, this patch adds VXLAN UDP/IPv4 GSO support.
Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Pallavi Kadam [Tue, 22 Dec 2020 00:45:11 +0000 (16:45 -0800)]
drivers/net: build i40e and mlx5 on Windows
Allow the i40e and mlx5 PMDs to compile on Windows and disable the other drivers.
Disable a few i40e warnings with Clang, such as comparison of integers of
different signs and macro redefinitions.
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Reviewed-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Pallavi Kadam [Tue, 22 Dec 2020 00:45:10 +0000 (16:45 -0800)]
eal/windows: add random function
The file rte_random.c is required to build the i40e PMD on Windows.
Add rte_rand to the export file.
Redefine _m_prefetchw for the Clang toolchain due to the following error
about conflicting types:
FAILED: lib/
76b5a35@@rte_eal@sta/librte_eal_common_rte_random.c.obj
clang @lib/
76b5a35@@rte_eal@sta/librte_eal_common_rte_random.c.obj.rsp
In file included from ../lib/librte_eal/common/rte_random.c:13:
In file included from ..\lib/librte_eal/include\rte_eal.h:20:
In file included from ..\lib/librte_eal/include\rte_per_lcore.h:25:
In file included from ..\lib/librte_eal/windows/include\pthread.h:21:
In file included from ..\lib/librte_eal/windows/include\rte_windows.h:27:
In file included from C:\Program Files (x86)\Windows Kits\10\include\
10.0.18362.0\um\windows.h:171:
In file included from C:\Program Files (x86)\Windows Kits\10\include\
10.0.18362.0\shared\windef.h:24:
In file included from C:\Program Files (x86)\Windows Kits\10\include\
10.0.18362.0\shared\minwindef.h:182:
C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um\
winnt.h:3324:1: error: conflicting types for '_m_prefetchw'
_m_prefetchw (
^
C:\Program Files\LLVM\lib\clang\10.0.0\include\prfchwintrin.h:50:1:
note: previous definition is here
_m_prefetchw(void *__P)
^
1 error generated.
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Reviewed-by: Ranjit Menon <ranjit.menon@intel.com>
Tal Shnaiderman [Sun, 3 Jan 2021 10:28:27 +0000 (12:28 +0200)]
doc: add Windows support for mlx5
Windows is supported by mlx5 PMD.
The mlx5 guide is updated with the needed information.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Ophir Munk [Tue, 12 Jan 2021 12:58:39 +0000 (14:58 +0200)]
common/mlx5: enable compilation on Windows
Enable compilation of the mlx5 common driver on Windows with clang.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Thu, 7 Jan 2021 11:45:45 +0000 (13:45 +0200)]
common/mlx5: fix pointer cast on Windows
While compiling with clang 11, the callers of the
__mlx5_bit_off macro warn about the cast of pointers to
unsigned long, which is a smaller integer type on Windows:
warning: cast to smaller integer type 'unsigned long'
from 'u8 (*)[16]' [-Wpointer-to-int-cast]
To resolve it, the type is changed to uintptr_t, which is
compatible with both Linux and Windows.
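A minimal illustration of the portability issue (not the mlx5 macro
itself): on 64-bit Windows, unsigned long is 32 bits, so casting a pointer
to it truncates, while uintptr_t is safe on both platforms.

    #include <stdint.h>

    static uintptr_t
    pointer_bits(const void *p)
    {
        /* return (unsigned long)p;  -- warns/truncates with clang on Windows */
        return (uintptr_t)p;
    }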
Fixes: 865a0c15672c ("net/mlx5: add Direct Verbs prepare function")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:41 +0000 (08:19 +0000)]
common/mlx5: remove doorbell allocation functions
The mlx5_devx_dbr_page structure was used to allocate and release the
umem of the doorbells.
Since the doorbell and the buffer use the same umem, this structure is
no longer needed.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:40 +0000 (08:19 +0000)]
net/mlx5: move Rx RQ creation to common
Use the common function for Rx RQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:39 +0000 (08:19 +0000)]
common/mlx5: share DevX RQ creation
The RQ object in DevX is currently used only in the net driver, but it is
shared for future use.
Add a structure that contains all the resources, and provide creation
and release functions for it.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:38 +0000 (08:19 +0000)]
net/mlx5: move ASO SQ creation to common
Use the common function for ASO SQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:37 +0000 (08:19 +0000)]
net/mlx5: move Tx SQ creation to common
Use the common function for Tx SQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:36 +0000 (08:19 +0000)]
net/mlx5: move rearm and clock queue SQ creation to common
Use the common function for DevX SQ creation for the rearm and clock queues.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:35 +0000 (08:19 +0000)]
regex/mlx5: move DevX SQ creation to common
Use the common function for DevX SQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:34 +0000 (08:19 +0000)]
common/mlx5: share DevX SQ creation
The SQ object in DevX is created in several places and in several
different drivers.
In all these places almost all the details are the same, in particular
the allocation of the required resources.
Add a structure that contains all the resources, and provide creation
and release functions for it.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:33 +0000 (08:19 +0000)]
common/mlx5: enhance page size configuration
The PRM expresses the page size in 4KB units, so the log_wq_pg_sz
attribute needs to be reduced accordingly.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:32 +0000 (08:19 +0000)]
net/mlx5: move Rx CQ creation to common
Use the common function for Rx CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:31 +0000 (08:19 +0000)]
net/mlx5: move Tx CQ creation to common
Use the common function for Tx CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:30 +0000 (08:19 +0000)]
net/mlx5: move ASO CQ creation to common
Use the common function for ASO CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:29 +0000 (08:19 +0000)]
net/mlx5: move rearm and clock queue CQ creation to common
Use the common function for CQ creation for the rearm and clock queues.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:28 +0000 (08:19 +0000)]
vdpa/mlx5: move DevX CQ creation to common
Use the common function for DevX CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:27 +0000 (08:19 +0000)]
regex/mlx5: move DevX CQ creation to common
Use the common function for DevX CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:26 +0000 (08:19 +0000)]
common/mlx5: share DevX CQ creation
The CQ object in DevX is created in several places and in several
different drivers.
In all these places almost all the details are the same, in particular
the allocation of the required resources.
Add a structure that contains all the resources, and provide creation
and release functions for it.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:25 +0000 (08:19 +0000)]
net/mlx5: fix leak on ASO SQ creation failure
In ASO SQ creation, the PMD allocates a umem buffer for the SQ.
When the umem buffer allocation fails, the MR and CQ memory are not freed,
which causes a memory leak.
Free them.
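A generic sketch of the error handling described above (plain malloc/free
stand in for the mlx5 CQ/MR/umem objects): release what was already
allocated when a later step fails.

    #include <stdlib.h>

    static int
    create_all(void **cq, void **mr, void **umem)
    {
        *cq = malloc(64);
        if (*cq == NULL)
            return -1;
        *mr = malloc(64);
        if (*mr == NULL)
            goto err_cq;
        *umem = malloc(64);
        if (*umem == NULL)
            goto err_mr;      /* previously the MR and CQ leaked here */
        return 0;
    err_mr:
        free(*mr);
    err_cq:
        free(*cq);
        return -1;
    }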
Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:24 +0000 (08:19 +0000)]
net/mlx5: remove CQE padding device argument
The data-path code does not take care of 'rxq_cqe_pad_en' and uses padded
CQEs in any case when the system cache-line size is 128B.
This makes the argument redundant.
Remove it.
Fixes: bc91e8db12cd ("net/mlx5: add 128B padding of Rx completion entry")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Wed, 6 Jan 2021 08:19:23 +0000 (08:19 +0000)]
common/mlx5: fix completion queue entry size configuration
According to the current data-path implementation in the PMD, the CQE
size must follow the cache-line size.
So, the configuration of the CQE size should depend on
RTE_CACHE_LINE_SIZE.
Wrongly, part of the CQE creations didn't follow this exactly, which caused
an incompatibility between HW and SW in the data-path when working on
128B cache-line size systems.
Adjust the rule for any CQE creation.
Remove the cqe_size attribute from the DevX CQ creation command and set
it inside the command translation according to the cache-line size.
Fixes: 79a7e409a2f6 ("common/mlx5: prepare support of packet pacing")
Fixes: 5cd0a83f413e ("common/mlx5: support more fields in DevX CQ create")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Dekel Peled [Sun, 10 Jan 2021 17:37:56 +0000 (19:37 +0200)]
net/mlx5: fix hairpin flow split decision
Previously, the identification of a hairpin queue was done using the
mlx5_rxq_get_type() function.
A recent patch replaced it with mlx5_rxq_get_hairpin_conf(),
checking that the returned conf is not NULL.
The case of a NULL return value (queue is not hairpin) was not handled.
As a result, non-hairpin flows were wrongly handled.
This patch adds the required check for a NULL return value.
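A sketch of the added check; the helper name comes from the text above,
but its exact signature and this wrapper are assumptions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Assumed prototype of the internal mlx5 helper mentioned above. */
    const struct rte_eth_hairpin_conf *
    mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx);

    static bool
    rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
    {
        /* NULL means "not a hairpin queue" and must be handled. */
        return mlx5_rxq_get_hairpin_conf(dev, idx) != NULL;
    }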
Fixes: 509f8470de55 ("net/mlx5: do not split hairpin flow in explicit mode")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Thu, 7 Jan 2021 13:08:27 +0000 (15:08 +0200)]
net/mlx5: split multi-thread flow handling per OS
The multi-threaded flows feature uses the pthread function
pthread_key_create, but for Windows the destructor option of this
function is unimplemented.
To resolve it, Windows will implement a destruction mechanism to clean up
the mlx5_flow_workspace object for each terminated thread.
The Linux flow will keep the current behavior.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Khoa To <khot@microsoft.com>
Kiran Kumar K [Mon, 21 Dec 2020 07:45:18 +0000 (13:15 +0530)]
net/octeontx2: support 24B custom L2 header parsing
Add support to parse a 24B custom L2 header. Add devargs support to
configure the PKIND, and remove the restriction so that custom
headers are also supported on non-SDP interfaces.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Sunil Kumar Kori [Mon, 21 Dec 2020 14:03:08 +0000 (19:33 +0530)]
net/octeontx2: fix corruption in segments list
On Tx, lastseg->next is not being reset to NULL for a multi-segment
packet, and the same mbuf can be used on Rx with a stale entry in
mbuf->next.
On Rx, the application receives an mbuf with mbuf->next uninitialized
although mbuf->nb_segs is correct. The application iterates over all
segments using mbuf->next, ignoring mbuf->nb_segs, which leads to
undefined behavior.
So the earlier assumption that just having the right value in
mbuf->nb_segs is enough is incorrect. An mbuf must contain valid and
synced values in both nb_segs and the next pointer.
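A sketch of the invariant described above (not the octeontx2 code): walk
nb_segs - 1 links and clear any stale pointer past the last segment.

    #include <rte_mbuf.h>

    static void
    reset_last_seg(struct rte_mbuf *head)
    {
        struct rte_mbuf *seg = head;
        uint16_t i;

        for (i = 1; i < head->nb_segs; i++)
            seg = seg->next;
        seg->next = NULL;   /* keep next in sync with nb_segs */
    }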
Fixes: 364eb0e46683 ("net/octeontx2: avoid per packet barrier with multi segment")
Cc: stable@dpdk.org
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Yunjian Wang [Mon, 7 Dec 2020 11:37:15 +0000 (19:37 +0800)]
net/mvneta: check allocation in Rx queue flush
The function rte_malloc() can return NULL, so the return value
needs to be checked.
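A minimal illustration (names are placeholders, not the mvneta code):
every rte_malloc() result must be checked before use.

    #include <errno.h>
    #include <rte_malloc.h>
    #include <rte_errno.h>

    static void *
    alloc_checked(size_t size)
    {
        void *obj = rte_malloc("rxq_flush", size, 0);

        if (obj == NULL)
            rte_errno = ENOMEM;   /* report instead of dereferencing NULL */
        return obj;
    }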
Fixes: ce7ea764597e ("net/mvneta: support Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Liron Himi <lironh@marvell.com>
Somnath Kotur [Thu, 24 Dec 2020 09:37:34 +0000 (15:07 +0530)]
net/bnxt: check chip reset in stop and close
While the error recovery thread is running, an application
can invoke dev_stop or dev_close_op, triggering a race
and unwanted consequences if dev_close is invoked while the
recovery is not yet completed.
Fix by adding another lock to synchronize between the two threads, and
return -EAGAIN if the adapter is in the middle of recovery when the
dev_stop or dev_close ops are invoked.
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Thu, 24 Dec 2020 09:37:33 +0000 (15:07 +0530)]
net/bnxt: fix error handling in device start
Call bnxt_dev_stop in the error path of bnxt_dev_start_op() to keep
it simple and consistent.
Fixes: c09f57b49c13 ("net/bnxt: add start/stop/link update operations")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Thu, 24 Dec 2020 09:37:32 +0000 (15:07 +0530)]
net/bnxt: fix lock init and destroy
Invoking the lock init/uninit in init_resources and uninit_resources
would end up initializing and destroying the locks on every port
start/stop, which is not desired.
Move the two routines to dev_init and dev_close respectively, as the
locks need to be initialized and destroyed only once during the
lifetime of the driver.
Fixes: 1cb3d39a48f7 ("net/bnxt: synchronize between flow related functions")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:30 +0000 (21:24 -0800)]
net/bnxt: add Rx logic for 58818 chips
1. On the new 58818 chips, the RX completion is largely the same except
for the new completion opcode and the stripped VLAN format and
checksum status. Added bnxt_parse_csum_v2(), bnxt_parse_pkt_type_v2()
and bnxt_rx_vlan_v2() to support the new RX completion logic.
2. Disable vector mode RX/TX for 58818 chips for now.
3. The cfa_code format on 58818 chips is different from that of legacy
chips, so skip the cfa_code parsing logic on 58818 chips for now.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:29 +0000 (21:24 -0800)]
net/bnxt: modify context memory allocation
Newer devices like SR2 may have chip backing store and do not require
host-backed memory allocation.
In these cases, HWRM_FUNC_BACKING_STORE_QCAPS will return a zero entry
size to indicate contexts for which the host should not allocate backing
store.
Selectively allocate context memory based on device capabilities and
only enable backing store for the appropriate contexts.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:28 +0000 (21:24 -0800)]
net/bnxt: support LRO for SR2 chip
Add the new chip specific TPA v2 logic to bnxt_tpa_start() to fully
support TPA on the new chip.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:27 +0000 (21:24 -0800)]
net/bnxt: modify VNIC accounting
Modify VNIC accounting when enabling RFS on newer chips.
Unlike legacy chips, newer chips don't need additional VNIC resources
for ntuple filter. Fix the code accordingly so that we don't reserve
and allocate additional VNICs on newer chips.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:26 +0000 (21:24 -0800)]
net/bnxt: add new Rx checksum mode
The 58818 chips support two different checksum modes.
The host driver has to register with the FW which checksum mode it
prefers to use. The DPDK driver wants to use "cs_all_ok_mode=1".
The FW advertises the support of the different checksum modes
on a per-VNIC basis in the HWRM_VNIC_QCAPS response.
The driver should use HWRM_VNIC_CFG to configure the needed
checksum mode.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Sun, 20 Dec 2020 05:24:25 +0000 (21:24 -0800)]
net/bnxt: support 58818 chip family
The new chip (Stingray 2) is part of the P5 chip family with a number
of changes:
1. Implement the epoch doorbell bit for 58818 chip. With the new
doorbell infrastructure and the unbounded index logic, now set the
epoch doorbell bit to support proper doorbell operation on the new
chip. Toggle epoch bit of all rings when it's wrapped to support
doorbell overflow checking.
2. Get the legacy doorbell size from firmware. Legacy doorbell support
has been removed in Stingray 2. So, the fast path doorbell pages
start from the base of the BAR. Drivers need to use
legacy_l2_db_space_size_kb field in the hwrm_func_qcfg_output
response to get the legacy doorbell page offset from the BAR.
3. Set VALID doorbell bit on 58818 chip family. This class of chip has a
valid doorbell bit added and it needs to be set.
4. Use "chip_num" returned by firmware. The "chip_num" field in the
HWRM_VER_GET output returns the chip number. Use this value to
identify chip category for 58818 chip family.
5. Added device ids for Stingray2 PF/VF devices.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Jeff Guo [Wed, 13 Jan 2021 05:31:29 +0000 (13:31 +0800)]
net/ice: refactor packet type parsing
If the capability of a PTYPE within a specific package can be
negotiated, there is no need to maintain a different PTYPE list for each
type of package when parsing PTYPEs. So refactor the PTYPE
parsing mechanism for each flow engine.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jeff Guo [Wed, 13 Jan 2021 05:31:28 +0000 (13:31 +0800)]
net/ice/base: add more packet type values
Add macros for some PTYPE values.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Murphy Yang [Fri, 8 Jan 2021 08:30:11 +0000 (08:30 +0000)]
net/i40e: add null input checks
Add NULL pointer checks for 'mac_addr' and 'conf' within the i40e PMD APIs.
Fixes: 66c78f4799ff ("net/i40e: add support for packet template to flow director")
Fixes: 04b443fb2c43 ("net/i40e: fix port id type")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Haiyue Wang [Tue, 12 Jan 2021 08:13:02 +0000 (16:13 +0800)]
net/iavf: support new VLAN capabilities
The new VLAN virtchnl opcodes introduce new capabilities like VLAN
filtering, stripping and insertion.
The AVF needs to query the VLAN capabilities based on the current device
configuration first.
The AVF is able to configure an inner VLAN filter when port VLAN is
enabled, based on negotiation; and it is able to configure an outer VLAN
(0x8100) if port VLAN is disabled, to be compatible with legacy mode.
When the port VLAN is updated by DCF, the AVF needs to reset to query the
new VLAN capabilities.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Haiyue Wang [Tue, 12 Jan 2021 08:13:01 +0000 (16:13 +0800)]
net/ice: add DCF VLAN handling
Add the DCF port representor infrastructure for the VFs of DCF attached
PF. Then the standard ethdev API like VLAN can be used to configure the
VFs.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Haiyue Wang [Tue, 12 Jan 2021 08:13:00 +0000 (16:13 +0800)]
net/iavf: support CRC strip disabling
The VF will check the PF's CRC strip capability first, then set the
'CRC strip disable' value in the queue configuration according to the
Rx CRC offload setting.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Thu, 7 Jan 2021 04:53:33 +0000 (12:53 +0800)]
common/iavf: update copyright date
Update the copyright year to 2021.
Update the FreeBSD IAVF driver version.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Thu, 7 Jan 2021 05:15:04 +0000 (13:15 +0800)]
common/iavf: support VLAN offload by DCF
Add new opcode VIRTCHNL_OP_DCF_VLAN_OFFLOAD to set VLAN offload
by DCF, the virtchnl message includes:
1. A valid target VF
2. Type of VLAN to be supported: outer or inner
3. Ethertype of the VLAN (either 0x8100 or 0x88A8 or 0x9100)
4. VLAN insert settings
a). No insert offload, VLAN ID in the packet (default)
b). Offload via transmit descriptor
c). Insert as a port VLAN (via VSI)
5. VLAN strip settings
a). Strip (and discard)
b). Strip and place in descriptor
c). No Strip
6. VLAN ID for the target VF
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Thu, 7 Jan 2021 05:07:08 +0000 (13:07 +0800)]
common/iavf: support new VLAN capabilities
Currently VIRTCHNL only allows for VLAN filtering and offloads to happen
on a single 802.1Q VLAN. Add support to filter and offload on inner,
outer, and/or inner + outer VLANs.
This is done by introducing the new capability
VIRTCHNL_VF_OFFLOAD_VLAN_V2. The flow to negotiate this new capability
is shown below.
1. VF - sets the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit in the
virtchnl_vf_resource.vf_caps_flags during the
VIRTCHNL_OP_GET_VF_RESOURCES request message. The VF should also set
the VIRTCHNL_VF_OFFLOAD_VLAN bit in case the PF driver doesn't
support the new capability.
2. PF - sets the VLAN capability bit it supports in the
VIRTCHNL_OP_GET_VF_RESOURCES response message. This will either be
VIRTCHNL_VF_OFFLOAD_VLAN_V2, VIRTCHNL_VF_OFFLOAD_VLAN, or none.
3. VF - If the VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability was ACK'd by the
PF, then the VF needs to request the VLAN capabilities of the
PF/Device by issuing a VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS request.
If the VIRTCHNL_VF_OFFLOAD_VLAN capability was ACK'd then the VF
knows only single 802.1Q VLAN filtering/offloads are supported. If no
VLAN capability is ACK'd then the PF/Device doesn't support hardware
VLAN filtering/offloads for this VF.
4. PF - Populates the virtchnl_vlan_caps structure based on what it
allows/supports for that VF and sends that response via
VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS.
After VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS is successfully negotiated
the VF driver needs to interpret the capabilities supported by the
underlying PF/Device. The VF will be allowed to filter/offload the
inner 802.1Q, outer (various ethertype), inner 802.1Q + outer
(various ethertypes), or none based on which fields are set.
The VF will also need to interpret where the VLAN tag should be inserted
and/or stripped based on the negotiated capabilities.
Also, update the virtchnl_op_str() function to support the added opcodes.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Leyi Rong [Wed, 6 Jan 2021 05:35:48 +0000 (13:35 +0800)]
net/ice: enlarge Rx queue rearm threshold to 64
We observe a performance drop on the ice AVX512 data path after stop and
start using testpmd.
As CPU polling is faster in the AVX512 path, L3 contested accesses are
intensified when rxrearm_start is a random value after testpmd
stop/start.
Enlarge ICE_RXQ_REARM_THRESH to 64 to ease the contested accesses and
fix the performance drop issue.
Cc: stable@dpdk.org
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Murphy Yang [Fri, 8 Jan 2021 07:17:52 +0000 (07:17 +0000)]
net/ice: disable IPv4 checksum offload in vector Tx
The ice PMD chooses between the vector Tx path and the basic Tx path
using the macro 'ICE_NO_VECTOR_FLAGS'.
This patch adds 'DEV_TX_OFFLOAD_IPV4_CKSUM' to 'ICE_NO_VECTOR_FLAGS'
so that IPv4 checksum offload is processed by the basic Tx path.
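A sketch of the change (the driver's real flag list is longer; only the
added entry is taken from the text above):

    #include <rte_ethdev.h>

    /* Offloads that force the scalar (basic) Tx path. */
    #define ICE_NO_VECTOR_FLAGS (           \
            DEV_TX_OFFLOAD_TCP_TSO |        \
            DEV_TX_OFFLOAD_IPV4_CKSUM)      /* newly added by this fix */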
Fixes: a22483208800 ("net/ice: disable TSO offload in vector path")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Alvin Zhang [Fri, 8 Jan 2021 07:29:10 +0000 (15:29 +0800)]
net/ice: fix RSS lookup table initialization
RSS lookup table initialization is done incorrectly due to a
divide-by-zero error.
Add a check on the Rx queue count.
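A minimal sketch of the fix (not the ice code): refuse to build the lookup
table when no Rx queue is configured, which previously caused a modulo by
zero.

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    init_rss_lut(struct rte_eth_dev *dev, uint8_t *lut, uint16_t lut_size)
    {
        uint16_t i, nb_q = dev->data->nb_rx_queues;

        if (nb_q == 0)
            return -EINVAL;       /* the missing check */
        for (i = 0; i < lut_size; i++)
            lut[i] = i % nb_q;
        return 0;
    }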
Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Tested-by: Wei Xie <weix.xie@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Murphy Yang [Thu, 7 Jan 2021 09:17:10 +0000 (09:17 +0000)]
net/iavf: fix conflicting RSS combination rules
Currently, when using the 'flow' command to create a rule with one of the
following invalid RSS type combinations, it can be created successfully.
Invalid RSS combinations list:
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP
This patch adds these combinations to the 'invalid_rss_comb' array so
they are checked; if the combination check fails, the rule creation
fails.
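A sketch of the check table; the two entries below are exactly the
combinations listed above, while the driver's real 'invalid_rss_comb'
array may hold additional entries.

    #include <stdint.h>
    #include <rte_ethdev.h>

    static const uint64_t invalid_rss_comb[] = {
            ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
            ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
    };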
Fixes: 91f27b2e39ab ("net/iavf: refactor RSS")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Jiawen Wu [Fri, 18 Dec 2020 09:37:02 +0000 (17:37 +0800)]
net/txgbe: add security type in flow action
Add security type in flow action.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:37:01 +0000 (17:37 +0800)]
net/txgbe: add security offload in Rx and Tx
Add security offload in Rx and Tx process.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:37:00 +0000 (17:37 +0800)]
net/txgbe: destroy security session
Add support to clear a security session's private data,
get the size of a security session,
and update the mbuf with provided metadata.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:59 +0000 (17:36 +0800)]
net/txgbe: add security session create operation
Add support to configure a security session.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:58 +0000 (17:36 +0800)]
net/txgbe: add IPsec context creation
Initialize the security context, and add support to get
security capabilities.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:56 +0000 (17:36 +0800)]
net/txgbe: add TM hierarchy commit
Add traffic manager hierarchy commit.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:55 +0000 (17:36 +0800)]
net/txgbe: support TM node add and delete
Support traffic manager node add and delete operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:54 +0000 (17:36 +0800)]
net/txgbe: support TM shaper profile add and delete
Support traffic manager profile add and delete operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:53 +0000 (17:36 +0800)]
net/txgbe: add TM capabilities get operation
Add support to get traffic manager capabilities.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:52 +0000 (17:36 +0800)]
net/txgbe: add TM configuration init and uninit
Add traffic manager configuration init and uninit operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:51 +0000 (17:36 +0800)]
net/txgbe: support UDP tunnel port add and delete
Support UDP tunnel port add and delete operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:50 +0000 (17:36 +0800)]
net/txgbe: flush all filters
Add support to flush all the filters.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:49 +0000 (17:36 +0800)]
net/txgbe: support destroying consistent filter
Add a function to destroy the flow filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:48 +0000 (17:36 +0800)]
net/txgbe: support creating consistent filter
Create a flow rule, using the first filter that the rule matches.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:47 +0000 (17:36 +0800)]
net/txgbe: parse RSS filter
Check if the rule is an RSS filter rule, and get the RSS info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:46 +0000 (17:36 +0800)]
net/txgbe: restore RSS filter
Add support to restore RSS filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:45 +0000 (17:36 +0800)]
net/txgbe: parse flow director filter
Check if the rule is a flow director rule, and get the flow director info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:44 +0000 (17:36 +0800)]
net/txgbe: support flow director filter add and delete
Support add and delete operations on flow director filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:43 +0000 (17:36 +0800)]
net/txgbe: configure flow director filter
Configure the flow director filter when it is enabled.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:42 +0000 (17:36 +0800)]
net/txgbe: add flow director filter init and uninit
Add flow director filter init and uninit operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:41 +0000 (17:36 +0800)]
net/txgbe: parse L2 tunnel filter
Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:40 +0000 (17:36 +0800)]
net/txgbe: support L2 tunnel filter add and delete
Support L2 tunnel filter add and delete operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:39 +0000 (17:36 +0800)]
net/txgbe: config L2 tunnel filter with e-tag
Configure the L2 tunnel filter with e-tag.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:38 +0000 (17:36 +0800)]
net/txgbe: add L2 tunnel filter init and uninit
Add L2 tunnel filter init and uninit.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:37 +0000 (17:36 +0800)]
net/txgbe: parse syn filter
Check if the rule is a TCP SYN rule, and get the SYN info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:36 +0000 (17:36 +0800)]
net/txgbe: support syn filter add and delete
Support add and delete operations on syn filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:35 +0000 (17:36 +0800)]
net/txgbe: parse ethertype filter
Check if the rule is an ethertype rule, and get the ethertype info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:34 +0000 (17:36 +0800)]
net/txgbe: support ethertype filter add and delete
Support add and delete operations on ethertype filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:33 +0000 (17:36 +0800)]
net/txgbe: parse n-tuple filter
Check if the rule is an n-tuple rule, and get the n-tuple info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:32 +0000 (17:36 +0800)]
net/txgbe: support ntuple filter add and delete
Support add and delete operations on ntuple filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:31 +0000 (17:36 +0800)]
net/txgbe: add ntuple filter init and uninit
Add ntuple filter init and uninit.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:30 +0000 (17:36 +0800)]
net/txgbe: add generic flow API
Introduce rte_flow with its validate, create, destroy and flush
operations into txgbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Andrew Boyer [Wed, 16 Dec 2020 21:12:57 +0000 (13:12 -0800)]
net/ionic: stop queues when LIF is stopped
Otherwise they cannot be restarted, because the FW will reject INIT
or ENA commands on queues which are already running.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:56 +0000 (13:12 -0800)]
net/ionic: improve queue state handling
Skip ionic_lif_[rxq|txq]_init() in queue start if it's already done.
Move ionic_lif_[rxq|txq]_deinit() from queue stop to queue release.
This allows the queues to be restarted.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:55 +0000 (13:12 -0800)]
net/ionic: improve link state handling
Add UP and FW_RESET state flags.
Update the stack info when the link state changes.
Convert set_link_up/set_link_down to lif_start/lif_stop.
Condition reported link state on UP flag.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:54 +0000 (13:12 -0800)]
net/ionic: complete release on close
ionic_dev_close() is responsible for destroying the ethdev, lif, and
adapter. eth_ionic_dev_remove() calls ionic_dev_close().
Remove-on-close is now required behavior for a PMD.
Remove the UNMAINTAINED flag.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:53 +0000 (13:12 -0800)]
net/ionic: remove multi-LIF support
This feature is unused, so remove it.
There is exactly one adapter / lif / ethdev per port.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>