Satha Rao [Wed, 7 Jul 2021 16:49:16 +0000 (12:49 -0400)]
net/octeontx2: fix TM node statistics query
TM hardware resources are not allocated for a node until the hierarchy is
committed.
This patch checks the status of HW resources before reading statistics.
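A minimal sketch of the intended guard (the flag and field names here are
illustrative, not the driver's actual identifiers):

	/* Bail out of the stats query until the hierarchy commit has
	 * allocated HW resources for this node. */
	if (!(node->flags & NIX_TM_NODE_HWRES))
		return -EINVAL;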
Fixes: 1e25d57fae38 ("net/octeontx2: add TM stats and shaper profile")
Cc: stable@dpdk.org
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Wed, 7 Jul 2021 16:49:15 +0000 (12:49 -0400)]
net/octeontx2: handle link status when device stopped
Set the link status to down and don't fetch the link status from the kernel
when the device is in the stopped state.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Satheesh Paul [Tue, 6 Jul 2021 08:19:18 +0000 (13:49 +0530)]
net/cnxk: fix default MCAM allocation size
Preallocation of MCAM entries is not valid anymore since the
AF side MCAM allocation scheme has changed. This patch disables
preallocation by changing the default MCAM preallocation size
from 8 to 1.
Fixes: 168c59cfe42 ("net/octeontx2: add flow MCAM utility functions")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Anoob Joseph [Thu, 1 Jul 2021 09:29:29 +0000 (14:59 +0530)]
net/octeontx2: support non-ethernet L2 header
In the inline inbound path, a custom header carrying the sequence number and
SPI is present at L3. L2 needs to be adjusted so that the resulting packet
has L3 right after L2. Remove the assumption about the L2 type in this
handling.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Meir Levi [Sun, 11 Jul 2021 13:13:14 +0000 (16:13 +0300)]
net/mvpp2: fix not supported VLAN operations status
The vlan_strip and vlan_extend operations are not supported and need to
return an "unsupported" error value.
Fixes: ff0b8b10dc4 ("net/mvpp2: support VLAN offload")
Cc: stable@dpdk.org
Signed-off-by: Meir Levi <mlevi4@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Dana Vardi [Sun, 11 Jul 2021 13:12:49 +0000 (16:12 +0300)]
net/mvpp2: fix configured state dependency
Set the configured flag to allow creating and committing the mrvl TM
hierarchy tree. The TM configuration depends on parameters that are set at
the port configure stage, e.g. nb_tx_queues.
This also aligns with the TM API description.
Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org
Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Dana Vardi [Sun, 11 Jul 2021 13:11:43 +0000 (16:11 +0300)]
net/mvpp2: fix port speed overflow
ethtool_cmd_speed() returns uint32_t, and after the arithmetic operation in
the mrvl_get_max_rate() function the result is out of range.
Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org
Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Sarosh Arif [Tue, 8 Jun 2021 11:08:50 +0000 (16:08 +0500)]
net/mlx5: fix typo in vectorized Rx comments
Change "returing" to "returning".
Fixes: 2e542da70937 ("net/mlx5: add Altivec Rx")
Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Fixes: 3c2ddbd413e3 ("net/mlx5: separate shareable vector functions")
Cc: stable@dpdk.org
Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Alexander Kozyrev [Tue, 13 Jul 2021 15:21:12 +0000 (18:21 +0300)]
net/mlx5: fix threshold for mbuf replenishment in MPRQ
The replenishment scheme for the vectorized MPRQ Rx burst aims
to improve the cache locality by allocating new mbufs only when
there are almost no mbufs left: one burst gap between allocated
and consumed indexes.
This gap is not big enough to accommodate a corner case when we
have a very aggressive CQE compression with multiple regular CQEs
at the beginning and 64 zipped CQEs at the end.
Keep this corner case in mind and extend the replenishment threshold by
MLX5_VPMD_RX_MAX_BURST (64) to avoid mbuf overflow.
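A hedged sketch of the adjusted trigger; apart from MLX5_VPMD_RX_MAX_BURST,
the names below are assumptions:

	/* Replenish one max burst earlier, so that a run of 64 zipped
	 * CQEs unzipped at once cannot overrun the allocated mbufs. */
	if (rxq->elts_ci - rxq->rq_pi <=
	    rplnsh_thresh + MLX5_VPMD_RX_MAX_BURST)
		mlx5_rx_mprq_replenish_bulk_mbuf(rxq);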
Fixes: 5fc2e5c27d6 ("net/mlx5: fix mbuf overflow in vectorized MPRQ")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Xiaoyu Min [Wed, 7 Jul 2021 02:32:47 +0000 (10:32 +0800)]
net/mlx5: fix missing RSS expansion of IPv6 frag
The IPV6_FRAG_EXT item is missing from RSS expansion, which causes wrongly
expanded flows:
flow create 0 ingress pattern eth / ipv6 / udp dst is 250 / vxlan-gpe /
ipv6 / ipv6_frag_ext / end actions rss level 2 types ip end / end
Unlike other items, IPV6_FRAG_EXT has no next field because the HW only
supports hashing UDP/TCP for non-fragmented packets.
The MLX5_EXPANSION_IPV6_FRAG_EXT node in the RSS expansion graph only helps
the expansion function locate the right node in the graph from which to
start expanding.
Fixes: 0e5a0d8f7556 ("net/mlx5: support match on IPv6 fragment extension")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Xiaoyu Min [Wed, 7 Jul 2021 02:32:46 +0000 (10:32 +0800)]
net/mlx5: fix missing RSS expandable items
Some RSS expandable items are missing, which leads to expanded rte_flow
rules with wrong patterns.
Fix by adding the missing items.
Fixes: d91093b9a2af ("net/mlx5: fix RSS pattern expansion")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Gregory Etelson [Mon, 5 Jul 2021 11:40:35 +0000 (14:40 +0300)]
net/mlx5: support flow matching on IPv4 IHL
Query the MLX5 port hardware for its capability to offload the IPv4
IHL field.
Provide flow rules the capability to match on the IPv4 IHL field.
Minimal HCA firmware version required to offload IPv4 IHL is
xx_30_2000.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:45:00 +0000 (11:45 +0300)]
doc: add multi-thread flow rate optimizations for mlx5
This commit adds a description of the multi-thread flow insertion
optimizations.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:59 +0000 (11:44 +0300)]
net/mlx5: optimize Rx queue match
Since the hrxq struct holds the indirect table pointer, it is better to
match on the hrxq's indirect table directly instead of searching the list.
This commit optimizes the hrxq indirect table matching.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:58 +0000 (11:44 +0300)]
net/mlx5: change memory release configuration
This commit changes the index pool memory release configuration
to 0 when memory reclaim mode is not required.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:57 +0000 (11:44 +0300)]
net/mlx5: optimize hash list table allocate on demand
Currently, all the hash list tables are allocated during startup.
Since different applications may use only a limited set of actions,
allocating the hash list tables on demand saves initial memory.
This commit allocates the hash list tables on demand.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:56 +0000 (11:44 +0300)]
net/mlx5: enable indexed pool per-core cache
This commit enables the tag and header modify action indexed
pool per-core cache in non-reclaim memory mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:55 +0000 (11:44 +0300)]
net/mlx5: adjust hash bucket size
With the new per-core optimization of the list, the hash bucket size
can be tuned to a more accurate number.
This commit adjusts the hash bucket size.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:54 +0000 (11:44 +0300)]
net/mlx5: move header modify allocator to ipool
Modify header actions are allocated by mlx5_malloc, which has a big
memory and allocation-time overhead.
One of the action types under the modify header object is SET_TAG.
The SET_TAG action is commonly not reused by the flows, and each flow has
its own value.
Hence, mlx5_malloc becomes a bottleneck in the flow insertion rate in
the common SET_TAG cases.
Use the ipool allocator for the SET_TAG action.
The ipool allocator has less memory and insertion-rate overhead and has
a better synchronization mechanism in multithreaded cases.
A different ipool is created for each possible size of modify header
handler.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:53 +0000 (11:44 +0300)]
common/mlx5: support list non-lcore operations
This commit supports the list non-lcore operations with
an extra sub-list and lock.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:52 +0000 (11:44 +0300)]
common/mlx5: optimize cache list object memory
Currently, the hash list uses the cache list as its bucket list. The lists
in the buckets have the same name, ctx and callbacks, which wastes memory.
This commit abstracts the name, ctx and callback members of the list into
a constant struct and the remaining members into a mutable struct, and uses
wrapper functions so that both the hash list and the cache list can set
their constant and mutable structs individually.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:51 +0000 (11:44 +0300)]
common/mlx5: allocate cache list memory individually
Currently, the list's local cache instance memory is allocated with the
list. As the local cache instance array size is RTE_MAX_LCORE, while in
most cases the system has only a limited number of cores, allocating the
instance memory individually per core is more economical.
This commit changes the instance array to a pointer array and allocates
the local cache memory only when a core is actually used.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:50 +0000 (11:44 +0300)]
common/mlx5: add per-lcore cache to hash list utility
Use the mlx5 list utility object in the hlist buckets.
This patch moves the list utility object to the common utility and creates
all the clone operations for all the hlist instances in the driver.
It also adjusts all the utility callbacks to be generic for both list and
hlist.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:49 +0000 (11:44 +0300)]
common/mlx5: call list callbacks with context
This commit optimizes the list callback functions to be called directly
with the global context.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:48 +0000 (11:44 +0300)]
common/mlx5: add per-lcore sharing flag in object list
Without the lcores_share flag, the mlx5 PMD shared the rdma-core objects
between all lcores.
With the lcores_share flag disabled, each lcore has its own objects,
which eventually leads to increased insertion/deletion rates.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:47 +0000 (11:44 +0300)]
common/mlx5: move list utility from net driver
The hash list is planned to be implemented on top of the cache list code.
This commit moves the list utility to the common directory.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:46 +0000 (11:44 +0300)]
net/mlx5: allocate list memory in create function
Currently, the list memory is allocated by the list API caller.
Move the allocation into the create API in order to keep consistency with
the hlist utility.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:45 +0000 (11:44 +0300)]
net/mlx5: relax list utility atomic operations
The atomic operations in the list utility do not need barriers because the
critical sections are protected by an RW lock.
Relax them.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:44 +0000 (11:44 +0300)]
net/mlx5: manage list cache entries release
When a cache entry is allocated by lcore A and is released by lcore B,
the driver should synchronize the cache list access of lcore A.
The design decision is to manage a counter per lcore cache that is
increased atomically when a non-original lcore decreases the reference
counter of a cache entry to 0.
In the list register operation, before the running lcore starts a lookup
in its cache, it checks the counter in order to free invalid entries in
its cache.
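In rough C, the release path described above could look like this (struct
and field names are assumptions for illustration):

	/* Non-original lcore drops the last reference: flag the entry's
	 * home cache for lazy cleanup instead of touching it directly. */
	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) == 0 &&
	    entry->lcore_idx != rte_lcore_index(rte_lcore_id()))
		__atomic_fetch_add(&list->cache[entry->lcore_idx].inv_cnt,
				   1, __ATOMIC_RELAXED);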
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:43 +0000 (11:44 +0300)]
net/mlx5: minimize list critical sections
The mlx5 internal list utility is thread safe.
In order to synchronize list access between the threads, an RW lock is
taken for the critical sections.
The create/remove/clone/clone_free operations are in the critical sections.
These operations are heavy and make the critical sections heavy because
they are used for memory and other resource allocations/deallocations.
Move the operations out of the critical sections and use a generation
counter in order to detect parallel allocations.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:42 +0000 (11:44 +0300)]
net/mlx5: add per-lcore cache to the list utility
When an mlx5 list object is accessed by multiple cores, the list lock
counter is written by all the cores all the time, which increases cache
misses in the memory caches.
In addition, when one thread accesses the list for an add/remove/lookup
operation, all the other threads attempting an operation on the list are
stuck on the lock.
Add a per-lcore cache to allow lockless thread manipulations when the
list objects are mostly reused.
Synchronization with atomic operations is done in order to allow threads
to unregister an entry from another thread's cache.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:41 +0000 (11:44 +0300)]
net/mlx5: remove cache term from the list utility
The internal mlx5 list tool is used mainly when the list objects need to
be synchronized between multiple threads.
The "cache" term is used in the internal mlx5 list API.
Upcoming enhancements of this tool will use the "cache" term for
per-thread cache management.
To prevent confusion, remove the current "cache" term from the API's
names.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Matan Azrad [Tue, 13 Jul 2021 08:44:40 +0000 (11:44 +0300)]
net/mlx5: optimize header modify action memory
Define the types of the modify header action fields with the minimum size
needed for their possible value ranges.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:39 +0000 (11:44 +0300)]
net/mlx5: replace flow list with indexed pool
The flow list is used to save the created flows and is needed only when
the port closes and all the flows must be flushed.
This commit takes advantage of the index pool foreach operation to flush
all the allocated flows.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:38 +0000 (11:44 +0300)]
net/mlx5: support indexed pool non-lcore operations
This commit supports the index pool non-lcore operations with
an extra cache and lcore lock.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:37 +0000 (11:44 +0300)]
net/mlx5: add indexed pool iterator
In some cases, an application may want to know all the allocated indexes
in order to apply some operation to them.
This commit adds indexed pool functions to support a foreach operation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:36 +0000 (11:44 +0300)]
net/mlx5: add indexed pool local cache
For objects that need efficient index allocation and free, a local cache
is very helpful.
A two-level cache is introduced to allocate and free indexes more
efficiently: one level is local, the other global. The global cache can
hold all the allocated indexes, which means allocated indexes are not
freed back to the trunks. Once the local cache is full, the extra indexes
are flushed to the global cache. Once the local cache is empty, it first
tries to fetch more indexes from the global cache; if the global cache is
also empty, a new trunk with more indexes is allocated.
This commit adds the new local cache mechanism for the indexed pool.
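In pseudo-C, the allocation path reads roughly as follows (all names are
illustrative):

	uint32_t idx;

	if (lcache->len > 0)			/* 1. per-lcore cache hit */
		idx = lcache->idx[--lcache->len];
	else if (ipool_cache_refill(pool, lcache) > 0)
		idx = lcache->idx[--lcache->len];	/* 2. from global cache */
	else
		idx = ipool_new_trunk_alloc(pool);	/* 3. both empty */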
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Suanming Mou [Tue, 13 Jul 2021 08:44:35 +0000 (11:44 +0300)]
net/mlx5: allow limiting the indexed pool maximum index
Some ipool instances in the driver are used as ID/index allocators and
needed extra logic in order to work with limited index values.
Add a new ipool configuration that specifies the maximum index value.
The ipool ensures that no index bigger than the maximum value is
provided.
Use this configuration in the ID allocator cases instead of the current
logic. This patch makes the maximum ID configurable for the index pool.
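A hedged configuration sketch (max_idx reflects the new maximum-index
setting described above; the other fields and values are illustrative):

	struct mlx5_indexed_pool_config cfg = {
		.size = sizeof(struct my_entry),	/* illustrative */
		.trunk_size = 64,
		.max_idx = 1 << 24,	/* no index above this is handed out */
	};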
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Ruifeng Wang [Wed, 7 Jul 2021 09:03:07 +0000 (17:03 +0800)]
net/mlx5: reduce unnecessary memory access in Rx
The MR btree length is constant during Rx replenish.
Move retrieval of the value out of the loop to reduce data loads.
A slight performance uplift was measured on both N1SDP and x86.
Suggested-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Ruifeng Wang [Wed, 7 Jul 2021 09:03:06 +0000 (17:03 +0800)]
net/mlx5: remove redundant operations in NEON Rx
The mask of entries after the compressed CQE is covered by the invalid
mask of the non-compressed valid CQEs. Hence remove the redundant mask
calculation.
The change shows a slight performance uplift on N1SDP.
Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Rongwei Liu [Tue, 13 Jul 2021 12:09:20 +0000 (15:09 +0300)]
app/testpmd: support matching on VXLAN reserved field
Add a new testpmd pattern field 'last_rsvd' that supports matching the
last 8 bits of the VXLAN header.
Examples for the "last_rsvd" pattern field:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow matches exactly the last 8 bits being 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow matches only the MSB of the last 8 bits being 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
Rongwei Liu [Tue, 13 Jul 2021 12:09:19 +0000 (15:09 +0300)]
net/mlx5: support matching on VXLAN reserved field
This adds matching on the reserved field of the VXLAN header (the last
8 bits). The capability from rdma-core is detected by creating a dummy
matcher using misc5 when the device is probed.
For non-zero groups and the FDB domain, the capability is detected from
rdma-core, while for NIC domain group zero it relies on the HCA_CAP from
FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
Gregory Etelson [Tue, 13 Jul 2021 07:29:25 +0000 (10:29 +0300)]
app/testpmd: add flow matching on IPv4 version and IHL
The new flow item allows a PMD to offload matching on the IPv4 IHL field,
if the hardware supports that operation.
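A possible testpmd invocation (the 'ihl' field spelling below is an
assumption, not confirmed syntax):
...pattern eth / ipv4 ihl is 5 / end actions drop / end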
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Viacheslav Ovsiienko [Mon, 12 Jul 2021 12:40:53 +0000 (15:40 +0300)]
app/testpmd: fix offloads for newly attached port
For newly attached ports (with the "port attach" command), the default
offload settings configured from the application command line were not
applied, causing port start failure following the attach.
For example, if the scattering offload was configured on the command
line and rxpkts was configured for multiple segments, the newly attached
port failed to start because the scattering offload was not enabled in
the new port settings. The missing code to apply the offloads to the new
device and its queues is added.
The new local routine init_config_port_offloads() is introduced,
embracing the shared part of port offloads initialization code.
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Sat, 10 Jul 2021 01:58:34 +0000 (09:58 +0800)]
net/hns3: support multiple TC MAC pause
MAC PAUSE can take effect on a single TC or multiple TCs, depending on the
hardware. For example, the Kunpeng 920 supports MAC pause in a single TC,
and the Kunpeng 930 supports MAC pause in multiple TCs. This patch
supports MAC PAUSE in multiple TC for some hardware.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengchang Tang [Sat, 10 Jul 2021 01:58:33 +0000 (09:58 +0800)]
net/hns3: support VLAN filter state modify for VF
Due to a HW limitation for VFs, the VLAN filter was enabled by default
and was not allowed to be disabled. Now that this limitation has been
removed in the Kunpeng930 network engine, this patch adds support for
the VF to modify the VLAN filter state.
A capability bit is added to differentiate between platforms and achieve
compatibility. When the VF runs on an incompatible platform or an
incompatible kernel-mode driver version is used, the VF behaves as
before.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengchang Tang [Sat, 10 Jul 2021 01:58:32 +0000 (09:58 +0800)]
net/hns3: query basic info for VF
Some VF features depend on the PF, so the VF needs to know what the
current PF supports. Therefore, the final capability set of the VF is
composed of the capability set of the hardware and the capability set of
the PF.
For compatibility reasons, the mailbox HNS3_MBX_GET_TCINFO has been
modified to obtain more basic information about the current PF, including
the communication interface version and the current PF capability set.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Dapeng Yu [Fri, 9 Jul 2021 06:00:56 +0000 (14:00 +0800)]
net/softnic: fix connection memory leak
In the softnic_conn_init() function, a block of memory is allocated as a
connection buffer, but it is never freed in softnic_conn_free(), which
causes a memory leak.
Fixes: 7709a63bf178 ("net/softnic: add connection agent")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Jochen Behrens [Thu, 8 Jul 2021 14:02:25 +0000 (07:02 -0700)]
net/vmxnet3: support MSI-X interrupt
Add support for MSI-X interrupt vectors to the vmxnet3 driver.
This will allow more efficient deployments in cloud environments.
By default, the driver tries to allocate one vector (0) for the link
event and one MSI-X vector for each Rx queue. To simplify things, this
is only enabled if the numbers of Tx and Rx queues are equal (so that
Tx/Rx share the same vector).
If for any reason vmxnet3 cannot enable interrupt mode, it falls back
to the LSC-only mode.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Signed-off-by: Jochen Behrens <jbehrens@vmware.com>
Martin Havlik [Tue, 22 Jun 2021 09:25:29 +0000 (11:25 +0200)]
net/bonding: check flow setting
The return value of bond_ethdev_8023ad_flow_set() is now checked, and an
appropriate message is logged on error.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Martin Havlik [Tue, 22 Jun 2021 09:25:28 +0000 (11:25 +0200)]
net/bonding: fix error message on flow verify
The return value is now saved to errval, and the error log message
reports the correct function name, no longer uses q_id (which was out of
context), and uses the up-to-date errval.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Jiawen Wu [Thu, 8 Jul 2021 09:32:39 +0000 (17:32 +0800)]
net/ngbe: support close and reset device
Support closing and resetting the device.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:38 +0000 (17:32 +0800)]
net/ngbe: add simple Tx flow
Initialize device with the simplest transmit functions.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:37 +0000 (17:32 +0800)]
net/ngbe: add simple Rx flow
Initialize device with the simplest receive function.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:36 +0000 (17:32 +0800)]
net/ngbe: support Rx queue start/stop
Initialize the receive unit and support starting and stopping the receive
unit for specified queues.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:35 +0000 (17:32 +0800)]
net/ngbe: support Tx queue start/stop
Initialize the transmit unit and support starting and stopping the
transmit unit for specified queues.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:34 +0000 (17:32 +0800)]
net/ngbe: support device start/stop
To start the device, set up the MSI-X interrupt, complete the PHY
configuration and set the device link speed. To stop the device, disable
the interrupt, stop the hardware and clear the queues.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:33 +0000 (17:32 +0800)]
net/ngbe: support Tx queue setup/release
Setup device Tx queue and release Tx queue.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:32 +0000 (17:32 +0800)]
net/ngbe: support Rx queue setup/release
Setup device Rx queue and release Rx queue.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:31 +0000 (17:32 +0800)]
net/ngbe: setup PHY link
Set up the PHY, and determine the link and speed status from the PHY.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:30 +0000 (17:32 +0800)]
net/ngbe: support link update
Register to handle device interrupt.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:29 +0000 (17:32 +0800)]
net/ngbe: store MAC address
Store MAC addresses and init receive address filters.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:28 +0000 (17:32 +0800)]
net/ngbe: identify and reset PHY
Identify PHY to get the PHY type, and perform a PHY reset.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:27 +0000 (17:32 +0800)]
net/ngbe: add HW initialization
Initialize the hardware by resetting the hardware in base code.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:26 +0000 (17:32 +0800)]
net/ngbe: initialize and validate EEPROM
Reset swfw lock before NVM access, init EEPROM and validate the
checksum.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:25 +0000 (17:32 +0800)]
net/ngbe: set MAC type and LAN ID with initialization
Add basic init and uninit functions.
Map device IDs and subsystem IDs to a single ID for easy operation.
Then initialize the shared code.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:24 +0000 (17:32 +0800)]
net/ngbe: define registers
Define all registers that will be used.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:23 +0000 (17:32 +0800)]
net/ngbe: add log and error types
Add log type and error type to trace functions.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:22 +0000 (17:32 +0800)]
net/ngbe: support probe and remove
Add device IDs for Wangxun 1Gb NICs, and map the device IDs to register
the ngbe PMD. Add basic PCIe ethdev probe and remove.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 8 Jul 2021 09:32:21 +0000 (17:32 +0800)]
net/ngbe: add build and doc infrastructure
Add the bare minimum PMD library and doc build infrastructure, and claim
maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Thomas Monjalon [Sat, 10 Jul 2021 10:01:52 +0000 (12:01 +0200)]
version: 21.08-rc1
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Viacheslav Galaktionov [Mon, 5 Jul 2021 10:02:52 +0000 (13:02 +0300)]
ethdev: keep count of representor ranges in API
In its current state, the API can overflow the user-passed buffer if a new
representor range appears between function calls.
In order to solve this problem, augment the representor info structure with
the numbers of allocated and initialized ranges. This way the users of this
structure can be sure they will not overrun the buffer.
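A minimal usage sketch with the amended structure (nb_ranges_alloc and
nb_ranges are taken from the description above; buffer sizing and
handle_range() are illustrative):

	struct {
		struct rte_eth_representor_info info;
		struct rte_eth_representor_range ranges[64];
	} ri = { .info.nb_ranges_alloc = 64 };

	if (rte_eth_representor_info_get(port_id, &ri.info) >= 0) {
		/* Only nb_ranges entries were initialized, even if more
		 * ranges appeared after the buffer was sized. */
		for (uint32_t i = 0; i < ri.info.nb_ranges; i++)
			handle_range(&ri.info.ranges[i]);	/* placeholder */
	}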
Fixes: 85e1588ca72f ("ethdev: add API to get representor info")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Changpeng Liu [Wed, 19 May 2021 06:45:48 +0000 (14:45 +0800)]
eal: suppress error log on multi-process hotplug
This is a normal case that the primary process already
owned one device while the secondary process try to
attach it, so suppress the error log here to exclude
this case.
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
David Hunt [Tue, 22 Jun 2021 14:07:50 +0000 (15:07 +0100)]
examples/l3fwd-power: add baseline PMD management mode
The PMD power management scheme currently has 3 modes:
scale, monitor and pause. However, it would be nice to
have a baseline mode for easy comparison of power savings
with and without these modes.
This patch adds a 'baseline' mode where PMD power
management is not enabled. Use --pmd-mgmt=baseline.
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Cristian Dumitrescu [Fri, 9 Jul 2021 17:07:00 +0000 (18:07 +0100)]
examples/pipeline: add FIB example
Add example for FIB with VRF and ECMP support.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Cristian Dumitrescu [Thu, 8 Jul 2021 10:11:29 +0000 (11:11 +0100)]
pipeline: support LPM lookup
Add support for the Longest Prefix Match (LPM) lookup to the SWX
pipeline.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Cristian Dumitrescu [Sat, 10 Jul 2021 00:20:36 +0000 (01:20 +0100)]
examples/pipeline: add selector example
Added the files to illustrate the selector table usage.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Cristian Dumitrescu [Sat, 10 Jul 2021 00:20:35 +0000 (01:20 +0100)]
examples/pipeline: support selector table
Add application-level support for selector tables.
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Cristian Dumitrescu [Sat, 10 Jul 2021 00:20:34 +0000 (01:20 +0100)]
pipeline: support selector table
Add pipeline-level support for selector tables.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Cristian Dumitrescu [Fri, 2 Jul 2021 22:46:05 +0000 (23:46 +0100)]
table: support selector table
A selector table is made up of groups of weighted members, with a
given member potentially part of several groups. The select operation
returns a member ID by first selecting a group based on an input group
ID and then selecting a member within that group based on hashing one
or several input header/meta-data fields. It is very useful for
implementing an ECMP/WCMP-enabled FIB or a load balancer. It is part
of the action selector described by the P4 Portable Switch
Architecture (PSA) specification.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Churchill Khangar [Fri, 2 Jul 2021 22:46:04 +0000 (23:46 +0100)]
examples/pipeline: improve table update commands
For more flexibility, the single monolithic table update command is
split into table entry add, table entry delete, table default entry
add, pipeline commit and pipeline abort.
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Cristian Dumitrescu [Mon, 5 Jul 2021 22:56:50 +0000 (23:56 +0100)]
pipeline: fix table entry read
The rte_swx_pipeline_table_entry_read() function is used to read from
a character string a table entry that is to be added to the table,
deleted from the table or set as the default entry of the table.
Addition needs both the match and the action part of the entry, deletion
ignores the action part, while the default set ignores the match part;
hence the need to make both the match and the action part optional.
The logic for skipping the match or the action part was broken, hence
the current fix.
Fixes: b32c0a2c5e4c ("pipeline: add SWX table update high level API")
Cc: stable@dpdk.org
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Venkata Suresh Kumar P <venkata.suresh.kumar.p@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Thierry Herbelot [Wed, 7 Jul 2021 11:19:05 +0000 (13:19 +0200)]
table: fix bucket empty check
Due to a typo, only 3 out of 4 keys in the bucket of the exact match
table were considered, which can result in valid keys being
incorrectly dropped from the table.
Fixes: d0a00966618ba ("table: add exact match SWX table")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Ajit Khaparde [Thu, 8 Jul 2021 22:09:18 +0000 (15:09 -0700)]
net/bnxt: fix build
Fix build failures seen on Fedora Core 34 (GCC 11)
because of uninitialized variables.
In function ‘ulp_mapper_index_tbl_process’:
drivers/net/bnxt/tf_ulp/ulp_mapper.c:2252:43: error:
‘*(unsigned int *)((char *)&glb_res + offsetof(struct bnxt_ulp_glb_resource_info, resource_func))’
may be used uninitialized in this function
2252 | struct bnxt_ulp_glb_resource_info glb_res;
| ^~~~~~~
drivers/net/bnxt/tf_ulp/ulp_mapper.c:2252:43: error:
‘glb_res.resource_type’ may be used uninitialized in this function
In function ‘dpool_defrag’:
drivers/net/bnxt/tf_core/dpool.c:95:18: error:
‘index’ may be used uninitialized in this function
95 | uint32_t index;
| ^~~~~
Fixes: 05b405d58148 ("net/bnxt: add dpool allocator for EM allocation")
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Chengwen Feng [Mon, 28 Jun 2021 02:57:51 +0000 (10:57 +0800)]
net/hns3: fix Arm SVE build with GCC 8.3
If the target machine has the SVE feature (e.g. '-march=armv8.2-a+sve')
and the compiler is gcc-8.3, the build fails with the error: arm_sve.h:
No such file or directory.
The solution:
a. If RTE_HAS_SVE_ACLE is defined (meaning the minimum instruction set
supports the SVE ACLE), compile it.
b. Else, if the compiler supports the SVE ACLE, compile it.
c. Otherwise, don't compile it.
Fixes: 8c25b02b082a ("net/hns3: fix enabling SVE Rx/Tx")
Fixes: 952ebacce4f2 ("net/hns3: support SVE Rx")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Chengwen Feng [Mon, 28 Jun 2021 02:57:50 +0000 (10:57 +0800)]
config/arm: fix SVE build with GCC 8.3
If the target machine has the SVE feature (e.g. '-march=armv8.2-a+sve')
and the compiler is gcc-8.3, it produces this error:
In file included from lib/eal/common/eal_common_options.c:38:
lib/eal/arm/include/rte_vect.h:13:10: fatal error:
arm_sve.h: No such file or directory
#include <arm_sve.h>
^~~~~~~~~~~
The root cause is that gcc-8.3 supports SVE (the macro
__ARM_FEATURE_SVE is 1) but doesn't support the SVE ACLE [1].
The solution:
a) Detect whether the compiler supports the SVE ACLE; if so, define the
RTE_HAS_SVE_ACLE macro.
b) Use the RTE_HAS_SVE_ACLE macro to guard inclusion of the SVE header
file.
[1] ACLE: Arm C Language Extensions, the SVE ACLE header file is
<arm_sve.h>, user should include it when writing ACLE SVE code.
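The guarded include then looks roughly like this (a sketch; the exact
conditions in rte_vect.h may differ):

	#if defined(__ARM_FEATURE_SVE) && defined(RTE_HAS_SVE_ACLE)
	#include <arm_sve.h>
	#endif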
Fixes: 67b68824a82d ("lpm/arm: support SVE")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Ruifeng Wang [Wed, 7 Jul 2021 05:48:38 +0000 (13:48 +0800)]
ring: use WFE to wait for tail update on aarch64
Instead of polling for the tail to be updated, use the WFE instruction.
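Conceptually, a polling loop on the tail is replaced by the
wait-until-equal helper, which maps to WFE on aarch64 (a sketch of the
pattern, not the verbatim ring code):

	/* before: busy-poll until the preceding update completes */
	while (__atomic_load_n(&ht->tail, __ATOMIC_ACQUIRE) != old_val)
		rte_pause();

	/* after: sleep on the cache line with WFE where available */
	rte_wait_until_equal_32(&ht->tail, old_val, __ATOMIC_ACQUIRE);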
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Gavin Hu [Wed, 7 Jul 2021 05:48:37 +0000 (13:48 +0800)]
spinlock: use WFE to reduce contention on aarch64
When acquiring a spinlock, cores repeatedly poll the lock variable.
This polling is replaced by the rte_wait_until_equal API.
Running micro benchmarks and testpmd and l3fwd traffic tests
on ThunderX2, Ampere eMAG80 and Arm N1SDP, everything went well and
no notable performance gain or degradation was measured.
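The acquisition path then waits on the lock word instead of spinning on
the compare-and-swap (a sketch of the pattern used):

	static inline void
	rte_spinlock_lock(rte_spinlock_t *sl)
	{
		int exp = 0;

		while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
			/* sleep until the lock looks free, then retry */
			rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
					0, __ATOMIC_RELAXED);
			exp = 0;
		}
	}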
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:11 +0000 (16:08 +0000)]
net/af_xdp: support power monitoring
Implement support for .get_monitor_addr in AF_XDP driver.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:17 +0000 (16:08 +0000)]
examples/l3fwd-power: support multiqueue ethdev power management
Currently, l3fwd-power enforces the limitation of having one queue per
lcore. This is no longer necessary, so remove the limitation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:16 +0000 (16:08 +0000)]
power: support monitoring multiple Rx queues
Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:15 +0000 (16:08 +0000)]
power: support callbacks for multiple Rx queues
Currently, there is a hard limitation on the PMD power management
support that only allows it to support a single queue per lcore. This is
not ideal as most DPDK use cases will poll multiple queues per core.
The PMD power management mechanism relies on ethdev Rx callbacks, so it
is very difficult to implement such support because callbacks are
effectively stateless and have no visibility into what the other ethdev
devices are doing. This places limitations on what we can do within the
framework of Rx callbacks, but the basics of this implementation are as
follows:
- Replace per-queue structures with per-lcore ones, so that any device
polled from the same lcore can share data
- Any queue that is going to be polled from a specific lcore has to be
added to the list of queues to poll, so that the callback is aware of
other queues being polled by the same lcore
- Both the empty poll counter and the actual power saving mechanism are
shared between all queues polled on a particular lcore, and are only
activated when all queues in the list have been polled and were determined
to have no traffic.
- The limitation on UMWAIT-based polling is not removed because UMWAIT
is incapable of monitoring more than one address.
Also, while we're at it, update and improve the docs.
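From the application side, managing several queues on one lcore is just
repeated calls to the existing enable API, e.g. (port/queue numbers are
illustrative):

	/* both queues are polled by lcore 1 and share one power state */
	rte_power_ethdev_pmgmt_queue_enable(1, 0 /* port */, 0 /* queue */,
			RTE_POWER_MGMT_TYPE_MONITOR);
	rte_power_ethdev_pmgmt_queue_enable(1, 1 /* port */, 0 /* queue */,
			RTE_POWER_MGMT_TYPE_MONITOR);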
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:14 +0000 (16:08 +0000)]
power: make ethdev power management thread unsafe
Currently, we expect that only one callback can be active at any given
moment, for a particular queue configuration, which is relatively easy
to implement in a thread-safe way. However, we're about to add support
for multiple queues per lcore, which will greatly increase the
possibility of various race conditions.
We could have used something like an RCU for this use case, but absent
a pressing need for thread safety, we'll go the easy way and just
mandate that the APIs are to be called when all affected ports are
stopped, and document this limitation. This greatly simplifies the
`rte_power_monitor`-related code.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:13 +0000 (16:08 +0000)]
eal: add power monitor for multiple events
Use RTM and WAITPKG instructions to perform a wait-for-writes similar to
what UMWAIT does, but without the limitation of having to listen for
just one event. This works because the optimized power state used by the
TPAUSE instruction will cause a wake up on RTM transaction abort, so if
we add the addresses we're interested in to the read-set, any write to
those addresses will wake us up.
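The new intrinsic accepts an array of monitor conditions (a usage sketch;
filling the conditions via each driver's .get_monitor_addr is omitted, and
timeout_cycles is an application-chosen deadline):

	struct rte_power_monitor_cond pmc[2];

	/* pmc[0] and pmc[1] are filled in by the ethdev drivers */
	rte_power_monitor_multi(pmc, 2, rte_rdtsc() + timeout_cycles);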
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:10 +0000 (16:08 +0000)]
eal: use callbacks for power monitoring comparison
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.
Existing implementations are adjusted to follow the new semantics.
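A PMD's .get_monitor_addr can then fill the condition roughly like this
(DESC_DONE_BIT and the ring layout are illustrative):

	static int
	rxq_monitor_callback(const uint64_t value, const uint64_t arg[4])
	{
		/* return 0 to allow sleeping, non-zero to abort */
		return (value & DESC_DONE_BIT) != arg[0] ? -1 : 0;
	}

	pmc->addr = &rxq->ring[next].status;
	pmc->fn = rxq_monitor_callback;
	pmc->opaque[0] = 0;		/* expected "not done" value */
	pmc->size = sizeof(uint64_t);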
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Anatoly Burakov [Fri, 9 Jul 2021 16:08:12 +0000 (16:08 +0000)]
doc: add power management to NIC features
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Phil Yang [Wed, 7 Jul 2021 13:25:43 +0000 (15:25 +0200)]
doc: add aarch32 build guidance
Add cross-compiling guidance for 32-bit aarch32 DPDK on aarch64 host.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:42 +0000 (15:25 +0200)]
config/arm: add aarch32 cross-compilation
Create the meson cross file arm32_armv8a_linux_gcc. Use the
arm-linux-gnueabihf- toolchain, which comes with standard packages on
most commonly used systems, such as Ubuntu and CentOS.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:41 +0000 (15:25 +0200)]
config/arm: add aarch32
Add the aarch32 armv8 SoC to the build config.
Also modify how the Arm flags are updated in the meson build: for a
32-bit build, update them only when cross-compiling.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:40 +0000 (15:25 +0200)]
eal/arm: update CPU flags
There are two execution states on the armv8 architecture, aarch64 and
aarch32. Add PLATFORM_STR for the latter and update the RTE_ARCH_* flags
according to commit e9b97392640.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>