Huisong Li [Sat, 10 Jul 2021 01:58:34 +0000 (09:58 +0800)]
net/hns3: support multiple TC MAC pause
MAC PAUSE can take effect on a single TC or on multiple TCs, depending on
the hardware. For example, the Kunpeng 920 supports MAC pause on a single
TC, and the Kunpeng 930 supports MAC pause on multiple TCs. This patch adds
support for MAC PAUSE on multiple TCs for such hardware.
Signed-off-by: Huisong Li <lihuisong@huawei.com> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Due to a HW limitation for VF, the VLAN filter is enabled by default and
cannot be disabled. This limitation has been removed in the Kunpeng 930
network engine, so this patch adds support for the VF to modify the VLAN
filter state.
A capability bit is added to differentiate between platforms and preserve
compatibility. When the VF runs on an incompatible platform or with an
incompatible kernel-mode driver version, the VF behaves the same as before.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Some VF features depend on the PF, so the VF needs to know whether the
current PF supports them. Therefore, the final VF capability set is
composed of the hardware capability set and the PF capability set.
For compatibility reasons, the mailbox HNS3_MBX_GET_TCINFO has been
modified to obtain more basic information about the current PF, including
the communication interface version and the current PF capability set.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In function softnic_conn_init(), a block of memory is allocated as the
connection buffer, but it is never freed in softnic_conn_free(), which
causes a memory leak.
Add support for MSI-X interrupt vectors to the vmxnet3 driver.
This allows more efficient deployments in cloud environments.
By default the driver tries to allocate one vector (0) for link
events and one MSI-X vector for each Rx queue. To simplify things,
MSI-X is only enabled if the numbers of Tx and Rx queues are equal
(so that Tx/Rx share the same vector).
If for any reason vmxnet3 cannot enable MSI-X interrupt mode, it
falls back to the LSC-only mode.
Signed-off-by: Yong Wang <yongwang@vmware.com> Signed-off-by: Jochen Behrens <jbehrens@vmware.com>
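As a rough sketch of the allocation policy described above (the function
and names below are illustrative, not the vmxnet3 driver internals):

    #include <stdint.h>

    /* Illustrative only: number of MSI-X vectors the policy above would
     * request, or a signal to fall back to LSC-only mode. */
    static int
    msix_vectors_needed(uint16_t nb_rxq, uint16_t nb_txq)
    {
        /* MSI-X is only enabled when Tx and Rx queue counts match,
         * so each Tx/Rx pair shares one vector. */
        if (nb_rxq != nb_txq)
            return -1;      /* caller falls back to LSC-only mode */
        /* vector 0 for link events + one vector per Rx queue */
        return 1 + nb_rxq;
    }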
Martin Havlik [Tue, 22 Jun 2021 09:25:29 +0000 (11:25 +0200)]
net/bonding: check flow setting
The return value from bond_ethdev_8023ad_flow_set() is now checked
and an appropriate message is logged on error.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control") Cc: stable@dpdk.org Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz> Acked-by: Min Hu (Connor) <humin29@huawei.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Martin Havlik [Tue, 22 Jun 2021 09:25:28 +0000 (11:25 +0200)]
net/bonding: fix error message on flow verify
The return value is now saved to errval, and the log message on error
reports the correct function name, no longer uses q_id (which was out of
context), and uses the up-to-date errval.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control") Cc: stable@dpdk.org Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz> Acked-by: Min Hu (Connor) <humin29@huawei.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Set up the MSI-X interrupt, complete the PHY configuration and set the
device link speed to start the device. Disable the interrupt, stop the
hardware and clear the queues to stop the device.
In its current state, the API can overflow the user-passed buffer if a new
representor range appears between function calls.
To solve this problem, augment the representor info structure with the
numbers of allocated and initialized ranges. This way, users of this
structure can be sure they will not overrun the buffer.
Fixes: 85e1588ca72f ("ethdev: add API to get representor info") Cc: stable@dpdk.org Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> Reviewed-by: Xueming Li <xuemingl@nvidia.com>
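A hedged usage sketch of the augmented structure follows; the field names
nb_ranges_alloc/nb_ranges and the NULL-query convention are assumptions
based on the description above, not a definitive API reference.

    #include <stdlib.h>
    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    get_representor_info(uint16_t port_id,
                         struct rte_eth_representor_info **out)
    {
        /* first call is assumed to report how many ranges exist */
        int num = rte_eth_representor_info_get(port_id, NULL);
        if (num < 0)
            return num;

        struct rte_eth_representor_info *info =
            calloc(1, sizeof(*info) + num * sizeof(info->ranges[0]));
        if (info == NULL)
            return -ENOMEM;

        info->nb_ranges_alloc = num;  /* buffer capacity seen by the driver */
        num = rte_eth_representor_info_get(port_id, info);
        if (num < 0) {
            free(info);
            return num;
        }
        /* Even if new ranges appeared between the two calls, at most
         * nb_ranges_alloc entries are written; info->nb_ranges reports
         * how many were actually initialized. */
        *out = info;
        return 0;
    }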
Changpeng Liu [Wed, 19 May 2021 06:45:48 +0000 (14:45 +0800)]
eal: suppress error log on multi-process hotplug
It is a normal case that the primary process already owns a device
while the secondary process tries to attach it, so suppress the error
log here to exclude this case.
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
The PMD Power Management scheme currently has 3 modes:
scale, monitor and pause. However, it would be nice to
have a baseline mode for easy comparison of power savings
with and without these modes.
This patch adds a 'baseline' mode where PMD power
management is not enabled. Use --pmd-mgmt=baseline.
Signed-off-by: David Hunt <david.hunt@intel.com> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
A selector table is made up of groups of weighted members, with a
given member potentially part of several groups. The select operation
returns a member ID by first selecting a group based on an input group
ID and then selecting a member within that group based on hashing one
or several input header/meta-data fields. It is very useful for
implementing an ECMP/WCMP-enabled FIB or a load balancer. It is part
of the action selector described by the P4 Portable Switch
Architecture (PSA) specification.
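A minimal conceptual sketch of the select operation described above
(illustrative only, not the rte_swx_pipeline API; weights are modeled by
repeating member IDs within a group):

    #include <stdint.h>

    struct selector_group {
        uint32_t n_members;       /* number of weighted slots, > 0 */
        uint32_t member_id[64];   /* members repeated according to weight */
    };

    /* The group is chosen by the input group ID, the member within the
     * group by a hash over the selected header/meta-data fields. */
    static uint32_t
    selector_select(const struct selector_group *groups, uint32_t group_id,
                    uint64_t field_hash)
    {
        const struct selector_group *g = &groups[group_id]; /* assumed valid */
        return g->member_id[field_hash % g->n_members];
    }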
For more flexibility, the single monolithic table update command is
split into table entry add, table entry delete, table default entry
add, pipeline commit and pipeline abort.
The rte_swx_pipeline_table_entry_read() function is used to read from
a character string a table entry that is to be added to the table,
deleted from the table or set as the default entry of the table.
Addition needs both the match and the action part of the entry, deletion
ignores the action part, while the default set ignores the match part,
hence the need to make both the match and the action part optional.
The logic for skipping the match or the action part was broken, hence
the current fix.
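A conceptual illustration of which parts of the entry string each
operation needs (the enum and helper below are illustrative, not the
rte_swx API):

    enum table_op { OP_ADD, OP_DELETE, OP_DEFAULT };

    static int
    entry_parts_required(enum table_op op, int *need_match, int *need_action)
    {
        switch (op) {
        case OP_ADD:     *need_match = 1; *need_action = 1; break; /* both */
        case OP_DELETE:  *need_match = 1; *need_action = 0; break; /* action ignored */
        case OP_DEFAULT: *need_match = 0; *need_action = 1; break; /* match ignored */
        default:         return -1;
        }
        return 0;
    }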
Due to a typo, only 3 out of 4 keys in the bucket of the exact match
table were considered, which can result in valid keys being
incorrectly dropped from the table.
Fix build failures seen on Fedora Core 34 (GCC 11)
because of uninitialized variables.
In function ‘ulp_mapper_index_tbl_process’:
drivers/net/bnxt/tf_ulp/ulp_mapper.c:2252:43: error:
‘*(unsigned int *)((char *)&glb_res + offsetof(struct bnxt_ulp_glb_resource_info, resource_func))’
may be used uninitialized in this function
2252 | struct bnxt_ulp_glb_resource_info glb_res;
| ^~~~~~~
drivers/net/bnxt/tf_ulp/ulp_mapper.c:2252:43: error:
‘glb_res.resource_type’ may be used uninitialized in this function
In function ‘dpool_defrag’:
drivers/net/bnxt/tf_core/dpool.c:95:18: error:
‘index’ may be used uninitialized in this function
95 | uint32_t index;
| ^~~~~
Fixes: 05b405d58148 ("net/bnxt: add dpool allocator for EM allocation") Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Chengwen Feng [Mon, 28 Jun 2021 02:57:51 +0000 (10:57 +0800)]
net/hns3: fix Arm SVE build with GCC 8.3
If the target machine has the SVE feature (e.g. '-march=armv8.2-a+sve')
and the compiler is gcc-8.3, the build fails with the error: arm_sve.h:
No such file or directory.
The solution:
a. If RTE_HAS_SVE_ACLE is defined (meaning the minimum instruction set
supports SVE ACLE), then compile it.
b. Else, if the compiler supports SVE ACLE, then compile it.
c. Otherwise don't compile it.
Fixes: 8c25b02b082a ("net/hns3: fix enabling SVE Rx/Tx") Fixes: 952ebacce4f2 ("net/hns3: support SVE Rx") Cc: stable@dpdk.org Signed-off-by: Chengwen Feng <fengchengwen@huawei.com> Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Chengwen Feng [Mon, 28 Jun 2021 02:57:50 +0000 (10:57 +0800)]
config/arm: fix SVE build with GCC 8.3
If the target machine has the SVE feature (e.g. '-march=armv8.2-a+sve')
and the compiler is gcc-8.3, it produces this error:
In file included from lib/eal/common/eal_common_options.c:38:
lib/eal/arm/include/rte_vect.h:13:10: fatal error:
arm_sve.h: No such file or directory
#include <arm_sve.h>
^~~~~~~~~~~
The root cause is that gcc-8.3 supports SVE (the macro
__ARM_FEATURE_SVE is 1), but it doesn't support SVE ACLE [1].
The solution:
a) Detect whether the compiler supports SVE ACLE; if it does, define the
RTE_HAS_SVE_ACLE macro.
b) Use the RTE_HAS_SVE_ACLE macro to include the SVE header file.
[1] ACLE: Arm C Language Extensions; the SVE ACLE header file is
<arm_sve.h>, which users should include when writing SVE ACLE code.
Fixes: 67b68824a82d ("lpm/arm: support SVE") Cc: stable@dpdk.org Signed-off-by: Chengwen Feng <fengchengwen@huawei.com> Acked-by: Ruifeng Wang <ruifeng.wang@arm.com> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
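On the C side, the effect of solution (a)/(b) is roughly the following
guard (a sketch; RTE_HAS_SVE_ACLE is the macro introduced here, the
fallback flag name is illustrative):

    /* Include the SVE ACLE header only when the build system confirmed
     * the compiler actually provides it; otherwise the SVE code path is
     * not compiled at all. */
    #if defined(RTE_HAS_SVE_ACLE)
    #include <arm_sve.h>
    #define VECT_SVE_ENABLED 1   /* illustrative flag for the SVE path */
    #endif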
Ruifeng Wang [Wed, 7 Jul 2021 05:48:38 +0000 (13:48 +0800)]
ring: use WFE to wait for tail update on aarch64
Instead of polling for the tail to be updated, use the WFE instruction.
Signed-off-by: Gavin Hu <gavin.hu@arm.com> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com> Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Gavin Hu [Wed, 7 Jul 2021 05:48:37 +0000 (13:48 +0800)]
spinlock: use WFE to reduce contention on aarch64
In acquiring a spinlock, cores repeatedly poll the lock variable.
This is replaced by the rte_wait_until_equal API.
Running micro-benchmarks and testpmd and l3fwd traffic tests
on ThunderX2, Ampere eMAG80 and Arm N1SDP, everything went well
and no notable performance gain or degradation was measured.
Signed-off-by: Gavin Hu <gavin.hu@arm.com> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com> Reviewed-by: Phil Yang <phil.yang@arm.com> Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> Tested-by: Pavan Nikhilesh <pbhagavatula@marvell.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
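A hedged sketch of the idea, assuming the rte_wait_until_equal_32()
helper from rte_pause.h; this illustrates the change rather than the
exact rte_spinlock_lock() implementation:

    #include <stdint.h>
    #include <rte_pause.h>
    #include <rte_spinlock.h>

    static inline void
    spinlock_lock_wfe(rte_spinlock_t *sl)
    {
        int exp = 0;

        while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
                        __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
            /* sleep until the owner writes 0; on aarch64 this maps
             * to WFE instead of a busy-poll loop */
            rte_wait_until_equal_32((volatile uint32_t *)&sl->locked, 0,
                        __ATOMIC_RELAXED);
            exp = 0;
        }
    }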
Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Tested-by: David Hunt <david.hunt@intel.com>
Currently, there is a hard limitation in the PMD power management
support that only allows a single queue per lcore. This is not ideal,
as most DPDK use cases poll multiple queues per core.
The PMD power management mechanism relies on ethdev Rx callbacks, so it
is very difficult to implement such support because callbacks are
effectively stateless and have no visibility into what the other ethdev
devices are doing. This places limitations on what we can do within the
framework of Rx callbacks, but the basics of this implementation are as
follows:
- Replace per-queue structures with per-lcore ones, so that any device
polled from the same lcore can share data
- Any queue that is going to be polled from a specific lcore has to be
added to the list of queues to poll, so that the callback is aware of
other queues being polled by the same lcore
- Both the empty poll counter and the actual power saving mechanism are
shared between all queues polled on a particular lcore, and are only
activated when all queues in the list were polled and were determined
to have no traffic.
- The limitation on UMWAIT-based polling is not removed because UMWAIT
is incapable of monitoring more than one address.
Also, while we're at it, update and improve the docs.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Tested-by: David Hunt <david.hunt@intel.com>
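A hedged usage sketch under these new semantics, assuming the
rte_power_ethdev_pmgmt_queue_enable() API and PAUSE mode; the helper name
is illustrative and error handling is minimal:

    #include <stdint.h>
    #include <rte_power_pmd_mgmt.h>

    /* Register every queue polled by this lcore, so the per-lcore
     * callback knows about all of them. */
    static int
    enable_pmgmt_for_lcore(unsigned int lcore_id, const uint16_t *ports,
                           const uint16_t *queues, unsigned int nb_queues)
    {
        for (unsigned int i = 0; i < nb_queues; i++) {
            int ret = rte_power_ethdev_pmgmt_queue_enable(lcore_id,
                            ports[i], queues[i],
                            RTE_POWER_MGMT_TYPE_PAUSE);
            if (ret != 0)
                return ret;
        }
        /* power saving kicks in only after all registered queues were
         * polled and found to have no traffic */
        return 0;
    }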
Currently, we expect that only one callback can be active at any given
moment, for a particular queue configuration, which is relatively easy
to implement in a thread-safe way. However, we're about to add support
for multiple queues per lcore, which will greatly increase the
possibility of various race conditions.
We could have used something like RCU for this use case, but absent a
pressing need for thread safety we'll go the easy way and just
mandate that the APIs are to be called when all affected ports are
stopped, and document this limitation. This greatly simplifies the
`rte_power_monitor`-related code.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Tested-by: David Hunt <david.hunt@intel.com>
Use RTM and WAITPKG instructions to perform a wait-for-writes similar to
what UMWAIT does, but without the limitation of having to listen for
just one event. This works because the optimized power state used by the
TPAUSE instruction will cause a wake up on RTM transaction abort, so if
we add the addresses we're interested in to the read-set, any write to
those addresses will wake us up.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> Tested-by: David Hunt <david.hunt@intel.com>
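A conceptual sketch of the RTM + TPAUSE combination described above
(illustrative only, not the actual intrinsic implementation; the deadline
is a raw TSC value and the compiler must support the rtm/waitpkg targets):

    #include <stdint.h>
    #include <immintrin.h>

    __attribute__((target("rtm,waitpkg")))
    static void
    wait_on_many(const volatile uint64_t * const *addrs, unsigned int n,
                 uint64_t tsc_deadline)
    {
        if (_xbegin() == _XBEGIN_STARTED) {
            /* Read every monitored address so it enters the transaction
             * read-set; any remote write now aborts the transaction. */
            for (unsigned int i = 0; i < n; i++)
                (void)*addrs[i];
            /* Enter the optimized power state; an abort (i.e. a write to
             * a monitored address) also wakes the core up. */
            _tpause(0, tsc_deadline);
            _xend();
        }
        /* On abort, execution resumes here and the caller re-checks. */
    }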
eal: use callbacks for power monitoring comparison
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.
Existing implementations are adjusted to follow the new semantics.
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Tested-by: David Hunt <david.hunt@intel.com> Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
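A hypothetical illustration of the callback-based comparison; the type,
prototype and return convention below are assumptions made for the
sketch, not the DPDK definitions:

    #include <stdint.h>

    /* return non-zero if entering the power optimized state must be
     * aborted (hypothetical convention) */
    typedef int (*monitor_check_t)(uint64_t cur_value,
                                   const uint64_t *opaque);

    /* e.g. a driver checking a descriptor status bit */
    static int
    desc_done_check(uint64_t cur_value, const uint64_t *opaque)
    {
        const uint64_t dd_mask = opaque[0];  /* driver-chosen mask */
        return (cur_value & dd_mask) != 0;   /* new packet -> abort sleep */
    }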
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:42 +0000 (15:25 +0200)]
config/arm: add aarch32 cross-compilation
Create the meson cross file arm32_armv8a_linux_gcc. Use the
arm-linux-gnueabihf- toolchain, which comes with standard packages on the
most used systems, such as Ubuntu and CentOS.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech> Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:40 +0000 (15:25 +0200)]
eal/arm: update CPU flags
There are two execution states on armv8 architecture, aarch64 and
aarch32. Add PLATFORM_STR for the latter and update RTE_ARCH_* flags
according to e9b97392640.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Juraj Linkeš [Wed, 7 Jul 2021 13:25:39 +0000 (15:25 +0200)]
net/virtio: fix aarch32 build
The NEON vector path of the PMD needs aarch64 support, but it was
enabled for the aarch32 build as well because the aarch32 build has
cpu_family set to aarch64, so the aarch32 build fails due to
unsupported intrinsics.
Fix the aarch32 build by updating the meson file to exclude the NEON
vector implementation for aarch32.
Fixes: 749799482a72 ("net/virtio: add to meson build") Cc: stable@dpdk.org Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ruifeng Wang [Wed, 7 Jul 2021 13:25:38 +0000 (15:25 +0200)]
net/bnxt: fix aarch32 build
The NEON vector path of the PMD needs aarch64 support, but it was
enabled for the aarch32 build as well because the aarch32 build has
cpu_family set to aarch64, so the aarch32 build fails due to
unsupported intrinsics.
Fix the aarch32 build by updating the meson file to exclude the NEON
vector implementation for aarch32.
Fixes: 398358341419 ("net/bnxt: support NEON") Cc: stable@dpdk.org Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com> Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Ruifeng Wang [Wed, 7 Jul 2021 13:25:37 +0000 (15:25 +0200)]
net/sfc: fix aarch32 build
The sfc PMD was enabled for aarch32, which is a 32-bit mode but has
cpu_family set to aarch64.
As sfc supports only 64-bit systems, it should be disabled for aarch32.
Update the meson file to disable sfc for the aarch32 build.
Fixes: 141d2870675a ("net/sfc: support aarch64 architecture") Cc: stable@dpdk.org Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Ferruh Yigit [Thu, 24 Jun 2021 13:32:16 +0000 (14:32 +0100)]
kni: update link only on change
'rte_kni_update_link()' updates the virtual KNI interface link using the
kernel sysfs interface.
If the requested link status is the same as the current interface link
status, do not update the link status but return success.
Nick Connolly [Mon, 26 Apr 2021 10:07:32 +0000 (11:07 +0100)]
build: support drivers symlink on Windows
The symlink-drivers-solibs.sh script was disabled as part of 'install'
for Windows because there is no support for shell scripts. However,
this means that driver related DLLs are not present in the installed
'libdir' directory. Add a python script to perform the install and use
it for Windows if the version of meson supports using an external
program with add_install_script (>= 0.55.0).
On Windows, symbolic links are somewhat problematic since the
SeCreateSymbolicLinkPrivilege is required to be able to create them.
In addition, different cross-compilation environments handle symbolic
links differently, e.g. WSL, Msys2, Cygwin. Rather than trying to
distinguish these scenarios, the python script will perform a file copy
for any Windows specific names.
On Windows, the shared library outputs have different names depending
upon which toolset has been used to build them. The script currently
handles Clang and GCC.
On Linux the functionality is unchanged, but could be replaced with the
python script once the required minimum version of meson is >= 0.55.0.
Cc: stable@dpdk.org Signed-off-by: Nick Connolly <nick.connolly@mayadata.io> Tested-by: Narcisa Vasile <navasile@linux.microsoft.com> Acked-by: Narcisa Vasile <navasile@linux.microsoft.com> Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
On Arm platforms, the value in "/sys/.../cpuinfo_cur_freq" may not
be exactly the same as what was set when using the CPPC cpufreq driver.
For other cpufreq drivers, there is no need to round it currently;
otherwise this check will fail with turbo enabled. For example, with
acpi_cpufreq, cpuinfo_cur_freq can be 2401000, which is equal to freqs[0].
It should not be rounded to 2400000.
Fixes: 606a234c6d360 ("test/power: round CPU frequency to check") Cc: stable@dpdk.org Signed-off-by: Richael Zhuang <richael.zhuang@arm.com> Acked-by: David Hunt <david.hunt@intel.com>
Currently in DPDK only the acpi_cpufreq and pstate_cpufreq drivers are
supported, and neither is available on arm64 platforms. Add
support for the cppc_cpufreq driver, which works on most arm64 platforms.
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com> Acked-by: David Hunt <david.hunt@intel.com>
Dmitry Kozlyuk [Wed, 30 Jun 2021 16:22:35 +0000 (19:22 +0300)]
doc: fix build on Windows with Meson 0.58
The `doc` target used `echo` as its command.
On Windows, `echo` is always a shell built-in, there is no binary.
Starting from meson 0.58, `run_target()` always searches for command
executable and no longer accepts `echo` as such on Windows.
Replace plain `echo` with a Python one-liner.
Fixes: d02a2dab2dfb ("doc: support building HTML guides with meson") Cc: stable@dpdk.org Reported-by: Rob Scheepens <rob.scheepens@nutanix.com> Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> Acked-by: Luca Boccassi <bluca@debian.org> Acked-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Thomas Monjalon <thomas@monjalon.net>
Juraj Linkeš [Tue, 6 Jul 2021 09:44:28 +0000 (11:44 +0200)]
build: use platform for generic and native builds
The current meson option 'machine' should only specify the ISA, which is
not sufficient for Arm, where setting ISA implies other settings as well
(and is used in Arm configuration as such).
Use the existing 'platform' meson option to differentiate the type of
the build (native/generic) and set ISA accordingly, unless the user
chooses to override it with a new option, 'cpu_instruction_set'.
The 'machine' option set the ISA in x86 builds and set native/default
'build type' in aarch64 builds. These two new variables, 'platform' and
'cpu_instruction_set', now properly set both ISA and build type for all
architectures in a uniform manner.
The 'machine' option also doesn't describe very well what it sets. The
new option, 'cpu_instruction_set', is much more descriptive. Keep
'machine' for backwards compatibility.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Anoob Joseph [Fri, 9 Jul 2021 06:28:43 +0000 (11:58 +0530)]
crypto/cnxk: add PCI ID for cn9k
Add PCI ID for the crypto_cn9k PMD.
To avoid a conflicting PCI ID in the crypto_octeontx2 and crypto_cn9k PMDs,
disable the crypto_cn9k PMD when built with the octeontx2 config.
The lack of a PCI ID causes the debug build to fail on Ubuntu 18.04
for the crypto_cn9k PMD.
Reported-by: Ali Alnubani <alialnu@nvidia.com> Suggested-by: David Marchand <david.marchand@redhat.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Joyce Kong [Tue, 6 Jul 2021 06:54:03 +0000 (01:54 -0500)]
net/i40e: fix descriptor scan on Arm
For Arm platforms, reading descs can get re-ordered, then the
status of DD bits will be discontinuous, so add the logic to
only process continuous descs by checking DD bits.
Fixes: 4861cde46116 ("i40e: new poll mode driver") Cc: stable@dpdk.org Signed-off-by: Joyce Kong <joyce.kong@arm.com> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
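A conceptual sketch of the contiguous-DD check (illustrative only, not
the i40e vector code):

    #include <stdint.h>

    /* Stop at the first descriptor whose DD bit is not set, so a
     * re-ordered read of a later "done" descriptor can never be consumed
     * ahead of an earlier "not done" one. */
    static uint16_t
    count_contiguous_done(const uint64_t *desc_status, uint16_t nb_scanned,
                          uint64_t dd_bit)
    {
        uint16_t nb_done = 0;

        for (uint16_t i = 0; i < nb_scanned; i++) {
            if ((desc_status[i] & dd_bit) == 0)
                break;              /* first gap ends the batch */
            nb_done++;
        }
        return nb_done;
    }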
Dapeng Yu [Wed, 16 Jun 2021 01:20:53 +0000 (09:20 +0800)]
net/ice: fix VXLAN flow director creation
In the original implementation, an error was returned when creating a
VXLAN flow director rule with SCTP or TCP as the layer 4 protocol of the
inner segment.
There are several root causes for the error:
1. ice_fdir_input_set_hdrs() set ICE_FLOW_SEG_HDR_UDP in the protocol
header flag of the inner segment of the VXLAN FDIR rule, even when it
should be ICE_FLOW_SEG_HDR_TCP or ICE_FLOW_SEG_HDR_SCTP.
2. ice_fdir_input_set_hdrs() set ICE_FLOW_SEG_HDR_VXLAN in the protocol
header flag of the segments of the VXLAN FDIR rule; this is not necessary
and can be set automatically by ice_flow_set_fld() later.
3. The flow type ICE_FLTR_PTYPE_NONF_IPV4_UDP_VXLAN hides the flow type of
the inner segment of the VXLAN FDIR rule, which further prevents the
function ice_fdir_get_gen_prgm_pkt() from writing the correct protocol id
into the inner segment of the training packet.
This patch fixes those defects described above.
Fixes: 855d23a07b36 ("net/ice: support VXLAN VNI field in flow director") Cc: stable@dpdk.org Signed-off-by: Dapeng Yu <dapengx.yu@intel.com> Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Dapeng Yu [Wed, 16 Jun 2021 01:20:52 +0000 (09:20 +0800)]
net/ice/base: fix VXLAN flow director creation
In the original implementation, an error was returned when creating a
VXLAN flow director rule with SCTP or TCP as the layer 4 protocol of the
inner segment.
There are several root causes for the error:
1. ice_fdir_udp4_vxlan_pkt[] is not adapted to the TCP and SCTP protocols.
Its length cannot hold a TCP header; only the UDP protocol was supported
in the original implementation.
2. The VXLAN VNI offset, 45, is inconsistent with IETF RFC 7348.
This patch fixes those defects described above.
Fixes: 608cd0a5e283 ("net/ice/base: support VXLAN VNI field in flow director") Cc: stable@dpdk.org Signed-off-by: Dapeng Yu <dapengx.yu@intel.com> Acked-by: Qi Zhang <qi.z.zhang@intel.com>
net/ice: support QoS bandwidth config after VF reset in DCF
When a VF reset happens, the QoS bandwidth configuration is lost. If
the reset is not caused by a DCB change, the DCF is supposed to replay the
bandwidth configuration to the VF. In this patch, when a VSI update
PF event is received from the PF after a VF reset, and it is confirmed
that DCB has not changed, the bandwidth configuration is replayed.
net/iavf: simplify flow director rules for IP fragment
This patch simplifies the pattern of FDIR flow rules for IP fragments.
A flow rule can be created by the following commands:
1. flow create 0 ingress pattern eth /
ipv4 fragment_offset spec 0x2000 fragment_offset mask 0x2000 /
end <actions>
2. flow create 0 ingress pattern eth / ipv6 /
ipv6_frag_ext fragment_offset spec 0x0001 fragment_offset mask 0x0001 /
end <actions>
net/ice: simplify flow director rules for IP fragment
This patch simplifies the pattern of FDIR flow rules for IP fragments.
A flow rule can be created by the following commands:
1. flow create 0 ingress pattern eth /
ipv4 fragment_offset spec 0x2000 fragment_offset mask 0x2000 /
end <actions>
2. flow create 0 ingress pattern eth / ipv6 /
ipv6_frag_ext fragment_offset spec 0x0001 fragment_offset mask 0x0001 /
end <actions>
testpmd> port attach 0000:5e:00.0
Attaching a new port...
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:5e:00.0 (socket 0)
ice_load_pkg(): failed to open file: /lib/firmware/intel/ice/ddp/ice.pkg
ice_dev_init(): Failed to load the DDP package,Use safe-mode-support=1 to
enter Safe Mode
EAL: Releasing PCI mapped resource for 0000:5e:00.0
EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x2200000000
EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x2202000000
EAL: Driver cannot attach the device (0000:5e:00.0)
EAL: Failed to attach device on primary process
testpmd: Failed to attach port 0000:5e:00.0
common/mlx5: fix compatibility with OFED port query API
The compilation flag HAVE_MLX5DV_DR_DEVX_PORT depends on the presence
of the mlx5dv_query_devx_port routine in the rdma-core library.
The mlx5dv_query_devx_port routine exists only in OFED versions
of the rdma-core library and is planned to be removed and replaced
with the Upstream-compatible mlx5dv_query_port.
As mlx5dv_query_devx_port is being removed, all the dependencies on
the HAVE_MLX5DV_DR_DEVX_PORT compilation flag are reconsidered.
The new compilation flag HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT is for
backward compatibility with older OFED versions.
Fixes: 6cfe84fbe7b1 ("net/mlx5: fix port action for LAG") Cc: stable@dpdk.org Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
In order to get E-Switch vport identifiers the mlx5 PMD relies
on two approaches:
[a] use port query API if it is provided by rdma-core library
[b] otherwise, deduce vport ids from the related VF index
The latter is not reliable and may not work with newer kernel
drivers and in some configurations (LAG), causing E-Switch
malfunction. Hence, engaging the port query API is highly
desirable.
Depending on rdma-core version the port query API is:
- very old OFED versions have no query API (approach [b])
- rdma-core OFED < 5.5 provides mlx5dv_query_devx_port,
HAVE_MLX5DV_DR_DEVX_PORT flag is defined (approach [a])
- rdma-core OFED >= 5.5 has mlx5dv_query_port, flag
HAVE_MLX5DV_DR_DEVX_PORT_V35 is defined (approach [a])
- future OFED versions might remove mlx5dv_query_devx_port
and HAVE_MLX5DV_DR_DEVX_PORT will not be defined
- Upstream rdma-core < v35 has no port query API (approach [b])
- Upstream rdma-core >= v35 has mlx5dv_query_port, flag
HAVE_MLX5DV_DR_DEVX_PORT_V35 is defined (approach [a])
In order to support the new mlx5dv_query_port routine, the
conditional compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35
is introduced by this patch. The flag HAVE_MLX5DV_DR_DEVX_PORT
is kept for compatibility with previous rdma-core versions.
Although this patch is not a bugfix (it follows the API variation
introduced in the underlying library), it resolves the compatibility
issue and is highly desirable to be ported to DPDK LTS.
Jiawei Wang [Tue, 6 Jul 2021 08:12:27 +0000 (11:12 +0300)]
net/mlx5: control flow rules with identical pattern
In order to allow/disallow configuring rules with identical
patterns, the new device argument 'allow_duplicate_pattern'
is introduced.
If allowed, such rules are inserted successfully and only the
first rule takes effect.
If disallowed, the first rule will be inserted and other rules
will be rejected.
The default is to allow.
Set it to 0 to disallow, for example:
-a <PCI_BDF>,allow_duplicate_pattern=0
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
When creating a hierarchy meter, its color rules increase the next
meter's reference count, so when destroying the hierarchy meter, the next
meter's reference count also needs to be decremented.
When flushing all meters of a port, all hierarchy meters and their
policies need to be destroyed first, to dereference the last meter in the
hierarchy. Then all meters have no references and can be destroyed.
When using a meter hierarchy with multiple meters, every meter may have a
drop counter, so a packet set to red color by one meter should be
counted to that specific meter only.
To support this, add a tag action in the color rule so a packet going to
the next meter carries its meter ID and is counted to the
correct drop counter in the drop table.
This makes the meter policy support the meter action, so multiple meters
can be chained as a meter hierarchy.
Only a termination meter is allowed as the last meter in a hierarchy,
and there are two cases:
1. The last meter has a non-RSS policy: the sub-policy and color rules
can be created directly during each meter's policy creation.
2. The last meter has an RSS policy: don't create sub-policies/rules when
creating the meter policy. Only when an RTE flow uses the meter hierarchy
will all meters of the hierarchy be iterated to create the needed sub-
policies and color rules for them.
Xiaoyu Min [Fri, 2 Jul 2021 08:34:48 +0000 (16:34 +0800)]
net/mlx5: limit inner RSS expansion for MPLS
If the user wants to do MPLS inner RSS and only provides a pattern
up to MPLS without inner items [1], RSS expansion will expand the flow
into 13 sub-flows [2], which is too many: it impacts the flow insertion
rate, and the stack usage becomes large as well.
This expansion into 13 sub-flows does not seem worthwhile, and it can
be significantly reduced (i.e., to 7 sub-flows [3]) by the user providing
at least one inner L2/L3 item [4].
[1]:
pattern eth / ipv4 / udp / mpls / end actions rss type tcp udp ip
end level 2 / end
[2]:
eth / ipv4 / udp / mpls
eth / ipv4 / udp / mpls / ipv4
eth / ipv4 / udp / mpls / ipv4 / udp
eth / ipv4 / udp / mpls / ipv4 / tcp
eth / ipv4 / udp / mpls / ipv6
eth / ipv4 / udp / mpls / ipv6 / udp
eth / ipv4 / udp / mpls / ipv6 / tcp
eth / ipv4 / udp / mpls / eth / ipv4
eth / ipv4 / udp / mpls / eth / ipv4 / udp
eth / ipv4 / udp / mpls / eth / ipv4 / tcp
eth / ipv4 / udp / mpls / eth / ipv6
eth / ipv4 / udp / mpls / eth / ipv6 / udp
eth / ipv4 / udp / mpls / eth / ipv6 / tcp
[3]:
eth / ipv4 / udp / mpls / eth
eth / ipv4 / udp / mpls / eth / ipv4 / udp
eth / ipv4 / udp / mpls / eth / ipv4 / tcp
eth / ipv4 / udp / mpls / eth / ipv6
eth / ipv4 / udp / mpls / eth / ipv6 / udp
eth / ipv4 / udp / mpls / eth / ipv6 / tcp
[4]:
pattern eth / ipv4 / udp / mpls / eth / end actions rss type tcp udp ip
level 2 / end
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
net/mlx5: fix offset calculation for modify field action
Offsets are not taken into account during MAC address
manipulation for the MODIFY_FIELD action. That leads to
a wrong split between bits 0-15 and 16-47 and corrupted
data being copied to/from MAC addresses. Use both the source
and destination offsets to calculate the proper modify
header action specification.
Fixes: fdd0c046f4 ("net/mlx5: fix modify field action order for MAC") Cc: stable@dpdk.org Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The MLX5 PMD supports the L3 and L4 integrity bits.
The L4 checksum-ok bit was not translated correctly.
The patch updates the l4_csum_ok integrity bit translation.
If there are many VFs, the Netlink message length sent by the kernel
in reply to an RTM_GETLINK request can be large. We should query
the size of the message being received in advance and allocate
a buffer large enough to handle these large messages.
Fixes: ccdcba53a3f4 ("net/mlx5: use Netlink to add/remove MAC addresses") Cc: stable@dpdk.org Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
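A hedged sketch of the approach: peek at the pending reply with
MSG_PEEK | MSG_TRUNC to learn its real length, then allocate a buffer of
that size before the actual receive (not the exact mlx5 code):

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static ssize_t
    nl_recv_sized(int nlsk_fd, void **out_buf)
    {
        struct iovec iov = { .iov_base = NULL, .iov_len = 0 };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

        /* with MSG_TRUNC the return value is the full datagram length,
         * even though nothing is copied out yet */
        ssize_t size = recvmsg(nlsk_fd, &msg, MSG_PEEK | MSG_TRUNC);
        if (size < 0)
            return size;

        void *buf = malloc(size);
        if (buf == NULL)
            return -1;

        iov.iov_base = buf;
        iov.iov_len = size;
        size = recvmsg(nlsk_fd, &msg, 0);   /* actual receive */
        if (size < 0) {
            free(buf);
            return size;
        }
        *out_buf = buf;
        return size;
    }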
Xiaoyu Min [Thu, 1 Jul 2021 05:54:56 +0000 (13:54 +0800)]
net/mlx5: fix match MPLS over GRE with key
Currently the PMD needs previous layer information in order to set the
corresponding match field for MPLSoGRE or MPLSoUDP.
GRE_KEY is missing as a supported previous layer when translating the
MPLS item, which causes flow [1] to fail to match MPLS-over-GRE traffic.
According to RFC 4023, MPLS over GRE tunnels with the optional key
field need to be supported too.
Adding the missing GRE_KEY as a supported previous layer fixes the problem.
[1]:
flow create 0 ingress pattern eth / ipv6 / gre k_bit is 1 / gre_key /
mpls label is 966138 / end actions queue index 1 / mark id 0xa / end
Fixes: a7a0365565a4 ("net/mlx5: match GRE key and present bits") Cc: stable@dpdk.org Signed-off-by: Xiaoyu Min <jackmin@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
Gregory Etelson [Wed, 30 Jun 2021 07:19:52 +0000 (10:19 +0300)]
net/mlx5: fix pattern expansion in RSS flow rules
A flow rule pattern may be implicitly expanded by the PMD if the rule
has an RSS flow action. The expansion adds network headers to the
original pattern. The new pattern lists all network levels that
participate in the rule's RSS action.
The patch validates that the buffer for the expanded pattern has enough
bytes for the new flow items.
Haifei Luo [Mon, 31 May 2021 02:22:08 +0000 (05:22 +0300)]
net/mlx5: add more details to flow dump
Currently the flow dump provides little information about actions
- just the pointers. Add implementations to display details for the
counter, modify_hdr and encap_decap actions.
For counters, the regular flow query operation is engaged and
the counter content is provided, including the hits
and bytes values. For the modify_hdr and encap_decap actions,
the information stored in the ipool objects is dumped.
These are the formats of the information presented in the dump:
Counter: rec_type,id,hits,bytes
Modify_hdr: rec_type,id,actions_number,actions
Encap_decap: rec_type,id,buf
Signed-off-by: Haifei Luo <haifeil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Currently, when creating a meter policy, a source port_id match item is
always added in the switch domain. So if one meter is used by another
port, it will not work correctly.
This issue is solved as follows:
1. If the policy fate action is port_id, add the source port_id match
item; the meter cannot be shared by another port.
2. If the policy fate action isn't port_id, don't add the source port_id
match; the meter can be shared by another port.
This fix enables one meter to be shared by different ports. A user can
create a meter flow using a port_id match item to make this meter
shared by other ports.
Fixes: afb4aa4f122 ("net/mlx5: support meter policy operations") Cc: stable@dpdk.org Signed-off-by: Shun Hao <shunh@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
The meter policy handlers are managed by user IDs, and the driver used an
L3 table to map the user ID to the internal driver handler of the
policy.
The L3 table was wrongly saved in the shared device structure, which
manages all the switch domain ports, and that made the user IDs shared
between different ethdev ports.
Move the policy L3 table to be per port by saving it in the port private
structure.
Fixes: afb4aa4f122 ("net/mlx5: support meter policy operations") Cc: stable@dpdk.org Signed-off-by: Shun Hao <shunh@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>