Jiawen Wu [Fri, 18 Dec 2020 09:36:42 +0000 (17:36 +0800)]
net/txgbe: add flow director filter init and uninit
Add flow director filter init and uninit operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:41 +0000 (17:36 +0800)]
net/txgbe: parse L2 tunnel filter
Check if the rule is an L2 tunnel rule, and get the L2 tunnel info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:40 +0000 (17:36 +0800)]
net/txgbe: support L2 tunnel filter add and delete
Support L2 tunnel filter add and delete operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:39 +0000 (17:36 +0800)]
net/txgbe: config L2 tunnel filter with e-tag
Configure the L2 tunnel filter with E-tag.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:38 +0000 (17:36 +0800)]
net/txgbe: add L2 tunnel filter init and uninit
Add L2 tunnel filter init and uninit.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:37 +0000 (17:36 +0800)]
net/txgbe: parse syn filter
Check if the rule is a TCP SYN rule, and get the SYN info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:36 +0000 (17:36 +0800)]
net/txgbe: support syn filter add and delete
Support add and delete operations on syn filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:35 +0000 (17:36 +0800)]
net/txgbe: parse ethertype filter
Check if the rule is an ethertype rule, and get the ethertype info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:34 +0000 (17:36 +0800)]
net/txgbe: support ethertype filter add and delete
Support add and delete operations on ethertype filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:33 +0000 (17:36 +0800)]
net/txgbe: parse n-tuple filter
Check if the rule is an n-tuple rule, and get the n-tuple info.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:32 +0000 (17:36 +0800)]
net/txgbe: support ntuple filter add and delete
Support add and delete operations on ntuple filter.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:31 +0000 (17:36 +0800)]
net/txgbe: add ntuple filter init and uninit
Add ntuple filter init and uninit.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Fri, 18 Dec 2020 09:36:30 +0000 (17:36 +0800)]
net/txgbe: add generic flow API
Introduce rte_flow with its validate, create, destroy and flush
operations into txgbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Andrew Boyer [Wed, 16 Dec 2020 21:12:57 +0000 (13:12 -0800)]
net/ionic: stop queues when LIF is stopped
Otherwise they cannot be restarted, because the FW will reject INIT
or ENA commands on queues which are already running.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:56 +0000 (13:12 -0800)]
net/ionic: improve queue state handling
Skip ionic_lif_[rxq|txq]_init() in queue start if it's already done.
Move ionic_lif_[rxq|txq]_deinit() from queue stop to queue release.
This allows the queues to be restarted.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:55 +0000 (13:12 -0800)]
net/ionic: improve link state handling
Add UP and FW_RESET state flags.
Update the stack info when the link state changes.
Convert set_link_up/set_link_down to lif_start/lif_stop.
Condition reported link state on UP flag.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:54 +0000 (13:12 -0800)]
net/ionic: complete release on close
ionic_dev_close() is responsible for destroying the ethdev, lif, and
adapter. eth_ionic_dev_remove() calls ionic_dev_close().
Remove-on-close is now required behavior for a PMD.
Remove the UNMAINTAINED flag.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:53 +0000 (13:12 -0800)]
net/ionic: remove multi-LIF support
This feature is unused, so remove it.
There is exactly one adapter / lif / ethdev per port.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:52 +0000 (13:12 -0800)]
net/ionic: preserve Rx mode across LIF stop/start
Otherwise, non-default settings (like PROMISC) get reset.
This will become important when link toggling is tied to LIF stop/start.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Andrew Boyer [Wed, 16 Dec 2020 21:12:51 +0000 (13:12 -0800)]
net/ionic: preserve RSS state unless RETA size changes
This preserves settings across a LIF stop/start.
This will become important when link toggling is tied to LIF stop/start.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Jiayu Hu [Mon, 11 Jan 2021 12:16:27 +0000 (07:16 -0500)]
vhost: enhance async enqueue for small packets
Async enqueue offloads large copies to DMA devices, and small copies
are still performed by the CPU. However, it requires users to retrieve
enqueue-completed packets via rte_vhost_poll_enqueue_completed(), even
when those packets were already completed by the CPU by the time
rte_vhost_submit_enqueue_burst() returned. This design incurs extra overhead
for tracking completed pktmbufs and extra function calls, degrading
performance on small packets.
This patch enhances async enqueue for small packets by enabling
rte_vhost_submit_enqueue_burst() to return completed packets.
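As an illustration only (not the exact library code), a minimal sketch of the
intended call flow, assuming the enhanced rte_vhost_submit_enqueue_burst()
returns CPU-completed packets through an extra output array; MAX_BURST,
free_pkts() and the surrounding variables are placeholders:
    #include <rte_vhost_async.h>
    struct rte_mbuf *done[MAX_BURST];
    uint32_t n_cpu_done = 0;
    /* Small copies may complete synchronously and are returned in 'done'. */
    uint16_t n_enq = rte_vhost_submit_enqueue_burst(vid, queue_id,
                                                    pkts, nb_pkts,
                                                    done, &n_cpu_done);
    free_pkts(done, n_cpu_done);     /* no need to poll for these anymore */
    /* DMA-offloaded (large) copies are still harvested asynchronously. */
    uint16_t n_dma_done = rte_vhost_poll_enqueue_completed(vid, queue_id,
                                                           done, MAX_BURST);
    free_pkts(done, n_dma_done);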
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jiayu Hu [Mon, 11 Jan 2021 12:16:26 +0000 (07:16 -0500)]
vhost: cleanup async enqueue
This patch removes unnecessary checks and function calls, changes internal
variables to more appropriate types, and fixes typos.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Maxime Coquelin [Fri, 8 Jan 2021 09:41:49 +0000 (10:41 +0100)]
net/virtio: fix memory init with vDPA backend
This patch fixes an overhead seen with the mlx5-vdpa kernel
driver, where for every page in the mapped area all the
memory tables get updated. For example, with 2MB hugepages,
a single IOTLB_UPDATE for a 1GB region causes 512 memory
updates on mlx5-vdpa side.
Using batching mode, the mlx5 driver will only trigger a
single memory update for all the IOTLB updates that happen
between the batch begin and batch end commands.
Fixes:
6b901437056e ("net/virtio: introduce vhost-vDPA backend")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Fri, 8 Jan 2021 09:41:48 +0000 (10:41 +0100)]
net/virtio: add missing backend features negotiation
This patch adds the missing backend features negotiation
in Vhost-vDPA. Without it, IOTLB messages v2 could be sent
by the Virtio-user PMD even though the backend does not support them.
Fixes:
6b901437056e ("net/virtio: introduce vhost-vDPA backend")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Karra Satwik [Sun, 20 Dec 2020 22:44:43 +0000 (04:14 +0530)]
net/cxgbe: accept VLAN flow items without ethertype
When apps pass RTE_FLOW_ITEM_TYPE_VLAN without setting the
ethertype field in RTE_FLOW_ITEM_TYPE_ETH, assume 0x8100
VLAN by default and don't reject the rule.
Fixes:
55f003d8884c ("net/cxgbe: support flow API for matching QinQ VLAN")
Cc: stable@dpdk.org
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
John Daley [Thu, 17 Dec 2020 01:37:15 +0000 (17:37 -0800)]
net/enic: remove deprecated flow director code
The Flow Director (FDIR) API was removed in release 20.11.
This patch removes the remainder of the FDIR code in the
PMD.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Selwin Sebastian [Wed, 6 Jan 2021 08:00:58 +0000 (13:30 +0530)]
net/axgbe: support reading FW version
Added support for the fw_version_get API.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Somalapuram Amaranath <asomalap@amd.com>
Xuan Ding [Fri, 8 Jan 2021 08:38:44 +0000 (08:38 +0000)]
net/ice: refactor PF RSS
This patch refactors the PF RSS code based on the below design:
1. ice_pattern_match_item->input_set_mask is the superset of
ETH_RSS_xxx.
2. ice_pattern_match_item->meta is the ice_rss_hash_cfg template.
3. ice_hash_parse_pattern will generate pattern hint.
4. ice_hash_parse_action will refine the ice_rss_hash_cfg based on
the pattern hint and rss_type.
5. The refine process includes:
1) refine protocol headers (VLAN/PPPoE/GTPU).
2) refine hash bit fields of L2, L3, L4.
3) refine hash bit fields for the GTPU header.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Ruifeng Wang [Tue, 12 Jan 2021 02:57:08 +0000 (02:57 +0000)]
config/arm: add Neoverse N2
Add Arm Neoverse N2 CPU support.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Ruifeng Wang [Tue, 12 Jan 2021 02:57:07 +0000 (02:57 +0000)]
common/octeontx2: fix build with SVE
Building with gcc 10.2 with the SVE extension enabled failed with errors:
{standard input}: Assembler messages:
{standard input}:4002: Error: selected processor does not support `mov z3.b,#0'
{standard input}:4003: Error: selected processor does not support `whilelo p1.b,xzr,x7'
{standard input}:4005: Error: selected processor does not support `ld1b z0.b,p1/z,[x8]'
{standard input}:4006: Error: selected processor does not support `whilelo p4.s,wzr,w7'
This is because the inline assembly code explicitly resets the CPU model to
one without SVE support, so SVE instructions generated by compiler
auto-vectorization were rejected by the assembler.
Added SVE to the CPU model specified by the inline assembly to restore SVE support.
The inline assembly is not replaced with C atomics because the driver relies
on specific LSE instructions to interface with the co-processor [1].
Fixes:
8a4f835971f5 ("common/octeontx2: add IO handling APIs")
Cc: stable@dpdk.org
[1] https://mails.dpdk.org/archives/dev/2021-January/196092.html
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Ruifeng Wang [Tue, 12 Jan 2021 02:57:06 +0000 (02:57 +0000)]
net/octeontx: fix build with SVE
Building with gcc 10.2 with the SVE extension enabled failed with errors:
{standard input}: Assembler messages:
{standard input}:91: Error: selected processor does not support `addvl x4,x8,#-1'
{standard input}:95: Error: selected processor does not support `ptrue p1.d,all'
{standard input}:135: Error: selected processor does not support `whilelo p2.d,xzr,x5'
{standard input}:137: Error: selected processor does not support `decb x1'
This is because the inline assembly code explicitly resets the CPU model to
one without SVE support, so SVE instructions generated by compiler
auto-vectorization were rejected by the assembler.
Added SVE to the CPU model specified by the inline assembly to restore SVE support.
The inline assembly is not replaced with C atomics because the driver relies
on specific LSE instructions to interface with the co-processor [1].
Fixes:
f0c7bb1bf778 ("net/octeontx/base: add octeontx IO operations")
Cc: stable@dpdk.org
[1] https://mails.dpdk.org/archives/dev/2021-January/196092.html
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Ruifeng Wang [Tue, 12 Jan 2021 02:57:05 +0000 (02:57 +0000)]
net/hns3: fix build with SVE
Building with the SVE extension enabled stopped with an error:
error: ACLE function ‘svwhilelt_b64_s32’ requires ISA extension ‘sve’
18 | #define PG64_256BIT svwhilelt_b64(0, 4)
This is caused by an unintentional cflags reset.
Fixed the issue by not touching cflags and using the flags defined by
the compiler.
Fixes:
952ebacce4f2 ("net/hns3: support SVE Rx")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Ruifeng Wang [Tue, 12 Jan 2021 02:57:04 +0000 (02:57 +0000)]
lpm/arm: support SVE
Added a new path to do LPM4 lookup using the scalable vector extension (SVE).
The SVE path is selected if the compiler has the SVE flag set.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Ruifeng Wang [Thu, 14 Jan 2021 06:59:25 +0000 (06:59 +0000)]
test: improve coverage on LPM tbl8
Existing test cases create 256 tbl8 groups for testing. That number covers
only the 8-bit next_hop/group field. Since the next_hop/group field has been
extended to 24 bits, creating more than 256 groups in tests can improve
the coverage.
Coverage was not expanded to reach the max supported group number, because
it would take too much time to run for this fast-test.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Ruifeng Wang [Thu, 14 Jan 2021 06:59:22 +0000 (06:59 +0000)]
lpm: fix vector IPv4 lookup
rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
groups are created. This is caused by incorrect type casting of the tbl8
group index stored in the tbl24 entry. The cast truncated the group
index, so the wrong tbl8 group was searched.
The issue is fixed by applying the proper mask to the tbl24 entry to get the
tbl8 group index.
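For reference, a minimal usage sketch of the affected lookup (the fix itself
is internal to the library; packing the addresses through rte_xmm_t is just
one way to build the vector argument):
    #include <stdint.h>
    #include <rte_lpm.h>
    #include <rte_vect.h>
    /* Look up four IPv4 addresses in one call; misses return the default hop. */
    static void
    lookup_four(const struct rte_lpm *lpm, const uint32_t ip[4], uint32_t hop[4])
    {
        rte_xmm_t ips;
        ips.u32[0] = ip[0];
        ips.u32[1] = ip[1];
        ips.u32[2] = ip[2];
        ips.u32[3] = ip[3];
        rte_lpm_lookupx4(lpm, ips.x, hop, UINT32_MAX /* default next hop */);
    }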
Fixes:
dc81ebbacaeb ("lpm: extend IPv4 next hop field")
Fixes:
cbc2f1dccfba ("lpm/arm: support NEON")
Fixes:
d2cc7959342b ("lpm: add AltiVec for ppc64")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Vladimir Medvedkin [Tue, 15 Dec 2020 18:25:19 +0000 (18:25 +0000)]
fib6: improve AVX512 lookup performance
Improved performance of the AVX512 FIB6 lookup by doubling the number
of flows being processed.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Dmitry Kozlyuk [Tue, 12 Jan 2021 00:36:02 +0000 (03:36 +0300)]
build: fix linker flags on Windows
The --export-dynamic linker option is only applicable to ELF.
On Windows, where COFF is used, it causes warnings:
x86_64-w64-mingw32-ld: warning: --export-dynamic is not supported
for PE+ targets, did you mean --export-all-symbols? (MinGW)
LINK : warning LNK4044: unrecognized option '/-export-dynamic';
ignored (clang)
Don't add --export-dynamic on Windows anywhere.
Fixes:
b031e13d7f0d ("build: fix plugin load on static build")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
Eugeny Parshutin [Wed, 2 Dec 2020 17:48:06 +0000 (20:48 +0300)]
doc: add vtune profiling config to prog guide
Restore the 'profiling with VTune' section in the profiling programmer's
guide, with updated instructions on how to enable VTune profiling
with a meson configuration option.
Fixes:
89c67ae2cba7 ("doc: remove references to make from prog guide")
Cc: stable@dpdk.org
Signed-off-by: Eugeny Parshutin <eugeny.parshutin@linux.intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Thomas Monjalon [Wed, 2 Dec 2020 17:15:21 +0000 (18:15 +0100)]
devtools: adjust verbosity of ABI check
The scripts gen-abi.sh and check-abi.sh are updated
to print error messages to stderr so they are likely never ignored.
When called from test-meson-builds.sh, the standard messages on stdout
can be more quiet depending on the verbosity settings.
The beginning of the ABI check is announced in verbose mode.
The commands are printed in very verbose mode.
The check result details are available in verbose mode.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Ophir Munk [Sun, 10 Jan 2021 11:10:23 +0000 (11:10 +0000)]
app/regex: measure performance with precise clock
Performance measurements (elapsed time and Gbps) were based on the Linux
clock() API. The resolution is improved by replacing the clock() API
with the rte_rdtsc_precise() API.
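A minimal sketch of TSC-based timing with these APIs (illustrative, not the
app's exact code):
    #include <stdint.h>
    #include <rte_cycles.h>
    /* Measure elapsed seconds around a work function using the precise TSC. */
    static double
    measure_seconds(void (*work)(void *), void *arg)
    {
        uint64_t start = rte_rdtsc_precise();   /* serializing TSC read */
        work(arg);
        return (double)(rte_rdtsc_precise() - start) / rte_get_tsc_hz();
    }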
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ophir Munk [Sun, 10 Jan 2021 11:10:22 +0000 (11:10 +0000)]
app/regex: measure performance per queue pair
Up to this commit, the parsing elapsed time and gigabits-per-second
performance were measured on the aggregation of all QPs (per core).
This commit separates the time measurements per individual QP.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ophir Munk [Sun, 10 Jan 2021 11:10:21 +0000 (11:10 +0000)]
app/regex: support multiple cores
Up to this commit the regex application was running with multiple QPs on
a single core. This commit adds the option to specify a number of cores
on which multiple QPs will run.
A new parameter 'nb_lcores' was added to configure the number of cores:
--nb_lcores <num of cores>.
If not configured, the number of cores is set to 1 by default. On
application startup a few initial steps are performed by the main core: the
numbers of QPs and cores are parsed, the QPs are distributed as evenly
as possible across the cores, and the regex device and all QPs are initialized.
The data file is read and saved in a buffer. Then for each core the
application calls rte_eal_remote_launch() with the worker routine
(run_regex) as its parameter.
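A hedged sketch of this launch pattern; run_regex is the worker named above,
while qp_args[] is a hypothetical per-core argument array:
    #include <rte_eal.h>
    #include <rte_lcore.h>
    extern int run_regex(void *arg);          /* worker routine */
    extern void *qp_args[RTE_MAX_LCORE];      /* hypothetical per-core QP set */
    unsigned int lcore_id;
    /* Launch the worker on every configured worker core... */
    RTE_LCORE_FOREACH_WORKER(lcore_id)
        rte_eal_remote_launch(run_regex, qp_args[lcore_id], lcore_id);
    /* ...and wait for all of them to finish. */
    RTE_LCORE_FOREACH_WORKER(lcore_id)
        rte_eal_wait_lcore(lcore_id);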
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ophir Munk [Sun, 10 Jan 2021 11:10:20 +0000 (11:10 +0000)]
app/regex: read data file once at startup
Up to this commit the input data file was read from scratch for each QP,
which is redundant. Starting from this commit the data file is read only
once at startup. Each QP will clone the data.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ophir Munk [Sun, 10 Jan 2021 11:10:19 +0000 (11:10 +0000)]
app/regex: support multiple queue pairs
Up to this commit the regex application used one QP which was assigned a
number of jobs, each with a different segment of a file to parse. This
commit adds support for multiple QP assignments. All QPs are
assigned the same number of jobs, with the same segments of the file to
parse. This enables comparing functionality with different numbers of
QPs. All queues are managed on one core with one thread. This commit
focuses on changing the routines' API to support multiple QPs; mainly, QP scalar
variables are replaced by per-QP struct instances. The enqueue/dequeue
operations are interleaved as follows:
enqueue(QP #1)
enqueue(QP #2)
...
enqueue(QP #n)
dequeue(QP #1)
dequeue(QP #2)
...
dequeue(QP #n)
A new parameter 'nb_qps' was added to configure the number of QPs:
--nb_qps <num of qps>.
If not configured, nb_qps is set to 1 by default.
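A hedged sketch of the interleaved bursts with the regexdev API; the qp[]
array and its fields stand in for the per-QP struct mentioned above:
    #include <rte_regexdev.h>
    uint16_t qp_id;
    /* Enqueue a burst on every QP first... */
    for (qp_id = 0; qp_id < nb_qps; qp_id++)
        qp[qp_id].nb_enq += rte_regexdev_enqueue_burst(dev_id, qp_id,
                                                       qp[qp_id].ops, nb_ops);
    /* ...then dequeue from every QP. */
    for (qp_id = 0; qp_id < nb_qps; qp_id++)
        qp[qp_id].nb_deq += rte_regexdev_dequeue_burst(dev_id, qp_id,
                                                       qp[qp_id].ops, nb_ops);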
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ophir Munk [Sun, 10 Jan 2021 11:10:18 +0000 (11:10 +0000)]
app/regex: move mempool creation to worker routine
The rte_pktmbuf_pool_create() call is moved from the init_port() routine to
the run_regex() routine. Looking forward to multi-core support, init_port()
will be called only once as part of application startup, while mempool
creation should happen multiple times (once per core).
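A hedged sketch of per-worker mempool creation (NB_MBUFS and CACHE_SIZE are
placeholders):
    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    char name[RTE_MEMPOOL_NAMESIZE];
    struct rte_mempool *mp;
    /* Unique pool name per lcore, allocated on the worker's NUMA socket. */
    snprintf(name, sizeof(name), "mbuf_pool_%u", rte_lcore_id());
    mp = rte_pktmbuf_pool_create(name, NB_MBUFS, CACHE_SIZE, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL) {
        /* handle allocation failure */
    }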
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Ori Kam [Thu, 17 Dec 2020 10:37:31 +0000 (12:37 +0200)]
regex/mlx5: add response flags
This commit propagates the response flags from the regex engine.
Signed-off-by: Francis Kelly <fkelly@nvidia.com>
Signed-off-by: Ori Kam <orika@nvidia.com>
Ori Kam [Thu, 17 Dec 2020 10:37:30 +0000 (12:37 +0200)]
regexdev: add resource limit reached flag
When scanning a buffer it is possible that the scan will abort
due to some internal resource limit.
This commit adds a response flag for this case, so the application can handle it.
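A hedged example of checking such a flag after dequeue (flag name as defined
in rte_regexdev.h by this change; confirm against the header; dev_id, qp_id,
ops and nb_ops are placeholders):
    #include <rte_regexdev.h>
    uint16_t i, nb_deq;
    nb_deq = rte_regexdev_dequeue_burst(dev_id, qp_id, ops, nb_ops);
    for (i = 0; i < nb_deq; i++) {
        if (ops[i]->rsp_flags & RTE_REGEX_OPS_RSP_RESOURCE_LIMIT_REACHED_F) {
            /* Scan aborted early: rescan the remainder or report partial results. */
        }
    }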
Signed-off-by: Francis Kelly <fkelly@nvidia.com>
Signed-off-by: Ori Kam <orika@nvidia.com>
Tal Shnaiderman [Wed, 6 Jan 2021 20:35:53 +0000 (22:35 +0200)]
eal: add generic thread-local-storage functions
Add support for TLS functionality in EAL.
The following functions are added:
rte_thread_tls_key_create - create a TLS data key.
rte_thread_tls_key_delete - delete a TLS data key.
rte_thread_tls_value_set - set value bound to the TLS key
rte_thread_tls_value_get - get value bound to the TLS key
TLS key is defined by the new type rte_tls_key.
The API allocates the thread local storage (TLS) key.
Any thread of the process can subsequently use this key
to store and retrieve values that are local to the thread.
Those functions are added in addition to TLS capability
in rte_per_lcore.h to allow abstraction of the pthread
layer for all operating systems.
The Windows implementation is under librte_eal/windows and
is implemented using the WIN32 API.
The Unix implementation is under librte_eal/unix and
is implemented using pthread.
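A hedged usage sketch (signatures assumed from the description; in
particular, the destructor argument to key creation and the placeholder
thread_private_data are assumptions):
    #include <rte_thread.h>
    static rte_tls_key key;
    /* Create the key once, e.g. at initialization. */
    if (rte_thread_tls_key_create(&key, NULL /* assumed destructor arg */) != 0)
        return -1;
    /* Each thread stores and later retrieves its own private pointer. */
    rte_thread_tls_value_set(key, thread_private_data);
    void *data = rte_thread_tls_value_get(key);
    /* Delete the key when it is no longer needed. */
    rte_thread_tls_key_delete(key);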
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Tal Shnaiderman [Wed, 6 Jan 2021 20:35:52 +0000 (22:35 +0200)]
eal: move thread affinity functions to new file
Move the definition of the functions
rte_thread_set_affinity and rte_thread_get_affinity
to a new file, rte_thread.h.
The file will implement generic threading functionality
and will only host threading functions which do not reference
the pthread API.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Alvin Zhang [Fri, 8 Jan 2021 05:35:40 +0000 (13:35 +0800)]
net/i40e: refactor RSS flow
1. Delete original code.
2. Add 2 tables (one maps flow pattern and RSS type to PCTYPE,
the other maps RSS type to input set).
3. Parse RSS pattern and RSS type to get PCTYPE.
4. Parse RSS action to get queues, RSS function and hash field.
5. Create and destroy RSS filters.
6. Create new files for hash flows.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Alvin Zhang [Fri, 8 Jan 2021 05:35:39 +0000 (13:35 +0800)]
net/i40e: fix returned code for RSS hardware failure
The API should return the system error status, but it returned the
hardware error status, which confuses the caller.
This patch adds check on hardware execution status and returns -EIO
in case of hardware execution failure.
Fixes:
1d4b2b4966bb ("net/i40e: fix VF overwrite PF RSS LUT for X722")
Fixes:
d0a349409bd7 ("i40e: support AQ based RSS config")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Alvin Zhang [Fri, 8 Jan 2021 05:35:38 +0000 (13:35 +0800)]
doc: fix RSS flow description in i40e guide
The command here does not create a queue region, but only sets the
lookup table, so the description in the doc is not exact.
Fixes:
feaae285b342 ("net/i40e: support hash configuration in RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 03:09:14 +0000 (11:09 +0800)]
net/ice/base: update copyright date
Updated the copyright to 2021.
Updated the ice driver version.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:22:49 +0000 (12:22 +0800)]
net/ice/base: update add scheduler node counter
The counter of added nodes was updated incorrectly. This issue
was exposed when the driver tried to add more than 128 queues per TC.
A fix is added to update the counter correctly.
Fixes:
93e84b1bfc92 ("net/ice/base: add basic Tx scheduler")
Cc: stable@dpdk.org
Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:20:26 +0000 (12:20 +0800)]
net/ice/base: cleanup style
A few style issues reported by checkpatch have snuck into the code;
resolve them.
PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:16:20 +0000 (12:16 +0800)]
net/ice/base: support GTPU inner for AVF flow director
Add dummy packets for IPV4_GTPU with inner IPV4/UDP/TCP with all
kinds of GTPU (EH) type (i.e., IP/EH/DL/UL) for AVF FDIR.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:10:55 +0000 (12:10 +0800)]
net/ice/base: limit forced overrides based on FW version
Beyond a specific version of firmware, there is no need to provide
override values to the firmware when setting PHY capabilities. In this
case, we do not need to indicate whether we're in Strict or Lenient Link
Mode.
In the case of translating capabilities to the configuration structure,
the module compliance enforcement is already correctly set by firmware,
so the extra code block is redundant.
Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:03:52 +0000 (12:03 +0800)]
net/ice/base: fix memory handling
Fixed memory handling where memory allocated in user space was handled
as memory allocated in kernel space within the QV os_dep implementation
of the ice_memdup function.
Fixes:
93e84b1bfc92 ("net/ice/base: add basic Tx scheduler")
Cc: stable@dpdk.org
Signed-off-by: Andrii Pypchenko <andrii.pypchenko@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 04:01:49 +0000 (12:01 +0800)]
net/ice/base: add package ptype enable information
Scan the 'Marker PType TCAM' session to retrieve the Rx parser PTYPE
enable information from the current package.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 03:57:11 +0000 (11:57 +0800)]
net/ice/base: remove deprecated field
hw_vsi_id is used to replace vsi_id, so remove the deprecated vsi_id.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Fri, 8 Jan 2021 03:54:45 +0000 (11:54 +0800)]
net/ice/base: align add VSI and update VSI AQ command buffer
Aligned the buffer of the following admin commands to their new
definitions:
* 0x210 = add_vsi
* 0x211 = update_vsi
Signed-off-by: Shay Amir <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Maxime Coquelin [Tue, 5 Jan 2021 15:34:46 +0000 (16:34 +0100)]
net/virtio: improve logs in vhost-vDPA DMA mapping
This patch adds debug logs in vhost_vdpa_dma_map() and
vhost_vdpa_dma_unmap() to ease debugging.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Tue, 5 Jan 2021 12:57:28 +0000 (13:57 +0100)]
vhost: refactor memory regions mapping
This patch moves memory region mmaping and related
preparation in a dedicated function in order to simplify
VHOST_USER_SET_MEM_TABLE request handling function.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Tue, 5 Jan 2021 12:57:27 +0000 (13:57 +0100)]
vhost: refactor postcopy registration
This patch moves the registration of postcopy to a
dedicated function, with the goal of simplifying
VHOST_USER_SET_MEM_TABLE request handling function.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Tue, 5 Jan 2021 12:57:26 +0000 (13:57 +0100)]
vhost: refactor postcopy region registration
This patch moves the registration of memory regions to
userfaultfd to a dedicated function, with the goal of
simplifying VHOST_USER_SET_MEM_TABLE request handling
function.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Xueming Li [Wed, 6 Jan 2021 03:06:30 +0000 (03:06 +0000)]
vdpa/mlx5: add hardware queue moderation
The following parameters control the HW queue moderation feature.
This feature helps to control the traffic performance and latency
trade-off.
Each packet completion report from HW to SW requires CQ processing by SW
and triggers an interrupt for the guest driver. Interrupt reporting and
handling cost CPU cycles and time, and the amount of this directly affects
packet performance and latency.
hw_latency_mode parameter [int]
0, HW default.
1, Latency is counted from the first packet completion report.
2, Latency is counted from the last packet completion.
hw_max_latency_us parameter [int]
0 - 4095, The maximum time in microseconds that packet completion
report can be delayed.
hw_max_pending_comp parameter [int]
0 - 65535, The maximum number of pending packets completions in an HW
queue.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 6 Jan 2021 03:06:29 +0000 (03:06 +0000)]
common/mlx5: support vDPA completion queue moderation
This patch introduces new parameters for VirtQ CQ moderation, used for
performance tuning.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:33 +0000 (23:50 +0800)]
vhost: replace SMP with thread fence for control path
Simply replace the smp barriers with atomic thread fence for vhost control
path, if there are no synchronization points.
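For illustration, the kind of substitution being applied (a generic sketch,
not a specific hunk from this patch):
    #include <rte_atomic.h>
    /* Before: architecture-specific full SMP barrier. */
    rte_smp_mb();
    /* After: C11-style sequentially consistent fence via the EAL wrapper. */
    rte_atomic_thread_fence(__ATOMIC_SEQ_CST);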
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:32 +0000 (23:50 +0800)]
vhost: replace SMP with thread fence for packed vring
Simply replace smp barriers with atomic thread fence for
virtio packed vring.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:31 +0000 (23:50 +0800)]
vhost: relax full barriers for used idx
The used idx can be synchronized by a one-way barrier instead of a full
write barrier for the split vring.
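Illustrative sketch of the change in pattern (not the exact patch hunk;
vq and new_idx are placeholders): a store-release on the used index replaces
a full write barrier followed by a plain store.
    /* Before: full write barrier, then a plain store of the used index. */
    rte_smp_wmb();
    vq->used->idx = new_idx;
    /* After: a store-release orders the preceding descriptor writes
     * without a two-way barrier.
     */
    __atomic_store_n(&vq->used->idx, new_idx, __ATOMIC_RELEASE);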
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:30 +0000 (23:50 +0800)]
vhost: relax full barriers for desc flags
Relax the full read barrier to one-way barrier for desc flags in
packed vring.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:29 +0000 (23:50 +0800)]
vhost: remove unnecessary SMP barrier for avail idx
The ordering between avail index and desc reads has been enforced
by load-acquire for split vring, so smp_rmb barrier is not needed
behind it.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:28 +0000 (23:50 +0800)]
vhost: remove unnecessary SMP barrier for desc flags
As function desc_is_avail performs a load-acquire barrier to
enforce the ordering between desc flags and desc content, it is
unnecessary to add a rte_smp_rmb barrier around the trace which
follows desc_is_avail.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:27 +0000 (23:50 +0800)]
examples/vhost_blk: replace SMP barrier with thread fence
Simply replace the rte_smp_mb barriers with an SEQ_CST atomic thread fence
where there are no load/store operations.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Joyce Kong [Mon, 21 Dec 2020 15:50:26 +0000 (23:50 +0800)]
examples/vhost: relax memory ordering when enqueue/dequeue
Use C11 atomic APIs with one-way barriers to replace two-way
barriers when operating enqueue/dequeue. Used->idx and avail->idx
are the synchronization points for split vring.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 14:23:21 +0000 (22:23 +0800)]
net/virtio: replace full barrier with thread fence
Replace the smp barriers with atomic thread fence for synchronization
between different threads, if there are no load/store operations.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 14:23:20 +0000 (22:23 +0800)]
net/virtio: replace full barrier with relaxed ones for Arm
Relax the full write barriers to one-way barriers in the virtio
control path on the Arm platform.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 14:23:19 +0000 (22:23 +0800)]
net/virtio: replace SMP barrier with IO barrier
Replace rte_smp_wmb/rmb with rte_io_wmb/rmb as they are the same on the x86
and ppc platforms. Then, for the functions virtqueue_fetch_flags_packed/
virtqueue_store_flags_packed, the if and else branches are still identical
on all platforms except Arm.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Mon, 21 Dec 2020 14:23:18 +0000 (22:23 +0800)]
net/virtio: remove unnecessary read memory barrier
As desc_is_used has a load-acquire or rte_io_rmb inside
and waits for the used desc in the virtqueue, it is OK to remove
the virtio_rmb behind it.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Olivier Matz [Fri, 18 Dec 2020 13:23:52 +0000 (14:23 +0100)]
net/virtio-user: fix protocol features advertising
When connected to a vhost-user backend, the flag
VHOST_USER_F_PROTOCOL_FEATURES is not advertised, preventing multiqueue
from working (the VHOST_USER_PROTOCOL_F_MQ protocol feature is ignored by
some backends if the VHOST_USER_F_PROTOCOL_FEATURES feature is not set).
When setting vhost-user features, advertise this flag if it was
advertised by our peer.
Fixes:
8e7561054ac7 ("net/virtio: support vhost-user protocol features")
Cc: stable@dpdk.org
Suggested-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jiawei Zhu [Fri, 11 Dec 2020 16:53:18 +0000 (00:53 +0800)]
net/virtio-user: fix run closing stdin and close callfd
When i < VIRTIO_MAX_VIRTQUEUES and j == i,
dev->callfds[i] and dev->kickfds[i] default to 0,
so close(0) is called, closing standard input (stdin).
Also, when kickfd creation fails,
one callfd is left unclosed.
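A hedged sketch of the safer pattern (not necessarily the exact fix):
initialize the fds to -1 and only close the ones that were actually created.
    #include <unistd.h>
    /* Initialization: mark all fds as not created. */
    for (i = 0; i < VIRTIO_MAX_VIRTQUEUES; i++)
        dev->callfds[i] = dev->kickfds[i] = -1;
    /* Cleanup: close only valid fds, never close(0) by accident. */
    for (i = 0; i < VIRTIO_MAX_VIRTQUEUES; i++) {
        if (dev->callfds[i] >= 0)
            close(dev->callfds[i]);
        if (dev->kickfds[i] >= 0)
            close(dev->kickfds[i]);
    }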
Fixes:
e6e7ad8b3024 ("net/virtio-user: move eventfd open/close into init/uninit")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Zhu <zhujiawei12@huawei.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 2 Dec 2020 23:36:43 +0000 (23:36 +0000)]
vdpa/mlx5: set default event mode to polling
For better performance and latency, this patch sets the default event
handling mode to polling mode, which uses a dedicated thread per device to
poll and process events.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 2 Dec 2020 23:36:42 +0000 (23:36 +0000)]
vdpa/mlx5: add CPU core parameter to bind polling thread
This patch adds a new device argument to specify the CPU core affinity of the
event polling thread, for better latency and throughput. The thread
can also be located by the name "vDPA-mlx5-<id>".
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 2 Dec 2020 23:36:41 +0000 (23:36 +0000)]
vdpa/mlx5: default polling mode delay time to zero
To improve performance and latency, this patch sets the Rx polling mode
default delay time to zero.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 2 Dec 2020 23:36:40 +0000 (23:36 +0000)]
vdpa/mlx5: set polling mode default delay to zero
To improve throughput and latency, this patch allows the Rx polling timer
delay to be set to 0 us.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Tue, 17 Nov 2020 10:06:35 +0000 (18:06 +0800)]
net/virtio: add election for packed vector NEON path
Add NEON vectorized path selection logic. The default setting comes from
the vectorized devarg, then each criterion is checked.
The packed ring vectorized NEON path needs:
NEON is supported by compiler and host
VERSION_1 and IN_ORDER features are negotiated
mergeable feature is not negotiated
LRO offloading is disabled
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Tue, 17 Nov 2020 10:06:34 +0000 (18:06 +0800)]
net/virtio: add vectorized packed ring NEON Tx
Optimize packed ring Tx batch path with NEON instructions.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Tue, 17 Nov 2020 10:06:33 +0000 (18:06 +0800)]
net/virtio: add vectorized packed ring NEON Rx
Optimize packed ring Rx batch path with NEON instructions.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Joyce Kong [Tue, 17 Nov 2020 10:06:32 +0000 (18:06 +0800)]
net/virtio: separate AVX Rx/Tx
Split out AVX instruction based virtio packed ring Rx and Tx
implementation to a separate file.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ophir Munk [Sun, 3 Jan 2021 12:15:49 +0000 (12:15 +0000)]
net/mlx5: wrap sampling actions per OS
Wrap glue calls dr_create_flow_action_sampler() and
dr_create_flow_action_dest_array() as OS-specific functions.
This is a follow-up on
commit
b293fbf9672b ("net/mlx5: add OS specific flow actions operations")
On Windows, the sampling actions wrappers currently return ENOTSUP.
Using configuration definitions HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE and
HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY the missing sampling DV structs
are added as stubs to windows/mlx5_glue.h file.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Tue, 15 Dec 2020 08:48:32 +0000 (08:48 +0000)]
net/mlx5: fix leak on Tx queue creation failure
In Tx queue creation, there are two validations for the Tx
configuration.
When one of them fails, the MR btree memory was not freed, which caused a
memory leak.
Free it.
Fixes:
f6d9ab4e769f ("net/mlx5: check Tx queue size overflow")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Tue, 15 Dec 2020 08:48:31 +0000 (08:48 +0000)]
net/mlx5: fix leak on Rx queue creation failure
In Rx queue creation, there are some validations for the Rx
configuration.
When one of them fails, the MR btree memory was not freed, which caused a
memory leak.
Free it.
Fixes:
974f1e7ef146 ("net/mlx5: add new memory region support")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Shiri Kuzin [Thu, 31 Dec 2020 09:33:28 +0000 (11:33 +0200)]
net/mlx5: fix VXLAN decap on non-VXLAN flow
The vxlan_decap action performs decapsulation of the VXLAN tunnel.
Currently we can create a flow with vxlan_decap without
matching on the VXLAN header.
To solve this issue, this patch adds validation verifying
that a VXLAN item is present when the
vxlan_decap action is specified.
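For context, a hedged sketch of a flow definition that satisfies the new
check: the pattern carries a VXLAN item ahead of the vxlan_decap action
(item/action specs are left default for brevity; the queue index is a
placeholder):
    #include <rte_flow.h>
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN },  /* required to pass validation */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };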
Fixes:
49d6465af3e1 ("net/mlx5: add VXLAN decap action to Direct Verbs")
Cc: stable@dpdk.org
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
Suanming Mou [Tue, 15 Dec 2020 03:46:24 +0000 (11:46 +0800)]
net/mlx5: fix shared RSS and mark actions combination
In order to allow mbuf mark ID update in Rx data-path, there is a
mechanism in the PMD to enable it according to the rte_flows.
When a flow with mark ID and RSS/QUEUE action exists, all the relevant
Rx queues will be enabled to report the mark ID.
When a shared RSS action is combined with a mark action, the PMD mechanism
misses the Rx queue updates.
This commit handles the shared RSS case in the mechanism too.
Fixes:
e1592b6c4dea ("net/mlx5: make Rx queue thread safe")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:33:02 +0000 (14:33 +0200)]
net/mlx5: fix comparison sign in flow engine
The clang compiler warns on size mismatches of several
comparisons.
warning: comparison of integers of different signs
To resolve those, the right types are used or cast to.
Fixes:
3e8edd0ef848 ("net/mlx5: update metadata register ID query")
Fixes:
e554b672aa05 ("net/mlx5: support flow tag")
Fixes:
c8f0abe7f89d ("net/mlx5: fix meter color register consideration")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:33:01 +0000 (14:33 +0200)]
net/mlx5: skip IPv6 broadcast flow creation failure
IPv6 broadcast flow creation is unsupported on Windows.
Do not fail when the IPv6 broadcast flow cannot be created,
to avoid failing the entire default rules creation.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:33:00 +0000 (14:33 +0200)]
net/mlx5: wrap flow domain sync per OS
Use OS functions for flow_dv_sync_domain so it compiles
on Windows.
mlx5_os_flow_dr_sync_domain is unsupported on Windows.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:32:59 +0000 (14:32 +0200)]
net/mlx5: initialize context list mutex dynamically
The mutex mlx5_dev_ctx_list_mutex was initialized with the
PTHREAD_MUTEX_INITIALIZER global macro; however, this macro
is not supported by the Windows shim implementation of pthreads
in DPDK.
Moved the initialization of this mutex to RTE_INIT to support this mutex
on both OSs.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:32:58 +0000 (14:32 +0200)]
net/mlx5: use OS-independent code in ASO feature
Modify the ASO feature to use OS-independent code
so as not to break the Windows build.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tal Shnaiderman [Mon, 28 Dec 2020 12:32:57 +0000 (14:32 +0200)]
net/mlx5: fix device name size on Windows
The Windows DevX interface name is the same as the device name but with
a different size than IF_NAMESIZE. To support it, MLX5_NAMESIZE
is defined with the IF_NAMESIZE value for Linux and the MLX5_FS_NAME_MAX
value for Windows.
Fixes:
e9c0b96e3526 ("net/mlx5: move Linux ifname function")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>