David Marchand [Mon, 3 May 2021 16:43:43 +0000 (18:43 +0200)]
net/virtio: refactor Tx offload helper
Purely cosmetic, but it is rather odd to have an "offload" helper that
checks whether it actually must do something.
We already have the same checks in most callers, so move this branch
into them.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Flavio Leitner <fbl@sysclose.org>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
David Marchand [Mon, 3 May 2021 16:43:42 +0000 (18:43 +0200)]
net/virtio: do not touch Tx offload flags
Tx offload flags are the application's responsibility.
Leave the mbuf alone and use local storage for the implicit TCP checksum
offload in the TSO case.
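A minimal sketch of the idea (not the exact patch), using the usual virtio-net header fields and mbuf Tx flags: the TSO-implied checksum request stays in a local variable instead of being written back into the mbuf.
static inline void
virtqueue_xmit_offload_sketch(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
{
	uint64_t csum_l4 = m->ol_flags & PKT_TX_L4_MASK;

	/* TSO implies a TCP checksum request; keep it local, do not touch
	 * m->ol_flags, which belongs to the application. */
	if (m->ol_flags & PKT_TX_TCP_SEG)
		csum_l4 |= PKT_TX_TCP_CKSUM;

	switch (csum_l4) {
	case PKT_TX_UDP_CKSUM:
		hdr->csum_start = m->l2_len + m->l3_len;
		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
		break;
	case PKT_TX_TCP_CKSUM:
		hdr->csum_start = m->l2_len + m->l3_len;
		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
		break;
	default:
		break;
	}
}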
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Flavio Leitner <fbl@sysclose.org>
Matan Azrad [Sun, 2 May 2021 10:45:10 +0000 (13:45 +0300)]
vdpa/mlx5: improve interrupt management
The driver should notify the guest for each traffic burst detected by CQ
polling.
The CQ polling trigger is defined by the `event_mode` device argument,
either busy polling on all the CQs or a blocking call to the HW
completion event using a DevX channel.
The polling event modes can also move to a blocking call when the
traffic rate is low.
The current blocking call uses the EAL interrupt API, which suffers a lot
of management overhead and serves all drivers and libraries with only a
single thread.
Use the blocking FD of the DevX channel so the blocking call is done
directly by the DevX channel FD mechanism.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cheng Jiang [Tue, 27 Apr 2021 08:03:34 +0000 (08:03 +0000)]
vhost: add batch datapath for async packed ring
Add batch datapath for async vhost packed ring to improve the
performance of small packet processing.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cheng Jiang [Tue, 27 Apr 2021 08:03:33 +0000 (08:03 +0000)]
vhost: support packed ring in async datapath
For now, the async vhost data path only supports the split ring. This patch
enables the packed ring in the async vhost data path to make async vhost
compatible with the virtio 1.1 spec.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cheng Jiang [Tue, 27 Apr 2021 08:03:32 +0000 (08:03 +0000)]
vhost: refactor async split ring functions
This patch moves some async vhost split ring code into
inline functions to improve readability. It also
changes the pointer/index style of the iterator to make the
code more concise.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Cheng Jiang [Tue, 27 Apr 2021 03:14:01 +0000 (03:14 +0000)]
examples/vhost: fix overflow in argument parsing
Change the way arguments are passed to fix a potential overflow in argument processing.
Coverity issue: 363741
Fixes:
965b06f03582 ("examples/vhost: enhance getopt_long usage")
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Wed, 14 Apr 2021 14:14:04 +0000 (22:14 +0800)]
net/virtio: fix vectorized Rx queue rearm
When the Rx queue works in vectorized mode with rxd <= 512, under traffic
of a high PPS rate, testpmd often starts, receives rxd packets, and then
stops receiving any more.
Testpmd starts with an rxq flush which tries to receive and drop
MAX_PKT_BURST (512) packets. When the Rx burst size >= Rx queue size, all
descriptors in the used queue are consumed without rearm, and the device
can't receive more packets. The next Rx burst returns at once since no
used descriptors are found, the rearm logic is skipped, and the Rx vq is
kept in a starving state.
To avoid Rx vq starvation, this patch always checks the available queue
and rearms if needed, even when no used descriptor is reported by the device.
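A rough sketch of the fix in the vectorized receive path (function, field and threshold names follow the virtio vectorized code loosely and should be treated as illustrative):
uint16_t
virtio_recv_pkts_vec_sketch(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
		uint16_t nb_pkts)
{
	uint16_t nb_used = virtqueue_nused(vq);

	/* Rearm first: if a previous burst consumed the whole ring,
	 * nb_used may stay 0 forever unless free slots are refilled here. */
	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH)
		virtio_rxq_rearm_vec(&vq->rxq);

	if (unlikely(nb_used == 0))
		return 0;

	/* ... vectorized receive of up to nb_pkts packets elided ... */
	return 0;
}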
Fixes:
fc3d66212fed ("virtio: add vector Rx")
Fixes:
2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
Fixes:
52b5a707e6ca ("net/virtio: add Altivec Rx")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ciara Power [Wed, 5 May 2021 15:22:48 +0000 (15:22 +0000)]
telemetry: fix race on callbacks list
The list_commands() function accessed the callbacks list,
but did not take the lock. This may have caused inconsistencies if
callbacks were being registered at the same time.
This is now fixed to lock before iterating the list,
and unlock afterwards.
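The fix amounts to bracketing the iteration with the callbacks lock, roughly as below (variable and lock names are illustrative of telemetry.c, not guaranteed exact):
static int
list_commands(const char *cmd __rte_unused, const char *params __rte_unused,
		struct rte_tel_data *d)
{
	int i;

	rte_tel_data_start_array(d, RTE_TEL_STRING_VAL);
	rte_spinlock_lock(&callbacks_sl);	/* added: take the lock */
	for (i = 0; i < num_callbacks; i++)
		rte_tel_data_add_array_string(d, callbacks[i].cmd);
	rte_spinlock_unlock(&callbacks_sl);	/* added: release it */
	return 0;
}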
Fixes:
f38748736eb2 ("telemetry: add default callback commands")
Cc: stable@dpdk.org
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Jerin Jacob [Mon, 3 May 2021 16:34:28 +0000 (22:04 +0530)]
telemetry: hide internal define
Remove TELEMETRY_MAX_CALLBACKS symbol from the public
rte_telemetry.h header file.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Stanislaw Kardach [Wed, 28 Apr 2021 14:25:53 +0000 (16:25 +0200)]
test/distributor: fix burst flush on worker quit
While working on RISC-V port I have encountered a situation where worker
threads get stuck in the rte_distributor_return_pkt() function in the
burst test.
Investigation showed some of the threads enter this function with
flag RTE_DISTRIB_GET_BUF set in the d->retptr64[0]. At the same time the
main thread has already passed rte_distributor_process() so nobody will
clear this flag and hence workers can't return.
What I've noticed is that adding a flush just after the last _process(),
similar to how the quit_workers() function is written in
test_distributor.c, fixes the issue.
Lukasz Wojciechowski reproduced the same issue on x86 using a VM with 32
emulated CPU cores to force some lcores not to be woken up.
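A simplified sketch of the change in the burst test's quit_workers() (the surrounding test scaffolding, including the 'quit' flag, is abbreviated and illustrative):
static void
quit_workers(struct rte_distributor *d, struct rte_mempool *p)
{
	const unsigned int num_workers = rte_lcore_count() - 1;
	struct rte_mbuf *bufs[RTE_MAX_LCORE];

	if (rte_mempool_get_bulk(p, (void **)bufs, num_workers) != 0)
		return;

	quit = 1;				/* workers poll this flag */
	rte_distributor_process(d, bufs, num_workers);
	rte_distributor_flush(d);		/* the added flush: releases workers
						 * still blocked in return_pkt() */

	rte_eal_mp_wait_lcore();
	rte_mempool_put_bulk(p, (void **)bufs, num_workers);
	quit = 0;
}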
Fixes:
7c3287a10535 ("test/distributor: add performance test for burst mode")
Cc: stable@dpdk.org
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Acked-by: David Hunt <david.hunt@intel.com>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Reviewed-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Stanislaw Kardach [Wed, 28 Apr 2021 14:25:52 +0000 (16:25 +0200)]
test/distributor: fix worker notification in burst mode
Because a single worker can process more than one packet from the
distributor, the final set of notifications in burst mode should be
sent one-by-one to ensure that each worker has a chance to wake up.
This fix mirrors the change done in the functional test by
commit f72bff0ec272 ("test/distributor: fix quitting workers in burst mode").
Fixes:
c3eabff124e6 ("distributor: add unit tests")
Cc: stable@dpdk.org
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Acked-by: David Hunt <david.hunt@intel.com>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Reviewed-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Hemant Agrawal [Thu, 29 Apr 2021 05:55:48 +0000 (11:25 +0530)]
ethdev: add missing buses in device iterator
This patch fixes an issue with OVS 2.15 not working on
DPAA/FSLMC based platforms due to missing support for
these buses in dev_iterate.
It adds dpaa_bus and fslmc to the device iterator's
bus arguments.
Fixes:
214ed1acd125 ("ethdev: add iterator to match devargs input")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Chengwen Feng [Fri, 30 Apr 2021 09:04:04 +0000 (17:04 +0800)]
net/hns3: increase readability in logs
Some logs format u64 variables in hexadecimal, which is not readable.
This patch formats most u64 variables in decimal, and adds a '0x' prefix
to the ones left in hexadecimal.
Fixes:
c37ca66f2b27 ("net/hns3: support RSS")
Fixes:
2790c6464725 ("net/hns3: support device reset")
Fixes:
8839c5e202f3 ("net/hns3: support device stats")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 09:04:03 +0000 (17:04 +0800)]
net/hns3: remove unused VMDq code
VMDq is not supported yet, so remove the unused code.
Fixes:
d51867db65c1 ("net/hns3: add initialization")
Fixes:
1265b5372d9d ("net/hns3: add some definitions for data structure and macro")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 09:04:02 +0000 (17:04 +0800)]
net/hns3: remove read when enabling TM QCN error event
According to the HW manual, the read operation is unnecessary when
enabling TM QCN error event, so remove it.
Fixes:
f53a793bb7c2 ("net/hns3: add more hardware error types")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:50 +0000 (14:28 +0800)]
net/hns3: fix vector Rx burst limitation
Currently, the driver uses the macro HNS3_DEFAULT_RX_BURST, whose value is
32, to limit the vector Rx burst size; as a result, the burst size
can't exceed 32.
This patch fixes the problem by supporting bigger burst sizes.
It also adjusts HNS3_DEFAULT_RX_BURST to 64, as it performs better than 32.
Fixes:
a3d4f4d291d7 ("net/hns3: support NEON Rx")
Fixes:
952ebacce4f2 ("net/hns3: support SVE Rx")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:49 +0000 (14:28 +0800)]
net/hns3: log flow director configuration
The rte_flow interface does not provide an API to query capabilities.
Therefore, flow director (fdir) configuration logs are added to facilitate
debugging.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:48 +0000 (14:28 +0800)]
net/hns3: improve IO path data cache usage
This patch improves data cache usage by:
1. Rearranging the frequently accessed rxq fields in the IO path into the
first 128B.
2. Rearranging the frequently accessed txq fields in the IO path into the
first 64B.
3. Aligning the ptype table to the full cacheline size (128B) instead of
the minimum cacheline size (64B), because L1/L2 cachelines are 64B and L3
is 128B on the Kunpeng ARM platform.
The performance gain is 1.5% in 64B packet mac-forwarding scenarios.
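Illustrative only (not the actual hns3 structures): hot IO-path fields are grouped at the front, and the ptype table is aligned to the full 128B cacheline rather than the 64B minimum.
#include <stdint.h>
#include <rte_common.h>

struct rxq_sketch {
	/* fields read/written on every Rx burst come first (<= 128B) */
	void *rx_ring;
	uint16_t next_to_use;
	uint16_t rx_free_hold;
	/* ... cold configuration fields follow ... */
} __rte_cache_aligned;

struct ptype_table_sketch {
	uint32_t l3table[16];
	uint32_t l4table[16];
} __rte_aligned(128);	/* Kunpeng L3 cacheline is 128B */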
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:47 +0000 (14:28 +0800)]
net/hns3: use existing macro to get array size
This patch uses RTE_DIM() instead of ARRAY_SIZE().
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:46 +0000 (14:28 +0800)]
net/hns3: refactor optimised register write
This patch modifies the hns3_write_reg_opt() implementation because
rte_write32() already performs rte_io_wmb().
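A sketch of the simplified helper, assuming the original paired an explicit barrier with a relaxed write:
#include <rte_io.h>

static inline void
hns3_write_reg_opt(volatile void *addr, uint32_t value)
{
	/* rte_write32() already issues the I/O write barrier, so a separate
	 * rte_io_wmb() followed by a relaxed write is unnecessary. */
	rte_write32(value, addr);
}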
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Fri, 30 Apr 2021 06:28:45 +0000 (14:28 +0800)]
net/hns3: remove some unused capabilities
This patch deletes some unused capabilities:
1. Some unused firmware capability definitions: UDP_GSO, ATR, INT_QL,
SIMPLE_BD, TX_PUSH, FEC and PAUSE.
2. Some unused driver capability definitions: UDP_GSO, TX_PUSH.
3. It also redefines HNS3_DEV_SUPPORT_* as an enum type and changes some
of the values. Note: the HNS3_DEV_SUPPORT_* values are used only inside
the driver, so it is safe to change them.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Gregory Etelson [Thu, 29 Apr 2021 18:36:58 +0000 (21:36 +0300)]
net/mlx5: support integrity flow item
MLX5 PMD supports the following integrity filters for outer and
inner network headers:
- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok
`level` values 0 and 1 reference the outer headers.
`level` > 1 references the inner headers.
Flow rule items supplied by the application must explicitly specify the
network headers referred to by the integrity item. For example:
flow create 0 ingress
pattern
integrity level is 0 value mask l3_ok value spec l3_ok /
eth / ipv6 / end …
or
flow create 0 ingress
pattern
integrity level is 0 value mask l4_ok value spec 0 /
eth / ipv4 proto is udp / end …
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Gregory Etelson [Thu, 29 Apr 2021 18:36:57 +0000 (21:36 +0300)]
common/mlx5: add PRM definitions for integrity check
Add integrity and IPv4 IHL bits to PRM file.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Gregory Etelson [Thu, 29 Apr 2021 18:36:56 +0000 (21:36 +0300)]
ethdev: fix integrity flow item
Add the integrity item definition to the rte_flow_desc_item array.
The new entry allows building an RTE flow item from data
stored in the rte_flow_item_integrity type.
Fixes:
b10a421a1f3b ("ethdev: add packet integrity check flow rules")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Min Hu (Connor) [Thu, 29 Apr 2021 09:19:03 +0000 (17:19 +0800)]
net/hns3: fix IEEE 1588 PTP for scalar scattered Rx
When jumbo frame is enabled, the Rx function will choose the 'Scalar
Scattered' function, which has no PTP handling.
This patch fixes it by adding PTP handling to the 'Scalar Scattered'
function.
Fixes:
38b539d96eb6 ("net/hns3: support IEEE 1588 PTP")
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Thu, 29 Apr 2021 09:03:59 +0000 (17:03 +0800)]
net/hns3: fix MAC enable failure rollback
If the driver fails to enable the MAC, it does not need to roll back the
MAC configuration. This patch fixes it.
Fixes:
bdaf190f8235 ("net/hns3: support link speed autoneg for PF")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Min Hu (Connor) [Thu, 29 Apr 2021 06:12:07 +0000 (14:12 +0800)]
doc: add build config option in hns3 guide
This patch adds a description of the config file option for the max TQP
number per PF.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Kalesh AP [Fri, 23 Apr 2021 05:22:26 +0000 (10:52 +0530)]
net/bnxt: drop unused attribute
Remove "__rte_unused" instances that are wrongly marked.
Fixes:
6dc83230b43b ("net/bnxt: support port representor data path")
Fixes:
1bf01f5135f8 ("net/bnxt: prevent device access when device is in reset")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Andrew Rybchenko [Wed, 28 Apr 2021 14:17:02 +0000 (17:17 +0300)]
net/sfc: fix mark support in EF100 native Rx datapath
Decouple user mark from user flag. Usage of mark does not require to
use flag as well. Flag is not actually supported yet.
Fixes:
1aacc3d388d3 ("net/sfc: support user mark and flag Rx for EF100")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Wenjun Wu [Thu, 29 Apr 2021 08:27:24 +0000 (16:27 +0800)]
net/i40e: extend VF reset waiting time
When starting a VF, the VF issues a reset command to the PF, waits a fixed
amount of time, and assumes the VF reset is done on the PF side. However,
compared with a kernel PF, a DPDK PF needs more time to set up. If we
run a DPDK PF to support a DPDK VF, the original delay is not
enough.
When we first start a VF after the PF is launched, the execution
time of the statement info.msg_buf = rte_zmalloc("msg_buffer",
info.buf_len, 0); in the function i40e_dev_handle_aq_msg is more
than 200ms. It may cause a VF start error.
Since iavf can hardly trigger this issue and i40evf will be replaced
by iavf in future DPDK versions, this patch provides a workaround.
We extend the VF reset waiting time from 200ms to 500ms so that
the VF can start normally when using a DPDK PF and DPDK VF in most cases.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Robin Zhang [Wed, 28 Apr 2021 08:04:52 +0000 (08:04 +0000)]
net/i40e: fix primary MAC type when starting port
When starting a port, all MAC addresses will be set. We should set the MAC
type of the default MAC address to VIRTCHNL_ETHER_ADDR_PRIMARY.
Fixes:
3f604ddf33cf ("net/i40e: fix lack of MAC type when set MAC address")
Cc: stable@dpdk.org
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Robin Zhang [Wed, 28 Apr 2021 08:04:51 +0000 (08:04 +0000)]
net/iavf: fix primary MAC type when starting port
When starting a port, all MAC addresses will be set. We should set the MAC
type of the default MAC address to VIRTCHNL_ETHER_ADDR_PRIMARY.
Fixes:
b335e7203475 ("net/iavf: fix lack of MAC type when set MAC address")
Cc: stable@dpdk.org
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Wei Huang [Thu, 29 Apr 2021 02:33:40 +0000 (22:33 -0400)]
raw/ifpga: fix device name format
The device name format used in ifpga_rawdev_create() was changed to
"IFPGA:%02x:%02x.%x", but the format used in ifpga_rawdev_destroy()
was left as "IFPGA:%x:%02x.%x"; it should have been changed in sync.
To prevent further similar errors, the macro "IFPGA_RAWDEV_NAME_FMT" is
defined to replace this format string.
Fixes:
9c006c45d0c5 ("raw/ifpga: scan PCIe BDF device tree")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Wenzhuo Lu [Thu, 29 Apr 2021 01:33:57 +0000 (09:33 +0800)]
net/iavf: fix Rx function selection
A performance drop is caused by the Rx scalar path being
selected when AVX512 is disabled and some HW offload
is enabled.
Actually, the HW offload is supported by AVX2 and SSE, so in this
scenario the AVX2 path should be chosen.
This patch removes the offload related check for SSE and AVX2,
as SSE and AVX2 do support the offload features.
There is no implementation change in the data path.
Fixes:
eff56a7b9f97 ("net/iavf: add offload path for Rx AVX512")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Michael Baum [Thu, 29 Apr 2021 09:55:42 +0000 (12:55 +0300)]
net/mlx5: use aging by counter when counter exists
The driver supports two mechanisms for the AGE action:
1. Aging by counter - a HW counter is configured for the flow traffic,
and the driver polls the counter values efficiently to detect flow timeout.
2. Aging by ASO flow hit bit - a HW ASO flow-hit bit is allocated for the
flow, and the driver polls the bit efficiently to detect flow timeout.
The ASO bit is only a single-bit resource while a counter is 16 bytes;
hence, it is better to use ASO instead of a counter for aging.
When a non-shared COUNT action is also configured for the flow, the
driver can use the same counter for the AGE action as well, with no need
to create another ASO action for it.
The current code always uses ASO when it is supported by the device;
change it to reuse the non-shared counter if it exists in the flow.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 29 Apr 2021 09:55:41 +0000 (12:55 +0300)]
net/mlx5: fix flow age event triggering
A FLOW_AGE event should be invoked when a new aged-out flow is detected
by the PMD after the last user get-aged query call.
The PMD manages 2 flags for this information and checks them in order to
decide if an event should be invoked:
MLX5_AGE_EVENT_NEW - a new aged-out flow was detected after the last
check.
MLX5_AGE_TRIGGER - a get-aged query was called after the last aged-out
flow.
The 2 flags were unset after the event invocation.
When the user calls the get-aged query from the event callback, the
TRIGGER flag was set inside the user callback and unset directly after
the callback, which may stop the event invocation forever.
Unset the TRIGGER flag before the event invocation in order to allow it
to be set by the user callback.
Fixes:
f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org
Reported-by: David Bouyeure <david.bouyeure@fraudbuster.mobi>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Thu, 29 Apr 2021 09:55:40 +0000 (12:55 +0300)]
app/testpmd: support indirect counter action query
Counter action query was implemented as part of flow query, but was not
implemented as part of indirect action query.
This patch adds the required implementation.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Michael Baum [Thu, 29 Apr 2021 09:55:39 +0000 (12:55 +0300)]
app/testpmd: remove indirect RSS action query
The port_action_handle_query function supports query operation for
indirect RSS action.
No driver currently supports this operation, and this support is
unnecessary.
Remove it.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Michael Baum [Thu, 29 Apr 2021 09:55:38 +0000 (12:55 +0300)]
net/mlx5: support flow count action handle
Existing API supports counter action to count traffic of a single flow.
The user can share the count action among different flows using the
shared flag and the same counter ID in the count action configuration.
Recent patch [1] introduced the indirect action API.
Using this API, an action can be created as indirect, unattached to any
flow rule.
Multiple flows can then be created using the same indirect action.
The new API also supports query operation of an indirect action.
The new API is more efficient because the driver gets its own handle
for the count action instead of managing a mapping between the user ID
and the driver handle.
Support create, query and destroy indirect action operations for flow
count action.
Application will use the indirect action query operation to query this
count action.
In the meantime, the old sharing mechanism (with the sharing flag)
continues to be supported, and the user can choose how to share the
counter.
The new indirect action API is only supported in DevX, so sharing
counter action in Verbs can only be done through the old mechanism.
[1] https://mails.dpdk.org/archives/dev/2020-July/174110.html
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Chengchang Tang [Wed, 28 Apr 2021 07:20:55 +0000 (15:20 +0800)]
net/hns3: select Tx prepare based on Tx offload
Tx prepare should be called only when necessary, to reduce the impact on
performance.
For partial Tx offloads, users need to call rte_eth_tx_prepare() to
invoke the tx_prepare callback of PMDs. In this callback, the PMDs
adjust the packet based on the offloads used by the user (e.g. for
some PMDs, pseudo-headers need to be calculated when the Tx checksum is
offloaded).
However, users cannot grasp all the hardware and PMD
characteristics. As a result, they cannot decide when they actually need
to call tx_prepare. Therefore, we should assume that the user
calls rte_eth_tx_prepare() when using any Tx offloading to ensure that
related functions work properly. Whether packets need to be adjusted
should be determined by the PMDs. They can make this judgment in the
dev_configure or queue_setup phase. When the related function is not
needed, the tx_prepare pointer should be set to NULL to reduce the
performance loss caused by invoking rte_eth_tx_prepare().
In this patch, if tx_prepare is not required for the offloads used by
the users, the tx_prepare pointer is set to NULL.
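A rough sketch of the selection, with a hypothetical helper name; only the presence of prepare-requiring offloads decides whether a real callback is installed:
static eth_tx_prep_t
hns3_get_tx_prepare_sketch(struct rte_eth_dev *dev)
{
	uint64_t offloads = dev->data->dev_conf.txmode.offloads;
	uint64_t need_prep = DEV_TX_OFFLOAD_IPV4_CKSUM |
			     DEV_TX_OFFLOAD_TCP_CKSUM |
			     DEV_TX_OFFLOAD_UDP_CKSUM |
			     DEV_TX_OFFLOAD_TCP_TSO;

	/* with a NULL pointer, rte_eth_tx_prepare() returns immediately */
	return (offloads & need_prep) ? hns3_prep_pkts : NULL;
}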
Fixes:
bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Wed, 28 Apr 2021 07:20:54 +0000 (15:20 +0800)]
net/hns3: remove unused macros
The hns3_is_csq() and cmq_ring_to_dev() macros were defined in a previous
version but never used.
Fixes:
737f30e1c3ab ("net/hns3: support command interface with firmware")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Wed, 28 Apr 2021 07:20:53 +0000 (15:20 +0800)]
net/hns3: fix time delta calculation
Currently, the driver uses the gettimeofday() API to get the time and
then calculates the time delta; the delta is used mainly for judging
timeouts.
But the time obtained from the gettimeofday() API isn't monotonically
increasing, so the process may fail if the system time is changed.
We use the following scheme to fix it:
1. Add a hns3_clock_gettime() API which gets the monotonically
increasing time.
2. Add a hns3_clock_calctime_ms() API which gets the milliseconds of
the monotonically increasing time.
3. Add a hns3_clock_calctime_ms() API which calculates the milliseconds of
a given time.
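A sketch of the helpers based on CLOCK_MONOTONIC (the name of the last wrapper is an assumption, since the list above repeats one name):
#include <stdint.h>
#include <time.h>

#define MSEC_PER_SEC	1000L
#define NSEC_PER_MSEC	1000000L

void
hns3_clock_gettime(struct timespec *tv)
{
	clock_gettime(CLOCK_MONOTONIC, tv);	/* immune to wall-clock changes */
}

uint64_t
hns3_clock_calctime_ms(struct timespec *tv)
{
	return (uint64_t)tv->tv_sec * MSEC_PER_SEC +
	       tv->tv_nsec / NSEC_PER_MSEC;
}

uint64_t
hns3_clock_gettime_ms(void)
{
	struct timespec tv;

	hns3_clock_gettime(&tv);
	return hns3_clock_calctime_ms(&tv);
}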
Fixes:
2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Wed, 28 Apr 2021 07:20:52 +0000 (15:20 +0800)]
net/hns3: log time delta in decimal format
If the reset process costs too much time, the driver logs an error
message which formats the time delta, but the formatting uses
hexadecimal, which is not readable.
This patch fixes it by formatting the delta in decimal.
Fixes:
2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Chengwen Feng [Wed, 28 Apr 2021 07:20:51 +0000 (15:20 +0800)]
net/hns3: support preferred burst size and queues in VF
This patch supports getting the preferred burst size and queue numbers
when calling the rte_eth_dev_info_get() API on a VF.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Wed, 28 Apr 2021 06:40:46 +0000 (14:40 +0800)]
app/testpmd: remove redundant forwarding initialization
fwd_config_setup() is called after init_fwd_streams() and reinitializes
the forwarding streams.
This patch therefore removes init_fwd_streams() from init_config().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:45 +0000 (14:40 +0800)]
app/testpmd: add forwarding configuration to DCB config
This patch adds fwd_config_setup() at the end of cmd_config_dcb_parsed()
to update "cur_fwd_config", so that the actual forwarding streams can be
queried by the "show config fwd" cmd.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:44 +0000 (14:40 +0800)]
app/testpmd: verify DCB config during forward config
Currently, the check for doing the DCB test is assigned to
start_packet_forwarding(), which is called when running the
"start" command. But fwd_config_setup() is used in many
scenarios, such as "port config all rxq".
This patch moves the check from start_packet_forwarding()
to fwd_config_setup().
Fixes:
7741e4cf16c0 ("app/testpmd: VMDq and DCB updates")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:43 +0000 (14:40 +0800)]
app/testpmd: check DCB info support for configuration
Currently, '.get_dcb_info' must be supported for the port doing the DCB
test, or all information in 'rte_eth_dcb_info' is zero. Such a port should
be rejected when the user runs the command "port config 0 dcb vt off 4 pfc off".
This patch adds a check for support of reporting DCB info.
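The added check is essentially (a sketch with a hypothetical helper name, testpmd-style):
static int
check_dcb_info_support(portid_t pid)
{
	struct rte_eth_dcb_info dcb_info;
	int ret;

	ret = rte_eth_dev_get_dcb_info(pid, &dcb_info);
	if (ret == -ENOTSUP) {
		printf("Device of port %u does not support DCB info\n", pid);
		return -1;
	}
	return 0;
}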
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:42 +0000 (14:40 +0800)]
app/testpmd: fix DCB re-configuration
After DCB mode is configured, if we decrease the number of Rx and Tx
queues, fwd_config_setup() is called to set up the DCB forwarding
configuration. The forwarding streams are updated based on the new queue
numbers in fwd_config_setup(), but the mapping between TCs and
queues obtained by rte_eth_dev_get_dcb_info() still uses the old queue
numbers (which are greater than the new queue numbers).
In this case, a segmentation fault happens. So rte_eth_dev_configure()
should be called again to update the mapping between TCs and
queues before rte_eth_dev_get_dcb_info(), as in the sketch after the
command sequence below.
For example:
set nbcore 4
port stop all
port config 0 dcb vt off 4 pfc on
port start all
port stop all
port config all rxq 8
port config all txq 8
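Sketch of the fix (testpmd names, simplified): re-apply the configuration before querying DCB info so the TC-to-queue mapping matches the new queue counts.
static int
reconfig_dcb_port_sketch(portid_t pid, queueid_t nb_rxq, queueid_t nb_txq,
		struct rte_eth_dcb_info *dcb_info)
{
	struct rte_port *port = &ports[pid];
	int ret;

	/* rebuild the TC-to-queue mapping for the new queue numbers */
	ret = rte_eth_dev_configure(pid, nb_rxq, nb_txq, &port->dev_conf);
	if (ret < 0)
		return ret;

	return rte_eth_dev_get_dcb_info(pid, dcb_info);
}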
Fixes:
900550de04a7 ("app/testpmd: add dcb support")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:41 +0000 (14:40 +0800)]
app/testpmd: fix DCB forwarding configuration
After DCB mode is configured, the port stop and port start operations
change the value of the global variable "dcb_test". As a result, the
forwarding configuration falls back from DCB to RSS mode, namely from
"dcb_fwd_config_setup()" to "rss_fwd_config_setup()".
Currently, the 'dcb_flag' field in struct 'rte_port' indicates whether
the port is configured with DCB, and it is sufficient to have
'dcb_config' as a global variable to control the DCB test status. So
this patch deletes "dcb_test".
In addition, 'dcb_config' is now set at the end of init_port_dcb_config()
in case ports fail to enter DCB mode.
Fixes:
900550de04a7 ("app/testpmd: add dcb support")
Fixes:
ce8d561418d4 ("app/testpmd: add port configuration settings")
Fixes:
7741e4cf16c0 ("app/testpmd: VMDq and DCB updates")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Huisong Li [Wed, 28 Apr 2021 06:40:40 +0000 (14:40 +0800)]
app/testpmd: fix forward lcores number for DCB
For the DCB forwarding test, each core is assigned to one traffic class.
The number of forwarding cores for the DCB test must be less than or equal
to the total number of TCs. Otherwise, the following problems may occur:
1/ Redundant polling threads are created when the number of forwarding
cores is greater than the total TC number.
2/ Two cores would try to use the same queue on a port when the Rx/Tx
queue number is greater than the used TC number, which is not allowed.
Fixes:
900550de04a7 ("app/testpmd: add dcb support")
Fixes:
ce8d561418d4 ("app/testpmd: add port configuration settings")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Jiawen Wu [Thu, 29 Apr 2021 10:33:35 +0000 (18:33 +0800)]
net/txgbe: add copyright owner
All rights reserved by Beijing Wangxun Technology Co., Ltd.
Part of the code references Intel.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Jiawen Wu [Thu, 29 Apr 2021 10:33:34 +0000 (18:33 +0800)]
net/txgbe: remove port representor
Remove port representor in device probe process, because it is not
supported by the driver yet.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 29 Apr 2021 10:33:33 +0000 (18:33 +0800)]
net/txgbe: support VXLAN-GPE
Support VXLAN-GPE in UDP tunnel port add and delete.
Fix packet type parsing so that the hardware checksum passes.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Jiawen Wu [Thu, 29 Apr 2021 10:33:32 +0000 (18:33 +0800)]
net/txgbe: fix MTU limitation for VF
When the requested MTU is bigger than the mbuf size and scattered Rx is
not enabled, setting the MTU fails for a VF.
But scattered Rx can be enabled in the next port start if required, so
allow setting an MTU bigger than the mbuf size if the device is stopped,
independent of the scattered Rx configuration.
Fixes:
a2beaa4a769e ("net/txgbe: support VF MTU update")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Dapeng Yu [Thu, 29 Apr 2021 07:06:14 +0000 (15:06 +0800)]
net/softnic: fix meter policies initialization
Initialize the meter policy list before use to avoid a segmentation fault.
Fixes:
0d73ddf25faa ("net/softnic: add meter profile")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Jeff Guo [Tue, 27 Apr 2021 07:17:41 +0000 (15:17 +0800)]
maintainers: update for e1000/igc/ixgbe/i40e
Remove Jeff Guo from the maintainers list of igc, i40e, ixgbe & e1000
PMD.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Min Hu (Connor) [Tue, 27 Apr 2021 02:08:45 +0000 (10:08 +0800)]
net/kni: warn on stop failure
The return value of the function 'eth_kni_dev_stop' stored in 'ret' is
overwritten later, which is unreasonable.
This patch fixes it by warning when the stop fails.
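The fix boils down to (sketch; the log macro follows the driver's convention and the surrounding close routine is abbreviated):
static int
eth_kni_close(struct rte_eth_dev *eth_dev)
{
	int ret;

	ret = eth_kni_dev_stop(eth_dev);
	if (ret)
		PMD_LOG(WARNING, "Not able to stop kni for port%u: %s",
			eth_dev->data->port_id, rte_strerror(-ret));

	/* ... resource release continues regardless ... */
	return 0;
}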
Fixes:
62024eb82756 ("ethdev: change stop operation callback to return int")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Chengchang Tang [Tue, 27 Apr 2021 00:54:22 +0000 (08:54 +0800)]
net/tap: check ioctl on restore
After restoring the remote state, the return value of ioctl() is not
checked. Therefore, users cannot know whether the remote state was
restored successfully.
This patch adds a log message on restore failure.
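Roughly (a sketch; the pmd field names are assumptions about the tap PMD internals, not exact):
static void
tap_restore_remote_sketch(struct pmd_internals *pmd)
{
	/* restore the remote interface flags saved at probe time */
	if (ioctl(pmd->ioctl_sock, SIOCSIFFLAGS, &pmd->remote_initial_flags) < 0)
		TAP_LOG(ERR, "Failed to restore remote state of %s: %s",
			pmd->remote_iface, strerror(errno));
}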
Fixes:
4810d3af8343 ("net/tap: restore state of remote device when closing")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Min Hu (Connor) [Mon, 26 Apr 2021 11:57:57 +0000 (19:57 +0800)]
app/testpmd: fix division by zero on socket memory dump
The variable 'total' may be zero, which results in a division by zero and
a segmentation fault.
This patch fixes it.
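The guard is simply (illustrative helper, not the exact testpmd function):
#include <stdio.h>
#include <stddef.h>

static void
dump_socket_mem_sketch(int socket_id, size_t total, size_t alloc)
{
	if (total == 0)
		return;	/* nothing on this socket: avoid dividing by zero */

	printf("Socket %d: total %zu, allocated %zu (%.2f%%)\n",
	       socket_id, total, alloc,
	       (double)alloc * 100.0 / (double)total);
}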
Fixes:
9b1249d9ff69 ("app/testpmd: support dumping socket memory")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Hongbo Zheng [Sun, 25 Apr 2021 12:54:29 +0000 (20:54 +0800)]
net/txgbe: fix null pointer check
In the function cons_parse_ntuple_filter, item->spec and item->mask
should both be confirmed non-null before using memcmp on them; the
current check (item->spec || item->mask) only confirms that one of them
is non-null, which can cause a null pointer to be passed to memcmp.
This patch fixes the problem.
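The corrected condition, expressed as a small illustrative helper (hypothetical name):
#include <stdbool.h>
#include <rte_flow.h>

static bool
item_spec_and_mask_usable(const struct rte_flow_item *item)
{
	/* the old test (item->spec || item->mask) let one pointer be NULL
	 * and still reach memcmp(); require both before dereferencing */
	return item->spec != NULL && item->mask != NULL;
}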
Fixes:
b7eeecb17556 ("net/txgbe: parse n-tuple filter")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Jiawen Wu <jiawenwu@trustnetic.com>
Huisong Li [Sun, 25 Apr 2021 12:06:29 +0000 (20:06 +0800)]
net/hns3: fix link speed when port is down
When the port is in link-down state, it is meaningless to display the
port link speed; it should be reported as an undefined state.
Fixes:
59fad0f32135 ("net/hns3: support link update operation")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Sun, 25 Apr 2021 12:06:28 +0000 (20:06 +0800)]
net/hns3: fix link status when port is stopped
When a port is stopped, link down should be reported to the user. For the
HNS3 PF driver, link status comes from the link status of the hardware. If
the port supports the NCSI feature, the hardware MAC will not be disabled.
In this case, even if the port is stopped, the link status is still up. So
the driver should set link down when the port is stopped.
Fixes:
59fad0f32135 ("net/hns3: support link update operation")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Tal Shnaiderman [Wed, 21 Apr 2021 16:34:41 +0000 (19:34 +0300)]
net/mlx5: support checksum offload on Windows
Support checksum offloading by checking
the relevant FW capability (csum_cap) for NIC support.
RX supported offloads:
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
TX supported offloads:
DEV_TX_OFFLOAD_IPV4_CKSUM
DEV_TX_OFFLOAD_UDP_CKSUM
DEV_TX_OFFLOAD_TCP_CKSUM
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Odi Assli <odia@nvidia.com>
Tal Shnaiderman [Wed, 21 Apr 2021 16:34:40 +0000 (19:34 +0300)]
common/mlx5: read checksum capability from DevX
mlx5 on Windows needs the HCA capability csum_cap
to query the NIC for checksum offloading support.
Add the capability to the set of capabilities
queried by the PMD using DevX.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Odi Assli <odia@nvidia.com>
Tal Shnaiderman [Wed, 21 Apr 2021 16:34:39 +0000 (19:34 +0300)]
net/mlx5: fix unsupported offloads disablement
mlx5 offloads which are unsupported on Windows
are currently disabled by checks with IBV/DV flags
which are irrelevant to Windows.
The checks are removed until they are fully available.
Fixes:
93f4ece91a1f ("net/mlx5: spawn ethdev ports on Windows")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Odi Assli <odia@nvidia.com>
Viacheslav Ovsiienko [Wed, 21 Apr 2021 08:10:13 +0000 (08:10 +0000)]
net/mlx5: fix probing device in legacy bonding mode
If the device was configured as a legacy bond (without
involving E-Switch), the mlx5 PMD erroneously tried to deduce
the vport index, raising a fatal error and preventing the
device from being used.
The patch checks whether the E-Switch is present and the vport index
should indeed be used.
Fixes:
2eb4d0107acc ("net/mlx5: refactor PCI probing on Linux")
Fixes:
d5c06b1b10ae ("net/mlx5: query vport index match mode and parameters")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Viacheslav Ovsiienko [Sat, 17 Apr 2021 17:14:14 +0000 (20:14 +0300)]
net/mlx4: fix buffer leakage on device close
The mlx4 PMD tracks the buffers (mbufs) of the packets being
transmitted in a dedicated array named "elts". The tx_burst
routine frees an mbuf from this array once it needs to rearm
the hardware descriptor and store a new mbuf, which then replaces the
old mbuf pointer in the elts array.
On device stop, the mlx4 PMD freed only the part of elts between the
tail and head pointers, leaking the rest of the buffers remaining in
the elts array.
Fixes:
a2ce2121c01c ("net/mlx4: separate Tx configuration functions")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Viacheslav Ovsiienko [Mon, 5 Apr 2021 10:07:16 +0000 (10:07 +0000)]
net/mlx5: remove drop queue function prototypes
There are some leftovers of removed code - there are
no drop queue handling routines anymore.
Fixes:
78be885295b8 ("net/mlx5: handle drop queues as regular queues")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Li Zhang [Tue, 27 Apr 2021 10:41:34 +0000 (13:41 +0300)]
net/mlx5: support meter PPS profile
Currently, the meter algorithms only support byte units for meter profiles.
Using the ASO feature, the driver can support metering in per-packet units.
Add support for packet units in meter profiles.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Shun Hao [Tue, 27 Apr 2021 10:43:54 +0000 (13:43 +0300)]
net/mlx5: connect meter policy to created flows
Currently an ASO meter must be followed by a policy table, so this patch
adds support for connecting the meter to the policy table.
There are several cases to be considered:
1. For a non-termination policy, connect the meter to the default policy
table.
2. For the non-RSS termination policy case, simply get the policy
table id and connect the meter to it.
3. For the RSS termination policy case, the flow needs to be split due
to the RSS info in the policy; translate each sub-flow using that RSS,
then create the sub policy table to be connected.
4. In the termination policy case, if there are no actions modifying the
packet before the meter, there is no need to use set_tag to save the
meter id in a register. Only add a new flow in the drop table using the
same match criteria as the suffix flow, to save a cache miss.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Li Zhang [Tue, 27 Apr 2021 10:43:53 +0000 (13:43 +0300)]
net/mlx5: prepare sub-policy for flow with meter
When a flow has an RSS action, the driver splits the flow, and each sub
flow is finally configured with a different HW TIR action.
Any RSS action configured in a meter policy may cause
a split in the flow configuration.
To save performance, any TIR action will be configured
in a different flow table, so the policy can be split into
sub-policies per TIR at flow creation time.
Create a function to prepare the policy and
its sub-policies for a configured flow with meter.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Li Zhang [Tue, 27 Apr 2021 10:43:52 +0000 (13:43 +0300)]
net/mlx5: support meter creation with policy
Create a meter with the new pre-defined policy.
The following cases are considered:
1. Add an entry matching the meter_id in the global drop table.
2. For a non-termination policy (policy id 0),
add a jump rule to the suffix table for green and
a jump rule to the drop table for red.
3. Allocate a counter per meter in the drop table.
4. Allocate meter resources per domain per color.
5. It can work with both ASO and legacy meter HW objects.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Li Zhang [Tue, 27 Apr 2021 10:43:51 +0000 (13:43 +0300)]
net/mlx5: support meter policy operations
The MLX5 PMD validates the actions in the policy while adding
a new meter policy; if validation passes, it allocates the new
policy object from the meter policy indexed memory pool.
It is common to use the same policy for multiple meters.
The MLX5 PMD supports two types of policy: termination policy and
non-termination policy.
Implement the next policy operations:
validate:
The driver doesn't support configuring actions in the flow
after the meter action, except one case: when the meter policy
is configured to do nothing for GREEN\YELLOW and only a DROP action
for RED. This special policy is called the non-terminated policy
and is handled as a singleton object internally.
For all the terminated policies, the next actions are supported:
GREEN - QUEUE, RSS, PORT_ID, JUMP, DROP, MARK and SET_TAG.
YELLOW - not supported at all -> must be empty.
RED - must include a DROP action.
Hence, in the ingress case, for example,
QUEUE\RSS\JUMP must be configured as the last action for the GREEN color.
All the above limitations are validated.
create:
Validate the policy configuration.
Prepare the related tables and actions.
destroy:
Release the created policy resources.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Stephen Hemminger [Fri, 23 Apr 2021 21:04:44 +0000 (14:04 -0700)]
net/bnxt: use prefix on global function
When statically linked, the function prandom_bytes is exposed
and might conflict with a symbol in the application. All driver
functions should use the same prefix.
Fixes:
9738793f28ec ("net/bnxt: add VNIC functions and structs")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Fri, 23 Apr 2021 05:19:29 +0000 (10:49 +0530)]
net/bnxt: remove unused function parameters
1. Clean up unused function parameters.
2. Declare functions with no external references as static and remove
their prototypes from the header file.
Fixes:
ec77c6298301 ("net/bnxt: add stats context allocation")
Fixes:
200b64ba0be8 ("net/bnxt: free statistics context")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Thu, 22 Apr 2021 04:12:00 +0000 (09:42 +0530)]
net/bnxt: remove unnecessary forward declarations
This patch removes several redundant forward declarations of
functions and structure.
Fixes:
0b42b92ae429 ("net/bnxt: fix xstats by id")
Fixes:
cf4f055a6578 ("net/bnxt: remove EEM system memory support")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Stephen Hemminger [Wed, 21 Apr 2021 23:09:29 +0000 (16:09 -0700)]
net/bnxt: skip get statistics for stopped queues
An application using rte_flow may define a large number of queues
but only use a small subset of them at any one time.
Since querying the status of each queue requires a request/spin/reply
with the firmware, optimize by skipping the request for queues not
running.
For those queues the statistics will be 0.
This cuts the cost of a single xstats query in half and has an even
bigger gain for a simple stats query.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Thierry Herbelot [Fri, 23 Apr 2021 12:25:15 +0000 (14:25 +0200)]
net/virtio: fix kernel set memtable for multi-queue device
Restore the original code, where VHOST_SET_MEM_TABLE is applied to
all vhostfds of the device.
Fixes:
539d910c9c76 ("net/virtio: add virtio-user memory tables ops")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thomas Monjalon [Wed, 21 Apr 2021 17:59:23 +0000 (19:59 +0200)]
vdpa/mlx5: improve portability of thread naming
The function pthread_setname_np is non-portable,
so it may be unavailable in old glibc or on other systems.
The function rte_thread_setname works around these portability issues.
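Sketch of the portable call (the thread name string and logging are illustrative):
#include <pthread.h>
#include <rte_lcore.h>	/* rte_thread_setname() */

static void
set_timer_thread_name(pthread_t tid)
{
	/* degrades gracefully where pthread_setname_np() is unavailable */
	if (rte_thread_setname(tid, "vdpa-mlx5-timer"))
		DRV_LOG(DEBUG, "Cannot set timer thread name.");
}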
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Chengwen Feng [Wed, 21 Apr 2021 01:37:49 +0000 (09:37 +0800)]
net/virtio: fix getline memory leakage
This patch fixes a getline() memory leak when parsing the dynamic major number.
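Sketch of the corrected parsing loop: getline() allocates the buffer, so it must be freed once parsing is done.
#include <stdio.h>
#include <stdlib.h>

static int
parse_major_num_sketch(FILE *fp)
{
	char *line = NULL;
	size_t size = 0;
	int major = -1;

	while (getline(&line, &size, fp) > 0) {
		/* ... look for the device entry and read its major number ... */
	}

	free(line);	/* getline() allocated it; this free was missing */
	return major;
}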
Fixes:
7d62bf6f54ba ("net/virtio: introduce vhost-vDPA backend type")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Balazs Nemeth [Fri, 16 Apr 2021 10:25:19 +0000 (12:25 +0200)]
vhost: allocate and free packets in bulk in Tx packed
Move allocation out further and perform all allocation in bulk. The same
goes for freeing packets. In the process, also introduce
virtio_dev_pktmbuf_prep and make virtio_dev_pktmbuf_alloc use that.
Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Balazs Nemeth [Tue, 13 Apr 2021 13:31:03 +0000 (15:31 +0200)]
vhost: remove remaining packets count
The 'remained' variable stores the same information as the difference
between count and pkt_idx. Remove the 'remained' variable to simplify the code.
Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Balazs Nemeth [Wed, 28 Apr 2021 02:17:31 +0000 (10:17 +0800)]
vhost: read last used index once
Instead of calculating the address of a packed descriptor from
vq->desc_packed and vq->last_used_idx every time, store that base
address in desc_base. On Arm, this saves 176 bytes in the code size of
the function into which vhost_flush_enqueue_batch_packed gets inlined.
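The gist of the change, as a small illustrative helper (array names are assumptions):
static void
flush_used_batch_sketch(struct vhost_virtqueue *vq,
		const uint16_t *ids, const uint32_t *lens)
{
	/* read last_used_idx once and cache the descriptor base address */
	struct vring_packed_desc *desc_base =
		&vq->desc_packed[vq->last_used_idx];
	uint16_t i;

	for (i = 0; i < PACKED_BATCH_SIZE; i++) {
		desc_base[i].id  = ids[i];
		desc_base[i].len = lens[i];
	}
}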
Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cheng Jiang [Wed, 17 Mar 2021 05:40:54 +0000 (05:40 +0000)]
examples/vhost: fix ioat ring space in callbacks
We use the ioat ring space to determine whether the ioat callbacks can
enqueue a packet to the ioat device. But one slot in the ioat ring can't
be used due to the ioat driver design, so we need to subtract one slot
from the ioat ring size to prevent a ring size mismatch in the ioat
callbacks.
Fixes:
2aa47e94bfb2 ("examples/vhost: add ioat ring space count and check")
Cc: stable@dpdk.org
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jiayu Hu [Tue, 20 Apr 2021 08:57:46 +0000 (04:57 -0400)]
doc: update async vhost register/unregister
This patch updates the programmer's guide for registering/unregistering
copy devices in vhost.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jiayu Hu [Tue, 20 Apr 2021 08:57:45 +0000 (04:57 -0400)]
vhost: fix redundant vring status change notification
When VHOST_USER_F_PROTOCOL_FEATURES is not negotiated,
there is no need for vhost_user_set_vring_kick() to
notify the application of vring enabled, as
vhost_user_msg_handler() also notifies the application.
This patch is to remove unnecessary vring_state_changed() call.
Fixes:
d0fcc38f5fa4 ("vhost: improve device readiness notifications")
Cc: stable@dpdk.org
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Jiayu Hu [Tue, 20 Apr 2021 08:57:44 +0000 (04:57 -0400)]
vhost: remove unnecessary free
This patch removes unnecessary rte_free() for async_pkts_info
and async_descs_split.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Jiayu Hu [Tue, 20 Apr 2021 08:57:43 +0000 (04:57 -0400)]
vhost: fix queue initialization
This patch allocates vhost queue by rte_zmalloc() to avoid
undefined values.
Fixes:
a277c7159876 ("vhost: refactor code structure")
Cc: stable@dpdk.org
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Yuying Zhang [Mon, 26 Apr 2021 07:17:49 +0000 (07:17 +0000)]
net/ice/base: clean duplicate in finding GTPU dummy packet
Four GTPU tunnel types are used twice to find GTPU dummy packets
(ipv4_gtpu_ipv4/ipv6, ipv6_gtpu_ipv4/ipv6). Clean up the redundant code.
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Min Hu (Connor) [Tue, 27 Apr 2021 08:51:21 +0000 (16:51 +0800)]
net/e1000: fix flow error message object
This patch fixes parameter misuse when setting the rte_flow action error.
Fixes:
c0688ef1eded ("net/igb: parse flow API n-tuple filter")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Anatoly Burakov [Mon, 26 Apr 2021 13:49:12 +0000 (13:49 +0000)]
net/i40e: support power management on VF
When the .get_monitor_addr API was introduced, it was implemented in the
i40e driver, but only for the physical function; the virtual function
portion of the driver does not support that API.
Add the missing function pointer to the VF device structure.
The i40e driver is not meant to be used for VFs any more, as
i40e VF devices are currently supposed to be managed by the iavf driver,
but add this just in case it needs backporting later.
Fixes:
a683abf90a22 ("net/i40e: implement power management API")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Anatoly Burakov [Mon, 26 Apr 2021 13:49:11 +0000 (13:49 +0000)]
net/ixgbe: support power management on VF
When .get_monitor_addr API was introduced, it was implemented in the
ixgbe driver, but only for the physical function; the virtual function
portion of the driver does not support that API.
Add the missing function pointer to VF device structure.
Fixes:
3982b7967bb7 ("net/ixgbe: implement power management API")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Wenzhuo Lu [Tue, 27 Apr 2021 02:24:28 +0000 (10:24 +0800)]
net/iavf: fix Tx L4 checksum
Following the behavior of the scalar path, packet preparation is
necessary for the vector paths which support checksum offload.
Fixes:
059f18ae2aec ("net/iavf: add offload path for Tx AVX512")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Yuying Zhang [Mon, 26 Apr 2021 06:02:08 +0000 (06:02 +0000)]
net/ice/base: fix inner L4 offset for GTPU dummy packet
Fix inner L4 offset of ipv6_gtpu_ipv6_tcp/udp dummy packet.
Fixes:
bd4d9a89dbc1 ("net/ice/base: add GTP filtering via advanced switch filter")
Cc: stable@dpdk.org
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Qi Zhang [Sat, 24 Apr 2021 06:03:37 +0000 (14:03 +0800)]
common/iavf: use macro to define offload/capability
Currently, raw hex values are used to define specific bits for each
offload/capability in virtchnl.h. This can, and has, led to duplicate
bit definitions. Fix this by using the BIT() macro so it is
immediately obvious which bits are used/available.
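Illustrative before/after (the bit position shown is only an example):
#ifndef BIT
#define BIT(x)	(1U << (x))
#endif

/* before: #define VIRTCHNL_VF_OFFLOAD_L2  0x00000001  (raw hex, easy to duplicate) */
#define VIRTCHNL_VF_OFFLOAD_L2	BIT(0)	/* after: the bit position is explicit */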
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Qi Zhang [Sat, 24 Apr 2021 06:03:36 +0000 (14:03 +0800)]
common/iavf: refine comment in virtual channel
General clean-up of comments in virtchnl.
Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Qi Zhang [Sat, 24 Apr 2021 06:03:35 +0000 (14:03 +0800)]
common/iavf: add enumeration for Rx descriptor ID
Support for allowing VFs to negotiate the descriptor format was added
previously.
This support requires that the VF specify which descriptor format to use
when requesting Rx queues. The VF is supposed to request the set of
supported formats via the new VIRTCHNL_OP_GET_SUPPORTED_RXDIDS, and then
set one of the supported formats in the rxdid field of the
virtchnl_rxq_info structure.
The virtchnl.h header does not provide an enumeration of the format
values. The existing implementations in the PF directly use the values
from the DDP package.
Make the formats explicit by defining an enumeration of the RXDIDs.
Provide an enumeration for the values as well as the bit positions as
returned by the supported_rxdids data from the
VIRTCHNL_OP_GET_SUPPORTED_RXDIDS.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Qi Zhang [Sat, 24 Apr 2021 06:03:34 +0000 (14:03 +0800)]
common/iavf: fix duplicated offload bit
The value of the VIRTCHNL_VF_OFFLOAD_CRC offload bit already existed as
VIRTCHNL_VF_CAP_ADV_LINK_SPEED. Fix this by changing the value of
VIRTCHNL_VF_OFFLOAD_CRC to a currently unused value.
Also, move the define for VIRTCHNL_VF_CAP_ADV_LINK_SPEED to the correct
place to line up with the other bit values and add a comment for its
purpose. Hopefully this will prevent duplicate bits from being defined
moving forward.
Fixes:
e244eeafcecb ("net/iavf/base: update virtual channel")
Cc: stable@dpdk.org
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>