Anoob Joseph [Fri, 21 Sep 2018 06:20:42 +0000 (11:50 +0530)]
doc: add cryptodev features
Add 3DES-ECB and AES-XTS to the list of supported ciphers.
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Hemant Agrawal [Thu, 30 Aug 2018 05:51:05 +0000 (11:21 +0530)]
crypto/dpaa_sec: support null algos for protocol offload
NULL cipher and NULL auth support is added.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Hemant Agrawal [Thu, 30 Aug 2018 05:51:04 +0000 (11:21 +0530)]
crypto/dpaa2_sec: restructure session management
The session creation code is restructured to make it scalable for
supporting different algorithms.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Hemant Agrawal [Thu, 30 Aug 2018 05:51:03 +0000 (11:21 +0530)]
crypto/dpaa2_sec: support out of place protocol offload
Out-of-place (OOP) support for lookaside protocol offload is added.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Thu, 30 Aug 2018 05:51:02 +0000 (11:21 +0530)]
crypto/dpaa2_sec: enable sequence no rollover
With this patch, the sequence number will be rolled over and the
SEC block will ignore the sequence number overflow error.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Thu, 30 Aug 2018 05:51:01 +0000 (11:21 +0530)]
crypto/dpaa_sec: enable sequence no rollover
With this patch, the sequence number will be rolled over and the
SEC block will ignore the sequence number overflow error.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Hemant Agrawal [Thu, 30 Aug 2018 05:51:00 +0000 (11:21 +0530)]
crypto/dpaa_sec: reduce number of qp per device
In dpaa_sec, the hardware queues are unlimited. In order to match
the DPDK handling of queue pairs, the Rx-side queues remain unlimited,
but the application will see only a limited number of queue pairs
(Tx queues) from the dpaa_sec hardware.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Thu, 30 Aug 2018 05:50:59 +0000 (11:20 +0530)]
crypto/dpaa_sec: do not attach session for non-matching qp
If session->qp does not match the qp on which the operation is being
enqueued, report an error instead of trying to re-attach another qp.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Thu, 30 Aug 2018 05:50:58 +0000 (11:20 +0530)]
crypto/dpaa_sec: add lock before Rx HW queue attach
This is a safeguard, as session configuration can be done from multiple threads.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Thu, 30 Aug 2018 05:50:57 +0000 (11:20 +0530)]
crypto/dpaa: reset session before init
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Hemant Agrawal [Thu, 30 Aug 2018 05:50:56 +0000 (11:20 +0530)]
crypto/dpaa: update the flib RTA
The HW flib (RTA) code is updated as per the latest set of APIs and macros.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Pablo de Lara [Tue, 14 Aug 2018 00:54:30 +0000 (01:54 +0100)]
crypto/aesni_gcm: support all truncated digest sizes
The full digest size of GCM/GMAC algorithms is 16 bytes.
However, it is sometimes truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:53:30 +0000 (01:53 +0100)]
crypto/aesni_gcm: remove unneeded J0 calculation
When IV size is 12, padding to 16 bytes is required
and the LSB must be set to 1, according to the spec.
However, the Multi-buffer library is already doing this,
so it is not necessary to do it in the PMD.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:38:48 +0000 (01:38 +0100)]
crypto/aesni_mb: support large HMAC key sizes
Add support for SHAx-HMAC key sizes larger than the block size.
For these sizes, the input key is digested with the non-HMAC
version of the algorithm and used as the key.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:38:47 +0000 (01:38 +0100)]
crypto/aesni_mb: support all truncated CMAC digest sizes
The full digest size of CMAC algorithm is 16 bytes.
However, it is sometimes truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:38:46 +0000 (01:38 +0100)]
crypto/aesni_mb: fix truncated digest size for CMAC
The truncated digest size for AES-CMAC is 12 and not 16,
as the Multi-buffer library can output both 12 and 16 bytes.
Fixes: 6491dbbecebb ("crypto/aesni_mb: support AES CMAC")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:38:45 +0000 (01:38 +0100)]
crypto/aesni_mb: check for invalid digest size
When creating a crypto session, check if
the requested digest size is supported for
AES-XCBC-MAC and AES-CCM.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Tue, 14 Aug 2018 00:38:44 +0000 (01:38 +0100)]
crypto/aesni_mb: support all truncated HMAC digest sizes
HMAC algorithms (MD5 and SHAx) have different full digest sizes.
However, they are often truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Pablo de Lara [Thu, 2 Aug 2018 04:49:40 +0000 (05:49 +0100)]
crypto/aesni_mb: fix possible array overrun
In order to process crypto operations in the AESNI MB PMD,
they need to be sent to the buffer manager of the Multi-buffer library,
through the "job" structure.
Currently, it is checked if there are outstanding operations to process
in the ring, before getting a new job. However, if there are no available
jobs in the manager, a flush operation needs to take place, freeing some
of the jobs, so that one can be used for the outstanding operation.
In order to avoid leaving the dequeued operation without being processed,
the maximum number of operations that can be flushed is the remaining
operations to return, which is the maximum number of operations that can
be returned minus the number of operations ready to be returned
(nb_ops - processed_jobs), minus 1 (for the new operation).
The problem comes when (nb_ops - processed_jobs) is 1 (last operation to
dequeue). In that case, flush_mb_mgr is called with maximum number of
operations equal to 0, which is wrong, causing a potential overrun in the
"ops" array.
Besides, the operation dequeued from the ring will be leaked, as no more
operations can be returned.
The solution is to first check if there are jobs available in the manager.
If there are none, the flush operation is called, and if enough operations
are returned from the manager, then no more outstanding operations get
dequeued from the ring, avoiding both the memory leak and the array
overrun.
If there are enough jobs, the PMD tries to dequeue an operation from the ring.
If there are no operations in the ring, the new job pointer is not used,
and it will be used in the next get_next_job call, so no memory leak
happens.
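A minimal sketch of the reordered dequeue flow described above (helper and
field names are illustrative, not the exact PMD symbols):

    while (processed_jobs < nb_ops) {
            /* First make sure a free job is available in the manager. */
            job = get_next_job(qp->mb_mgr);
            if (job == NULL) {
                    /* No free job: flush completed jobs instead of
                     * dequeuing a new operation that could be leaked. */
                    processed_jobs += flush_mb_mgr(qp,
                                    ops + processed_jobs,
                                    nb_ops - processed_jobs);
                    if (processed_jobs == nb_ops)
                            break;
                    continue;
            }

            /* A job is guaranteed here, so dequeuing an op is safe. */
            if (rte_ring_dequeue(qp->ingress_queue, (void **)&op) != 0)
                    break;

            /* ... fill the job from op, submit, collect returned jobs ... */
    }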
Fixes: 0f548b50a160 ("crypto/aesni_mb: process crypto op on dequeue")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Anoob Joseph [Mon, 10 Sep 2018 06:40:58 +0000 (12:10 +0530)]
app/test-crypto-perf: fix double allocation of memory
The field 'cipher_iv.data' is allocated twice when the cipher is not null.
Ideally the allocation should depend only on the field
'cperf_options.cipher_iv_sz'. This will make sure this code path stays
valid for ciphers which do not require an IV.
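A hedged sketch of the allocation guard implied by the description above
(variable names are illustrative):

    /* Allocate the cipher IV buffer only when an IV size is configured,
     * regardless of whether the cipher itself is NULL. */
    if (options->cipher_iv_sz) {
            test_vector->cipher_iv.data = rte_malloc(NULL,
                            options->cipher_iv_sz, 16);
            if (test_vector->cipher_iv.data == NULL)
                    goto error;
            test_vector->cipher_iv.length = options->cipher_iv_sz;
    }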
Fixes: 0fbd75a99fc9 ("cryptodev: move IV parameters to session")
Cc: stable@dpdk.org
Signed-off-by: Akash Saxena <akash.saxena@caviumnetworks.com>
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Anoob Joseph [Fri, 14 Sep 2018 09:24:48 +0000 (14:54 +0530)]
app/test-crypto-perf: fix check for cipher IV
An IV is not required for all ciphers. Make sure the null check is done
only when 'cipher_iv_sz' is non-zero.
Fixes: f8be1786b1b8 ("app/crypto-perf: introduce performance test application")
Cc: stable@dpdk.org
Signed-off-by: Akash Saxena <akash.saxena@caviumnetworks.com>
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Anoob Joseph [Fri, 14 Sep 2018 09:24:47 +0000 (14:54 +0530)]
app/test-crypto-perf: fix check for auth key
An authentication key is not required for all algorithms. Make sure the
null check is done only when 'auth_key_sz' is non-zero.
Fixes: f8be1786b1b8 ("app/crypto-perf: introduce performance test application")
Cc: stable@dpdk.org
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Signed-off-by: Ayuj Verma <ayuj.verma@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Anoob Joseph [Fri, 14 Sep 2018 09:24:46 +0000 (14:54 +0530)]
app/test-crypto-perf: add checks for AEAD key
Adding validation checks for AEAD key.
Signed-off-by: Akash Saxena <akash.saxena@caviumnetworks.com>
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Anoob Joseph [Fri, 7 Sep 2018 05:55:26 +0000 (11:25 +0530)]
examples/ipsec-secgw: fix wrong session size
Crypto devices which support lookaside protocol expose a security
session size in addition to the crypto private symmetric session data
size. For applications using the security capabilities, both of these
sizes need to be considered.
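A hedged sketch of the sizing logic this implies, using public DPDK APIs
(the surrounding variable names are illustrative):

    #include <rte_common.h>
    #include <rte_cryptodev.h>
    #include <rte_security.h>

    /* Size the session mempool for the larger of the two footprints. */
    void *sec_ctx = rte_cryptodev_get_sec_ctx(cdev_id);
    unsigned int crypto_sz =
            rte_cryptodev_sym_get_private_session_size(cdev_id);
    unsigned int sec_sz = (sec_ctx != NULL) ?
            rte_security_session_get_size(sec_ctx) : 0;
    unsigned int sess_sz = RTE_MAX(crypto_sz, sec_sz);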
Fixes: ec17993a145a ("examples/ipsec-secgw: support security offload")
Cc: stable@dpdk.org
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Signed-off-by: Archana Muniganti <muniganti.archana@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Anoob Joseph [Thu, 6 Sep 2018 03:10:28 +0000 (08:40 +0530)]
examples/ipsec-secgw: increase number of dev mappings
Increase the number of cdev mappings to accommodate crypto devices
with a larger number of capabilities, used across a higher number of cores.
Required mappings: ([no of ciphers] * [no of auth] + [aead algos]) *
[no of cores]
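For example (illustrative figures only): a crypto device exposing 4 ciphers,
3 auth algorithms and 2 AEAD algorithms, used from 8 cores, needs
(4 * 3 + 2) * 8 = 112 mappings.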
Signed-off-by: Ankur Dwivedi <ankur.dwivedi@caviumnetworks.com>
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Tomasz Cel [Wed, 19 Sep 2018 06:27:18 +0000 (08:27 +0200)]
doc: fix missing CCM to QAT feature list
Update the QAT documentation to show that it supports CCM.
Fixes: ab56c4d9ed9a ("crypto/qat: support AES-CCM")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Cel <tomaszx.cel@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Tomasz Duszynski [Thu, 30 Aug 2018 09:48:29 +0000 (11:48 +0200)]
doc: fix typo for cryptodev
LB stands for 'Linear Buffers'. For the 'scatter-gather list' we have
the SGL acronym.
Fixes: 2717246ecd7d ("cryptodev: replace mbuf scatter gather flag")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Radu Nicolau [Wed, 1 Aug 2018 13:18:43 +0000 (14:18 +0100)]
net/bonding: stop and deactivate slaves on stop
When a bonding port is stopped, also stop and deactivate all slaves.
Otherwise, the slaves will still be listed as active.
Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Damjan Marion [Tue, 25 Sep 2018 08:16:40 +0000 (10:16 +0200)]
net/i40e: fix 25G AOC and ACC cable detection on XXV710
Fixes: 75d133dd3296 ("net/i40e: enable 25G device")
Cc: stable@dpdk.org
Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Beilei Xing [Thu, 27 Sep 2018 02:13:02 +0000 (10:13 +0800)]
net/i40e: remove keeping CRC configuration for VF
Remove the keep CRC configuration since it is not
supported by the i40e VF.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Jiayu Hu [Mon, 17 Sep 2018 03:54:42 +0000 (11:54 +0800)]
vhost: fix corner case for enqueue operation
When performing enqueue operations on the split and packed rings,
if the reserved buffer length from the descriptor table exceeds
65535, the returned length by fill_vec_buf_split/_packed()
overflows. This patch is to avoid this corner case.
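A hedged illustration of the overflow (a standalone example, not the actual
vhost code): a 16-bit length accumulator wraps once the chained descriptor
lengths exceed 65535, so a wider type is needed to hold the reserved length.

    uint16_t len16 = 0;
    uint32_t len32 = 0;

    len16 += 40000;        /* first descriptor                   */
    len16 += 40000;        /* second descriptor: wraps to 14464  */

    len32 += 40000;
    len32 += 40000;        /* 80000, as expected                 */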
Fixes: f689586bc060 ("vhost: shadow used ring update")
Fixes: fd68b4739d2c ("vhost: use buffer vectors in dequeue path")
Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
Fixes: 37f5e79a271d ("vhost: add shadow used ring support for packed rings")
Fixes: a922401f35cc ("vhost: add Rx support for packed ring")
Fixes: ae999ce49dcb ("vhost: add Tx support for packed ring")
Cc: stable@dpdk.org
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tiwei Bie [Fri, 21 Sep 2018 12:52:44 +0000 (20:52 +0800)]
doc: update commands for virtio-user
Update the doc for virtio-user to use the latest testpmd
parameters and commands.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tiwei Bie [Fri, 21 Sep 2018 12:52:43 +0000 (20:52 +0800)]
net/virtio: add missing supported features
The virtio features VIRTIO_NET_F_CSUM, VIRTIO_NET_F_HOST_TSO4
and VIRTIO_NET_F_HOST_TSO6 are supported by the virtio PMD.
But they are missing from the supported feature set. Since commit
4174a7b59d05 ("net/virtio: improve Tx offload features negotiation"),
the virtio PMD will announce the Tx offloading capabilities based
on the features read from the device. And virtio-user won't
report the features which are not in virtio-PMD's supported
feature set. So since that commit, virtio-user won't announce
the DEV_TX_OFFLOAD_UDP_CKSUM, DEV_TX_OFFLOAD_TCP_CKSUM and
DEV_TX_OFFLOAD_TCP_TSO offloading capabilities even if the
vhost backend supports them.
This patch adds these missing features, and virtio-user will
report them if the backend supports them.
Fixes: 142678d42959 ("net/virtio-user: fix wrongly get/set features")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tiwei Bie [Fri, 21 Sep 2018 12:52:42 +0000 (20:52 +0800)]
net/virtio-user: fix multiple queue for vhost-kernel
The multiple queue support in vhost-kernel is broken because
the dev->vhostfd is only available for vhost-user. We should
always try to enable queue pairs when it's not in server mode.
Fixes: 201a41651715 ("net/virtio-user: fix multiple queues fail in server mode")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Nikolay Nikolaev [Mon, 24 Sep 2018 20:17:32 +0000 (23:17 +0300)]
vhost: rework message handling as a callback array
Introduce vhost_message_handlers, which maps the message request
type to the message handler. Then replace the switch construct
with a map and call.
A failure in vhost_user_set_features is fatal: all processing should
stop immediately and the error should be propagated to the upper layers.
Change the code accordingly to reflect that.
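A hedged sketch of the dispatch-table approach (the request constants exist
in vhost_user.h, but the table layout and handler names here are
illustrative):

    typedef int (*vhost_message_handler_t)(struct virtio_net **pdev,
                    struct VhostUserMsg *msg);

    static vhost_message_handler_t vhost_message_handlers[VHOST_USER_MAX] = {
            [VHOST_USER_GET_FEATURES] = vhost_user_get_features,
            [VHOST_USER_SET_FEATURES] = vhost_user_set_features,
            [VHOST_USER_SET_MEM_TABLE] = vhost_user_set_mem_table,
            /* ... one entry per supported request type ... */
    };

    /* In vhost_user_msg_handler(), the switch becomes a lookup and call. */
    if (request < VHOST_USER_MAX && vhost_message_handlers[request] != NULL)
            ret = vhost_message_handlers[request](&dev, &msg);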
Signed-off-by: Nikolay Nikolaev <nicknickolaev@gmail.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Nikolay Nikolaev [Mon, 24 Sep 2018 20:17:25 +0000 (23:17 +0300)]
vhost: unify message handling function signature
Each vhost-user message handling function will return an int result
which is described in the new enum vh_result: error, OK and reply.
All functions will now have two arguments, virtio_net double pointer
and VhostUserMsg pointer.
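A minimal sketch of the unified signature and result values described above
(the exact numeric values are an assumption based on this description):

    enum vh_result {
            VH_RESULT_ERR   = -1,  /* error, stop processing           */
            VH_RESULT_OK    =  0,  /* handled, no reply needed         */
            VH_RESULT_REPLY =  1,  /* handled, reply must be sent back */
    };

    static int
    vhost_user_set_owner(struct virtio_net **pdev, struct VhostUserMsg *msg);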
Signed-off-by: Nikolay Nikolaev <nicknickolaev@gmail.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Nikolay Nikolaev [Mon, 24 Sep 2018 20:17:18 +0000 (23:17 +0300)]
vhost: handle unsupported message types in functions
Add new functions to handle the unsupported vhost message types:
- vhost_user_set_vring_err
- vhost_user_set_log_fd
Signed-off-by: Nikolay Nikolaev <nicknickolaev@gmail.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Nikolay Nikolaev [Mon, 24 Sep 2018 20:17:11 +0000 (23:17 +0300)]
vhost: make message handling functions prepare the reply
As the VhostUserMsg structure is reused to generate the reply, move the
update of the relevant fields into the respective message handling functions.
Signed-off-by: Nikolay Nikolaev <nicknickolaev@gmail.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Nikolay Nikolaev [Mon, 24 Sep 2018 20:17:04 +0000 (23:17 +0300)]
vhost: unify struct VhostUserMsg usage
Do not use the typedef version of struct VhostUserMsg. Also unify the
related parameter name.
Signed-off-by: Nikolay Nikolaev <nicknickolaev@gmail.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Beilei Xing [Wed, 26 Sep 2018 02:44:34 +0000 (10:44 +0800)]
net/avf: remove keeping CRC configuration
Remove the keep CRC configuration as it is not supported by AVF.
Fixes: 5ce4c2be1a64 ("net/avf: fix offload capabilities")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Zyta Szpak [Tue, 25 Sep 2018 07:05:09 +0000 (09:05 +0200)]
net/mvpp2: support Tx scatter/gather
The patch introduces scatter/gather support on transmit path.
A separate Tx callback is added and set if the application
requests the multi-segment Tx offload. Multiple descriptors are
sent per packet.
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yelena Krivosheev <yelena@marvell.com>
Natalie Samsonov [Tue, 25 Sep 2018 07:05:08 +0000 (09:05 +0200)]
net/mvpp2: document MTR and TM usage
Document MTR (metering) and TM (traffic management) usage plus
do some small updates here and there.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:05:05 +0000 (09:05 +0200)]
net/mvpp2: align with MUSDK 18.09
This patch introduces necessary changes required by MUSDK 18.09 library.
* As of MUSDK 18.09, pp2_cookie_t is no longer available. The
RX descriptor cookie is now defined as a plain u64, so the existing cast
is no longer valid.
* MUSDK 18.09 increased the number of available bpools (buffer HW pools) by
introducing DMA regions support. Update the mvpp2 driver accordingly.
* Replace MV_NET_IP4_F_TOS with MV_NET_IP4_F_DSCP.
Before this patch, the API allowed configuring a classification rule
according to IPv4 TOS, which was not supported by the classifier. This patch
fixes this by using the proper field.
* Use a 48-bit address mask.
We cannot get pointers exceeding 48 bits, thus using a 48-bit
mask for extracting the higher IOVA address bits is enough.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Yuval Caduri <cyuval@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Shlomi Gridish <sgridish@marvell.com>
Reviewed-by: Alan Winkowski <walan@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Natalie Samsonov [Tue, 25 Sep 2018 07:05:04 +0000 (09:05 +0200)]
net/mvpp2: update MTU and MRU related calculations
This commit updates MTU and MRU related calculations.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yelena Krivosheev <yelena@marvell.com>
Reviewed-by: Dmitri Epshtein <dima@marvell.com>
Yuval Caduri [Tue, 25 Sep 2018 07:05:03 +0000 (09:05 +0200)]
net/mvpp2: detach Tx QoS from Rx cls/QoS config
Functional change:
Enable receive cls/qos related features only if the
config file contains an rx_related configuration entry.
This allows configuring tx_related entries without unintentionally
enabling rx cls/qos.
Code:
'use_global_defaults' is by default set to '1'.
Only if an rx_related entry was configured, it is updated to '0'.
rx cls/qos is performed only if 'use_global_defaults' is '0'.
Default TC configuration is now only mandatory when
'use_global_defaults' is '0'.
Signed-off-by: Yuval Caduri <cyuval@marvell.com>
Reviewed-by: Natalie Samsonov <nsamsono@marvell.com>
Tested-by: Natalie Samsonov <nsamsono@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:05:02 +0000 (09:05 +0200)]
net/mvpp2: support traffic manager
Add traffic manager support.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:05:01 +0000 (09:05 +0200)]
net/mvpp2: add init and deinit to flow
Add init and deinit functionality to flow implementation.
Init puts structures used by flow in a sane state.
Deinit deallocates all resources used by flow.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Shlomi Gridish <sgridish@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:05:00 +0000 (09:05 +0200)]
net/mvpp2: change default policer configuration
Change QoS configuration file syntax for port's default policer
setup.
Since the default policer configuration is performed before
any other policer configuration, we can pick a default id.
This simplifies the default policer configuration since the user
no longer has to choose ids from the range [0, PP2_CLS_PLCR_NUM].
Explicitly document values for rate_limit_enable field.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:04:59 +0000 (09:04 +0200)]
net/mvpp2: support metering
Add support for configuring plcr via DPDK generic metering API.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tomasz Duszynski [Tue, 25 Sep 2018 07:04:58 +0000 (09:04 +0200)]
net/mvpp2: move common code
Clean up the sources by moving common code to the PMD
header file.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Natalie Samsonov [Tue, 25 Sep 2018 07:04:57 +0000 (09:04 +0200)]
net/mvpp2: initialize ppio only once
This changes the stop/start/configure behavior due to an issue in the MUSDK
library itself. From now on, the ppio can be reconfigured only after the
interface is closed.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yuval Caduri <cyuval@marvell.com>
Paul M Stillwell Jr [Tue, 25 Sep 2018 14:31:09 +0000 (15:31 +0100)]
ethdev: fix doxygen comment to be with structure
The doxygen comment describing the rte_eth_dev_info structure
was separated from the structure itself so move the comment
back to be with the structure.
Fixes: 7238e63bce52 ("ethdev: add support for device offload capabilities")
Cc: stable@dpdk.org
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Ivan Malov [Wed, 5 Sep 2018 09:13:38 +0000 (10:13 +0100)]
net/bonding: inherit descriptor limits from slaves
Descriptor limits are used by applications to take
optimal decisions on queue sizing.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
Ivan Malov [Wed, 5 Sep 2018 09:13:37 +0000 (10:13 +0100)]
net/bonding: provide default Rx/Tx configuration
Default Rx/Tx configuration has become a helpful
resource for applications relying on the optimal
values to make rte_eth_rxconf and rte_eth_txconf
structures. These structures can then be tweaked.
Default configuration is also used internally by
rte_eth_rx_queue_setup or rte_eth_tx_queue_setup
API calls when NULL pointer is passed by callers
with the argument for custom queue configuration.
The use cases of bonding driver may also benefit
from exercising default settings in the same way.
Restructure the code to collect various settings
from slave ports and make it possible to combine
default Rx/Tx configuration of these devices and
report it to the callers of rte_eth_dev_info_get.
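A usage sketch of how an application benefits (standard ethdev calls; the
port id and mempool variables are placeholders):

    struct rte_eth_dev_info dev_info;

    /* The bonding device now reports defaults combined from its slaves. */
    rte_eth_dev_info_get(bond_port_id, &dev_info);

    /* Passing NULL for the queue config applies dev_info.default_rxconf. */
    rte_eth_rx_queue_setup(bond_port_id, 0, 1024,
                    rte_eth_dev_socket_id(bond_port_id),
                    NULL, mbuf_pool);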
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
Ferruh Yigit [Mon, 24 Sep 2018 17:31:40 +0000 (18:31 +0100)]
doc: announce CRC strip changes in release notes
Document changes done in
commit 323e7b667f18 ("ethdev: make default behavior CRC strip on Rx")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Rahul Lakkireddy [Mon, 24 Sep 2018 16:05:04 +0000 (21:35 +0530)]
net/cxgbe: announce Rx scatter offload
Scatter Rx is already supported by CXGBE PMD. So, add the missing
DEV_RX_OFFLOAD_SCATTER flag to the list of supported Rx offload
features.
Also, move the macros for the supported list of offload features to the
header file.
Fixes: 436125e64174 ("net/cxgbe: update to Rx/Tx offload API")
Cc: stable@dpdk.org
Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Andrew Rybchenko [Mon, 24 Sep 2018 13:50:30 +0000 (14:50 +0100)]
net/sfc: add 50G and 100G XtremeScale X2 family adapters
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:29 +0000 (14:50 +0100)]
net/sfc/base: use transceiver ID when reading info
In efx_mcdi_phy_module_get_info() probe the
transceiver identification byte rather than assume
the module matches the fixed port type. This
supports scenarios such as an SFP mounted in a QSFP
port via a QSA module.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:28 +0000 (14:50 +0100)]
net/sfc/base: add accessor to whole link status
Add a function which makes an MCDI GET_LINK request and
packages up the results. Currently, the get-link function
is triggered from several entry points which then pass
on or store selected parts of the data. When the driver
needs to obtain the current link state, it is more
efficient to do this in a single call.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tom Millington [Mon, 24 Sep 2018 13:50:27 +0000 (14:50 +0100)]
net/sfc/base: guard Rx scale code with corresponding option
Previously only some of the code was guarded by this option, which caused
a build error when EFSYS_OPT_RX_SCALE is 0 (e.g. in manftest).
Signed-off-by: Tom Millington <tmillington@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:26 +0000 (14:50 +0100)]
net/sfc/base: infer port mode bandwidth from max link speed
Limit the port mode bandwidth calculations by the maximum
reported link speed. This system detects 25G vs 10G cards,
and 100G port modes vs 40G.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:25 +0000 (14:50 +0100)]
net/sfc/base: support improvements to bandwidth calculations
Change the interface to ef10_nic_get_port_mode_bandwidth()
so more NIC information can be used to infer bandwidth
requirements. The Huntington calculations are separated out
completely.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:24 +0000 (14:50 +0100)]
net/sfc/base: add X2 port modes to bandwidth calculator
Add cases for the new port modes supported by X2 NICs.
Lane bandwidth is calculated for pre-X2 cards, so it is an
underestimate for X2 in 25G/100G modes.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:23 +0000 (14:50 +0100)]
net/sfc/base: update to current port mode terminology
From Medford onwards, the newer constants enumerating
port modes should be used.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:22 +0000 (14:50 +0100)]
net/sfc/base: adjust PHY module info interface
Adjust data types in interface to permit the complete
module information buffer to be obtained in a single
call.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:21 +0000 (14:50 +0100)]
net/sfc/base: expose PHY module device address constants
Rearrange so the valid addresses are visible to the caller.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 24 Sep 2018 13:50:20 +0000 (14:50 +0100)]
net/sfc/base: make last byte of module information available
Adjust bounds so the interface supports reading
the last available byte of data.
Fixes: 19b64c6ac35f ("net/sfc/base: import libefx base")
Cc: stable@dpdk.org
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Alejandro Lucero [Mon, 24 Sep 2018 13:43:24 +0000 (14:43 +0100)]
ethdev: fix error handling in create function
This patch fixes how function exit is handled when errors occur inside
rte_eth_dev_create.
Fixes: e489007a411c ("ethdev: add generic create/destroy ethdev APIs")
Cc: stable@dpdk.org
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Evgeny Im [Fri, 21 Sep 2018 15:36:22 +0000 (16:36 +0100)]
net/failsafe: support multicast address list set
Signed-off-by: Evgeny Im <evgeny.im@oktetlabs.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Evgeny Im [Fri, 21 Sep 2018 15:36:21 +0000 (16:36 +0100)]
net/failsafe: remove not supported multicast MAC filter
set_mc_addr_list method is not implemented by the driver yet.
Fixes: a46f8d584eb8 ("net/failsafe: add fail-safe PMD")
Cc: stable@dpdk.org
Signed-off-by: Evgeny Im <evgeny.im@oktetlabs.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:06:02 +0000 (16:36 +0530)]
mempool/dpaa: change debug log level to DP
When the system temporarily runs out of buffers, the logs
further slow down the system. There is no need for these
continuous logs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:06:01 +0000 (16:36 +0530)]
bus/dpaa: add check for re-definition in compat
A few fields in compat are giving redefinition errors
with new drivers such as caam_jr.
Checks have been added.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:06:00 +0000 (16:36 +0530)]
net/dpaa: tune prefetch in Rx path
As part of a performance optimization exercise, tune
the prefetch placement.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:59 +0000 (16:35 +0530)]
net/dpaa: separate Rx function for LS1046
This is to avoid the checks in the datapath and helps performance.
LS1046 has different data stash settings.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Sunil Kumar Kori [Fri, 21 Sep 2018 11:05:58 +0000 (16:35 +0530)]
net/dpaa: rearrange atomic queue support
This is to align the code with dpaa2 to ease the maintenance
of both driver code bases.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Nipun Gupta [Fri, 21 Sep 2018 11:05:57 +0000 (16:35 +0530)]
bus/dpaa: avoid big endian conversions for contextb
Minor optimization in the packet handling path.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Nipun Gupta [Fri, 21 Sep 2018 11:05:56 +0000 (16:35 +0530)]
bus/dpaa: avoid tag set for eqcr in Tx path
Minor optimization for TX path.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:55 +0000 (16:35 +0530)]
bus/dpaa: support interrupt portal based fd
This patch adds support in the qbman bus driver to configure
portal-based FDs, which can be used for interrupt-based
processing.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:54 +0000 (16:35 +0530)]
net/dpaa: minor debug log enhancements
Improve the debug messages for event mode and
reduce the log level for less important logs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Sachin Saxena [Fri, 21 Sep 2018 11:05:53 +0000 (16:35 +0530)]
net/dpaa: fix link speed based on MAC type
The link speed shall be set on the basis of the MAC type.
Fixes: 799db4568c76 ("net/dpaa: support device info and speed capability")
Cc: shreyansh.jain@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:52 +0000 (16:35 +0530)]
net/dpaa: support scatter offload
This patch implements scatter-gather (SG) support, which can be
enabled/disabled via configuration.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:51 +0000 (16:35 +0530)]
net/dpaa: fix jumbo buffer config
Set the missing dev data MTU to the correct size.
Set the maximum supported size in HW if the user asks for more.
Fixes: 9658ac3a4ef6 ("net/dpaa: set the correct frame size in device MTU")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hemant Agrawal [Fri, 21 Sep 2018 11:05:50 +0000 (16:35 +0530)]
net/dpaa: configure frame queue on MAC ID basis
The current code has a hardcoded sequence for FQ allocation.
It requires multiple changes when some of the interfaces
are assigned to the kernel stack. Changing it to a MAC
ID basis provides the flexibility to assign any interface
to the kernel.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Chas Williams [Thu, 20 Sep 2018 12:52:26 +0000 (08:52 -0400)]
net/bonding: fix Rx slave fairness
Some PMDs, especially ones with vector receives, require a minimum number
of receive buffers in order to receive any packets. If the first slave
read leaves less than this number available, a read from the next slave
may return 0 implying that the slave doesn't have any packets which
results in skipping over that slave as the next active slave.
To fix this, implement round robin for the slaves during receive that
is only advanced to the next slave at the end of each receive burst.
This is also done to provide some additional fairness in processing in
other bonding RX burst routines as well.
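A hedged sketch of the round-robin idea (field and helper names are
illustrative, not the exact patch):

    static uint16_t
    bond_rx_burst_sketch(struct bond_dev_private *internals, uint16_t queue_id,
                    struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            uint16_t slaves = internals->active_slave_count;
            uint16_t start = internals->active_slave; /* kept across bursts */
            uint16_t num_rx = 0, i;

            for (i = 0; i < slaves && num_rx < nb_pkts; i++) {
                    uint16_t port = internals->active_slaves[
                                    (start + i) % slaves];

                    num_rx += rte_eth_rx_burst(port, queue_id,
                                    bufs + num_rx, nb_pkts - num_rx);
            }

            /* Advance the starting slave only once, at the end of the
             * burst, so a slave returning 0 packets is not skipped over
             * permanently. */
            internals->active_slave = (start + 1) % slaves;
            return num_rx;
    }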
Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
Luca Boccassi [Wed, 19 Sep 2018 10:04:17 +0000 (11:04 +0100)]
net/avf: support meson build
Signed-off-by: Luca Boccassi <bluca@debian.org>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Bruce Richardson [Wed, 19 Sep 2018 10:04:16 +0000 (11:04 +0100)]
net/avf: fix missing compiler error flags
The AVF driver was missing $(WERROR_FLAGS) in its cflags, which means
that a number of compilation errors were getting missed. This patch adds
in the flag and fixes most of the errors, just disabling the
strict-aliasing ones.
Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
Fixes: a2b29a7733ef ("net/avf: enable basic Rx Tx")
Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")
CC: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Bruce Richardson [Wed, 19 Sep 2018 10:04:15 +0000 (11:04 +0100)]
net/avf: fix unused variables and label
Compiling with all warnings turned on causes errors about unused variables
and an unused label. Remove these to allow building without having to
disable those warnings.
Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
Fixes: 3fd7a3719c66 ("net/avf: enable ops for MTU setting")
Fixes: d6bde6b5eae9 ("net/avf: enable Rx interrupt")
Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")
Fixes: a2b29a7733ef ("net/avf: enable basic Rx Tx")
Fixes: 1060591eada5 ("net/avf: enable bulk allocate Rx")
CC: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Gaetan Rivet [Tue, 18 Sep 2018 08:56:38 +0000 (10:56 +0200)]
net/igb: support dev reset
Add support for passive device reset on IGB ports.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Julien Meunier [Mon, 10 Sep 2018 15:50:35 +0000 (18:50 +0300)]
net/fm10k: add imissed stats
Add support of imissed and q_errors statistics, reported by PCIE_QPRDC
register (see datasheet, section 11.27.2.60), which exposes the number
of receive packets dropped for a queue.
Signed-off-by: Julien Meunier <julien.meunier@nokia.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Didier Pallard [Wed, 19 Sep 2018 15:04:09 +0000 (17:04 +0200)]
net/ixgbe: fix missing Tx multi-segs capability
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segmented packets, allowing the
PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect never to receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segmented packets should properly
advertise it.
The problem was spotted and tested on e1000 and should also be present in
the ixgbe_vf representor.
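In practice the fix boils down to advertising the capability in the Tx
offloads reported by the PMD, roughly (the exact location in the driver is
an assumption):

    tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;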
Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Didier Pallard [Wed, 19 Sep 2018 15:04:08 +0000 (17:04 +0200)]
net/i40e: fix missing Tx multi-segs capability
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segmented packets, allowing the
PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect never to receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segmented packets should properly
advertise it.
The problem was spotted and tested on e1000 and should also be present in
the i40e_vf representor.
Fixes: e0cb96204b71 ("net/i40e: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Didier Pallard [Wed, 19 Sep 2018 15:04:07 +0000 (17:04 +0200)]
net/fm10k: fix missing Tx multi-segs capability
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segmented packets, allowing the
PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect never to receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segmented packets should properly
advertise it.
The problem was spotted and tested on e1000 and should also be present in
fm10k.
Fixes: 30f3ce999e6a ("net/fm10k: convert to new Tx offloads API")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Didier Pallard [Wed, 19 Sep 2018 15:04:06 +0000 (17:04 +0200)]
net/e1000: fix missing Tx multi-segs capability
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segmented packets, allowing the
PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect never to receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segmented packets should properly
advertise it.
Fixes: e5c05e6590ea ("net/e1000: convert to new Tx offloads API")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Beilei Xing [Fri, 21 Sep 2018 08:25:29 +0000 (16:25 +0800)]
net/i40e: update Rx offload
The HW supports Rx scatter offload; this patch advertises the
Rx scatter offload for the PF.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Igor Romanov [Fri, 14 Sep 2018 07:31:36 +0000 (08:31 +0100)]
net/sfc: fix a Tx queue double release possibility
There are two functions that call sfc_tx_qfini():
sfc_tx_fini_queues() and sfc_tx_queue_release(), but only
sfc_tx_queue_release() sets the tx_queues pointer of the device data to NULL.
It may lead to a scenario in which a queue is destroyed by
sfc_tx_fini_queues() and afterwards the queue is attempted to be destroyed
again by sfc_tx_queue_release().
Move the NULL assignment to sfc_tx_qfini().
Fixes: b1b7ad933b39 ("net/sfc: set up and release Tx queues")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Igor Romanov [Fri, 14 Sep 2018 07:31:35 +0000 (08:31 +0100)]
net/sfc: fix an Rx queue double release possibility
There are two functions that call sfc_rx_qfini():
sfc_rx_fini_queues() and sfc_rx_queue_release(), but only
sfc_rx_queue_release() sets the rx_queues pointer of the device data to NULL.
It may lead to a scenario in which a queue is destroyed by
sfc_rx_fini_queues() and afterwards the queue is attempted to be destroyed
again by sfc_rx_queue_release().
Move the NULL assignment to sfc_rx_qfini().
Fixes: ce35b05c635e ("net/sfc: implement Rx queue setup release operations")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Vijay Srivastava [Mon, 10 Sep 2018 09:33:36 +0000 (10:33 +0100)]
net/sfc/base: add helper API to make Geneve filter spec
Signed-off-by: Vijay Srivastava <vijays@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Andy Moreton [Mon, 10 Sep 2018 09:33:35 +0000 (10:33 +0100)]
net/sfc/base: fix MAC Tx stats for less or equal to 64 bytes
This statistic should include 64-byte and smaller frames.
Fix EF10 calculation to match Siena code.
Fixes: 8c7c723dfe7c ("net/sfc/base: import MAC statistics")
Cc: stable@dpdk.org
Signed-off-by: Andy Moreton <amoreton@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Richard Houldsworth [Mon, 10 Sep 2018 09:33:34 +0000 (10:33 +0100)]
net/sfc/base: modify phy caps to indicate FEC request
The capability bits to request FEC modes are implicitly valid
when the corresponding FEC mode is a supported capability.
Drivers expect that it is only valid to advertise those
capabilities explicitly marked as supported. The capabilities
reported by firmware is modified with the implicit capabilities
to present the explicit model to drivers.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Ivan Malov [Mon, 10 Sep 2018 09:33:33 +0000 (10:33 +0100)]
net/sfc/base: improve handling of legacy RSS hash flags
Client drivers may use either legacy flags, for example,
EFX_RX_HASH_TCPIPV4, or generalised flags, for example,
EFX_RX_HASH(IPV4_TCP, 4TUPLE), to configure RSS hash.
The libefx is able to recognise what scheme is used.
Legacy flags may be consumed directly by a chip-specific handler to
configure the NIC, that is, on EF10, these flags can be used to fill
in legacy RSS mode field in MCDI request. Generalised flags can also
be directly used in EF10-specific handler as they are fully compatible
with additional fields of the same MCDI request.
Legacy flags undergo conversion to generalised flags before they
are consumed by a chip-specific handler. This conversion is used to
make sure that chip-specific handlers expect only generalised flags
in the input for the sake of clarity of the code.
Depending on firmware capabilities, a chip-specific handler either
supplies the input to the NIC directly, for example,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) flag will enable 4 bits in
RSS_CONTEXT_SET_FLAGS_IN_TCP_IPV4_RSS_MODE field on EF10, or takes
the opportunity to translate the input to enable bits which don't map
to the generic flag, like setting
RSS_CONTEXT_SET_FLAGS_IN_TOEPLITZ_TCPV4_EN on EF10 when the firmware
claims no support for additional modes.
However, this approach has introduced a severe problem which can be
reproduced with ultra-low-latency firmware variant. In order to enable
IP hash, EF10-specific handler requires the user to request 2-tuple
hash for IP-other, TCP and UDP traffic classes, unconditionally.
For example, IPv4 hash can be enabled using the following input:
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE).
At the same time, on ultra-low-latency firmware, the common code will
never report support for any UDP tuple to the client driver. That is,
in the same example, the driver will use EFX_RX_HASH(IPV4_TCP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE). This input will not be recognised by
EF10-specific handler, and RSS_CONTEXT_SET_FLAGS_IN_TOEPLITZ_IPV4_EN
bit will not be set in the MCDI request.
In order to solve the problem, the patch removes conversion code
from chip-specific handlers and adds appropriate code to convert
EFX_RX_HASH() flags to their legacy counterparts to the common scale
mode set function. If the firmware does not support additional modes,
the function will convert generalised flags to legacy flags correctly
without any demand for UDP flags and pass the result to a chip-specific
handler.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>