Ori Kam [Thu, 4 Apr 2019 09:54:08 +0000 (09:54 +0000)]
net/mlx5: support jump action
When using Direct Rules we can add actions to jump between tables.
This is especially useful since the rule insertion rate is much higher
in other tables than in table zero.
If no group is selected, the rule is added to group 0.
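For illustration, a minimal sketch of adding a rule with the generic
rte_flow jump action (not taken from the patch itself; the pattern and
group numbers are arbitrary):

    #include <rte_flow.h>

    uint16_t port_id = 0; /* example port */

    /* A rule in group 0 (table zero) that jumps to group 1, where the
     * insertion rate is higher. */
    struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
    struct rte_flow_action_jump jump = { .group = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                            actions, &err);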
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Ori Kam [Thu, 4 Apr 2019 09:54:07 +0000 (09:54 +0000)]
net/mlx5: add Direct Rules API
Adds calls to the Direct Rules API inside the glue functions.
Due to differences in parameters between Direct Rules and Direct
Verbs, the API of some of the glue functions was updated.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Ori Kam [Thu, 4 Apr 2019 09:54:06 +0000 (09:54 +0000)]
net/mlx5: prepare Direct Verbs for Direct Rule
This is the first patch of a series that is designed to enable the
Direct Rules API.
The main difference between Direct Verbs and Direct Rules from an API
perspective is that in Direct Rules each action has its own create
function and the object itself is of type void *.
In this patch I'm adding create functions for actions that currently
are generated without one, and I'm changing the action type to
void *, so that in the next patches only the glue functions will need
to change.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Ivan Malov [Tue, 2 Apr 2019 09:28:43 +0000 (10:28 +0100)]
net/sfc: improve log about missing HW TSO support
This message cannot be considered a warning since
the PMD reports the available offload capabilities
through the device info interface anyway. Make this log
message informational and improve its formatting
by placing the text itself on the same line.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Ivan Malov [Tue, 2 Apr 2019 09:28:42 +0000 (10:28 +0100)]
net/sfc: factor out function to get IPv4 packet ID for TSO
As a result, code duplication is avoided in the current
TSO implementations (EFX and EF10 native). A future patch adding
support for tunnel TSO will also reuse the new function.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Igor Romanov [Tue, 2 Apr 2019 09:28:40 +0000 (10:28 +0100)]
net/sfc: introduce descriptor space check in Tx prepare
Add a descriptor space check to the Tx prepare function to inform the
caller that a packet which needs more than the maximum number of Tx
descriptors of a queue cannot be sent.
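A hedged sketch of the kind of check meant here (the queue type and
the max_desc field are hypothetical, not the actual sfc code):

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>

    struct tx_queue { uint16_t max_desc; /* hypothetical queue limit */ };

    static uint16_t
    example_tx_prepare(void *queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct tx_queue *txq = queue;
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
            /* A packet needing more descriptors than the queue can ever
             * provide can never be sent; report it to the caller. */
            if (pkts[i]->nb_segs > txq->max_desc) {
                rte_errno = ENOSPC;
                break;
            }
        }
        return i; /* number of packets that passed the checks */
    }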
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Igor Romanov [Tue, 2 Apr 2019 09:28:38 +0000 (10:28 +0100)]
net/sfc: support Tx preparation in EF10 simple datapath
Implement the tx_prepare callback. The implementation performs checks
only in RTE debug mode. No checks are done otherwise because the EF10
simple datapath ignores Tx offloads.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Igor Romanov [Tue, 2 Apr 2019 09:28:34 +0000 (10:28 +0100)]
net/sfc: improve TSO header length check in EF10 datapath
Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header is not
fragmented a bit faster.
Fixes: 6bc985e41155 ("net/sfc: support TSO in EF10 Tx datapath")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Ori Kam [Tue, 2 Apr 2019 07:04:06 +0000 (07:04 +0000)]
net/mlx5: fix flow counters using devx
The API that was defined in OFED 4.5 was replaced both in OFED 4.6 and
in upstream.
This commit updates the API to match the upstream one.
Fixes: f5bf91de738a ("net/mlx5: support flow counters using devx")
Cc: stable@dpdk.org
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Thomas Monjalon [Mon, 1 Apr 2019 02:27:00 +0000 (04:27 +0200)]
app/testpmd: use port sibling iterator in device cleanup
When removing a rte_device on a port-based request,
all the sibling ports must be marked as closed.
The iterator loop can be simplified by using the dedicated macro.
Thomas Monjalon [Mon, 1 Apr 2019 02:26:59 +0000 (04:26 +0200)]
net/mlx5: use port sibling iterators
Iterating over siblings was done with RTE_ETH_FOREACH_DEV()
which skips the owned ports.
The new iterators RTE_ETH_FOREACH_DEV_SIBLING()
and RTE_ETH_FOREACH_DEV_OF() are more appropriate and more correct.
Thomas Monjalon [Mon, 1 Apr 2019 02:26:58 +0000 (04:26 +0200)]
ethdev: add siblings iterators
If multiple ports share the same hardware device (rte_device),
they are siblings and can be found thanks to the new functions
and loop macros.
One iterator takes a port id as reference,
while the other one directly refers to the parent device.
The ownership is not checked because siblings may have
different owners.
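A short usage sketch of the two new iterators (loop bodies are
placeholders):

    #include <stdio.h>
    #include <rte_ethdev.h>

    uint16_t port_id = 0; /* reference port, example value */
    uint16_t sibling;

    /* Visit every port sharing the underlying rte_device of port_id. */
    RTE_ETH_FOREACH_DEV_SIBLING(sibling, port_id)
        printf("port %u is a sibling of port %u\n", sibling, port_id);

    /* Visit every port of a given rte_device. */
    const struct rte_device *parent = rte_eth_devices[port_id].device;
    RTE_ETH_FOREACH_DEV_OF(sibling, parent)
        printf("port %u belongs to this device\n", sibling);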
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx4: enable secondary process to register DMA memory
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the
secondary process can make a request to the primary process through the
EAL IPC channel (rte_mp_msg), which is established on initialization.
Once an MR is created by the primary process, it is immediately visible
to the secondary process because the MR list is global per device. Thus,
the secondary process can look up the list after the request returns
successfully.
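A hedged sketch of the request side of this pattern using the EAL IPC
API (the message name and payload layout are invented for illustration):

    #include <rte_eal.h>
    #include <rte_string_fns.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static int
    request_mr_create(void *addr, size_t len)
    {
        struct rte_mp_msg req;
        struct rte_mp_reply reply;
        struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

        memset(&req, 0, sizeof(req));
        strlcpy(req.name, "mlx_mr_create", sizeof(req.name)); /* hypothetical */
        memcpy(req.param, &addr, sizeof(addr));
        memcpy(req.param + sizeof(addr), &len, sizeof(len));
        req.len_param = sizeof(addr) + sizeof(len);

        /* Blocks until the primary process replies or the timeout expires. */
        if (rte_mp_request_sync(&req, &reply, &ts) < 0)
            return -1;
        free(reply.msgs); /* the reply array is allocated by the EAL */
        return 0;
    }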
net/mlx4: add control of excessive memory pinning by kernel
A new PMD parameter (mr_ext_memseg_en) is added to control extension of
the memseg when creating an MR. It is enabled by default.
If enabled, mlx4_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become smaller
and get the best performance. However, it may worsen memory utilization
because registered memory is pinned by the kernel driver. Even if a page in
the extended chunk is freed, it does not become reusable until the
entire memory is freed and the MR is destroyed.
To make freed pages available immediately, this parameter has to be
turned off, but that could degrade performance.
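For example, the extension can presumably be disabled per device through
devargs (the PCI address here is hypothetical):

    testpmd -w 0000:03:00.0,mr_ext_memseg_en=0 -- -i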
net/mlx5: enable secondary process to register DMA memory
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the
secondary process can make a request to the primary process through the
EAL IPC channel (rte_mp_msg), which is established on initialization.
Once an MR is created by the primary process, it is immediately visible
to the secondary process because the MR list is global per device. Thus,
the secondary process can look up the list after the request returns
successfully.
net/mlx5: add control of excessive memory pinning by kernel
A new PMD parameter (mr_ext_memseg_en) is added to control extension of
the memseg when creating an MR. It is enabled by default.
If enabled, mlx5_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become smaller
and get the best performance. However, it may worsen memory utilization
because registered memory is pinned by the kernel driver. Even if a page in
the extended chunk is freed, it does not become reusable until the
entire memory is freed and the MR is destroyed.
To make freed pages available immediately, this parameter has to be
turned off, but that could degrade performance.
In order to support secondary processes, a few features are required.
a) The rdma-core library should allocate device resources using DPDK's
memory allocator.
b) UAR should be remapped for secondary processes. Currently, in order
not to use a different data structure for secondary processes, the PMD
tries to reserve an identical virtual address space for both primary
and secondary processes.
c) An IPC channel is necessary; it can be easily set up with the rte_mp
APIs. Through the channel, the Verbs command FD is delivered to the
secondary process, and the device stop/start event is also broadcast
from the primary process.
To support secondary processes, the memory allocated by the library, such
as completion rings (CQ) and buffer rings (WQ), must be manageable by the
EAL in order to share it with secondary processes. With the new changes in
rdma-core and the kernel driver, it is possible to provide an external
allocator to the library layer for this purpose. All such resources
will now be allocated within the DPDK framework.
net/mlx4: change device reference for secondary process
rte_eth_devices[] is not shared between primary and secondary processes,
but is a static array local to each process. The reverse pointer of the
device (priv->dev) becomes invalid if mlx4 supports secondary processes.
Instead, priv has a pointer to the shared data of the device:

    struct rte_eth_dev_data *dev_data;

Two macros are added:

    #define PORT_ID(priv) ((priv)->dev_data->port_id)
    #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
Rx/Tx burst function pointers are stored in the rte_eth_dev structure,
which is local to a process. Even though the primary process replaces the
function pointers, the secondary will not run the new ones. With the
rte_mp APIs, the primary can easily broadcast a request to stop/start the
datapath of the secondary processes.
There is a growing need for a PMD-global data structure. It should be
initialized once per process regardless of how many PMD instances are
probed. mlx5_init_once() is called during probing and makes sure all the
init functions are called once per process. Currently, such global data
and its initialization functions are scattered. Rather than 'extern'-ing
such variables and calling such functions one by one, making sure each is
called only once by checking the validity of such variables, it is better
to have a global storage to hold such data and a consolidated function
performing all the initializations. The existing shared memory gets more
extensively used for this purpose. As there could be multiple secondary
processes, a static storage (local to each process) is also added.
As the reserved virtual address for UAR remap is a PMD global resource,
this doesn't need to be stored in the device priv structure, but in the
PMD global data.
The socket API was used for IPC so that a secondary process could acquire
the Verbs command file descriptor; the FD is used to remap the UAR
address. The multi-process APIs (rte_mp) in the EAL are newly introduced,
and mlx5_socket.c is replaced with mlx5_mp.c, which uses the new APIs.
As this is PMD-global infrastructure, only one IPC channel is established.
All the IPC message types may carry a port_id in the message if there is
a need to reference a specific device.
As the memory event is propagated to secondary processes, the event is
processed redundantly. It should be processed only once, because the data
structure used for MRs and the event is global across the processes.
Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Yongseok Koh [Mon, 25 Mar 2019 19:13:10 +0000 (12:13 -0700)]
net/mlx5: revert mbuf address calculation for x86
When replenishing mbufs on Rx, the buffer address (mbuf->buf_addr) should
be loaded. Non-x86 processors (mostly RISC, such as ARM and Power) are
more vulnerable to load stalls. For x86, reducing the number of
instructions seems to matter most.
For x86, this is simply a load, but for other architectures it is
calculated from the address of the mbuf structure by rte_mbuf_buf_addr(),
without having to load the first cacheline of the mbuf.
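A sketch of the dispatch the commit describes (the helper wrapping is
illustrative, not the actual mlx5 code):

    #include <rte_mbuf.h>

    static inline void *
    get_buf_addr(struct rte_mbuf *mb, struct rte_mempool *mp)
    {
    #ifdef RTE_ARCH_X86
        /* x86: a plain load from the mbuf's first cacheline. */
        return mb->buf_addr;
    #else
        /* Others: compute the address from the mbuf pointer and its
         * mempool, avoiding a load from the mbuf's first cacheline. */
        return rte_mbuf_buf_addr(mb, mp);
    #endif
    }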
Kevin Traynor [Wed, 27 Mar 2019 17:22:06 +0000 (17:22 +0000)]
doc: note validation and timeline required for stables
If a stable branch for a specific DPDK release is to proceed,
along with needing a maintainer, there should also be commitment
from major contributors for validation of the releases.
Also, as decided at the March 27th techboard, to facilitate user
planning, a release should be designated as a stable release
no later than 1 month after its initial master release.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Rami Rosen [Fri, 15 Mar 2019 09:19:00 +0000 (11:19 +0200)]
doc: fix two typos in contributing guide
This patch fixes two typos in the coding style part of
DPDK contributing guide:
- The header entry should have .h file instead of .c file.
- The will->This will
Fixes: 44a6dface13b ("doc: describe how to add new components")
Cc: stable@dpdk.org
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
The sprintf function is not secure as it doesn't check the length of the
string. The more secure function snprintf is used instead.
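The pattern of the fix, in generic form (buffer and variable names are
illustrative):

    #include <stdio.h>

    char name[64];
    const char *base = "worker";
    int idx = 3;

    /* Unsafe: no bound on how much is written into name. */
    sprintf(name, "%s_%d", base, idx);

    /* Safe: output is truncated to the destination size. */
    snprintf(name, sizeof(name), "%s_%d", base, idx);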
Fixes: 473d1bebce ("hash: allow to store data in hash table")
Cc: stable@dpdk.org
Signed-off-by: Pallantla Poornima <pallantlax.poornima@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
The sprintf function is not secure as it doesn't check the length of the
string. Replace sprintf with strlcpy.
Fixes: f74df2c57e ("test/distributor: test single and burst API")
Cc: stable@dpdk.org
Signed-off-by: Pallantla Poornima <pallantlax.poornima@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
config: increase maximum number of raw devices to 64
The current value is 10, which is not sufficient for many use-cases.
e.g. NXP LX2 with raw qdma devices can use 32-48 raw devices in some
use-cases. So, increase it to 64 to cover the various cases.
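Presumably the change amounts to a single build-config value (option
name as used by the make-based config system of that era):

    CONFIG_RTE_RAWDEV_MAX_DEVS=64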
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
net/dpaa2: update MC firmware version for FSLMC bus
MC firmware is the core component of the FSLMC bus and DPAA2 devices.
Prior to this patch, the supported MC firmware version was 10.10.x. This
patch bumps the minimum supported version to 10.14.x.
Do a global replace of snprintf(..."%s",...) with strlcpy, adding in the
rte_string_fns.h header if needed. The function changes in this patch were
auto-generated via command:
replace snprintf with strlcpy without adding extra include
For files that already have rte_string_fns.h included in them, we can
do a straight replacement of snprintf(..."%s",...) with strlcpy. The
changes in this patch were auto-generated via command:
devtools/cocci: create safer version of strlcpy script
The existing coccinelle script replaces all matching instances
of snprintf() with strlcpy() without regard to header inclusion. To allow
changes without build errors, we create a safer version of this script
that only makes changes when the rte_string_fns.h header is already
included.
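The rewrite both scripts perform has this shape:

    before: snprintf(dst, sizeof(dst), "%s", src);
    after:  strlcpy(dst, src, sizeof(dst));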
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The original coccinelle script worked by replacing instances of
snprintf(.."%s",...) with strlcpy(), but only where the source and dest
parameters were plain identifiers. Allowing expressions for those params
opens up a wide range of other possible changes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When creating files on disk, e.g. for EAL configuration or shared memory
locks, etc., there is no need to grant any permissions on those files to
other users. All directories are already created with 0700 permissions, so
we should create all files with 0600 permissions.
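In terms of the underlying call, the change is of this form (the path
name and the previous mode are illustrative):

    #include <fcntl.h>

    const char *path = "/var/run/dpdk/config"; /* illustrative */
    int fd;

    /* Before: group (and possibly others) could be granted access. */
    fd = open(path, O_CREAT | O_RDWR, 0660);

    /* After: owner-only access, matching the 0700 directories. */
    fd = open(path, O_CREAT | O_RDWR, 0600);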
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds support for a lock-free (linked list based) stack
mempool handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
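A hedged sketch of selecting the handler for a mempool (to my reading,
the series registers the lock-free handler under the ops name
"lf_stack"):

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("objpool", 4096, 256 /* elt size */,
                                  0 /* cache */, 0, rte_socket_id(), 0);
    if (mp != NULL) {
        rte_mempool_set_ops_byname(mp, "lf_stack", NULL);
        /* ... then populate the pool as usual ... */
    }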
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
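Selecting the lock-free variant changes only creation; push and pop
calls stay the same (a minimal sketch):

    #include <rte_lcore.h>
    #include <rte_stack.h>

    struct rte_stack *s = rte_stack_create("lf_example", 1024,
                                           rte_socket_id(), RTE_STACK_F_LF);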
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
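A minimal usage sketch of the new API:

    #include <rte_lcore.h>
    #include <rte_stack.h>

    struct rte_stack *s;
    void *objs[32];
    unsigned int n;

    s = rte_stack_create("example", 1024, rte_socket_id(), 0);
    /* ... fill objs[] with pointers ... */
    n = rte_stack_push(s, objs, 32); /* returns the number actually pushed */
    n = rte_stack_pop(s, objs, 32);  /* returns the number actually popped */
    rte_stack_free(s);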
Fan Zhang [Fri, 22 Mar 2019 16:34:31 +0000 (16:34 +0000)]
doc: announce cryptodev xform API change
This patch adds a deprecation notice for changing the cryptodev
symmetric xform structure. The proposed change is to make the
key pointers in the crypto xforms (cipher, auth, aead) constant,
indicating that neither the library nor the drivers will change
the content of the key buffer.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Change the order of operations for ESP inbound post-process:
- read mbuf metadata and ESP tail first for all packets in the burst,
to minimize stalls due to load latency.
- move code that is common for both transport and tunnel modes into
separate functions to reduce code duplication.
- add an extra check for packet consistency
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Right now the check for packet length and padding is done inside
cop_prepare(). It makes sense to have all necessary checks in one place
at an early stage: inside pkt_prepare().
That allows us to simplify (and later, hopefully, optimize) the
cop_prepare() part.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
sa.c became too big, so it was decided to split it into 3 chunks:
- sa.c - control path related functions (init/fini, etc.)
- esp_inb.c - ESP inbound packet processing
- esp_outb.c - ESP outbound packet processing
Plus a few changes in internal function names to follow the same
code convention.
No functional changes introduced.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
ipsec: change the way unprocessed mbufs are accounted
As was pointed out in one of the previous reviews, we can avoid updating
the contents of the mbuf array for successfully processed packets.
Instead, store the indexes of failed packets, to move them beyond the
good ones later.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Right now we first fill the crypto_sym_op part of the crypto_op,
then in a separate loop we fill the crypto op fields.
It makes more sense to fill the whole crypto op in one go instead.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Operations that set/update bit-fields often cause compilers
to generate suboptimal code. To avoid such a negative effect,
use the tx_offload raw value and a mask to update the l2_len and l3_len
fields within mbufs.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
mbuf: add function to generate raw Tx offload value
Operations to set/update bit-fields often cause compilers
to generate suboptimal code.
To help avoid such situations for the tx_offload fields,
introduce a new enum for the tx_offload bit-field lengths and offsets,
and a new function to generate a raw tx_offload value.
Add a new test case to the UT for the introduced function.
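A sketch of the intended use; the argument order below follows the
tx_offload bit-field layout (l2_len, l3_len, l4_len, tso_segsz,
outer_l3_len, outer_l2_len, plus a spare field) and is hedged against
the exact signature in the patch:

    #include <rte_mbuf.h>

    static void
    set_l2_l3(struct rte_mbuf *mb)
    {
        /* Instead of two read-modify-write operations on bit-fields... */
        mb->l2_len = 14; /* Ethernet */
        mb->l3_len = 20; /* IPv4 */

        /* ...compose the raw 64-bit value once and store it in one write. */
        mb->tx_offload = rte_mbuf_tx_offload(14, 20, 0, 0, 0, 0, 0);
    }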
Tomasz Jozwiak [Fri, 1 Mar 2019 08:45:10 +0000 (09:45 +0100)]
app/compress-perf: add incompressible data handling
Currently, compress-perf does not account for incompressible
data inside one operation.
This patch adds such functionality. Now the output buffer
in one operation is big enough to store the data even if it
expands after compression. Segment size checking was also added
to keep the passed values within the right range.
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Tomasz Jozwiak [Tue, 26 Mar 2019 14:20:48 +0000 (15:20 +0100)]
drivers/qat: fix queue pair NUMA node
This patch assigns QAT queue pair resources to the correct NUMA nodes.
Any DMA'able memory should use the NUMA node of the QAT device
rather than the socket_id of the initializing process.
Fixes: 98c4a35c736f ("crypto/qat: move common qat files to common dir")
Fixes: a795248d740b ("compress/qat: add configure and clear functions")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Ayuj Verma [Thu, 28 Mar 2019 10:28:22 +0000 (10:28 +0000)]
cryptodev: add RSA private key feature flag
Add a feature flag to reflect RSA private key
operation support using a quintuple (CRT) or
exponent type key. If a PMD supports both,
it should set both flags.
An app should query the cryptodev feature flags to check
whether Sign and Decrypt with CRT keys or an exponent key is
supported, and thus call the operation with the relevant
key type.
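A hedged sketch of such a query (flag names as introduced around this
change; verify against the release's rte_cryptodev.h):

    #include <rte_cryptodev.h>

    uint8_t dev_id = 0; /* example device */
    struct rte_cryptodev_info info;

    rte_cryptodev_info_get(dev_id, &info);
    if (info.feature_flags & RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT) {
        /* build the op with a quintuple (CRT) private key */
    } else if (info.feature_flags & RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP) {
        /* build the op with an exponent private key */
    }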
Akhil Goyal [Wed, 27 Mar 2019 11:53:37 +0000 (11:53 +0000)]
crypto/dpaa2_sec: support multi-process
- fle pool allocations should be done for each process.
- cryptodev->data is shared across multiple processes, but
the cryptodev itself is allocated for each process. So any
information which needs to be shared between processes
should be kept in cryptodev->data.
Akhil Goyal [Wed, 27 Mar 2019 11:53:36 +0000 (11:53 +0000)]
crypto/dpaa_sec: fix session queue attach/detach
Session inq and qp are assigned for each core from which the
packets arrive. This was not handled correctly when supporting
multiple sessions per queue pair.
This patch fixes the attach and detach of queues for each core.
Fixes: e79416d10fa3 ("crypto/dpaa_sec: support multiple sessions per queue pair")
Cc: stable@dpdk.org
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Akhil Goyal [Wed, 27 Mar 2019 11:53:33 +0000 (11:53 +0000)]
drivers/crypto: update inline desc for sharing mode
The SEC HW descriptor sharing mode can now be controlled
during session preparation by the respective drivers.
Shared descriptors in the case of non-protocol offload do not need
any sync between subsequent jobs. Thus, change it to
SHR_NEVER from SHR_SERIAL for cipher-only, auth-only, and GCM.
Akhil Goyal [Wed, 27 Mar 2019 11:53:32 +0000 (11:53 +0000)]
crypto/dpaa2_sec: fix offset calculation for GCM
In the case of GCM, the output buffer should have AAD space
before the actual buffer which needs to be written.
CAAM will not write anything into the AAD; it will skip
auth_only_len (AAD) and write the buffer afterwards.