DPAA2 Hardware Mempool handlers allow enqueue/dequeue from NXP's
QBMAN hardware block.
When the pool is enabled, CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is set
to 'dpaa2'.
This memory pool currently supports packet mbuf type blocks only.
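As a rough illustration only (not part of this patch): an application could
select this handler at run time with rte_mempool_set_ops_byname(), assuming
the ops are registered under the name 'dpaa2'; the sizing values below are
placeholders.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch: create a packet mbuf pool backed by the hardware mempool ops.
     * The ops name "dpaa2" and all sizes are illustrative assumptions. */
    static struct rte_mempool *
    create_hw_pktmbuf_pool(const char *name, unsigned int n, int socket_id)
    {
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty(name, n,
                sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM + 2048,
                0 /* cache size */,
                sizeof(struct rte_pktmbuf_pool_private),
                socket_id, 0);
        if (mp == NULL)
            return NULL;

        if (rte_mempool_set_ops_byname(mp, "dpaa2", NULL) != 0) {
            rte_mempool_free(mp);
            return NULL;
        }
        rte_pktmbuf_pool_init(mp, NULL);    /* default data room size */
        if (rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
        return mp;
    }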
This patch adds generic functions for allocating dq storage for the
frame queues. As the frame queues are a resource shared by different
drivers, this is helpful.
This patch adds queue definitions to the FSLMC bus, which DPAA2
devices instantiate before they can communicate using hardware queues.
The portal driver is bound to DPIO objects discovered on the fsl-mc bus and
provides services that:
- allow other drivers, such as the Ethernet driver, to enqueue and dequeue
frames for their respective objects
A system will typically allocate 1 DPIO object per CPU to allow queuing
operations to happen simultaneously across all CPUs.
This patch adds support in the fslmc vfio process to scan and parse
the dpni and dpseci objects for net and crypto devices. The scanned
devices are added to the fslmc bus.
Add support for using VFIO for the dpaa2-based fsl-mc bus.
VFIO usage for the fsl-mc bus differs from the EAL VFIO code in some
ways:
- The bus is scanned for individual objects on the basis of the DPRC
container.
- The MC portal is used and mapped for object access.
As the bus model evolves, these can be further aligned with the EAL
VFIO code.
This patch introduces the DPAA2 MC (Management Complex) driver.
This is a minimal set of low-level functions to send and receive
commands to/from the fsl-mc. It includes support for basic management
commands and commands to manipulate MC objects.
It is common code used by the various DPAA2 PMDs, e.g. net, crypto
and other drivers.
QBMAN is a hardware block which interfaces with the other
accelerating hardware blocks (e.g. WRIOP) on NXP's DPAA2 SoC for
queue, buffer and packet scheduling.
This patch introduces a userspace driver for interfacing with
the QBMAN hw block.
The qbman-portal component provides APIs to do the low level
hardware bit twiddling for operations such as:
- initializing Qman software portals
- building and sending portal commands
- portal interrupt configuration and processing
The same/similar code is used in the kernel, and a compat file is
used to make it work in user space.
Michal Krawczyk [Mon, 10 Apr 2017 14:28:11 +0000 (16:28 +0200)]
net/ena: calculate partial checksum if DF bit is disabled
When TSO is disabled we still have to calculate the partial checksum
if the DF bit is turned off. This is caused by a firmware bug.
First of all, we must make sure that we are dealing with an IPv4
packet. If not, we just skip further checking of this packet and move
on to the next one.
If the application does not set the l2_len field, we assume it is an
Ethernet frame, because we have to look inside the packet to check
for the DF flag.
To make this work properly, the PMD assumes that the application
called rte_eth_tx_prepare() before sending the packet.
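As an illustration of the check described above (not the exact ENA PMD
code), assuming plain Ethernet/IPv4 headers and the standard mbuf l2_len
field:

    #include <stdbool.h>
    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Return true when the packet is IPv4 and its DF bit is cleared, i.e.
     * the case where the partial checksum must still be computed. */
    static bool
    ipv4_df_bit_cleared(struct rte_mbuf *m)
    {
        const struct ether_hdr *eth;
        const struct ipv4_hdr *ip;
        uint16_t l2_len = m->l2_len ? m->l2_len : sizeof(struct ether_hdr);

        eth = rte_pktmbuf_mtod(m, const struct ether_hdr *);
        if (rte_be_to_cpu_16(eth->ether_type) != ETHER_TYPE_IPv4)
            return false;   /* not IPv4: skip this packet */

        ip = rte_pktmbuf_mtod_offset(m, const struct ipv4_hdr *, l2_len);
        /* DF is bit 14 of the 16-bit fragment_offset field. */
        return (rte_be_to_cpu_16(ip->fragment_offset) & (1 << 14)) == 0;
    }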
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Jakub Palider <jpalider@gmail.com> Acked-by: Jan Medala <jan.medala@outlook.com>
Michal Krawczyk [Mon, 10 Apr 2017 14:28:10 +0000 (16:28 +0200)]
net/ena: cleanup if refilling of Rx descriptors fails
If the wrong number of descriptors for refilling was passed to the Rx
repopulate function, there was a memory leak which caused the memory
pool to run out of resources over time.
If refilling of Rx descriptors fails, all additional mbufs have to be
released.
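A minimal sketch of that cleanup (illustrative variable names, not the
exact ENA code):

    #include <rte_mbuf.h>

    /* Return mbufs that were allocated for the refill but never handed to
     * hardware back to their pool.  'filled' is how many descriptors were
     * actually programmed, 'allocated' how many mbufs were taken. */
    static void
    release_unused_rx_mbufs(struct rte_mbuf **mbufs, unsigned int filled,
                            unsigned int allocated)
    {
        unsigned int i;

        for (i = filled; i < allocated; i++)
            rte_pktmbuf_free(mbufs[i]);
    }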
Fixes: 1173fca25af9 ("ena: add polling-mode driver") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Jakub Palider <jpalider@gmail.com> Acked-by: Jan Medala <jan.medala@outlook.com>
Michal Krawczyk [Mon, 10 Apr 2017 14:28:09 +0000 (16:28 +0200)]
net/ena: fix delayed cleanup of Rx descriptors
On the Rx path, after receiving a bunch of packets, the variable
tracking the available descriptors in the HW queue was not updated.
To fix this issue, the variable tracking the used descriptors must be
updated after receiving packets: it must be reduced by the number of
descriptors received in the current batch.
Additionally, the next_to_clean variable in rx_ring must be updated
before entering ena_populate_rx_queue() to keep it in sync with the
current ring state.
Fixes: 1daff5260ff8 ("net/ena: use unmasked head and tail") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Jakub Palider <jpalider@gmail.com> Acked-by: Jan Medala <jan.medala@outlook.com>
Marcin Wilk [Tue, 11 Apr 2017 12:35:13 +0000 (14:35 +0200)]
net/thunderx: fix stats access out of bounds
When trying to assign more queues to the stats struct than it can
hold, only the inner loop breaks when the maximum size is reached;
the outer loop keeps iterating. This leads to an out-of-bounds array
access.
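A self-contained sketch of the loop pattern and the fix (exit both
loops); the sizes are placeholders and this is not the actual thunderx
code:

    #include <stdio.h>

    #define MAX_STAT_ENTRIES 8  /* placeholder for the stats array size */

    int
    main(void)
    {
        unsigned int stats[MAX_STAT_ENTRIES] = {0};
        unsigned int idx = 0, vf, q;

        for (vf = 0; vf < 4; vf++) {
            for (q = 0; q < 4; q++) {
                if (idx >= MAX_STAT_ENTRIES)
                    goto done;  /* fix: leave both loops, not just the inner one */
                stats[idx++] = vf * 10 + q;
            }
        }
    done:
        printf("filled %u stat entries\n", idx);
        return 0;
    }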
Fixes: 21e3fb0050b9 ("net/thunderx: add final bits for secondary queue support") Cc: stable@dpdk.org Signed-off-by: Marcin Wilk <marcin.wilk@caviumnetworks.com> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To keep consistent with the previous release, Priority Flow Control
(PFC) needs to be disabled by default. This patch fixes that.
It also fixes an issue where traffic was occasionally not forwarded by
testpmd. In those cases only ~4770 pps were seen on one of the ports
rather than the full rate (>20 Mpps).
Fixes: 6f0a707e5b55 ("net/i40e: enable DCB on SRIOV VFs") Signed-off-by: Jingjing Wu <jingjing.wu@intel.com> Tested-by: David Hunt <david.hunt@intel.com>
Jeff Guo [Thu, 6 Apr 2017 02:35:26 +0000 (10:35 +0800)]
net/i40e: fix hash input set on X722
There are some new PCTYPEs on X722, but they have not been exposed in
the RTE library, so if the corresponding hash input set cannot be
configured for these packet types, the hash function won't work.
We therefore need to handle them based on the translation between the
new PCTYPEs and the original PCTYPEs.
Fixes: b6a0ec418274 ("i40e: use AQ for Rx control register read/write") Signed-off-by: Jeff Guo <jia.guo@intel.com> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Receiving packets without any offload flags set in the mbuf is not
very useful, and performance tests with testpmd indicate that little
benefit is gained with the current code by turning off the flags.
This makes the build-time option pointless, so we can remove it.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
With the mbuf rework, we now have 8 contiguous bytes to be rearmed in
the mbuf just before the 8 bytes of ol_flags. If we don't do the rearm
write inside the descriptor ring replenishment function, and instead
delay it until the packet is received, we can do a single 16B write
inside the Rx function to set both the rearm data and the flags
together.
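A minimal x86 sketch of that single 16-byte store, relying on the
layout described above (8-byte rearm block immediately followed by the
8-byte ol_flags); the template and flag values are whatever the driver
has precomputed:

    #include <emmintrin.h>  /* SSE2 */
    #include <rte_mbuf.h>

    /* Write the rearm data (data_off, refcnt, nb_segs, port) and ol_flags
     * with one 16B store instead of two separate 8B writes. */
    static inline void
    write_rearm_and_flags(struct rte_mbuf *m, uint64_t rearm_template,
                          uint64_t ol_flags)
    {
        __m128i v = _mm_set_epi64x((long long)ol_flags,
                                   (long long)rearm_template);

        /* Low 64 bits land on rearm_data, high 64 bits on ol_flags. */
        _mm_storeu_si128((__m128i *)&m->rearm_data, v);
    }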
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Receiving packets without any offload flags set in the mbuf is not
very useful, and performance tests with testpmd indicate little to no
benefit is gained with the current code by turning off the flags.
This makes the build-time option pointless, so we can remove it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
With the mbuf rework, we now have 8 contiguous bytes to be rearmed in
the mbuf just before the 8 bytes of ol_flags. If we don't do the rearm
write inside the descriptor ring replenishment function, and instead
delay it until the packet is received, we can do a single 16B write
inside the Rx function to set both the rearm data and the flags
together.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Signed-off-by: Shepard Siegel <shepard.siegel@atomicrules.com> Signed-off-by: John Miller <john.miller@atomicrules.com> Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Ed Czeck [Tue, 4 Apr 2017 19:50:23 +0000 (15:50 -0400)]
net/ark: stub PMD for Atomic Rules Arkville
Enable Arkville on supported configurations
Add overview documentation
Minimum driver support for valid compile
Arkville PMD is not supported on ARM or PowerPC at this time
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com> Signed-off-by: John Miller <john.miller@atomicrules.com>
Henry Cai [Wed, 5 Apr 2017 13:19:53 +0000 (21:19 +0800)]
net/i40e: fix allocation check
The function i40evf_add_del_all_mac_addr() did not check the return
value of rte_zmalloc().
Fixes: 97ac72aa71a9 ("i40e: support setting VF MAC address") Cc: stable@dpdk.org Signed-off-by: Henry Cai <caihe@huawei.com> Acked-by: Helin Zhang <helin.zhang@intel.com>
Henry Cai [Tue, 28 Mar 2017 07:32:20 +0000 (15:32 +0800)]
net/cxgbe: fix possible null pointer dereference
Check return value of malloc.
Fixes: 3bd122eef2cc ("cxgbe/base: add hardware API for Chelsio T5 series adapters") Cc: stable@dpdk.org Signed-off-by: Henry Cai <caihe@huawei.com> Acked-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Mempool_perf autotest currently does perf regression for:
* nocache
* cache
Introducing default_pool, mainly targeted at ext-mempool regression
testing. Ext-mempool doesn't need the 'cache' mode, so test-case
support is only added for the 'nocache' mode.
So, to run the ext-mempool perf regression, the user has to set
RTE_MBUF_DEFAULT_MEMPOOL_OPS="<>"
There is a chance of duplication, i.e. if the user sets
RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc" then the regression will
run twice for 'ring_mp_mc'.
Mempool test currently supports:
* ring_mp_mc
* stack
Adding a new default pool option. So: ring* + stack + default (which
can be 'stack' or 'ring').
* This way, whatever value RTE_MBUF_DEFAULT_MEMPOOL_OPS is set to, it
will be verified,
* even if that means duplicating some tests (for example when "stack"
is set as default and is already part of the standard tests).
From the discussion in [1], it was observed that an application
should have a default pool handler already linked in, even in the
case of shared builds.
The ring handler is especially important because the packet mbuf
creation API refers to ring_mp_mc as the default handler.
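For reference, a minimal sketch of the mbuf pool creation path that
relies on that default handler (sizing values are illustrative):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* rte_pktmbuf_pool_create() selects the handler named by
     * RTE_MBUF_DEFAULT_MEMPOOL_OPS ("ring_mp_mc" unless overridden), which
     * is why the ring handler must be linked into the application. */
    static struct rte_mempool *
    create_default_pktmbuf_pool(void)
    {
        return rte_pktmbuf_pool_create("mbuf_pool",
                8191,   /* number of mbufs (illustrative) */
                256,    /* per-lcore cache size (illustrative) */
                0,      /* application private area size */
                RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
    }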
build error:
.../lib/librte_eventdev/rte_eventdev.c:371:6:
error: logical not is only applied to the left hand side of this
bitwise operator [-Werror,-Wlogical-not-parentheses]
if (!dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
^
Added parentheses after the '!' to evaluate the bitwise operator first.
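The change boils down to one conditional, shown here for clarity:

    /* Before: '!' binds to event_dev_cfg alone, so the flag test is wrong. */
    if (!dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)

    /* After: negate the result of the bitwise AND. */
    if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT))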
clang 4 gives "taking address of packed member may result in an
unaligned pointer value" warnings in a few locations [1].
Disabled "-Waddress-of-packed-member" warning for clang >= 4
[1] build errors:
.../lib/librte_eal/common/eal_common_memzone.c:275:25:
error: taking address of packed member 'mlock' of class or structure
'rte_mem_config' may result in an unaligned pointer value
[-Werror,-Waddress-of-packed-member]
rte_rwlock_write_lock(&mcfg->mlock);
^~~~~~~~~~~
.../lib/librte_ip_frag/rte_ipv4_reassembly.c:139:31:
error: taking address of packed member 'src_addr' of class or structure
'ipv4_hdr' may result in an unaligned pointer value
[-Werror,-Waddress-of-packed-member]
psd = (unaligned_uint64_t *)&ip_hdr->src_addr;
^~~~~~~~~~~~~~~~
.../lib/librte_vhost/vhost_user.c:1037:34:
error: taking address of packed member 'payload' of class or structure
'VhostUserMsg' may result in an unaligned pointer value
[-Werror,-Waddress-of-packed-member]
vhost_user_set_vring_num(dev, &msg.payload.state);
^~~~~~~~~~~~~~~~~
Jan Blunck [Tue, 11 Apr 2017 15:44:48 +0000 (17:44 +0200)]
ethdev: remove PCI helper from generic ethdev header
This moves the rte_eth_copy_pci_info() into the PCI specific ethdev
header. As a side effect this also removes it from the list of symbols
exported by the rte_ethdev library.
Signed-off-by: Jan Blunck <jblunck@infradead.org> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Jan Blunck [Tue, 11 Apr 2017 15:44:09 +0000 (17:44 +0200)]
eal: parse driver argument before probing drivers
In some cases the virtual device name should be totally different
from the driver being used for the device. Therefore, let's parse the
devargs for the "driver" argument before probing drivers in
vdev_probe_all_drivers().
Signed-off-by: Jan Blunck <jblunck@infradead.org> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Jan Blunck [Tue, 11 Apr 2017 15:44:08 +0000 (17:44 +0200)]
eal: add name field to generic device
This adds a name field to the generic struct rte_device. The EAL
checks that the name is populated when registering a device, but
doesn't enforce globally unique names, as this is left to the bus
implementations.
Signed-off-by: Jan Blunck <jblunck@infradead.org> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Jan Blunck [Tue, 11 Apr 2017 15:44:11 +0000 (17:44 +0200)]
vdev: add virtual device arguments helper function
This adds the rte_vdev_device_args() helper function to prepare for
changing the virtual drivers' probe() functions to take a
rte_vdev_device pointer instead of the name+args strings.
Jan Blunck [Tue, 11 Apr 2017 15:44:10 +0000 (17:44 +0200)]
vdev: add virtual device name helper function
This adds the rte_vdev_device_name() helper function to retrieve the
rte_vdev_device name, which makes moving the name of the low-level
device into struct rte_device easier in the future.
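A hypothetical sketch of how a virtual driver's probe callback could
use these two helpers once it takes a device pointer (the driver code
itself is illustrative):

    #include <rte_log.h>
    #include <rte_vdev.h>

    /* Probe callback using the new accessors instead of the old
     * name+args string parameters. */
    static int
    my_vdev_probe(struct rte_vdev_device *dev)
    {
        const char *name = rte_vdev_device_name(dev);
        const char *args = rte_vdev_device_args(dev);

        RTE_LOG(INFO, PMD, "probing vdev %s with args '%s'\n",
                name, args ? args : "");

        /* ... allocate and initialize the device using name/args ... */
        return 0;
    }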
Signed-off-by: Jan Blunck <jblunck@infradead.org> Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>