dpdk.git
7 years ago  crypto/aesni_gcm: fix leading spaces
Pablo de Lara [Mon, 22 May 2017 10:52:34 +0000 (11:52 +0100)]
crypto/aesni_gcm: fix leading spaces

Fixes: 26c2e4ad5ad4 ("cryptodev: add capabilities discovery")

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
7 years ago  mem: fix malloc element resize with padding
Jamie Lavigne [Thu, 8 Jun 2017 19:12:17 +0000 (19:12 +0000)]
mem: fix malloc element resize with padding

Currently when a malloc_elem is split after resizing, any padding
present in the elem is ignored.  This causes the resized elem to be too
small when padding is present, and user data can overwrite the beginning
of the following malloc_elem.

Solve this by including the size of the padding when computing where to
split the malloc_elem.

Fixes: af75078fece3 ("first public release")

Signed-off-by: Jamie Lavigne <lavignen@amazon.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
7 years ago  test/mbuf: remove global mempool
Santosh Shukla [Mon, 26 Jun 2017 10:04:54 +0000 (10:04 +0000)]
test/mbuf: remove global mempool

Currently, pool resources are allocated statically
and are never freed. As a result, the test cannot run more than once.

The fix removes the static dependency from the test application and
allocates and frees the resources dynamically, so the test can be
run more than once.

Fixes: af75078fece3 ("first public release")

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
7 years ago  pci: reduce debug log verbosity when probing
Ferruh Yigit [Tue, 27 Jun 2017 14:23:34 +0000 (15:23 +0100)]
pci: reduce debug log verbosity when probing

When debug-level logging is enabled (--log-level=8), a message is
printed for each driver that fails to probe the device, like:

EAL: Driver (net_ark) doesn't match the device
EAL: Driver (net_avp) doesn't match the device
EAL: Driver (net_bnxt) doesn't match the device
EAL: Driver (net_cxgbe) doesn't match the device
EAL: Driver (net_e1000_igb) doesn't match the device
EAL: Driver (net_e1000_igb_vf) doesn't match the device
EAL: Driver (net_e1000_em) doesn't match the device
EAL: Driver (net_ena) doesn't match the device
EAL: Driver (net_enic) doesn't match the device
EAL: Driver (net_fm10k) doesn't match the device
EAL: Driver (net_i40e) doesn't match the device
EAL: Driver (net_i40e_vf) doesn't match the device
....

Overall, hundreds of similar lines are printed, because every driver is
tried against every device. This is too much noise, and there is already
a log message printed when a device is matched.

Remove the debug log completely.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  mk: fix excluding files when installing docs
Luca Boccassi [Fri, 23 Jun 2017 18:41:47 +0000 (19:41 +0100)]
mk: fix excluding files when installing docs

The --exclude parameter must be passed before the input directory to
tar, otherwise it's silently ignored and the .doctrees directory is
installed by make install-doc.

Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
7 years ago  mk: use make silent flag to print HTML doc version
Luca Boccassi [Fri, 23 Jun 2017 18:41:46 +0000 (19:41 +0100)]
mk: use make silent flag to print HTML doc version

Depending on the environment, make might echo the command being run.
In mk/rte.sdkdoc.mk make is used to print the DPDK version to be
piped to doxygen. This causes the following to be written:

<div id="projectname">DPDK
&#160;<span id="projectnumber">/usr/bin/make-f/build/dpdk-jYjqnr/
 dpdk-16.11.2/mk/rte.sdkconfig.mkshowversion</span>
</div>

Use -s (--silent) to prevent echoing.

Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
7 years ago  ethdev: add missing symbol in map
Luca Boccassi [Thu, 22 Jun 2017 12:04:59 +0000 (13:04 +0100)]
ethdev: add missing symbol in map

The function rte_eth_tx_done_cleanup() was missing from the map file,
so it could not be used by applications linking to shared libraries.
pktgen has used it since version 3.2.0.
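
A minimal usage sketch (port 0, Tx queue 0, freeing up to 32 consumed
mbufs; the values are illustrative):

    int n = rte_eth_tx_done_cleanup(0, 0, 32);
    if (n < 0)
        printf("Tx cleanup not supported or failed: %d\n", n);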

Fixes: 44a718c457b5 ("ethdev: add API to free consumed buffers in Tx ring")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
7 years ago  pci: set default numa node for broken systems
Tonghao Zhang [Thu, 11 May 2017 01:56:33 +0000 (18:56 -0700)]
pci: set default numa node for broken systems

The NUMA node information for PCI devices provided through
sysfs is invalid for AMD Opteron(TM) Processor 62xx and 63xx
on Red Hat Enterprise Linux 6, and for VMs on some hypervisors,
so it is worth checking that the reported value is valid.

Typical wrong numa node in some VMs:
$ cat /sys/devices/pci0000:00/0000:00:18.6/numa_node
-1
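
A sketch of the kind of check added (defaulting to node 0 is part of
this illustration, not a quote of the patch):

    if (numa_node < 0) {
        RTE_LOG(WARNING, EAL, "Invalid NUMA socket, default to 0\n");
        numa_node = 0;
    }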

Signed-off-by: Tonghao Zhang <nic@opencloud.tech>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
7 years ago  ring: fix return value for dequeue
Anand B Jyoti [Fri, 2 Jun 2017 06:29:51 +0000 (11:59 +0530)]
ring: fix return value for dequeue

The error return code of the rte_ring_sc_dequeue_bulk() and
rte_ring_mc_dequeue_bulk() functions should be -ENOENT rather
than -ENOBUFS, as stated in the function documentation.

Fixes: cfa7c9e6fc1f ("ring: make bulk and burst return values consistent")
Cc: stable@dpdk.org
Signed-off-by: Anand B Jyoti <anand.b.jyoti@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
7 years ago  eventdev: define default value for dequeue timeout
Jerin Jacob [Thu, 18 May 2017 08:48:27 +0000 (14:18 +0530)]
eventdev: define default value for dequeue timeout

Defining the value 0 as the default dequeue timeout
helps the application reduce its configuration setup
when it is interested only in the default
timeout value.

Removed the "min_dequeue_limit" negative test case, as the
min_dequeue_limit value can now be zero (which is the
default timeout) if the driver reports
dev_info->min_dequeue_timeout_ns = 1.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  test/eventdev: verify priority test prerequisite
Jerin Jacob [Thu, 1 Jun 2017 06:41:23 +0000 (12:11 +0530)]
test/eventdev: verify priority test prerequisite

The octeontx-specific priority test expects the priority of each
event queue to be a unique value. Verify that condition
before proceeding to test the priority.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/octeontx: fix missing enqueue SMP barrier
Jerin Jacob [Fri, 9 Jun 2017 13:16:03 +0000 (18:46 +0530)]
event/octeontx: fix missing enqueue SMP barrier

Typically, RTE_EVENT_OP_NEW is issued by the producer
lcore. To make the writes issued by the producer lcore
visible to the worker lcore, an SMP write barrier
is required on producer enqueue. Fix the missing
rte_smp_wmb() on enqueue with RTE_EVENT_OP_NEW.
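
A minimal sketch of the pattern, using the public enqueue API (obj and
the ids are illustrative):

    ev.op = RTE_EVENT_OP_NEW;
    ev.event_ptr = obj;       /* payload written by the producer lcore */
    rte_smp_wmb();            /* order payload writes before the enqueue */
    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);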

Fixes: f10d322eff76 ("event/octeontx: support worker enqueue")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
7 years ago  event/octeontx: improve dequeue performance
Jerin Jacob [Fri, 9 Jun 2017 12:06:50 +0000 (17:36 +0530)]
event/octeontx: improve dequeue performance

A switch tag wait is a costly operation, as it may
translate to an IOB read if the core's swtag cache is not updated.
Do the tag switch wait only when there is a pending tag request on
the same hardware work slot.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
7 years ago  event/skeleton: advertise the burst mode capability
Jerin Jacob [Tue, 20 Jun 2017 14:26:48 +0000 (19:56 +0530)]
event/skeleton: advertise the burst mode capability

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/sw: advertise the burst mode capability
Jerin Jacob [Wed, 14 Jun 2017 04:57:33 +0000 (10:27 +0530)]
event/sw: advertise the burst mode capability

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
7 years ago  eventdev: introduce burst mode capability
Jerin Jacob [Wed, 14 Jun 2017 04:57:32 +0000 (10:27 +0530)]
eventdev: introduce burst mode capability

Introduce the burst mode capability flag to express that the event
device is capable of operating in burst mode for the enqueue (forward,
release) and dequeue operations. If the device is not capable, the
application still uses rte_event_dequeue_burst() and
rte_event_enqueue_burst(), but the PMD accepts only one event at a time,
which is in any case transparent under the current rte_event_*_burst
API semantics.

It serves two purposes:
1) Fix the performance regression on PMDs which support only non-burst
mode; this issue is two-fold.

Typically the burst_worker main loop consists of the following pseudo code:

    while (1) {
        uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev,
                                BURST_SIZE, timeout);

        for (i = 0; i < nb_rx; i++) {
            process(ev[i]);
            if (is_release_required(ev[i]))
                release_the_event(ev[i]);
        }

        uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
                                ev, nb_rx);
        while (nb_tx < nb_rx)
            nb_tx += rte_event_enqueue_burst(dev_id, port_id,
                                ev + nb_tx, nb_rx - nb_tx);
    }

Typically the non_burst_worker main loop consists of the following pseudo code:

    while (1) {
        uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id, &ev, 1,
                                timeout);
        if (!nb_rx)
            continue;
        process(ev);
        while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
            ;
    }

The following overhead has been seen on non-burst-capable PMDs with the
burst worker version:
- an extra explicit release (the PMD otherwise releases implicitly on
the next dequeue), which adds the cost of an additional driver function
call;
- an extra "for" loop for event processing whose iteration count the
compiler cannot determine at compile time.

2) Simplify the application configuration by sparing the application
from finding the correct enqueue and dequeue depth across different
PMDs. If burst mode is not supported, the PMD can ignore the depth
field. This enables writing portable applications and makes the
RFC eventdev_pipeline application work on the OCTEONTX PMD:
http://dpdk.org/dev/patchwork/patch/23799/

If an application wishes to get the maximum performance on a non-burst
capable PMD, it can keep the packet processing functions inline and
launch the workers based on the capability.
The generic burst-based worker still works on those PMDs without
any code change; this scheme is needed only when the application wants
to get the maximum performance out of non-burst capable PMDs.

This patch is based on real-world test cases
(http://dpdk.org/dev/patchwork/patch/24832/), where a 20.9% per-core
performance drop was observed without this scheme.

See the worker_wrapper(), perf_queue_worker() and perf_queue_worker_burst()
functions in http://dpdk.org/dev/patchwork/patch/24832/ for how to use this
scheme in a portable way without losing performance on either set of PMDs.
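
For reference, a minimal sketch of selecting a worker based on the
capability (the launch_worker() helper and the worker names are
illustrative):

    struct rte_event_dev_info info;

    rte_event_dev_info_get(dev_id, &info);
    if (info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE)
        launch_worker(burst_worker);
    else
        launch_worker(non_burst_worker);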

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
7 years ago  eventdev: make vdev init and uninit functions optional
Jerin Jacob [Fri, 9 Jun 2017 08:37:29 +0000 (14:07 +0530)]
eventdev: make vdev init and uninit functions optional

Make the libeventdev library independent of the VDEV bus by moving the
vdev PMD specific functions to the rte_eventdev_pmd_vdev.h header file.
An eventdev VDEV PMD can include that header to get the generic
eventdev VDEV init and uninit functions.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  eventdev: make PCI probe and remove functions optional
Jerin Jacob [Fri, 9 Jun 2017 08:37:28 +0000 (14:07 +0530)]
eventdev: make PCI probe and remove functions optional

Make the libeventdev library independent of the PCI bus by moving the
PCI PMD specific functions to the rte_eventdev_pmd_pci.h header file.
An eventdev PCI PMD can include that header to get the generic
eventdev PCI probe and remove functions.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  eventdev: restructure release function
Jerin Jacob [Fri, 9 Jun 2017 08:37:27 +0000 (14:07 +0530)]
eventdev: restructure release function

Remove rte_event_dev_close() from the rte_event_pmd_release() function so
that rte_event_pmd_release() can be used in a stateless way. This
enables the rte_event_pmd_vdev_uninit() function to avoid using the
eventdev_globals global variable and removes the need for exposing a
global variable to PMDs.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  eventdev: remove PCI dependency from generic structures
Jerin Jacob [Fri, 9 Jun 2017 08:37:26 +0000 (14:07 +0530)]
eventdev: remove PCI dependency from generic structures

Remove the PCI dependency from the generic data structures
and move the PCI specific code to rte_event_pmd_pci*.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/sw: fix mapped qid count with parallel queue
Harry van Haaren [Wed, 7 Jun 2017 10:04:44 +0000 (11:04 +0100)]
event/sw: fix mapped qid count with parallel queue

This commit fixes the counting of queues mapped to a port
when the queue type is PARALLEL. Not incrementing
the count here could lead to an underflow of the count when
unlinking at a later date.

Fixes: 371a688fc159 ("event/sw: support linking queues to ports")

Reported-by: Jesse Bruni <jesse.bruni@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
7 years ago  event/octeontx: fix error msg in mbox wait response
Santosh Shukla [Mon, 29 May 2017 07:29:40 +0000 (12:59 +0530)]
event/octeontx: fix error msg in mbox wait response

Fixes: 6da9d2457 ("event/octeontx: add mailbox support")

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/octeontx: add driver name in info get
Jerin Jacob [Fri, 2 Jun 2017 09:26:06 +0000 (14:56 +0530)]
event/octeontx: add driver name in info get

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/sw: fix credit tracking in port dequeue
Harry van Haaren [Thu, 1 Jun 2017 15:45:54 +0000 (16:45 +0100)]
event/sw: fix credit tracking in port dequeue

Single-link optimized ports previously did not correctly track
credits when dequeued, and re-enqueued as a FORWARD type. This
could "inflate" the number of credits in the system.

A unit test is added to reproduce and verify the issue, and the
fixed implementation counts FORWARD packets, and reduces the
number of credits the port has if it is of single-link type.

Fixes: 656af9180014 ("event/sw: add worker core functions")

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Gage Eads <gage.eads@intel.com>
7 years ago  eventdev: clarify the worker thread workflow
Jerin Jacob [Thu, 18 May 2017 11:10:41 +0000 (16:40 +0530)]
eventdev: clarify the worker thread workflow

If the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
is not set, the device is centralized and thus needs
a dedicated scheduling thread that repeatedly calls
rte_event_schedule().

Update the worker thread code snippet to match
the description.
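
A minimal sketch of such a dedicated scheduling loop, as the API stood at
the time (the done flag is illustrative):

    if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
        /* run on a dedicated lcore */
        while (!done)
            rte_event_schedule(dev_id);
    }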

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
7 years ago  eventdev: clarify atomic and ordered queue config
Gage Eads [Wed, 21 Jun 2017 13:32:51 +0000 (19:02 +0530)]
eventdev: clarify atomic and ordered queue config

The nb_atomic_flows and nb_atomic_order_sequences fields are only inspected
if the queue is configured for atomic or ordered scheduling, respectively.
This commit updates the documentation to reflect that.

Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
7 years ago  event/sw: add queue-to-port stats
Harry van Haaren [Thu, 11 May 2017 09:56:26 +0000 (10:56 +0100)]
event/sw: add queue-to-port stats

This commit adds a new statistic to the SW eventdev PMD.
The statistic shows how many packets were sent from a
queue to a port. This provides information on how traffic
from a specific queue is being load-balanced to worker cores.

Note that these numbers should be compared across all queue
stages - the load-balancing does not try to perfectly share
each queue's traffic, rather it balances the overall traffic
from all queues to the ports.

The statistic is printed from the rte_eventdev_dump() function,
as well as being made available via the xstats API.

Unit tests have been updated to expect more per-queue statistics,
and the correctness of counts and counts after reset is verified.

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
7 years ago  ip_frag: handle MTU sizes not aligned to 8 bytes
Allain Legacy [Tue, 14 Mar 2017 15:14:47 +0000 (11:14 -0400)]
ip_frag: handle MTU sizes not aligned to 8 bytes

The rte_ipv4_fragment_packet API expects that the link/interface MTU value
passed in be divisible by 8 bytes.  Given the name of the parameter is
"mtu" rather than "frag_size" it is not necessarily the case that it will
be divisible by 8.  An MTU of 1500 happens to produce a max fragment size
of 1480 (1500 - sizeof(ipv4_hdr)) which is divisible by 8 but other MTU
values such as 1600 or 9000 do not produce values that are divisible by 8.

Unfortunately, the API checks that the frag_size value produced is
divisible by 8 with a call to RTE_ASSERT which is only enabled when the
RTE_LOG_LEVEL >= RTE_LOG_DEBUG.  In cases where the log level is set
normally the code silently continues and produces IP fragments that have
invalid fragment offset values.

An application may not have control over what MTU a user selects. Rather
than have each application adjust the MTU to pass a suitable value to the
fragmentation API, this change modifies the fragmentation API to handle
cases where the "mtu" argument is not divisible by 8 and to adjust the
internal "frag_size" automatically.
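
As a worked example: with an MTU of 9000, the maximum payload per fragment
is 9000 - 20 = 8980 bytes (20 bytes for an IPv4 header without options),
which is not a multiple of 8; the API now rounds this down internally to
8976 bytes so that every non-final fragment yields a valid 8-byte-aligned
fragment offset.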

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
7 years ago  ip_frag: free mbufs on reassembly table destroy
Dahir Osman [Mon, 5 Jun 2017 15:49:01 +0000 (11:49 -0400)]
ip_frag: free mbufs on reassembly table destroy

The rte_ip_frag_table_destroy procedure simply releases the memory for the
table without freeing the packet buffers that may be referenced in the hash
table for in-flight or incomplete packet reassembly operations.  To prevent
leaked mbufs go through the list of fragments and free each one
individually.

Fixes: 416707812c03 ("ip_frag: refactor reassembly code into a proper library")
Cc: stable@dpdk.org
Reported-by: Matt Peters <matt.peters@windriver.com>
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
7 years ago  test/bonding: remove socket id check
Pablo de Lara [Wed, 21 Jun 2017 05:07:33 +0000 (06:07 +0100)]
test/bonding: remove socket id check

When creating a virtual PMD to test link bonding,
the socket id was checked to verify that it was in the range
of available sockets.
This check is unnecessary, as the socket specified
might not have memory anyway, so it will fail
at memory allocation.

Therefore, the best solution is to remove this check.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
7 years ago  net/bonding: remove socket id check
Pablo de Lara [Wed, 21 Jun 2017 05:07:32 +0000 (06:07 +0100)]
net/bonding: remove socket id check

The socket id parsed from the user was checked to verify
that it was in the range of available sockets.
This check is unnecessary, as the socket specified
might not have memory anyway, so it will fail
at memory allocation.

Therefore, the best solution is to remove this check.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
7 years ago  crypto/scheduler: remove socket id check
Pablo de Lara [Wed, 21 Jun 2017 05:07:31 +0000 (06:07 +0100)]
crypto/scheduler: remove socket id check

The socket id parsed from the user was checked to verify
that it was in the range of available sockets.
This check is unnecessary, as the socket specified
might not have memory anyway, so it will fail
at memory allocation.

Therefore, the best solution is to remove this check.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
7 years ago  cryptodev: remove socket id check
Pablo de Lara [Wed, 21 Jun 2017 05:07:30 +0000 (06:07 +0100)]
cryptodev: remove socket id check

The socket id parsed from the user was checked to verify
that it was in the range of available sockets.
This check is unnecessary, as the socket specified
might not have memory anyway, so it will fail
at memory allocation.

Therefore, the best solution is to remove this check.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
7 years ago  app/testpmd: always build VF and MACsec functions
Thomas Monjalon [Thu, 15 Jun 2017 10:37:20 +0000 (12:37 +0200)]
app/testpmd: always build VF and MACsec functions

These functions are supported only on ixgbe.
However, they should still appear in the help and return an error
if the function is not supported or not enabled.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
7 years ago  lpm: fix index of tbl8
Wei Dai [Mon, 19 Jun 2017 04:14:38 +0000 (12:14 +0800)]
lpm: fix index of tbl8

From v20 to v1604, the number of tbl8 groups can be up to 1 << 24.
Casting to (uint8_t) or (uint16_t) may truncate a tbl8 index
in v1604 and produce a wrong group number.
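
For example, with up to 1 << 24 groups a tbl8 index of 70000 does not fit
in either type: it would wrap to 112 in a uint8_t and to 4464 in a uint16_t.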

Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
Cc: stable@dpdk.org
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
7 years ago  vhost: log error for badly negotiated features
Dariusz Stojaczyk [Fri, 16 Jun 2017 14:32:05 +0000 (16:32 +0200)]
vhost: log error for badly negotiated features

Since a vhost_user_set_features failure is not handled in any way, a
single error log has been added to at least let the user know that
something has gone wrong.

Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  net/virtio: zero the whole memory zone
Tiwei Bie [Mon, 12 Jun 2017 04:34:30 +0000 (12:34 +0800)]
net/virtio: zero the whole memory zone

Zero the whole memory zone instead of the first few bytes.

Fixes: c1f86306a026 ("virtio: add new driver")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  vhost: fix crash on NUMA
Yuanhan Liu [Fri, 2 Jun 2017 00:14:46 +0000 (08:14 +0800)]
vhost: fix crash on NUMA

The queue allocation was changed from allocating one queue pair at a
time to one queue at a time. Most of the changes have been done, but
one was missed: the copy size for the old queue in numa_realloc() is
still based on a queue pair, which leads to memory being overwritten.
As a result, a crash may happen.

Fix it by specifying the right copy size. Also, the net queue macros
are not used any more, so remove them.

Fixes: ab4d7b9f1afc ("vhost: turn queue pair to vring")
Cc: stable@dpdk.org
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jens Freimann <jfreiman@redhat.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
7 years ago  vhost: access VhostUsrMsg via packed struct
Daniel Verkamp [Fri, 26 May 2017 11:59:15 +0000 (13:59 +0200)]
vhost: access VhostUsrMsg via packed struct

Accessing fields of a packed struct through unaligned pointers is
undefined behavior. Instead of passing pointers to particular fields,
a pointer to the root struct should be used. This patch does exactly
that.
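
A simplified illustration of the pattern (the struct and the parse_*
helpers are made up for this example, not the actual vhost code):

    struct __attribute__((packed)) example_msg {
        uint8_t  type;
        uint64_t value;     /* potentially misaligned inside the struct */
    };

    /* undefined behavior: passing a pointer to the unaligned field */
    parse_u64(&msg->value);

    /* instead: pass the root struct and access the field through it */
    parse_msg(msg);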

Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  vhost: fix guest pages memory leak
Dariusz Stojaczyk [Fri, 26 May 2017 11:59:14 +0000 (13:59 +0200)]
vhost: fix guest pages memory leak

This patch fixes a memory leak.
virtio_net::guest_pages is allocated in vhost_setup_mem_table(),
reallocated in add_one_guest_page(), but never freed.

Fixes: e246896178e6 ("vhost: get guest/host physical address mappings")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Jens Freimann <jfreiman@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  vhost: fix malloc size too small
Dariusz Stojaczyk [Fri, 26 May 2017 11:59:13 +0000 (13:59 +0200)]
vhost: fix malloc size too small

The amount of allocated memory was too small, causing a buffer overflow.

Fixes: eb32247457fe ("vhost: export guest memory regions")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Jens Freimann <jfreiman@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  vhost: support Rx queue count request
Zhihong Wang [Fri, 26 May 2017 17:18:02 +0000 (13:18 -0400)]
vhost: support Rx queue count request

This patch implements the rx_queue_count op for the vhost PMD by adding
a helper function, rte_vhost_rx_queue_count(), to the vhost lib.

The rx_queue_count op returns the avail count of a vhost RX queue and
helps to understand the queue fill level.
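
A minimal usage sketch through the generic ethdev API (port and queue ids
are illustrative):

    /* query the Rx queue count of queue 0 (for vhost, the avail count) */
    int cnt = rte_eth_rx_queue_count(port_id, 0);
    if (cnt >= 0)
        printf("vhost rxq fill level: %d\n", cnt);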

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  vhost: check allocation of guest pages
Jens Freimann [Thu, 11 May 2017 15:25:26 +0000 (17:25 +0200)]
vhost: check allocation of guest pages

When we try to allocate guest pages we need to check the return value of
malloc(). Print an error message and return when it fails.

Signed-off-by: Jens Freimann <jfreiman@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
7 years ago  mem: support page locking on FreeBSD
Thomas Monjalon [Thu, 15 Jun 2017 17:37:00 +0000 (19:37 +0200)]
mem: support page locking on FreeBSD

The function rte_mem_lock_page() was added for Linux only.
The file eal_common_memory.c is a better place to make it
available in FreeBSD also.

The issue is seen when trying to compile bnxt on FreeBSD:
bnxt_hwrm.c: undefined reference to `rte_mem_lock_page'

Fixes: 3097de6e6bfb ("mem: get physical address of any pointer")

Reported-by: Fangfang Wei <fangfangx.wei@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
7 years ago  ethdev: tidy up endianness handling in flow API
Adrien Mazarguil [Thu, 15 Jun 2017 15:48:59 +0000 (17:48 +0200)]
ethdev: tidy up endianness handling in flow API

The flow API defines several structures whose fields must be specified in
network order. This commit documents them using explicit type names and
related endianness conversion macros.

No ABI change.

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
7 years ago  eal: add static endianness conversion macros
Adrien Mazarguil [Thu, 15 Jun 2017 15:48:58 +0000 (17:48 +0200)]
eal: add static endianness conversion macros

These macros resolve to constant expressions that allow developers to
perform endianness conversion on static/const objects, even outside of
function scope as they do not translate to function calls.

This is most useful for static initializers and constant values (whenever
it has to be performed at compilation time). Run-time endianness conversion
of variable values should keep using rte_*_to_*() calls for best
performance.
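
A small usage sketch (http_port and port are illustrative names):

    /* compile-time conversion, usable in a static initializer */
    static const rte_be16_t http_port = RTE_BE16(80);

    /* run-time conversion of a variable value */
    rte_be16_t wire_port = rte_cpu_to_be_16(port);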

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
7 years ago  eal: introduce big and little endian types
Nelio Laranjeiro [Thu, 15 Jun 2017 15:48:57 +0000 (17:48 +0200)]
eal: introduce big and little endian types

This commit introduces new rte_{le,be}{16,32,64}_t types and updates
rte_{le,be,cpu}_to_{le,be,cpu}_*() accordingly.

These types are added for documentation purposes, mainly to clarify the
byte ordering to use for storage when not CPU order. Doing so eliminates
uncertainty and conversion mistakes.
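
An illustrative declaration (the structure itself is an example, not taken
from the tree):

    struct example_vxlan_hdr {
        rte_be32_t vx_flags;    /* stored in network (big endian) order */
        rte_be32_t vx_vni;
    };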

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
7 years ago  app/testpmd: fix build with bypass without ixgbe
Thomas Monjalon [Thu, 15 Jun 2017 09:25:06 +0000 (11:25 +0200)]
app/testpmd: fix build with bypass without ixgbe

When ixgbe bypass is not explicitly disabled while ixgbe is disabled:
app/test-pmd/testpmd.c:304:27: error:
‘RTE_PMD_IXGBE_BYPASS_TMT_OFF’ undeclared here

The ixgbe bypass feature is meaningful only if ixgbe is enabled.
So we need to check both.

A better fix would be to always enable bypass and remove this option.

Fixes: e261265e42a1 ("ethdev: move bypass functions to ixgbe PMD")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
7 years ago  app/testpmd: fix build without ixgbe
Thomas Monjalon [Thu, 15 Jun 2017 09:34:16 +0000 (11:34 +0200)]
app/testpmd: fix build without ixgbe

cmd_set_vf_rxmode_parsed() was defined only in the build context
of RTE_LIBRTE_IXGBE_PMD:
app/test-pmd/cmdline.c:13817:27: error: ‘cmd_set_vf_rxmode’ undeclared here

Fixes: 4cfe399f6550 ("net/bnxt: support to set VF rxmode")

Reported-by: Yongseok Koh <yskoh@mellanox.com>
Reported-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
7 years ago  fix typos using codespell utility
Jerin Jacob [Wed, 7 Jun 2017 05:05:06 +0000 (10:35 +0530)]
fix typos using codespell utility

Fixing typos across dpdk source code using codespell utility.
Skipped the ethdev driver's base code fixes to keep the base
code intact.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
7 years ago  pdump: remove unnecessary header include
Reshma Pattan [Mon, 12 Jun 2017 09:46:11 +0000 (10:46 +0100)]
pdump: remove unnecessary header include

The unnecessary header file rte_pci.h was missed in the earlier cleanup.
Remove it now.

Fixes: bb900072ffaa ("pdump: revert PCI device name conversion")

Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
7 years ago  ethdev: add isolated mode to flow API
Adrien Mazarguil [Wed, 14 Jun 2017 14:48:51 +0000 (16:48 +0200)]
ethdev: add isolated mode to flow API

Isolated mode can be requested by applications on individual ports to avoid
ingress traffic outside of the flow rules they define.

Besides making ingress more deterministic, it allows PMDs to safely reuse
resources otherwise assigned to handle the remaining traffic, such as
global RSS configuration settings, VLAN filters, MAC address entries,
legacy filter API rules and so on in order to expand the set of possible
flow rule types.

To minimize code complexity, PMDs implementing this mode may provide
partial (or even no) support for flow rules when not enabled (e.g. no
priorities, no RSS action). Applications written to use the flow API are
therefore encouraged to enable it.

Once effective, leaving isolated mode may not be possible depending on PMD
implementation.
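
A minimal usage sketch (error handling shortened; port_id is assumed to be
a valid, not-yet-started port):

    struct rte_flow_error error;

    /* request isolated mode before configuring/starting the port */
    if (rte_flow_isolate(port_id, 1, &error) != 0)
        printf("isolated mode not supported: %s\n", error.message);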

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
7 years ago  hash: fix icc build
Ferruh Yigit [Tue, 13 Jun 2017 16:42:12 +0000 (17:42 +0100)]
hash: fix icc build

build error with icc version 17.0.4 (gcc version 7.0.0 compatibility):

In file included from .../dpdk/lib/librte_hash/rte_fbk_hash.h(59),
                 from .../dpdk/lib/librte_hash/rte_fbk_hash.c(54):
.../dpdk/x86_64-native-linuxapp-icc/include/rte_hash_crc.h(480):
 error #1292: unknown attribute "fallthrough"
                __attribute__ ((fallthrough));
                                ^

In file included from .../dpdk/lib/librte_hash/rte_fbk_hash.h(59),
                 from .../dpdk/lib/librte_hash/rte_fbk_hash.c(54):
.../dpdk/x86_64-native-linuxapp-icc/include/rte_hash_crc.h(486):
 error #1292: unknown attribute "fallthrough"
                __attribute__ ((fallthrough));
                                ^
This code path is hit when gcc >= 7 is installed and ICC does not
recognize the fallthrough attribute.

Fixed by disabling the code when compiling with ICC.

Fixes: 3dfb9facb055 ("lib: add switch fall-through comments")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  net/mlx5: fix build with gcc 7.1
Ferruh Yigit [Tue, 13 Jun 2017 16:42:11 +0000 (17:42 +0100)]
net/mlx5: fix build with gcc 7.1

build error:
.../dpdk/drivers/net/mlx5/mlx5_fdir.c:
  In function ‘fdir_filter_to_flow_desc’:
.../dpdk/drivers/net/mlx5/mlx5_fdir.c:146:18:
 error: this statement may fall through [-Werror=implicit-fallthrough=]
   desc->dst_port = fdir_filter->input.flow.udp4_flow.dst_port;
   ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../dpdk/drivers/net/mlx5/mlx5_fdir.c:147:2: note: here
  case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
  ^~~~

Fixed by adding fallthrough comment to the code.

Fixes: 76f5c99e6840 ("mlx5: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
7 years ago  net/enic: fix build with gcc 7.1
Ferruh Yigit [Tue, 13 Jun 2017 16:42:10 +0000 (17:42 +0100)]
net/enic: fix build with gcc 7.1

build error:

.../dpdk/drivers/net/enic/base/vnic_dev.c:
  In function ‘vnic_dev_get_mac_addr’:
.../dpdk/drivers/net/enic/base/vnic_dev.c:470:12:
  error: ‘a0’ is used uninitialized in this function
  [-Werror=uninitialized]
  args[0] = *a0;
            ^~~
...dpdk/drivers/net/enic/base/vnic_dev.c:
  In function ‘vnic_dev_classifier’:
...dpdk/drivers/net/enic/base/vnic_dev.c:471:12:
  error: ‘a1’ may be used uninitialized in this function
  [-Werror=maybe-uninitialized]
  args[1] = *a1;
            ^~~
Fixed by providing initial values.

Fixes: 9913fbb91df0 ("enic/base: common code")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  net/i40e: fix memset size
Ferruh Yigit [Tue, 13 Jun 2017 16:42:09 +0000 (17:42 +0100)]
net/i40e: fix memset size

This causes build error with gcc 7.1.1 :

...dpdk/drivers/net/i40e/i40e_flow.c:2357:2:
error: ‘memset’ used with length equal to number of elements without
       multiplication by element size [-Werror=memset-elt-size]
  memset(off_arr, 0, I40E_MAX_FLXPLD_FIED);
  ^~~~~~

...dpdk/drivers/net/i40e/i40e_flow.c:2358:2:
error: ‘memset’ used with length equal to number of elements without
       multiplication by element size [-Werror=memset-elt-size]
  memset(len_arr, 0, I40E_MAX_FLXPLD_FIED);
  ^~~~~~

Fixed by providing correct size to memset.
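
The corrected calls look roughly like this (scaling by the element size,
or equivalently using sizeof on the arrays themselves):

    memset(off_arr, 0, sizeof(off_arr));
    memset(len_arr, 0, sizeof(len_arr));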

Fixes: 6ced3dd72f5f ("net/i40e: support flexible payload parsing for FDIR")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
7 years ago  kni: fix build with gcc 7.1
Ferruh Yigit [Tue, 13 Jun 2017 16:42:08 +0000 (17:42 +0100)]
kni: fix build with gcc 7.1

build error:
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:
  In function ‘igb_kni_probe’:
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2483:30:
  error: ‘%d’ directive output may be truncated writing between 1 and 5
  bytes into a region of size between 0 and 11
  [-Werror=format-truncation=]
        "%d.%d, 0x%08x, %d.%d.%d",
                              ^~
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2483:8:
  note: directive argument in the range [0, 65535]
        "%d.%d, 0x%08x, %d.%d.%d",
        ^~~~~~~~~~~~~~~~~~~~~~~~~
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2481:4:
  note: ‘snprintf’ output between 23 and 43 bytes into a destination of
  size 32
    snprintf(adapter->fw_version,
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        sizeof(adapter->fw_version),
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        "%d.%d, 0x%08x, %d.%d.%d",
        ~~~~~~~~~~~~~~~~~~~~~~~~~~
        fw.eep_major, fw.eep_minor, fw.etrack_id,
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        fw.or_major, fw.or_build, fw.or_patch);
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Fixed by increasing buffer size to 43 as suggested in compiler log.

Fixes: b9ee370557f1 ("kni: update kernel driver ethtool baseline")
Cc: stable@dpdk.org
Reported-by: Nirmoy Das <ndas@suse.de>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Markos Chandras <mchandras@suse.de>
7 years ago  net/thunderx: manage PCI device mapping for SQS VFs
Jerin Jacob [Fri, 9 Jun 2017 10:27:46 +0000 (15:57 +0530)]
net/thunderx: manage PCI device mapping for SQS VFs

Since commit e84ad157b7bc ("pci: unmap resources if probe fails"),
EAL unmaps the PCI device if the ethdev probe returns a positive or
negative value.

The nicvf thunderx PMD needs special treatment for Secondary queue set
(SQS) PCIe VF devices, where it expects the memory not to be unmapped
or freed even though the device is not registered with the ethdev
subsystem.

Enable this behavior by using the RTE_PCI_DRV_KEEP_MAPPED_RES
PCI driver flag.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  eal/pci: introduce a PCI driver flag
Jerin Jacob [Fri, 9 Jun 2017 10:27:45 +0000 (15:57 +0530)]
eal/pci: introduce a PCI driver flag

Some ethdev drivers, like the nicvf thunderx PMD, need special treatment
for Secondary queue set (SQS) PCIe VF devices, where the memory must not
be unmapped or freed even though the device is not registered with the
ethdev subsystem.

Introduce a new RTE_PCI_DRV_KEEP_MAPPED_RES
PCI driver flag to request that the PCI subsystem not unmap the mapped
PCI resources (PCI BAR addresses) if an unsupported device is detected.

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  drivers/net: document missing speed capabilities feature
Ferruh Yigit [Mon, 15 May 2017 12:30:46 +0000 (13:30 +0100)]
drivers/net: document missing speed capabilities feature

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
7 years ago  ethdev: remove driver name from device private data
Ferruh Yigit [Mon, 12 Jun 2017 15:25:12 +0000 (16:25 +0100)]
ethdev: remove driver name from device private data

rte_driver->name holds the driver name, and all physical and virtual
devices have access to it.

Previously it was not possible for virtual ethernet devices to access
the rte_driver->name field (because eth_dev used to keep only pci_dev),
so the driver name had to be saved in the device private struct.

After the re-works on bus and vdev, all bus types can access
rte_driver.

It is now possible to remove the driver name from the ethdev device
private data and use eth_dev->device->driver->name instead.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Jan Blunck <jblunck@infradead.org>
7 years ago  net/ring: use EAL APIs in PMD specific API
Ferruh Yigit [Mon, 12 Jun 2017 15:25:11 +0000 (16:25 +0100)]
net/ring: use EAL APIs in PMD specific API

When a ring PMD is created via the PMD specific API instead of the EAL
abstraction, it misses the virtual device creation done by the EAL vdev
layer.

This makes the eth_dev unusable in the same way as other PMDs are used,
because of some missing fields, like rte_device->name.

Now the API calls the EAL APIs to create ring PMDs.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
7 years ago  net/ring: set ethernet device field
Ferruh Yigit [Mon, 12 Jun 2017 15:25:10 +0000 (16:25 +0100)]
net/ring: set ethernet device field

The eth_dev->device link was missing for ring PMD, adding it.

This is to generalize rte_device access from eth_dev.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
7 years ago  net/szedata2: add more supported firmwares
Matej Vido [Mon, 12 Jun 2017 12:03:22 +0000 (14:03 +0200)]
net/szedata2: add more supported firmwares

Add IBUF and OBUF offsets definitions for new firmwares.

Signed-off-by: Matej Vido <vido@cesnet.cz>
7 years ago  net/szedata2: move ibuf and obuf to specific header
Matej Vido [Mon, 12 Jun 2017 12:03:21 +0000 (14:03 +0200)]
net/szedata2: move ibuf and obuf to specific header

Signed-off-by: Matej Vido <vido@cesnet.cz>
7 years ago  net/szedata2: refactor ibuf and obuf address definition
Matej Vido [Mon, 12 Jun 2017 12:03:20 +0000 (14:03 +0200)]
net/szedata2: refactor ibuf and obuf address definition

This is to prepare for firmwares with multiple ibufs and obufs.
Ibufs and obufs are the modules in FPGA firmware implementing
the Ethernet port.
There is one ibuf+obuf per Ethernet port.
The cards and firmwares allow one physical port to be one Ethernet
port or split into more Ethernet ports, e.g. one 100GE physical
port can be one Ethernet port of 100GE or split into ten Ethernet
ports of 10GE.
All DMA queues in the device are shared between all Ethernet ports.
Offsets of ibufs and obufs are defined in array.
Functions which operate on ibufs and obufs iterate over this array.

Signed-off-by: Matej Vido <vido@cesnet.cz>
7 years ago  net/szedata2: refactor ibuf and obuf read and write
Matej Vido [Mon, 12 Jun 2017 12:03:19 +0000 (14:03 +0200)]
net/szedata2: refactor ibuf and obuf read and write

Remove unused read and write functions.
Use rte_read*, rte_write* functions to access ibuf and obuf
address space.

Signed-off-by: Matej Vido <vido@cesnet.cz>
7 years ago  net/szedata2: refactor ibuf and obuf names
Matej Vido [Mon, 12 Jun 2017 12:03:18 +0000 (14:03 +0200)]
net/szedata2: refactor ibuf and obuf names

Prefix "cgmii" is removed because it is too specific.
There are different ibuf/obuf modules in different firmwares
but the address space definition is the same.
This patch makes the name general.

Signed-off-by: Matej Vido <vido@cesnet.cz>
7 years ago  net/igb: flush all the filter
Wei Zhao [Mon, 12 Jun 2017 06:48:28 +0000 (14:48 +0800)]
net/igb: flush all the filter

This patch adds a function to flush all the filter lists
and filter rules on a port.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: destroy consistent filter
Wei Zhao [Mon, 12 Jun 2017 06:48:27 +0000 (14:48 +0800)]
net/igb: destroy consistent filter

This patch adds a function to destroy the flow filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: create consistent filter
Wei Zhao [Mon, 12 Jun 2017 06:48:26 +0000 (14:48 +0800)]
net/igb: create consistent filter

This patch adds a function to create the flow directory filter.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: parse flow API flex filter
Wei Zhao [Mon, 12 Jun 2017 06:48:25 +0000 (14:48 +0800)]
net/igb: parse flow API flex filter

Check if the rule is a flex byte rule, and get the flex info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: parse flow API TCP SYN filter
Wei Zhao [Mon, 12 Jun 2017 06:48:24 +0000 (14:48 +0800)]
net/igb: parse flow API TCP SYN filter

Check if the rule is a TCP SYN rule, and get the SYN info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: parse flow API ethertype filter
Wei Zhao [Mon, 12 Jun 2017 06:48:23 +0000 (14:48 +0800)]
net/igb: parse flow API ethertype filter

Check if the rule is an ethertype rule, and get the ethertype info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: parse flow API n-tuple filter
Wei Zhao [Mon, 12 Jun 2017 06:48:22 +0000 (14:48 +0800)]
net/igb: parse flow API n-tuple filter

Add a rule validation function, check if the rule is an
n-tuple rule, and get the n-tuple info.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: restore flex type filter
Wei Zhao [Mon, 12 Jun 2017 06:48:21 +0000 (14:48 +0800)]
net/igb: restore flex type filter

Add support for restoring flex type filter in SW.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: restore ether type filter
Wei Zhao [Mon, 12 Jun 2017 06:48:20 +0000 (14:48 +0800)]
net/igb: restore ether type filter

Add support for restoring ether type filter in SW.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: restore n-tuple filter
Wei Zhao [Mon, 12 Jun 2017 06:48:19 +0000 (14:48 +0800)]
net/igb: restore n-tuple filter

Add support for restoring n-tuple filter in SW.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/igb: store and restore TCP SYN filter
Wei Zhao [Mon, 12 Jun 2017 06:48:18 +0000 (14:48 +0800)]
net/igb: store and restore TCP SYN filter

Add support for storing and restoring TCP SYN filter in SW.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
7 years ago  net/liquidio: do not touch mbuf initialized fields
Shijith Thotton [Thu, 8 Jun 2017 11:22:51 +0000 (16:52 +0530)]
net/liquidio: do not touch mbuf initialized fields

Avoid re-initializing of mbuf fields which are set while in pool.
Replaced lio_recv_buffer_alloc with rte_pktmbuf_alloc.

See commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool").

Signed-off-by: Shijith Thotton <shijith.thotton@caviumnetworks.com>
7 years ago  net/bnxt: fix reporting of link status
Ajit Khaparde [Fri, 9 Jun 2017 04:24:48 +0000 (23:24 -0500)]
net/bnxt: fix reporting of link status

This patch fixes incorrect reporting of link status

1) When link is down, set speed to zero. Otherwise a wrong non-zero
   speed will be displayed.

2) DAC cables can detect that there is a signal, but that does not
   necessarily mean the link is up. The code previously treated this
   as link up.

Fixes: 7bc8e9a227cc ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/bnxt: update HWRM defines
Ajit Khaparde [Fri, 9 Jun 2017 04:24:47 +0000 (23:24 -0500)]
net/bnxt: update HWRM defines

Some HWRM defines are missing from hsi_struct_def_dpdk.h
This patch adds them.

Also remove duplicate HWRM_RING_GRP_ALLOC entry.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/bnxt: move PMD specific functions
Ajit Khaparde [Fri, 9 Jun 2017 04:24:46 +0000 (23:24 -0500)]
net/bnxt: move PMD specific functions

Move PMD specific functions in the appropriate rte_pmd_bnxt.c file

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/i40e: support ether pattern for FDIR
Beilei Xing [Fri, 9 Jun 2017 08:21:23 +0000 (16:21 +0800)]
net/i40e: support ether pattern for FDIR

Previously, the i40e PMD would select the ethertype filter
parser when adding ether pattern rules. In fact,
FDIR also supports the ether pattern.
This patch adds ether pattern support for FDIR.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/i40e: update supported patterns for FDIR
Beilei Xing [Fri, 9 Jun 2017 08:21:22 +0000 (16:21 +0800)]
net/i40e: update supported patterns for FDIR

This patch updates supported patterns for flow
director filters.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/i40e: support input set selection for FDIR
Beilei Xing [Fri, 9 Jun 2017 08:21:21 +0000 (16:21 +0800)]
net/i40e: support input set selection for FDIR

This patch supports input set selection for flow
director filter.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/i40e: support flexible payload parsing for FDIR
Beilei Xing [Fri, 9 Jun 2017 08:21:20 +0000 (16:21 +0800)]
net/i40e: support flexible payload parsing for FDIR

This patch adds flexible payload parsing support for
flow director filter.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/i40e: add NVGRE flow parsing
Beilei Xing [Wed, 7 Jun 2017 06:53:59 +0000 (14:53 +0800)]
net/i40e: add NVGRE flow parsing

This patch adds NVGRE flow parsing function to support NVGRE
classification.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/i40e: refactor VXLAN flow parsing function
Beilei Xing [Wed, 7 Jun 2017 06:53:58 +0000 (14:53 +0800)]
net/i40e: refactor VXLAN flow parsing function

The current VXLAN parsing function is not easy to read when parsing the
filter type. This patch optimizes the function and makes it more
readable.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
7 years ago  net/liquidio: fix MTU calculation from port configuration
Shijith Thotton [Tue, 6 Jun 2017 11:04:34 +0000 (16:34 +0530)]
net/liquidio: fix MTU calculation from port configuration

max_rx_pkt_len member of port RX configuration indicates max frame
length. Ethernet header and CRC length should be subtracted from it to
find MTU.
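
A sketch of the calculation, using the standard Ethernet header and CRC
lengths (max_rx_pkt_len is the configured max frame length):

    /* e.g. 1518 - 14 - 4 = 1500 */
    uint16_t mtu = max_rx_pkt_len - ETHER_HDR_LEN - ETHER_CRC_LEN;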

Fixes: 605164c8e79d ("net/liquidio: add API to validate VF MTU")
Cc: stable@dpdk.org
Signed-off-by: Shijith Thotton <shijith.thotton@caviumnetworks.com>
7 years ago  net/mlx4: support user space Rx interrupt event
Moti Haimovsky [Tue, 6 Jun 2017 14:48:29 +0000 (17:48 +0300)]
net/mlx4: support user space Rx interrupt event

Implement rxq interrupt callbacks

Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
7 years ago  net/qede: refactor Tx routine
Harish Patil [Wed, 7 Jun 2017 07:42:22 +0000 (00:42 -0700)]
net/qede: refactor Tx routine

Refactor TX routine such that TX BD updates can all be grouped together.
Based on the TX offloads requested the TX bitfields are calculated in
a temporary variable and TX BDs are updated at the end. This will minimize
the if checks also. This change is done to easily accommodate newer TX
offload operations in the future.

Signed-off-by: Harish Patil <harish.patil@cavium.com>
7 years ago  net/qede: fix VXLAN tunnel Tx offload flag setting
Harish Patil [Wed, 7 Jun 2017 07:42:21 +0000 (00:42 -0700)]
net/qede: fix VXLAN tunnel Tx offload flag setting

This patch fixes missing PKT_TX_TUNNEL_VXLAN Tx offload flag from the
supported Tx offloads and an incorrect tunnel TX BD bit setting.

Fixes: 3d4bb4411683 ("net/qede: add fastpath support for VXLAN tunneling")
Cc: stable@dpdk.org
Signed-off-by: Harish Patil <harish.patil@cavium.com>
7 years ago  net/qede/base: upgrade the FW to 8.20.0.0
Rasesh Mody [Wed, 7 Jun 2017 07:42:20 +0000 (00:42 -0700)]
net/qede/base: upgrade the FW to 8.20.0.0

This patch adds changes to upgrade to 8.20.0.0 FW.

Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
7 years ago  net/qede: refactoring multi-queue implementation
Harish Patil [Wed, 7 Jun 2017 07:42:19 +0000 (00:42 -0700)]
net/qede: refactoring multi-queue implementation

This patch does the following refactoring and cleanup:

- As part of multi-queue support a struct member called 'type' was added
  in struct qede_fastpath in order to identify whether a queue is RX or
  TX and take actions based on that. This was unnecessary in the first
  place since pointers to RX and TX queues are already available in
  rte_eth_dev->data. So all usage of fp->type is removed.

- Remove remaining additional layer of internal callbacks for RX/TX
  queues and fastpath related operations from the qed_eth_ops_pass.
  With this change the files qede_eth_if.[c,h] are no longer needed.

- Add new per-queue start/stop APIs instead of clubbing it all together.

- Remove multiple TXQs references (num_tc and fp->txqs) since CoS is not
  supported.

- Enable sharing of the status block for each queue pair.

- Remove enum qede_dev_state and instead make use of existing port
  states RTE_ETH_QUEUE_STATE_STOPPED/RTE_ETH_QUEUE_STATE_STARTED.

- Move qede_dev_start() and qede_dev_stop() to qede_ethdev.c from
  qede_rxtc.c.

Signed-off-by: Harish Patil <harish.patil@cavium.com>
7 years ago  net/qede: refactoring vport handling code
Harish Patil [Wed, 7 Jun 2017 07:42:18 +0000 (00:42 -0700)]
net/qede: refactoring vport handling code

The refactoring is mainly for two reasons:

- To remove an additional layer of internal callbacks for all vport
  related operations from the struct qed_eth_ops_pass. Instead, we
  can invoke base APIs directly.

- Splitting a single large vport-update configuration into multiple and
  independent vport-update operations. Each configuration would touch
  only the required config bits that needs an update.

Signed-off-by: Harish Patil <harish.patil@cavium.com>
7 years ago  doc: update release notes for bnxt PMD
Ajit Khaparde [Thu, 1 Jun 2017 17:07:23 +0000 (12:07 -0500)]
doc: update release notes for bnxt PMD

Update release doc briefly describing updates to bnxt PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/bnxt: support to set VF rxmode
Ajit Khaparde [Thu, 1 Jun 2017 17:07:22 +0000 (12:07 -0500)]
net/bnxt: support to set VF rxmode

This patch adds support to configure the VF L2 Rx settings.
The per VF setting is maintained in bnxt_child_vf_info.l2_rx_mask

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/bnxt: configure a default VF VLAN
Ajit Khaparde [Thu, 1 Jun 2017 17:07:21 +0000 (12:07 -0500)]
net/bnxt: configure a default VF VLAN

This patch adds code to insert a default VF VLAN.
Also track the current default VLAN per vnic for the VF.
When setting the default VLAN, avoid setting it to the current value.

Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
7 years ago  net/bnxt: support to add a VF MAC address
Ajit Khaparde [Thu, 1 Jun 2017 17:07:20 +0000 (12:07 -0500)]
net/bnxt: support to add a VF MAC address

This patch adds support to allocate a filter and program
it in the hardware for every MAC address added to the specified
function.

Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>