This part of the counter optimization changes the DV counters to be
indexed, as was already done for Verbs counters. With this change, every
mlx5 flow counter can be addressed by an index.
The counter index is composed of the pool index and the counter's offset
in the pool counter array. The batch and non-batch counter dcs ID offset
0x800000 is used to keep the indexes apart: since batch counter dcs IDs
start from 0x800000 and non-batch counter dcs IDs start from 0, the
0x800000 offset is added to batch counter indexes to mark them as batch
counters.
The counter pointer in the rte_flow struct is changed to hold the index
instead of a pointer, saving 4 bytes of memory for every rte_flow. With
millions of rte_flow entries, this saves megabytes of memory.
This commit is part of the DV counter optimization.
Batch counter dcs IDs start from 0x800000 while non-batch counter dcs
IDs start from 0. Now that counters are indexed by the pool index and
the counter's offset in the pool counters_raw array, the raw index can
be the same for a batch and a non-batch counter. Adding the 0x800000
offset to batch counter indexes makes it possible to tell whether an
index belongs to a batch or a non-batch container pool.
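A minimal sketch of the indexing scheme described above (the pool size
and helper names are assumptions, not the actual mlx5 code):

#include <stdbool.h>
#include <stdint.h>

#define COUNTERS_PER_POOL 512     /* assumed pool array size */
#define BATCH_CNT_OFFSET 0x800000 /* offset from the commit text */

/* Compose an index from the pool index and the counter's offset in
 * the pool array; batch counters are tagged with the offset. */
static inline uint32_t
cnt_make_index(uint32_t pool_idx, uint32_t offset, bool batch)
{
        uint32_t idx = pool_idx * COUNTERS_PER_POOL + offset;

        return batch ? idx + BATCH_CNT_OFFSET : idx;
}

/* Recover the pool index, offset and batch flag from an index. */
static inline void
cnt_parse_index(uint32_t idx, uint32_t *pool_idx, uint32_t *offset,
                bool *batch)
{
        *batch = idx >= BATCH_CNT_OFFSET;
        if (*batch)
                idx -= BATCH_CNT_OFFSET;
        *pool_idx = idx / COUNTERS_PER_POOL;
        *offset = idx % COUNTERS_PER_POOL;
}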
The query generation was introduced to avoid reallocating a counter
before its statistics are fully updated: a counter released between the
query trigger and the query handler may miss the packets that arrived
in the trigger-handler gap. To be safe, a counter can only be
reallocated once the pool query_gen is greater than the counter
query_gen + 1, which indicates that a new query round has finished and
the statistics are fully updated.
Splitting the pool query_gen into start_query_gen and end_query_gen
makes it easier to identify counters released inside the gap period,
and lets counters released before the query trigger or after the query
handler be reallocated more efficiently.
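A rough sketch of the split-generation bookkeeping (struct and field
names are illustrative, not the actual mlx5 code):

#include <stdbool.h>
#include <stdint.h>

struct cnt_pool {
        uint64_t start_query_gen; /* bumped when a query is triggered */
        uint64_t end_query_gen;   /* bumped when the handler completes */
};

/* A released counter records the pool generation at release time; it
 * may be handed out again only once a full query round has completed
 * since then, so its statistics are guaranteed to be up to date. */
static inline bool
cnt_reusable(const struct cnt_pool *pool, uint64_t released_gen)
{
        return pool->end_query_gen > released_gen + 1;
}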
A non-batch counter pool allocates only one counter at a time, so after
the newly allocated counter is popped out, the pool becomes empty and
is moved to the end of the container's pool list.
A new non-batch counter allocation may come together with a new counter
pool allocation, meaning the new counter comes from a new pool. When a
new pool is allocated, the container is resized and switched. In that
case, once the pool becomes empty, it must be added to the pool list of
the new container it belongs to.
Update the container pointer along with the pool allocation to avoid
adding the pool to the wrong container.
Michal Krawczyk [Wed, 8 Apr 2020 08:29:21 +0000 (10:29 +0200)]
net/ena: update driver version to v2.1.0
Version 2.1.0 refactors the Tx and Rx paths, includes a few bug fixes,
and adds new features that will become available with the newest
hardware.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:20 +0000 (10:29 +0200)]
doc: add notes on ENA usage on metal instances
As AWS metal instances support IOMMU, using igb_uio or vfio-pci can
lead to problems (when to use which module), especially since vfio-pci
does not support SMMU on arm64.
To clear up how those modules should be used under various setup
conditions (with or without IOMMU) on metal instances, a more detailed
explanation was added.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:19 +0000 (10:29 +0200)]
net/ena: reuse zero length Rx descriptor
Some ENA devices can pass a descriptor with length 0 to the driver. To
avoid an extra allocation, such a descriptor can be reused by simply
putting it back to the device.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:18 +0000 (10:29 +0200)]
net/ena: refactor Tx
The original Tx function was very long and contained both the cleanup
and the sending sections. Because of that, it had many local variables,
deep indentation, and was hard to read.
The function was split into 2 sections:
* Sending - responsible for preparing the mbuf, mapping it to the
device descriptors and, finally, sending the packet to the HW
* Cleanup - releasing the packets already sent by the HW. The loop
releasing packets was reworked a bit to make the intention more visible
and aligned with other parts of the driver.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:17 +0000 (10:29 +0200)]
net/ena: use macros for ring index operations
To improve code readability, an abstraction was added for operating on
IO ring indexes.
The driver used to define a local variable for the ring mask in every
function that needed to operate on ring indexes. Now the mask is stored
in the ring, as its value won't change unless the ring size changes,
and macros for advancing indexes using the mask have been added.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
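A minimal sketch of such mask-based helpers (macro and field names are
illustrative, not the exact driver definitions):

#include <stdint.h>

/* Ring size must be a power of two so that mask = size - 1. */
#define RING_IDX_NEXT(idx, mask) (((idx) + 1) & (mask))
#define RING_IDX_ADD(idx, n, mask) (((idx) + (n)) & (mask))

struct io_ring {
        uint16_t next_to_use; /* advanced with RING_IDX_NEXT() */
        uint16_t size_mask;   /* stored once, not per-function locals */
};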
Michal Krawczyk [Wed, 8 Apr 2020 08:29:16 +0000 (10:29 +0200)]
net/ena: limit refill threshold by fixed value
The divider used for both the Tx and Rx cleanup/refill thresholds can
cause too big a delay for really big rings - for example, with an 8k Rx
ring, the refill won't trigger until the 1024 threshold is reached, and
the driver will then try to allocate that many descriptors at once.
Capping the threshold with a fixed value - 256 in this case - limits
the maximum time spent in the repopulate function.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
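The resulting calculation reduces to a sketch like this (the divider
value is an assumption; the 256 cap comes from the commit text):

#include <rte_common.h> /* RTE_MIN */

#define REFILL_THRESH_DIVIDER 8   /* assumed divider */
#define REFILL_THRESH_PACKET 256  /* fixed cap from the commit text */

/* With an 8k ring this yields 256 instead of the old 1024. */
static inline unsigned int
refill_threshold(unsigned int ring_size)
{
        return RTE_MIN(ring_size / REFILL_THRESH_DIVIDER,
                       (unsigned int)REFILL_THRESH_PACKET);
}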
Michal Krawczyk [Wed, 8 Apr 2020 08:29:15 +0000 (10:29 +0200)]
net/ena: rework getting number of available descriptors
The ena_com API should be preferred for getting the number of
used/available descriptors, unless an extra calculation needs to be
performed.
Some helper variables were added to store values that are reused later.
Moreover, to limit the number of sent/received packets to the number of
available descriptors, RTE_MIN is used instead of an if statement,
which did a similar thing but was less descriptive.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:14 +0000 (10:29 +0200)]
net/ena: refactor Rx
* Split the main Rx function into multiple ones - the body of the main
function was very big and contained 2 nested loops, which made the
code hard to read.
* Rework how the Rx mbuf chains are created - instead of a while loop
with a conditional check for the first segment, handle the first
segment outside the loop and, if more fragments exist, process them
inside it (see the sketch after this entry).
* Initialize the Rx mbuf using a simple function - it is common to the
1st and the following segments.
* Create a structure for the Rx buffer to align it with the Tx path and
other ENA drivers, and to make the variable name more descriptive - in
DPDK the Rx buffer must hold only an mbuf, so initially an array of
mbufs was used as the buffers. However, the name "rx_buffer_info" made
this misleading. To make it clearer, a structure holding the mbuf
pointer was added, and it can now be extended in the future without
reworking the driver.
* Remove redundant variables and conditional checks.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
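A simplified sketch of the reworked chain assembly described above
(helper name and array-based arguments are illustrative, not the
driver code):

#include <rte_mbuf.h>

/* The first segment is handled outside the loop; the remaining
 * fragments, if any, are chained inside it. */
static struct rte_mbuf *
assemble_rx_chain(struct rte_mbuf *segs[], const uint16_t lens[],
                  uint16_t nb_segs)
{
        struct rte_mbuf *head = segs[0];
        struct rte_mbuf *tail = head;
        uint16_t i;

        /* Common initialization, shared with single-segment packets. */
        head->data_len = lens[0];
        head->pkt_len = lens[0];
        head->nb_segs = 1;
        for (i = 1; i < nb_segs; i++) {
                segs[i]->data_len = lens[i];
                tail->next = segs[i];
                tail = segs[i];
                head->pkt_len += lens[i];
                head->nb_segs++;
        }
        tail->next = NULL;
        return head;
}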
Michal Krawczyk [Wed, 8 Apr 2020 08:29:13 +0000 (10:29 +0200)]
net/ena: disable meta caching
In LLQ (Low-latency queue) mode, the device can indicate that metadata
descriptor caching is disabled. In that case, the driver must send a
valid meta descriptor with every Tx packet.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:12 +0000 (10:29 +0200)]
net/ena: add Tx drops statistic
The ENA device can report, in the AENQ handler, the number of Tx
packets that were dropped and not sent.
This statistic is a global value for the device, and because
rte_eth_stats has no field that could hold it (it isn't a Tx error),
it is presented as an extended statistic.
As the current design of extended statistics prevents tx_drops from
being an atomic variable, and both tx_drops and rx_drops are only
updated from the AENQ handler, both were made non-atomic for
consistency.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:10 +0000 (10:29 +0200)]
net/ena: support large LLQ headers
The default LLQ (Low-latency queue) maximum header size is 96 bytes,
which can be too small for some types of packets, such as IPv6 packets
with multiple extension headers. This can be fixed by using large LLQ
headers.
If the device supports larger LLQ headers, the user can activate them
with the device argument 'large_llq_hdr' set to '1'.
If the device doesn't support this feature, the default value (96B) is
used.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
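A usage example with testpmd (the PCI address is illustrative; the
devarg name comes from the commit text):
testpmd -w 00:06.0,large_llq_hdr=1 -- -i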
Michal Krawczyk [Wed, 8 Apr 2020 08:29:09 +0000 (10:29 +0200)]
net/ena: refactor getting IO queues capabilities
The values read from the device describe its maximum capabilities.
Because of that, the names of the fields storing those values, of the
functions and of the temporary variables should be more descriptive,
to improve the self-documentation of the code.
In connection with this, getting the maximum queue size could be
simplified - no hardcoded values are needed, as the device sends its
capabilities anyway.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:08 +0000 (10:29 +0200)]
net/ena: set IO ring size to valid value
IO rings were configured with the maximum allowed Tx/Rx ring size.
However, the application may decide to create smaller rings.
This patch uses the value stored in the ring instead of the value from
the adapter, which only indicates the maximum allowed size.
Fixes: df238f84c0a2 ("net/ena: recreate HW IO rings on start and stop") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:06 +0000 (10:29 +0200)]
net/ena/base: fix indentation of multiple defines
As the alignment of the defines wasn't consistent, it was removed
altogether: instead of multiple spaces or tabs, a single space is now
used after the define name.
Fixes: 99ecfbf845b3 ("ena: import communication layer") Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:05 +0000 (10:29 +0200)]
net/ena/base: fix types for printing timestamps
Because ena_com is used by multiple platforms with different C
versions, PRIu64 cannot be used directly and must be defined in the
platform file.
Fixes: b2b02edeb0d6 ("net/ena/base: upgrade HAL for new HW features") Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:04 +0000 (10:29 +0200)]
net/ena/base: use 48-bit memory addresses
The ENA device uses 48-bit memory addresses for IO. Because of that,
the upper limit had to be updated.
From the driver's perspective it's just a cosmetic change that makes
the definition of the structure 'ena_common_mem_addr' more descriptive;
the address value was already verified against the valid range in the
function 'ena_com_mem_addr_set()'.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
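The 48-bit split is roughly the following (the layout mirrors the
description above; field names are assumptions):

#include <stdint.h>

/* 48-bit IO address: 32 low bits plus 16 high bits. */
struct ena_mem_addr {
        uint32_t mem_addr_low;
        uint16_t mem_addr_high;
        uint16_t reserved16;
};

/* Addresses above this limit are rejected in ena_com_mem_addr_set(). */
#define ENA_MAX_IO_ADDR ((1ULL << 48) - 1)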
Michal Krawczyk [Wed, 8 Apr 2020 08:29:02 +0000 (10:29 +0200)]
net/ena/base: fix indentation in CQ polling
Spaces instead of tabs were used for the indentation.
Fixes: 3adcba9a8987 ("net/ena: update HAL to the newer version") Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:01 +0000 (10:29 +0200)]
net/ena/base: fix documentation of functions
The documentation format was aligned and a few typos were fixed.
Fixes: 99ecfbf845b3 ("ena: import communication layer") Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:00 +0000 (10:29 +0200)]
net/ena/base: add accelerated LLQ mode
In order to use the accelerated LLQ (Low-latency queue) mode, the
driver must limit the Tx burst and be aware that the device has meta
caching disabled. In that situation, a valid meta descriptor must be
sent with each Tx packet.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:28:56 +0000 (10:28 +0200)]
net/ena/base: fix testing for supported hash function
There was a bug in ena_com_fill_hash_function() which caused the bit to
be shifted left one position too far.
To fix that, the ENA_FFS macro (returning the position of the first set
bit) is now used, 1 is subtracted from the hash_function value if the
device supports any hash function, and the BIT macro is used for the
shift for better readability.
Fixes: 99ecfbf845b3 ("ena: import communication layer") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com>
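The fix boils down to something like this standalone sketch (ENA_FFS
and BIT are redefined here for illustration; the driver context is
assumed):

#include <strings.h> /* ffs() */

#define ENA_FFS(x) ffs(x)  /* 1-based position of the first set bit */
#define BIT(n)     (1U << (n))

/* 'supported' is the bitmask of hash functions reported by the device. */
static unsigned int
pick_hash_function(unsigned int supported)
{
        if (supported == 0)
                return 0;
        /* ENA_FFS() is 1-based, so subtract 1 before shifting back. */
        return BIT(ENA_FFS(supported) - 1);
}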
Igor Chauskin [Wed, 8 Apr 2020 08:28:54 +0000 (10:28 +0200)]
net/ena/base: prevent allocation of zero sized memory
rte_memzone_reserve() reserves the biggest contiguous memzone available
when it receives 0 as the size parameter.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK") Cc: stable@dpdk.org Signed-off-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
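The allocation path therefore has to reject a zero size before calling
into the allocator; a sketch of the guard (the wrapper name is
illustrative):

#include <rte_memzone.h>

static const struct rte_memzone *
zone_reserve_checked(const char *name, size_t size, int socket_id)
{
        /* Never forward size == 0: rte_memzone_reserve() would grab
         * the biggest contiguous memzone available. */
        if (size == 0)
                return NULL;
        return rte_memzone_reserve(name, size, socket_id, 0);
}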
Igor Chauskin [Wed, 8 Apr 2020 08:28:53 +0000 (10:28 +0200)]
net/ena/base: make allocation macros thread-safe
The memory allocation region id could be non-unique due to a non-atomic
increment, causing allocation failures.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK") Cc: stable@dpdk.org Signed-off-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
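A sketch of the thread-safe id generation (the counter and name format
are illustrative):

#include <stdint.h>
#include <stdio.h>

static uint32_t mz_alloc_cnt; /* shared between threads */

/* A plain mz_alloc_cnt++ can hand the same id to two threads; the
 * atomic increment guarantees a unique memzone name. */
static void
make_unique_mz_name(char *name, size_t len)
{
        uint32_t id = __atomic_fetch_add(&mz_alloc_cnt, 1,
                                         __ATOMIC_RELAXED);

        snprintf(name, len, "ena_alloc_%u", id);
}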
Michal Krawczyk [Wed, 8 Apr 2020 08:28:52 +0000 (10:28 +0200)]
net/ena: ensure Rx buffer size is at least 1400B
Some ENA devices can't handle buffers smaller than 1400B. Because of
this limitation, the buffer size is checked and enforced during Rx
queue setup.
If it's below the allowed value, the PMD won't finish its configuration
successfully.
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
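The setup-time check is essentially the following (the constant name is
assumed; the 1400B limit comes from the commit text):

#include <errno.h>
#include <rte_mbuf.h>

#define ENA_RX_BUF_MIN_SIZE 1400u

/* Returns 0 when the mempool's buffers are large enough for the HW. */
static int
check_rx_buf_size(struct rte_mempool *mp)
{
        uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
                            RTE_PKTMBUF_HEADROOM;

        return buf_size < ENA_RX_BUF_MIN_SIZE ? -EINVAL : 0;
}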
Updating a switch rule's action from "to VSI" to "to VSI List" should
only happen when the same rule has already been programmed with a
different forwarding destination. This is already handled by the code
block below:
m_entry = ice_find_adv_rule_entry(...)
if (m_entry) {
	...
	ice_adv_add_update_vsi_list(...)
}
The following ice_update_pkt_fwd_rule call is unnecessary and should be
removed because:
1) If a switch rule's action is still "to VSI", meaning the rule is
being issued for the first time, there is no need to update it "to
VSI List".
2) The implementation does not actually match the comment: it still
updates the rule with the "to VSI" action.
Fixes: fed0c5ca5f19 ("net/ice/base: support programming a new switch recipe") Cc: stable@dpdk.org Signed-off-by: Qi Zhang <qi.z.zhang@intel.com> Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Lunyuan Cui [Thu, 2 Apr 2020 07:58:47 +0000 (07:58 +0000)]
net/i40e: enable MAC address as flow director input set
Enable the source and destination MAC addresses as FDIR input set for
ipv4-other, ipv4-udp and ipv4-tcp. When OVS-DPDK works as a pure L2
switch, enabling the MAC address as FDIR input set with a Mark+RSS
action helps speed up performance. FVL FDIR supports changing the
input set to include MAC addresses.
Signed-off-by: Lunyuan Cui <lunyuanx.cui@intel.com> Acked-by: Beilei Xing <beilei.xing@intel.com>
Yunjian Wang [Tue, 7 Apr 2020 11:37:27 +0000 (19:37 +0800)]
net/nfp: fix dangling pointer on probe failure
When nfp_pf_create_dev() cleans up, it does not correctly set the
dev_private variable to NULL, which leads to a double free.
Fixes: ef28aa96e53b ("net/nfp: support multiprocess") Cc: stable@dpdk.org Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Acked-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Add the missing prototype for the 'profile_hook_rx_burst_cb' function
to fix the compiler warning seen when DPDK is built with the
'RTE_ETHDEV_PROFILE_WITH_VTUNE' config option enabled:
/home/dpdk/lib/librte_ethdev/ethdev_profile.c:17:1: warning:
no previous prototype for 'profile_hook_rx_burst_cb' [-Wmissing-prototypes]
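The usual fix for -Wmissing-prototypes is to declare the function ahead
of its definition; assuming the hook has the standard ethdev Rx
callback signature, the declaration would look like:

#include <rte_mbuf.h>

/* Prototype placed before the definition in ethdev_profile.c
 * (signature assumed to match rte_rx_callback_fn). */
uint16_t profile_hook_rx_burst_cb(uint16_t port_id, uint16_t queue_id,
                                  struct rte_mbuf *pkts[], uint16_t nb_pkts,
                                  uint16_t max_pkts, void *user_param);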
Since the ring buffer shared with the host carries both transmit
completions and receive packets, the transmitter can get starved if the
receive ring fills up.
It is better to process all outstanding events, which frees up transmit
buffer slots, even if it means dropping some packets.
Fixes: 7e6c82430702 ("net/netvsc: avoid over filling Rx descriptor ring") Cc: stable@dpdk.org Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The netvsc PMD was putting the MAC address in private data, but the
core rte_ethdev doesn't allow that. It has to be in rte_malloc'd
memory, or a message will be printed on shutdown/close:
EAL: Invalid memory
Fixes: f8279f47dd89 ("net/netvsc: fix crash in secondary process") Cc: stable@dpdk.org Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
net/netvsc: split send buffers from Tx descriptors
The VMBus has a reserved transmit area (per device) and transmit
descriptors (per queue). The previous code always kept a 1:1 mapping
between send buffers and descriptors.
This can lead to one queue starving another and also to buffer bloat.
Change to work more like FreeBSD, where there is a pool of transmit
descriptors per queue. If no send buffer is available, no aggregation
happens, but the queue can still drain.
net/netvsc: handle Rx packets during multi-channel setup
It is possible for a packet to arrive during the configuration process
when setting up multiple queue mode. This would cause configure to
fail; fix it by simply ignoring received packets while waiting for
control commands.
Use the receive ring lock to avoid possible races between oddly behaved
applications doing rx_burst and control operations concurrently.
Dekel Peled [Wed, 25 Mar 2020 08:12:31 +0000 (10:12 +0200)]
app/testpmd: enhance GTP support
This patch adds a CLI option to enter the v_pt_rsv_flags value for the
GTP flow pattern item.
It also adds GTP as a valid item in the raw_encap and raw_decap
settings.
Signed-off-by: Dekel Peled <dekelp@mellanox.com> Acked-by: Ori Kam <orika@mellanox.com>
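An illustrative testpmd command using the new token (syntax inferred
from the description, not copied from the patch):
flow create 0 ingress pattern eth / ipv4 / udp / gtp v_pt_rsv_flags is 0x30 / end actions queue index 1 / end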
After a VF reset, the VF's VSI number may change; a switch rule that
forwards packets to the old VSI number should be redirected to the new
VSI number.
The input set for the inner type of the VLAN item should be
ICE_INSET_ETHERTYPE, not ICE_INSET_VLAN_OUTER.
This MAC VLAN filter is also part of the DCF switch filter.
This patch adds switch filter support for PFCP packets. It enables the
switch filter to direct IPv4/IPv6 packets with a PFCP session or node
payload to a specific action.
net/ice: change switch parser to support flexible mask
DCF needs to support flexible mask configuration, i.e. an input set
mask may not be of the all-ones 0xFFFF type. For example, in order to
direct L2/IP multicast packets, the mask for the source IP may be
0xF0000000. This patch enables the switch filter parser to handle such
masks.
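For reference, such a partial mask would show up in a flow rule along
these lines (values taken from the text above, not from the patch):

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match only the top nibble of the source IP: mask 0xF0000000. */
static const struct rte_flow_item_ipv4 ipv4_spec = {
        .hdr.src_addr = RTE_BE32(0xE0000000),
};
static const struct rte_flow_item_ipv4 ipv4_mask = {
        .hdr.src_addr = RTE_BE32(0xF0000000), /* deliberately not all-ones */
};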
DCF on CVL is a control-plane VF which takes the responsibility of
configuring all the PF/global resources. This patch adds support for
DCF to program forwarding rules that direct packets to VFs.
When an application invokes rte_eth_dev_configure repeatedly without
setting up Rx/Tx queues, it incorrectly returns an error while trying
to restore the Rx/Tx queue configuration.
Fix the configuration sequence by checking whether any Rx/Tx queues
were configured previously before trying to restore them.
Krzysztof Kanas [Sun, 5 Apr 2020 21:16:57 +0000 (02:46 +0530)]
net/octeontx2: add TM capability
Add Traffic Management capability callbacks to provide
global, level and node capabilities. This patch also
adds documentation on Traffic Management Support.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Krzysztof Kanas [Fri, 3 Apr 2020 08:52:15 +0000 (14:22 +0530)]
net/octeontx2: add Tx queue rate limit
Add Tx queue rate-limiting support. This support is mutually exclusive
with TM support, i.e. when TM is configured, the Tx queue rate-limiting
config is no longer valid.
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Add dynamic parent and shaper update callbacks that
can be used to change RR Quantum or PIR/CIR rate dynamically
post hierarchy commit. Dynamic parent update callback only
supports updating RR quantum of a given child with respect to
its parent. There is no support yet to change priority or parent
itself.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Add support for the Traffic Management callbacks "node_add" and
"node_delete". These callbacks don't support dynamic node addition or
deletion post hierarchy commit.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Modify the resource allocation and freeing logic to support dynamic
topology commit while traffic is flowing.
This patch also modifies the SQ flush to time out based on the minimum
configured shaper rate. The SQ flush is further split into pre/post
functions to adhere to the HW spec of 96XX C0.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
net/octeontx2: enable error and RAS interrupt in configure
This patch adds routines to set/clear the NIX LF error and RAS
interrupt enable registers. The NIX LF error interrupts are triggered
if any failure occurs during NIX LF configuration. These interrupts are
enabled before any hardware configuration is initiated on the allocated
NIX LF.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Acked-by: Andrzej Ostruszka <aostruszka@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Harman Kalra [Mon, 16 Mar 2020 09:33:44 +0000 (15:03 +0530)]
net/octeontx: support Rx/Tx checksum offload
This patch implements Rx/Tx checksum offload. When a packet with a
wrong checksum is received (inner/outer L3/L4), it reports the layer
with the bad checksum, and it computes the checksum on the Tx side when
HW checksum offload is enabled.
Harman Kalra [Mon, 16 Mar 2020 09:33:42 +0000 (15:03 +0530)]
net/octeontx: support set link up/down
Add support for the set link up/down eth operations, which are used to
enable/disable the LMAC. Also implement a poll function that retrieves
the link status at regular intervals.
Vamsi Attunuru [Mon, 16 Mar 2020 09:33:41 +0000 (15:03 +0530)]
net/octeontx: support VLAN filter offload
This patch adds VLAN filter offload support. MBOX messages for VLAN
filter on/off and VLAN filter entry add/remove are added to configure
PCAM entries so that VLAN traffic can be filtered on a given port.
The patch also defines an rx_offload_flag for VLAN filtering.
Kiran Kumar K [Sat, 7 Mar 2020 09:56:53 +0000 (15:26 +0530)]
net/octeontx2: offload bad L2/L3/L4 UDP lengths detection
The octeontx2 HW supports detecting bad L2/L3/L4 UDP lengths. Since
DPDK has no specific error flag for this, it is exposed as a bad
checksum failure in the mbuf's ol_flags to leverage the feature; these
errors are propagated through ol_flags.
Amit Gupta [Wed, 4 Mar 2020 05:47:04 +0000 (11:17 +0530)]
net/octeontx: fix meson build for disabled drivers
Add a condition to check whether the octeontx drivers are disabled. The
octeontx drivers are built only if the drivers they depend on, i.e.
ethdev, mempool and common/octeontx, are enabled.
Bugzilla ID: 387 Fixes: 7f615033d64f ("drivers/net: build Cavium NIC PMDs with meson") Cc: stable@dpdk.org Signed-off-by: Amit Gupta <agupta3@marvell.com> Reviewed-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Harman Kalra <hkalra@marvell.com>
Igor Romanov [Mon, 30 Mar 2020 10:25:45 +0000 (11:25 +0100)]
net/sfc: check actual all multicast unknown unicast filters
Check that the unknown unicast and unknown multicast filters are
applied, and return an error if they are not. The error is used in the
promiscuous and all-multicast mode enable and disable callbacks.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Igor Romanov [Mon, 30 Mar 2020 10:25:44 +0000 (11:25 +0100)]
net/sfc/base: add API to get currently operating filters
Unknown unicast filter creation may fail because of insufficient
permissions on a VF. This failure is handled internally in the libefx
MAC reconfiguration, with no way for a user to know it happened.
Making the MAC reconfiguration forward the error code of the filter
reconfiguration would be too destructive to existing code that may
rely on the function never returning that error.
Add an API for getting the status of the current unknown unicast and
all-multicast filters, since the user must know whether the requested
filters are actually applied.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>