Itsuro Oda [Wed, 11 Mar 2020 23:19:18 +0000 (08:19 +0900)]
vhost: make IOTLB cache name unique among processes
Currently, the iotlb cache name is composed of the vid and the virtqueue
index, for example "iotlb_cache_0_0". Because the vid is assigned
per process, the iotlb cache name is not unique across multiple processes.
For example, if one secondary process uses a vhost device
(e.g. eth_vhost0,iface=/tmp/sock0) and another secondary process
uses a vhost device (e.g. eth_vhost1,iface=/tmp/sock1), the iotlb cache
name of both vhost devices ("iotlb_cache_0_0") is the same, and as a
result the iotlb cache is broken.
This patch makes the iotlb cache name unique across multiple processes
by adding the process ID to the iotlb cache name.
The prefix of the name is shortened to "iotlb_" since the maximum
length of a pool name is 25 bytes (RTE_MEMPOOL_NAMESIZE is 26).
Note that the name is at most 25 characters at the moment.
Here,
* pid_t == int: max 10 digits.
* vid < MAX_VHOST_DEVICE (1024): max 4 digits.
* vq_index < VHOST_MAX_VRING (256): max 3 digits.
Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
Cc: stable@dpdk.org
Signed-off-by: Itsuro Oda <oda@valinux.co.jp>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
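As an illustration of the scheme above (a minimal sketch, not the patch's actual code; the function name is invented), the worst case fits exactly in the 25-character bound:

    #include <stdio.h>
    #include <unistd.h>

    /* Worst case: "iotlb_" (6) + 10 pid digits + '_' + 4 vid digits +
     * '_' + 3 vq digits = 25 characters, within RTE_MEMPOOL_NAMESIZE (26). */
    static void
    iotlb_pool_name(char *name, size_t len, int vid, int vq_index)
    {
            snprintf(name, len, "iotlb_%d_%d_%d", getpid(), vid, vq_index);
    }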
Xiaolong Ye [Sat, 7 Mar 2020 13:22:35 +0000 (21:22 +0800)]
vhost: remove unused variable
VHOST_FEATURES has been removed in a previous refactoring.
Fixes: 0917f9d1f059 ("vhost: use new APIs to handle features")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xiaolong Ye [Sat, 7 Mar 2020 13:22:34 +0000 (21:22 +0800)]
net/virtio: fix outdated comment
Fix a comment that is no longer correct, as the code has evolved.
Fixes: 9470427c88e1 ("net/virtio: do not store PCI device pointer at shared memory")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Itsuro Oda [Thu, 5 Mar 2020 02:54:50 +0000 (11:54 +0900)]
net/vhost: fix potential memory leak on close
If a vhost device is closed before eth_dev_configure() has been done
on the device, the internal resources allocated to the device
would not be freed. This patch fixes it.
Fixes: 3d01b759d267 ("net/vhost: delay driver setup")
Cc: stable@dpdk.org
Signed-off-by: Itsuro Oda <oda@valinux.co.jp>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Xiaolong Ye [Wed, 26 Feb 2020 13:45:34 +0000 (21:45 +0800)]
net/vhost: enable promiscuous and multicast by default
With this patch, the promiscuous and multicast fields are initialized as
enabled for the vhost PMD by default; this allows the devices to be used
when running applications that attempt to enable promiscuous or
multicast mode.
A similar change was done for other virtual PMDs in commit f165210321c4
("drivers/net: enable promiscuous and multicast by default").
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
net/vhost: add options for linear and external buffer
Added the vhost PMD arguments 'linear-buffer' and 'ext-buffer'
to configure the 'RTE_VHOST_USER_LINEARBUF_SUPPORT' and
'RTE_VHOST_USER_EXTBUF_SUPPORT' flags in the vhost library.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
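A minimal sketch of how such devargs map onto the library flags named above (devarg parsing omitted; only the flag names come from the commit message, the function name is invented):

    #include <stdint.h>
    #include <rte_vhost.h>

    /* linear_buf/ext_buf would come from parsing the 'linear-buffer'
     * and 'ext-buffer' devargs; the result is passed to
     * rte_vhost_driver_register(). */
    static uint64_t
    vhost_pmd_flags(int linear_buf, int ext_buf)
    {
            uint64_t flags = 0;

            if (linear_buf)
                    flags |= RTE_VHOST_USER_LINEARBUF_SUPPORT;
            if (ext_buf)
                    flags |= RTE_VHOST_USER_EXTBUF_SUPPORT;
            return flags;
    }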
Marvin Liu [Mon, 24 Feb 2020 15:14:19 +0000 (23:14 +0800)]
vhost: fix packed ring zero-copy
The available buffer ID should be stored in the zmbuf in the packed-ring
dequeue path. There is no guarantee that the local queue avail index is
equal to the buffer ID.
Fixes: d1eafb532268 ("vhost: add packed ring zcopy batch and single dequeue")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reported-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Fan Zhang [Wed, 29 Jan 2020 10:19:37 +0000 (10:19 +0000)]
vhost/crypto: add missing user protocol flag
This patch fixes vhost crypto missing the
"VHOST_USER_PROTOCOL_F_CONFIG" protocol feature flag during
initialization. Newer QEMU versions require this feature to be enabled.
Fixes: 939066d96563 ("vhost/crypto: add public function implementation")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xiaoyun Wang [Fri, 10 Apr 2020 09:21:47 +0000 (17:21 +0800)]
net/hinic: add Tx queue xstats members
Because some applications may pass illegal parameters, the driver adds
checks for illegal parameters and DFX statistics, which include the
sge_len0 and mbuf_null Tx queue xstats members.
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
Xiaoyun Wang [Fri, 10 Apr 2020 09:21:45 +0000 (17:21 +0800)]
net/hinic/base: fix PF firmware hot-active problem
When the firmware is hot-active, meaning the firmware is updated
without needing to reboot the OS, it returns HINIC_DEV_BUSY_ACTIVE_FW
to the PF driver because the firmware is being reinitialized. At that
point the cmdq initialization, which relies on the firmware channel,
will fail, so the driver should reinitialize the cmdq when the port starts.
Fixes: 0194313b2df6 ("net/hinic/base: fix port start during FW hot update")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
Ferruh Yigit [Mon, 2 Mar 2020 17:36:40 +0000 (17:36 +0000)]
net/null: fix secondary burst function selection
The secondary process uses the primary process's device, but while
setting the Rx/Tx functions it uses the device arguments from the
secondary process instead of the primary ones.
This may cause the primary and secondary processes to unintentionally
use different Rx/Tx functions.
Jiaqi Min [Wed, 8 Apr 2020 10:05:22 +0000 (10:05 +0000)]
net/i40e/base: add constants for PTP pins
Introduce constants for handling the PTP pins used for an external
clock source.
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Jiaqi Min [Wed, 8 Apr 2020 10:05:21 +0000 (10:05 +0000)]
net/i40e/base: introduce device ID for V710-TL 5G
This change adds a new device ID and handles it in the same way as the
X710-T*L head of family. The new device ID is for the new V710-T*L
adapter, which supports speeds up to 5G.
Signed-off-by: Zalfresso-Jundzillo <marekx.zalfresso-jundzillo@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Qiming Yang [Fri, 3 Apr 2020 05:42:41 +0000 (13:42 +0800)]
net/iavf: support generic flow API
This patch adds iavf_flow_create, iavf_flow_destroy,
iavf_flow_flush and iavf_flow_validate support;
these are used to handle all the generic filters.
It supports basic L2, L3, L4 and GTPU patterns.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Yunjian Wang [Thu, 9 Apr 2020 01:59:00 +0000 (09:59 +0800)]
net/pfe: fix double free of MAC address
The 'mac_addrs' freeing has been moved to rte_eth_dev_release_port(),
so freeing 'mac_addrs' like this in pfe_eth_exit() is unnecessary and
will cause a double free.
Currently, the counter struct holds both the members used by batch
counters and those used by non-batch counters. The members used only
by non-batch counters cost 16 extra bytes of memory per batch counter.
As there will normally be a limited number of non-batch counters,
mixing the non-batch and batch counter members becomes quite expensive
for batch counters: if 1 million batch counters are created, 16 MB of
memory that the batch counters will never use is allocated.
Splitting the mlx5_flow_counter struct between batch and non-batch
counters saves this memory.
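A hedged sketch of the split (struct and field names here are assumptions, not the actual mlx5 layout): the per-counter struct keeps only what batch counters need, and the rarely-used members move to a side struct allocated for non-batch counters only.

    #include <stdint.h>

    /* Members every counter needs (kept small for batch counters). */
    struct flow_counter {
            uint64_t hits;
            uint64_t bytes;
    };

    /* Members only non-batch counters need, allocated separately so
     * the 1M-batch-counter case no longer pays for them. */
    struct flow_counter_ext {
            uint32_t shared:1;
            uint32_t ref_cnt:31;
            uint32_t id;
            void *dcs; /* per-counter devX object, non-batch only */
    };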
Currently, DV and verbs counters are both changed to be indexed. This
means that when creating a flow with a counter, the flow can save the
index value to address the counter.
Saving the 4-byte index value in rte_flow instead of an 8-byte pointer
helps save memory with millions of flows.
This part of the counter optimization changes the DV counter to be
indexed, as has already been done for verbs. With this, all mlx5 flow
counters can be addressed by index.
The counter index is composed of the pool index and the counter's
offset in the pool counter array. The batch and non-batch counter dcs
ID offset 0x800000 is used to avoid mixing up the indexes: as batch
counter dcs IDs start from 0x800000 and non-batch counter dcs IDs
start from 0, the 0x800000 offset is added to the batch counter index
to mark it as a batch counter.
The counter field in the rte_flow struct becomes an index instead of a
pointer. This saves 4 bytes of memory for every rte_flow; with
millions of rte_flow entries, it saves megabytes of memory.
This commit is one part of the DV counter optimization.
The batch counter dcs IDs start from 0x800000 and the non-batch
counter dcs IDs start from 0. Since the counter is now indexed by the
pool index plus the offset of the counter in the pool's counters_raw
array, the counter index would otherwise be the same for batch and
non-batch counters. Adding the 0x800000 batch counter offset to the
batch counter index indicates whether the counter index comes from the
batch or the non-batch container pool.
Query generation was introduced to avoid a counter being reallocated
before its statistics are fully updated, since counters released
between the query trigger and the query handler may miss the packets
that arrived in the gap between the two. In this case, a counter can
only be reallocated once the pool query_gen is greater than the
counter query_gen + 1, which indicates a new round of query has
finished and the statistics are fully updated.
Splitting the pool query_gen into start_query_gen and end_query_gen
helps to better identify the counters released in the gap period, and
lets counters released before the query trigger or after the query
handler be reallocated more efficiently.
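A minimal sketch of the reallocation rule quoted above (function and parameter names are assumptions):

    #include <stdint.h>

    /* Reallocation rule from the commit message: a counter may be
     * reused only once the pool's (end_)query_gen has advanced past
     * the counter's query_gen + 1, i.e. a full query round finished
     * after the counter was released. */
    static inline int
    cnt_can_reuse(uint64_t pool_end_query_gen, uint64_t cnt_query_gen)
    {
            return pool_end_query_gen > cnt_query_gen + 1;
    }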
As a non-batch counter pool allocates only one counter at a time,
after the newly allocated counter pops out, the pool will be empty and
is moved to the end of the pool list in its container.
Currently, a new non-batch counter allocation may coincide with a new
counter pool allocation, meaning the new counter comes from a new
pool. When a new pool is allocated, the container resize and switch
happen. In this case, after the pool becomes empty, it should be added
to the pool list of the new container it belongs to.
Update the container pointer accordingly with the pool allocation to
avoid adding the pool to the incorrect container.
Michal Krawczyk [Wed, 8 Apr 2020 08:29:21 +0000 (10:29 +0200)]
net/ena: update driver version to v2.1.0
Version 2.1.0 refactors the Tx and Rx paths, includes a few bug fixes,
and adds new features that are going to be available with the newest
hardware.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:20 +0000 (10:29 +0200)]
doc: add notes on ENA usage on metal instances
As AWS metal instances support IOMMU, the usage of igb_uio or
vfio-pci can lead to problems (when to use which module), especially
since vfio-pci does not support SMMU on arm64.
To clear up the problem of using those modules under various setup
conditions (with or without IOMMU) on metal instances, a more detailed
explanation was added.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:19 +0000 (10:29 +0200)]
net/ena: reuse zero length Rx descriptor
Some ENA devices can pass a descriptor with length 0 to the driver. To
avoid an extra allocation, the descriptor can be reused by simply
putting it back to the device.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:18 +0000 (10:29 +0200)]
net/ena: refactor Tx
The original Tx function was very long and contained both the cleanup
and the sending sections. Because of that it had a lot of local
variables and deep indentation, and was hard to read.
This function was split into 2 sections:
* Sending - responsible for preparing the mbuf, mapping it to the
  device descriptors and, finally, sending the packet to the HW
* Cleanup - releasing the packets sent by the HW. The loop releasing
  packets was reworked a bit to make the intention more visible and
  aligned with other parts of the driver.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:17 +0000 (10:29 +0200)]
net/ena: use macros for ring index operations
To improve code readability, an abstraction was added for operating on
IO ring indexes.
The driver used to define a local variable for the ring mask in each
function that needed to operate on ring indexes. The mask is now
stored in the ring, as this value won't change unless the size of the
ring changes, and macros for advancing indexes using the mask have
been added.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
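A sketch of mask-based index macros of the kind described (names are illustrative, not the driver's actual ones; valid because the ring size is a power of two):

    /* Advance a ring index with wrap-around; mask == ring_size - 1. */
    #define RING_IDX_NEXT(idx, mask)   (((idx) + 1) & (mask))
    #define RING_IDX_ADD(idx, n, mask) (((idx) + (n)) & (mask))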
Michal Krawczyk [Wed, 8 Apr 2020 08:29:16 +0000 (10:29 +0200)]
net/ena: limit refill threshold by fixed value
The divider used for both the Tx and Rx cleanup/refill threshold can
cause too big a delay in the case of really big rings - for example,
with an 8k Rx ring the refill won't trigger until the threshold of
1024 is reached. It would also cause the driver to try to allocate
that many descriptors at once.
Limiting it by a fixed value - 256 in this case - bounds the maximum
time spent in the repopulate function.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
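A sketch of the capped threshold (the divider value and macro names are assumptions; the 256 cap comes from the message):

    #include <rte_common.h> /* RTE_MIN */

    #define REFILL_THRESH_DIVIDER 8   /* assumed ring-size divider */
    #define REFILL_THRESH_PACKET  256 /* fixed cap from the commit */

    /* For an 8k ring this yields 256 instead of 1024, bounding the
     * time spent in the repopulate function. */
    #define REFILL_THRESH(ring_size) \
            RTE_MIN((ring_size) / REFILL_THRESH_DIVIDER, \
                    REFILL_THRESH_PACKET)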
Michal Krawczyk [Wed, 8 Apr 2020 08:29:15 +0000 (10:29 +0200)]
net/ena: rework getting number of available descriptors
The ena_com API should be preferred for getting the number of
used/available descriptors unless extra calculation needs to be
performed.
Some helper variables were added for storing values that are reused
later. Moreover, for limiting the number of sent/received packets to
the number of available descriptors, RTE_MIN is used instead of an if
statement, which did a similar thing but was less descriptive.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
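A trivial sketch of the RTE_MIN clamp described above (in the real driver the free-entry count would come from the ena_com API; here it is a parameter):

    #include <stdint.h>
    #include <rte_common.h>

    /* Clamp a requested burst to the free descriptor count,
     * replacing an if-based clamp with the more descriptive RTE_MIN. */
    static inline uint16_t
    clamp_burst(uint16_t nb_pkts, uint16_t free_entries)
    {
            return RTE_MIN(nb_pkts, free_entries);
    }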
Michal Krawczyk [Wed, 8 Apr 2020 08:29:14 +0000 (10:29 +0200)]
net/ena: refactor Rx
* Split the main Rx function into multiple ones - the body of the main
  function was very big and contained 2 nested loops, which made the
  code hard to read.
* Rework how the Rx mbuf chains are created - instead of a while loop
  with a conditional check for whether it is the first segment, handle
  the first segment outside the loop and, if more fragments exist,
  process them inside it.
* Initialize the Rx mbuf using a simple function - it is common to the
  1st and subsequent segments.
* Create a structure for the Rx buffer to align it with the Tx path
  and other ENA drivers, and to make the variable name more
  descriptive - on DPDK, the Rx buffer must hold only the mbuf, so
  initially an array of mbufs was used as the buffers. However, this
  was misleading, as it was named "rx_buffer_info". To make it
  clearer, a structure holding the mbuf pointer was added, and it can
  now be expanded in the future without reworking the driver.
* Remove redundant variables and conditional checks.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:13 +0000 (10:29 +0200)]
net/ena: disable meta caching
In the LLQ (low-latency queue) mode, the device can indicate that
metadata descriptor caching is disabled. In that case the driver
should send a valid meta descriptor with every Tx packet.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:12 +0000 (10:29 +0200)]
net/ena: add Tx drops statistic
The ENA device can report, in the AENQ handler, the number of Tx
packets that were dropped and not sent.
This statistic shows a global value for the device, and because
rte_eth_stats is missing a field that could hold this value (it isn't
a Tx error), it is presented as an extended statistic.
As the current design of extended statistics prevents tx_drops from
being an atomic variable, and both tx_drops and rx_drops are only
updated from the AENQ handler, both were made non-atomic for
alignment.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:10 +0000 (10:29 +0200)]
net/ena: support large LLQ headers
The default LLQ (low-latency queue) maximum header size is 96 bytes
and can be too small for some types of packets, like IPv6 packets with
multiple extension headers. This can be fixed by using large LLQ
headers.
If the device supports larger LLQ headers, the user can activate them
by using the device argument 'large_llq_hdr' with the value '1'.
If the device does not support this feature, the default value (96B)
will be used.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:09 +0000 (10:29 +0200)]
net/ena: refactor getting IO queues capabilities
The values read from the device describe its maximum capabilities.
Because of that, the names of the fields storing those values, the
functions and the temporary variables should be more descriptive in
order to improve the self-documentation of the code.
In connection with this, the way of getting the maximum queue size
could be simplified - no hardcoded values are needed, as the device is
going to send its capabilities anyway.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:08 +0000 (10:29 +0200)]
net/ena: set IO ring size to valid value
IO rings were configured with the maximum allowed size for the Tx/Rx
rings. However, the application could decide to create smaller rings.
This patch uses the value stored in the ring instead of the value from
the adapter, which indicates the maximum allowed value.
Fixes: df238f84c0a2 ("net/ena: recreate HW IO rings on start and stop")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:06 +0000 (10:29 +0200)]
net/ena/base: fix indentation of multiple defines
As the alignment of the defines wasn't consistent, it was removed
altogether: instead of multiple spaces or tabs, a single space is used
after the define name.
Fixes: 99ecfbf845b3 ("ena: import communication layer")
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:05 +0000 (10:29 +0200)]
net/ena/base: fix types for printing timestamps
Because ena_com is used by multiple platforms which use different C
versions, PRIu64 cannot be used directly and must be defined in the
platform file.
Fixes: b2b02edeb0d6 ("net/ena/base: upgrade HAL for new HW features")
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:04 +0000 (10:29 +0200)]
net/ena/base: use 48-bit memory addresses
The ENA device uses 48-bit memory addresses for IO. Because of that,
the upper limit had to be updated.
From the driver's perspective, it is just a cosmetic change making the
definition of the structure 'ena_common_mem_addr' more descriptive;
the address value was already verified for the valid range in the
function 'ena_com_mem_addr_set()'.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:02 +0000 (10:29 +0200)]
net/ena/base: fix indentation in CQ polling
Spaces were used instead of tabs for the indentation.
Fixes: 3adcba9a8987 ("net/ena: update HAL to the newer version")
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:01 +0000 (10:29 +0200)]
net/ena/base: fix documentation of functions
The documentation format was aligned and a few typos were fixed.
Fixes: 99ecfbf845b3 ("ena: import communication layer")
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:29:00 +0000 (10:29 +0200)]
net/ena/base: add accelerated LLQ mode
In order to use the accelerated LLQ (low-latency queue) mode, the
driver must limit the Tx burst and be aware that the device has meta
caching disabled. In that situation, the meta descriptor must be valid
on each Tx packet.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Michal Krawczyk [Wed, 8 Apr 2020 08:28:56 +0000 (10:28 +0200)]
net/ena/base: fix testing for supported hash function
There was a bug in ena_com_fill_hash_function() which caused the bit
to be shifted left one position too many.
To fix that, the ENA_FFS macro is used (returning the location of the
first set bit), the hash_function value is decremented by 1 if any
hash function is supported by the device, and the BIT macro is used
for the shifting, for better readability.
Fixes: 99ecfbf845b3 ("ena: import communication layer")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
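A hedged sketch of the corrected logic (macro and function names are illustrative, not the HAL's actual code): ffs() is 1-based, so the configured function id is ffs(mask) - 1, and support is tested by shifting with BIT(func) rather than one bit further:

    #include <strings.h> /* ffs() */
    #include <stdint.h>

    #define BIT(n)     (1U << (n))
    #define ENA_FFS(x) ffs(x) /* 1-based position of first set bit */

    /* Recover the currently-set function id from a one-hot mask. */
    static inline int
    current_hash_func(uint32_t func_mask)
    {
            int pos = ENA_FFS(func_mask);

            return pos ? pos - 1 : 0; /* ffs() is 1-based */
    }

    /* Check whether 'func' is advertised in the device's supported
     * mask; the buggy code effectively shifted by func + 1. */
    static inline int
    hash_func_supported(uint32_t supported_mask, unsigned int func)
    {
            return (BIT(func) & supported_mask) != 0;
    }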
Igor Chauskin [Wed, 8 Apr 2020 08:28:54 +0000 (10:28 +0200)]
net/ena/base: prevent allocation of zero sized memory
rte_memzone_reserve() will reserve the biggest contiguous memzone
available if it receives 0 as the size parameter.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org
Signed-off-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
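A sketch of the implied guard (the wrapper name is an assumption): bail out before rte_memzone_reserve() ever sees a zero size.

    #include <rte_memzone.h>

    static const struct rte_memzone *
    ena_mem_reserve(const char *name, size_t size, int socket_id)
    {
            if (size == 0)
                    return NULL; /* 0 would grab the biggest free memzone */

            return rte_memzone_reserve(name, size, socket_id,
                                       RTE_MEMZONE_IOVA_CONTIG);
    }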
Igor Chauskin [Wed, 8 Apr 2020 08:28:53 +0000 (10:28 +0200)]
net/ena/base: make allocation macros thread-safe
The memory allocation region ID could possibly be non-unique
due to a non-atomic increment, causing allocation failure.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org
Signed-off-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
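A sketch of the thread-safe id generation (names are assumptions): an atomic fetch-and-add guarantees each allocation gets a unique memzone name even under concurrency.

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_atomic.h>

    static rte_atomic64_t alloc_id; /* assumed global counter */

    static void
    ena_alloc_name(char *name, size_t len)
    {
            uint64_t id = rte_atomic64_add_return(&alloc_id, 1);

            snprintf(name, len, "ena_alloc_%" PRIu64, id);
    }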
Michal Krawczyk [Wed, 8 Apr 2020 08:28:52 +0000 (10:28 +0200)]
net/ena: ensure Rx buffer size is at least 1400B
Some ENA devices can't handle buffers which are smaller than 1400B.
Because of this limitation, the size of the buffer is checked and
limited during the Rx queue setup.
If it is below the allowed value, the PMD won't finish its
configuration successfully.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
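A sketch of the setup-time check (the constant and function names are assumptions; the 1400B limit comes from the message):

    #include <errno.h>
    #include <rte_mbuf.h>

    #define ENA_RX_BUF_MIN_SIZE 1400 /* smallest buffer some devices accept */

    static int
    check_rx_buf_size(struct rte_mempool *mp)
    {
            uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
                                RTE_PKTMBUF_HEADROOM;

            if (buf_size < ENA_RX_BUF_MIN_SIZE)
                    return -EINVAL; /* Rx queue setup fails */
            return 0;
    }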
Updating a switch rule's action from "to VSI" to "to VSI list"
should only happen when the same rule has been programmed with
a different forwarding destination. This is already handled by the
code block below:
    m_entry = ice_find_adv_rule_entry(...);
    if (m_entry) {
            ...
            ice_adv_add_update_vsi_list(...);
    }
The following call to ice_update_pkt_fwd_rule() is unnecessary and
should be removed because:
1) If a switch rule's action is still "to VSI", meaning it is being
   issued for the first time, there is no need to update it "to VSI
   list".
2) The implementation does not actually match the comment: it still
   updates the rule with the "to VSI" action.
Fixes: fed0c5ca5f19 ("net/ice/base: support programming a new switch recipe")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Lunyuan Cui [Thu, 2 Apr 2020 07:58:47 +0000 (07:58 +0000)]
net/i40e: enable MAC address as flow director input set
Enable the source MAC address and destination MAC address as FDIR
input set for ipv4-other, ipv4-udp and ipv4-tcp. When OVS-DPDK works
as a pure L2 switch, enabling the MAC address as FDIR input set with
the Mark+RSS action helps speed up performance. FVL FDIR supports
changing the input set to MAC addresses.
Signed-off-by: Lunyuan Cui <lunyuanx.cui@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Yunjian Wang [Tue, 7 Apr 2020 11:37:27 +0000 (19:37 +0800)]
net/nfp: fix dangling pointer on probe failure
When nfp_pf_create_dev() cleans up, it does not correctly set the
dev_private variable to NULL, which will lead to a double free.
Fixes: ef28aa96e53b ("net/nfp: support multiprocess")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Add the missing prototype for the 'profile_hook_rx_burst_cb' function
to fix the compiler warning when DPDK is built with the
'RTE_ETHDEV_PROFILE_WITH_VTUNE' config option enabled:
/home/dpdk/lib/librte_ethdev/ethdev_profile.c:17:1: warning:
no previous prototype for profile_hook_rx_burst_cb [-Wmissing-prototypes]
Since the ring buffer shared with the host is used for both transmit
completions and receive packets, it is possible that the transmitter
could get starved if the receive ring gets full.
It is better to process all outstanding events, which frees up
transmit buffer slots, even if it means dropping some packets.
Fixes: 7e6c82430702 ("net/netvsc: avoid over filling Rx descriptor ring")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The netvsc PMD was putting the MAC address in private data, but the
core rte_ethdev doesn't allow that. It has to be in rte_malloc'd
memory, or a message will be printed on shutdown/close:
EAL: Invalid memory
Fixes: f8279f47dd89 ("net/netvsc: fix crash in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
net/netvsc: split send buffers from Tx descriptors
The VMBus has a reserved transmit area (per device) and transmit
descriptors (per queue). The previous code always had a 1:1
mapping between send buffers and descriptors.
This can lead to one queue starving another and also to buffer bloat.
Change to work more like FreeBSD, where there is a pool of transmit
descriptors per queue. If a send buffer is not available, no
aggregation happens, but the queue can still drain.
net/netvsc: handle Rx packets during multi-channel setup
It is possible for a packet to arrive during the configuration
process when setting up multiple queue mode. This would cause
configure to fail; fix by just ignoring receive packets while
waiting for control commands.
Use the receive ring lock to avoid possible races between
oddly behaved applications doing rx_burst and control operations
concurrently.
Dekel Peled [Wed, 25 Mar 2020 08:12:31 +0000 (10:12 +0200)]
app/testpmd: enhance GTP support
This patch adds a CLI option to enter the v_pt_rsv_flags value for the
GTP flow pattern item.
It also adds GTP as a valid item in raw_encap and raw_decap settings.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
After a VF reset, the VF's VSI number may be changed;
the switch rule which forwards packets to the old
VSI number should be redirected to the new VSI
number.
The input set for the inner type of the vlan item should
be ICE_INSET_ETHERTYPE, not ICE_INSET_VLAN_OUTER.
This MAC VLAN filter is also part of the DCF switch filter.
This patch adds switch filter support for PFCP packets;
it enables the switch filter to direct IPv4/IPv6 packets with a
PFCP session or node payload to a specific action.
net/ice: change switch parser to support flexible mask
DCF needs to support configuration with flexible masks, that is,
input set masks that may not be of the all-ones 0xFFFF type. In order
to direct L2/IP multicast packets, the mask for the source IP may be
0xF0000000; this patch enables the switch filter parser for it.
DCF on CVL is a control plane VF which takes the responsibility of
configuring all the PF/global resources; this patch adds support for
DCF to program forwarding rules that direct packets to VFs.