Vamsi Attunuru [Fri, 25 Feb 2022 06:54:45 +0000 (12:24 +0530)]
net/cnxk: make inline inbound device usage as default
Currently, inline inbound device usage is not the default for eventdev. This
patch renames the force_inl_dev devarg to no_inl_dev and enables the inline
inbound device by default.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Fri, 25 Feb 2022 04:59:26 +0000 (23:59 -0500)]
common/cnxk: check SQ node before setting BP config
Validate sq_node and parent before accessing their fields.
An SQ can be created without any associated TM node; this is a valid negative
case, so return success when stopping TM without an SQ node.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Fri, 25 Feb 2022 04:59:25 +0000 (23:59 -0500)]
net/cnxk: enable packet marking callbacks
The cnxk platform supports red/yellow packet marking based on TM
configuration. This patch sets hooks to enable/disable packet
marking for VLAN DEI, IP DSCP, and IP ECN. Marking is enabled only
in scalar mode.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Fri, 25 Feb 2022 04:59:24 +0000 (23:59 -0500)]
common/cnxk: enable packet marking
cnxk platforms support packet marking when TM is enabled with
valid shaper rates. The VLAN DEI, IP ECN, or IP DSCP fields inside
the packet are updated based on the selected mark flags.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Added capability and support for inline inbound IP reassembly
in the cnxk driver. The IP reassembly offload is supported only
when the inline IPsec security offload is enabled.
In case of incomplete IP reassembly, the mbufs are attached
via the mbuf dynamic field and a dynamic flag is set accordingly.
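As a hedged illustration (not part of this patch), an application could detect the incomplete-reassembly case through the generic ethdev IP reassembly dynflag; the sketch assumes the macro name from the generic ethdev IP reassembly API and may differ from the driver's internals:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>
    #include <rte_bitops.h>

    /* Look up the dynflag the PMD registers for incomplete reassembly. */
    static uint64_t ip_reasm_incomplete_mask;

    static int
    setup_ip_reassembly_flag(void)
    {
        int bit = rte_mbuf_dynflag_lookup(
            RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
        if (bit < 0)
            return bit; /* flag not registered by the PMD */
        ip_reasm_incomplete_mask = RTE_BIT64(bit);
        return 0;
    }

    /* Check a received mbuf: if set, the attached chain holds fragments. */
    static inline int
    is_reassembly_incomplete(const struct rte_mbuf *m)
    {
        return (m->ol_flags & ip_reasm_incomplete_mask) != 0;
    }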
Pavan Nikhilesh [Thu, 24 Feb 2022 16:10:12 +0000 (21:40 +0530)]
net/cnxk: align prefetches to CN10K cache model
Align prefetches to the CN10K cache model for vWQE in Rx and Tx.
Move the mbuf->next NULL assignment to the Tx path and enable it only
when multi-segment offload is enabled, to reduce L1 pressure.
Add macros to detect corrupted mbuf->next values when
MEMPOOL_DEBUG is set.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Pavan Nikhilesh [Thu, 24 Feb 2022 16:10:11 +0000 (21:40 +0530)]
net/cnxk: optimize Rx packet size extraction
In vWQE mode, the mbuf address is calculated without using the
IOVA list.
The packet length can also be calculated using NIX_PARSE_S, which
completely eliminates reading the second cache line, depending on
the offloads enabled.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Vamsi Attunuru [Thu, 24 Feb 2022 09:49:31 +0000 (15:19 +0530)]
net/cnxk: support outbound soft expiry notification
Add support for a soft expiry notification mechanism in the outbound
path by creating the required number of ring buffers and a common poll
thread that polls for soft expiry events enqueued by the microcode.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Tomasz Duszynski [Thu, 24 Feb 2022 10:34:22 +0000 (11:34 +0100)]
common/cnxk: extend log on model mismatch
A model is uniquely identified by 4 numbers. Print them all in case
the model being populated is not on the list of known models. This
makes debugging a bit easier.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com> Reviewed-by: Jakub Palider <jpalider@marvell.com> Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Suanming Mou [Thu, 24 Feb 2022 13:40:51 +0000 (15:40 +0200)]
net/mlx5: add header reformat HW steering action
The HW steering header reformat action can work in bulk mode. In
this case, when the table is created, a bulk of header reformat
actions is allocated at the low level. Afterwards, when creating a
flow, simply specifying the action index within the bulk and the
encapsulation data for the action is enough.
Suanming Mou [Thu, 24 Feb 2022 13:40:50 +0000 (15:40 +0200)]
net/mlx5: add indirect HW steering action
HW steering can support indirect actions as well. With an indirect action,
the flow can be created with more flexible shared RSS action selection.
This can save action templates with different RSS actions.
This commit adds the flow queue operation callback for:
rte_flow_async_action_handle_create();
rte_flow_async_action_handle_destroy();
rte_flow_async_action_handle_update();
Suanming Mou [Thu, 24 Feb 2022 13:40:49 +0000 (15:40 +0200)]
net/mlx5: add HW mark action
The mark action is covered by the tag action internally. When it is added,
the HW will add a tag to the packet. The mark value can be set as fixed
or dynamic, as the action mask indicates.
Suanming Mou [Thu, 24 Feb 2022 13:40:48 +0000 (15:40 +0200)]
net/mlx5: add queue and RSS HW steering action
This commit adds the queue and RSS action. Similar to the jump action,
dynamic ones will be added to the action construct list.
Because the queue and RSS actions in a template should not be destroyed
during port restart, the actions are created with a standalone indirect
table, as an indirect action does. When the port stops, the indirect
table is detached from the action; when the port starts, it is attached
back to the action.
One more change is made to accelerate action creation. Currently
the mlx5_hrxq_get() function returns the object index instead of the
object pointer. This introduced an extra conversion from index to object
by calling mlx5_ipool_get() in most cases, and that extra conversion
hurts multi-thread performance since mlx5_ipool_get() takes a global
lock internally. As the hash Rx queue object itself also contains the
index, returning the object directly achieves better performance without
the global lock.
Suanming Mou [Thu, 24 Feb 2022 13:40:47 +0000 (15:40 +0200)]
net/mlx5: add flow jump action
Jump action connects different levels of flow tables and allows packet
handling in the chain of flows.
A new action construct data struct is also added in this commit to help
handle not only the dynamic jump action but also other generic dynamic
actions. An action with an empty mask configuration means a dynamic
action, and the dedicated action will be created with the flow action
configuration during flow creation. In that dynamic action case, the
action will be appended to the table template's action list during
table creation.
When creating the flows, traverse the action list and pick the dynamic
action configuration details from the flow actions, as the action
construct data struct describes, then create the dedicated dynamic actions.
This commit adds the jump action and the generic dynamic action
construct mechanism.
Suanming Mou [Thu, 24 Feb 2022 13:40:45 +0000 (15:40 +0200)]
net/mlx5: add basic flow queue operation
The HW steering uses an async queue-based flow rules management
mechanism. The matcher and part of the actions have been prepared
during flow table creation; some remaining actions will be constructed
during flow creation if needed.
A flow postpone attribute bit describes whether flow management
should be applied to the HW directly. An extra push function
is provided to force-push all the cached flows to the HW.
Once the flow has been applied to the HW, the pull function
will be called to get the queued creation/destruction flows.
The DR rule flow memory is managed in the PMD layer instead
of being allocated from the HW steering layer. When destroying a
flow, the flow rule memory can only be freed after the CQE is
received.
The HW queue job descriptor is introduced to convey the flow
information and operation type between flow insertion/destruction
and the pull function.
This commit adds the basic flow queue operation for:
rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();
Suanming Mou [Thu, 24 Feb 2022 13:40:44 +0000 (15:40 +0200)]
net/mlx5: add table management
A flow table is a group of flows with the same matching criteria
and the same actions defined for them. The table defines rules
that have the same matching fields but with different matching
values. For example, matching on 5 tuple, the table will be
(IPv4 source + IPv4 dest + s_port + d_port + next_proto)
while the values for each rule will be different.
The templates' relevant matching criteria and action instances
will be created during table creation and saved in the table.
As the table attributes indicate the supported number of flows, the
flow memory will also be allocated at the same time.
Suanming Mou [Thu, 24 Feb 2022 13:40:43 +0000 (15:40 +0200)]
net/mlx5: add action template management
The action template holds a list of action types that will be
used together in the same rule. The template's action instances
will be created only when the template is bound to a dedicated
group, and the created actions will be saved per group for best
performance. The actions in a group will not be shared with each
other unless shared actions are specified.
This commit adds the action template management which stores the
flow action template.
Suanming Mou [Thu, 24 Feb 2022 13:40:42 +0000 (15:40 +0200)]
net/mlx5: add pattern template management
The pattern template defines flows that have the same matching
fields but with different matching values.
For example, matching on a 5 tuple TCP flow, the template will be
(eth(null) + IPv4(source + dest) + TCP(s_port + d_port)) while
the values for each rule will be different.
Because a pattern template can be used in different domains, the
items are only cached at the pattern template creation stage; when
the template is bound to a dedicated table, the HW criteria are
created and saved to the table. The pattern templates can be
used by multiple tables, but different tables create the same
criteria and do not share the matcher with each other, in order
to achieve better performance.
Suanming Mou [Thu, 24 Feb 2022 13:40:41 +0000 (15:40 +0200)]
net/mlx5: add port flow configuration
Hardware steering is the backend that supports the rte_flow_async API in
the mlx5 PMD. The port configuration function creates the queues and the
needed flow management resources.
The PMD layer configuration function allocates the queues' context
and a per-queue job descriptor pool. The job descriptor pool size
is equal to the queue size, and job descriptors are popped
from the pool with a LIFO strategy to convey the flow information during
flow insertion/destruction. Then, while polling the queued operation
result, the flow information is extracted from the job descriptor
and the descriptor is pushed back to the LIFO pool.
The commit creates the flow port queues and the job descriptor pools.
The new hardware steering engine relies on using dedicated steering WQEs
instead of writing to the low-level steering table entries directly.
In the first implementation, the hardware steering engine supports the
new queue-based Flow API; the existing synchronous non-queue-based Flow
API is not supported.
A new dv_flow_en value 2 is added to manage the mlx5 PMD steering engine:

dv_flow_en   rte_flow API    rte_flow_async API
------------------------------------------------
    0        support         not support
    1        support         not support
    2        not support     support
This commit introduces the extra dv_flow_en = 2 value to select the new
flow initialization and management operation routines.
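For reference, a hedged usage example selecting the new engine via the standard mlx5 devargs syntax (the testpmd invocation is illustrative):
    dpdk-testpmd -a <bdf>,dv_flow_en=2 -- -i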
Suanming Mou [Thu, 24 Feb 2022 13:40:39 +0000 (15:40 +0200)]
net/mlx5: add HW steering low-level abstract stub
The HW steering low-level implementation will be added later in another
patch series. To avoid linkage issues, an abstract stub replacement is
provided for now.
Suanming Mou [Thu, 24 Feb 2022 13:40:38 +0000 (15:40 +0200)]
net/mlx5: introduce hardware steering operation
The Connect-X steering is a lookup hardware mechanism that accesses flow
tables, matches packets to the rules, and performs specified actions.
Historically, the mlx5 PMD has implemented several software engines to
manage the steering hardware facility. This approach has drawbacks:
- performance is limited: firmware must be invoked either to
manage the entire flow or to handle some internal steering objects
- organizing and preparing the flow infrastructure (actions, matchers,
groups, etc.) at flow insertion time inevitably causes slow flow
insertion
- security: exposing low-level steering entries directly to
userspace may cause security risks
A new hardware WQE based steering operation with the codename "HW Steering"
is going to be introduced to get rid of the security risks, taking
advantage of the recently introduced async queue-based rte_flow
APIs to prepare everything in advance and achieve a high insertion rate.
In this new HW steering engine, the original SW steering rte_flow API
will not be supported in the first implementation; only the new async
queue-based flow operations are going to be supported. A new steering
mode parameter for dv_flow_en will be introduced so that users are
able to engage the new steering engine.
Timestamp resolution for incoming and outgoing packets
differs between CN10K and CN9K. Added an SoC-specific
callback to retrieve the timestamp in the correct format
when read by the application.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Since ConnectX-7, the hardware supports waiting until a
specified moment of time with the newly introduced wait
descriptor. A timestamp can be placed directly
into the descriptor and pushed to the send queue.
Once the hardware encounters the wait descriptor, the
queue operation is suspended until the specified moment
of time. This patch updates the Tx datapath to handle
this new hardware wait capability.
PMD documentation and release notes updated accordingly.
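As a hedged sketch of the application side, the Tx scheduling path relies on the generic Tx timestamp dynamic field/flag pair from rte_mbuf_dyn.h; the helper names below are illustrative:

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>
    #include <rte_bitops.h>

    static int ts_off;       /* dynfield offset, set at init */
    static uint64_t ts_flag; /* dynflag mask, set at init */

    static int
    tx_sched_init(void)
    {
        int bit;

        ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
        bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
        if (ts_off < 0 || bit < 0)
            return -1; /* send scheduling not enabled on this port */
        ts_flag = RTE_BIT64(bit);
        return 0;
    }

    /* Request transmission of m not earlier than when_ns. */
    static inline void
    mbuf_set_tx_time(struct rte_mbuf *m, uint64_t when_ns)
    {
        *RTE_MBUF_DYNFIELD(m, ts_off, uint64_t *) = when_ns;
        m->ol_flags |= ts_flag;
    }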
net/mlx5: configure Tx queue with send on time offload
The wait-on-time configuration flag is copied to the Tx queue
structure due to performance considerations. The timestamp
mask is prepared and stored in the queue structure as well.
The patch provides a check for the send-scheduling-on-time hardware
capability. With this capability enabled, the hardware is able to handle
Wait WQEs with directly specified timestamp values, and no Clock Queue
is needed anymore to handle send scheduling.
Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy
API. Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
testpmd> flow queue 0 create 0 postpone no
template_table 6 pattern_template 0 actions_template 0
pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
testpmd> flow queue 0 destroy 0 postpone yes rule 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
testpmd> flow pattern_template 0 create pattern_template_id 2
template eth dst is 00:16:3e:31:15:c3 / end
testpmd> flow actions_template 0 create actions_template_id 4
template drop / end mask drop / end
testpmd> flow actions_template 0 destroy actions_template 4
testpmd> flow pattern_template 0 destroy pattern_template 2
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for the Flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256
Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
ethdev: bring in async indirect actions operations
The queue-based flow rules management mechanism is suitable
not only for flow rule creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
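A hedged usage sketch of the async indirect action flow (flow queue 0, a prepared RSS action; configuration values are illustrative):

    #include <rte_flow.h>

    static struct rte_flow_action_handle *
    async_handle_create(uint16_t port, const struct rte_flow_action *act)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        struct rte_flow_op_result res;
        struct rte_flow_error err;
        struct rte_flow_action_handle *h;

        h = rte_flow_async_action_handle_create(port, 0, &op_attr, &conf,
                                                act, NULL, &err);
        if (h == NULL)
            return NULL;
        /* Poll until the enqueued operation completes. */
        while (rte_flow_pull(port, 0, &res, 1, &err) == 0)
            ;
        return res.status == RTE_FLOW_OP_SUCCESS ? h : NULL;
    }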
ethdev: bring in async queue-based flow rules operations
A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.
The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
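A hedged end-to-end sketch of the enqueue/push/pull cycle ("table", "pattern" and "actions" are assumed to be prepared elsewhere with the template APIs):

    #include <rte_flow.h>

    static struct rte_flow *
    enqueue_rule(uint16_t port, struct rte_flow_template_table *table,
                 const struct rte_flow_item *pattern,
                 const struct rte_flow_action *actions)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 1 };
        struct rte_flow_op_result res;
        struct rte_flow_error err;
        struct rte_flow *flow;

        flow = rte_flow_async_create(port, 0, &op_attr, table,
                                     pattern, 0, actions, 0, NULL, &err);
        if (flow == NULL)
            return NULL;
        rte_flow_push(port, 0, &err);  /* flush the postponed operation */
        while (rte_flow_pull(port, 0, &res, 1, &err) == 0)
            ;                          /* wait for the completion result */
        return res.status == RTE_FLOW_OP_SUCCESS ? flow : NULL;
    }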
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.
The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.
A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.
The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
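A hedged sketch of building the three objects (one ETH-match pattern template, one drop actions template, and a 1024-rule table; attribute values are illustrative):

    #include <rte_flow.h>

    static struct rte_flow_template_table *
    build_table(uint16_t port, struct rte_flow_error *err)
    {
        const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH,
              .mask = &rte_flow_item_eth_mask }, /* mask only, no values */
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        const struct rte_flow_template_table_attr tbl_attr = {
            .flow_attr = { .group = 1, .ingress = 1 },
            .nb_flows = 1024, /* max rules, fixed at creation time */
        };
        struct rte_flow_pattern_template *pt;
        struct rte_flow_actions_template *at;

        pt = rte_flow_pattern_template_create(port, &pt_attr, pattern, err);
        at = rte_flow_actions_template_create(port, &at_attr, actions,
                                              actions, err);
        if (pt == NULL || at == NULL)
            return NULL;
        return rte_flow_template_table_create(port, &tbl_attr,
                                              &pt, 1, &at, 1, err);
    }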
The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.
In order to optimize the insertion rate, the PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows pre-allocating all the needed resources beforehand.
These resources can then be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints and ignore unused ones, or
fail in case the requested configuration is not supported.
The rte_flow_info_get() is available to retrieve the information about
supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
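A hedged configuration sketch (8 queues of 256 entries, matching the testpmd example above; port attributes are left at their defaults):

    #include <rte_flow.h>

    static int
    flow_engine_setup(uint16_t port)
    {
        struct rte_flow_port_info pinfo;
        struct rte_flow_queue_info qinfo;
        const struct rte_flow_port_attr pattr = { 0 };
        const struct rte_flow_queue_attr qattr = { .size = 256 };
        const struct rte_flow_queue_attr *qattrs[8];
        struct rte_flow_error err;
        int i, ret;

        /* Check what the PMD can pre-allocate before asking for it. */
        ret = rte_flow_info_get(port, &pinfo, &qinfo, &err);
        if (ret != 0)
            return ret;
        for (i = 0; i < 8; i++)
            qattrs[i] = &qattr;
        return rte_flow_configure(port, &pattr, 8, qattrs, &err);
    }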
net/cnxk: add option to override outbound inline SA IV
Add option to override outbound inline SA IV for debug
purposes via environment variable. User can set env variable as:
export CN10K_ETH_SEC_IV_OVR="0x0, 0x0,..."
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Enable packet pool tail drop on the RQ when inbound security is not
enabled. This is only part of the configuration: it is a NOP if
tail drop is not enabled on NPA_AURA_CTX_S, and tail drop
on the packet pool AURA is enabled only when that packet pool AURA
is used by the inline device RQ.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
net/cnxk: use NPA batch burst free for meta buffers
Currently, meta buffers are freed in bursts of one LMT line,
i.e., 15 pointers. Instead, free them in bursts of 16 LMT lines,
which is 240 pointers, for better performance.
Also mark mempool objects as get and put in the missing places.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Use a raw mbuf free on inline security errors to simulate
a HW NPA free instead of doing rte_pktmbuf_free(). This
is needed as the callback will not be called from
a DPDK lcore.
Fixes: 69daa9e5022b ("net/cnxk: support inline security setup for cn10k") Cc: stable@dpdk.org Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Reallocate the inline device XAQ when Rx/Tx security is enabled with
a new packet pool, as the XAQ should be large enough to hold all
mbufs in case inline outbound reports errors for all of them.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Fix the inline device RQ tag mask so that packets with receive errors
reach the callback handler as ETHDEV-type packets and their buffers
can be freed. Currently, only IPsec denied packets get the right
tag mask.
Fixes: ee48f711f3b0 ("common/cnxk: support NIX inline inbound and outbound setup") Cc: stable@dpdk.org Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Tue, 22 Feb 2022 19:35:02 +0000 (01:05 +0530)]
common/cnxk: remove tracking of mark actions
Removed the roc NPC APIs that track addition and deletion of
mark actions. They were earlier needed to track the number of mark
actions added as part of flow rules: if the mark action count
was > 0, the Rx function pointer was updated to also read the
mark value from the CQE/WQE and populate it in the mbuf.
Now the same switch is done based on the new Rx metadata negotiate
ethdev API.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Tue, 22 Feb 2022 19:35:01 +0000 (01:05 +0530)]
net/cnxk: add Rx metadata negotiate operation
Added the rx_metadata_negotiate API to enable the mark update Rx offload.
Removed the software logic that enabled/disabled mark update inside the
flow create/destroy APIs.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
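A hedged usage sketch of the negotiation from the application side (it must run before rte_eth_dev_configure()):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    negotiate_mark(uint16_t port)
    {
        uint64_t features = RTE_ETH_RX_METADATA_USER_MARK;
        int ret = rte_eth_rx_metadata_negotiate(port, &features);

        /* The PMD clears bits it cannot deliver to the datapath. */
        if (ret == 0 && !(features & RTE_ETH_RX_METADATA_USER_MARK))
            return -ENOTSUP;
        return ret;
    }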
common/cnxk: use SSO time counter threshold for IRQ
Enable a time counter based threshold for raising the SSO
EXE_INT instead of the IAQ threshold. A time counter based
threshold helps get periodic interrupts and process
packets in bursts, instead of having the HW raise an interrupt
for every new work item.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
common/cnxk: support enabling AURA tail drop for RQ
Add support to enable AURA tail drop via the RQ, specifically
for the inline device RQ's packet pool. This is better than RQ
RED drop, as it can be applied to all RQs that do not have
security enabled but use the same packet pool.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
common/cnxk: use common SA init API for default options
Use the common SA init API before doing initialization based on
params. This is better, as all HW-specific default values are
kept in a single place for lookaside and inline.
common/cnxk: support inline device API without ROC NIX
Update the inline device functions to work when roc_nix is NULL.
This is required, as the IPsec driver has to use these APIs to work
with the inline IPsec device, but might not have the roc_nix
information.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Satha Rao [Tue, 22 Feb 2022 19:34:54 +0000 (01:04 +0530)]
common/cnxk: adjust shaper rates to lower boundaries
Provide a method to get floor values for a requested shaper rate,
which ensures packets are never transmitted at a rate higher
than configured.
Keep the old API to get HW-suggested values, and introduce a new
parameter to select the appropriate API.
Signed-off-by: Satha Rao <skoteshwar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Fix a bug in the batch alloc issue-failure path, where invalid
pointers were being enqueued back to the pool. The code should
rightly fall back to the default dequeue path in such cases.
Fixes: 91531e63f43b ("mempool/cnxk: add cn10k batch dequeue") Cc: stable@dpdk.org Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
Michael Baum [Wed, 23 Feb 2022 13:48:34 +0000 (15:48 +0200)]
common/mlx5: update doorbell mapping parameter name
The "tx_db_nc" devarg forces doorbell register mapping to non-cached
region eliminating the extra write memory barrier. This argument was
used in creating the UAR for Tx and thus affected its performance.
Recently [1] its use has been extended to all UAR creation in all mlx5
drivers, and now its name is no longer so accurate.
This patch changes its name to "sq_db_nc" to suit any send queue that
uses it. The old name will still work for backward compatibility.
[1] commit 5dfa003db53f ("common/mlx5: fix post doorbell barrier")
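For reference, a hedged usage example with the renamed devarg (standard mlx5 devargs syntax assumed):
    dpdk-testpmd -a <bdf>,sq_db_nc=1 -- -i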
Michael Baum [Wed, 23 Feb 2022 13:48:33 +0000 (15:48 +0200)]
doc: add shared guide for mlx5 drivers
Adds new documentation for the MLX5 common driver that contains:
- Its features list (doesn't exist for now).
- Its devargs description.
- Device configuration information and tutorial.
- Quick Start Guide for Mellanox OFED/EN.
Moves all shared information from the other MLX5 PMD docs into this doc
and adds references from them to the new common doc.
Michael Baum [Wed, 23 Feb 2022 13:48:30 +0000 (15:48 +0200)]
doc: remove obsolete vector Tx explanations from mlx5 guide
Vectorized routines were removed as a result of the Tx datapath
refactoring, and the devarg keys documentation was updated.
However, more updating should have been done. The environment variables
doc still contained an explanation regarding vectorized Tx which is no
longer relevant.
Shun Hao [Tue, 22 Feb 2022 15:07:16 +0000 (17:07 +0200)]
net/mlx5: fix E-Switch manager vport ID
One of the E-Switch vports plays a special role: it is assigned as the
"E-Switch manager" and has some exclusive rights and duties - it
maintains all the representors, manages FDB domain flows, etc. By
default, the E-Switch vport index was supposed to be zero on standalone
NICs (regular ConnectX) and 0xFFFE on SmartNICs (BlueField), but that was
not always correct - this index can be assigned any value by the
kernel/hypervisor.
Currently, the E-Switch manager vport id is assumed to be the default -
0 for standalone NICs and 0xFFFE for SmartNICs - and is deduced from
the device PCI id.
To handle this without assuming any default values, use the DevX API
to query the E-Switch manager vport ID directly from the firmware during
initialization, and use that value by default. If the new method is not
provided (legacy firmware), fall back to the PCI id approach.
Michael Baum [Mon, 14 Feb 2022 09:00:10 +0000 (11:00 +0200)]
net/mlx5: optimize Rx queue creation
Recently, shared RxQ has been introduced. All shared Rx queues with the
same group and queue ID share the same rxq_ctrl, but each one has its
own mlx5_rxq_priv structure.
The mlx5_rx_queue_setup function generates a new rxq_priv structure and
looks for an rxq_ctrl structure to refer to. If there is already a
compatible rxq_ctrl structure, it refers to it; otherwise it calls the
mlx5_rxq_new function, which generates a new one.
This patch makes the mlx5_rxq_new function "standalone": it generates an
rxq_ctrl structure regardless of any specific rxq_priv structure. All
operations on the rxq_ctrl structure that depend on the new rxq_priv
structure are performed in the mlx5_rx_queue_setup function, at the same
place for either a new or an existing rxq_ctrl structure.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Michael Baum [Mon, 14 Feb 2022 09:00:09 +0000 (11:00 +0200)]
net/mlx5: fix entry in shared Rx queues list
The mlx5_rxq_new function creates the control structure, and if it is
from a shared group, it is inserted into the shared RxQs list.
After that, there are some validations; in case they fail, the RxQ
control object is released. In these cases, an invalid pointer to the
object is still in the list, and accessing it may cause a crash.
Move the list insertion to the end of the function, where the RxQ
control object is surely valid.
Fixes: 09c2555303be ("net/mlx5: support shared Rx queue") Cc: stable@dpdk.org Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Haifei Luo [Mon, 21 Feb 2022 08:27:21 +0000 (10:27 +0200)]
net/mlx5: refactor getting counter action pointer
Previously, the API flow_dv_query_count_ptr was defined to get a
counter's action pointer. This DV function was called directly, while
the better way is via a callback.
Add one argument to the API mlx5_counter_query and the related callback
counter_query. The added argument is for the counter's action pointer.
Signed-off-by: Haifei Luo <haifeil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The vDPA driver creates two QPs (a queue pair includes one send queue
and one receive queue) per virtio queue to get traffic events
from the NIC to SW.
The two QPs (called FW QP and SW QP) are created as loopback QPs,
and the FW QP's SQ is connected to the SW QP's RQ internally.
When a packet is received or sent out, HW sends a WQE via the FW QP's
SQ, and then SW gets a CQE from the CQ of the SW QP.
With large scale and heavy traffic, the SQ's request may fail
to get an ACK from the RQ HW, because the HW is busy.
The SQ retries the request qpc.retry_count times, each time
waiting 4.096 us * 2^(ack_timeout) for the response. If it still cannot
get the RQ HW's response, the SQ goes to an error state.
16 is an experience-based value. It should not be too high or too low:
too high makes the QP wait too long in case of packet drop;
too low causes the QP to go to an error state (retry exceeded) easily.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:44 +0000 (13:19 +0100)]
net/ena: update version to 2.6.0
This release contains multiple bug fixes and improvements, including:
- Removal of the linearization function from the Tx xmit path. DPDK
assumes the mbuf segment count is checked in the Tx prepare
function.
- Extra logs, statistics, checks...
- Cleanup of the unused variables and definitions.
- Configurable Link Status event.
- Improvements for the timer service and the reset.
- Usage of the optimized memcpy on ARM.
- MP awareness improvements - extra API support for the secondary
processes (like reading basic statistics).
- Support of the xstats API to get xstat names by ID.
- Configurable Tx completions timeout.
- Proper setting of the meta-descriptor's DF flag.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:43 +0000 (13:19 +0100)]
net/ena: fix checksum flag for L4
Some HW may wrongly set the checksum error bit for a valid L4 checksum.
To avoid dropping packets in that situation, do not indicate a bad
checksum for L4 Rx csum offloads. Instead, set it as unknown, so the
application will re-verify this value.
The statistics counters will still work as previously.
Fixes: 05817057faba ("net/ena: fix indication of bad L4 Rx checksums") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Dawid Gorecki [Wed, 23 Feb 2022 12:19:42 +0000 (13:19 +0100)]
net/ena: check memory BAR before initializing LLQ
The ena_com_config_dev_mode() function performs many calculations
related to LLQ and then performs an admin queue call to configure LLQ
in the device. All of the operations performed by
ena_com_config_dev_mode() are unnecessary if the membar hasn't been
found. Move the dev_mem_base check before the ena_com_config_dev_mode()
call to prevent the unnecessary operations from being performed.
Fixes: 2fca2a98c0d1 ("net/ena: support LLQv2") Cc: stable@dpdk.org Signed-off-by: Dawid Gorecki <dgr@semihalf.com> Reviewed-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Shai Brandes <shaibran@amazon.com>
Michal Krawczyk [Wed, 23 Feb 2022 12:19:40 +0000 (13:19 +0100)]
net/ena: fix meta descriptor DF flag setup
Whenever the Tx checksum offload is being used, the meta descriptor
content is taken into consideration. Setting the DF field properly in
the meta descriptor may have a huge impact on the performance of both
IPv4 and IPv6 packets.
The requirements for the df field are as below:
* No offload used - value doesn't matter
* IPv4 - 0 or 1, depending on the DF flag in the IPv4 header
* IPv6 - 1
Setting DF to 0 causes the packet to enter the slow path in the HW and
as a result can noticeably impact the performance.
Moreover, as 'true' may not always be mapped to 1, depending on its
definition for the given platform/compiler, for safety the DF field is
set explicitly to 1.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:39 +0000 (13:19 +0100)]
net/ena: make Tx completion timeout configurable
The default missing Tx completion timeout was set to 5 seconds.
In order to provide users with an interface to control this timeout
and align it with the application's watchdog, a device argument for
controlling this value was added.
The parameter is called 'miss_txc_to' and can be modified using the
devargs interface:
./app -a <bdf>,miss_txc_to=UINT_NUMBER
This parameter accepts values from 0 to 60 and indicates the number of
seconds after which the Tx packet will be considered missing.
HW hints for the Tx completion timeout were removed so as not to
overwrite the parameter from the user. Also, specifying the default Tx
completion timeout value was moved from the configuration to the init
phase, in order to simplify the default value assignment.
Dawid Gorecki [Wed, 23 Feb 2022 12:19:38 +0000 (13:19 +0100)]
net/ena: fix reset reason being overwritten
When triggering the reset, no check was performed to see if the reset
was already triggered. This could result in the original reset reason
being overwritten. Add an ena_trigger_reset helper function, which
checks if the reset was triggered and only sets the reset reason if it
wasn't triggered yet. Replace all occurrences of manually setting the
reset with an ena_trigger_reset call.
Dawid Gorecki [Wed, 23 Feb 2022 12:19:36 +0000 (13:19 +0100)]
net/ena: support Tx mbuf free on demand
The ENA driver did not allow applications to call tx_cleanup. Freeing Tx
mbufs was always done by the driver, and it was not possible to manually
request the driver to free mbufs.
Modify the ena_tx_cleanup function to accept the maximum number of
packets to free and to return the number of packets that were freed.
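The driver-level cleanup is presumably exposed through the generic rte_eth_tx_done_cleanup() ethdev call; a hedged usage sketch:

    #include <rte_ethdev.h>

    /* Ask the PMD to free up to 32 completed Tx mbufs on queue 0.
     * Returns the number freed, or a negative errno (e.g. -ENOTSUP). */
    static int
    reclaim_tx_mbufs(uint16_t port)
    {
        return rte_eth_tx_done_cleanup(port, 0, 32);
    }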
Michal Krawczyk [Wed, 23 Feb 2022 12:19:35 +0000 (13:19 +0100)]
net/ena/base: make IO memzone unique per port
Originally, the ena_com memzone counter was shared by ports, which made
the memzones harder to identify and could potentially lead to races;
because of that, the counter had to be atomic. This atomic counter was
a global variable, and it couldn't work in the multiprocess
implementation.
The memzone is now identified by the port-local memzone counter and the
port ID. Both pieces of information can be found in the shared data, so
it can be probed easily.
Due to how the ena_com compatibility layer is written, all
AQ-command-triggering functions use the stack to save the results of
the AQ and then copy them to the user-given function.
Therefore, to keep the compatibility layer common, introduce the
ENA_PROXY macro. It either calls the wrapped function directly (in the
primary process) or proxies it to the primary via the DPDK IPC
mechanism. Since all proxied calls are taken under a lock, share the
result data through shared memory (in struct ena_adapter) to work
around the 256B IPC parameter size limit. A generic sketch of this
proxy pattern follows the list below.
New proxy calls can be added by:
1. Adding a new message type at the end of enum ena_mp_req
2. Adding new message arguments to the struct ena_mp_body if needed
3. Defining proxy request descriptor with ENA_PROXY_DESC. Its arguments
include handlers for request preparation and response processing.
Any of those may be empty (aside of marking arguments as used).
4. Adding request handling logic to ena_mp_primary_handle()
5. Replacing proxied function calls with ENA_PROXY(adapter, <func>, ...)
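A generic, hedged sketch of this primary/secondary proxy pattern over the DPDK IPC API (the message name and payload layout are illustrative, not ENA's):

    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <rte_eal.h>
    #include <rte_string_fns.h>

    #define MP_NAME "pmd_admin_proxy"

    struct mp_body { int req_type; int result; };

    /* Registered in the primary with rte_mp_action_register(MP_NAME, ...). */
    static int
    primary_handler(const struct rte_mp_msg *msg, const void *peer)
    {
        struct rte_mp_msg reply;
        struct mp_body *body = (struct mp_body *)reply.param;

        memset(&reply, 0, sizeof(reply));
        rte_strlcpy(reply.name, MP_NAME, sizeof(reply.name));
        reply.len_param = sizeof(*body);
        body->req_type = ((const struct mp_body *)msg->param)->req_type;
        body->result = 0; /* the real admin command would run here */
        return rte_mp_reply(&reply, peer);
    }

    /* In a secondary: forward the request and wait for the result. */
    static int
    proxy_call(int req_type)
    {
        struct rte_mp_msg req;
        struct rte_mp_reply replies;
        const struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
        int rc;

        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            return 0; /* call the wrapped function directly instead */
        memset(&req, 0, sizeof(req));
        rte_strlcpy(req.name, MP_NAME, sizeof(req.name));
        req.len_param = sizeof(struct mp_body);
        ((struct mp_body *)req.param)->req_type = req_type;
        if (rte_mp_request_sync(&req, &replies, &ts) != 0)
            return -1;
        rc = ((struct mp_body *)replies.msgs[0].param)->result;
        free(replies.msgs); /* reply array is allocated by the EAL */
        return rc;
    }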
Michal Krawczyk [Wed, 23 Feb 2022 12:19:32 +0000 (13:19 +0100)]
net/ena/base: use optimized memcpy version also on Arm
As the default behavior for arm64 is to alias rte_memcpy to memcpy, ENA
cannot redefine memcpy as rte_memcpy, as it would cause a nested
declaration.
To make it possible to use the optimized memcpy in the ena_com layer on
Arm, the driver now redefines memcpy when it is beneficial:
* For arm64, only when the flag RTE_ARCH_ARM64_MEMCPY was defined
* For arm, only when the flag RTE_ARCH_ARM_NEON_MEMCPY was defined
Michal Krawczyk [Wed, 23 Feb 2022 12:19:31 +0000 (13:19 +0100)]
net/ena: perform Tx cleanup before sending packets
To increase the likelihood that the current burst will fit in the HW
rings, perform Tx cleanup before pushing packets to the HW. It may
increase latency a bit for sparse bursts, but the Tx flow should now be
smoother.
It's also the common order in the Tx burst functions of other PMDs.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:30 +0000 (13:19 +0100)]
net/ena: skip timer if reset is triggered
Some user applications may not support PMD reset handling. If they
support the timer service, it could cause a situation where information
about the reset trigger is shown every time the timer service is
called.
The timer service is now skipped if the reset was already triggered.
Fixes: d9b8b106bf9d ("net/ena: add watchdog and keep alive AENQ handler") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Shai Brandes <shaibran@amazon.com>
Michal Krawczyk [Wed, 23 Feb 2022 12:19:29 +0000 (13:19 +0100)]
net/ena: make link status change interrupt configurable
ENA uses AENQ for notifications about various events, like LSC, keep
alive, etc. By default, it was enabling all AENQ groups that were
supported by both the driver and the device. As a result, the LSC was
always processed, even if the application turned it off explicitly.
As the DPDK provides applications with the possibility to configure the
LSC, ENA should respect that. AENQ groups are now updated upon the
configure step, thus LSC can be activated or disabled between ENA PMD
reconfigurations. Moreover, the LSC capability for the device is
determined dynamically.
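A hedged sketch of the application side enabling LSC at configure time:

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_with_lsc(uint16_t port, uint16_t nrxq, uint16_t ntxq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.intr_conf.lsc = 1; /* request RTE_ETH_EVENT_INTR_LSC delivery */
        return rte_eth_dev_configure(port, nrxq, ntxq, &conf);
    }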
Michal Krawczyk [Wed, 23 Feb 2022 12:19:28 +0000 (13:19 +0100)]
net/ena: add extra Rx checksum related xstats
* Split the 'bad_csum' Rx statistic into 'l3_csum_bad' and 'l4_csum_bad'
to be able to check which checksum was not calculated properly.
* Add the 'l4_csum_good' statistic, which shows how many times the L4 Rx
checksum was properly offloaded.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:27 +0000 (13:19 +0100)]
net/ena: remove unused offload variables
Those variables are being set, but never read. As they seem to be
leftover from the old offloads API and don't have any purpose right
now, they are simply being removed.
Fixes: a4996bd89c42 ("ethdev: new Rx/Tx offloads API") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Artur Rojek <ar@semihalf.com> Reviewed-by: Dawid Gorecki <dgr@semihalf.com> Reviewed-by: Igor Chauskin <igorch@semihalf.com> Reviewed-by: Shai Brandes <shaibran@amazon.com>
Michal Krawczyk [Wed, 23 Feb 2022 12:19:26 +0000 (13:19 +0100)]
net/ena: remove unused enumeration
The enumeration seems to be a leftover from porting the Linux driver to
the DPDK. It was used nowhere and refers to ethtool, which is not
present in the DPDK.
Fixes: 372c1af5ed8f ("net/ena: add dedicated memory area for extra device info") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Artur Rojek <ar@semihalf.com> Reviewed-by: Dawid Gorecki <dgr@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Shai Brandes <shaibran@amazon.com>
Michal Krawczyk [Wed, 23 Feb 2022 12:19:25 +0000 (13:19 +0100)]
net/ena: assert on outstanding mbuf in Tx
To make sure there is no outstanding mbuf in the reused Tx queue (due to
improper cleanup, or some invalid logic on Tx path), the assertion was
added on the Tx path.
As it's being compiled out in the release version, it won't affect
the IO path performance.
Michal Krawczyk [Wed, 23 Feb 2022 12:19:24 +0000 (13:19 +0100)]
net/ena: remove Tx mbuf linearization
The linearization of the mbuf isn't common practice for a PMD, as it
can expose its capabilities to the upper layer using
rte_eth_dev_info_get().
Moreover, the rte_eth_tx_prepare() function should also verify that the
number of segments inside the mbuf isn't too high.
Because of those two circumstances, it is safer to avoid modifying the
mbuf on the PMD's Tx side and remove the linearization entirely.
Instead, add verification of the number of segments to
eth_ena_prep_pkts().
Signed-off-by: Michal Krawczyk <mk@semihalf.com> Reviewed-by: Artur Rojek <ar@semihalf.com> Reviewed-by: Dawid Gorecki <dgr@semihalf.com> Reviewed-by: Igor Chauskin <igorch@amazon.com> Reviewed-by: Shai Brandes <shaibran@amazon.com>