net/mlx5: enlarge maximal flow priority
authorDong Zhou <dongzhou@nvidia.com>
Tue, 12 Jan 2021 06:40:00 +0000 (08:40 +0200)
committerFerruh Yigit <ferruh.yigit@intel.com>
Fri, 29 Jan 2021 17:16:07 +0000 (18:16 +0100)
commit5f8ae44dd454b29843bf91d24646ce88cbdd465a
tree0fe00713dbe3b877045525cbcf9247b3d3169764
parent07627fbf15063172a9cad325082066b60d3ed96c

Currently, the maximal flow priority exposed to users in non-root
tables is 4, which is not enough for priority-based flow matching
such as LPM: to match a single IPv4 address by longest prefix, 32
priorities are needed, one per bit of the 32-bit mask length.
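
As an example, a minimal rte_flow sketch of such LPM matching
(the port ID, group number and queue assignment below are
illustrative assumptions, not part of this change). Rules for
longer prefixes get numerically lower, i.e. stronger, priorities:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Install 32 rules matching one IPv4 destination by prefix length;
 * the /32 rule gets priority 0 (highest), the /1 rule priority 31.
 */
static int
install_lpm_rules(uint16_t port_id, rte_be32_t addr)
{
        struct rte_flow_error err;
        uint32_t len;

        for (len = 1; len <= 32; len++) {
                rte_be32_t mask = rte_cpu_to_be_32(~0u << (32 - len));
                struct rte_flow_attr attr = {
                        .group = 1,           /* non-root table */
                        .priority = 32 - len, /* longest prefix wins */
                        .ingress = 1,
                };
                struct rte_flow_item_ipv4 spec = {
                        .hdr.dst_addr = addr & mask,
                };
                struct rte_flow_item_ipv4 ip_mask = {
                        .hdr.dst_addr = mask,
                };
                struct rte_flow_item pattern[] = {
                        { .type = RTE_FLOW_ITEM_TYPE_ETH },
                        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                          .spec = &spec, .mask = &ip_mask },
                        { .type = RTE_FLOW_ITEM_TYPE_END },
                };
                struct rte_flow_action_queue queue = { .index = 0 };
                struct rte_flow_action actions[] = {
                        { .type = RTE_FLOW_ACTION_TYPE_QUEUE,
                          .conf = &queue },
                        { .type = RTE_FLOW_ACTION_TYPE_END },
                };

                if (!rte_flow_create(port_id, &attr, pattern, actions, &err))
                        return -1;
        }
        return 0;
}

With the previous cap of 4, the 0-31 priority spread above could
not be expressed in a non-root table.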

The PMD manages 3 sub-priorities per user priority, one each for
L2, L3 and L4 matching. Since the internal priority field is 16
bits wide, users can use priorities from 0 to 21843.
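
The bound follows from the expansion arithmetic; a hedged sketch
(the names below are hypothetical, not the PMD's internal API):
21844 user levels x 3 sub-priorities = 65532 internal slots, which
fits within the 65536 values of a 16-bit priority.

#include <stdint.h>

/* Hypothetical names for illustration only. */
enum sub_priority {
        SUB_PRIO_L2 = 0,
        SUB_PRIO_L3 = 1,
        SUB_PRIO_L4 = 2,
};

#define USER_PRIO_MAX 21843 /* 21844 levels * 3 = 65532 <= 65536 */

static inline uint16_t
internal_priority(uint32_t user_prio, enum sub_priority sub)
{
        /* Caller validates user_prio <= USER_PRIO_MAX. */
        return (uint16_t)(user_prio * 3 + sub);
}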

Those enlarged flow priorities are only used for ingress or egress
flow groups greater than 0 and for any transfer flow group.
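
Expressed as a predicate over the rte_flow attributes, this reads
as follows (a sketch; the helper name is hypothetical):

#include <stdbool.h>
#include <rte_flow.h>

/* Hypothetical helper: does the enlarged 0-21843 range apply? */
static inline bool
enlarged_prio_applies(const struct rte_flow_attr *attr)
{
        if (attr->transfer)
                return true;      /* any transfer flow group */
        return attr->group > 0;   /* non-root ingress/egress groups */
}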

Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
doc/guides/nics/mlx5.rst
doc/guides/rel_notes/release_21_02.rst
drivers/net/mlx5/mlx5_flow.c
drivers/net/mlx5/mlx5_flow.h
drivers/net/mlx5/mlx5_flow_dv.c
drivers/net/mlx5/mlx5_flow_verbs.c