net/mlx5: support single flow dump
authorHaifei Luo <haifeil@nvidia.com>
Thu, 15 Apr 2021 11:19:24 +0000 (14:19 +0300)
committerRaslan Darawsheh <rasland@nvidia.com>
Mon, 19 Apr 2021 10:45:05 +0000 (12:45 +0200)
commitbd0a931543d9fcb437a10b6c86efcba394a4adb2
treeb75ce01ff1ea531c2a4c26f9250a34e0a9fe02ab
parenta38d22ed450d12990bda897aa6f7d8bb72977a5a
net/mlx5: support single flow dump

Modify the API mlx5_flow_dev_dump to support dumping a single flow rule.
Modify mlx5_socket since one extra argument, flow_ptr, is added.
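As a hedged illustration (not part of this patch's diff), the sketch below
assumes the DPDK 21.05+ rte_flow_dev_dump() prototype that takes a flow
handle: a non-NULL handle dumps only that rule, NULL keeps the old
dump-all behavior. The helper name and file path handling are illustrative.

	/* Minimal application-side sketch, assuming the rte_flow_dev_dump()
	 * prototype with a flow-handle argument. */
	#include <stdio.h>
	#include <rte_flow.h>

	static int
	dump_one_flow(uint16_t port_id, struct rte_flow *flow, const char *path)
	{
		struct rte_flow_error error;
		FILE *file = fopen(path, "w");
		int ret;

		if (file == NULL)
			return -1;
		/* flow != NULL: dump only this rule; flow == NULL: dump all rules. */
		ret = rte_flow_dev_dump(port_id, flow, file, &error);
		fclose(file);
		return ret;
	}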

The data structure sent to the DPDK application from the utility triggering
the flow dumps should be packed, and its endianness must be specified.
The native host endianness can be used, since all exchange happens within
the same host (we use sendmsg ancillary data and share the file handle;
a remote approach is not applicable, as no inter-host communication happens).

The message structure to dump one/all flow(s):
	struct mlx5_flow_dump_req {
		uint32_t port_id;
		uint64_t flow_ptr;
	} __rte_packed;

If flow_ptr is 0, all flows for the specified port will be dumped.
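For illustration only, a minimal sketch of how a triggering utility could
build and send this request. The socket path, socket type, and error
handling here are assumptions; the packed request layout and passing the
output file handle as sendmsg ancillary data follow the description above.

	/* Hypothetical client-side sketch; not the PMD's or any shipped
	 * utility's actual code. */
	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <sys/un.h>

	struct mlx5_flow_dump_req {
		uint32_t port_id;
		uint64_t flow_ptr; /* 0 means "dump all flows on port_id". */
	} __attribute__((packed)); /* stands in for __rte_packed */

	static int
	request_flow_dump(const char *sock_path, uint32_t port_id,
			  uint64_t flow_ptr, int out_fd)
	{
		struct mlx5_flow_dump_req req = {
			.port_id = port_id,
			/* Native host endianness: same-host exchange only. */
			.flow_ptr = flow_ptr,
		};
		struct sockaddr_un addr = { .sun_family = AF_UNIX };
		/* Ancillary buffer carrying the dump file descriptor (SCM_RIGHTS). */
		union {
			char buf[CMSG_SPACE(sizeof(int))];
			struct cmsghdr align;
		} control;
		struct iovec iov = { .iov_base = &req, .iov_len = sizeof(req) };
		struct msghdr msg = {
			.msg_iov = &iov,
			.msg_iovlen = 1,
			.msg_control = control.buf,
			.msg_controllen = sizeof(control.buf),
		};
		struct cmsghdr *cmsg;
		int fd;

		strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);
		fd = socket(AF_UNIX, SOCK_STREAM, 0); /* socket type assumed */
		if (fd < 0)
			return -1;
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			close(fd);
			return -1;
		}
		cmsg = CMSG_FIRSTHDR(&msg);
		cmsg->cmsg_level = SOL_SOCKET;
		cmsg->cmsg_type = SCM_RIGHTS;
		cmsg->cmsg_len = CMSG_LEN(sizeof(int));
		memcpy(CMSG_DATA(cmsg), &out_fd, sizeof(int));
		if (sendmsg(fd, &msg, 0) < 0) {
			close(fd);
			return -1;
		}
		close(fd);
		return 0;
	}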

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
drivers/net/mlx5/linux/mlx5_os.h
drivers/net/mlx5/linux/mlx5_socket.c
drivers/net/mlx5/mlx5.h
drivers/net/mlx5/mlx5_flow.c