net/mlx5: optimize tunnel offload index pool
authorSuanming Mou <suanmingm@nvidia.com>
Mon, 7 Dec 2020 05:58:34 +0000 (13:58 +0800)
committerFerruh Yigit <ferruh.yigit@intel.com>
Fri, 8 Jan 2021 15:03:04 +0000 (16:03 +0100)
Currently, when an index pool is created without an explicit trunk
size, the pool defaults to a trunk size of 4096 entries.

The maximum number of tunnel offloads supported is 256
(MLX5_MAX_TUNNELS), so creating the index pool with a trunk size of
4096 wastes memory.

This commit changes the tunnel offload index pool trunk size to
MLX5_MAX_TUNNELS to save memory.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Reviewed-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
drivers/net/mlx5/mlx5.c

index 7d3f18c..52a8a25 100644 (file)
@@ -265,6 +265,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
        },
        [MLX5_IPOOL_TUNNEL_ID] = {
                .size = sizeof(struct mlx5_flow_tunnel),
+               .trunk_size = MLX5_MAX_TUNNELS,
                .need_lock = 1,
                .release_mem_en = 1,
                .type = "mlx5_tunnel_offload",