net/mlx5: fix Rx packet padding config via DevX
author    Alexander Kozyrev <akozyrev@nvidia.com>
          Sun, 15 Nov 2020 14:25:34 +0000 (14:25 +0000)
committer Ferruh Yigit <ferruh.yigit@intel.com>
          Fri, 20 Nov 2020 20:10:05 +0000 (21:10 +0100)
Received packets can be padded so that PCI transactions are
aligned to the cache line size. This can improve performance by
avoiding partial cache line writes, in exchange for increased
PCI bandwidth.
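
To illustrate the tradeoff, a minimal self-contained sketch of the
arithmetic (the 64-byte cache line and the packet length are assumed
example values, not taken from the driver):

#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64u /* assumed typical cache line size */

int main(void)
{
	uint32_t pkt_len = 67; /* arbitrary example packet length */
	/* End padding rounds the DMA write up to whole cache lines:
	 * no partial-line write, at the cost of extra bytes on the bus. */
	uint32_t padded = (pkt_len + CACHE_LINE - 1) & ~(CACHE_LINE - 1);

	printf("%u-byte packet -> %u-byte PCI write\n", pkt_len, padded);
	return 0;
}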

This feature is supposed to be controlled by the rxq_pkt_pad_en
devarg, and that is indeed the case for an RxQ created via the
Verbs API. But in the DevX API case it is erroneously controlled
by the rxq_cqe_pad_en devarg, which is in charge of the CQE
padding and should not affect RxQ creation.
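
The two devargs are meant to drive two distinct configuration flags.
A self-contained sketch of that mapping (the struct is a minimal
stand-in for the driver's mlx5_dev_config; field and devarg names
follow the code in this patch, the parsing helper itself is
hypothetical):

#include <string.h>

/* Minimal stand-in for the relevant bits of mlx5_dev_config. */
struct dev_config_sketch {
	unsigned int hw_padding:1; /* Rx packet padding (rxq_pkt_pad_en) */
	unsigned int cqe_pad:1;    /* CQE padding (rxq_cqe_pad_en) */
};

/* Each devarg drives its own flag; the bug was that the DevX RxQ
 * path read cqe_pad where it should have read hw_padding. */
static void
parse_padding_devarg(struct dev_config_sketch *cfg,
		     const char *key, unsigned long val)
{
	if (strcmp(key, "rxq_pkt_pad_en") == 0)
		cfg->hw_padding = !!val;
	else if (strcmp(key, "rxq_cqe_pad_en") == 0)
		cfg->cqe_pad = !!val;
}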

Fix DevX RxQ creation by using the proper configuration flag for
Rx packet padding, the one set by the rxq_pkt_pad_en devarg.

Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
drivers/net/mlx5/mlx5_devx.c

index e9ceda5..34044fc 100644
@@ -294,7 +294,7 @@ static void
 mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
                       struct mlx5_devx_wq_attr *wq_attr)
 {
-       wq_attr->end_padding_mode = priv->config.cqe_pad ?
+       wq_attr->end_padding_mode = priv->config.hw_padding ?
                                        MLX5_WQ_END_PAD_MODE_ALIGN :
                                        MLX5_WQ_END_PAD_MODE_NONE;
        wq_attr->pd = priv->sh->pdn;
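
For comparison, the Verbs RxQ path has been keyed on the same flag
all along. A hedged fragment modeled on the upstream Verbs WQ
creation code (simplified, not verbatim; the guard macro name
follows the rdma-core flag and may differ between versions):

#ifdef HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING
	if (priv->config.hw_padding) {
		wq_attr.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
		wq_attr.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
	}
#endif

With the fix, both APIs consult priv->config.hw_padding, so
rxq_pkt_pad_en behaves the same whether the RxQ is created via
Verbs or DevX, and rxq_cqe_pad_en is left to govern CQE padding
only.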