net/mlx5: remove redundant operations in NEON Rx
author Ruifeng Wang <ruifeng.wang@arm.com>
Wed, 7 Jul 2021 09:03:06 +0000 (17:03 +0800)
committer Raslan Darawsheh <rasland@nvidia.com>
Thu, 15 Jul 2021 13:16:26 +0000 (15:16 +0200)
The mask of entries after the compressed CQE is already covered by the
invalid mask of non-compressed valid CQEs, so the separate mask
calculation is redundant and can be removed. The change showed a slight
performance uplift on N1SDP. A scalar sketch of the equivalence is
included after the diff below.

Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
drivers/net/mlx5/mlx5_rxtx_vec_neon.h

index 5c569ee..4d1710b 100644 (file)
@@ -767,16 +767,15 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
                comp_idx = __builtin_clzl(vget_lane_u64(vreinterpret_u64_u16(
                                          comp_mask), 0)) /
                                          (sizeof(uint16_t) * 8);
-               /* D.6 mask out entries after the compressed CQE. */
-               mask = vcreate_u16(comp_idx < MLX5_VPMD_DESCS_PER_LOOP ?
-                                  -1UL >> (comp_idx * sizeof(uint16_t) * 8) :
-                                  0);
-               invalid_mask = vorr_u16(invalid_mask, mask);
+               invalid_mask = vorr_u16(invalid_mask, comp_mask);
                /* D.7 count non-compressed valid CQEs. */
                n = __builtin_clzl(vget_lane_u64(vreinterpret_u64_u16(
                                   invalid_mask), 0)) / (sizeof(uint16_t) * 8);
                nocmp_n += n;
-               /* D.2 get the final invalid mask. */
+               /*
+                * D.2 mask out entries after the compressed CQE.
+                *     get the final invalid mask.
+                */
                mask = vcreate_u16(n < MLX5_VPMD_DESCS_PER_LOOP ?
                                   -1UL >> (n * sizeof(uint16_t) * 8) : 0);
                invalid_mask = vorr_u16(invalid_mask, mask);
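
For illustration, below is a minimal scalar sketch of why the removed D.6
mask was redundant. This is not the driver's NEON code: DESCS_PER_LOOP,
first_set() and or_tail_mask() are hypothetical stand-ins for
MLX5_VPMD_DESCS_PER_LOOP, the __builtin_clzl()-based lane counting and the
"-1UL >> ..." mask build, and the lane ordering of the real vectors is
simplified to plain array indexing.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DESCS_PER_LOOP 4	/* stand-in for MLX5_VPMD_DESCS_PER_LOOP */

/* Index of the first set lane (DESCS_PER_LOOP if none); models clz()/16. */
static unsigned int first_set(const uint16_t m[DESCS_PER_LOOP])
{
	unsigned int i;

	for (i = 0; i < DESCS_PER_LOOP; i++)
		if (m[i])
			return i;
	return DESCS_PER_LOOP;
}

/* OR a "mask out every lane at index >= idx" pattern into m (D.6/D.2). */
static void or_tail_mask(uint16_t m[DESCS_PER_LOOP], unsigned int idx)
{
	unsigned int i;

	for (i = idx; i < DESCS_PER_LOOP; i++)
		m[i] = UINT16_MAX;
}

int main(void)
{
	/* Example: lanes 2 and 3 hold compressed CQEs, lane 3 is also invalid. */
	const uint16_t comp[DESCS_PER_LOOP] = { 0, 0, UINT16_MAX, UINT16_MAX };
	const uint16_t invalid0[DESCS_PER_LOOP] = { 0, 0, 0, UINT16_MAX };
	uint16_t old_inv[DESCS_PER_LOOP], new_inv[DESCS_PER_LOOP];
	unsigned int comp_idx = first_set(comp);
	unsigned int i, n_old, n_new;

	/* Old flow: D.6 tail mask from comp_idx, then D.7 count, then D.2. */
	memcpy(old_inv, invalid0, sizeof(old_inv));
	or_tail_mask(old_inv, comp_idx);
	n_old = first_set(old_inv);
	or_tail_mask(old_inv, n_old);

	/* New flow: OR comp_mask in directly, then D.7 count, then D.2. */
	memcpy(new_inv, invalid0, sizeof(new_inv));
	for (i = 0; i < DESCS_PER_LOOP; i++)
		new_inv[i] |= comp[i];
	n_new = first_set(new_inv);
	or_tail_mask(new_inv, n_new);

	/* Both flows yield the same count and the same final invalid mask. */
	assert(n_old == n_new);
	assert(memcmp(old_inv, new_inv, sizeof(old_inv)) == 0);
	printf("nocmp_n = %u, masks match\n", n_old);
	return 0;
}

Built with any C compiler (e.g. cc sketch.c && ./a.out), the assertions
hold for this and other lane patterns: OR-ing comp_mask into invalid_mask
before the D.7 count already marks the first compressed entry as invalid,
so the D.2 tail mask derived from n subsumes the old D.6 mask built from
comp_idx.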