The memory barrier was intended to preserve descriptor data integrity
(see the comments in [1]). Since the loads were later implemented with
NEON, a whole descriptor entry is loaded in a single, atomic operation,
which makes ordering between partial loads unnecessary. Remove the
barrier accordingly.
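
(Illustration, not part of the patch: a minimal C sketch of the two
access patterns. The struct layout and helper names are hypothetical,
simplified from the 16-byte ixgbe Rx descriptor.)

#include <arm_neon.h>
#include <stdint.h>

/* Simplified 16-byte Rx descriptor (hypothetical layout). */
struct rx_desc {
	uint64_t pkt_addr;   /* low qword: buffer address */
	uint64_t hdr_status; /* high qword: length and DD status bit */
};

/* Partial loads: two independent 8-byte reads. A read barrier was
 * needed between them so the two halves could not be observed out
 * of order. */
static inline void desc_load_partial(const struct rx_desc *rxdp,
		uint64_t out[2])
{
	out[1] = rxdp->hdr_status;
	/* rte_smp_rmb() used to order the two reads here. */
	out[0] = rxdp->pkt_addr;
}

/* NEON load: the whole entry is fetched by a single vld1q_u64, so
 * there is no partial-load window left to order. */
static inline uint64x2_t desc_load_whole(const struct rx_desc *rxdp)
{
	return vld1q_u64((const uint64_t *)rxdp);
}
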
Corrected a couple of code comments: each vld1q_u64 loads two mbuf
pointers from the sw_ring, not one.
In terms of performance, slightly higher average throughput was
observed in tests with an 82599ES NIC.
[1] http://patches.dpdk.org/patch/18153/
Fixes: 989a84050542 ("net/ixgbe: fix received packets number for ARM NEON")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
uint32_t var = 0;
uint32_t stat;
- /* B.1 load 1 mbuf point */
+ /* B.1 load 2 mbuf point */
mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
/* B.2 copy 2 mbuf point into rx_pkts */
vst1q_u64((uint64_t *)&rx_pkts[pos], mbp1);
- /* B.1 load 1 mbuf point */
+ /* B.1 load 2 mbuf point */
mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
/* A. load 4 pkts descs */
descs[1] = vld1q_u64((uint64_t *)(rxdp + 1));
descs[2] = vld1q_u64((uint64_t *)(rxdp + 2));
descs[3] = vld1q_u64((uint64_t *)(rxdp + 3));
- rte_smp_rmb();
/* B.2 copy 2 mbuf point into rx_pkts */
vst1q_u64((uint64_t *)&rx_pkts[pos + 2], mbp2);
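
(For context, not part of the patch: this assumes that on armv8 DPDK
maps rte_smp_rmb() to a load-side "dmb ishld" in its arm64 rte_atomic
header, so the change drops one such barrier per four-descriptor burst
from the Rx hot loop. A rough sketch of the instruction removed, under
that assumption:)

/* Assumed armv8 expansion of the removed rte_smp_rmb() call. */
static inline void removed_barrier_sketch(void)
{
	__asm__ volatile("dmb ishld" : : : "memory");
}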