From: Jia He
Date: Fri, 10 Nov 2017 03:30:42 +0000 (+0000)
Subject: ring: guarantee load/load order in enqueue and dequeue
X-Git-Tag: spdx-start~826
X-Git-Url: http://git.droids-corp.org/?p=dpdk.git;a=commitdiff_plain;h=9bc2cbb007c0a3335c5582357ae9f6d37ea0b654

ring: guarantee load/load order in enqueue and dequeue

We observed an rte panic in mbuf_autotest on our Qualcomm arm64 server
(Amberwing).

Root cause:
In __rte_ring_move_cons_head()
...
        do {
                /* Restore n as it may change every loop */
                n = max;

                *old_head = r->cons.head;                //1st load
                const uint32_t prod_tail = r->prod.tail; //2nd load

On weakly ordered architectures (powerpc, arm), the 2nd load may be
reordered before the 1st load, which makes *entries bigger than intended.
This reordering corrupts the enqueue/dequeue accounting.

cpu1(producer)          cpu2(consumer)          cpu3(consumer)
                        load r->prod.tail
in enqueue:
load r->cons.tail
load r->prod.head

store r->prod.tail

                                                load r->cons.head
                                                load r->prod.tail
                                                ...
                                                store r->cons.{head,tail}
                        load r->cons.head

Then r->cons.head can be bigger than prod_tail, which makes *entries very
large, and the consumer moves forward incorrectly.

After this patch, the old cons.head is recalculated after a failed
rte_atomic32_cmpset().

There is no such issue on x86 because x86 has a strong memory ordering
model. But rte_smp_rmb() has no runtime performance impact on x86, so keep
the same code for all architectures rather than adding architecture-specific
handling.

Fixes: 50d769054872 ("ring: add burst API")
Cc: stable@dpdk.org

Signed-off-by: Jia He
Signed-off-by: Jie Liu
Signed-off-by: Bing Zhao
Acked-by: Jerin Jacob
Acked-by: Jianbo Liu
---

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 5e9b3b7b4c..e92443813b 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -409,6 +409,12 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
 		n = max;
 
 		*old_head = r->prod.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t cons_tail = r->cons.tail;
 		/*
 		 * The subtraction is done between two unsigned 32bits value
@@ -517,6 +523,12 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		n = max;
 
 		*old_head = r->cons.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t prod_tail = r->prod.tail;
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
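
For reference, below is a minimal, self-contained C11 sketch of the fixed
consumer head-move loop described above. It is not the DPDK code: the
struct demo_ring type, the demo_move_cons_head() helper and its field names
are illustrative assumptions, and atomic_thread_fence(memory_order_acquire)
plus atomic_compare_exchange_strong_explicit() stand in for the rte_smp_rmb()
and rte_atomic32_cmpset() calls used in rte_ring.h.

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for the prod/cons indexes of struct rte_ring. */
struct demo_ring {
	struct { _Atomic uint32_t head; _Atomic uint32_t tail; } prod;
	struct { _Atomic uint32_t head; _Atomic uint32_t tail; } cons;
};

/*
 * Reserve up to n slots for a consumer and advance cons.head.
 * Returns the number of slots actually reserved (0 if the ring is empty).
 */
static uint32_t
demo_move_cons_head(struct demo_ring *r, uint32_t n,
		    uint32_t *old_head, uint32_t *new_head)
{
	const uint32_t max = n;
	uint32_t entries;
	_Bool success;

	do {
		/* Restore n as it may change every loop */
		n = max;

		*old_head = atomic_load_explicit(&r->cons.head,
						 memory_order_relaxed);

		/*
		 * Read barrier: keeps the cons.head load above from being
		 * reordered after the prod.tail load below on weakly
		 * ordered CPUs (the role rte_smp_rmb() plays in the patch).
		 */
		atomic_thread_fence(memory_order_acquire);

		const uint32_t prod_tail =
			atomic_load_explicit(&r->prod.tail,
					     memory_order_relaxed);

		/* Unsigned subtraction is correct even across wrap-around. */
		entries = prod_tail - *old_head;
		if (n > entries)
			n = entries;
		if (n == 0)
			return 0;

		*new_head = *old_head + n;
		/* On failure, loop and re-read a fresh cons.head. */
		success = atomic_compare_exchange_strong_explicit(
				&r->cons.head, old_head, *new_head,
				memory_order_relaxed, memory_order_relaxed);
	} while (!success);

	return n;
}

Without the fence, the compiler or a weakly ordered CPU may hoist the
prod.tail read above the cons.head read, which is exactly the window the
patch closes; the retry on a failed compare-and-swap is what the commit
message refers to as recalculating the old cons.head.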