net/enic: fix crash when freeing 0 packet to mempool
Author:     Aaron Conole <aconole@redhat.com>
AuthorDate: Wed, 2 Aug 2017 18:02:13 +0000 (14:02 -0400)
Commit:     Thomas Monjalon <thomas@monjalon.net>
CommitDate: Thu, 3 Aug 2017 20:55:32 +0000 (22:55 +0200)
Occasionally, the number of packets to free from the work queue lands
exactly on a boundary such that nb_free = 0 and pool = NULL.  This causes
a segfault as follows:

  (gdb) bt
  #0  rte_mempool_default_cache
  #1  rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
  #2  enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
      completed_index=completed_index@entry=33)
  #3  0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
      wq=wq@entry=0x7efabffcd5b0)
      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:442
  #4  0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
      tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:470
  #5  0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
      tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)
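
For reference, here is a simplified sketch of the pre-fix flow in
enic_free_wq_bufs() (illustrative only; the function name, local names and
the batch size are assumptions, not the exact driver source).  The key
point is that pool is only assigned once a completed mbuf is actually
collected, so when the batch ends exactly on a boundary the final bulk
free runs with nb_free == 0 and pool == NULL, and rte_mempool_put_bulk()
dereferences the mempool pointer (to locate the per-lcore cache) before
it ever looks at n:

  #include <rte_common.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  static void
  free_completed_bufs(struct rte_mbuf **completed, unsigned int nb_to_free)
  {
          struct rte_mbuf *free[64];       /* hypothetical batch size */
          struct rte_mempool *pool = NULL; /* set from first collected mbuf */
          unsigned int nb_free = 0, i;

          for (i = 0; i < nb_to_free; i++) {
                  struct rte_mbuf *m = rte_pktmbuf_prefree_seg(completed[i]);

                  if (m == NULL)
                          continue; /* still referenced, nothing to return */

                  /* flush the batch when it is full or the pool changes */
                  if (nb_free > 0 &&
                      (nb_free == RTE_DIM(free) || m->pool != pool)) {
                          rte_mempool_put_bulk(pool, (void **)free, nb_free);
                          nb_free = 0;
                  }
                  pool = m->pool;
                  free[nb_free++] = m;
          }

          /* Pre-fix: called unconditionally.  If nothing was collected
           * above, nb_free == 0 and pool == NULL, and the NULL pool is
           * dereferenced inside rte_mempool_default_cache() -- frame #0
           * in the backtrace.
           */
          rte_mempool_put_bulk(pool, (void **)free, nb_free);
  }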

This commit makes the enic wq driver match other drivers that perform the
bulk free, by checking that there are actually packets to free.

Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
CC: stable@dpdk.org
Reported-by: Vincent S. Cojot <vcojot@redhat.com>
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
Signed-off-by: Aaron Conole <aconole@redhat.com>
Reviewed-by: John Daley <johndale@cisco.com>
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 5867acf..a39172f 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -503,7 +503,8 @@ static inline void enic_free_wq_bufs(struct vnic_wq *wq, u16 completed_index)
                tail_idx = enic_ring_incr(desc_count, tail_idx);
        }
 
-       rte_mempool_put_bulk(pool, (void **)free, nb_free);
+       if (nb_free > 0)
+               rte_mempool_put_bulk(pool, (void **)free, nb_free);
 
        wq->tail_idx = tail_idx;
        wq->ring.desc_avail += nb_to_free;
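
Note that guarding on nb_free (rather than on pool) is sufficient here:
nb_free is non-zero exactly when at least one mbuf was collected, which in
turn guarantees that pool points at a valid mempool.  It also skips a
pointless zero-length bulk call on the completion path.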