From: Eugenio Pérez
Date: Wed, 29 Jan 2020 19:33:10 +0000 (+0100)
Subject: vhost: flush shadow Tx if no more packets
X-Git-Url: http://git.droids-corp.org/?a=commitdiff_plain;h=cdf1dc5e6a361df17d081e3e975cc586a4b7d68d;p=dpdk.git

vhost: flush shadow Tx if no more packets

The current implementation of vhost_net for packed vrings tries to fill
the shadow vector before sending any actual updates to the guest. While
this can be beneficial for throughput, it conflicts with bufferbloat
mitigation methods such as the Linux kernel NAPI, which stop
transmitting packets when there are too many bytes/buffers in the
driver.

To solve this, we flush the shadow packets at the end of
virtio_dev_tx_packed if we have starved the vring, i.e., the next
buffer is not available for the device.

Since this last check can be expensive because of the atomic read
involved, we only perform it if we have not obtained the expected
"count" packets. If we do obtain "count" packets and no more are
available, the caller needs to call virtio_dev_tx_packed again.

Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
Cc: stable@dpdk.org

Signed-off-by: Eugenio Pérez
Reviewed-by: Maxime Coquelin
---

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 73bf98bd93..37c47c7dc0 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
 	return pkt_idx;
 }
 
+static __rte_always_inline bool
+next_desc_is_avail(const struct vhost_virtqueue *vq)
+{
+	bool wrap_counter = vq->avail_wrap_counter;
+	uint16_t next_used_idx = vq->last_used_idx + 1;
+
+	if (next_used_idx >= vq->size) {
+		next_used_idx -= vq->size;
+		wrap_counter ^= 1;
+	}
+
+	return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
+}
+
 static __rte_noinline uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev,
 	struct vhost_virtqueue *vq,
@@ -2165,9 +2179,20 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 	} while (remained);
 
-	if (vq->shadow_used_idx)
+	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
 
+		if (remained && !next_desc_is_avail(vq)) {
+			/*
+			 * The guest may be waiting for buffers to be
+			 * marked as used so it can enqueue more, to
+			 * avoid bufferbloat; try to reduce latency here.
+			 */
+			vhost_flush_dequeue_shadow_packed(dev, vq);
+			vhost_vring_call_packed(dev, vq);
+		}
+	}
+
 	return pkt_idx;
 }
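
For readers unfamiliar with packed virtqueues, here is a minimal,
self-contained sketch (not DPDK code: the "toy_" names are invented,
and only the flag bit positions follow the virtio 1.1 packed-ring
layout) of the wrap-counter logic that next_desc_is_avail() builds on.
A slot is available to the device when its AVAIL flag matches the
ring's wrap counter and its USED flag does not; the real helper reads
the flags with an acquire load, which is the atomic cost the commit
message avoids on the fast path.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Flag bit positions from the virtio 1.1 packed-ring layout. */
#define VRING_DESC_F_AVAIL	(1u << 7)
#define VRING_DESC_F_USED	(1u << 15)

#define RING_SIZE 4

/* Simplified stand-in for a packed-ring descriptor. */
struct toy_desc {
	uint16_t flags;
};

/* Available to the device: AVAIL matches the wrap counter and USED
 * does not. */
static bool
toy_desc_is_avail(const struct toy_desc *desc, bool wrap_counter)
{
	uint16_t flags = desc->flags;

	return wrap_counter == !!(flags & VRING_DESC_F_AVAIL) &&
	       wrap_counter != !!(flags & VRING_DESC_F_USED);
}

/* Peek at the slot after last_used_idx, flipping the wrap counter
 * on ring wrap-around, as next_desc_is_avail() does. */
static bool
toy_next_desc_is_avail(const struct toy_desc *ring, uint16_t last_used_idx,
		bool wrap_counter)
{
	uint16_t next = last_used_idx + 1;

	if (next >= RING_SIZE) {
		next -= RING_SIZE;
		wrap_counter ^= 1;
	}

	return toy_desc_is_avail(&ring[next], wrap_counter);
}

int
main(void)
{
	/* The guest made slots 0 and 1 available, then stopped. */
	struct toy_desc ring[RING_SIZE] = {
		{ .flags = VRING_DESC_F_AVAIL },
		{ .flags = VRING_DESC_F_AVAIL },
		{ .flags = 0 },
		{ .flags = 0 },
	};

	printf("after idx 0: %d\n", toy_next_desc_is_avail(ring, 0, true));
	printf("after idx 1: %d\n", toy_next_desc_is_avail(ring, 1, true));
	/* Prints 1 then 0: the ring is starved after slot 1, which is
	 * exactly when the patch flushes the shadow ring and kicks. */
	return 0;
}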
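
The "call again" contract also matters for callers: the starvation
check only runs when fewer than "count" packets were dequeued, so a
full burst says nothing about whether the shadow entries were flushed.
A hypothetical polling loop could look like the sketch below (the
drain_txq name and BURST size are invented; it assumes
virtio_dev_tx_packed()'s remaining parameters are the mbuf pool, the
output array and the requested count, and since the function is static
to virtio_net.c this is illustrative only):

#define BURST 32

static void
drain_txq(struct virtio_net *dev, struct vhost_virtqueue *vq,
		struct rte_mempool *pool)
{
	struct rte_mbuf *pkts[BURST];
	uint16_t n;

	do {
		n = virtio_dev_tx_packed(dev, vq, pool, pkts, BURST);
		/* ... hand the n dequeued mbufs to the datapath ... */

		/*
		 * A full burst skipped the next_desc_is_avail() check,
		 * so call again until a partial burst comes back.
		 */
	} while (n == BURST);
}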