From: Jianfeng Tan
Date: Thu, 26 Apr 2018 15:34:07 +0000 (+0000)
Subject: net/virtio-user: fix hugepage files enumeration
X-Git-Url: http://git.droids-corp.org/?a=commitdiff_plain;h=169a9da64a859e1782e49886e7214982304a580f;p=dpdk.git

net/virtio-user: fix hugepage files enumeration

After commit 2a04139f66b4 ("eal: add single file segments option"),
one hugepage file can contain multiple hugepages which are further
mapped to different memory regions. The original enumeration
implementation cannot handle this situation.

This patch filters out the duplicated files and adjusts the sizes
after the enumeration.

Fixes: 6a84c37e3975 ("net/virtio-user: add vhost-user adapter layer")

Signed-off-by: Jianfeng Tan
Acked-by: Maxime Coquelin
---

diff --git a/doc/guides/howto/virtio_user_for_container_networking.rst b/doc/guides/howto/virtio_user_for_container_networking.rst
index aa68b53151..476ce3a63c 100644
--- a/doc/guides/howto/virtio_user_for_container_networking.rst
+++ b/doc/guides/howto/virtio_user_for_container_networking.rst
@@ -109,7 +109,8 @@ We have below limitations in this solution:
 * Cannot work with --no-huge option. Currently, DPDK uses anonymous mapping
   under this option which cannot be reopened to share with vhost backend.
 * Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8) hugepages.
-  In another word, do not use 2MB hugepage so far.
+  If you have more regions (especially when 2MB hugepages are used), the option,
+  --single-file-segments, can help to reduce the number of shared files.
 * Applications should not use file name like HUGEFILE_FMT ("%smap_%d").
   That will bring confusion when sharing hugepage files with backend by name.
 * Root privilege is a must. DPDK resolves physical addresses of hugepages
diff --git a/drivers/net/virtio/virtio_user/vhost_user.c b/drivers/net/virtio/virtio_user/vhost_user.c
index a6df97a002..573ef07f9b 100644
--- a/drivers/net/virtio/virtio_user/vhost_user.c
+++ b/drivers/net/virtio/virtio_user/vhost_user.c
@@ -138,12 +138,13 @@ struct hugepage_file_info {
 static int
 get_hugepage_file_info(struct hugepage_file_info huges[], int max)
 {
-	int idx;
+	int idx, k, exist;
 	FILE *f;
 	char buf[BUFSIZ], *tmp, *tail;
 	char *str_underline, *str_start;
 	int huge_index;
 	uint64_t v_start, v_end;
+	struct stat stats;
 
 	f = fopen("/proc/self/maps", "r");
 	if (!f) {
@@ -183,16 +184,39 @@ get_hugepage_file_info(struct hugepage_file_info huges[], int max)
 		if (sscanf(str_start, "map_%d", &huge_index) != 1)
 			continue;
 
+		/* skip duplicated file which is mapped to different regions */
+		for (k = 0, exist = -1; k < idx; ++k) {
+			if (!strcmp(huges[k].path, tmp)) {
+				exist = k;
+				break;
+			}
+		}
+		if (exist >= 0)
+			continue;
+
 		if (idx >= max) {
 			PMD_DRV_LOG(ERR, "Exceed maximum of %d", max);
 			goto error;
 		}
+
 		huges[idx].addr = v_start;
-		huges[idx].size = v_end - v_start;
+		huges[idx].size = v_end - v_start; /* To be corrected later */
 		snprintf(huges[idx].path, PATH_MAX, "%s", tmp);
 		idx++;
 	}
 
+	/* correct the size for files who have many regions */
+	for (k = 0; k < idx; ++k) {
+		if (stat(huges[k].path, &stats) < 0) {
+			PMD_DRV_LOG(ERR, "Failed to stat %s, %s\n",
+				    huges[k].path, strerror(errno));
+			continue;
+		}
+		huges[k].size = stats.st_size;
+		PMD_DRV_LOG(INFO, "file %s, size %zx\n",
+			    huges[k].path, huges[k].size);
+	}
+
 	fclose(f);
 	return idx;
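
For reference, the enumeration strategy of the fix can be sketched as a small
standalone program: scan /proc/self/maps, record each hugepage backing file
only once, and take the size from stat() rather than from a single mapping,
since with --single-file-segments one file can back several mappings. This is
an illustrative sketch only; scan_hugepage_files(), struct hp_file and
MAX_FILES are made-up names, not DPDK APIs, and the path match is simplified
to a strstr("map_") check.

/*
 * Sketch of the dedup-then-stat enumeration.  Not DPDK code.
 */
#include <errno.h>
#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define MAX_FILES 8	/* mirrors VHOST_MEMORY_MAX_NREGIONS */

struct hp_file {
	uint64_t addr;
	uint64_t size;
	char path[PATH_MAX];
};

static int
scan_hugepage_files(struct hp_file files[], int max)
{
	FILE *f;
	char line[BUFSIZ];
	char path[BUFSIZ];
	uint64_t start, end;
	struct stat st;
	int i, n = 0;

	f = fopen("/proc/self/maps", "r");
	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		int dup = 0;

		/* each line is "start-end perms offset dev inode   pathname" */
		if (sscanf(line, "%" SCNx64 "-%" SCNx64 " %*s %*s %*s %*s %s",
			   &start, &end, path) != 3)
			continue;

		/* keep only hugepage backing files named "...map_<N>" */
		if (!strstr(path, "map_"))
			continue;

		/* skip a file already recorded: one file may appear behind
		 * several distinct mappings */
		for (i = 0; i < n; ++i) {
			if (!strcmp(files[i].path, path)) {
				dup = 1;
				break;
			}
		}
		if (dup)
			continue;

		if (n >= max)
			break;

		files[n].addr = start;
		files[n].size = end - start;	/* corrected below */
		snprintf(files[n].path, PATH_MAX, "%s", path);
		n++;
	}
	fclose(f);

	/* a single mapping covers only one segment of the file; stat()
	 * reports the size of the whole file, which is what has to be
	 * shared with the vhost backend */
	for (i = 0; i < n; ++i) {
		if (stat(files[i].path, &st) == 0)
			files[i].size = st.st_size;
		else
			fprintf(stderr, "stat(%s): %s\n",
				files[i].path, strerror(errno));
	}
	return n;
}

int
main(void)
{
	struct hp_file files[MAX_FILES];
	int i, n = scan_hugepage_files(files, MAX_FILES);

	for (i = 0; i < n; ++i)
		printf("%s: addr 0x%" PRIx64 ", size 0x%" PRIx64 "\n",
		       files[i].path, files[i].addr, files[i].size);
	return 0;
}

Built with a plain C compiler and run from a process that has hugepage
mappings, the sketch prints each backing file once with its full size, which
is the behaviour the patch gives get_hugepage_file_info().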