From 4531d096d105a9ef65c516fc59e86fc0c56e46cc Mon Sep 17 00:00:00 2001
From: Anatoly Burakov
Date: Tue, 6 Nov 2018 14:13:29 +0000
Subject: [PATCH] mem: fix use after free in legacy mem init

Adding an additional failure path in the DMA mask check has exposed an
issue where the `hugepage` pointer may point to memory that has already
been unmapped, while the pointer value itself is still non-NULL, so the
failure handler will attempt to unmap it a second time if the DMA mask
check fails.

Fix it by setting the `hugepage` pointer to NULL once it is no longer
needed.

Coverity issue: 325730
Fixes: 165c89b84538 ("mem: use DMA mask check for legacy memory")

Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index c1b5e07911..48b23ce19a 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1617,6 +1617,7 @@ eal_legacy_hugepage_init(void)
 	tmp_hp = NULL;

 	munmap(hugepage, nr_hugefiles * sizeof(struct hugepage_file));
+	hugepage = NULL;

 	/* we're not going to allocate more pages, so release VA space for
 	 * unused memseg lists
--
2.20.1