Adding an additional failure path to the DMA mask check has exposed an
issue where the `hugepage` pointer may point to memory that has already
been unmapped while its value is still non-NULL, so the failure handler
will attempt to unmap it a second time if the DMA mask check fails. Fix
it by setting the `hugepage` pointer to NULL once it is no longer
needed. (A standalone sketch of the double-unmap hazard follows the
diff below.)
Coverity issue: 325730
Fixes: 165c89b84538 ("mem: use DMA mask check for legacy memory")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
 	tmp_hp = NULL;
 
 	munmap(hugepage, nr_hugefiles * sizeof(struct hugepage_file));
+	hugepage = NULL;
 
 	/* we're not going to allocate more pages, so release VA space for
 	 * unused memseg lists
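
For context, here is a minimal standalone sketch of the hazard the
patch fixes: a shared failure handler that runs after an earlier
munmap() will unmap the same (now stale) mapping again unless the
pointer is reset to NULL. The dma_mask_ok() helper and the map_len
value are hypothetical stand-ins for the DPDK code paths, not the
actual functions.

#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical stand-in for the DMA mask check; pretend it fails so
 * that the shared failure handler below runs. */
static int dma_mask_ok(void)
{
	return 0;
}

int main(void)
{
	/* Assumed length; the DPDK code uses
	 * nr_hugefiles * sizeof(struct hugepage_file). */
	size_t map_len = 4096;
	void *hugepage = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (hugepage == MAP_FAILED)
		return 1;

	/* ... mapping consumed, then released once no longer needed ... */
	munmap(hugepage, map_len);
	hugepage = NULL;	/* the fix: without this, the failure
				 * handler would unmap the range again */

	if (!dma_mask_ok())
		goto fail;

	return 0;

fail:
	/* Shared failure handler: safe only because the pointer was
	 * reset to NULL after the first munmap(). */
	if (hugepage != NULL)
		munmap(hugepage, map_len);
	return 1;
}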