GCC 8.1 warned:
rte_memcpy.h:793:2: note: in expansion of macro 'MOVEUNALIGNED_LEFT47'
MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
^~~~~~~~~~~~~~~~~~~~
rte_memcpy.h:649:51: warning: conversion from 'size_t'
{aka 'long unsigned int'} to 'int' may change value [-Wconversion]
case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;
^
rte_memcpy.h:616:15: note: in definition of macro 'MOVEUNALIGNED_LEFT47_IMM'
tmp = len;
^~~
rte_memcpy.h:793:2: note: in expansion of macro 'MOVEUNALIGNED_LEFT47'
MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
^~~~~~~~~~~~~~~~~~~~
rte_memcpy.h:618:13: warning: conversion to 'size_t'
{aka 'long unsigned int'} from 'int'
may change the sign of the result [-Wsign-conversion]
tmp -= len;
^~
rte_memcpy.h:649:16: note: in expansion of macro 'MOVEUNALIGNED_LEFT47_IMM'
case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;
^~~~~~~~~~~~~~~~~~~~~~~~
rte_memcpy.h:793:2: note: in expansion of macro 'MOVEUNALIGNED_LEFT47'
MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
^~~~~~~~~~~~~~~~~~~~
rte_memcpy.h:618:13: warning: conversion to 'size_t'
{aka 'long unsigned int'} from 'int'
may change the sign of the result [-Wsign-conversion]
tmp -= len;
^~
We can eliminate both warnings by declaring tmp as size_t in the
first place.
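
For reference, a minimal standalone sketch (not part of the patch) of the
pattern behind the two warnings, assuming GCC 8.1 with
-Wconversion -Wsign-conversion; the function name and the mask constant are
made up for illustration:

#include <stddef.h>

/* Sketch only: gcc -c -Wconversion -Wsign-conversion conv.c */
static size_t
tail_advance(size_t len)
{
	int tmp;	/* the original declaration; 'size_t tmp' keeps both lines clean */

	tmp = len;	/* warning: conversion from 'size_t' to 'int' may change value */
	len &= 127;	/* keep only the remainder still to be copied */
	tmp -= len;	/* warning: 'tmp' is converted to 'size_t' for the subtraction,
			 * which may change the sign of the result */
	return (size_t)tmp;
}

With tmp declared as size_t, both assignments stay within one unsigned type
and no implicit narrowing or sign-changing conversion is needed.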
Fixes: d35cc1fe6a ("eal/x86: revert select optimized memcpy at run-time")
Cc: stable@dpdk.org
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Andy Green <andy@warmcat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
*/
#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset) \
__extension__ ({ \
- int tmp; \
+ size_t tmp; \
while (len >= 128 + 16 - offset) { \
xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16)); \
len -= 128; \