Currently, the maximum number of rx/tx queues is kept by EAL, but
the vhost PMD uses that single value with two different meanings:
- the number of queues currently enabled, and
- the maximum number of queues supported.
This double meaning causes a failure in the following sequence of steps.
* Invoke the application with the following option:
    --vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure the queues:
    rte_eth_dev_configure(portid, 2, 2, ...);
* Configure the queues again:
    rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() call fails because, after the
first call, both the enabled queue count and the reported maximum
number of supported queues are '2'. A short repro sketch follows.
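The failure can be reproduced with a short test. This is a minimal
sketch, not part of the patch: 'portid' is assumed to refer to the
vhost vdev created above (port ids are uint8_t in this DPDK era),
and error handling is trimmed.

static int
reconfigure_check(uint8_t portid)
{
	static const struct rte_eth_conf conf; /* zeroed defaults */
	int ret;

	/* First pass: enable only 2 of the 4 queues declared by the
	 * 'queues=4' vdev argument. */
	ret = rte_eth_dev_configure(portid, 2, 2, &conf);
	if (ret < 0)
		return ret;

	/* Second pass: grow back to 4 queues. Before the fix this
	 * returned -EINVAL, because eth_dev_info() reported the
	 * currently enabled count (2) as the maximum. */
	return rte_eth_dev_configure(portid, 4, 4, &conf);
}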
To fix the issue, this patch adds a separate variable to the vhost
PMD that keeps the maximum number of supported queues.
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
 struct pmd_internal {
 	char *dev_name;
 	char *iface_name;
+	uint16_t max_queues;
 	volatile uint16_t once;
 };
 eth_dev_info(struct rte_eth_dev *dev,
 	     struct rte_eth_dev_info *dev_info)
 {
+	struct pmd_internal *internal;
+
+	internal = dev->data->dev_private;
+	if (internal == NULL) {
+		RTE_LOG(ERR, PMD, "Invalid device specified\n");
+		return;
+	}
+
 	dev_info->driver_name = drivername;
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = dev->data->nb_rx_queues;
-	dev_info->max_tx_queues = dev->data->nb_tx_queues;
+	dev_info->max_rx_queues = internal->max_queues;
+	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 }
 	memmove(data->name, eth_dev->data->name, sizeof(data->name));
 	data->nb_rx_queues = queues;
 	data->nb_tx_queues = queues;
+	internal->max_queues = queues;
 	data->dev_link = pmd_link;
 	data->mac_addrs = eth_addr;
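Here 'queues' is the value parsed from the 'queues=N' vdev argument,
so the device-wide maximum is fixed once at creation time, while
data->nb_rx_queues and data->nb_tx_queues are overwritten by every
rte_eth_dev_configure() call. A sketch of the programmatic equivalent
of the --vdev option used above (the socket path is hypothetical;
rte_eal_vdev_init() is the creation API of this DPDK era, later
renamed rte_vdev_init()):

	/* Create the vhost vdev from code; the 'queues=4' argument is
	 * what ends up in internal->max_queues. */
	if (rte_eal_vdev_init("eth_vhost0", "iface=/tmp/sock0,queues=4") < 0)
		rte_exit(EXIT_FAILURE, "failed to create vhost vdev\n");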