The following diagram describes the functionality of the DPDK Xen Packet-Switching Solution.
-.. image35_png has been renamed
-|dpdk_xen_pkt_switch|
+.. _figure_dpdk_xen_pkt_switch:
+
+.. figure:: img/dpdk_xen_pkt_switch.*
+
+   Functionality of the DPDK Xen Packet Switching Solution.
+
Note 1: The Xen hypervisor uses a mechanism called a Grant Table to share memory between domains
(`http://wiki.xen.org/wiki/Grant Table <http://wiki.xen.org/wiki/Grant%20Table>`_).
A diagram of the design is shown below, where "gva" is the Guest Virtual Address,
which is the data pointer of the mbuf, and "hva" is the Host Virtual Address:
-.. image36_png has been renamed
-|grant_table|
+.. _figure_grant_table:
+
+.. figure:: img/grant_table.*
+
+   DPDK Xen Layout
+
In this design, a Virtio ring is used as a para-virtualized interface for better performance over a Xen private ring
when packet switching to and from a VM.
The actual Grant reference information is stored in this virtual address space,
where (gref, pfn) pairs follow one another, with -1 as the terminator.
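As a rough illustration of that layout, a host-side scan of the table could look like the following C sketch; the structure and function names are hypothetical, inferred from the description above rather than taken from the DPDK sources:

.. code-block:: c

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical layout: (gref, pfn) pairs stored back to back in the
     * shared virtual address space, terminated by an entry set to -1. */
    struct gref_pfn_pair {
        int32_t gref;   /* grant reference ID */
        int32_t pfn;    /* guest page frame number */
    };

    /* Count the valid pairs, stopping at the first pair whose gref is -1. */
    static size_t
    count_gref_pairs(const struct gref_pfn_pair *table)
    {
        size_t n = 0;

        while (table[n].gref != -1)
            n++;
        return n;
    }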
-.. image37_png has been renamed
-|grant_refs|
+.. _figure_grant_refs:
+
+.. figure:: img/grant_refs.*
+
+   Mapping Grant references to a continuous virtual address space
+
After all the gref IDs are retrieved, the host maps them to a continuous virtual address space.
Using the guest mempool's virtual address, the host then establishes a 1:1 address mapping.
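Once that mapping exists, translating a guest buffer pointer is plain offset arithmetic. A minimal sketch, assuming hypothetical gva_base/hva_base values recorded when the mapping was established:

.. code-block:: c

    #include <stdint.h>

    /* Hypothetical bases captured when the host mapped the guest mempool:
     * gva_base is the guest-side start of the mempool, hva_base the host
     * virtual address it was mapped to. */
    static uintptr_t gva_base;
    static uintptr_t hva_base;

    /* With a 1:1 mapping, an mbuf data pointer (a gva) translates to the
     * corresponding host virtual address by a constant offset. */
    static inline void *
    gva_to_hva(uintptr_t gva)
    {
        return (void *)(hva_base + (gva - gva_base));
    }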
Bind the NIC to the uio_pci_generic driver:

.. code-block:: console

    modprobe uio_pci_generic
-   python tools/dpdk_nic_bind.py -b uio_pci_generic 0000:09:00:00.0
+   python usertools/dpdk-devbind.py -b uio_pci_generic 0000:09:00.0

In this case, 0000:09:00.0 is the PCI address of the NIC controller.
Then run the switching backend:

.. code-block:: console

-   examples/vhost_xen/build/vhost-switch -c f -n 3 --xen-dom0 -- -p1
+   examples/vhost_xen/build/vhost-switch -l 0-3 -n 3 --xen-dom0 -- -p1
.. note::

    .. code-block:: console

-       ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 --vdev="eth_xenvirt0,mac=00:00:00:00:00:11"
+       ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 --vdev="net_xenvirt0,mac=00:00:00:00:00:11"
        testpmd>set fwd mac
        testpmd>start
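The same virtual device can be instantiated from any DPDK application by passing the identical EAL arguments to rte_eal_init(); a minimal sketch (error handling trimmed, and the resulting port is not used here):

.. code-block:: c

    #include <rte_eal.h>

    int
    main(void)
    {
        /* Same EAL options as the testpmd invocation above; the vdev
         * string is copied verbatim from this guide. */
        static char *eal_argv[] = {
            "app", "-l", "0-3", "-n", "4",
            "--vdev=net_xenvirt0,mac=00:00:00:00:00:11",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0)
            return -1;

        /* The xenvirt device is now visible as an ordinary ethdev port. */
        return 0;
    }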
Two xenvirt devices can be created in the same guest by passing two --vdev options:

.. code-block:: console

-   --vdev="eth_xenvirt0,mac=00:00:00:00:00:11" --vdev="eth_xenvirt1;mac=00:00:00:00:00:22"
+   --vdev="net_xenvirt0,mac=00:00:00:00:00:11" --vdev="net_xenvirt1,mac=00:00:00:00:00:22"
Usage Examples: Injecting a Packet Stream Using a Packet Generator
-------------------------------------------------------------------
Run TestPMD in the guest VM:

.. code-block:: console

-   ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 --vdev="eth_xenvirt0,mac=00:00:00:00:00:11" -- -i --eth-peer=0,00:00:00:00:00:22
+   ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 --vdev="net_xenvirt0,mac=00:00:00:00:00:11" -- -i --eth-peer=0,00:00:00:00:00:22
    testpmd> set fwd mac
    testpmd> start
Run TestPMD in guest VM1:

.. code-block:: console

-   ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 --vdev="eth_xenvirt0,mac=00:00:00:00:00:11" -- -i --eth-peer=0,00:00:00:00:00:22 -- -i
+   ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 --vdev="net_xenvirt0,mac=00:00:00:00:00:11" -- -i --eth-peer=0,00:00:00:00:00:22
Run TestPMD in guest VM2:
.. code-block:: console

-   ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 --vdev="eth_xenvirt0,mac=00:00:00:00:00:22" -- -i --eth-peer=0,00:00:00:00:00:33
+   ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 --vdev="net_xenvirt0,mac=00:00:00:00:00:22" -- -i --eth-peer=0,00:00:00:00:00:33
Configure a packet stream in the packet generator, and set the destination MAC address to 00:00:00:00:00:11 and VLAN to 1000.
The packets received on the Virtio port in guest VM1 are forwarded to Virtio in guest VM2 and then sent out to the wire.
The packet flow is:

packet generator -> Virtio in guest VM1 -> switching backend -> Virtio in guest VM2 -> switching backend -> wire
-
-.. |grant_table| image:: img/grant_table.*
-
-.. |grant_refs| image:: img/grant_refs.*
-
-.. |dpdk_xen_pkt_switch| image:: img/dpdk_xen_pkt_switch.*