..  BSD LICENSE

    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.

    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Kernel NIC Interface
====================

The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.

The benefits of using the DPDK KNI are:

* Faster than existing Linux TUN/TAP interfaces
  (by eliminating system calls and copy_to_user()/copy_from_user() operations).

* Allows management of DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.

* Allows an interface with the kernel network stack.

The components of an application using the DPDK Kernel NIC Interface are shown in :numref:`figure_kernel_nic_intf`.
.. _figure_kernel_nic_intf:

.. figure:: img/kernel_nic_intf.*

   Components of a DPDK KNI Application
The DPDK KNI Kernel Module
--------------------------

The KNI kernel loadable module provides support for two types of devices:

* A Miscellaneous device (/dev/kni) that:

  * Creates net devices (via ioctl calls).

  * For single kernel thread mode, maintains a kernel thread context shared by all KNI instances
    (simulating the RX side of the net driver).

  * For multiple kernel thread mode, maintains a kernel thread context for each KNI instance
    (simulating the RX side of the net driver).

* A Net device:

  * Net functionality provided by implementing several operations such as netdev_ops,
    header_ops, ethtool_ops that are defined by struct net_device,
    including support for DPDK mbufs and FIFOs.

  * The interface name is provided from userspace.

  * The MAC address can be the real NIC MAC address or random.
KNI Creation and Deletion
-------------------------

The KNI interfaces are created by a DPDK application dynamically.
The interface name and FIFO details are provided by the application through an ioctl call
using the rte_kni_device_info struct, which contains:

* The interface name.

* Physical addresses of the corresponding memzones for the relevant FIFOs.

* Mbuf mempool details, both physical and virtual (to calculate the offset for mbuf pointers).

Refer to rte_kni_common.h in the DPDK source code for more details.
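
The following is an abridged, illustrative sketch of that structure; the fields shown
and the direction comments are assumptions for orientation only, and the layout varies
between DPDK versions, so treat rte_kni_common.h as the authoritative definition:

.. code-block:: c

    /* Abridged, illustrative sketch of struct rte_kni_device_info;
     * see rte_kni_common.h for the authoritative definition. */
    struct rte_kni_device_info {
        char name[RTE_KNI_NAMESIZE]; /* interface name chosen in userspace */

        /* Physical addresses of the memzones backing the FIFOs. */
        phys_addr_t tx_phys;         /* packets from the kernel to the app */
        phys_addr_t rx_phys;         /* packets from the app to the kernel */
        phys_addr_t alloc_phys;      /* pre-allocated mbufs for kernel use */
        phys_addr_t free_phys;       /* mbufs to be freed by the app */

        /* Mempool details, used to translate mbuf pointers. */
        void *mbuf_va;
        phys_addr_t mbuf_phys;

        /* ... request/response FIFOs and further fields ... */
    };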
The physical addresses will be re-mapped into the kernel address space and stored in separate KNI contexts.

Once KNI interfaces are created, the KNI context information can be queried by calling the rte_kni_info_get() function.

The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, any KNI interfaces not explicitly deleted will be deleted on the release operation
of the miscellaneous device (when the DPDK application is closed).
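
From the application's point of view, creation and deletion go through the librte_kni API.
The following is a minimal sketch, not a complete example: error handling is omitted and
only a subset of the struct rte_kni_conf fields is filled in.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_kni.h>

    /* Minimal sketch: create one KNI interface for a given port. */
    static struct rte_kni *
    create_kni(struct rte_mempool *pktmbuf_pool, uint8_t port_id)
    {
        struct rte_kni_conf conf;

        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", (unsigned)port_id);
        conf.group_id = port_id;   /* group used for kernel thread affinity */
        conf.mbuf_size = 2048;     /* largest packet KNI will carry */

        /* NULL ops: no MTU/interface-state callbacks registered yet. */
        return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
    }

    /* On shutdown, the matching deletion call is rte_kni_release(kni). */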
DPDK mbuf Flow
--------------

To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
The kernel module will be aware of mbufs,
but all mbuf allocation and free operations will be handled by the DPDK application only.
:numref:`figure_pkt_flow_kni` shows a typical scenario with packets sent in both directions.
.. _figure_pkt_flow_kni:

.. figure:: img/pkt_flow_kni.*

   Packet Flow via mbufs in the DPDK KNI
Use Case: Ingress
-----------------

On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
This thread will enqueue the mbuf in the rx_q FIFO.
The KNI thread will poll all active KNI devices for the rx_q.
If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via netif_rx().
The dequeued mbuf must be freed, so the same pointer is sent back in the free_q FIFO.

The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
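
In application code this direction reduces to a short burst loop. The sketch below is
illustrative: PKT_BURST_SZ and the use of queue 0 are assumptions, and rte_kni_tx_burst()
is the library call that places mbufs on the rx_q FIFO.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_kni.h>

    #define PKT_BURST_SZ 32   /* assumed burst size for this sketch */

    /* Ingress sketch: move one burst of packets from the NIC to the kernel. */
    static void
    nic_to_kernel(uint8_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST_SZ];
        unsigned nb_rx, nb_tx, i;

        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST_SZ);
        /* Enqueue onto the rx_q FIFO; pointers coming back on free_q
         * are freed internally by the KNI library. */
        nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);
        /* Drop whatever the FIFO could not accept. */
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }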
Use Case: Egress
----------------

For packet egress the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.

The packet is received from the Linux net stack, by calling the kni_net_tx() callback.
The mbuf is dequeued (without waiting, thanks to the cache) and filled with data from the sk_buff.
The sk_buff is then freed and the mbuf sent in the tx_q FIFO.

The DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()).
It then puts the mbuf back in the cache.
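
The application side of this direction is symmetrical. Again a hedged sketch, reusing the
assumptions from the ingress example: rte_kni_rx_burst() dequeues the tx_q FIFO and also
replenishes the kernel-side mbuf cache from the mempool.

.. code-block:: c

    /* Egress sketch: move one burst of packets from the kernel to the NIC.
     * rte_kni_rx_burst() also re-fills the kernel-side mbuf cache. */
    static void
    kernel_to_nic(uint8_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST_SZ];
        unsigned nb_rx, nb_tx, i;

        nb_rx = rte_kni_rx_burst(kni, pkts, PKT_BURST_SZ);
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
        /* Free any packets the NIC could not accept. */
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }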
Ethtool
-------

Ethtool is a Linux-specific tool with corresponding support in the kernel,
where each net device must register its own callbacks for the supported operations.
The current implementation uses the modified igb/ixgbe Linux drivers for ethtool support.
Ethtool is not supported in i40e and VMs (VF or EM devices).
Link state and MTU change
-------------------------

Link state and MTU change are network interface specific operations usually done via ifconfig.
The request is initiated from the kernel side (in the context of the ifconfig process)
and handled by the userspace DPDK application.
The application polls the request, calls the application handler and returns the response back to the kernel space.

The application handlers can be registered upon interface creation or explicitly registered/unregistered at runtime.
This provides flexibility in multiprocess scenarios
(where the KNI is created in the primary process but the callbacks are handled in the secondary one).
The constraint is that only a single process can register and handle the requests.
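
A hedged sketch of such handlers is shown below. The callback signatures follow
struct rte_kni_ops in rte_kni.h, while the handler bodies are placeholders: a real
application would reconfigure the port instead of just printing.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_kni.h>

    /* Placeholder MTU-change handler. */
    static int
    kni_change_mtu(uint8_t port_id, unsigned new_mtu)
    {
        printf("Port %u: MTU change request to %u\n", (unsigned)port_id, new_mtu);
        return 0;   /* 0 reports success back to the kernel */
    }

    /* Placeholder link up/down handler. */
    static int
    kni_config_network_if(uint8_t port_id, uint8_t if_up)
    {
        printf("Port %u: interface %s request\n",
               (unsigned)port_id, if_up ? "up" : "down");
        return 0;
    }

    /* Fill the ops struct passed to rte_kni_alloc() at creation time. */
    static void
    kni_ops_init(struct rte_kni_ops *ops, uint8_t port_id)
    {
        memset(ops, 0, sizeof(*ops));
        ops->port_id = port_id;
        ops->change_mtu = kni_change_mtu;
        ops->config_network_if = kni_config_network_if;
    }

    /* The main loop must call rte_kni_handle_request(kni) periodically
     * so that pending requests from the kernel are serviced. */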
KNI Working as a Kernel vHost Backend
-------------------------------------

vHost is a kernel module usually working as the backend of virtio (a para-virtualization driver framework)
to accelerate the traffic from the guest to the host.
The DPDK Kernel NIC Interface provides the ability to hook up vHost traffic to a userspace DPDK application.
Together with the DPDK PMD virtio, it significantly improves the throughput between guest and host.
In the scenario where DPDK is running as the fast path in the host, kni-vhost is an efficient path for the traffic.
Overview
~~~~~~~~

vHost-net has three kinds of real backend implementations. They are: 1) tap, 2) macvtap and 3) RAW socket.
The main idea behind kni-vhost is making the KNI work as a RAW socket, attaching it as the backend instance of vHost-net.
It uses the existing interface with vHost-net, so it does not require any kernel hacking,
and is fully compatible with the kernel vhost module.
As vHost is still taking responsibility for communicating with the front-end virtio,
it naturally supports both legacy virtio-net and the DPDK PMD virtio.
There is a little penalty that comes from the non-polling mode of vhost.
However, it scales throughput well when using KNI in multi-thread mode.
.. _figure_vhost_net_arch2:

.. figure:: img/vhost_net_arch.*

   vHost-net Architecture Overview
Packet Flow
~~~~~~~~~~~

There is only a minor difference from the original KNI traffic flows.
On the transmit side, the vhost kthread calls the raw socket's sendmsg operation to put the packets into the KNI transmit FIFO.
On the receive side, the KNI kthread gets packets from the KNI receive FIFO, puts them into the queue of the raw socket,
and wakes up the task in the vhost kthread to begin receiving.
All the packet copying, irrespective of whether it is on the transmit or receive side,
happens in the context of the vhost kthread.
Every vhost-net device is exposed to a front-end virtio device in the guest.
.. _figure_kni_traffic_flow:

.. figure:: img/kni_traffic_flow.*

   KNI Traffic Flow

Sample Usage
~~~~~~~~~~~~

Before starting to use KNI as the backend of vhost, the CONFIG_RTE_KNI_VHOST configuration option must be turned on.
Otherwise, by default, KNI will not enable its backend support capability.

Of course, as a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before compiling the kernel.
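
For example, assuming the common Linux build configuration file config/common_linuxapp is
where this option lives in your DPDK version, set the option before building DPDK:

.. code-block:: console

    # In config/common_linuxapp, before building DPDK:
    CONFIG_RTE_KNI_VHOST=y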
#. Compile the DPDK and insert the uio_pci_generic/igb_uio kernel modules as normal.

#. Insert the KNI kernel module:

   .. code-block:: console

      insmod ./rte_kni.ko

   If using KNI in multi-thread mode, use the following command line:

   .. code-block:: console

      insmod ./rte_kni.ko kthread_mode=multiple

#. Run the KNI sample application:

   .. code-block:: console

      ./kni -c 0xf0 -n 4 -- -p 0x3 -P --config="(0,4,6),(1,5,7)"

   This command runs the kni sample application with two physical ports.
   Each port pins two forwarding cores (ingress/egress) in user space.
#. Assign a raw socket to vhost-net during qemu-kvm startup.
   The DPDK does not provide a script to do this since it is easy for the user to customize.
   The following shows the key steps to launch qemu-kvm with kni-vhost:

   .. code-block:: console

      echo 1 > /sys/class/net/vEth0/sock_en
      fd=`cat /sys/class/net/vEth0/sock_fd`
      qemu-kvm \
      -name vm1 -cpu host -m 2048 -smp 1 -hda /opt/vm-fc16.img \
      -netdev tap,fd=$fd,id=hostnet1,vhost=on \
      -device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

   It is simple to enable the raw socket using sysfs sock_en and to get the raw socket fd using sock_fd under the KNI device node.
   Then use the qemu-kvm command with the -netdev option to assign the raw socket fd as vhost's backend.

   .. note::

       The keyword tap must remain, as qemu-kvm currently supports vhost only with a tap backend;
       here we satisfy that check by passing an already-opened fd.
Compatibility Configure Option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is a CONFIG_RTE_KNI_VHOST_VNET_HDR_EN configuration option in the DPDK configuration file.
By default, it is set to n, which means the virtio net header is not enabled.
The header is used to support additional features (such as checksum offload, VLAN offload, generic segmentation offload and so on),
but kni-vhost does not yet support those features.

Even if the option is turned on, kni-vhost will ignore the information that the header contains.
When working with legacy virtio on the guest, it is better to turn off unsupported offload features using ethtool -K.
Otherwise, there may be problems such as an incorrect L4 checksum error.
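
For example, inside the guest (the interface name eth0 and the exact feature list are
assumptions here; the available feature names depend on the guest's ethtool version and driver):

.. code-block:: console

    # Disable offloads that kni-vhost does not support, on the legacy
    # virtio interface in the guest.
    ethtool -K eth0 rx off tx off sg off tso off gso off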