.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2019-2020 Intel Corporation.

AF_XDP Poll Mode Driver
=======================
AF_XDP is an address family that is optimized for high performance
packet processing. AF_XDP sockets enable an XDP program to redirect
packets to a memory buffer in userspace.
For full details on AF_XDP sockets, refer to the
`AF_XDP documentation in the Kernel
<https://www.kernel.org/doc/Documentation/networking/af_xdp.rst>`_.
This Linux-specific PMD creates the AF_XDP socket and binds it to a
specific netdev queue. It allows a DPDK application to send and receive
raw packets through the socket, bypassing the kernel network stack.
The current implementation only supports a single queue; multi-queue
support will be added later.
The AF_XDP PMD enables the need_wakeup flag by default if it is supported.
The need_wakeup feature allows the application and the driver to run
efficiently on the same core. It not only has a large positive performance
impact for the single-core case, but also does not degrade two-core
performance and actually improves it for Tx-heavy workloads.
Options
-------

The following options can be provided to set up an af_xdp port in DPDK:

* ``iface`` - name of the Kernel interface to attach to (required);
* ``start_queue`` - starting netdev queue id (optional, default 0);
* ``queue_count`` - total netdev queue number (optional, default 1);
* ``shared_umem`` - PMD will attempt to share UMEM with others (optional,
  default 0);
* ``xdp_prog`` - path to custom xdp program (optional, default none);
* ``busy_budget`` - busy polling budget (optional, default 64);
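
Several of the options above can be combined in a single vdev string. For
example, the following fragment (the interface name is illustrative) attaches
to four queues of ens786f1 starting at queue 0, with a larger busy polling
budget:

.. code-block:: console

   --vdev net_af_xdp,iface=ens786f1,start_queue=0,queue_count=4,busy_budget=256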
Prerequisites
-------------

This is a Linux-specific PMD, thus the following prerequisites apply:

* A Linux Kernel (version > v4.18) with XDP sockets configuration enabled;
* Both libxdp (>=v1.2.2) and libbpf libraries installed, or libbpf <=v0.6.0;
* If using libxdp, the environment variable ``LIBXDP_OBJECT_PATH`` must be
  set to the location where libxdp placed its bpf object files. This is
  usually ``/usr/local/lib/bpf`` or ``/usr/local/lib64/bpf``;
* A Kernel bound interface to attach to;
* For the need_wakeup feature, a kernel version later than v5.3-rc1 is required;
* For PMD zero copy, a kernel version later than v5.4-rc1 is required;
* For shared_umem, kernel version v5.10 or later and libbpf version v0.2.0
  or later are required;
* For 32-bit OS, kernel version v5.4 or later is required;
* For busy polling, kernel version v5.11 or later is required.
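
The kernel-version requirements above can be checked from a shell before
launching the application. A minimal sketch (the 5.4 minimum shown is the
zero-copy threshold from the list above; adjust for the feature you need):

.. code-block:: shell

   # version_ge MIN CUR: succeeds if CUR >= MIN (sort -V orders version strings)
   version_ge() {
       [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
   }

   # Compare the running kernel (numeric part only) against the minimum.
   if version_ge 5.4 "$(uname -r | cut -d- -f1)"; then
       echo "kernel is new enough for PMD zero copy"
   fi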
Set up an af_xdp interface
--------------------------

The following example will set up an af_xdp interface in DPDK:
.. code-block:: console

   --vdev net_af_xdp,iface=ens786f1
If 'start_queue' is not specified in the vdev arguments,
the socket will by default be created on Rx queue 0.
To ensure traffic lands on this queue,
one can use flow steering if the network card supports it.
A simpler alternative is to reduce the number of configured queues
for the device, which ensures that all traffic lands on queue 0
and thus reaches the socket:
.. code-block:: console

   ethtool -L ens786f1 combined 1
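
The resulting channel configuration can be verified with the lowercase ``-l``
option of ethtool, which prints the current and maximum channel counts for
the interface:

.. code-block:: console

   ethtool -l ens786f1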
Limitations
-----------

- **MTU**

The MTU of the AF_XDP PMD is limited due to the XDP requirement of one packet
per page. In the PMD we report the maximum MTU for zero copy to be equal
to the page size less the frame overhead introduced by AF_XDP (XDP HR = 256)
and DPDK (frame headroom = 320). With a 4K page size this works out at 3520.
However in practice this value may be even smaller, due to differences between
the supported RX buffer sizes of the underlying kernel netdev driver.
For example, the largest RX buffer size supported by the underlying kernel
driver which is less than the page size (4096B) may be 3072B. In this case,
the maximum MTU value will be at most 3072, but likely even smaller once
relevant headers are accounted for, e.g. Ethernet and VLAN.
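
The arithmetic behind the 3520 figure above can be sketched as follows,
assuming a 4K page size:

.. code-block:: shell

   PAGE_SIZE=4096     # one packet per page (XDP requirement)
   XDP_HEADROOM=256   # frame overhead introduced by AF_XDP (XDP HR)
   PMD_HEADROOM=320   # frame headroom introduced by DPDK
   echo $((PAGE_SIZE - XDP_HEADROOM - PMD_HEADROOM))   # prints 3520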
To determine the actual maximum MTU value of the interface you are using with
the AF_XDP PMD, consult the documentation for the kernel driver.

Note: The AF_XDP PMD will fail to initialise if an MTU which violates the
driver's conditions as above is set prior to launching the application.
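
If a smaller MTU is needed to satisfy the driver's conditions, it can be set
on the interface before launching the application, for example with the
standard iproute2 tool (the interface name and value here are illustrative):

.. code-block:: console

   ip link set dev ens786f1 mtu 3520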
- **Shared UMEM**

The sharing of UMEM is only supported for AF_XDP sockets with unique contexts.
The context refers to the netdev,qid tuple.

The following combination will fail:
.. code-block:: console

   --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
   --vdev net_af_xdp1,iface=ens786f1,shared_umem=1 \
Either of the following however is permitted since either the netdev or qid
differs between the two vdevs:

.. code-block:: console

   --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
   --vdev net_af_xdp1,iface=ens786f1,start_queue=1,shared_umem=1 \

.. code-block:: console

   --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
   --vdev net_af_xdp1,iface=ens786f2,shared_umem=1 \
- **Preferred Busy Polling**

The SO_PREFER_BUSY_POLL socket option was introduced in kernel v5.11. It can
deliver a performance improvement for sockets with heavy traffic loads and
can significantly improve single-core performance in this context.
The feature is enabled by default in the AF_XDP PMD. To disable it, set the
'busy_budget' vdevarg to zero:

.. code-block:: console

   --vdev net_af_xdp0,iface=ens786f1,busy_budget=0
The default 'busy_budget' is 64 and it represents the number of packets the
kernel will attempt to process in the netdev's NAPI context. You can change
the value, for example to 256, like so:

.. code-block:: console

   --vdev net_af_xdp0,iface=ens786f1,busy_budget=256
It is also strongly recommended to set the following for optimal performance:

.. code-block:: console

   echo 2 | sudo tee /sys/class/net/ens786f1/napi_defer_hard_irqs
   echo 200000 | sudo tee /sys/class/net/ens786f1/gro_flush_timeout
The above defers interrupts for interface ens786f1 and schedules its NAPI
context from a watchdog timer instead of from softirqs. More information
on this feature can be found at [1].
- **Secondary Processes**

Rx and Tx are not supported for secondary processes because the memory
mapping of the AF_XDP rings is assigned by the kernel in the primary process
only. However, other operations, including statistics retrieval, are
permitted. The maximum number of queues permitted for PMDs operating in this
model is 8, as this is the maximum number of fds that can be sent through
the IPC APIs as defined by RTE_MP_MAX_FD_NUM.
- **libbpf logs**

When using the default program (i.e. when the vdev arg 'xdp_prog' is not
used), the following logs will appear when an application is launched:

.. code-block:: console

   libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
   libbpf: elf: skipping unrecognized data section(8) xdp_metadata

These logs are not errors and can be ignored.
[1] https://lwn.net/Articles/837010/