..  BSD LICENSE

    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.

    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.
Overview
--------

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

   QoS Scheduler Application Architecture
There are two flavors of run-time execution for this application,
using either two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies them based on the double VLAN (outer and inner tags) and
the lower two bytes of the IP destination address, and puts them into a ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring;
otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
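The classification step can be sketched as follows. This is an illustrative Python model, not the application's actual C code, and the exact bit masks are assumptions:

```python
def classify(outer_vlan, inner_vlan, ip_dst):
    """Map packet fields to (subport, pipe, traffic class, queue).

    Assumed mapping: the outer VLAN tag selects the subport, the inner
    VLAN tag selects the pipe, and the lower two bytes of the IPv4
    destination address select the traffic class and queue (the masks
    below are illustrative assumptions, not the application's code).
    """
    subport = outer_vlan & 0x0007        # up to 8 subports per port
    pipe = inner_vlan & 0x0FFF           # up to 4096 pipes per subport
    tc = (ip_dst >> 8) & 0x03            # 4 traffic classes (byte 0.0.X.0)
    queue = ip_dst & 0x03                # 4 queues per TC (byte 0.0.0.X)
    return subport, pipe, tc, queue
```

For example, a packet with outer VLAN 5, inner VLAN 100 and destination address 0.0.1.2 would land in subport 5, pipe 100, traffic class 1, queue 2.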
Compiling the Application
-------------------------

To compile the application:

#.  Go to the sample application directory:

    .. code-block:: console

        export RTE_SDK=/path/to/rte_sdk
        cd ${RTE_SDK}/examples/qos_sched

#.  Set the target (a default target is used if not specified). For example:

    .. note::

        This application is intended as a linuxapp only.

    .. code-block:: console

        export RTE_TARGET=x86_64-native-linuxapp-gcc

#.  Build the application:

    .. code-block:: console

        make
.. note::

    To get statistics on the sample application using the command line interface as described in the next section,
    DPDK must be compiled defining *CONFIG_RTE_SCHED_COLLECT_STATS*,
    which can be done by changing the configuration file for the specific target to be compiled.
Running the Application
-----------------------

.. note::

    In order to run the application, a total of at least 4 GB of huge pages
    must be set up for each of the used sockets (depending on the cores in use).

The application has a number of command line options:

.. code-block:: console

    ./qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:
*   --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
    Multiple pfc entities can be configured on the command line,
    each with four or five items (depending on whether a TX core is defined).
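A sketch of how such an argument could be parsed (a hypothetical Python helper, not the application's code):

```python
def parse_pfc(arg):
    """Parse one --pfc value: "RX PORT, TX PORT, RX LCORE, WT LCORE[, TX LCORE]".

    Four items mean the worker lcore also transmits; a fifth item
    dedicates a separate TX lcore. Illustrative sketch only.
    """
    items = [int(x) for x in arg.split(",")]
    if len(items) not in (4, 5):
        raise ValueError("--pfc needs 4 or 5 comma-separated values")
    rx_port, tx_port, rx_lcore, wt_lcore = items[:4]
    tx_lcore = items[4] if len(items) == 5 else None
    return rx_port, tx_port, rx_lcore, wt_lcore, tx_lcore
```

For example, `parse_pfc("3,2,5,7")` yields ports 3/2 with RX lcore 5 and worker lcore 7, and no dedicated TX lcore.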
Optional application parameters include:

*   -i: Makes the application start in interactive mode.
    In this mode, the application shows a command line that can be used for obtaining statistics while
    scheduling is taking place (see interactive mode below for more information).

*   --mst n: Master core index (the default value is 1).
*   --rsz "A, B, C": Ring sizes:

    *   A = Size (in number of buffer descriptors) of each of the NIC RX rings read
        by the I/O RX lcores (the default value is 128).

    *   B = Size (in number of elements) of each of the software rings used
        by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

    *   C = Size (in number of buffer descriptors) of each of the NIC TX rings written
        by worker lcores (the default value is 256).

*   --bsz "A, B, C, D": Burst sizes:

    *   A = I/O RX lcore read burst size from the NIC RX (the default value is 64).

    *   B = I/O RX lcore write burst size to the output software rings,
        worker lcore read burst size from input software rings, QoS enqueue size (the default value is 64).

    *   C = QoS dequeue size (the default value is 32).

    *   D = Worker lcore write burst size to the NIC TX (the default value is 64).
*   --msz M: Mempool size (in number of mbufs) for each pfc (the default value is 2097152).

*   --rth "A, B, C": RX queue threshold parameters:

    *   A = RX prefetch threshold (the default value is 8).

    *   B = RX host threshold (the default value is 8).

    *   C = RX write-back threshold (the default value is 4).

*   --tth "A, B, C": TX queue threshold parameters:

    *   A = TX prefetch threshold (the default value is 36).

    *   B = TX host threshold (the default value is 0).

    *   C = TX write-back threshold (the default value is 0).

*   --cfg FILE: Profile configuration to load.

Refer to the *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.
The profile file has the following format::

    ; port configuration
    [port]
    number of subports per port = 1
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64

    ; Subport configuration
    [subport 0]
    tb rate = 1250000000            ; Bytes per second
    tb size = 1000000               ; Bytes
    tc 0 rate = 1250000000          ; Bytes per second
    tc 1 rate = 1250000000          ; Bytes per second
    tc 2 rate = 1250000000          ; Bytes per second
    tc 3 rate = 1250000000          ; Bytes per second
    tc period = 10                  ; Milliseconds
    tc oversubscription period = 10 ; Milliseconds

    pipe 0-4095 = 0                 ; These pipes are configured with pipe profile 0

    ; Pipe configuration
    [pipe profile 0]
    tb rate = 305175                ; Bytes per second
    tb size = 1000000               ; Bytes

    tc 0 rate = 305175              ; Bytes per second
    tc 1 rate = 305175              ; Bytes per second
    tc 2 rate = 305175              ; Bytes per second
    tc 3 rate = 305175              ; Bytes per second
    tc period = 40                  ; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1

    tc 0 wrr weights = 1 1 1 1
    tc 1 wrr weights = 1 1 1 1
    tc 2 wrr weights = 1 1 1 1
    tc 3 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)
    [red]
    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9
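The figures in the sample profile are internally consistent; a quick check (illustrative Python, assuming a 10 Gbps port shared evenly among the 4096 pipes):

```python
# Subport token bucket: 1,250,000,000 bytes/s is the byte rate of a
# 10 Gbps port (10^10 bits/s divided by 8 bits per byte).
PORT_RATE_BPS = 10_000_000_000
subport_tb_rate = PORT_RATE_BPS // 8          # bytes per second

# Pipe token bucket: the subport rate divided evenly among 4096 pipes
# (integer division reproduces the 305175 bytes/s in the profile).
pipes_per_subport = 4096
pipe_tb_rate = subport_tb_rate // pipes_per_subport

# WRED: between "wred min" and "wred max" the drop probability ramps up
# to 1 / (wred inv prob); "wred weight" w filters the queue size with an
# EWMA weight of 2**-w.
max_drop_prob = 1 / 10
ewma_weight = 2 ** -9
```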
Interactive mode
~~~~~~~~~~~~~~~~

These are the commands that are currently working under the command line interface:

*   Control Commands

    *   --quit: Quits the application.

*   General Statistics

    *   stats app: Shows a table with in-app calculated statistics.

    *   stats port X subport Y: For a specific subport, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        The information is displayed in a table separated into different traffic classes.

    *   stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        This information is displayed in a table separated into individual queues.

*   Average queue size

All of these commands work the same way, averaging the number of packets across a specific subset of queues.

Two parameters can be configured for this prior to calling any of these commands:

*   qavg n X: X is the number of times that the calculation will take place.
    Bigger numbers provide higher accuracy. The default value is 10.

*   qavg period X: X is the number of microseconds that will be allowed between each calculation.
    The default value is 100.
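In other words, each averaging command takes X samples of queue occupancy, one every `period` microseconds, and reports the mean. A minimal sketch, where `read_qsize` is a hypothetical stand-in for the application's internal queue-size counter:

```python
import time

def qavg(read_qsize, n=10, period_us=100):
    """Average queue size over n samples taken period_us apart."""
    total = 0
    for _ in range(n):
        total += read_qsize()
        time.sleep(period_us / 1_000_000)
    return total / n
```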
The commands that can be used for measuring average queue size are:

*   qavg port X subport Y: Show average queue size per subport.

*   qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class.

*   qavg port X subport Y pipe Z: Show average queue size per pipe.

*   qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic class.

*   qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
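Each variant averages over a different slice of the hierarchy; with 4 traffic classes of 4 queues each and, as in the sample profile, 4096 pipes per subport, the slice sizes work out as follows (illustrative Python):

```python
PIPES, TCS, QUEUES = 4096, 4, 4   # assumed hierarchy dimensions

def queues_covered(pipe=None, tc=None, queue=None):
    """Number of queues a given qavg command averages over."""
    n = 1 if queue is not None else QUEUES
    if tc is None:
        n *= TCS
    if pipe is None:
        n *= PIPES
    return n
```

So `qavg port X subport Y` averages 65,536 queues, `... pipe Z` averages 16, and `... q B` reads a single queue.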
Example
~~~~~~~

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./qos_sched -c a2 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration, which creates one RX thread on lcore 5 reading
from port 3 and a worker thread on lcore 7 writing to port 2.

Another example, with two packet flow configurations using different ports but sharing the same core for the QoS scheduler, is given below:

.. code-block:: console

    ./qos_sched -c c6 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for the packet flow configurations for each of the RX, WT and TX threads are also supported,
providing flexibility to balance the work.

The EAL coremask is constrained to contain the default master core 1 and the RX, WT and TX cores only.
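The -c coremask in these examples is a hexadecimal bitmask of enabled lcores; decoding it confirms the constraint above (illustrative Python):

```python
def decode_coremask(mask):
    """Return the lcore ids enabled in an EAL -c coremask."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

# 0xa2 -> lcores 1, 5, 7: master core 1 plus RX lcore 5 and WT lcore 7
# of the first example; 0xc6 -> lcores 1, 2, 6, 7 for the second.
```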
Explanation
-----------

The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS application:

*   A subport represents a predefined group of users.

*   A pipe represents an individual user/subscriber.

*   A traffic class is the representation of a different traffic type with specific loss rate,
    delay and jitter requirements, such as voice, video or data transfers.

*   A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address as indicated in the following table.
.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 4                       | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.X.0) |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | 4                       | Queues of the same TC serviced in WRR            | Destination IP address (0.0.0.X) |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
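The table's hierarchy multiplies out to the total number of queues managed per port. One common way to flatten the four-level coordinates into a single index is sketched below (an illustrative assumption, not necessarily the library's internal layout):

```python
SUBPORTS, PIPES, TCS, QUEUES = 8, 4096, 4, 4   # dimensions from the table

def flat_queue_index(subport, pipe, tc, queue):
    """Flatten (subport, pipe, tc, queue) into one per-port queue index."""
    return ((subport * PIPES + pipe) * TCS + tc) * QUEUES + queue

TOTAL_QUEUES = SUBPORTS * PIPES * TCS * QUEUES  # 524,288 queues per port
```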
Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.