..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

   QoS Scheduler Application Architecture

There are two flavors of the runtime execution for this application,
using two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies them based on the double VLAN (outer and inner) and
the lower byte of the IP destination address, and puts them into a ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring.
Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
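
The classification step can be pictured with the following C sketch.
It is only an illustration of the mapping described above: the function and structure
names, and the exact bit split of the destination address byte, are assumptions and do
not come from the sample application's source code.

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    /* Hypothetical holder for the scheduler hierarchy fields. */
    struct pkt_class {
        uint32_t subport;
        uint32_t pipe;
        uint32_t traffic_class;
        uint32_t queue;
    };

    static inline void
    classify_pkt(struct rte_mbuf *m, struct pkt_class *c)
    {
        /* QinQ frame: Ethernet header followed by two VLAN headers. */
        const struct rte_ether_hdr *eth =
            rte_pktmbuf_mtod(m, const struct rte_ether_hdr *);
        const struct rte_vlan_hdr *outer =
            (const struct rte_vlan_hdr *)(eth + 1);
        const struct rte_vlan_hdr *inner = outer + 1;
        const struct rte_ipv4_hdr *ip =
            (const struct rte_ipv4_hdr *)(inner + 1);

        uint8_t dst_lsb = rte_be_to_cpu_32(ip->dst_addr) & 0xff;

        /* Outer VLAN ID selects the subport, inner VLAN ID selects the pipe. */
        c->subport = rte_be_to_cpu_16(outer->vlan_tci) & 0x0fff;
        c->pipe = rte_be_to_cpu_16(inner->vlan_tci) & 0x0fff;

        /* The low byte of the destination IP selects the traffic class and
         * queue; this particular split is only illustrative. */
        c->traffic_class = dst_lsb & 0x0f;
        if (c->traffic_class > 12)
            c->traffic_class = 12;          /* 13 traffic classes: 0..12 */
        c->queue = (dst_lsb >> 4) & 0x03;   /* 4 best-effort queues: 0..3 */
    }

In the real application the resulting fields are written into the mbuf
(for example with ``rte_sched_port_pkt_write()``) before the QoS enqueue call.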

Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the ``qos_sched`` sub-directory.

This application is intended to run on Linux only.

To get statistics on the sample app using the command line interface as described in the next section,
DPDK must be compiled with *CONFIG_RTE_SCHED_COLLECT_STATS* defined,
which can be done by changing the configuration file for the specific target to be compiled.
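
As a hedged example, assuming the legacy make-based build system where compile-time
options live in ``config/common_base``, the option could be enabled before rebuilding
as follows (the path and option spelling may differ between DPDK versions):

.. code-block:: console

    sed -i 's/CONFIG_RTE_SCHED_COLLECT_STATS=n/CONFIG_RTE_SCHED_COLLECT_STATS=y/' config/common_base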

Running the Application
-----------------------

In order to run the application, a total of at least 4 GB of hugepages must be set up
for each of the sockets used (depending on the cores in use).

The application has a number of command line options:

.. code-block:: console

    ./qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:

* --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
  Multiple pfc entities can be configured on the command line,
  each having 4 or 5 items (depending on whether a TX core is defined or not).

Optional application parameters include (a combined example command is shown after this list):

* -i: Makes the application start in interactive mode.
  In this mode, the application provides a command line that can be used to obtain statistics while
  scheduling is taking place (see interactive mode below for more information).

* --mst n: Master core index (the default value is 1).

* --rsz "A, B, C": Ring sizes:

  * A = Size (in number of buffer descriptors) of each of the NIC RX rings read
    by the I/O RX lcores (the default value is 128).

  * B = Size (in number of elements) of each of the software rings used
    by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

  * C = Size (in number of buffer descriptors) of each of the NIC TX rings written
    by worker lcores (the default value is 256).

* --bsz "A, B, C, D": Burst sizes:

  * A = I/O RX lcore read burst size from the NIC RX (the default value is 64).

  * B = I/O RX lcore write burst size to the output software rings,
    worker lcore read burst size from the input software rings, and QoS enqueue size (the default value is 64).

  * C = QoS dequeue size (the default value is 32).

  * D = Worker lcore write burst size to the NIC TX (the default value is 64).

* --msz M: Mempool size (in number of mbufs) for each pfc (default 2097152).

* --rth "A, B, C": The RX queue threshold parameters:

  * A = RX prefetch threshold (the default value is 8).

  * B = RX host threshold (the default value is 8).

  * C = RX write-back threshold (the default value is 4).

* --tth "A, B, C": TX queue threshold parameters:

  * A = TX prefetch threshold (the default value is 36).

  * B = TX host threshold (the default value is 0).

  * C = TX write-back threshold (the default value is 0).

* --cfg FILE: Profile configuration to load.
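
For illustration, the optional parameters above could be combined on a single command line
as shown below; the values are simply the documented defaults made explicit, and the port
and lcore numbers are arbitrary:

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- --pfc "0,1,5,7" \
            --rsz "128, 8192, 256" --bsz "64, 64, 32, 64" \
            --rth "8, 8, 4" --tth "36, 0, 0" \
            --msz 2097152 --cfg ./profile.cfg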

Refer to *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.

The profile file has the following format::

    ; port configuration [port]

    number of subports per port = 1

    ; Subport configuration

    number of pipes per subport = 4096
    queue sizes = 64 64 64 64 64 64 64 64 64 64 64 64 64
    tb rate = 1250000000; Bytes per second
    tb size = 1000000; Bytes
    tc 0 rate = 1250000000; Bytes per second
    tc 1 rate = 1250000000; Bytes per second
    tc 2 rate = 1250000000; Bytes per second
    tc 3 rate = 1250000000; Bytes per second
    tc 4 rate = 1250000000; Bytes per second
    tc 5 rate = 1250000000; Bytes per second
    tc 6 rate = 1250000000; Bytes per second
    tc 7 rate = 1250000000; Bytes per second
    tc 8 rate = 1250000000; Bytes per second
    tc 9 rate = 1250000000; Bytes per second
    tc 10 rate = 1250000000; Bytes per second
    tc 11 rate = 1250000000; Bytes per second
    tc 12 rate = 1250000000; Bytes per second

    tc period = 10; Milliseconds
    tc oversubscription period = 10; Milliseconds

    pipe 0-4095 = 0; These pipes are configured with pipe profile 0

    tb rate = 305175; Bytes per second
    tb size = 1000000; Bytes

    tc 0 rate = 305175; Bytes per second
    tc 1 rate = 305175; Bytes per second
    tc 2 rate = 305175; Bytes per second
    tc 3 rate = 305175; Bytes per second
    tc 4 rate = 305175; Bytes per second
    tc 5 rate = 305175; Bytes per second
    tc 6 rate = 305175; Bytes per second
    tc 7 rate = 305175; Bytes per second
    tc 8 rate = 305175; Bytes per second
    tc 9 rate = 305175; Bytes per second
    tc 10 rate = 305175; Bytes per second
    tc 11 rate = 305175; Bytes per second
    tc 12 rate = 305175; Bytes per second
    tc period = 40; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1
    tc 4 oversubscription weight = 1
    tc 5 oversubscription weight = 1
    tc 6 oversubscription weight = 1
    tc 7 oversubscription weight = 1
    tc 8 oversubscription weight = 1
    tc 9 oversubscription weight = 1
    tc 10 oversubscription weight = 1
    tc 11 oversubscription weight = 1
    tc 12 oversubscription weight = 1

    tc 12 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)

    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9

    tc 4 wred min = 48 40 32
    tc 4 wred max = 64 64 64
    tc 4 wred inv prob = 10 10 10
    tc 4 wred weight = 9 9 9

    tc 5 wred min = 48 40 32
    tc 5 wred max = 64 64 64
    tc 5 wred inv prob = 10 10 10
    tc 5 wred weight = 9 9 9

    tc 6 wred min = 48 40 32
    tc 6 wred max = 64 64 64
    tc 6 wred inv prob = 10 10 10
    tc 6 wred weight = 9 9 9

    tc 7 wred min = 48 40 32
    tc 7 wred max = 64 64 64
    tc 7 wred inv prob = 10 10 10
    tc 7 wred weight = 9 9 9

    tc 8 wred min = 48 40 32
    tc 8 wred max = 64 64 64
    tc 8 wred inv prob = 10 10 10
    tc 8 wred weight = 9 9 9

    tc 9 wred min = 48 40 32
    tc 9 wred max = 64 64 64
    tc 9 wred inv prob = 10 10 10
    tc 9 wred weight = 9 9 9

    tc 10 wred min = 48 40 32
    tc 10 wred max = 64 64 64
    tc 10 wred inv prob = 10 10 10
    tc 10 wred weight = 9 9 9

    tc 11 wred min = 48 40 32
    tc 11 wred max = 64 64 64
    tc 11 wred inv prob = 10 10 10
    tc 11 wred weight = 9 9 9

    tc 12 wred min = 48 40 32
    tc 12 wred max = 64 64 64
    tc 12 wred inv prob = 10 10 10
    tc 12 wred weight = 9 9 9
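
To relate the example profile to the DPDK API, the sketch below shows how the pipe profile
values above might be expressed as an ``rte_sched_pipe_params`` initializer. The field names
follow recent DPDK releases and are given only as a hedged illustration; check ``rte_sched.h``
for the exact structure in the version you build against.

.. code-block:: c

    #include <rte_sched.h>

    /* Pipe profile 0 from the example file, as an rte_sched_pipe_params
     * initializer (field names per recent DPDK releases). */
    static struct rte_sched_pipe_params pipe_profile_0 = {
        .tb_rate = 305175,      /* token bucket rate, bytes per second */
        .tb_size = 1000000,     /* token bucket size, bytes */
        .tc_rate = { 305175, 305175, 305175, 305175, 305175, 305175, 305175,
                     305175, 305175, 305175, 305175, 305175, 305175 },
        .tc_period = 40,        /* traffic class period, milliseconds */
        .tc_ov_weight = 1,      /* best-effort TC oversubscription weight */
        .wrr_weights = { 1, 1, 1, 1 },  /* WRR weights of the 4 best-effort queues */
    };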

In interactive mode, the following commands are currently available on the application command line:

* --quit: Quits the application.

* stats app: Shows a table with in-app calculated statistics.

* stats port X subport Y: For a specific subport, it shows the number of packets that
  went through the scheduler properly and the number of packets that were dropped.
  The same information is shown in bytes.
  The information is displayed in a table, separated by traffic class.

* stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
  went through the scheduler properly and the number of packets that were dropped.
  The same information is shown in bytes.
  This information is displayed in a table, separated by individual queue.

The average queue size commands all work in the same way,
averaging the number of packets across a specific subset of queues.

Two parameters can be configured before calling any of these commands:

* qavg n X: X is the number of times that the calculation will take place.
  Bigger numbers provide higher accuracy. The default value is 10.

* qavg period X: X is the number of microseconds that will be allowed between each calculation.
  The default value is 100.

The commands that can be used for measuring average queue size are:

* qavg port X subport Y: Show average queue size per subport.

* qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class.

* qavg port X subport Y pipe Z: Show average queue size per pipe.

* qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic class.

* qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
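
For example, a short interactive session that adjusts the averaging parameters and then
queries one best-effort queue might look like this (the prompt and the subport/pipe
numbers are illustrative):

.. code-block:: console

    qos_sched> qavg n 32
    qos_sched> qavg period 100
    qos_sched> qavg port 0 subport 0 pipe 100 tc 12 q 0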

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration which creates one RX thread on lcore 5 reading
from port 3 and a worker thread on lcore 7 writing to port 2.

Another example with two packet flow configurations using different ports but sharing the same core for the QoS scheduler is given below:

.. code-block:: console

    ./qos_sched -l 1,2,6,7 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for the RX, WT and TX threads of each packet flow configuration are also supported,
providing flexibility to balance the work.
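
For instance, an illustrative command (not taken from the original guide) that gives each
flow its own RX, WT and TX lcore could look like this:

.. code-block:: console

    ./qos_sched -l 1,2,3,4,5,6,7 -n 4 -- --pfc "3,2,2,3,4" --pfc "1,0,5,6,7" --cfg ./profile.cfg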

The EAL coremask/corelist is constrained to contain the default master core 1 and the RX, WT and TX cores only.

The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS application:

* A subport represents a predefined group of users.

* A pipe represents an individual user/subscriber.

* A traffic class is the representation of a different traffic type with specific loss rate,
  delay and jitter requirements, such as voice, video or data transfers.

* A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address as indicated in the following table.

.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 13                      | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.0.X) |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | High Priority TC: 1,    | Queue of lowest priority traffic                 | Destination IP address (0.0.0.X) |
   |                | Lowest Priority TC: 4   | class (Best effort) serviced in WRR              |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+

Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.