..  BSD LICENSE

    Copyright(c) 2015 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Performance Thread Sample Application
=====================================

The performance thread sample application is a derivative of the standard L3
forwarding application that demonstrates different threading models.

Overview
--------

For a general description of the L3 forwarding applications capabilities
please refer to the documentation of the standard application in
:doc:`l3_forward`.
The performance thread sample application differs from the standard L3
forwarding example in that it divides the TX and RX processing between
different threads, and makes it possible to assign individual threads to
different cores.

Three threading models are considered:
#. When there is one EAL thread per physical core.
#. When there are multiple EAL threads per physical core.
#. When there are multiple lightweight threads per EAL thread.
Since DPDK release 2.0 it is possible to launch applications using the
``--lcores`` EAL parameter, specifying cpu-sets for a physical core. With the
performance thread sample application it is now also possible to assign
individual RX and TX functions to different cores.
As an alternative to dividing the L3 forwarding work between different EAL
threads, the performance thread sample introduces the possibility to run the
application threads as lightweight threads (L-threads) within one or a few
EAL cores.
In order to facilitate this threading model the example includes a primitive
cooperative scheduler (L-thread) subsystem. More details of the L-thread
subsystem can be found in :ref:`lthread_subsystem`.
**Note:** Whilst theoretically possible, it is not anticipated that multiple
L-thread schedulers would be run on the same physical core; this mode of
operation should not be expected to yield useful performance and is considered
invalid.
Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the `performance-thread/l3fwd-thread` sub-directory.
Running the Application
-----------------------

The application has a number of command line options::

    ./build/l3fwd-thread [EAL options] --
        -p PORTMASK [-P]
        --rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
        --tx(lcore,thread)[,(lcore,thread)]
        [--enable-jumbo] [--max-pkt-len PKTLEN] [--no-numa]
        [--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
        [--parse-ptype]

Where:
* ``-p PORTMASK``: Hexadecimal bitmask of ports to configure.

* ``-P``: optional, sets all ports to promiscuous mode so that packets are
  accepted regardless of the packet's Ethernet MAC destination address.
  Without this option, only packets with the Ethernet MAC destination address
  set to the Ethernet address of the port are accepted.
* ``--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread)]``: the list of
  NIC RX ports and queues handled by the RX lcores and threads. The parameters
  are explained below.

* ``--tx (lcore,thread)[,(lcore,thread)]``: the list of TX threads identifying
  the lcore the thread runs on, and the id of the RX thread with which it is
  associated. The parameters are explained below.
* ``--enable-jumbo``: optional, enables jumbo frames.

* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).

* ``--no-numa``: optional, disables NUMA awareness.

* ``--hash-entry-num``: optional, specifies the hash entry number in hex to be
  setup.

* ``--ipv6``: optional, set if running IPv6 packets.

* ``--no-lthreads``: optional, disables the L-thread model and uses the EAL
  threading model instead.

* ``--stat-lcore``: optional, run the CPU load stats collector on the
  specified lcore.

* ``--parse-ptype``: optional, set to use software to analyze packet type.
  Without this option, hardware will check the packet type.
The parameters of the ``--rx`` and ``--tx`` options are:

* ``--rx`` parameters

.. _table_l3fwd_rx_parameters:

+--------+------------------------------------------------------+
| port   | RX port                                              |
+--------+------------------------------------------------------+
| queue  | RX queue that will be read on the specified RX port  |
+--------+------------------------------------------------------+
| lcore  | Core to use for the thread                           |
+--------+------------------------------------------------------+
| thread | Thread id (continuously from 0 to N)                 |
+--------+------------------------------------------------------+
* ``--tx`` parameters

.. _table_l3fwd_tx_parameters:

+--------+------------------------------------------------------+
| lcore  | Core to use for L3 route match and transmit          |
+--------+------------------------------------------------------+
| thread | Id of RX thread to be associated with this TX thread |
+--------+------------------------------------------------------+
The ``l3fwd-thread`` application allows you to start packet processing in two
threading models: L-Threads (default) and EAL Threads (when the
``--no-lthreads`` parameter is used). For consistency all parameters are used
in the same way for both models.
Running with L-threads
~~~~~~~~~~~~~~~~~~~~~~

When the L-thread model is used (default option), lcore and thread parameters
in ``--rx/--tx`` are used to affinitize threads to the selected scheduler.
For example, the following places every l-thread on different lcores::

    l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
        --rx="(0,0,0,0)(1,0,1,1)" \
        --tx="(2,0)(3,1)"
The following places RX l-threads on lcore 0 and TX l-threads on lcore 1 and 2
and so on::

    l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
        --rx="(0,0,0,0)(1,0,0,1)" \
        --tx="(1,0)(2,1)"
Running with EAL threads
~~~~~~~~~~~~~~~~~~~~~~~~

When the ``--no-lthreads`` parameter is used, the L-threading model is turned
off and EAL threads are used for all processing. EAL threads are enumerated in
the same way as L-threads, but the ``--lcores`` EAL parameter is used to
affinitize threads to the selected cpu-set (scheduler). Thus it is possible to
place every RX and TX thread on different lcores.
193 For example, the following places every EAL thread on different lcores::
195 l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
196 --rx="(0,0,0,0)(1,0,1,1)" \
To affinitize two or more EAL threads to one cpu-set, the EAL ``--lcores``
parameter is used.
The following places RX EAL threads on lcore 0 and TX EAL threads on lcore 1
and 2 and so on::

    l3fwd-thread -l 0-7 -n 2 --lcores="(0,1)@0,(2,3)@1" -- -P -p 3 \
        --rx="(0,0,0,0)(1,0,1,1)" \
        --tx="(2,0)(3,1)" \
        --no-lthreads
Examples
~~~~~~~~

For selected scenarios, the command line configuration of the application for
L-threads and its corresponding EAL threads command line can be realized as
follows:
a) Start every thread on a different scheduler (1:1)::

       l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,1,1)" \
           --tx="(2,0)(3,1)"

   EAL thread equivalent::

       l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,1,1)" \
           --tx="(2,0)(3,1)" \
           --no-lthreads
b) Start all threads on one core (N:1).

   Start 4 L-threads on lcore 0::

       l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,0,1)" \
           --tx="(0,0)(0,1)"

   Start 4 EAL threads on cpu-set 0::

       l3fwd-thread -l 0-7 -n 2 --lcores="(0-3)@0" -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,0,1)" \
           --tx="(2,0)(3,1)" \
           --no-lthreads
c) Start threads on different cores (N:M).

   Start 2 L-threads for RX on lcore 0, and 2 L-threads for TX on lcore 1::

       l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,0,1)" \
           --tx="(1,0)(1,1)"

   Start 2 EAL threads for RX on cpu-set 0, and 2 EAL threads for TX on
   cpu-set 1::

       l3fwd-thread -l 0-7 -n 2 --lcores="(0-1)@0,(2-3)@1" -- -P -p 3 \
           --rx="(0,0,0,0)(1,0,1,1)" \
           --tx="(2,0)(3,1)" \
           --no-lthreads
Explanation
-----------

To a great extent the sample application differs little from the standard L3
forwarding application, and readers are advised to familiarize themselves with
the material covered in the :doc:`l3_forward` documentation before proceeding.

The following explanation is focused on the way threading is handled in the
performance thread example.
Mode of operation with EAL threads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The performance thread sample application has split the RX and TX functionality
into two different threads, and the RX and TX threads are
interconnected via software rings. With respect to these rings the RX threads
are producers and the TX threads are consumers.
On initialization the TX and RX threads are started according to the command
line parameters.

The RX threads poll the network interface queues and post received packets to a
TX thread via a corresponding software ring.
The TX threads poll software rings, perform the L3 forwarding hash/LPM match,
and assemble packet bursts before performing burst transmit on the network
interface.

As with the standard L3 forward application, burst draining of residual packets
is performed periodically with the period calculated from elapsed time using
the timestamps counter.
The diagram below illustrates a case with two RX threads and three TX threads.

.. _figure_performance_thread_1:

.. figure:: img/performance_thread_1.*
Mode of operation with L-threads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Like the EAL thread configuration, the application has split the RX and TX
functionality into different threads, and the pairs of RX and TX threads are
interconnected via software rings.

On initialization an L-thread scheduler is started on every EAL thread. On all
but the master EAL thread only a dummy L-thread is initially started.
The L-thread started on the master EAL thread then spawns other L-threads on
different L-thread schedulers according to the command line parameters.
The RX threads poll the network interface queues and post received packets
to a TX thread via the corresponding software ring.

The ring interface is augmented by means of an L-thread condition variable that
enables the TX thread to be suspended when the TX ring is empty. The RX thread
signals the condition whenever it posts to the TX ring, causing the TX thread
to be resumed.
Additionally the TX L-thread spawns a worker L-thread to take care of
polling the software rings, whilst it handles burst draining of the transmit
buffer.

The worker threads poll the software rings, perform L3 route lookup and
assemble packet bursts. If the TX ring is empty the worker thread suspends
itself by waiting on the condition variable associated with the ring.
Burst draining of residual packets, less than the burst size, is performed by
the TX thread which sleeps (using an L-thread sleep function) and resumes
periodically to flush the TX buffer.

This design means that L-threads that have no work can yield the CPU to other
L-threads and avoid having to constantly poll the software rings.
The diagram below illustrates a case with two RX threads and three TX functions
(each comprising a thread that processes forwarding and a thread that
periodically drains the output buffer of residual packets).

.. _figure_performance_thread_2:

.. figure:: img/performance_thread_2.*
CPU load statistics
-------------------

It is possible to display statistics showing estimated CPU load on each core.
The statistics indicate the percentage of CPU time spent: processing
received packets (forwarding), polling queues/rings (waiting for work),
and doing any other processing (context switch and other overhead).
When enabled, statistics are gathered by having the application threads set and
clear flags when they enter and exit pertinent code sections. The flags are
then sampled in real time by a statistics collector thread running on another
core. This thread displays the data in real time on the console.

This feature is enabled by designating a statistics collector core, using the
``--stat-lcore`` parameter.
.. _lthread_subsystem:

The L-thread subsystem
----------------------

The L-thread subsystem resides in the examples/performance-thread/common
directory and is built and linked automatically when building the
``l3fwd-thread`` example.
The subsystem provides a simple cooperative scheduler to enable arbitrary
functions to run as cooperative threads within a single EAL thread.
The subsystem provides a pthread-like API that is intended to assist in the
reuse of legacy code written for POSIX pthreads.

The following sections provide some detail on the features, constraints,
performance, and porting considerations when using L-threads.
.. _comparison_between_lthreads_and_pthreads:

Comparison between L-threads and POSIX pthreads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The fundamental difference between the L-thread and pthread models is the
way in which threads are scheduled. The simplest way to think about this is to
consider the case of a processor with a single CPU. To run multiple threads
on a single CPU, the scheduler must frequently switch between the threads,
in order that each thread is able to make timely progress.
This is the basis of any multitasking operating system.
This section explores the differences between the pthread model and the
L-thread model as implemented in the provided L-thread subsystem. If needed, a
theoretical discussion of preemptive vs. cooperative multi-threading can be
found in any good text on operating system design.
Scheduling and context switching
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The POSIX pthread library provides an application programming interface to
create and synchronize threads. Scheduling policy is determined by the host OS,
and may be configurable. The OS may use sophisticated rules to determine which
thread should be run next, threads may suspend themselves or make other threads
ready, and the scheduler may employ a time slice giving each thread a maximum
time quantum after which it will be preempted in favor of another thread that
is ready to run. To complicate matters further, threads may be assigned
different scheduling priorities.
By contrast the L-thread subsystem is considerably simpler. Logically the
L-thread scheduler performs the same multiplexing function for L-threads
within a single pthread as the OS scheduler does for pthreads within an
application process. The L-thread scheduler is simply the main loop of a
pthread, and so far as the host OS is concerned it is a regular pthread
just like any other. The host OS is oblivious to the existence of, and
not at all involved in, the scheduling of L-threads.
The other and most significant difference between the two models is that
L-threads are scheduled cooperatively. L-threads cannot preempt each
other, nor can the L-thread scheduler preempt a running L-thread (i.e.
there is no time slicing). The consequence is that programs implemented with
L-threads must possess frequent rescheduling points, meaning that they must
explicitly and of their own volition return to the scheduler at frequent
intervals, in order to allow other L-threads an opportunity to proceed.
In both models switching between threads requires that the current CPU
context is saved and a new context (belonging to the next thread ready to run)
is restored. With pthreads this context switching is handled transparently
and the set of CPU registers that must be preserved between context switches
is as per an interrupt handler.
An L-thread context switch is achieved by the thread itself making a function
call to the L-thread scheduler. Thus it is only necessary to preserve the
callee-save registers. The caller is responsible for saving and restoring any
other registers it is using before the function call, and restoring them on
return, and this is handled by the compiler. For ``X86_64`` on both Linux and
BSD the System V calling convention is used; this defines registers RBX, RSP,
RBP, and R12-R15 as callee-save registers (for a more detailed discussion a
good reference is `X86 Calling Conventions
<https://en.wikipedia.org/wiki/X86_calling_conventions>`_).
Taking advantage of this, and due to the absence of preemption, an L-thread
context switch is achieved with less than 20 load/store instructions.

The scheduling policy for L-threads is fixed: there is no prioritization of
L-threads, all L-threads are equal, and scheduling is based on a FIFO
ready queue.
An L-thread is a struct containing the CPU context of the thread
(saved on context switch) and other useful items. The ready queue contains
pointers to threads that are ready to run. The L-thread scheduler is a simple
loop that polls the ready queue, reads from it the next thread ready to run,
which it resumes by saving the current context (the current position in the
scheduler loop) and restoring the context of the next thread from its thread
struct. Thus an L-thread is always resumed at the last place it yielded.
A well behaved L-thread will call the context switch regularly (at least once
in its main loop), thus returning to the scheduler's own main loop. Yielding
inserts the current thread at the back of the ready queue, and the process of
servicing the ready queue is repeated; thus the system runs by flipping back
and forth between L-threads and the scheduler loop.
In the case of pthreads, the preemptive scheduling, time slicing, and support
for thread prioritization means that progress is normally possible for any
thread that is ready to run. This comes at the price of a relatively heavier
context switch and scheduling overhead.

With L-threads the progress of any particular thread is determined by the
frequency of rescheduling opportunities in the other L-threads. This means that
an errant L-thread monopolizing the CPU might cause scheduling of other threads
to be stalled. Due to the lower cost of context switching, however, voluntary
rescheduling to ensure progress of other threads, if managed sensibly, is not
a prohibitive overhead, and overall performance can exceed that of an
application using pthreads.
Mutual exclusion
^^^^^^^^^^^^^^^^

With pthreads, preemption means that threads that share data must observe
some form of mutual exclusion protocol.

The fact that L-threads cannot preempt each other means that in many cases
mutual exclusion devices can be completely avoided.

Locking to protect shared data can be a significant bottleneck in
multi-threaded applications, so a carefully designed cooperatively scheduled
program can enjoy significant performance advantages.
So far we have considered only the simplistic case of a single core CPU;
when multiple CPUs are considered things are somewhat more complex.

First of all it is inevitable that there must be multiple L-thread schedulers,
one running on each EAL thread. So long as these schedulers remain isolated
from each other the above assertions about the potential advantages of
cooperative scheduling hold true.
A configuration with isolated cooperative schedulers is less flexible than the
pthread model where threads can be affinitized to run on any CPU. With isolated
schedulers, scaling of applications to utilize fewer or more CPUs according to
system demand is very difficult to achieve.

The L-thread subsystem makes it possible for L-threads to migrate between
schedulers running on different CPUs. Needless to say, if the migration means
that threads that share data end up running on different CPUs then this will
introduce the need for some kind of mutual exclusion system.
Of course ``rte_ring`` software rings can always be used to interconnect
threads running on different cores; however, to protect other kinds of shared
data structures, lock-free constructs or else explicit locking will be
required. This is a consideration for the application design.

In support of this extended functionality, the L-thread subsystem implements
thread safe mutexes and condition variables.

The cost of affinitizing and of condition variable signaling is significantly
lower than the equivalent pthread operations, and so applications using these
features will see a performance benefit.
Thread local storage
^^^^^^^^^^^^^^^^^^^^

As with applications written for pthreads, an application written for L-threads
can take advantage of thread local storage, in this case local to an L-thread.
An application may save and retrieve a single pointer to application data in
an L-thread.

For legacy and backward compatibility reasons two alternative methods are also
offered: the first is modelled directly on the pthread get/set specific APIs,
the second approach is modelled on the ``RTE_PER_LCORE`` macros, whereby
``PER_LTHREAD`` macros are introduced. In both cases the storage is local to
an L-thread.
.. _constraints_and_performance_implications:

Constraints and performance implications when using L-threads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _API_compatibility:

API compatibility and differences
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The L-thread subsystem provides a set of functions that are logically equivalent
to the corresponding functions offered by the POSIX pthread library; however not
all pthread functions have a corresponding L-thread equivalent, and not all
features available to pthreads are implemented for L-threads.
The pthread library offers considerable flexibility via programmable attributes
that can be associated with threads, mutexes, and condition variables.

By contrast the L-thread subsystem has fixed functionality: the scheduler policy
cannot be varied, and L-threads cannot be prioritized. There are no variable
attributes associated with any L-thread objects; L-threads, mutexes and
condition variables all have fixed functionality. (Note: reserved parameters
are included in the APIs to facilitate possible future support for attributes.)
The table below lists the pthread and equivalent L-thread APIs with notes on
differences and/or constraints. Where there is no L-thread entry in the table,
then the L-thread subsystem provides no equivalent function.
.. _table_lthread_pthread:

.. table:: Pthread and equivalent L-thread APIs.

   +----------------------------+------------------------+-------------------+
   | **Pthread function**       | **L-thread function**  | **Notes**         |
   +============================+========================+===================+
   | pthread_barrier_destroy    |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_barrier_init       |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_barrier_wait       |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_broadcast     | lthread_cond_broadcast | See note 1        |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_destroy       | lthread_cond_destroy   |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_init          | lthread_cond_init      |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_signal        | lthread_cond_signal    | See note 1        |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_timedwait     |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_cond_wait          | lthread_cond_wait      | See note 5        |
   +----------------------------+------------------------+-------------------+
   | pthread_create             | lthread_create         | See notes 2, 3    |
   +----------------------------+------------------------+-------------------+
   | pthread_detach             | lthread_detach         | See note 4        |
   +----------------------------+------------------------+-------------------+
   | pthread_equal              |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_exit               | lthread_exit           |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_getspecific        | lthread_getspecific    |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_getcpuclockid      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_join               | lthread_join           |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_key_create         | lthread_key_create     |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_key_delete         | lthread_key_delete     |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_destroy      | lthread_mutex_destroy  |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_init         | lthread_mutex_init     |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_lock         | lthread_mutex_lock     | See note 6        |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_trylock      | lthread_mutex_trylock  | See note 6        |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_timedlock    |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_mutex_unlock       | lthread_mutex_unlock   |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_destroy     |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_init        |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_rdlock      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_timedrdlock |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_timedwrlock |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_tryrdlock   |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_trywrlock   |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_unlock      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_rwlock_wrlock      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_self               | lthread_current        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_setspecific        | lthread_setspecific    |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_spin_init          |                        | See note 10       |
   +----------------------------+------------------------+-------------------+
   | pthread_spin_destroy       |                        | See note 10       |
   +----------------------------+------------------------+-------------------+
   | pthread_spin_lock          |                        | See note 10       |
   +----------------------------+------------------------+-------------------+
   | pthread_spin_trylock       |                        | See note 10       |
   +----------------------------+------------------------+-------------------+
   | pthread_spin_unlock        |                        | See note 10       |
   +----------------------------+------------------------+-------------------+
   | pthread_cancel             | lthread_cancel         |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_setcancelstate     |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_setcanceltype      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_testcancel         |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_getschedparam      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_setschedparam      |                        |                   |
   +----------------------------+------------------------+-------------------+
   | pthread_yield              | lthread_yield          | See note 7        |
   +----------------------------+------------------------+-------------------+
   | pthread_setaffinity_np     | lthread_set_affinity   | See notes 2, 3, 8 |
   +----------------------------+------------------------+-------------------+
   |                            | lthread_sleep          | See note 9        |
   +----------------------------+------------------------+-------------------+
   |                            | lthread_sleep_clks     | See note 9        |
   +----------------------------+------------------------+-------------------+
**Note 1**:

Neither lthread signal nor broadcast may be called concurrently by L-threads
running on different schedulers, although multiple L-threads running in the
same scheduler may freely perform signal or broadcast operations. L-threads
running on the same or different schedulers may always safely wait on a
condition variable.
**Note 2**:

Pthread attributes may be used to affinitize a pthread with a cpu-set. The
L-thread subsystem does not support a cpu-set. An L-thread may be affinitized
only with a single CPU at any time.
**Note 3**:

If an L-thread is intended to run on a different NUMA node than the node that
creates the thread then, when calling ``lthread_create()``, it is advantageous
to specify the destination core as a parameter of ``lthread_create()``. See
:ref:`memory_allocation_and_NUMA_awareness` for details.
**Note 4**:

An L-thread can only detach itself, and cannot detach other L-threads.
**Note 5**:

A wait operation on a pthread condition variable is always associated with and
protected by a mutex which must be owned by the thread at the time it invokes
``pthread_wait()``. By contrast L-thread condition variables are thread safe
(for waiters) and do not use an associated mutex. Multiple L-threads (including
L-threads running on other schedulers) can safely wait on an L-thread condition
variable. As a consequence the performance of an L-thread condition variable
is typically an order of magnitude faster than its pthread counterpart.
**Note 6**:

Recursive locking is not supported with L-threads; attempts to take a lock
recursively will be detected and rejected.
725 ``lthread_yield()`` will save the current context, insert the current thread
726 to the back of the ready queue, and resume the next ready thread. Yielding
727 increases ready queue backlog, see :ref:`ready_queue_backlog` for more details
728 about the implications of this.
N.B. The context switch time, as measured from immediately before the call to
``lthread_yield()`` to the point at which the next ready thread is resumed,
can be an order of magnitude faster than the same measurement for the
equivalent pthread operation.
739 ``lthread_set_affinity()`` is similar to a yield apart from the fact that the
740 yielding thread is inserted into a peer ready queue of another scheduler.
741 The peer ready queue is actually a separate thread safe queue, which means that
742 threads appearing in the peer ready queue can jump any backlog in the local
743 ready queue on the destination scheduler.
745 The context switch time as measured from the time just before the call to
746 ``lthread_set_affinity()`` to just after the same thread is resumed on the new
747 scheduler can be orders of magnitude faster than the same measurement for
748 ``pthread_setaffinity_np()``.
753 Although there is no ``pthread_sleep()`` function, ``lthread_sleep()`` and
754 ``lthread_sleep_clks()`` can be used wherever ``sleep()``, ``usleep()`` or
755 ``nanosleep()`` might ordinarily be used. The L-thread sleep functions suspend
756 the current thread, start an ``rte_timer`` and resume the thread when the
757 timer matures. The ``rte_timer_manage()`` entry point is called on every pass
758 of the scheduler loop. This means that the worst case jitter on timer expiry
is determined by the longest period between context switches of any running
L-thread.
In a synthetic test with many threads sleeping and resuming, the measured
jitter is typically orders of magnitude lower than the same measurement made
for ``nanosleep()``.
Spin locks are not provided because they are problematic in a cooperative
environment; see :ref:`porting_locks_and_spinlocks` for a more detailed
discussion on how to avoid spin locks.
774 .. _Thread_local_storage_performance:
Of the three L-thread local storage options, the simplest and most efficient
is storing a single application data pointer in the L-thread struct.

The ``PER_LTHREAD`` macros involve a run time computation to obtain the
address of the variable being saved/retrieved, and also require that accesses
be de-referenced via a pointer. This means that code using ``RTE_PER_LCORE``
macros which is being ported to L-threads might need some slight adjustment
(see :ref:`porting_thread_local_storage` for hints about porting code that
makes use of thread local storage).
789 The get/set specific APIs are consistent with their pthread counterparts both
790 in use and in performance.
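The three options can be sketched side by side as follows (a sketch assuming
the lthread API from ``lthread_api.h`` and its ``PER_LTHREAD`` macros; the
``app_ctx`` type and ``counter`` variable are illustrative):

```c
#include <lthread_api.h>

struct app_ctx { int id; };

/* Option 2: a per-L-thread variable declared via the PER_LTHREAD macros. */
RTE_DEFINE_PER_LTHREAD(int, counter);

static void tls_demo(void *arg)
{
	struct app_ctx ctx = { .id = 1 };
	(void)arg;

	/* Option 1: single data pointer stored in the lthread struct
	 * (the simplest and most efficient option). */
	lthread_set_data(&ctx);
	struct app_ctx *c = lthread_get_data();

	/* Option 2: access involves a run time address computation and a
	 * de-reference via a pointer. */
	RTE_PER_LTHREAD(counter) = c->id;

	/* Option 3: pthread-style get/set specific APIs. */
	unsigned int key;
	lthread_key_create(&key, NULL);
	lthread_setspecific(key, &ctx);
	struct app_ctx *k = lthread_getspecific(key);
	(void)k;

	lthread_exit(NULL);
}
```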
793 .. _memory_allocation_and_NUMA_awareness:
795 Memory allocation and NUMA awareness
796 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
798 All memory allocation is from DPDK huge pages, and is NUMA aware. Each
799 scheduler maintains its own caches of objects: lthreads, their stacks, TLS,
800 mutexes and condition variables. These caches are implemented as unbounded lock
801 free MPSC queues. When objects are created they are always allocated from the
802 caches on the local core (current EAL thread).
804 If an L-thread has been affinitized to a different scheduler, then it can
805 always safely free resources to the caches from which they originated (because
806 the caches are MPSC queues).
808 If the L-thread has been affinitized to a different NUMA node then the memory
809 resources associated with it may incur longer access latency.
The commonly used pattern of setting affinity on entry to a thread after it
has started means that memory allocation for both the stack and TLS will have
been made from caches on the NUMA node on which the thread's creator is
running. This has the side effect that access latency will be sub-optimal
after the thread has been re-affinitized to a different NUMA node.

This side effect can be mitigated to some extent (although not completely) by
specifying the destination CPU as a parameter of ``lthread_create()``. This
causes the L-thread's stack and TLS to be allocated when it is first scheduled
on the destination scheduler; if the destination is on another NUMA node this
results in a more optimal memory allocation.
Note that the lthread struct itself remains allocated from memory on the
creating node; this is unavoidable because an L-thread is known everywhere by
the address of this struct.
828 .. _object_cache_sizing:
The per lcore object caches pre-allocate objects in bulk whenever a request to
allocate an object finds a cache empty. By default 100 objects are
pre-allocated; this is defined by ``LTHREAD_PREALLOC`` in the public API
header file ``lthread_api.h``. This means that the caches constantly grow to
meet system demand.
In the present implementation there is no mechanism to reduce the cache sizes
if system demand reduces. Thus the caches will remain at their maximum extent.
A consequence of the bulk pre-allocation of objects is that every 100 (default
value) additional new object create operations result in a call to
``rte_malloc()``. For creation of objects such as L-threads, which trigger the
allocation of even more objects (i.e. their stacks and TLS), this can cause
outliers in scheduling performance.
If this is a problem, the simplest mitigation strategy is to dimension the
system by setting the bulk object pre-allocation size to some large number
that you do not expect to be exceeded. This means the caches will be populated
once only, the very first time a thread is created.
855 .. _Ready_queue_backlog:
One of the more subtle performance considerations is managing the ready queue
backlog. The fewer threads that are waiting in the ready queue, the faster any
particular thread will get serviced.
864 In a naive L-thread application with N L-threads simply looping and yielding,
865 this backlog will always be equal to the number of L-threads, thus the cost of
866 a yield to a particular L-thread will be N times the context switch time.
This side effect can be mitigated by arranging for threads to be suspended and
wait to be resumed, rather than polling for work by constantly yielding.
Blocking on a mutex or condition variable, or, even more obviously, having a
thread sleep if it has a low frequency workload, are all mechanisms by which a
thread can be excluded from the ready queue until it really does need to be
run. This can have a significant positive impact on performance.
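A consumer that waits on an L-thread condition variable instead of polling by
yielding can be sketched as follows (assuming the lthread condition variable
API from ``lthread_api.h``; the names ``work_ready``, ``consumer`` and
``producer_notify`` are illustrative):

```c
#include <lthread_api.h>

static struct lthread_cond *work_ready;

static void cond_setup(void)
{
	/* name, condvar pointer, attributes (none) */
	lthread_cond_init("work", &work_ready, NULL);
}

static void consumer(void *arg)
{
	(void)arg;
	for (;;) {
		/* Suspends this L-thread off the ready queue until signaled,
		 * so it adds nothing to the ready queue backlog while idle. */
		lthread_cond_wait(work_ready, 0);
		/* ... process the available work here ... */
	}
}

static void producer_notify(void)
{
	/* Wakes one waiter; lthread_cond_broadcast() would wake them all. */
	lthread_cond_signal(work_ready);
}
```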
876 .. _Initialization_and_shutdown_dependencies:
878 Initialization, shutdown and dependencies
879 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The L-thread subsystem depends on DPDK for huge page allocation and depends on
the ``rte_timer`` subsystem. The DPDK EAL initialization and
``rte_timer_subsystem_init()`` **MUST** be completed before the L-thread
subsystem can be used.
Thereafter initialization of the L-thread subsystem is largely transparent to
the application. Constructor functions ensure that global variables are
properly initialized. Other than global variables, each scheduler is
initialized independently the first time that an L-thread is created by a
particular EAL thread.
If the schedulers are to be run as isolated and independent schedulers, with
no intention that L-threads running on different schedulers will migrate
between schedulers or synchronize with L-threads running on other schedulers,
then initialization consists simply of creating an L-thread, and then running
the L-thread scheduler.
If there will be interaction between L-threads running on different
schedulers, then it is important that the starting of schedulers on different
EAL threads is synchronized.
To achieve this, an additional initialization step is necessary: set the
number of schedulers by calling the API function
``lthread_num_schedulers_set(n)``, where ``n`` is the number of EAL threads
that will run L-thread schedulers. Setting the number of schedulers to a
number greater than 0 will cause all schedulers to wait until the others have
started before beginning to schedule L-threads.
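Putting the initialization steps together, a minimal startup sequence might
look like the following sketch (assuming the DPDK EAL and lthread APIs;
``initial_lthread`` is an illustrative application function and error
handling is omitted):

```c
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_timer.h>
#include <lthread_api.h>

static void initial_lthread(void *arg);  /* application entry L-thread */

static int sched_main(void *arg)
{
	struct lthread *lt;
	(void)arg;

	/* Create an initial L-thread on this core (-1 = current core),
	 * then hand the EAL thread over to the scheduler. */
	lthread_create(&lt, -1, initial_lthread, NULL);
	lthread_run();  /* returns only after lthread_scheduler_shutdown*() */
	return 0;
}

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;
	rte_timer_subsystem_init();  /* required before any lthread use */

	/* Make all schedulers start in lock step before scheduling begins. */
	lthread_num_schedulers_set(rte_lcore_count());

	rte_eal_mp_remote_launch(sched_main, NULL, CALL_MASTER);
	rte_eal_mp_wait_lcore();
	return 0;
}
```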
The L-thread scheduler is started by calling the function ``lthread_run()``.
It should be called from the EAL thread and thus becomes the main loop of the
EAL thread.
The function ``lthread_run()`` will not return until all threads running on
the scheduler have exited, and the scheduler has been explicitly stopped by
calling ``lthread_scheduler_shutdown(lcore)`` or
``lthread_scheduler_shutdown_all()``.
All these functions do is tell the scheduler that it can exit when there are
no longer any running L-threads; neither function forces any running L-thread
to terminate. Any desired application shutdown behavior must be designed and
built into the application to ensure that L-threads complete in a timely
manner.
**Important Note:** It is assumed when the scheduler exits that the
application is terminating for good; the scheduler does not free resources
before exiting, and running the scheduler a subsequent time will result in
undefined behavior.
929 .. _porting_legacy_code_to_run_on_lthreads:
931 Porting legacy code to run on L-threads
932 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
934 Legacy code originally written for a pthread environment may be ported to
935 L-threads if the considerations about differences in scheduling policy, and
936 constraints discussed in the previous sections can be accommodated.
938 This section looks in more detail at some of the issues that may have to be
939 resolved when porting code.
942 .. _pthread_API_compatibility:
944 pthread API compatibility
945 ^^^^^^^^^^^^^^^^^^^^^^^^^
The first step is to establish exactly which pthread APIs the legacy
application uses, and to understand the requirements of those APIs. If there
are corresponding L-thread APIs, and the default pthread functionality is
used by the application, then, notwithstanding the other issues discussed
here, it should be feasible to run the application with L-threads. If the
legacy code modifies the default behavior using attributes, then it may be
necessary to make some adjustments to eliminate those requirements.
956 .. _blocking_system_calls:
958 Blocking system API calls
959 ^^^^^^^^^^^^^^^^^^^^^^^^^
It is important to understand what other system services the application may
be using, bearing in mind that in a cooperatively scheduled environment a
thread cannot block without stalling the scheduler and, with it, all other
cooperative threads. Any kind of blocking system call, for example file or
socket IO, is a potential problem; a good tool to analyze the application for
this purpose is the ``strace`` utility.
There are many strategies to resolve these kinds of issues, each with its
merits. Possible solutions include:
971 * Adopting a polled mode of the system API concerned (if available).
973 * Arranging for another core to perform the function and synchronizing with
974 that core via constructs that will not block the L-thread.
* Affinitizing the thread to another scheduler devoted (as a matter of policy)
  to handling threads wishing to make blocking calls, and then back again when
  the call completes.
981 .. _porting_locks_and_spinlocks:
986 Locks and spinlocks are another source of blocking behavior that for the same
987 reasons as system calls will need to be addressed.
If the application design ensures that the contending L-threads will always
run on the same scheduler, then it is probably safe to remove locks and spin
locks.
The only exception to the above rule is if for some reason the code performs
any kind of context switch whilst holding the lock (e.g. yield, sleep, or
block on a different lock or on a condition variable). This will need to be
determined before deciding to eliminate a lock.
If a lock cannot be eliminated then an L-thread mutex can be substituted.
An L-thread blocking on an L-thread mutex will be suspended and will cause
another ready L-thread to be resumed, thus not blocking the scheduler. When
default behavior is required, it can be used as a direct replacement for a
pthread mutex.
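As a sketch of the substitution (assuming the lthread mutex API from
``lthread_api.h``; the name ``m`` and the setup function are illustrative):

```c
#include <lthread_api.h>

static struct lthread_mutex *m;

static void mutex_setup(void)
{
	/* name, mutex pointer, attributes (none) */
	lthread_mutex_init("lock", &m, NULL);
}

static void critical_section(void)
{
	/* Blocks only this L-thread; the scheduler resumes another ready
	 * L-thread instead of stalling, unlike a pthread mutex or spin lock. */
	lthread_mutex_lock(m);
	/* ... protected work here ... */
	lthread_mutex_unlock(m);
}
```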
1006 Spin locks are typically used when lock contention is likely to be rare and
1007 where the period during which the lock may be held is relatively short.
1008 When the contending L-threads are running on the same scheduler then an
1009 L-thread blocking on a spin lock will enter an infinite loop stopping the
1010 scheduler completely (see :ref:`porting_infinite_loops` below).
1012 If the application design ensures that contending L-threads will always run
1013 on different schedulers then it might be reasonable to leave a short spin lock
1014 that rarely experiences contention in place.
If after all considerations it appears that a spin lock can neither be
eliminated completely, nor replaced with an L-thread mutex, nor left in place
as is, then an alternative is to loop on a flag, with a call to
``lthread_yield()`` inside the loop (n.b. if the contending L-threads might
ever run on different schedulers the flag will need to be manipulated
atomically).
1024 ready queue backlog (see also :ref:`ready_queue_backlog`).
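A loop-on-a-flag replacement for a spin lock can be sketched as follows (a
sketch only; it assumes C11 atomics and the ``lthread_yield()`` API, and uses
atomic operations so that it remains safe if contenders run on different
schedulers):

```c
#include <stdatomic.h>
#include <lthread_api.h>

static atomic_flag guard = ATOMIC_FLAG_INIT;

static void flag_lock(void)
{
	/* Instead of spinning, yield between attempts so that other
	 * L-threads on this scheduler can run (at the cost of added
	 * ready queue backlog). */
	while (atomic_flag_test_and_set_explicit(&guard, memory_order_acquire))
		lthread_yield();
}

static void flag_unlock(void)
{
	atomic_flag_clear_explicit(&guard, memory_order_release);
}
```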
1027 .. _porting_sleeps_and_delays:
Yet another kind of blocking behavior (albeit momentary) is that of delay
functions like ``sleep()``, ``usleep()``, ``nanosleep()`` etc. All will have
the consequence of stalling the L-thread scheduler, and unless the delay is
very short (e.g. a very short ``nanosleep()``) calls to these functions will
need to be replaced.
The simplest mitigation strategy is to use the L-thread sleep API functions,
of which two variants exist: ``lthread_sleep()`` and ``lthread_sleep_clks()``.
These functions start an ``rte_timer`` against the L-thread, suspend the
L-thread, and cause another ready L-thread to be resumed. The suspended
L-thread is resumed when the ``rte_timer`` matures.
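For example, a low frequency worker that would otherwise call ``usleep()``
can be sketched like this (assuming ``lthread_sleep()`` takes nanoseconds, as
in ``lthread_api.h``; the function name is illustrative):

```c
#include <lthread_api.h>

static void low_rate_worker(void *arg)
{
	(void)arg;
	for (;;) {
		/* ... periodic work here ... */

		/* Replaces usleep(1000): suspends only this L-thread on an
		 * rte_timer and lets another ready L-thread run, instead of
		 * stalling the whole scheduler. */
		lthread_sleep(1000000);  /* 1 ms, in nanoseconds */
	}
}
```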
1045 .. _porting_infinite_loops:
Some applications have threads with loops that contain no inherent
rescheduling opportunity, and rely solely on OS time slicing to share the
CPU. In a cooperative environment this will stop everything dead. These kinds
of loops are not hard to identify; in a debug session you will find the
debugger is always stopping in the same loop.
1056 The simplest solution to this kind of problem is to insert an explicit
1057 ``lthread_yield()`` or ``lthread_sleep()`` into the loop. Another solution
1058 might be to include the function performed by the loop into the execution path
1059 of some other loop that does in fact yield, if this is possible.
1062 .. _porting_thread_local_storage:
1064 Thread local storage
1065 ^^^^^^^^^^^^^^^^^^^^
If the application uses thread local storage, the use case should be
considered carefully.
In a legacy pthread application, either or both of the ``__thread`` prefix
and the pthread set/get specific APIs may have been used to define storage
local to a pthread.
In some applications it may be a reasonable assumption that the data could,
or in fact most likely should, be placed in L-thread local storage.
If the application (like many DPDK applications) has assumed a certain
relationship between a pthread and the CPU to which it is affinitized, there
is a risk that thread local storage may have been used to save some data
items that are correctly logically associated with the CPU, and other items
which relate to application context for the thread. Only a good understanding
of the application will reveal such cases.
If the application requires that an L-thread is able to move between
schedulers, then care should be taken to separate these kinds of data into
per-lcore and per-L-thread storage. In this way a migrating thread will bring
with it the local data it needs, and pick up the new logical core specific
values from pthread local storage at its new home.
A convenient way to get something working with legacy code can be to use a
shim that adapts pthread API calls to the corresponding L-thread ones. This
approach will not mitigate any of the porting considerations mentioned in the
previous sections, but it will reduce the amount of code churn that would
otherwise be involved. It is a reasonable approach to evaluate L-threads
before investing effort in porting to the native L-thread APIs.
1106 The L-thread subsystem includes an example pthread shim. This is a partial
1107 implementation but does contain the API stubs needed to get basic applications
1108 running. There is a simple "hello world" application that demonstrates the
1109 use of the pthread shim.
A subtlety of working with a shim is that the application will still need to
make use of the genuine pthread library functions, at the very least in order
to create the EAL threads in which the L-thread schedulers will run. This is
the case with DPDK initialization and exit.
To deal with the initialization and shutdown scenarios, the shim is capable of
switching its adaptor functionality on or off; an application can control this
behavior by calling the function ``pt_override_set()``. The default state
The pthread shim uses the dynamic linker to save the loaded addresses of the
genuine pthread API functions in an internal table; when the shim
functionality is enabled it performs the adaptor function, and when disabled
it invokes the genuine pthread function.
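The interposition technique the shim relies on can be sketched in plain C
without DPDK; here ``pt_override`` is a stand-in for the shim's internal
enable flag, and only one API is shown:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>

static int pt_override;  /* 0: forward to real pthread, 1: adapt to lthreads */

int pthread_mutex_lock(pthread_mutex_t *mutex)
{
	static int (*real_lock)(pthread_mutex_t *);

	/* Resolve the genuine libpthread entry point the first time through. */
	if (real_lock == NULL)
		real_lock = (int (*)(pthread_mutex_t *))
			dlsym(RTLD_NEXT, "pthread_mutex_lock");

	if (!pt_override)
		return real_lock(mutex);

	/* The real shim would call lthread_mutex_lock() here instead;
	 * this sketch simply forwards in both cases. */
	return real_lock(mutex);
}
```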
The function ``pthread_exit()`` has additional special handling. The standard
system header file pthread.h declares ``pthread_exit()`` with
``__attribute__((noreturn))``. This is an optimization that is possible
because the pthread is terminating, and it enables the compiler to omit the
normal handling of the stack and protection of registers, since the function
is not expected to return; in fact the thread is being destroyed. These
optimizations are applied in both the callee and the caller of the
``pthread_exit()`` function.
1135 In our cooperative scheduling environment this behavior is inadmissible. The
1136 pthread is the L-thread scheduler thread, and, although an L-thread is
1137 terminating, there must be a return to the scheduler in order that the system
1138 can continue to run. Further, returning from a function with attribute
1139 ``noreturn`` is invalid and may result in undefined behavior.
1141 The solution is to redefine the ``pthread_exit`` function with a macro,
1142 causing it to be mapped to a stub function in the shim that does not have the
1143 ``noreturn`` attribute. This macro is defined in the file
1144 ``pthread_shim.h``. The stub function is otherwise no different than any of
1145 the other stub functions in the shim, and will switch between the real
1146 ``pthread_exit()`` function or the ``lthread_exit()`` function as
1147 required. The only difference is that the mapping to the stub by macro
A consequence of this is that the file ``pthread_shim.h`` must be included in
legacy code wishing to make use of the shim. It also means that dynamic
linkage of a pre-compiled binary that did not include pthread_shim.h is not
possible.
Given the requirements for porting legacy code outlined in
:ref:`porting_legacy_code_to_run_on_lthreads`, most applications will require
at least some minimal adjustment and recompilation to run on L-threads, so
pre-compiled binaries are unlikely to be encountered in practice.
1160 In summary the shim approach adds some overhead but can be a useful tool to help
1161 establish the feasibility of a code reuse project. It is also a fairly
1162 straightforward task to extend the shim if necessary.
**Note:** Bearing in mind the preceding discussions about the impact of
making blocking calls, switching the shim in and out on the fly to invoke any
pthread API that might block is something that should typically be avoided.
1169 Building and running the pthread shim
1170 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The shim example application is located in the performance-thread folder of
the sample applications.

To build and run the pthread shim example:
1177 #. Go to the example applications folder
1179 .. code-block:: console
1181 export RTE_SDK=/path/to/rte_sdk
1182 cd ${RTE_SDK}/examples/performance-thread/pthread_shim
1185 #. Set the target (a default target is used if not specified). For example:
1187 .. code-block:: console
1189 export RTE_TARGET=x86_64-native-linuxapp-gcc
1191 See the DPDK Getting Started Guide for possible RTE_TARGET values.
1193 #. Build the application:
.. code-block:: console

    make
1199 #. To run the pthread_shim example
1201 .. code-block:: console
1203 lthread-pthread-shim -c core_mask -n number_of_channels
1205 .. _lthread_diagnostics:
1207 L-thread Diagnostics
1208 ~~~~~~~~~~~~~~~~~~~~
1210 When debugging you must take account of the fact that the L-threads are run in
1211 a single pthread. The current scheduler is defined by
1212 ``RTE_PER_LCORE(this_sched)``, and the current lthread is stored at
1213 ``RTE_PER_LCORE(this_sched)->current_lthread``. Thus on a breakpoint in a GDB
1214 session the current lthread can be obtained by displaying the pthread local
1215 variable ``per_lcore_this_sched->current_lthread``.
Another useful diagnostic feature is the possibility to trace significant
events in the life of an L-thread; this feature is enabled by changing the
value of ``LTHREAD_DIAG`` from 0 to 1 in the file ``lthread_diag_api.h``.
Tracing of events can be individually masked, and the mask may be programmed
at run time. An unmasked event results in a callback that provides information
about the event. The default callback simply prints trace information. The
default mask is 0 (all events off); the mask can be modified by calling the
function ``lthread_diagnostic_set_mask()``.
It is possible to register a user callback function to implement more
sophisticated diagnostic functions. Object creation events (lthread, mutex,
and condition variable) accept, and store in the created object, a user
supplied reference value returned by the callback function.
The lthread reference value is passed back in all subsequent event callbacks,
and APIs are provided to retrieve the reference value from mutexes and
condition variables. This enables a user to monitor, count, or filter for
specific events on specific objects, for example to monitor for a specific
thread signaling a specific condition variable, or to monitor all timer
events; the possibilities and combinations are endless.
The callback function can be set by calling the function
``lthread_diagnostic_enable()``, supplying a callback function pointer and an
event mask.
Setting ``LTHREAD_DIAG`` also enables counting of statistics about cache and
queue usage, and these statistics can be displayed by calling the function
``lthread_diag_stats_display()``. This function also performs a consistency
check on the caches and queues. The function should only be called from the
master EAL thread after all slave threads have stopped and returned to the C
main program; otherwise the consistency check will fail.