.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017 Intel Corporation.
Software Eventdev Poll Mode Driver
==================================
The software eventdev is an implementation of the eventdev API that provides a
wide range of the eventdev features. The eventdev relies on a CPU core to
perform event scheduling. This PMD can use the service core library to run the
scheduling function, allowing an application to utilize service cores to
multiplex other work on the same core if required.
Features
--------

The software eventdev implements many features in the eventdev API:
* Load balanced (for Atomic, Ordered, Parallel queues)
* Single Link (for single-link queues)
* Each event has a priority, which can be used to provide basic QoS
Configuration and Options
-------------------------
The software eventdev is a vdev device, and as such can be created from the
application code, or from the EAL command line:
* Call ``rte_vdev_init("event_sw0")`` from the application

* Use ``--vdev="event_sw0"`` in the EAL options, which will call
  rte_vdev_init() internally
.. code-block:: console

   ./your_eventdev_application --vdev="event_sw0"
Scheduling Quanta
~~~~~~~~~~~~~~~~~

The scheduling quanta sets the number of events that the device attempts to
schedule in a single schedule call performed by the service core. Note that
this is a *hint* only, and that fewer or more events may be scheduled in a given
iteration.
The scheduling quanta can be set using a string argument to the vdev
create call:
.. code-block:: console

   --vdev="event_sw0,sched_quanta=64"
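The hint semantics above can be illustrated with a minimal sketch in plain C.
This is not the PMD's actual implementation; ``schedule_call`` and
``ring_count`` are hypothetical names used for illustration only:

.. code-block:: c

   #include <assert.h>

   /* Illustrative sketch only: a service-core schedule call moves at
    * most 'quanta' events, but fewer when the ingress rings hold less. */
   static int
   schedule_call(int ring_count, int quanta)
   {
       int n = ring_count < quanta ? ring_count : quanta;
       /* ... 'n' events would be moved through the scheduler here ... */
       return n;
   }

   int
   main(void)
   {
       assert(schedule_call(100, 64) == 64); /* quanta caps the batch */
       assert(schedule_call(10, 64) == 10);  /* hint only: fewer scheduled */
       return 0;
   }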
Credit Quanta
~~~~~~~~~~~~~

The credit quanta is the number of credits that a port will fetch at a time from
the instance's credit pool. Higher numbers will cause less overhead in the
atomic credit fetch code, however they also deplete the overall number of credits
in the system faster. A balanced number (e.g. 32) ensures that only small numbers
of credits are pre-allocated at a time, while also limiting the performance
impact of frequent credit fetches.
Experimentation with higher values may provide minor performance improvements,
at the cost of the whole system having fewer credits. On the other hand,
reducing the quanta may cause a measurable performance impact but provides the
system with a higher number of credits at all times.
A value of 32 is a good balance; however, your specific application may
benefit from a higher or lower quanta size, so experimentation is required to
verify possible gains.
.. code-block:: console

   --vdev="event_sw0,credit_quanta=64"
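The trade-off described above can be sketched in plain C. This is not the
PMD's code; ``credit_pool``, ``port`` and ``port_refill`` are hypothetical
names chosen for illustration, and the pool access here is not atomic:

.. code-block:: c

   #include <assert.h>

   /* Illustrative sketch only: a port refills its local credits from the
    * shared pool 'quanta' at a time, so a larger quanta means fewer
    * (atomic, in the real PMD) pool accesses, but more credits sitting
    * pre-allocated in each port. */
   struct credit_pool { int credits; };
   struct port { int credits; };

   static int
   port_refill(struct credit_pool *pool, struct port *p, int quanta)
   {
       int n = pool->credits < quanta ? pool->credits : quanta;
       pool->credits -= n;
       p->credits += n;
       return n;
   }

   int
   main(void)
   {
       struct credit_pool pool = { .credits = 4096 };
       struct port p = { .credits = 0 };

       assert(port_refill(&pool, &p, 32) == 32);
       assert(pool.credits == 4064 && p.credits == 32);
       /* Doubling the quanta halves the number of fetches needed for the
        * same total, at the cost of more credits held by each port. */
       assert(port_refill(&pool, &p, 64) == 64);
       return 0;
   }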
Scheduler tuning arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~
The scheduler's minimum number of events processed can be increased to
reduce per-event overhead and increase internal burst sizes, which can
improve throughput.
* ``min_burst`` specifies the minimum number of inflight events that can be
  moved to the next stage in the scheduler. Default value is 1.
* ``refill_once`` is a switch that, when set, instructs the scheduler to dequeue
  the events waiting in the ingress rings only once per call. The default
  behavior is to dequeue as needed.
* ``deq_burst`` is the burst size used to dequeue from the port rings.
  Default value is 32, and it should be increased to 64 or 128 when setting
  ``refill_once=1``.
.. code-block:: console

   --vdev="event_sw0,min_burst=8,deq_burst=64,refill_once=1"
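The effect of ``min_burst`` can be sketched in plain C. This is not the PMD's
code; ``stage_flush`` and its arguments are hypothetical names used only to
show the batching idea:

.. code-block:: c

   #include <assert.h>

   /* Illustrative sketch only: with 'min_burst', a scheduler stage holds
    * events back until at least that many are pending, amortizing the
    * per-event overhead over a larger burst. */
   static int
   stage_flush(int pending, int min_burst)
   {
       return pending >= min_burst ? pending : 0;
   }

   int
   main(void)
   {
       assert(stage_flush(4, 8) == 0); /* below min_burst: keep batching */
       assert(stage_flush(9, 8) == 9); /* burst formed: move events on */
       assert(stage_flush(4, 1) == 4); /* default min_burst=1: no batching */
       return 0;
   }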
Limitations
-----------

The software eventdev implementation has a few limitations. The reason for
these limitations is usually that the performance impact of supporting the
feature would be significant.
"All Types" Queues
~~~~~~~~~~~~~~~~~~

The software eventdev does not support creating queues that handle all types of
traffic. An eventdev with this capability allows enqueuing Atomic, Ordered and
Parallel traffic to the same queue, while scheduling each of them appropriately.
The reason not to allow Atomic, Ordered and Parallel event types in the
same queue is that it would cause excessive branching in the code that enqueues
packets to the queue, causing a significant performance impact.
The ``RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES`` flag is not set in the
``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
eventdev.
Distributed Scheduler
~~~~~~~~~~~~~~~~~~~~~
The software eventdev is a centralized scheduler, requiring a service core to
perform the required event distribution. This is not really a limitation but
rather a design decision.
The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
eventdev.
Dequeue Timeout
~~~~~~~~~~~~~~~

The eventdev API supports a timeout when dequeuing packets using the
``rte_event_dequeue_burst`` function.
This allows a core to wait for an event to arrive, or until ``timeout`` number
of ticks have passed. Timeout ticks are not supported by the software eventdev
for performance reasons.
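One way an application can work around this is to poll with a timeout of zero
and enforce its own deadline. The sketch below is illustrative only:
``dequeue_burst_stub`` is a hypothetical stand-in for a real dequeue call (here
it never returns events), and ``dequeue_with_deadline`` is a made-up helper,
not a DPDK API:

.. code-block:: c

   #include <assert.h>
   #include <stddef.h>
   #include <time.h>

   /* Hypothetical stub standing in for a dequeue call made with a
    * timeout of 0; in this sketch it never returns any events. */
   static unsigned int
   dequeue_burst_stub(void *events, unsigned int nb_events)
   {
       (void)events;
       (void)nb_events;
       return 0;
   }

   /* Since the software eventdev ignores timeout ticks, the application
    * busy-polls with timeout 0 and enforces its own deadline. */
   static unsigned int
   dequeue_with_deadline(void *events, unsigned int nb_events, double seconds)
   {
       clock_t end = clock() + (clock_t)(seconds * CLOCKS_PER_SEC);
       unsigned int got;

       do {
           got = dequeue_burst_stub(events, nb_events);
       } while (got == 0 && clock() < end);

       return got;
   }

   int
   main(void)
   {
       /* No events ever arrive, so this returns 0 once the deadline passes. */
       assert(dequeue_with_deadline(NULL, 32, 0.01) == 0);
       return 0;
   }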