..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.
VM Power Management Application
===============================
Applications running in virtual environments have an abstract view of
the underlying hardware on the host; in particular, applications cannot
see the binding of virtual to physical hardware.
When looking at CPU resourcing, the pinning of Virtual CPUs (vCPUs) to
Host Physical CPUs (pCPUs) is not apparent to an application,
and this pinning may change over time.
Furthermore, Operating Systems on Virtual Machines (VMs) do not have the
ability to govern their own power policy: the Model Specific Registers
(MSRs) for enabling P-state transitions are not exposed to Operating
Systems running on VMs.
The Virtual Machine Power Management solution shows an example of
how a DPDK application can indicate its processing requirements, using
VM-local information only (vCPU/lcore, etc.), to a host-based monitor.
The monitor is responsible for accepting requests for frequency changes
for a vCPU, translating the vCPU to a pCPU via libvirt and effecting the
change in frequency.
The solution is comprised of two high-level components:

#. Example Host Application

   Using a Command Line Interface (CLI) for VM-to-Host communication channel
   management allows adding channels to the Monitor, setting and querying the
   vCPU to pCPU pinning, and inspecting and manually changing the frequency
   for each CPU.
   The CLI runs on a single lcore while the thread responsible for managing
   VM requests runs on a second lcore.

   VM requests arriving on a channel for frequency changes are passed
   to the librte_power ACPI cpufreq sysfs based library.
   The Host Application relies on both qemu-kvm and libvirt to function.

   This monitoring application is responsible for:

   - Accepting requests from client applications: Client applications can
     request frequency changes for a vCPU. The host application translates
     the vCPU to a pCPU via libvirt and effects the change in frequency.

   - Accepting policies from client applications: Client applications can
     send a policy to the host application. The host application will then
     apply the rules of the policy independent of the application. For
     example, the policy can contain time-of-day information for busy/quiet
     periods, and the host application can scale up/down the relevant cores
     when required. See the details of the guest application below for more
     information on setting the policy values.

   - Out-of-band monitoring of workloads via the cores' hardware event
     counters: The host application can manage power for an application in a
     virtualised OR non-virtualised environment by looking at the event
     counters of the cores and taking action based on the branch hit/miss
     ratio. See the host application's ``--core-list`` command line parameter
     below.
#. librte_power for Virtual Machines

   Using an alternate implementation of the librte_power API, requests for
   frequency changes are forwarded to the host monitor rather than to
   the ACPI cpufreq sysfs interface used on the host.

   The l3fwd-power application will use this implementation when deployed
   on a VM (see :doc:`l3_forward_power_man`).
.. _figure_vm_power_mgr_highlevel:

.. figure:: img/vm_power_mgr_highlevel.*
VM Power Management employs qemu-kvm to provide communication channels
between the host and VMs in the form of a Virtio-Serial device, which
appears as a paravirtualized serial device on a VM and can be configured
to use various backends on the host. For this example, each Virtio-Serial
endpoint on the host is configured as an AF_UNIX file socket, supporting
poll/select and epoll for event notification.
In this example, each channel endpoint on the host is monitored via
epoll for EPOLLIN events.
Each channel is specified as qemu-kvm arguments or as libvirt XML for each
VM, where each VM can have up to a maximum of 64 channels.
In this example, each DPDK lcore on a VM has exclusive access to a channel.
To enable frequency changes from within a VM, a request via the librte_power
interface is forwarded via Virtio-Serial to the host; each request contains
the vCPU and the power command (scale up/down/min/max).
The API for host and guest librte_power is consistent across environments,
with the selection of VM or Host implementation determined automatically
at runtime based on the environment.
Upon receiving a request, the host translates the vCPU to a pCPU via
the libvirt API before forwarding it to the host librte_power.
.. _figure_vm_power_mgr_vm_request_seq:

.. figure:: img/vm_power_mgr_vm_request_seq.*

   VM request to scale frequency
Performance Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~
While the Haswell microarchitecture allows for independent power control of
each core, earlier microarchitectures do not offer such fine-grained control.
When deployed on pre-Haswell platforms, greater care must be taken in
selecting which cores are assigned to a VM; for instance, a core will not
scale down until its sibling is similarly scaled.
Enhanced Intel SpeedStep® Technology must be enabled in the platform BIOS
if the power management feature of DPDK is to be used.
Otherwise, the sys file folder ``/sys/devices/system/cpu/cpu0/cpufreq`` will
not exist, and the CPU frequency-based power management cannot be used.
Consult the relevant BIOS documentation to determine how these settings
can be enabled.
Host Operating System
~~~~~~~~~~~~~~~~~~~~~
The Host OS must also have the *acpi_cpufreq* module installed; in some
cases the *intel_pstate* driver may be the default power management
environment.
To enable *acpi_cpufreq* and disable *intel_pstate*, add the following
to the grub Linux command line:

.. code-block:: console

   intel_pstate=disable
Upon rebooting, load the *acpi_cpufreq* module:

.. code-block:: console

   modprobe acpi_cpufreq
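How the kernel command line is edited is distribution-specific. As a minimal sketch, assuming a Debian/Ubuntu-style ``/etc/default/grub`` (the sample ``GRUB_CMDLINE_LINUX`` value below is hypothetical), the parameter can be appended with ``sed``:

```shell
# Sketch: append intel_pstate=disable to a GRUB_CMDLINE_LINUX line.
# The sample value is hypothetical; on a real system apply the same sed
# expression to /etc/default/grub (as root) and regenerate the grub
# configuration afterwards (e.g. with update-grub or grub2-mkconfig).
cmdline='GRUB_CMDLINE_LINUX="quiet splash"'
echo "$cmdline" | sed 's/^\(GRUB_CMDLINE_LINUX="[^"]*\)"/\1 intel_pstate=disable"/'
```

After rebooting, ``cat /proc/cmdline`` can confirm the parameter took effect.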
Hypervisor Channel Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Virtio-Serial channels are configured via libvirt XML:

.. code-block:: xml

   <name>{vm_name}</name>
   ...
   <controller type='virtio-serial' index='0'>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
   </controller>
   ...
   <channel type='unix'>
     <source mode='bind' path='/tmp/powermonitor/{vm_name}.{channel_num}'/>
     <target type='virtio' name='virtio.serial.port.poweragent.{vm_channel_num}'/>
     <address type='virtio-serial' controller='0' bus='0' port='{N}'/>
   </channel>
   ...
A single controller of type *virtio-serial* is created; up to 32 channels
can be associated with a single controller, and multiple controllers can be
specified.
The convention is to use the name of the VM in the host path *{vm_name}* and
to increment *{channel_num}* for each channel; likewise, the port value *{N}*
must be incremented for each channel.
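Since *{channel_num}* and the port value *{N}* both increment per channel, the per-channel XML is mechanical to generate. A minimal sketch (a hypothetical helper, not part of the application; it assumes channel numbers start at 0 and virtio-serial ports at 1):

```shell
# Sketch: emit <channel> stanzas for a VM named vm1 with two channels.
# Assumes channel numbering starts at 0 and ports at 1; adjust to match
# your libvirt configuration before pasting into the domain XML.
vm_name=vm1
for chan in 0 1; do
  port=$((chan + 1))
  cat <<EOF
<channel type='unix'>
  <source mode='bind' path='/tmp/powermonitor/${vm_name}.${chan}'/>
  <target type='virtio' name='virtio.serial.port.poweragent.${chan}'/>
  <address type='virtio-serial' controller='0' bus='0' port='${port}'/>
</channel>
EOF
done
```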
Each channel on the host will appear in *path*; the directory
*/tmp/powermonitor/* must first be created and given qemu permissions:

.. code-block:: console

   mkdir /tmp/powermonitor/
   chown qemu:qemu /tmp/powermonitor
Note that files and directories within /tmp are generally removed upon
rebooting the host, so the above steps may need to be carried out after
each reboot.
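On systemd-based hosts, one way to recreate the directory automatically at boot is a systemd-tmpfiles entry (a sketch; the ``powermonitor.conf`` file name is arbitrary, and the ``qemu`` user must exist on the host):

```shell
# Sketch: print a systemd-tmpfiles entry that recreates /tmp/powermonitor
# at boot with qemu:qemu ownership and mode 0770. Save the line (as root)
# to e.g. /etc/tmpfiles.d/powermonitor.conf.
printf 'd /tmp/powermonitor 0770 qemu qemu -\n'
```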
The serial device, as it appears on a VM, is configured with the *target*
element attribute *name* and must be in the form
*virtio.serial.port.poweragent.{vm_channel_num}*,
where *vm_channel_num* is typically the lcore channel to be used in DPDK VM
applications.

Each channel on a VM will be present at
*/dev/virtio-ports/virtio.serial.port.poweragent.{vm_channel_num}*.
Compiling and Running the Host Application
------------------------------------------
For information on compiling DPDK and the sample applications
see :doc:`compiling`.

The application is located in the ``vm_power_manager`` sub-directory.

To build just the ``vm_power_manager`` application:
.. code-block:: console

   export RTE_SDK=/path/to/rte_sdk
   export RTE_TARGET=build
   cd ${RTE_SDK}/examples/vm_power_manager/
   make
The application does not have any specific command line options other than
the *EAL* options:

.. code-block:: console

   ./build/vm_power_mgr [EAL options]
The application requires exactly two cores to run: one core is dedicated to
the CLI, while the other is dedicated to the channel endpoint monitor. For
example, to run on cores 0 & 1 on a system with 4 memory channels:

.. code-block:: console

   ./build/vm_power_mgr -l 0-1 -n 4
After successful initialization the user is presented with the VM Power
Manager CLI:

.. code-block:: console

   vm_power>
Virtual Machines can now be added to the VM Power Manager:

.. code-block:: console

   vm_power> add_vm {vm_name}
When a {vm_name} is specified with the *add_vm* command, a lookup is
performed with libvirt to ensure that the VM exists. {vm_name} is used as a
unique identifier to associate channels with a particular VM and for
executing operations on a VM within the CLI.
VMs do not have to be running in order to add them.

A number of commands can be issued via the CLI in relation to VMs:
Remove a Virtual Machine identified by {vm_name} from the VM Power Manager:

.. code-block:: console

   rm_vm {vm_name}
Add communication channels for the specified VM; the virtio channels must be
enabled in the VM configuration (qemu/libvirt) and the associated VM must be
active. {list} is a comma-separated list of channel numbers to add; using
the keyword 'all' will attempt to add all channels for the VM:

.. code-block:: console

   add_channels {vm_name} {list}|all
Enable or disable the communication channels in {list} (comma-separated)
for the specified VM; alternatively, the list can be replaced with the
keyword 'all'. Disabled channels will still receive packets on the host;
however, the commands they specify will be ignored. Set the status to
'enabled' to begin processing requests again:

.. code-block:: console

   set_channel_status {vm_name} {list}|all enabled|disabled
Print to the CLI the information on the specified VM. The information
lists the number of vCPUs, the pinning to pCPU(s) as a bit mask, any
communication channels associated with the VM, and the status of each
channel:

.. code-block:: console

   show_vm {vm_name}
Set the binding of the Virtual CPU on the VM with name {vm_name} to the
Physical CPU mask:

.. code-block:: console

   set_pcpu_mask {vm_name} {vcpu} {pcpu}
Set the binding of the Virtual CPU on the VM to the Physical CPU:

.. code-block:: console

   set_pcpu {vm_name} {vcpu} {pcpu}
Manual control and inspection can also be carried out in relation to CPU
frequency scaling:

Get the current frequency for each core specified in the mask:

.. code-block:: console

   show_cpu_freq_mask {mask}

Set the current frequency for the cores specified in {core_mask} by scaling
each up/down/min/max:

.. code-block:: console

   set_cpu_freq {core_mask} up|down|min|max

Get the current frequency for the specified core:

.. code-block:: console

   show_cpu_freq {core_num}

Set the current frequency for the specified core by scaling up/down/min/max:

.. code-block:: console

   set_cpu_freq {core_num} up|down|min|max
There are also some command line parameters for enabling the out-of-band
monitoring of the branch ratio on cores doing busy polling via PMDs:

.. code-block:: console

   --core-list {list of cores}

When this parameter is used, the specified list of cores will be monitored
for the ratio between branch hits and branch misses. A tightly polling PMD
thread will have a very low branch ratio, so the core frequency will be
scaled down to the minimum allowed value. When packets are received, the
code path will alter, causing the branch ratio to increase. When the ratio
goes above the ratio threshold, the core frequency will be scaled up to the
maximum allowed value.
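The scaling decision reduces to comparing a measured miss/hit ratio against the threshold. A minimal sketch of that arithmetic (the counter values are made-up illustrations, not real measurements; 0.01 matches the default ``--branch-ratio``):

```shell
# Sketch: decide scale up vs scale down from branch counter readings.
# hits/misses are hypothetical sample values; the threshold 0.01 matches
# the default --branch-ratio.
hits=1000000
misses=50000
awk -v h="$hits" -v m="$misses" -v t=0.01 \
    'BEGIN { if (m / h > t) print "scale up"; else print "scale down" }'
```

With these sample values the ratio is 0.05, above the threshold, so the sketch prints ``scale up``.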
.. code-block:: console

   --branch-ratio {ratio}

The branch ratio is a floating point number that specifies the threshold at
which to scale up or down for the given workload. The default branch ratio
is 0.01, and it will need to be adjusted for different workloads.
Compiling and Running the Guest Applications
--------------------------------------------
l3fwd-power is one sample application that can be used with vm_power_manager.

A guest CLI is also provided for validating the setup.

For both l3fwd-power and the guest CLI, the channels for the VM must be
monitored by the host application using the *add_channels* command on the
host. This typically uses the following commands in the host application:

.. code-block:: console

   vm_power> add_vm vmname
   vm_power> add_channels vmname all
   vm_power> set_channel_status vmname all enabled
   vm_power> show_vm vmname
For information on compiling DPDK and the sample applications
see :doc:`compiling`.

For compiling and running l3fwd-power, see :doc:`l3_forward_power_man`.

The application is located in the ``guest_cli`` sub-directory under
``vm_power_manager``.
To build just the ``guest_vm_power_manager`` application:

.. code-block:: console

   export RTE_SDK=/path/to/rte_sdk
   export RTE_TARGET=build
   cd ${RTE_SDK}/examples/vm_power_manager/guest_cli/
   make
The standard *EAL* command line parameters are required:

.. code-block:: console

   ./build/guest_vm_power_mgr [EAL options] -- [guest options]
The guest example uses a channel for each lcore enabled. For example,
to run on cores 0,1,2,3:

.. code-block:: console

   ./build/guest_vm_power_mgr -l 0-3
Optionally, there is a list of command line parameters should the user wish
to send a power policy down to the host application. These parameters are
as follows:

.. code-block:: console

   --vm-name {name of guest vm}

This parameter allows the user to change the Virtual Machine name passed
down to the host application via the power policy. The default is "ubuntu2".
.. code-block:: console

   --vcpu-list {list vm cores}

A comma-separated list of cores in the VM that the user wants the host
application to monitor. The list of cores in any VM starts at zero, and
these are mapped to the physical cores by the host application once the
policy is passed down.
Valid syntax includes individual cores '2,3,4', a range of cores '2-4', or
a combination of both '1,3,5-7'.
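The same list grammar is used by ``--busy-hours`` and ``--quiet-hours`` below. As a sketch, a hypothetical helper (not part of the application) shows how such a list expands into individual numbers:

```shell
# Sketch: expand a list string such as "1,3,5-7" into one number per
# line. expand_list is a hypothetical helper for illustration only.
expand_list() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done
}
expand_list "1,3,5-7"
```

This prints 1, 3, 5, 6 and 7, one per line.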
.. code-block:: console

   --busy-hours {list of busy hours}

A comma-separated list of hours within which to set the core frequency to
maximum.
Valid syntax includes individual hours '2,3,4', a range of hours '2-4', or
a combination of both '1,3,5-7'. Valid hours are 0 to 23.
.. code-block:: console

   --quiet-hours {list of quiet hours}

A comma-separated list of hours within which to set the core frequency to
minimum.
Valid syntax includes individual hours '2,3,4', a range of hours '2-4', or
a combination of both '1,3,5-7'. Valid hours are 0 to 23.
.. code-block:: console

   --policy {policy type}

The type of policy. This can be one of the following values:

- TRAFFIC - based on incoming traffic rates on the NIC.
- TIME - busy/quiet hours policy.
- BRANCH_RATIO - uses branch ratio counters to determine core busyness.

Not all parameters are needed for all policy types. For example,
BRANCH_RATIO only needs the vcpu-list parameter, not any of the hours.
After successful initialization the user is presented with the VM Power
Manager Guest CLI:

.. code-block:: console

   vm_power(guest)>
To change the frequency of an lcore, use the set_cpu_freq command,
where {core_num} is the lcore and channel whose frequency will be changed
by scaling up/down/min/max:

.. code-block:: console

   set_cpu_freq {core_num} up|down|min|max
To start the application, configure the power policy, and send it to the
host:

.. code-block:: console

   ./build/guest_vm_power_mgr -l 0-3 -n 4 -- --vm-name=ubuntu --policy=BRANCH_RATIO --vcpu-list=2-4
Once the VM Power Manager Guest CLI appears, issuing the 'send_policy now'
command will send the policy to the host:

.. code-block:: console

   vm_power(guest)> send_policy now
Once the policy is sent to the host, the host application takes over the
power monitoring of the specified cores in the policy.