echo 50000 > pkt_io/cpu.cfs_quota_us
-.. |linuxapp_launch| image:: img/linuxapp_launch.svg
+.. |linuxapp_launch| image:: img/linuxapp_launch.*
|inter_vm_comms|
-.. |perf_benchmark| image:: img/perf_benchmark.png
+.. |perf_benchmark| image:: img/perf_benchmark.*
-.. |single_port_nic| image:: img/single_port_nic.png
+.. |single_port_nic| image:: img/single_port_nic.*
-.. |inter_vm_comms| image:: img/inter_vm_comms.png
+.. |inter_vm_comms| image:: img/inter_vm_comms.*
-.. |fast_pkt_proc| image:: img/fast_pkt_proc.png
+.. |fast_pkt_proc| image:: img/fast_pkt_proc.*
packet generator->Virtio in guest VM1->switching backend->Virtio in guest VM2->switching backend->wire
-.. |grant_table| image:: img/grant_table.png
+.. |grant_table| image:: img/grant_table.*
-.. |grant_refs| image:: img/grant_refs.png
+.. |grant_refs| image:: img/grant_refs.*
-.. |dpdk_xen_pkt_switch| image:: img/dpdk_xen_pkt_switch.png
+.. |dpdk_xen_pkt_switch| image:: img/dpdk_xen_pkt_switch.*
it is not enough to simply shut the application down.
The virtual machine must also be shut down (if not, it will hold onto outdated host data).
-.. |ivshmem| image:: img/ivshmem.png
+.. |ivshmem| image:: img/ivshmem.*
When working with legacy virtio in the guest, it is better to turn off unsupported offload features using ``ethtool -K``.
Otherwise, there may be problems such as spurious L4 checksum errors.
-.. |kni_traffic_flow| image:: img/kni_traffic_flow.png
+.. |kni_traffic_flow| image:: img/kni_traffic_flow.*
-.. |vhost_net_arch| image:: img/vhost_net_arch.png
+.. |vhost_net_arch| image:: img/vhost_net_arch.*
-.. |pkt_flow_kni| image:: img/pkt_flow_kni.png
+.. |pkt_flow_kni| image:: img/pkt_flow_kni.*
-.. |kernel_nic_intf| image:: img/kernel_nic_intf.png
+.. |kernel_nic_intf| image:: img/kernel_nic_intf.*
then calling rte_eth_rx_queue_setup() / tx_queue_setup() for each of those queues and
finally calling rte_eth_dev_start() to allow transmission and reception of packets to begin.
-.. |forward_stats| image:: img/forward_stats.png
+.. |forward_stats| image:: img/forward_stats.*
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=2,slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained
-.. |bond-overview| image:: img/bond-overview.svg
-.. |bond-mode-0| image:: img/bond-mode-0.svg
-.. |bond-mode-1| image:: img/bond-mode-1.svg
-.. |bond-mode-2| image:: img/bond-mode-2.svg
-.. |bond-mode-3| image:: img/bond-mode-3.svg
-.. |bond-mode-4| image:: img/bond-mode-4.svg
-.. |bond-mode-5| image:: img/bond-mode-5.svg
+.. |bond-overview| image:: img/bond-overview.*
+.. |bond-mode-0| image:: img/bond-mode-0.*
+.. |bond-mode-1| image:: img/bond-mode-1.*
+.. |bond-mode-2| image:: img/bond-mode-2.*
+.. |bond-mode-3| image:: img/bond-mode-3.*
+.. |bond-mode-4| image:: img/bond-mode-4.*
+.. |bond-mode-5| image:: img/bond-mode-5.*
The LPM algorithm implements the Classless Inter-Domain Routing (CIDR) strategy used by routers performing IP forwarding.
-.. |tbl24_tbl8_tbl8| image:: img/tbl24_tbl8_tbl8.png
+.. |tbl24_tbl8_tbl8| image:: img/tbl24_tbl8_tbl8.*
* Pankaj Gupta, Algorithms for Routing Lookups and Packet Classification, PhD Thesis, Stanford University,
  2000 (`http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf <http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf>`_ )
-.. |tbl24_tbl8| image:: img/tbl24_tbl8.png
+.. |tbl24_tbl8| image:: img/tbl24_tbl8.*
This means that we can never have two free memory blocks adjacent to one another;
they are always merged into a single block.
-.. |malloc_heap| image:: img/malloc_heap.png
+.. |malloc_heap| image:: img/malloc_heap.*
All networking applications should use mbufs to transport network packets.
-.. |mbuf1| image:: img/mbuf1.svg
+.. |mbuf1| image:: img/mbuf1.*
-.. |mbuf2| image:: img/mbuf2.svg
+.. |mbuf2| image:: img/mbuf2.*
* Any application that needs to allocate fixed-sized objects in the data plane and that will be continuously utilized by the system.
-.. |memory-management| image:: img/memory-management.svg
+.. |memory-management| image:: img/memory-management.*
-.. |memory-management2| image:: img/memory-management2.svg
+.. |memory-management2| image:: img/memory-management2.*
-.. |mempool| image:: img/mempool.svg
+.. |mempool| image:: img/mempool.*
If the number of required DPDK processes exceeds the number of available HPET comparators,
the TSC (which is the default timer in this release) must be used as a time source across all processes instead of the HPET.
-.. |multi_process_memory| image:: img/multi_process_memory.svg
+.. |multi_process_memory| image:: img/multi_process_memory.*
It is based on code from the FreeBSD* IP stack and contains protocol numbers (for use in IP headers),
IP-related macros, IPv4/IPv6 header structures and TCP, UDP and SCTP header structures.
-.. |architecture-overview| image:: img/architecture-overview.svg
+.. |architecture-overview| image:: img/architecture-overview.*
it is possible to have a worker stop processing packets by calling ``rte_distributor_return_pkt()`` to indicate that
it has finished the current packet and does not want a new one.
-.. |packet_distributor1| image:: img/packet_distributor1.png
+.. |packet_distributor1| image:: img/packet_distributor1.*
-.. |packet_distributor2| image:: img/packet_distributor2.png
+.. |packet_distributor2| image:: img/packet_distributor2.*
The selection between these implementations can be made at build time or at run time (recommended), based on which accelerators are present in the system,
with no application changes required.
-.. |figure33| image:: img/figure33.png
+.. |figure33| image:: img/figure33.*
-.. |figure35| image:: img/figure35.png
+.. |figure35| image:: img/figure35.*
-.. |figure39| image:: img/figure39.png
+.. |figure39| image:: img/figure39.*
-.. |figure34| image:: img/figure34.png
+.. |figure34| image:: img/figure34.*
-.. |figure32| image:: img/figure32.png
+.. |figure32| image:: img/figure32.*
-.. |figure37| image:: img/figure37.png
+.. |figure37| image:: img/figure37.*
-.. |figure38| image:: img/figure38.png
+.. |figure38| image:: img/figure38.*
IXIA packet generator-> Guest VM 82599 VF port1 rx burst-> Guest VM virtio port 0 tx burst-> tap -> Linux Bridge->82599 PF-> IXIA packet generator
-.. |host_vm_comms| image:: img/host_vm_comms.png
+.. |host_vm_comms| image:: img/host_vm_comms.*
-.. |console| image:: img/console.png
+.. |console| image:: img/console.*
-.. |host_vm_comms_qemu| image:: img/host_vm_comms_qemu.png
+.. |host_vm_comms_qemu| image:: img/host_vm_comms_qemu.*
Packet generator -> 82599 VF -> Guest VM 82599 port 0 rx burst -> Guest VM VMXNET3 port 1 tx burst -> VMXNET3
device -> VMware ESXi vSwitch -> VMXNET3 device -> Guest VM VMXNET3 port 0 rx burst -> Guest VM 82599 VF port 1 tx burst -> 82599 VF -> Packet generator
-.. |vm_vm_comms| image:: img/vm_vm_comms.png
+.. |vm_vm_comms| image:: img/vm_vm_comms.*
-.. |vmxnet3_int| image:: img/vmxnet3_int.png
+.. |vmxnet3_int| image:: img/vmxnet3_int.*
-.. |vswitch_vm| image:: img/vswitch_vm.png
+.. |vswitch_vm| image:: img/vswitch_vm.*
When the output color is not red, a number of tokens equal to the length of the IP packet are
subtracted from the C bucket, from the E or P bucket, or from both, depending on the algorithm and the output color of the packet.
-.. |flow_tru_droppper| image:: img/flow_tru_droppper.png
+.. |flow_tru_droppper| image:: img/flow_tru_droppper.*
-.. |drop_probability_graph| image:: img/drop_probability_graph.png
+.. |drop_probability_graph| image:: img/drop_probability_graph.*
-.. |drop_probability_eq3| image:: img/drop_probability_eq3.png
+.. |drop_probability_eq3| image:: img/drop_probability_eq3.*
-.. |eq2_expression| image:: img/eq2_expression.png
+.. |eq2_expression| image:: img/eq2_expression.*
-.. |drop_probability_eq4| image:: img/drop_probability_eq4.png
+.. |drop_probability_eq4| image:: img/drop_probability_eq4.*
-.. |pkt_drop_probability| image:: img/pkt_drop_probability.png
+.. |pkt_drop_probability| image:: img/pkt_drop_probability.*
-.. |pkt_proc_pipeline_qos| image:: img/pkt_proc_pipeline_qos.png
+.. |pkt_proc_pipeline_qos| image:: img/pkt_proc_pipeline_qos.*
-.. |ex_data_flow_tru_dropper| image:: img/ex_data_flow_tru_dropper.png
+.. |ex_data_flow_tru_dropper| image:: img/ex_data_flow_tru_dropper.*
-.. |ewma_filter_eq_1| image:: img/ewma_filter_eq_1.png
+.. |ewma_filter_eq_1| image:: img/ewma_filter_eq_1.*
-.. |ewma_filter_eq_2| image:: img/ewma_filter_eq_2.png
+.. |ewma_filter_eq_2| image:: img/ewma_filter_eq_2.*
-.. |data_struct_per_port| image:: img/data_struct_per_port.png
+.. |data_struct_per_port| image:: img/data_struct_per_port.*
-.. |prefetch_pipeline| image:: img/prefetch_pipeline.png
+.. |prefetch_pipeline| image:: img/prefetch_pipeline.*
-.. |pipe_prefetch_sm| image:: img/pipe_prefetch_sm.png
+.. |pipe_prefetch_sm| image:: img/pipe_prefetch_sm.*
-.. |blk_diag_dropper| image:: img/blk_diag_dropper.png
+.. |blk_diag_dropper| image:: img/blk_diag_dropper.*
-.. |m_definition| image:: img/m_definition.png
+.. |m_definition| image:: img/m_definition.*
-.. |eq2_factor| image:: img/eq2_factor.png
+.. |eq2_factor| image:: img/eq2_factor.*
-.. |sched_hier_per_port| image:: img/sched_hier_per_port.png
+.. |sched_hier_per_port| image:: img/sched_hier_per_port.*
-.. |hier_sched_blk| image:: img/hier_sched_blk.png
+.. |hier_sched_blk| image:: img/hier_sched_blk.*
* `Linux Lockless Ring Buffer Design <http://lwn.net/Articles/340400/>`_
-.. |ring1| image:: img/ring1.svg
+.. |ring1| image:: img/ring1.*
-.. |ring-enqueue1| image:: img/ring-enqueue1.svg
+.. |ring-enqueue1| image:: img/ring-enqueue1.*
-.. |ring-enqueue2| image:: img/ring-enqueue2.svg
+.. |ring-enqueue2| image:: img/ring-enqueue2.*
-.. |ring-enqueue3| image:: img/ring-enqueue3.svg
+.. |ring-enqueue3| image:: img/ring-enqueue3.*
-.. |ring-dequeue1| image:: img/ring-dequeue1.svg
+.. |ring-dequeue1| image:: img/ring-dequeue1.*
-.. |ring-dequeue2| image:: img/ring-dequeue2.svg
+.. |ring-dequeue2| image:: img/ring-dequeue2.*
-.. |ring-dequeue3| image:: img/ring-dequeue3.svg
+.. |ring-dequeue3| image:: img/ring-dequeue3.*
-.. |ring-mp-enqueue1| image:: img/ring-mp-enqueue1.svg
+.. |ring-mp-enqueue1| image:: img/ring-mp-enqueue1.*
-.. |ring-mp-enqueue2| image:: img/ring-mp-enqueue2.svg
+.. |ring-mp-enqueue2| image:: img/ring-mp-enqueue2.*
-.. |ring-mp-enqueue3| image:: img/ring-mp-enqueue3.svg
+.. |ring-mp-enqueue3| image:: img/ring-mp-enqueue3.*
-.. |ring-mp-enqueue4| image:: img/ring-mp-enqueue4.svg
+.. |ring-mp-enqueue4| image:: img/ring-mp-enqueue4.*
-.. |ring-mp-enqueue5| image:: img/ring-mp-enqueue5.svg
+.. |ring-mp-enqueue5| image:: img/ring-mp-enqueue5.*
-.. |ring-modulo1| image:: img/ring-modulo1.svg
+.. |ring-modulo1| image:: img/ring-modulo1.*
-.. |ring-modulo2| image:: img/ring-modulo2.svg
+.. |ring-modulo2| image:: img/ring-modulo2.*
TX queue initialization is done in the same way as it is done in the L2 Forwarding
Sample Application. See Section 9.4.5, "TX Queue Initialization".
-.. |dist_perf| image:: img/dist_perf.svg
+.. |dist_perf| image:: img/dist_perf.*
-.. |dist_app| image:: img/dist_app.svg
+.. |dist_app| image:: img/dist_app.*
brctl delbr br0
openvpn --rmtun --dev tap_dpdk_00
-.. |exception_path_example| image:: img/exception_path_example.svg
+.. |exception_path_example| image:: img/exception_path_example.*
Refer to the *DPDK Test Report* for more examples of traffic generator setup and the application startup command lines.
If no errors are generated in response to the startup commands, the application is running correctly.
-.. |quickassist_block_diagram| image:: img/quickassist_block_diagram.png
+.. |quickassist_block_diagram| image:: img/quickassist_block_diagram.*
**Figure 2. Kernel NIC Application Packet Flow**
-.. image3_png has been renamed to kernel_nic.png
+.. image3_png has been renamed to kernel_nic.*
|kernel_nic|
return ret;
}
-.. |kernel_nic| image:: img/kernel_nic.png
+.. |kernel_nic| image:: img/kernel_nic.*
prev_tsc = cur_tsc;
}
-.. |l2_fwd_benchmark_setup| image:: img/l2_fwd_benchmark_setup.svg
+.. |l2_fwd_benchmark_setup| image:: img/l2_fwd_benchmark_setup.*
-.. |l2_fwd_virtenv_benchmark_setup| image:: img/l2_fwd_virtenv_benchmark_setup.png
+.. |l2_fwd_virtenv_benchmark_setup| image:: img/l2_fwd_virtenv_benchmark_setup.*
It is important to note that the application creates an independent copy of each database for each CPU socket
involved in the task, to reduce the cost of remote memory accesses.
-.. |ipv4_acl_rule| image:: img/ipv4_acl_rule.png
+.. |ipv4_acl_rule| image:: img/ipv4_acl_rule.*
-.. |example_rules| image:: img/example_rules.png
+.. |example_rules| image:: img/example_rules.*
then it has to be transmitted out by a NIC connected to socket C.
The performance price for crossing the CPU socket boundary is paid twice for this packet.
-.. |load_bal_app_arch| image:: img/load_bal_app_arch.png
+.. |load_bal_app_arch| image:: img/load_bal_app_arch.*
return 0;
}
-.. |sym_multi_proc_app| image:: img/sym_multi_proc_app.png
+.. |sym_multi_proc_app| image:: img/sym_multi_proc_app.*
-.. |client_svr_sym_multi_proc_app| image:: img/client_svr_sym_multi_proc_app.png
+.. |client_svr_sym_multi_proc_app| image:: img/client_svr_sym_multi_proc_app.*
-.. |master_slave_proc| image:: img/master_slave_proc.png
+.. |master_slave_proc| image:: img/master_slave_proc.*
-.. |slave_proc_recov| image:: img/slave_proc_recov.png
+.. |slave_proc_recov| image:: img/slave_proc_recov.*
Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.
-.. |qos_sched_app_arch| image:: img/qos_sched_app_arch.png
+.. |qos_sched_app_arch| image:: img/qos_sched_app_arch.*
low_watermark = (unsigned int *) qw_memzone->addr + sizeof(int);
}
-.. |pipeline_overview| image:: img/pipeline_overview.png
+.. |pipeline_overview| image:: img/pipeline_overview.*
-.. |ring_pipeline_perf_setup| image:: img/ring_pipeline_perf_setup.png
+.. |ring_pipeline_perf_setup| image:: img/ring_pipeline_perf_setup.*
-.. |threads_pipelines| image:: img/threads_pipelines.png
+.. |threads_pipelines| image:: img/threads_pipelines.*
* source TCP port fixed to 0
-.. |test_pipeline_app| image:: img/test_pipeline_app.png
+.. |test_pipeline_app| image:: img/test_pipeline_app.*
Any packet received on the NIC with these values is placed on the device's receive queue.
When a virtio-net device transmits packets, the VLAN tag is added to the packet by the DPDK vhost sample code.
-.. |vhost_net_arch| image:: img/vhost_net_arch.png
+.. |vhost_net_arch| image:: img/vhost_net_arch.*
-.. |qemu_virtio_net| image:: img/qemu_virtio_net.png
+.. |qemu_virtio_net| image:: img/qemu_virtio_net.*
-.. |tx_dpdk_testpmd| image:: img/tx_dpdk_testpmd.png
+.. |tx_dpdk_testpmd| image:: img/tx_dpdk_testpmd.*
-.. |vhost_net_sample_app| image:: img/vhost_net_sample_app.png
+.. |vhost_net_sample_app| image:: img/vhost_net_sample_app.*
-.. |virtio_linux_vhost| image:: img/virtio_linux_vhost.png
+.. |virtio_linux_vhost| image:: img/virtio_linux_vhost.*
set_cpu_freq {core_num} up|down|min|max
-.. |vm_power_mgr_highlevel| image:: img/vm_power_mgr_highlevel.svg
+.. |vm_power_mgr_highlevel| image:: img/vm_power_mgr_highlevel.*
-.. |vm_power_mgr_vm_request_seq| image:: img/vm_power_mgr_vm_request_seq.svg
+.. |vm_power_mgr_vm_request_seq| image:: img/vm_power_mgr_vm_request_seq.*
Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is running,
rather than the terminal from which the HUP signal was sent.
-.. |vmdq_dcb_example| image:: img/vmdq_dcb_example.svg
+.. |vmdq_dcb_example| image:: img/vmdq_dcb_example.*