..  BSD LICENSE

    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
LPM Library
===========

The DPDK LPM library component implements the Longest Prefix Match (LPM) table search method for 32-bit keys
that is typically used to find the best route match in IP forwarding applications.
LPM API Overview
----------------

The main configuration parameter for LPM component instances is the maximum number of rules to support.
An LPM prefix is represented by a pair of parameters (32-bit key, depth), with depth in the range of 1 to 32.
An LPM rule is represented by an LPM prefix and some user data associated with the prefix.
The prefix serves as the unique identifier for the LPM rule.
In this implementation, the user data is 1 byte long and is called "next hop",
in correlation with its main use of storing the ID of the next hop in a routing table entry.
The main methods exported by the LPM component are:

* Add LPM rule: The LPM rule is provided as input.
  If there is no rule with the same prefix present in the table, then the new rule is added to the LPM table.
  If a rule with the same prefix is already present in the table, the next hop of the rule is updated.
  An error is returned when there is no available rule space left.

* Delete LPM rule: The prefix of the LPM rule is provided as input.
  If a rule with the specified prefix is present in the LPM table, then it is removed.

* Lookup LPM key: The 32-bit key is provided as input.
  The algorithm selects the rule that represents the best match for the given key and returns the next hop of that rule.
  In the case that there are multiple rules present in the LPM table that match the given key,
  the algorithm picks the rule with the highest depth as the best match rule,
  which means that the rule has the highest number of most significant bits matching between the input key and the rule key.
Implementation Details
----------------------

The current implementation uses a variation of the DIR-24-8 algorithm that trades memory usage for improved LPM lookup speed.
The algorithm allows the lookup operation to be performed with typically a single memory read access.
In the statistically rare case when the best match rule has a depth greater than 24,
the lookup operation requires two memory read accesses.
Therefore, the performance of the LPM lookup operation is greatly influenced by
whether the specific memory location is present in the processor cache or not.
The main data structure is built using the following elements:

* A table with 2^24 entries.

* A number of tables (RTE_LPM_TBL8_NUM_GROUPS) with 2^8 entries.

The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the second set of tables, called tbl8s, is indexed using the last 8 bits of the IP address.
This means that, depending on the outcome of trying to match the IP address of an incoming packet to the rule stored in the tbl24,
we might need to continue the lookup process in the second level.

Since every entry of the tbl24 can potentially point to a tbl8, ideally we would have 2^24 tbl8s,
which would be the same as having a single table with 2^32 entries.
This is not feasible due to resource restrictions.
Instead, this approach takes advantage of the fact that rules longer than 24 bits are very rare.
By splitting the process into two different tables/levels and limiting the number of tbl8s,
we can greatly reduce memory consumption while maintaining a very good lookup speed (one memory access most of the time).
.. figure:: img/tbl24_tbl8.*

   Table split into different levels
An entry in tbl24 contains the following fields:

* next hop / index to the tbl8

* valid flag

* external entry flag

* depth of the rule (length)

The first field can either contain a number indicating the tbl8 in which the lookup process should continue
or the next hop itself if the longest prefix match has already been found.
The two flags are used to determine whether the entry is valid or not and
whether the search process has finished or not, respectively.
The depth or length of the rule is the number of bits of the rule that is stored in a specific entry.
An entry in a tbl8 contains the following fields:

* next hop

* valid flag

* valid group flag

* depth

Next hop and depth contain the same information as in the tbl24.
The two flags show whether the entry and the table are valid, respectively.
The other main data structure is a table containing the main information about the rules (IP address and next hop).
This is a higher level table, used for two purposes:

* Checking whether a rule already exists or not, prior to addition or deletion,
  without having to actually perform a lookup.

* When deleting, checking whether there is a rule containing the one that is to be deleted.
  This is important, since the main data structure will have to be updated accordingly.
Addition
~~~~~~~~

When adding a rule, there are different possibilities.
If the rule's depth is exactly 24 bits, then:

* Use the rule (IP address) as an index to the tbl24.

* If the entry is invalid (i.e. it doesn't already contain a rule) then set its next hop to the rule's next hop,
  the valid flag to 1 (meaning this entry is in use),
  and the external entry flag to 0
  (meaning the lookup process ends at this point, since this is the longest prefix that matches).
If the rule's depth is exactly 32 bits, then:

* Use the first 24 bits of the rule as an index to the tbl24.

* If the entry is invalid (i.e. it doesn't already contain a rule) then look for a free tbl8,
  set the first field to the index of that tbl8,
  the valid flag to 1 (meaning this entry is in use), and the external entry flag to 1
  (meaning the lookup process must continue since the rule hasn't been explored completely).
If the rule's depth is any other value, prefix expansion must be performed.
This means the rule is copied to all the entries (as long as they are not in use) which would also cause a match.

As a simple example, let's assume the depth is 20 bits.
This means that there are 2^(24 - 20) = 16 different combinations of the first 24 bits of an IP address that
will cause a match.
Hence, in this case, we copy the exact same entry to every position indexed by one of these combinations.

By doing this we ensure that during the lookup process, if a rule matching the IP address exists,
it is found in either one or two memory accesses,
depending on whether we need to move to the next table or not.
Prefix expansion is one of the keys of this algorithm,
since it improves the speed dramatically by adding redundancy.
Lookup
~~~~~~

The lookup process is much simpler and quicker. In this case:

* Use the first 24 bits of the IP address as an index to the tbl24.
  If the entry is not in use, then it means we don't have a rule matching this IP.
  If it is valid and the external entry flag is set to 0, then the next hop is returned.

* If it is valid and the external entry flag is set to 1,
  then we use the tbl8 index to find out the tbl8 to be checked,
  and the last 8 bits of the IP address as an index to this table.
  Similarly, if the entry is not in use, then we don't have a rule matching this IP address.
  If it is valid then the next hop is returned.
Limitations in the Number of Rules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are different things that limit the number of rules that can be added.
The first one is the maximum number of rules, which is a parameter passed through the API.
Once this number is reached,
it is not possible to add any more rules to the routing table unless one or more are removed.

The second reason is an intrinsic limitation of the algorithm.
As explained before, to avoid high memory consumption, the number of tbl8s is limited at compile time
(this value is 256 by default).
If we exhaust the tbl8s, we won't be able to add any more rules.
How many of them are necessary for a specific routing table is hard to determine in advance.
A tbl8 is consumed whenever we have a new rule with a depth greater than 24,
and the first 24 bits of this rule are not the same as the first 24 bits of a rule previously added.
If they are, then the new rule will share the same tbl8 as the previous one,
since the only difference between the two rules is within the last byte.

With the default value of 256, we can have up to 256 rules longer than 24 bits that differ in their first three bytes.
Since routes longer than 24 bits are unlikely, this shouldn't be a problem in most setups.
Even if it is, however, the number of tbl8s can be modified.
Use Case: IPv4 Forwarding
~~~~~~~~~~~~~~~~~~~~~~~~~

The LPM algorithm is used to implement the Classless Inter-Domain Routing (CIDR) strategy used by routers implementing IPv4 forwarding.
References
----------

* RFC 1519 Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy,
  `http://www.ietf.org/rfc/rfc1519 <http://www.ietf.org/rfc/rfc1519>`_

* Pankaj Gupta, Algorithms for Routing Lookups and Packet Classification, PhD Thesis, Stanford University,
  2000 (`http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf <http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf>`_)