Summary of the invention
The technical problem to be solved by the present invention is how, under existing memory conditions, to multiply the number of route entries a router can store, and to provide a routing-table information storage method and a routing device implementing the method.
To solve the above technical problem, the technical scheme adopted by the present invention is a routing-table information storage method comprising the following steps:
dividing the memory into a working area and a compressed-block storage area;
dividing the routing-table information into a number of logical blocks, and recording, for each routing-table node in the routing table (one routing-table node corresponds to one route entry), its address as a logical address (comprising a logical block ID and an intra-block offset);
dividing the working area into a number of work blocks; according to an allocation strategy, keeping some of the logical blocks in the work blocks of the working area and writing a mapping table of work-block IDs to logical-block IDs, while compressing the remaining logical blocks into compressed blocks stored in the compressed-block storage area; the logical block containing the routing-table node currently being accessed must be kept in a work block.
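As a concrete illustration of the addressing scheme above (a minimal sketch, not the patented implementation; the block size and mapping-table contents are hypothetical), a flat logical address splits into a logical block ID and an intra-block offset by integer division, and a simple table maps work-block IDs to the logical blocks they currently hold:

```python
BLOCK_SIZE = 4096  # hypothetical size shared by logical blocks and work blocks

def split_logical_address(addr):
    """Split a flat logical address into (logical block ID, intra-block offset)."""
    return addr // BLOCK_SIZE, addr % BLOCK_SIZE

# Mapping table: work-block ID -> logical-block ID (illustrative contents)
mapping = {0: 7, 1: 3}

# Address 28963 falls in logical block 7 at offset 291,
# and block 7 currently resides in work block 0.
block_id, offset = split_logical_address(28963)
assert (block_id, offset) == (7, 291)
assert 7 in mapping.values()
```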
A router may hold multiple routing tables whose routing information is largely similar, so the information itself is highly redundant. For example, the same OSPF route may exist simultaneously in the OSPF link-state database, the OSPF routing table, the redistribution routing table, the IP core routing table, the forwarding table, and several shadow tables. Moreover, in practice the routes in a routing table mostly fall within one or a few subnets, and the routing information involves only a limited number of next hops and interfaces, so the information is very regular. The present invention exploits this high redundancy: the memory is divided into a working area and a compressed-block storage area; the working area holds only the logical blocks currently being accessed, while all other logical blocks are compressed and stored in the compressed-block storage area. This saves a great deal of memory: when storing routing-table information with the same number of route entries, the method of the invention occupies only a fraction, down to about one tenth, of the memory required by existing routing-table storage methods.
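The redundancy claim can be checked with any general-purpose compressor. The sketch below (using zlib as a stand-in compressor; the route entries are synthetic) shows that highly regular routing data compresses to a small fraction of its original size:

```python
import zlib

# Synthetic, highly regular route entries: a few subnets and next hops,
# mimicking the regularity the text describes in real routing tables.
entries = [f"10.0.{i % 8}.0/24 via 192.168.1.{i % 4} dev eth0" for i in range(1024)]
raw = "\n".join(entries).encode()
packed = zlib.compress(raw, level=9)

# For data this regular the compressed size is far below the original.
assert len(packed) * 5 < len(raw)
```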
Further, for convenience of mapping, the routing-table information is divided into logical blocks of equal size, and the working area is divided into work blocks of the same size as the logical blocks. With work blocks and logical blocks of identical size, addressing a routing-table node within a work block is simpler.
Apart from keeping the logical blocks currently in use in the working area, the allocation strategy could place the remaining logical blocks in the working area at random, but this may cause logical blocks in the working area to be switched frequently. If the working area has a vacant work block but the logical block containing the routing-table node to be accessed is not in the working area, the corresponding compressed block is decompressed and switched into the vacant work block, and the mapping table is updated. If the working area has no vacant work block and the required logical block is not in the working area, a work block must first be selected according to the allocation strategy; the logical block in that work block is compressed and saved to the compressed-block storage area, then the compressed block containing the required routing-table node is decompressed and switched into that work block, and the mapping table is revised accordingly.

Further, to minimize exchanges between work blocks and compressed blocks, accesses to the work blocks are counted, the access pattern of the logical blocks held in the working area is derived from these statistics, and the allocation strategy is executed on that basis. The allocation strategy is to keep in the working area the logical blocks currently in use together with those most recently used; or together with those most frequently used; or together with those both recently and frequently used. Logical blocks with low usage frequency and/or distant usage records are selected to enter the compressed-block storage area. Thus the logical blocks containing rarely accessed routing-table nodes, having low activity, are essentially kept in the compressed-block storage area, while the logical blocks containing frequently accessed routing-table nodes are kept in the working area. This guarantees the access efficiency of routing-table nodes without switching logical blocks in the working area too frequently.
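The eviction side of such an allocation strategy can be sketched as follows (a simplified model, assuming a least-recently-used policy with per-block access counts; the real strategy may combine recency and frequency as described above):

```python
from collections import OrderedDict

class WorkingArea:
    """Sketch of an allocation strategy that keeps recently used logical
    blocks resident and picks the least recently used block as the
    eviction victim (victim selection only; no real compression here)."""

    def __init__(self, num_work_blocks):
        self.capacity = num_work_blocks
        self.resident = OrderedDict()  # logical block ID -> access count

    def access(self, block_id):
        """Record an access; return the ID of an evicted block, or None."""
        evicted = None
        if block_id in self.resident:
            self.resident[block_id] += 1
            self.resident.move_to_end(block_id)  # mark most recently used
        else:
            if len(self.resident) >= self.capacity:
                # No vacant work block: evict the least recently used one.
                evicted, _ = self.resident.popitem(last=False)
            self.resident[block_id] = 1
        return evicted

wa = WorkingArea(2)
assert wa.access(1) is None   # vacant work block available
assert wa.access(2) is None
assert wa.access(1) is None   # block 1 becomes most recently used
assert wa.access(3) == 2      # block 2 is least recently used: swapped out
```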
A routing device implementing the above method is also provided, comprising a memory division module, a logical-block division module, a working-area division module, and a memory allocation module.
The memory division module is configured to divide the memory into a working area and a compressed-block storage area;
the logical-block division module is configured to divide the routing-table information into a number of logical blocks, and to record the logical block ID and intra-block offset corresponding to each routing-table node in the routing table;
the working-area division module is configured to divide the working area into a number of work blocks;
the memory allocation module is configured to, according to an allocation strategy, keep some of the logical blocks in the work blocks of the working area and write a mapping table of work-block IDs to logical-block IDs, and to compress the remaining logical blocks into compressed blocks stored in the compressed-block storage area; the logical block containing the routing-table node currently being accessed must be kept in a work block.
Further, the logical-block division module is also configured to divide the routing-table information into logical blocks of equal size, and the working-area division module is configured to divide the working area into work blocks of the same size as the logical blocks.
Further, the memory allocation module is also configured to count accesses to the work blocks and to derive from these statistics the access pattern of the logical blocks held in the working area. The allocation strategy is to keep in the working area the logical blocks currently in use together with those most recently used; or together with those most frequently used; or together with those both recently and frequently used.
The beneficial effect of the invention is that, with the memory space unchanged, the routing-table storage capacity is greatly increased, solving the memory bottleneck caused by today's huge routing-table information.
Embodiment
The routing device allocates the storage space for routing-table information as a logical space, compresses that logical space in units of blocks, and stores it in a dedicated memory area. As shown in Figure 1, the routing device comprises a memory division module, a logical-block division module, a working-area division module, and a memory allocation module.
Internal memory divides module and first in internal memory, marks two pieces of continuous print spaces respectively as service area and compression blocks memory block according to the plan of operation of reality, and the large I of service area and compression blocks memory block is according to configuration adjustment.As shown in Figure 2, service area divides module and memory space is divided into fixed-size work block in the memory pool of service area, and the block ID specifying each work block is the serial number increased progressively with block address, be namely respectively (service area initial address/block size) ..., ((service area end address/block size)-1); Logical block divides module and routing table information is divided into some logical blocks, and record the logical space (logical block ID and block bias internal address) that in routing table, each routing table node is corresponding, the interface distributing routing table logical space and discharge is provided.As shown in Figure 3, logical block size is consistent with service area block size.Logical block ID is (logical address initial address/block size) ... 
((addressable maximum logical address/block size)-1).Memory allocation module by current do not have use logical block compress after, be kept at the form of compression blocks in the memory pool of compression blocks memory block.Size due to logical block is fixing, and the size of compression blocks changes with the change of the content in logical block, but the equal and opposite in direction of size after each compression blocks decompress(ion) and work block, in order to the management of the block after compressing, set up the memory pool of a compression blocks memory block, and realize special interface and distribute and releasing memory in this memory pool, the memory size that can distribute can be 16,32,64,128,256,512,1024,2048 etc.Elongated compression blocks stores as shown in Figure 4: the global variable using two, i.e. memory pool first address bFreeBuf and internal memory bucket bQhead [].The managerial structure of additional allocation structbType (the managerial structure body to the carrying out of allocation block manages) when distributing compression blocks internal memory; When compression blocks internal memory will be discharged, internal memory is not discharged in the memory pool of compression blocks memory block, but the structbType (bp namely in Fig. 4) distributed is mounted in chained list corresponding to bQhead [].Like this when to need the internal memory distributing this size (256 bytes as shown in Figure 4) next time, just directly can fetch use from this chained list.
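The bucketed free lists for variable-length compressed blocks can be sketched as follows (a simplified model of the bFreeBuf/bQhead[] scheme; the real structbType layout is not specified in the text, so the representation below is an assumption):

```python
# Size classes for compressed-block buffers, as listed in the description.
SIZES = [16, 32, 64, 128, 256, 512, 1024, 2048]

# Models bQhead[]: one free list of released buffers per size class.
free_lists = {s: [] for s in SIZES}

def round_up(n):
    """Smallest size class that fits a request of n bytes."""
    for s in SIZES:
        if n <= s:
            return s
    raise ValueError("request too large for any size class")

def alloc(n):
    """Reuse a buffer from the matching free list, or carve a new one
    (bytearray stands in for carving fresh memory from bFreeBuf)."""
    s = round_up(n)
    if free_lists[s]:
        return free_lists[s].pop()
    return bytearray(s)

def release(buf):
    """Released buffers go onto their size-class list, not back to the pool."""
    free_lists[len(buf)].append(buf)

b = alloc(200)          # a 200-byte request rounds up to the 256-byte class
assert len(b) == 256
release(b)
assert alloc(256) is b  # reused directly from the free list, as in Figure 4
```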
When allocating space for a new routing-table node: if the current block in the working area has free space remaining, space is allocated for the route entry in that block according to the block's logical block ID and the intra-block offset. If the current block in the working area has insufficient space, it is first checked whether the working area has an unused work block; if so, an unused logical address space is allocated for that block, and a mapping entry of work-block ID and logical-block ID is added to the mapping table. If there is no unused work block, the exchange algorithm is used to swap out one block first; an unused logical address space is then allocated for that block, the mapping entry is added to the mapping table, and space is allocated for the route entry in the block according to its logical block ID and intra-block offset. The detailed flow is shown in Figure 5.
Accessing a routing-table node: the logical block ID corresponding to the current work block is first held in a register; when the logical block ID to be accessed matches the one in the register, the node is accessed directly via its offset. If they do not match, the mapping table is looked up using the required logical block ID; if it is found, the required logical block ID is saved into the register and the node is accessed in the corresponding work block. If it is not found in the mapping table, the corresponding compressed block is decompressed from the compressed-block storage area and switched into the working area, and a mapping entry of this logical block ID and the work-block ID is added to the mapping table.
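The access path above can be sketched as follows (a simplified model; the one-entry "register" cache and the swap-in callback are illustrative stand-ins, and decompression details are omitted):

```python
class NodeAccessor:
    """Sketch of routing-table node access with a one-entry 'register'
    cache of the current logical block ID (details simplified)."""

    def __init__(self, mapping, work_blocks, swap_in):
        self.mapping = mapping          # logical block ID -> work block ID
        self.work_blocks = work_blocks  # work block ID -> bytearray
        self.swap_in = swap_in          # callback: decompress into working area
        self.reg_block_id = None        # the 'register'
        self.reg_work_id = None

    def read(self, block_id, offset):
        if block_id != self.reg_block_id:       # register miss
            if block_id not in self.mapping:    # mapping-table miss
                self.swap_in(block_id)          # swap in from compressed area
            self.reg_block_id = block_id
            self.reg_work_id = self.mapping[block_id]
        return self.work_blocks[self.reg_work_id][offset]

# Illustrative setup: logical block 5 resident in work block 0.
mapping = {5: 0}
work_blocks = {0: bytearray([9] * 16)}

def fake_swap_in(bid):  # stand-in for decompressing a compressed block
    mapping[bid] = 1
    work_blocks[1] = bytearray([7] * 16)

acc = NodeAccessor(mapping, work_blocks, fake_swap_in)
assert acc.read(5, 3) == 9  # register miss, mapping-table hit
assert acc.read(6, 0) == 7  # mapping-table miss: swapped in first
```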
The exchange between a work block and a compressed block proceeds as follows: if the working area still has a vacant work block, the required compressed block is simply decompressed into the working area. If the working area has no vacant work block, an LRU (least recently used) scheduling algorithm may be used to swap out the logical block of the work block unused for the longest time. For a logical block being swapped out: if its content has not been modified, the logical block can simply be discarded; if its content has been modified, the compressed block corresponding to this logical block ID in the compressed-block storage area is deleted, and the logical block in the working area is compressed and saved to the compressed-block storage area. For a swap-in (a compressed block decompressed and saved into the working area), a mapping entry of its logical block ID and work-block ID must be added to the mapping table; for a swap-out (a work block compressed and saved to the compressed-block storage area), the corresponding mapping entry of logical block ID and work-block ID must be deleted from the mapping table.
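The swap-out step, including the clean/dirty distinction, can be sketched as follows (a minimal model using zlib as a stand-in compressor; all names and data structures are illustrative):

```python
import zlib

def swap_out(work_blocks, dirty, compressed_store, mapping, victim_id):
    """Sketch of swap-out: a clean block is simply discarded, a dirty
    block replaces its stale compressed copy; the mapping entry is
    deleted either way. Returns the now-vacant work-block ID."""
    work_id = mapping.pop(victim_id)              # delete the mapping entry
    data = work_blocks.pop(work_id)
    if dirty.pop(victim_id, False):
        compressed_store.pop(victim_id, None)     # delete stale compressed block
        compressed_store[victim_id] = zlib.compress(bytes(data))
    return work_id

# Illustrative setup: logical blocks 10 (modified) and 11 (unmodified).
work_blocks = {0: bytearray(b"abc"), 1: bytearray(b"xyz")}
mapping = {10: 0, 11: 1}
dirty = {10: True}
store = {10: zlib.compress(b"old"), 11: zlib.compress(b"xyz")}

assert swap_out(work_blocks, dirty, store, mapping, 10) == 0
assert zlib.decompress(store[10]) == b"abc"   # dirty: recompressed
assert swap_out(work_blocks, dirty, store, mapping, 11) == 1
assert zlib.decompress(store[11]) == b"xyz"   # clean: old copy kept
assert 10 not in mapping and 11 not in mapping
```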
When the content of a logical block is modified, the modification is made directly to the logical block in the working area; the corresponding compressed block in the compressed-block storage area need only be updated when the block is swapped out. If an exchange between a work block and a compressed block occurs, the mapping table must be updated accordingly.
If the content of an entire block in the working area is to be deleted, the compressed block corresponding to that logical block in the compressed-block storage area is deleted directly, and the related entry is deleted from the mapping table at the same time.