US7197622B2 - Efficient mapping of signal elements to a limited range of identifiers - Google Patents


Info

Publication number
US7197622B2
US7197622B2 (application US10/450,827)
Authority
US
United States
Prior art keywords
context
identifier
memory
identifiers
cache line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/450,827
Other versions
US20040221132A1 (en
Inventor
Kjell Torkelsson
Lars-Örjan Kling
Håkan Otto Ahl
Johan Ditmar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Assigned to TELEFONAKTIEBOLAGET LM ERICSSON reassignment TELEFONAKTIEBOLAGET LM ERICSSON ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KLING, LARS-ORJAN, AHL, HAKAN OTTO, DITMAR, JOHAN, TORKELSSON, KJELL
Publication of US20040221132A1 publication Critical patent/US20040221132A1/en
Application granted granted Critical
Publication of US7197622B2 publication Critical patent/US7197622B2/en
Adjusted expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G06F 12/0864: Addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches, using pseudo-associative means, e.g. set-associative or hashing
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/745: Address table lookup; Address filtering
    • H04L 69/04: Protocols for data compression, e.g. ROHC
    • H04L 69/161: Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L 9/40: Network security protocols
    • H04L 69/22: Parsing or analysis of headers

Definitions

  • the present invention generally concerns the mapping of signal elements to a limited range of identifiers by means of hashing, and especially the selection of context identifiers to represent packet headers in Internet Protocol header compression, as well as cache mapping in computer systems.
  • Hashing is a conventional technique commonly used in various applications for mapping a set of signal elements (arguments) to a limited range of numeric identifiers (keys) by means of a hash function.
  • In hashing, a given signal element is mapped to an identifier based only on the signal element, or appropriate parts thereof, as input to the hash function, without any knowledge of the mapping between other signal elements and identifiers.
  • Ideally, signal elements having the same content should be mapped to the same identifier, whereas signal elements of different contents should be mapped to different identifiers.
  • In practice, however, hash functions are usually not capable of mapping all unique signal elements to distinct identifiers, and there is a considerable risk of different elements being mapped to the same identifier (a hash collision, also referred to as a clash).
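The limited-range mapping and the clash phenomenon described above can be sketched in a few lines of Python. This is an illustrative model only; the names (`hash_id`, `NUM_IDENTIFIERS`) and the CRC-based hash function are assumptions, not taken from the patent.

```python
# Illustrative sketch: mapping signal elements (arguments) to a limited
# range of numeric identifiers (keys) with a hash function.
import zlib

NUM_IDENTIFIERS = 128  # limited range of identifiers (hypothetical size)

def hash_id(element: bytes) -> int:
    """Map a signal element to an identifier in [0, NUM_IDENTIFIERS)."""
    return zlib.crc32(element) % NUM_IDENTIFIERS

# Distinct elements may map to the same identifier: a hash collision,
# also referred to as a clash.
mapping = {}
for element in (b"stream-A", b"stream-B", b"stream-C"):
    key = hash_id(element)
    if key in mapping and mapping[key] != element:
        print(f"clash: {element!r} and {mapping[key]!r} share identifier {key}")
    mapping[key] = element
```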
  • U.S. Pat. No. 6,097,725 describes a method for searching a bit field address in an ATM system by computing a hash key for pointing to a first address among a large number of addresses followed by sequential reading of a smaller number of subsequent entries until a match occurs.
  • IP header compression reduces the negative impacts of large IP headers significantly and allows efficient bandwidth utilization.
  • Header compression is generally based on the observation that in a packet stream, most header fields are identical in consecutive packets. For simplicity one may think of a packet stream, sometimes also referred to as a session, as all the packets sent from a particular source address and port to a particular destination address and port using the same transport protocol.
  • a basic principle for compressing headers is to establish an association between the non-changing fields of the headers in a packet stream and a context identifier (CID), which is selected to represent the headers. Headers are then replaced by compressed headers, each of which contains CID information and possibly also information that is unique to the individual packet header.
  • FIG. 1 is a schematic diagram illustrating a full header 1 with CID association as well as a corresponding compressed header 2.
  • the header fields can be categorized into different categories depending on how the fields are expected to change between consecutive headers in a packet stream.
  • Header compression standards such as RFC 2507 and RFC 2508 of the Internet Engineering Task Force provide such a classification for IPv6 base and extension headers, IPv4, TCP and UDP headers.
  • NO_CHANGE: fields that are expected to remain constant between consecutive headers in a packet stream
  • INFERRED: fields that can be inferred from other fields
  • RANDOM: fields that change in an unpredictable manner
  • Information in RANDOM fields is normally included in the compressed headers, whereas information in INFERRED fields does not have to be included in the compressed headers.
  • FIG. 2 is a schematic diagram of two interconnected routers A and B, each with header compression/decompression capabilities.
  • the routers 10, 20 are interconnected by one or more (bi-directional) links.
  • Each router comprises a compressor 11/21 and a decompressor 13/23, each of which is connected to a respective context memory 12-1, 12-2/22-1, 22-2.
  • the compressor 11 in router A selects, for each packet header, a CID to represent the non-changing fields of the header and stores header information, possibly together with additional information, as a compression context in the context memory 12-1 of router A.
  • the initial packet of the packet stream is transmitted with a full header (FH), including the selected CID, over a given link to router B, allowing the decompressor 23 of router B to extract the compression context and the CID.
  • the extracted compression context is stored in the context memory 22-2 of router B.
  • Subsequent packets belonging to the same packet stream are then transmitted with compressed headers (CH) to router B.
  • the decompressor 23 of router B can use the corresponding CID values to look up the appropriate compression context in the context memory 22-2 of router B, thus restoring the compressed headers to their original form.
  • full headers are typically transmitted periodically, or with an exponentially increasing interval in slow-start mode, to refresh the compression context.
  • While header compression itself is standardized, the CID selection mechanism is not; only the maximum range of CID values is specified in the standards.
  • TCP packets and non-TCP packets normally use separate sets of CID values with different maximum ranges.
  • Different routers have to negotiate on which CID range to use before initiating transmission.
  • different links also use separate sets of CID values.
  • the actual mechanism for generating and selecting CID values is unspecified.
  • the CID values should be unique for all packet streams that are active on a given link at any given time so that different streams are mapped to different CID values. If two or more active packet streams map to the same CID (clashing), the degree of compression is reduced since each clash requires a new full header, redefining the context of the CID, to be transmitted instead of a compressed header. Generating a unique CID for each new packet stream is therefore very important for the overall efficiency of the compression algorithm.
  • CID selection is also complicated by the fact that there is no mechanism for determining when a stream has terminated.
  • Conventional methods for generating CID values are typically based on hashing, taking the non-changing header fields as input to a hash function to generate a corresponding CID value.
  • the total number of possible headers may be extremely large, while the CID range is typically limited to 2^8 for TCP traffic and 2^16 for non-TCP traffic.
  • FIG. 3 illustrates hash-based generation of CID values for addressing a context memory according to the prior art.
  • the header fields classified as NO_CHANGE are used as input to a hash coder 30 .
  • a suitable hash function is implemented in the hash coder 30 to generate a CID value based on the given input.
  • the generated CID value acts as an index to the context memory 12/22 and points to a specific address in the context memory to be used for storing corresponding header information as compression context.
  • the context memory 12/22 has a limited size, here illustrated with 128 memory positions from 0 to 127.
  • the corresponding CID values that are used for addressing the context memory define a CID space ranging from 0 to 127 (the CID range being equal to 128).
  • FIG. 4 illustrates the problem of CID clashing as two packet streams map to the same CID value. If a first packet belonging to stream X is mapped to the CID value 120 in the CID space, the corresponding header is stored as compression context in position 120 of the context memory. When a subsequent packet belonging to another stream Y is also mapped to the CID value 120, we have a CID clash. When the clash occurs, the CID value 120 is redefined to represent the new stream Y, and the compression context previously stored in memory position 120 is overwritten by the header of the new packet belonging to stream Y. In the overall header compression scheme, this also means that the full header of the new packet of stream Y has to be transmitted to the decompressor on the receiving side.
  • the two packet streams X and Y will continue to clash during the entire time period in which both packet streams are active, alternately overwriting each other's compression contexts and necessitating the transmission of full-header packets.
  • clashes will be common even when the number of simultaneously active sessions is relatively small compared to the total CID range, leading to a significant reduction of the compression efficiency.
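The claim above, that clashes become common well before the CID range fills, can be checked with a small Monte Carlo sketch (illustrative only; the function name and parameters are not from the patent). It is essentially the birthday problem: with direct hashing, even a modest number of simultaneous sessions is likely to produce at least one clash.

```python
# Monte Carlo estimate of the probability that at least two of
# n_sessions independently hashed streams clash in a given CID range.
import random

def clash_probability(n_sessions: int, cid_range: int, trials: int = 2000) -> float:
    clashes = 0
    for _ in range(trials):
        # Model direct hashing as a uniform random draw per session.
        cids = [random.randrange(cid_range) for _ in range(n_sessions)]
        if len(set(cids)) < n_sessions:   # at least one duplicate CID
            clashes += 1
    return clashes / trials

# e.g. with only 20 active sessions in a CID range of 256, a clash is
# already more likely than not (analytically about 0.53).
```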
  • the present invention overcomes these and other drawbacks of the prior art arrangements.
  • Still another object of the invention is to find a cost-effective realization of relatively low complexity for efficient mapping of signal elements to a limited range of identifiers.
  • Another object of the invention is to improve the compression rate in IP header compression to allow better utilization of available bandwidth, especially for links of low and medium speed. In this respect, it is a particular object to find an improved CID allocation scheme. It is also an object of the invention to provide a method and system for efficient mapping of different packet streams to a limited range of CID identifiers with a low probability of CID clashes.
  • Still another object of the invention is to improve cache mapping in computer systems, and to devise an efficient cache placement algorithm.
  • the general idea according to the invention is to emulate a “virtual” space of identifiers that is larger than the real space of identifiers.
  • the larger virtual identifier space is generally implemented by an intermediate memory, which provides storage for identifiers assigned from the real space of identifiers.
  • the intermediate memory is addressed by means of a larger hash value calculated from at least part of the signal element, thus allowing access to an identifier.
  • the larger virtual space gives a better distribution of signal elements to the identifiers, and is a key feature for reducing the probability of different signal elements being mapped to the same identifier (clashing). If the intermediate memory has a range that is a factor Ω larger than the real space of identifiers, and the identifiers are assigned from the real identifier space to the relevant positions in the intermediate memory in an efficient manner, the effect will be essentially the same as if the real space of identifiers were Ω times larger.
  • a clash between a new signal element and another previously mapped signal element can be detected by comparing the hash value for the new signal element with the hash value associated with the already mapped signal element. If they match, the two signal elements map to the same identifier, and a clash is detected. In the case of a clash, the identifier will be reused for the new signal element. This corresponds to the way clashes are handled in conventional algorithms based on direct hashing to the real space of identifiers. This does not reduce the value of the algorithm according to the invention since clashes occur much more seldom in the extended virtual space.
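The virtual identifier space and the hash-value comparison described above can be sketched as follows. This is a simplified model under assumed names (`lookup_identifier`, `OMEGA`, the stored-hash array standing in for the IDA field); the CRC-based hash is likewise an assumption.

```python
# Sketch of the general idea: an intermediate memory of OMEGA * REAL_SIZE
# positions stores identifiers drawn from the real range, so distinct
# hash values rarely land on the same position.
import zlib
from itertools import count

REAL_SIZE = 128          # real identifier space (e.g. CID range)
OMEGA = 8                # expansion factor Ω (any value > 1)
VIRTUAL_SIZE = REAL_SIZE * OMEGA

intermediate = [None] * VIRTUAL_SIZE   # virtual space: stores assigned ids
stored_hash = [None] * REAL_SIZE       # hash value that currently owns each id
next_id = count()                      # "oldest assigned" cyclic counter

def lookup_identifier(element: bytes) -> int:
    x = zlib.crc32(element) % VIRTUAL_SIZE   # larger hash value X
    cid = intermediate[x]
    if cid is not None and stored_hash[cid] == x:
        # Same hash value: reuse the identifier (a true clash between two
        # elements with equal hashes is handled the same way, as in
        # conventional direct hashing).
        return cid
    cid = next(next_id) % REAL_SIZE          # assign the oldest-assigned id
    intermediate[x] = cid
    stored_hash[cid] = x
    return cid
```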
  • the invention is applicable to IP header compression and the mapping of packet streams to a limited range of context identifiers (CIDs).
  • When assigning a CID value to a new session, it is important that the CID value has a low probability of belonging to an already active session. Ideally, the utilization of the CIDs is monitored and the CID that has been inactive for the longest period of time, i.e. the least recently used CID, is assigned to the new session.
  • Alternatively, the “oldest assigned” CID is always selected for a new session. If the lifetimes of the sessions are more or less the same, there is a low probability that the oldest assigned CID is still active, and hence this CID is a good candidate.
  • the oldest assigned algorithm has turned out to provide a very cost-effective realization of relatively low complexity. Only a minimum of extra resources needs to be added to the already existing equipment for hash-based header compression.
  • the invention is generally applicable to hashing problems, and can also be utilized to improve e.g. hash-based cache mapping in computer systems.
  • FIG. 1 is a schematic diagram illustrating a full header with CID association as well as a corresponding compressed header
  • FIG. 2 is a schematic diagram of two interconnected routers A and B, each with header compression/decompression capabilities;
  • FIG. 3 is a schematic diagram illustrating hash-based generation of CID values for addressing a context memory according to the prior art
  • FIG. 4 is a schematic diagram illustrating the problem of CID clashing as two packet streams map to the same CID value
  • FIG. 5 is a schematic diagram illustrating a CID selection mechanism according to a preferred embodiment of the invention.
  • FIG. 6A is a schematic diagram illustrating a basic “least recently used” algorithm for assigning CID values to the virtual CID space by using a sorted list;
  • FIG. 6B is a schematic diagram illustrating a basic “oldest assigned” algorithm for cyclically assigning CID values to the virtual CID space using a NEXT CID register;
  • FIG. 7 is a schematic flow diagram of a CID selection method based on the “least recently used” algorithm according to a preferred embodiment of the invention.
  • FIG. 8 is a schematic flow diagram of a CID selection method based on the “oldest assigned” algorithm according to a preferred embodiment of the invention.
  • FIGS. 9A–D are schematic diagrams illustrating, for different values of Ω, the clashing probability as a function of the percentage of active CID values for a simple direct hashing algorithm on one hand and the extended CID generation algorithm of the invention on the other hand;
  • FIG. 10 is a schematic diagram illustrating a typical cache-based memory structure in a computer system
  • FIG. 11 is a schematic diagram illustrating a cache line selection mechanism according to a preferred embodiment of the invention.
  • FIG. 12A is a schematic diagram illustrating a basic “least recently used” algorithm for assigning cache line identifier (LID) values to the virtual LID space by using a sorted list;
  • FIG. 12B is a schematic diagram illustrating a basic “oldest assigned” algorithm for cyclically assigning LID values to the virtual LID space using a NEXT LID register;
  • FIG. 13 is a schematic flow diagram of a cache line selection method based on the “least recently used” algorithm according to a preferred embodiment of the invention.
  • FIG. 14 is a schematic flow diagram of a cache line selection method based on the “oldest assigned” algorithm according to a preferred embodiment of the invention.
  • FIG. 5 is a schematic diagram illustrating a CID selection mechanism according to a preferred embodiment of the invention.
  • a compressor is associated with a context memory 12/22 of a limited size, say 128 memory positions ranging from 0 to 127.
  • the corresponding CID values 0 to 127 for addressing the context memory define a CID space having a limited range of 128.
  • the CID selection/generation mechanism according to the invention is based on an intermediate CID memory 40 of a range that is larger than the CID range.
  • the intermediate CID memory 40 provides storage of CID values assigned from the real CID space, thereby emulating a larger virtual space of CID values.
  • For example, the virtual CID space is a factor Ω larger than the real CID space, where Ω is any real value larger than 1.
  • other virtual CID ranges are also feasible.
  • a hash coder 30 or any other equivalent module for implementing a suitable hash function calculates a hash value X, preferably based on the NO_CHANGE fields of an incoming packet header.
  • the calculated hash value X is used for addressing the intermediate CID memory 40 to get access to a CID.
  • the hash value X may be calculated using a standard 16-bit CRC and selecting a suitable number of output bits as the hash value.
  • alternative hash functions may be used.
  • the intermediate CID memory 40 provides storage of CID values assigned from the real CID space, allowing access to a CID when the CID memory is addressed by a hash value X.
  • the accessed CID can then be used as an address to access the context memory 12 / 22 .
  • the extended virtual CID space gives essentially the same effect as an Ω times larger real CID space.
  • the invention can also be used for reducing the size of the context memory while maintaining the same clashing probability (instead of reducing the clashing probability without enlarging the context memory).
  • This gives the router designer a higher degree of design freedom, making it possible to choose between smaller context memories on one hand and reduced clashing probability on the other.
  • it is even possible to reduce the size of the context memory and reduce the clashing probability at the same time by using a smaller real CID range and selecting an appropriately larger factor Ω.
  • each hash value is preferably stored in relation to the corresponding header context in a special Identifier Address (IDA) field in the context memory 12/22 and compared to the hash value X leading to the clash. If they match, the CID is reused for the packet belonging to the new stream and the context is updated, including updating the IDA field, also referred to as the CID address field. This is consistent with using a larger real CID space with direct hashing and does not reduce the value of the algorithm according to the invention.
  • the invention proposes two main schemes for CID assignment, although other schemes also are possible.
  • In the first scheme, the CID that has been inactive during the longest period of time should be selected and used for a packet belonging to a new packet stream.
  • this is accomplished by maintaining a sorted list 50 in which CID values within the real CID range are arranged according to their use for representing packet streams.
  • the most recently used context identifiers are placed at the end of the list, and consequently the least recently used CID can be found at the head of the list.
  • the least recently used CID, which is the CID that has been inactive for the longest period of time, is the best candidate to use when a new CID is to be assigned.
  • the realization of the “least recently used” algorithm for assigning CID values to the virtual CID space becomes quite complex. This algorithm is therefore more suitable for implementation in software, preferably by means of the linked-list concept.
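The sorted "least recently used" list above can be sketched in software using Python's `OrderedDict` as the linked list (an illustrative choice; the patent does not prescribe a data structure, and the names `touch`/`assign_lru_cid` are hypothetical).

```python
# LRU CID list: head of the list = least recently used CID,
# end of the list = most recently used CID.
from collections import OrderedDict

CID_RANGE = 128
# Initialize with all CIDs in arbitrary order (here: 0..127).
lru = OrderedDict((cid, None) for cid in range(CID_RANGE))

def touch(cid: int) -> None:
    """Mark `cid` as most recently used (move it to the end of the list)."""
    lru.move_to_end(cid)

def assign_lru_cid() -> int:
    """Take the least recently used CID from the head of the list."""
    cid, _ = lru.popitem(last=False)
    lru[cid] = None   # re-insert at the end: it is now most recently used
    return cid
```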
  • Another scheme assigns new CID values cyclically.
  • the CID values within the CID range are traversed in sequential order, and when the last CID value is reached the process starts all over again with the first CID value, for example 0, 1, 2, . . . , 127, 0, 1, 2, and so on.
  • the CID to be assigned next is stored in a NEXTCID register 60 for easy assignment to the relevant position in the intermediate memory. This results in the “oldest assigned” CID being selected for a packet belonging to a new packet stream. There is a fairly low probability that this CID is still active, and therefore this CID is a good candidate to be used when a new CID is to be assigned. If the lifetimes of all sessions are roughly the same, the “least recently used” algorithm and the “oldest assigned” algorithm are equally good.
  • the “oldest assigned” algorithm can be improved to almost match the “least recently used” algorithm with a simple addition.
  • Each time a CID is found to be in use by an active session, the corresponding CID can be compared to the CID value to be assigned next (NEXTCID). If they match, the CID has been detected as active and the NEXTCID register 60 is stepped, or incremented, to take the value of the “next to oldest assigned” CID. This will reduce clashes in mixed traffic of long and short sessions.
  • the “oldest assigned” algorithm is particularly suitable for hardware implementation.
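The cyclic "oldest assigned" scheme with the active-CID check can be modeled as follows (a software sketch of what the patent suggests realizing in hardware; the function names are assumptions).

```python
# NEXTCID register holding the CID to be assigned next, stepped cyclically.
CID_RANGE = 128
next_cid = 0  # the NEXTCID register

def on_active_cid(cid: int) -> None:
    """Called when a packet shows that `cid` belongs to an active session."""
    global next_cid
    if cid == next_cid:
        # Skip the active CID: step to the "next to oldest assigned" CID.
        next_cid = (next_cid + 1) % CID_RANGE

def assign_next_cid() -> int:
    """Assign the oldest-assigned CID to a new packet stream."""
    global next_cid
    cid = next_cid
    next_cid = (next_cid + 1) % CID_RANGE
    return cid
```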
  • each location in the CID memory 40 can be initialized with any valid CID.
  • the CID list 50 can be initialized with CID values arranged in any arbitrary order.
  • the CID will successively move up in the CID list until it reaches the head of the list, at which time it will be taken by a new session.
  • the invention will significantly reduce the probability of different packet streams being mapped to the same CID. This leads to a considerable improvement of the compression rate and therefore better utilization of the available bandwidth of the links.
  • main memories are often relatively slow, with rather long access times, and are considered the bottleneck of many computer systems.
  • faster memory components are generally much more expensive than the slower memory components used as main memories.
  • a common way of alleviating this problem is to use one or more levels of small and fast cache as a buffer between the processor and the larger and slower main memory.
  • a cache memory contains copies of blocks of data/instructions that are stored in the main memory. In the cache, these blocks of data/instructions correspond to cache line data fields.
  • the system first goes to the fast cache to determine if the information is present in the cache. If the information is available in the cache, a so-called cache hit, access to the main memory is not required and the required information is taken directly from the cache. If the information is not available in the cache, a so-called cache miss, the data is fetched from the main memory into the cache, possibly overwriting other active data in the cache. Similarly, as writes to the main memory are issued, data is written to the cache and copied back to the main memory. In most applications, the use of a cache memory speeds up the operation of the overall memory system significantly. The goal is to make the memory system appear to be as large as the main memory and as fast as the cache memory.
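The hit/miss flow described above can be sketched with a minimal direct-mapped cache model (purely illustrative; a real cache operates on hardware address lines, and the structure here is an assumption).

```python
# Minimal direct-mapped cache read path: check the cache first; on a
# miss, fetch from main memory, possibly overwriting active data.
NUM_LINES = 8
cache = {}          # line index -> (tag, data)
main_memory = {}    # address -> data

def read(addr: int):
    line, tag = addr % NUM_LINES, addr // NUM_LINES
    entry = cache.get(line)
    if entry is not None and entry[0] == tag:
        return entry[1]                   # cache hit: no main-memory access
    data = main_memory.get(addr, 0)       # cache miss: fetch from main memory
    cache[line] = (tag, data)             # may overwrite another active block
    return data
```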
  • FIG. 10 is a schematic diagram illustrating a typical cache-based memory structure in a computer system.
  • the cache-based memory structure may be included in any general-purpose computer known to the art. As the skilled person will understand, the description is not intended to be complete with regard to the entire computer system, but concentrates on those parts that are relevant to the use of cache memories in computer systems.
  • the computer system utilizes a typical two-level cache.
  • the processor 100 is provided with an on-chip cache 101 (level 1 cache), and also connected to an off-chip cache 102 (level 2 cache).
  • the off-chip cache 102 is connected to the processor 100 via a dedicated memory bus (including data, address and control bus).
  • the processor 100 may have two caches on each level, one cache for data and another cache for instructions. For simplicity however, only a single cache is illustrated on each level.
  • the processor 100 is connected to a main memory 103 , normally via a conventional memory controller 104 .
  • the caches 101 , 102 are typically implemented using SRAMs (Static Random Access Memory) or similar high speed memory circuits, while the main memory 103 is typically implemented using slower DRAM (Dynamic Random Access Memory) circuits.
  • the processor includes functionality for controlling both the L1 and L2 cache memories.
  • the performance of a cache is affected by the organization of the cache in general, and the placement algorithm in particular.
  • the placement algorithm determines to which blocks or lines in the relevant cache data in the main memory are mapped.
  • the most commonly used algorithms are direct mapping, set-associative and fully associative mapping.
  • a hash function (such as a conventional modulo function) is usually applied to the memory address to determine which cache line to use.
  • In Table I, an example of an illustrative memory address is shown.
  • the five least significant bits 0 to 4 give the byte offset in the cache line data field.
  • the next 14 bits 5 to 18 determine in which cache line the data must be stored.
  • the cache has to store the address information in these bits as a tag in order to know which memory block is currently stored there.
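The address split described above (bits 0–4: byte offset within a 32-byte line; bits 5–18: one of 2^14 cache lines; remaining high bits: tag) can be made concrete with a small helper (an illustrative function, not part of the patent).

```python
# Decompose a memory address into (tag, cache line index, byte offset)
# for a cache with 32-byte lines and 2**14 lines.
def split_address(addr: int):
    offset = addr & 0x1F            # bits 0-4: byte offset in the line
    line = (addr >> 5) & 0x3FFF     # bits 5-18: cache line index
    tag = addr >> 19                # remaining bits: tag
    return tag, line, offset
```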
  • the most complex scheme makes use of associative addressing that allows data from any address in the main memory to be placed in any block in the cache.
  • a set-associative cache lies between the two extremes of a direct mapped cache and a fully associative cache.
  • the cache RAM is divided into n memory banks, or “ways”. Each memory address maps to a certain location in any one of the n memory banks, and there is a choice of n memory banks for storage/retrieval of the actual data.
  • a direct-mapped cache is a one-way set-associative cache.
  • Direct mapping in a cache memory is comparable to direct mapping in IP header compression.
  • the physical location in the cache to which a memory address of the main memory is mapped is generally determined by direct hashing from appropriate parts of the memory address, and there is normally a relatively high probability that several memory addresses map to the same cache location and therefore overwrite each other. This also holds true for n-way set-associative caches.
  • each cache line in the cache memory 101/102 includes a valid field (V), a tag field (TAG), an identifier address field (IDA) as well as a data field (DATA).
  • the valid field indicates whether the data field contains valid information.
  • the tag field contains the address information required to identify whether a word in the cache corresponds to a requested word.
  • the IDA field corresponds to the CID address field in header compression applications, and the DATA field holds the relevant data/instructions.
  • the cache line selection mechanism is based on an intermediate memory 140 of a range that is larger than the LID range.
  • the intermediate memory 140 provides storage of cache line identifier values assigned from the real LID space, thereby emulating a larger virtual space of LID values.
  • the virtual LID space is a factor Ω larger than the real LID space, where Ω is any real value larger than 1.
  • a hash coder 130, or any other equivalent module for implementing a suitable hash function, calculates a hash value X based on appropriate parts of a current address in the main memory.
  • the hash value may be calculated from the so-called cache line number part of the incoming memory address together with an appropriate number of bits from the so-called tag part of the memory address (see Table I).
  • the calculated hash value X is used for addressing the intermediate memory 140 to get access to a LID.
  • the range of the hash values X used for addressing the intermediate memory 140 has to be larger than that of the LID values.
  • the intermediate memory 140 provides storage of LID values assigned from the real LID space allowing access to a LID when the intermediate memory is addressed by a hash value X. The accessed LID can then be used to access a cache line in the cache memory 101 / 102 . Assuming that LID values are assigned from the real LID space to the intermediate memory 140 in an efficient manner, the extended virtual LID space gives essentially the same effect as an ⁇ times larger real LID space.
  • a hash value is calculated based on a memory address of interest.
  • the hash value is utilized to address an intermediate memory in which the actual cache line address is stored.
  • the tag part of the memory address is compared to stored tag information to determine if we have a cache hit or a cache miss. If they match, the data has been found. If not, we pick a new cache line.
  • the selection of a new cache line can be done using the least recently used algorithm or the oldest assigned algorithm.
  • each hash value is stored in relation to the corresponding tag information in the special IDA field in the cache memory 101 / 102 and compared to the hash value X leading to the clash. If they match, the cache line identified by the LID is updated, including updating the IDA field. This is consistent with using a real larger LID space with direct hashing and does not reduce the value of the algorithm according to the invention.
  • the memory address (in the form of the hash value) of the identifier in the intermediate memory may be deduced directly from the tag information. This means that the IDA field will become redundant, and may be removed.
  • the proposed mapping mechanism is not limited to CID selection in IP header compression and cache mapping in computer systems but can also be applied to other hashing problems. Examples of other applications in which hashing can be improved by the invention include searching databases and performing various table lookups for IP routing.
  • C is the number of possible CID values (the maximum CID range) and S is the number of simultaneously active sessions.
  • the number of sessions that do not clash is equal to the number of CID values that only have one session mapped to them. Accordingly, the fraction of sessions that clash is equal to:
  • FIGS. 9A–D are schematic diagrams illustrating, for different values of ƒ, the clashing probability as a function of the percentage of active CID values for a simple direct hashing algorithm (P direct) on one hand and the extended CID generation algorithm (P extended) of the invention on the other hand.
  • the clashing probability for the direct hashing algorithm is indicated by a solid line
  • the clashing probability for the extended algorithm is indicated by a dashed line.
  • C has been set to a typical value of 1024.
  • the CID selection mechanism according to the invention outperforms the conventional direct hashing mechanism.
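The expression that should follow "the fraction of sessions that clash is equal to:" in the bullets above appears to have been lost in extraction. A hedged reconstruction, assuming S simultaneously active sessions hashed uniformly and independently onto C CID values (consistent with the statement that non-clashing sessions are those whose CID has only one session mapped to it):

```latex
% Hedged reconstruction of the missing expression (S sessions hashed
% uniformly onto C CIDs): a session avoids clashing exactly when none of
% the other S-1 sessions map to its CID, so
P_{\mathrm{direct}} = 1 - \left(1 - \frac{1}{C}\right)^{S-1}
% and, on the assumption that the extended curve substitutes the
% f-times larger virtual range for C:
P_{\mathrm{extended}} = 1 - \left(1 - \frac{1}{fC}\right)^{S-1}
```

These are reconstructions from the surrounding argument, not formulas recovered verbatim from the patent text.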

Abstract

Signal elements are mapped to a limited range of identifiers by emulating a “virtual” space of identifiers larger than the real limited space of identifiers. The larger virtual identifier space is implemented by an intermediate memory, which provides storage of identifiers assigned from the real space of identifiers. For each signal element to be mapped to an identifier, the intermediate memory is addressed by a hash value calculated from at least part of the signal element, thus allowing access to an identifier. The larger virtual space gives a better distribution of signal elements to the identifiers and reduces the probability of different signal elements being mapped to the same identifier (“clashing”). For an efficient reduction of the clashing probability, identifiers with a low probability of being active are assigned to the intermediate memory to represent new signal elements.

Description

This application is the U.S. national phase of international application PCT/SE01/02746 filed 12 Dec. 2001, which designated the U.S.
TECHNICAL FIELD OF THE INVENTION
The present invention generally concerns the mapping of signal elements to a limited range of identifiers by means of hashing, and especially the selection of context identifiers to represent packet headers in Internet Protocol header compression, as well as cache mapping in computer systems.
BACKGROUND OF THE INVENTION
Hashing is a conventional technique commonly used in various applications for mapping a set of signal elements (arguments) to a limited range of numeric identifiers (keys) by means of a hash function. In hashing, a given signal element is mapped to an identifier based only on the signal element or appropriate parts thereof as input to the hash function, without any knowledge of the mapping between other signal elements and identifiers. Ideally, signal elements having the same content should be mapped to the same identifier, whereas signal elements of different contents should be mapped to different identifiers. However, hash functions are usually not capable of mapping all unique signal elements to distinct identifiers, and there is a considerable risk of different elements being mapped to the same identifier (a hash collision, also referred to as a clash).
Therefore, a lot of research has been directed towards finding optimized hash functions with random and uniform distribution characteristics. However, even with a “good” hash function, the number of hash collisions often remains considerable; in many cases it is significant even when the number of unique and simultaneously active signal elements to be mapped to the identifiers is as low as 30–40% of the total number of identifiers.
Other attempts for reducing hash collisions include resolving the collisions by means of various complicated circuitry, for example as described in U.S. Pat. No. 5,920,900.
U.S. Pat. No. 6,097,725 describes a method for searching a bit field address in an ATM system by computing a hash key for pointing to a first address among a large number of addresses followed by sequential reading of a smaller number of subsequent entries until a match occurs.
For a more thorough understanding of conventional hashing and the problems associated therewith, hashing will now be described with reference to the particular problem of selecting context identifiers in Internet Protocol (IP) header compression.
IP header compression reduces the negative impacts of large IP headers significantly and allows efficient bandwidth utilization. Header compression is generally based on the observation that in a packet stream, most header fields are identical in consecutive packets. For simplicity one may think of a packet stream, sometimes also referred to as a session, as all the packets sent from a particular source address and port to a particular destination address and port using the same transport protocol. A basic principle for compressing headers is to establish an association between the non-changing fields of the headers in a packet stream and a context identifier (CID), which is selected to represent the headers. Headers are then replaced by compressed headers, each of which contains CID information and possibly also information that is unique to the individual packet header.
FIG. 1 is a schematic diagram illustrating a full header 1 with CID association as well as a corresponding compressed header 2. Typically, the header fields can be categorized into different categories depending on how the fields are expected to change between consecutive headers in a packet stream. Header compression standards such as RFC 2507 and RFC 2508 of the Internet Engineering Task Force provide such a classification for IPv6 base and extension headers, IPv4, TCP and UDP headers. In these standards, fields that are not expected to change are classified as NO_CHANGE, fields that can be inferred from other fields are classified as INFERRED, and fields that change in an unpredictable manner are classified as RANDOM. Information in RANDOM fields is normally included in the compressed headers, whereas information in INFERRED fields really does not have to be included in the compressed headers.
FIG. 2 is a schematic diagram of two interconnected routers A and B, each with header compression/decompression capabilities. The routers 10, 20 are interconnected by one or more (bi-directional) links. Each router comprises a compressor 11/21 and a decompressor 13/23, each of which is connected to a respective context memory 12-1, 12-2/22-1, 22-2. To compress the headers of a packet stream, the compressor 11 in router A selects, for each packet header, a CID to represent the non-changing fields of the header and stores header information, possibly together with additional information, as a compression context in the context memory 12-1 of router A. The initial packet of the packet stream is transmitted with a full header (FH), including the selected CID, over a given link to router B, allowing the decompressor 23 of router B to extract the compression context and the CID. The extracted compression context is stored in the context memory 22-2 of router B. Subsequent packets belonging to the same packet stream are then transmitted with compressed headers (CH) to router B. The decompressor 23 of router B can use the corresponding CID values to lookup the appropriate compression context in the context memory 22-2 of router B, thus restoring the compressed headers to their original form. In order to alleviate problems with incorrect decompression, full headers are typically transmitted periodically, or with an exponentially increasing interval in slow-start mode, to refresh the compression context.
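The full-header/compressed-header exchange between routers A and B can be sketched as a toy model. This is an illustrative assumption throughout: the field names, the dictionary-based packets, and the single `"static"`/`"random"` split are invented for clarity and are not the wire formats of RFC 2507/2508.

```python
# Toy model of the FH/CH exchange of FIG. 2. Field names and packet
# structures are illustrative only, not the standards' wire formats.

class Compressor:
    def __init__(self):
        self.context = {}          # CID -> non-changing header fields

    def compress(self, cid, header):
        static = header["static"]  # NO_CHANGE fields
        if self.context.get(cid) == static:
            # On-going stream: send only the CID plus per-packet fields.
            return {"type": "CH", "cid": cid, "random": header["random"]}
        # New or redefined context: store it and send a full header.
        self.context[cid] = static
        return {"type": "FH", "cid": cid, "static": static,
                "random": header["random"]}

class Decompressor:
    def __init__(self):
        self.context = {}

    def decompress(self, packet):
        # A full header (re)defines the context; a compressed header is
        # restored by looking the context up via its CID.
        if packet["type"] == "FH":
            self.context[packet["cid"]] = packet["static"]
        return {"static": self.context[packet["cid"]],
                "random": packet["random"]}

comp, decomp = Compressor(), Decompressor()
hdr = {"static": ("src", "dst", "udp"), "random": 1}
first = comp.compress(7, hdr)                    # initial packet -> FH
second = comp.compress(7, dict(hdr, random=2))   # same stream -> CH
decomp.decompress(first)
restored = decomp.decompress(second)
print(first["type"], second["type"])  # FH CH
```

The decompressor can restore `second` only because `first` carried the full context, mirroring why a clash (which redefines a context) forces a new full header.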
Although many aspects of header compression are specified in detail in existing header compression standards, the CID selection mechanism is not. The maximum range of CID values is specified in the standards. TCP packets and non-TCP packets normally use separate sets of CID values with different maximum ranges. Different routers have to negotiate on which CID range to use before initiating transmission. In general, different links also use separate sets of CID values. The actual mechanism for generating and selecting CID values, however, is unspecified.
There are some basic requirements on CID generation and selection. The CID values should be unique for all packet streams that are active on a given link at any given time so that different streams are mapped to different CID values. If two or more active packet streams map to the same CID (clashing), the degree of compression is reduced since each clash requires a new full header, redefining the context of the CID, to be transmitted instead of a compressed header. Generating a unique CID for each new packet stream is therefore very important for the overall efficiency of the compression algorithm.
CID selection is also complicated by the fact that there is no mechanism for determining when a stream has terminated.
Conventional methods for generating CID values are typically based on hashing, taking the non-changing header fields as input to a hash function to generate a corresponding CID value.
In header compression applications, the total number of possible headers may be extremely large, while typically the CID range is maximized to 2⁸ for TCP traffic and 2¹⁶ for non-TCP traffic.
FIG. 3 illustrates hash-based generation of CID values for addressing a context memory according to the prior art. For an incoming packet, the header fields classified as NO_CHANGE are used as input to a hash coder 30. A suitable hash function is implemented in the hash coder 30 to generate a CID value based on the given input. The generated CID value acts as an index to the context memory 12/22 and points to a specific address in the context memory to be used for storing corresponding header information as compression context. The context memory 12/22 has a limited size, here illustrated with 128 memory positions from 0 to 127. The corresponding CID values that are used for addressing the context memory define a CID space ranging from 0 to 127 (the CID range being equal to 128).
FIG. 4 illustrates the problem of CID clashing as two packet streams map to the same CID value. If a first packet belonging to stream X is mapped to the CID value 120 in the CID space, the corresponding header is stored as compression context in position 120 of the context memory. When a subsequent packet belonging to another stream Y is also mapped to the CID value 120, we have a CID clash. When the clash occurs, the CID value 120 is redefined to represent the new stream Y and the compression context previously stored in memory position 120 is overwritten by the header of the new packet belonging to stream Y. In the overall header compression scheme, this also means that the full header of the new packet of stream Y has to be transmitted to the decompressor on the receiving side. The two packet streams X and Y will continue to clash during the entire time period in which both packet streams are active, alternately overwriting each other's compression contexts and necessitating the transmission of full header packets. In conventional hash-based CID generation, clashes will be common even when the number of simultaneously active sessions is relatively small compared to the total CID range, leading to a significant reduction of the compression efficiency.
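The clash behavior of direct hashing can be made concrete with a minimal simulation. The flow tuples and the use of Python's built-in `hash` as the hash function are assumptions for illustration; the patent does not prescribe either.

```python
import random

CID_RANGE = 128  # real CID space, as in the 0..127 example above

def direct_cid(flow, cid_range=CID_RANGE):
    # Direct hashing: the CID is derived straight from the non-changing
    # header fields, here modeled as a tuple of integers.
    return hash(flow) % cid_range

random.seed(1)
# Model 64 simultaneously active sessions (50% of the CID range) as
# hypothetical (src_addr, src_port, proto) tuples.
flows = [(random.randrange(2**32), random.randrange(2**16), 6)
         for _ in range(64)]
cids = [direct_cid(f) for f in flows]
# A session clashes if at least one other session maps to the same CID.
clashing = sum(1 for c in cids if cids.count(c) > 1)
print("sessions clashing:", clashing, "of", len(flows))
```

Even at 50% occupancy a noticeable fraction of sessions typically collides, which is the effect the extended virtual CID space described below is designed to suppress.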
In computer systems using cache memories, a similar problem is encountered when several memory addresses are mapped to the same cache line.
SUMMARY OF THE INVENTION
The present invention overcomes these and other drawbacks of the prior art arrangements.
It is a general object of the present invention to provide a hash-based mechanism for efficiently mapping a set of signal elements to a limited range of identifiers. In particular, it is desirable to reduce the number of hash collisions, also known as clashes. In this regard, it is a particular object of the invention to provide a method and system for mapping signal elements to a limited range of identifiers with a low probability of clashing.
Still another object of the invention is to find a cost-effective realization of relatively low complexity for efficient mapping of signal elements to a limited range of identifiers.
Another object of the invention is to improve the compression rate in IP header compression to allow better utilization of available bandwidth, especially for links of low and medium speed. In this respect, it is a particular object to find an improved CID allocation scheme. It is also an object of the invention to provide a method and system for efficient mapping of different packet streams to a limited range of CID identifiers with a low probability of CID clashes.
Still another object of the invention is to improve cache mapping in computer systems, and to devise an efficient cache placement algorithm.
These and other objects are met by the invention as defined by the accompanying patent claims.
The general idea according to the invention is to emulate a “virtual” space of identifiers that is larger than the real space of identifiers. The larger virtual identifier space is generally implemented by an intermediate memory, which provides storage for identifiers assigned from the real space of identifiers. For each signal element to be mapped to an identifier, the intermediate memory is addressed by means of a larger hash value calculated from at least part of the signal element, thus allowing access to an identifier.
The larger virtual space gives a better distribution of signal elements to the identifiers, and is a key feature for reducing the probability of different signal elements being mapped to the same identifier (clashing). If the intermediate memory has a range that is a factor ƒ larger than the real space of identifiers and the identifiers are assigned from the real identifier space to the relevant positions in the intermediate memory in an efficient manner, the effect will be essentially the same as if the real space of identifiers was ƒ times larger.
In those cases when a perfect hash function can not be found, it is necessary to detect and handle clashes in the extended virtual space of identifiers to prevent the algorithm according to the invention from degenerating. A clash between a new signal element and another previously mapped signal element can be detected by comparing the hash value for the new signal element with the hash value associated with the already mapped signal element. If they match, the two signal elements map to the same identifier, and a clash is detected. In the case of a clash, the identifier will be reused for the new signal element. This corresponds to the way clashes are handled in conventional algorithms based on direct hashing to the real space of identifiers. This does not reduce the value of the algorithm according to the invention since clashes occur much more seldom in the extended virtual space.
In particular, the invention is applicable to IP header compression and the mapping of packet streams to a limited range of context identifiers (CIDs). By means of an extended virtual CID space in which CID values are assigned from the real CID space, the risk for packet headers of different packet streams being mapped to the same context identifier can be reduced significantly. This in turn leads to improved utilization of the available bandwidth of the links used for transmitting the header compressed packet streams.
When assigning a CID value to a new session, it is important that the CID value has a low probability of belonging to an already active session. Ideally, the utilization of the CIDs is monitored and the CID that has been inactive for the longest period of time, i.e. the least recently used CID, is assigned to the new session.
Alternatively, by cyclically assigning CID values within the real range of context identifiers to new sessions, the “oldest assigned” CID is always selected for a new session. When the lifetimes of the sessions are more or less the same, there is a low probability that the oldest assigned CID is still active and hence this CID is a good candidate. The oldest assigned algorithm has turned out to provide a very cost-effective realization of relatively low complexity. Only a minimum of extra resources needs to be added to the already existing equipment for hash-based header compression.
The invention is generally applicable to hashing problems, and can also be utilized to improve e.g. hash-based cache mapping in computer systems.
The invention offers the following advantages:
    • Efficient mapping with reduced probability of clashing;
    • Cost-effective realization of low complexity;
    • Improved compression rate in IP header compression, thus allowing better utilization of available bandwidth;
    • Improved cache mapping in computer systems; and
    • Possibility to reduce the size of a context or cache memory, while still maintaining the same clashing probability.
Other advantages offered by the present invention will be appreciated upon reading of the below description of the embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention, together with further objects and advantages thereof, will be best understood by reference to the following description taken together with the accompanying drawings, in which:
FIG. 1 is a schematic diagram illustrating a full header with CID association as well as a corresponding compressed header;
FIG. 2 is a schematic diagram of two interconnected routers A and B, each with header compression/decompression capabilities;
FIG. 3 is a schematic diagram illustrating hash-based generation of CID values for addressing a context memory according to the prior art;
FIG. 4 is a schematic diagram illustrating the problem of CID clashing as two packet streams map to the same CID value;
FIG. 5 is a schematic diagram illustrating a CID selection mechanism according to a preferred embodiment of the invention;
FIG. 6A is a schematic diagram illustrating a basic “least recently used” algorithm for assigning CID values to the virtual CID space by using a sorted list;
FIG. 6B is a schematic diagram illustrating a basic “oldest assigned” algorithm for cyclically assigning CID values to the virtual CID space using a NEXT CID register;
FIG. 7 is a schematic flow diagram of a CID selection method based on the “least recently used” algorithm according to a preferred embodiment of the invention;
FIG. 8 is a schematic flow diagram of a CID selection method based on the “oldest assigned” algorithm according to a preferred embodiment of the invention;
FIGS. 9A–D are schematic diagrams illustrating, for different values of ƒ, the clashing probability as a function of the percentage of active CID values for a simple direct hashing algorithm on one hand and the extended CID generation algorithm of the invention on the other hand;
FIG. 10 is a schematic diagram illustrating a typical cache-based memory structure in a computer system;
FIG. 11 is a schematic diagram illustrating a cache line selection mechanism according to a preferred embodiment of the invention;
FIG. 12A is a schematic diagram illustrating a basic “least recently used” algorithm for assigning cache line identifier (LID) values to the virtual LID space by using a sorted list;
FIG. 12B is a schematic diagram illustrating a basic “oldest assigned” algorithm for cyclically assigning LID values to the virtual LID space using a NEXT LID register;
FIG. 13 is a schematic flow diagram of a cache line selection method based on the “least recently used” algorithm according to a preferred embodiment of the invention; and
FIG. 14 is a schematic flow diagram of a cache line selection method based on the “oldest assigned” algorithm according to a preferred embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Throughout the drawings, the same reference characters will be used for corresponding or similar elements.
The general mechanism for mapping a set of signal elements to a limited range of identifiers will first be described with reference to the particular application of CID selection in IP header compression. Next, the invention will be described with reference to cache mapping in a computer system. It should though be understood that the invention is not limited thereto, and that the invention can be applied to other hashing problems as well.
Emulating a Larger Virtual CID Space in IP Header Compression
FIG. 5 is a schematic diagram illustrating a CID selection mechanism according to a preferred embodiment of the invention. In the following example, it is assumed that a compressor is associated with a context memory 12/22 of a limited size, say 128 memory positions ranging from 0 to 127. The corresponding CID values 0 to 127 for addressing the context memory define a CID space having a limited range of 128. In clear contrast to the conventional CID generation based on direct hashing, the CID selection/generation mechanism according to the invention is based on an intermediate CID memory 40 of a range that is larger than the CID range. The intermediate CID memory 40 provides storage of CID values assigned from the real CID space, thereby emulating a larger virtual space of CID values. The virtual CID space is a factor ƒ larger than the real CID space, where ƒ is any real value larger than 1. In practice, it is convenient to use ƒ values that are powers of 2, resulting in possible virtual CID ranges of 256, 512, 1024, . . . for a real CID range of 128. Of course, other virtual CID ranges are also feasible.
Now, a hash coder 30 or any other equivalent module for implementing a suitable hash function calculates a hash value X, preferably based on the NO_CHANGE fields of an incoming packet header. The calculated hash value X is used for addressing the intermediate CID memory 40 to get access to a CID. For example, the hash value X may be calculated using a standard 16-bit CRC and selecting a suitable number of output bits as the hash value. Of course, alternative hash functions may be used. Compared to direct hashing from a packet header to a CID value, the range of the hash values X used for addressing the intermediate CID memory 40 has to be larger than that of the CID values. This means that if the intermediate CID memory 40 has a range that is a factor ƒ=2³ times larger than the real CID range, the hash values X are preferably 3 bits longer than the CID values.
As mentioned above, the intermediate CID memory 40 provides storage of CID values assigned from the real CID space, allowing access to a CID when the CID memory is addressed by a hash value X. The accessed CID can then be used as an address to access the context memory 12/22. Assuming that CID values are assigned from the real CID space to the intermediate memory 40 to represent packet headers and corresponding packet streams in an efficient manner, the extended virtual CID space gives essentially the same effect as an ƒ times larger real CID space.
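The two-stage lookup of FIG. 5 can be sketched as follows. The sizes match the running example (128 real CIDs, ƒ=8); the hash function and the flow-tuple input are illustrative stand-ins, not the patent's CRC-based coder.

```python
# Sketch of the FIG. 5 lookup path: a hash value X addresses the
# intermediate CID memory, whose entry in turn addresses the context
# memory. The hash function and sizes here are illustrative assumptions.

CID_RANGE = 128                  # real CID space
F = 8                            # virtual space is f = 2**3 times larger
VIRTUAL_RANGE = CID_RANGE * F    # 1024 positions in the intermediate memory

# Intermediate CID memory: each position may be initialized with any
# valid CID (see the initialization remark later in the text).
cid_memory = [i % CID_RANGE for i in range(VIRTUAL_RANGE)]
context_memory = [None] * CID_RANGE   # holds compression contexts

def hash_x(no_change_fields):
    # Stand-in for the 16-bit CRC suggested in the text; any hash with a
    # uniform distribution over the virtual range serves the same role.
    return hash(no_change_fields) % VIRTUAL_RANGE

def lookup(no_change_fields):
    x = hash_x(no_change_fields)   # address into the intermediate memory
    cid = cid_memory[x]            # CID read from the virtual space
    return x, cid, context_memory[cid]

x, cid, ctx = lookup((0xC0A80001, 0x0A000001, 17, 5000, 6000))
print(x, cid, ctx)
```

Note the asymmetry that motivates the design: only the small CID values are stored 1024 times over, while the large compression contexts are still stored only 128 times.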
In the prior art, a reduction of the clashing probability can be obtained only by enlarging the real CID space, resulting in a larger context memory. The invention accomplishes the reduction of the clashing probability by using only an intermediate CID memory, without enlarging the context memory. Since the overall context memory is adapted for storing compression contexts in the form of large headers, enlarging the context memory will require much more extra memory than just adding an intermediate memory for storing relatively small CID values.
Naturally, the invention can also be used for reducing the size of the context memory while maintaining the same clashing probability (instead of reducing the clashing probability without enlarging the context memory). This gives the router designer a higher degree of design freedom, making it possible to choose between smaller context memories on one hand and reduced clashing probability on the other. In fact, it is even possible to reduce the size of the context memory and reduce the clashing probability at the same time by using a smaller real CID range and selecting an appropriately larger factor ƒ.
Handling Real Clashes in the Extended Virtual CID Space
To prevent the proposed CID selection mechanism based on an extended virtual CID space from degenerating when there is a real clash in the extended virtual CID space, each hash value is preferably stored in relation to the corresponding header context in a special Identifier Address (IDA) field in the context memory 12/22 and compared to the hash value X leading to the clash. If they match, the CID is reused for the packet belonging to the new stream and the context is updated, including updating the IDA field, also referred to as the CID address field. This is consistent with using a real larger CID space with direct hashing and does not reduce the value of the algorithm according to the invention.
However, it should be understood that for certain applications when a perfect hash function can be found, there is generally no need for the CID address field and the associated hash value comparison.
Assigning CID Values with a Low Probability of Being Active
With regard to the assignment of CID values from the real CID space to the intermediate CID memory, it is important that a CID to be assigned to a new packet stream has a low probability of already belonging to an active packet stream. In this respect, the invention proposes two main schemes for CID assignment, although other schemes also are possible.
The “Least Recently Used” Algorithm for Assigning CID Values
Ideally, the CID that has been inactive during the longest period of time should be selected and used for a packet belonging to a new packet stream. According to a preferred embodiment of the invention, illustrated schematically in FIG. 6A, this is accomplished by maintaining a sorted list 50 in which CID values within the real CID range are arranged according to their use for representing packet streams. Preferably, the most recently used context identifiers are placed at the end of the list, and consequently the least recently used CID can be found at the head of the list. The least recently used CID, which is the CID that has been inactive for the longest period of time, is the best candidate to use when a new CID is to be assigned. In hardware, the realization of the “least recently used” algorithm for assigning CID values to the virtual CID space becomes quite complex. Therefore, this algorithm is more suitable for implementation in software, preferably by means of the linked list concept.
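In software, the sorted list 50 maps naturally onto a linked-list structure. A minimal sketch, assuming Python's `OrderedDict` (a hash map over a doubly linked list) as the list:

```python
from collections import OrderedDict

# "Least recently used" CID list (FIG. 6A): least recently used CID at
# the head, most recently used at the tail. The initial order may be
# arbitrary; here the CIDs simply start in numeric order.
CID_RANGE = 128
cid_list = OrderedDict((cid, None) for cid in range(CID_RANGE))

def touch(cid):
    # A CID just used for a packet moves to the end (most recently used).
    cid_list.move_to_end(cid)

def take_least_recently_used():
    # The head of the list is the best candidate for a new packet stream;
    # once taken, it immediately becomes the most recently used.
    cid = next(iter(cid_list))
    cid_list.move_to_end(cid)
    return cid

touch(5)
touch(42)
new_cid = take_least_recently_used()
print(new_cid)  # 0: never touched, so it sits at the head of the list
```

An unused CID drifts toward the head exactly as the text describes: every `touch` of other CIDs pushes it one step closer to being reassigned.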
The “Oldest Assigned” Algorithm for Assigning CID Values
Another scheme, illustrated schematically in FIG. 6B, assigns new CID values cyclically. Preferably the CID values within the CID range are traversed in sequential order, and when the last CID value is reached the process starts all over again with the first CID value, for example 0, 1, 2, . . . , 127, 0, 1, 2, and so on. Naturally, the CID to be assigned next is stored in a NEXTCID register 60 for easy assignment to the relevant position in the intermediate memory. This results in the “oldest assigned” CID being selected for a packet belonging to a new packet stream. There is a fairly low probability that this CID is still active, and therefore this CID is a good candidate to be used when a new CID is to be assigned. If the lifetimes of all sessions are roughly the same, the “least recently used” algorithm and the “oldest assigned” algorithm are equally good.
Even when the lifetimes of the sessions differ from each other, the “oldest assigned” algorithm can be improved to almost match the “least recently used” algorithm with a simple addition. For each packet, the corresponding CID can be compared to the CID value to be assigned next (NEXTCID). If they match, the CID has been detected as active and the NEXTCID register 60 is stepped or incremented to take the value of the “next to oldest assigned” CID. This will reduce clashes in mixed traffic of long and short sessions.
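The NEXTCID register 60 and the active-check refinement just described can be sketched as follows (class and method names are illustrative):

```python
# "Oldest assigned" CID assignment (FIG. 6B): CIDs are handed out
# cyclically from a NEXTCID register. The refinement from the text: if
# the CID of an arriving packet equals NEXTCID, that CID is evidently
# still active, so NEXTCID steps past it.

CID_RANGE = 128

class NextCidRegister:
    def __init__(self):
        self.next_cid = 0

    def assign(self):
        # Hand out the oldest assigned CID and advance cyclically,
        # wrapping around after the last CID value.
        cid = self.next_cid
        self.next_cid = (self.next_cid + 1) % CID_RANGE
        return cid

    def note_packet(self, cid):
        # Active-check improvement: a CID observed in live traffic that
        # matches NEXTCID is skipped in favor of the next-to-oldest CID.
        if cid == self.next_cid:
            self.next_cid = (self.next_cid + 1) % CID_RANGE

reg = NextCidRegister()
a = reg.assign()        # 0
reg.note_packet(1)      # CID 1 seen active -> NEXTCID skips to 2
b = reg.assign()        # 2, not the still-active 1
print(a, b)             # 0 2
```

The state is a single counter and a comparator, which is why the text singles this variant out as suitable for hardware.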
When two routers that operate with different CID ranges are negotiating to determine which CID range to use in header compression, it is easy to select a given CID range by determining when the wrap-around in the cyclical assignment should take place. It is thus possible to traverse only a subset of the total CID range and start over from the beginning of the subset when the last CID in the subset is reached.
The “oldest assigned” algorithm is particularly suitable for hardware implementation.
In both algorithms, each location in the CID memory 40 can be initialized with any valid CID. For the “least recently used” algorithm, the CID list 50 can be initialized with CID values arranged in any arbitrary order.
CID Selection Based on the “Least Recently Used” Algorithm
Assuming that there exists a header compression implementation using simple direct hashing, the following resources are added:
    • An intermediate CID memory for storing CID numbers. The CID memory has a range that is a factor ƒ larger than the range of the real CID space.
    • A sorted CID list, logic for maintaining the sorted list as well as registers or equivalent pointing to the head and tail, respectively, of the list. The CID list has a range equal to the real CID range and contains all possible CID values.
    • A new type of field in the context memory (called the CID address field) for storing, for each header context in the context memory, the address (equal to the hash value) of the CID memory in which the corresponding CID is stored.
    • Logic for performing the necessary comparisons.
The overall CID selection algorithm based on the “least recently used” algorithm will now be described with reference to the flow diagram of FIG. 7.
Perform the following steps for each packet:
  • S1: Generate a hash value X with a range ƒ times larger than the real CID range. The hash value is generally generated by a hashing method, applied to at least part of the packet header, preferably those fields that are classified as NO_CHANGE in header compression standards.
  • S2: Address the intermediate CID memory by using X to get a CID.
  • S3: Address the context memory by using the CID to get header information and the CID memory address (hash value) associated with the stored header information.
  • S4: Compare the header of the received packet with the header information accessed from the context memory.
  • S5: If they match (on-going packet stream), use the CID to represent the packet and place the CID at the end of the CID list.
  • S6: If they don't match (a new packet stream), compare X to the CID address in the relevant CID address field in the context memory.
  • S7: If X does not match the CID address (no clash in the virtual CID space), use the CID at the head of the CID list to represent the packet and place the CID last in the CID list. Update the CID memory by the new CID. Update the addressed position in the context memory by header information from the received packet and store the corresponding hash value X (the new CID address) calculated for that packet in the CID address field.
  • S8: If X matches the CID address (a clash in the virtual CID space), reuse the same CID for the new packet, and update the context memory with regard to header information as well as CID address.
If a certain CID is not used for a while (for example because the session to which the CID has been assigned is no longer active), the CID will successively move up in the CID list until it reaches the head of the list, at which time it will be taken by a new session.
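Steps S1 to S8 above can be sketched in software as follows. This is an illustrative model, not the patented implementation; the sizes, the use of an `OrderedDict` for the sorted CID list, and the data layout are assumptions made for the example:

```python
from collections import OrderedDict

CID_RANGE = 4                  # real CID space: CIDs 0..3
F = 4                          # virtual space is F times larger
HASH_RANGE = CID_RANGE * F     # range of the intermediate CID memory

def make_state():
    return {
        "cid_mem": [0] * HASH_RANGE,   # intermediate CID memory (S2)
        # sorted CID list: head = least recently used, tail = most recent
        "lru": OrderedDict((c, None) for c in range(CID_RANGE)),
        "context": [{"header": None, "cid_addr": None}
                    for _ in range(CID_RANGE)],   # context memory (S3)
    }

def select_cid(state, header, x):
    """header: the NO_CHANGE fields of the packet; x: hash of those fields."""
    cid = state["cid_mem"][x]                     # S2
    ctx = state["context"][cid]                   # S3
    if ctx["header"] == header:                   # S4/S5: on-going stream
        state["lru"].move_to_end(cid)             # place CID last in list
        return cid
    if x == ctx["cid_addr"]:                      # S6/S8: clash in virtual space
        ctx["header"] = header                    # reuse the same CID
        return cid
    # S7: new stream, no clash -> take the CID at the head of the list
    cid, _ = state["lru"].popitem(last=False)
    state["lru"][cid] = None                      # now most recently used
    state["cid_mem"][x] = cid                     # update intermediate memory
    state["context"][cid] = {"header": header, "cid_addr": x}
    return cid
```

In hardware, the sorted list would instead be maintained with head/tail registers as described in the resource list above; the dictionary here merely emulates that behavior.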
CID Selection based on the “Oldest Assigned” Algorithm
In this embodiment, the following resources are added to an existing header compression implementation based on simple direct hashing:
    • An intermediate CID memory for storing CID numbers. The CID memory has a range that is a factor ƒ larger than the range of the real CID space.
    • A register, referred to as the NEXTCID register, for holding the next CID to be assigned to a new stream.
    • A new type of field in the context memory (called the CID address field) for storing, for each header context in the context memory, the CID memory address (equal to the hash value) in which the corresponding CID is stored.
    • Simple logic for performing the necessary comparisons.
The overall CID selection algorithm based on the “oldest assigned” algorithm will now be described with reference to the flow diagram of FIG. 8.
Perform the following steps for each packet:
  • S11 to S14 correspond to S1 to S4.
  • S15: If they match (on-going packet stream), use the CID to represent the packet. If the next CID value is monitored for activity and the CID is equal to NEXTCID, increment NEXTCID.
  • S16: If they don't match (a new packet stream), compare the calculated hash value X to the CID address in the relevant CID address field in the context memory.
  • S17: If X does not match the CID address (no clash in the virtual CID space), use the CID value in the NEXTCID register to represent the packet, and update the CID memory by the CID from the NEXTCID register. Increment NEXTCID. Update the addressed position in context memory by the header of the received packet and store the corresponding hash value X (the new CID address) calculated for that packet in the CID address field.
  • S18: If X matches the CID address (a clash in the virtual CID space), reuse the same CID for the new packet, and update the context memory with regard to header information as well as CID address. If the next CID value is monitored for activity and the CID is equal to NEXTCID, increment NEXTCID.
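Steps S11 to S18 can likewise be sketched in software. As before, this is an illustrative model under assumed sizes and data layout, not the patented hardware design:

```python
CID_RANGE = 4                  # real CID space: CIDs 0..3
F = 4                          # virtual space is F times larger
HASH_RANGE = CID_RANGE * F     # range of the intermediate CID memory

class OldestAssigned:
    """Sketch of the 'oldest assigned' CID selection (steps S11-S18)."""

    def __init__(self):
        self.cid_mem = [0] * HASH_RANGE     # intermediate CID memory
        self.next_cid = 0                   # NEXTCID register
        self.context = [{"header": None, "cid_addr": None}
                        for _ in range(CID_RANGE)]

    def _step_next(self):
        # cyclic assignment of CIDs within the real range
        self.next_cid = (self.next_cid + 1) % CID_RANGE

    def select_cid(self, header, x):
        cid = self.cid_mem[x]               # S12
        ctx = self.context[cid]             # S13
        if ctx["header"] == header:         # S14/S15: on-going stream
            if cid == self.next_cid:        # active-CID skip refinement
                self._step_next()
            return cid
        if x == ctx["cid_addr"]:            # S16/S18: clash in virtual space
            ctx["header"] = header          # reuse the same CID
            if cid == self.next_cid:
                self._step_next()
            return cid
        cid = self.next_cid                 # S17: assign the oldest CID
        self._step_next()
        self.cid_mem[x] = cid
        self.context[cid] = {"header": header, "cid_addr": x}
        return cid
```

Note that the only per-packet state beyond the memories is the single NEXTCID register, which is what makes this variant attractive for hardware.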
Compared to simple hashing, the invention will significantly reduce the probability of different packet streams being mapped to the same CID. This leads to a considerable improvement of the compression rate and therefore better utilization of the available bandwidth of the links.
The efficiency of the invention has been investigated in connection with IP header compression, and an analysis is given in Appendix A.
For a more thorough understanding of how the invention can be applied to other hashing applications, the invention will now be described with reference to cache mapping in a computer system.
Cache Mapping in Computer Systems
In modern computer systems, the main memory is often relatively slow, with long access times, and is commonly regarded as the bottleneck of the system. In this context, it is important to keep in mind that faster memory components are generally much more expensive than the slower memory components used as main memories. A common way of alleviating this problem is to use one or more levels of small and fast cache as a buffer between the processor and the larger and slower main memory.
A cache memory contains copies of blocks of data/instructions that are stored in the main memory. In the cache, these blocks of data/instructions correspond to cache line data fields. As reads to the main memory are issued in the computer system, the system first goes to the fast cache to determine if the information is present in the cache. If the information is available in the cache, a so-called cache hit, access to the main memory is not required and the required information is taken directly from the cache. If the information is not available in the cache, a so-called cache miss, the data is fetched from the main memory into the cache, possibly overwriting other active data in the cache. Similarly, as writes to the main memory are issued, data is written to the cache and copied back to the main memory. In most applications, the use of a cache memory speeds up the operation of the overall memory system significantly. The goal is to make the memory system appear to be as large as the main memory and as fast as the cache memory.
It has also been found useful to provide an extremely rapid cache directly on the processor chip. Such an on-chip cache is commonly referred to as a level 1 (L1) cache. Typically, there is also an off-chip cache, commonly referred to as a level 2 (L2) cache.
FIG. 10 is a schematic diagram illustrating a typical cache-based memory structure in a computer system. The cache-based memory structure may be included in any general-purpose computer known to the art. As the skilled person will understand, the description is not intended to be complete with regard to the entire computer system, but concentrates on those parts that are relevant to the use of cache memories in computer systems.
In the particular example of FIG. 10, the computer system utilizes a typical two-level cache. The processor 100 is provided with an on-chip cache 101 (level 1 cache), and also connected to an off-chip cache 102 (level 2 cache). In this example, the off-chip cache 102 is connected to the processor 100 via a dedicated memory bus (including data, address and control bus). Like most modern microprocessors, the processor 100 may have two caches on each level, one cache for data and another cache for instructions. For simplicity, however, only a single cache is illustrated on each level. Further, the processor 100 is connected to a main memory 103, normally via a conventional memory controller 104. The caches 101, 102 are typically implemented using SRAMs (Static Random Access Memory) or similar high-speed memory circuits, while the main memory 103 is typically implemented using slower DRAM (Dynamic Random Access Memory) circuits.
Of course, instead of the two-level cache of FIG. 10, it is possible to utilize a single-level cache system as well as a cache system with more than two levels.
Naturally, the processor includes functionality for controlling both the L1 and L2 cache memories. The performance of a cache is affected by the organization of the cache in general, and the placement algorithm in particular. The placement algorithm determines to which blocks or lines in the cache data in the main memory are mapped. The most commonly used schemes are direct mapping, set-associative mapping and fully associative mapping.
In direct mapping, which is the simplest scheme, a hash function (such as a conventional modulo function) is usually applied to the memory address to determine which cache line to use. With reference to Table I below, an example of an illustrative memory address is shown.
TABLE I
32 bit memory address.
Figure US07197622-20070327-C00001
Assuming that the cache line is 32 bytes and the cache holds 16 k lines (a 512 kB cache), the five least significant bits 0 to 4 give the byte offset in the cache line data field. The next 14 bits, 5 to 18, determine in which cache line the data must be stored. The remaining 13 bits, 19 to 31, of redundant addressing imply that 2^13 = 8192 possible memory blocks can be mapped to the same cache line. The cache has to store the address information in these bits as a tag in order to know which memory block is currently stored there.
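The address decomposition of Table I can be expressed directly in code. The following sketch assumes the bit widths discussed above (5 offset bits, 14 index bits, 13 tag bits); the function name is chosen for illustration:

```python
OFFSET_BITS = 5                              # 32-byte cache line
INDEX_BITS = 14                              # 16 k cache lines
TAG_BITS = 32 - OFFSET_BITS - INDEX_BITS     # 13 tag bits

def split_address(addr):
    """Split a 32-bit memory address into (tag, cache line index, byte offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset
```

With 13 tag bits, 2^13 = 8192 memory blocks share each cache line, which is why the tag must be stored alongside the line data.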
The most complex scheme, fully associative mapping, makes use of associative addressing that allows data from any address in the main memory to be placed in any block in the cache.
A set-associative cache lies between the two extremes of a direct mapped cache and a fully associative cache. In an n-way set-associative cache, the cache RAM is divided into n memory banks, or “ways”. Each memory address maps to a certain location in any one of the n memory banks, and there is a choice of n memory banks for storage/retrieval of the actual data. A direct-mapped cache is a one-way set-associative cache.
Direct mapping in a cache memory is comparable to direct mapping in IP header compression. The physical location in the cache to which a memory address of the main memory is mapped is generally determined by direct hashing from appropriate parts of the memory address, and there is normally a relatively high probability that several memory addresses map to the same cache location and therefore overwrite each other. This also holds true for n-way set-associative caches.
The use of an intermediate memory for emulating an extended larger virtual space of cache line identifiers can improve the mapping of direct-mapped and set-associative caches in the same way as the compression efficiency is improved in the IP header compression applications described above.
Emulating a Larger Virtual Cache-Line-Identifier Space in Cache Mapping
With reference to FIG. 11, a cache line selection mechanism according to a preferred embodiment of the invention will now be outlined.
It is assumed that a computer system is associated with a cache memory 101/102 of a limited size, say 4096 cache lines ranging from 0 to 4095. The corresponding cache line identifier (LID) values 0 to 4095 for addressing the cache memory define a LID space having a limited range of 4096. Logically, the cache memory is preferably organized in such a way that each cache line in the cache memory 101/102 includes a valid field (V), a tag field (TAG), an identifier address field (IDA) as well as a data field (DATA). The valid field indicates whether the data field contains valid information. The tag field contains the address information required to identify whether a word in the cache corresponds to a requested word. The IDA field corresponds to the CID address field in header compression applications, and the DATA field holds the relevant data/instructions. Of course, it is possible to find various different physical realizations of the above logical organization of the cache memory.
In clear contrast to conventional cache mapping based on direct hashing, the cache line selection mechanism according to the invention is based on an intermediate memory 140 of a range that is larger than the LID range. The intermediate memory 140 provides storage of cache line identifier values assigned from the real LID space, thereby emulating a larger virtual space of LID values. The virtual LID space is a factor ƒ larger than the real LID space, where ƒ is any real value larger than 1.
A hash coder 130 or any other equivalent module for implementing a suitable hash function calculates a hash value X, based on appropriate parts of a current address in the main memory. For example, the hash value may be calculated from the so-called cache line number part of the incoming memory address together with an appropriate number of bits from the so-called tag part of the memory address (see Table I). The calculated hash value X is used for addressing the intermediate memory 140 to get access to a LID. Compared to direct hashing from a memory address, the range of the hash values X used for addressing the intermediate memory 140 has to be larger than that of the LID values.
The intermediate memory 140 provides storage of LID values assigned from the real LID space allowing access to a LID when the intermediate memory is addressed by a hash value X. The accessed LID can then be used to access a cache line in the cache memory 101/102. Assuming that LID values are assigned from the real LID space to the intermediate memory 140 in an efficient manner, the extended virtual LID space gives essentially the same effect as an ƒ times larger real LID space.
In short, a hash value is calculated based on a memory address of interest. The hash value is utilized to address an intermediate memory in which the actual cache line address is stored. The tag part of the memory address is compared to stored tag information to determine if we have a cache hit or a cache miss. If they match, the data has been found. If not, we pick a new cache line. The selection of a new cache line can be done using the least recently used algorithm or the oldest assigned algorithm.
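The hit/clash/miss decision just summarized can be condensed into a small sketch. This is an assumed software model of the lookup path only (line replacement is left to the LRU or oldest-assigned logic); the dictionary layout and field names are illustrative:

```python
def cache_access(tag, x, lid_mem, cache):
    """lid_mem: intermediate memory mapping hash value -> LID.
    cache[lid] holds 'tag' (stored tag) and 'ida' (the hash value
    stored in the Identifier Address field of that line)."""
    lid = lid_mem[x]                  # intermediate memory lookup
    line = cache[lid]
    if line["tag"] == tag:
        return lid, "hit"             # data found in this cache line
    if line["ida"] == x:
        return lid, "clash"           # clash in virtual LID space: reuse line
    return lid, "miss"                # select a new line (LRU / oldest assigned)
```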
Handling Real Clashes in the Extended Virtual LID Space
In those cases when a perfect hash function cannot be found, it is necessary to detect and handle clashes in the extended virtual identifier space to prevent the proposed cache line selection mechanism from degenerating. Preferably, each hash value is stored in relation to the corresponding tag information in the special IDA field in the cache memory 101/102 and compared to the hash value X leading to the clash. If they match, the cache line identified by the LID is updated, including updating the IDA field. This is consistent with using a real larger LID space with direct hashing and does not reduce the value of the algorithm according to the invention.
Depending on how the hash value is calculated from the incoming memory address, the memory address (in the form of the hash value) of the identifier in the intermediate memory may be deduced directly from the tag information. This means that the IDA field will become redundant, and may be removed.
For completeness, the application of the “least recently used” algorithm and the “oldest assigned” algorithm to cache mapping will now be described with reference to the flow diagrams of FIG. 13 and FIG. 14, respectively.
Cache Line Selection Based on the “Least Recently Used” Algorithm
Assuming that there exists a cache implementation using direct mapping or set-associative mapping, the following resources are added:
    • An intermediate memory for storing cache line identifiers. The LID memory has a range that is a factor ƒ larger than the range of the real LID space.
    • A sorted list 150 (see FIG. 12A), logic for maintaining the sorted list as well as registers pointing to the head and tail, respectively, of the list. The list has a range equal to the real LID range and contains all possible LID values.
    • A new type of field in the cache memory (called the Identifier Address field) for storing, for each relevant memory address, the address (equal to the hash value) of the intermediate memory in which the corresponding LID is stored.
    • Logic for performing the necessary comparisons.
The overall cache line selection algorithm based on the “least recently used” algorithm will now be described with reference to the flow diagram of FIG. 13.
Perform the following steps for each main memory address of interest:
  • S21: Generate a hash value X. The hash value is generally generated by a hashing method applied to at least part of the incoming memory address.
  • S22: Address the intermediate LID memory by using X to get a LID.
  • S23: Address the cache memory by using the LID to access tag information and the LID address (hash value) associated with the stored tag information.
  • S24: Compare the tag of the memory address with the tag information accessed from the cache memory.
  • S25: If they match (cache hit), use the cache line and place the LID at the end of the list.
  • S26: If they don't match (a cache miss), compare X to the LID address in the relevant IDA field.
  • S27: If X does not match the LID address (no clash in the virtual LID space), use the LID at the head of the list and place the LID last in the list. Update the intermediate memory by the new LID.
  • S28: If X matches the LID address (a clash in the virtual LID space), reuse the cache line for the new memory address.
If a certain LID is not used for a while, that LID will successively move up in the list until it is at the head of the list.
Cache Line Selection Based on the “Oldest Assigned” Algorithm
Assuming that there exists a cache implementation using direct mapping or set-associative mapping, the following resources are added:
    • An intermediate memory for storing cache line identifiers. The LID memory has a range that is a factor ƒ larger than the range of the real LID space.
    • A register, referred to as the NEXTLID register 160 (see FIG. 12B), for holding the next LID to be assigned.
    • A new type of field in the cache memory (called the Identifier Address field) for storing, for each relevant memory address, the address (equal to the hash value) of the intermediate memory in which the corresponding LID is stored.
    • Simple logic for performing the necessary comparisons.
The overall cache line selection mechanism based on the “oldest assigned” algorithm will now be described with reference to the flow diagram of FIG. 14.
Perform the following steps for each main memory address of interest:
  • S31 to S34 correspond to S21 to S24.
  • S35: If they match (cache hit), use the cache line. If the next LID value is monitored for activity and the LID is equal to NEXTLID, increment NEXTLID.
  • S36: If they don't match (a cache miss), compare the calculated hash value X to the LID address in the relevant IDA field.
  • S37: If X does not match the LID address (no clash in the virtual LID space), use the LID value in the NEXTLID register, and update the intermediate memory by the LID from the NEXTLID register. Increment NEXTLID.
  • S38: If X matches the LID address (a clash in the virtual LID space), reuse the cache line for the new memory address.
For further information on cache memories, reference is made to Computer Organization and Design: The Hardware/Software Interface by Patterson and Hennessy, 2nd ed., Morgan Kaufmann Publishers, San Francisco, pp. 540–627.
As mentioned above, the proposed mapping mechanism is not limited to CID selection in IP header compression and cache mapping in computer systems but can also be applied to other hashing problems. Examples of other applications in which hashing can be improved by the invention include searching databases and performing various table lookups for IP routing.
The embodiments described above are merely given as examples, and it should be understood that the present invention is not limited thereto. Further modifications, changes and improvements which retain the basic underlying principles disclosed and claimed herein are within the scope and spirit of the invention.
APPENDIX A
Performance of CID Generation Based on Simple Direct Hashing
It is assumed that a simple hashing scheme is used for generating CID values, where C is the number of possible CID values (the maximum CID range) and S is the number of simultaneously active sessions.
To calculate the amount of packet streams that do not clash and therefore can be compressed, we first consider a single specific CID. The probability that exactly k sessions map to this CID is equal to:
$$p_c(k) = \binom{S}{k}\left(\frac{1}{C}\right)^{k}\left(1-\frac{1}{C}\right)^{S-k} \qquad (1)$$
The probability that exactly one session maps to this CID is therefore equal to:
$$p_c(1) = \frac{S}{C}\left(1-\frac{1}{C}\right)^{S-1} \qquad (2)$$
The expected number of CID values that have exactly one session mapped to them is then equal to:
$$n(1) = p_c(1)\cdot C = S\left(1-\frac{1}{C}\right)^{S-1} \qquad (3)$$
The number of sessions that do not clash is equal to the number of CID values that only have one session mapped to them. Accordingly, the fraction of sessions that clash is equal to:
$$p_{\mathrm{clash}} = 1-\frac{n(1)}{S} = 1-\left(1-\frac{1}{C}\right)^{S-1} \qquad (4)$$
Performance of CID Generation by Means of an Extended Virtual CID Space
The situation of having a virtual CID space, which is a factor ƒ larger than the real CID space C, can be modeled by thinking of the number of possible CID values as effectively becoming ƒ times larger. For calculating the performance of the CID selection mechanism according to the invention, expression (4) above derived for direct hashing can be used with the number of possible CID values equal to ƒ·C (effectively replacing C by ƒ·C).
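Expression (4) and its extended-space counterpart can be evaluated numerically. The function names below are chosen for illustration; the formulas follow the derivation above:

```python
def p_clash_direct(C, S):
    """Fraction of sessions that clash under simple direct hashing, eq. (4)."""
    return 1.0 - (1.0 - 1.0 / C) ** (S - 1)

def p_clash_extended(C, S, f):
    """Extended virtual CID space: eq. (4) with C replaced by f*C."""
    return p_clash_direct(f * C, S)
```

For example, with C = 1024 and S = 512 active sessions, the extended scheme with ƒ = 4 gives a markedly lower clashing probability than direct hashing, consistent with FIGS. 9A-D.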
Comparison
FIGS. 9A–D are schematic diagrams illustrating, for different values of ƒ, the clashing probability as a function of the percentage (ρ) of active CID values for a simple direct hashing algorithm (Pdirect) on one hand and the extended CID generation algorithm (Pextended) of the invention on the other hand. In FIGS. 9A–D, the clashing probability for the direct hashing algorithm is indicated by a solid line, whereas the clashing probability for the extended algorithm is indicated by a dashed line. In the given plots, C has been set to a typical value of 1024. As can be seen, the CID selection mechanism according to the invention outperforms the conventional direct hashing mechanism.

Claims (39)

1. A system for mapping signal elements to a first limited range of identifiers, comprising:
an intermediate memory of a second larger range for storing identifiers assigned from the first limited range, thus emulating a larger virtual space of identifiers;
a hash coder for calculating, for each one of a plurality of signal elements, a hash value based on at least part of the signal element for addressing the intermediate memory to access an identifier;
means for detecting and handling a clash in the virtual space of identifiers including:
means for detecting the mapping of a new signal element to the same identifier as a different and previously mapped signal element based on a comparison of the hash value associated with the new signal element and a hash value associated with the already mapped signal element; and
means for using the identifier for the new signal element in response to detection of such a mapping.
2. The system according to claim 1, further comprising:
means for cyclically assigning identifiers within the first limited range to the intermediate memory.
3. The system according to claim 1, wherein said signal elements are packet headers and said identifiers are context identifiers, and wherein the hash coder is configured to calculate the hash value based on at least one header field specified as no-change according to header compression standards.
4. The system according to claim 1, wherein said signal elements are memory addresses and said identifiers are cache line identifiers, and wherein the hash coder is configured to calculate the hash value based on at least part of the memory address.
5. A system for mapping packet streams to a first limited range of context identifiers, comprising:
an intermediate memory of a second larger range for storing context identifiers assigned from the first limited range, thus emulating a larger virtual space of context identifiers;
means for calculating, for each of a plurality of incoming packets, a hash value based on those parts of the packet header that do not change between successive packets in a packet stream for addressing the intermediate memory to access a context identifier,
thereby reducing the risk for packet headers of different packet streams being mapped to the same context identifier, the system further comprising:
means for detecting and handling a clash in the virtual space of context identifiers including:
means for comparing, for an incoming packet belonging to a new packet stream, the calculated hash value for the incoming packet with a hash value associated with a header context stored in a context memory at a location corresponding to the context identifier accessed from the intermediate memory by means of the calculated hash value;
means for allowing, in response to a match in the comparison due to clashing, the context identifier accessed from the intermediate memory to represent the new packet stream; and
means for updating the context memory by storing the context-forming part of the packet header of the incoming packet at a location corresponding to the accessed context identifier, and for associating the corresponding hash value with the stored header context.
6. The system according to claim 5, further comprising:
means for updating, in response to a mismatch in the comparison, the accessed context identifier in the intermediate memory by the oldest assigned context identifier; and
means for allowing, in response to a mismatch in the comparison, the oldest assigned context identifier to represent the packet header and the corresponding packet stream.
7. The system according to claim 5, further comprising:
means for cyclically assigning context identifiers within the first limited range to the intermediate memory to represent new packet streams.
8. The system according to claim 7, further comprising:
means for skipping a context identifier in the cyclic assignment of context identifiers if the context identifier is detected as active.
9. The system according to claim 5, further comprising:
means for updating, in response to a mismatch in the comparison, the accessed context identifier in the intermediate memory by the least recently used context identifier; and
means for allowing, in response to a mismatch in the comparison, the least recently used context identifier to represent the packet header and the corresponding packet stream.
10. The system according to claim 9, further comprising:
means for arranging context identifiers within the first limited range in a sorted list according to their use for representing packet streams with the most recently used context identifiers at the end of the list, wherein the context identifier at the head of the list is used as the least recently used context identifier.
11. The system according to claim 5, further comprising:
means for assigning the least recently used context identifier to the intermediate memory to represent a packet belonging to a new packet stream.
12. A system for mapping packet streams to a first limited range of context identifiers, comprising:
an intermediate memory of a second larger range for storing context identifiers assigned from the first limited range, thus emulating a larger virtual space of context identifiers;
means for calculating, for each of a plurality of incoming packets, a hash value based on those parts of the packet header that do not change between successive packets in a packet stream for addressing the intermediate memory to access a context identifier,
thereby reducing the risk for packet headers of different packet streams being mapped to the same context identifier, the system further comprising:
means for addressing a context memory based on the accessed context identifier to obtain a corresponding header context;
means for performing a comparison between the context-forming part of the current packet header and the obtained header context;
means for, in response to a match in the comparison, allowing the context identifier to represent the packet header and the corresponding packet stream;
means for, in response to a mismatch in the comparison:
performing a further comparison between the calculated hash value of the current packet header and a hash value associated with the obtained header context in the context memory; and
in response to a match in the further comparison due to clashing, allowing the accessed context identifier to represent the packet header and the corresponding packet stream;
in response to a mismatch in the further comparison, updating the accessed context identifier in the intermediate memory by the oldest assigned context identifier or the least recently used context identifier, allowing this context identifier to represent the packet header and the corresponding packet stream; and
updating the context memory by storing the context-forming part of the current packet header at the memory location corresponding to the context identifier allowed to represent the packet header, and associating the corresponding hash value with the context-forming header part.
13. A system for mapping memory addresses to a first limited range of cache line identifiers, comprising:
an intermediate memory of a second larger range for storing cache line identifiers assigned from the first limited range, thus emulating a larger virtual space of cache line identifiers;
means for calculating, for each of said memory addresses, a hash value based on at least part of the memory address for addressing the intermediate memory to access a cache line identifier,
thereby reducing the risk for different memory addresses being mapped to the same cache line identifier, the system further comprising:
means for obtaining tag information from a cache memory line identified by the accessed cache line identifier;
means for performing a comparison between the tag of the memory address and the obtained tag information;
means for, in response to a match in the comparison, allowing access to the cache memory line identified by the accessed cache line identifier;
means for, in response to a mismatch in the comparison:
performing a further comparison between the calculated hash value of the memory address and a hash value associated with the obtained tag information; and
in response to a match in the further comparison due to clashing, allowing access to the cache memory line identified by the accessed cache line identifier;
in response to a mismatch in the further comparison, updating the accessed cache line identifier in the intermediate memory by the oldest assigned cache line identifier or the least recently used cache line identifier, allowing access to the cache memory line identified by this cache line identifier; and
updating the accessed cache memory line by storing the tag of the memory address, and associating the corresponding hash value to the stored tag.
14. The system according to claim 13, further comprising: means for detecting and handling a clash in the virtual space of cache line identifiers.
15. The system according to claim 14, wherein said clash detecting and handling means comprises:
means for comparing, for a new memory address, the calculated hash value of the memory address with a hash value associated with a tag stored in a cache memory line that corresponds to the cache line identifier accessed from the intermediate memory by means of the calculated hash value;
means for allowing, in response to a match in the comparison due to clashing, access to the cache memory line identified by the accessed cache line identifier; and
means for updating the cache memory line identified by the accessed cache line identifier by storing the tag of the new memory address, and for associating the corresponding hash value to the stored tag.
16. The system according to claim 15, further comprising:
means for updating, in response to a mismatch in the comparison, the accessed cache line identifier in the intermediate memory by the oldest assigned cache line identifier; and
means for allowing, in response to a mismatch in the comparison, access to the cache memory line identified by the oldest assigned cache line identifier.
17. The system according to claim 15, further comprising:
means for updating, in response to a mismatch in the comparison, the accessed cache line identifier in the intermediate memory by the least recently used cache line identifier; and
means for allowing, in response to a mismatch in the comparison, access to the cache memory line identified by the least recently used cache line identifier.
18. The system according to claim 17, further comprising:
means for arranging cache line identifiers within the first limited range in a sorted list according to their use, with the most recently used cache line identifiers at the end of the list, wherein the cache line identifier at the head of the list is used as the least recently used cache line identifier.
19. The system according to claim 13, further comprising:
means for cyclically assigning cache line identifiers within the first limited range to the intermediate memory.
20. The system according to claim 19, further comprising:
means for skipping a cache line identifier in the cyclic assignment of cache line identifiers if the cache line identifier is detected as active.
21. The system according to claim 13, further comprising:
means for assigning the least recently used cache line identifier to the intermediate memory to allow access to the corresponding cache memory line.
22. A method for mapping signal elements to a first limited range of identifiers, comprising:
assigning identifiers within the first limited range to an intermediate memory of a second larger range, thus emulating a larger virtual space of identifiers;
calculating, for each one of a plurality of signal elements, a hash value based on at least part of the signal element for addressing the intermediate memory to access an identifier,
thereby reducing the risk of different signal elements being mapped to the same identifier, the method further comprising:
detecting and handling a clash in the virtual space of identifiers including:
detecting the mapping of a new signal element to the same identifier as a different and previously mapped signal element based on a comparison of the hash value associated with the new signal element and a hash value associated with the already mapped signal element; and
using the identifier for the new signal element in response to detection of such a mapping.
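The generic mapping method of claim 22 amounts to spreading a small identifier pool over a larger hash-indexed table and letting a new element take over an identifier when a clash is detected. A minimal sketch, assuming CRC-32 as the hash and illustrative pool/table sizes (none of which the claim fixes):

```python
import zlib

class IdentifierMapper:
    def __init__(self, num_ids=8, table_size=64):
        # Intermediate memory of the second, larger range: each entry holds
        # an identifier from the first limited range.
        self.table = [i % num_ids for i in range(table_size)]
        self.element_of = [None] * num_ids  # element currently mapped to each id
        self.hash_of = [None] * num_ids     # hash associated with that element
        self.next_old = 0                   # oldest-assigned identifier

    def _hash(self, element: bytes) -> int:
        return zlib.crc32(element) % len(self.table)

    def map(self, element: bytes) -> int:
        h = self._hash(element)
        ident = self.table[h]
        if self.element_of[ident] == element:
            return ident                    # element already mapped
        if self.hash_of[ident] == h:
            # Clash in the virtual space: a different, previously mapped
            # element shares this hash; the new element takes the identifier.
            self.element_of[ident] = element
            return ident
        # Otherwise reassign the oldest identifier to this table entry.
        ident = self.next_old
        self.next_old = (ident + 1) % len(self.element_of)
        self.table[h] = ident
        self.element_of[ident], self.hash_of[ident] = element, h
        return ident
```

The table being larger than the identifier pool is what "emulates a larger virtual space": most distinct elements land in distinct table entries even though only a few identifiers exist.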
23. The method according to claim 22, wherein identifiers within the first limited range are cyclically assigned to the intermediate memory.
24. The method according to claim 22, wherein said signal elements are packet headers and said identifiers are context identifiers, and the hash value is calculated based on at least one header field specified as no-change according to header compression standards.
25. The method according to claim 22, wherein said signal elements are memory addresses and said identifiers are cache line identifiers, and the hash value is calculated based on at least part of the memory address.
26. A method for mapping packet streams to a first limited range of context identifiers, comprising:
assigning context identifiers within the first limited range to an intermediate memory of a second larger range, thus emulating a larger virtual space of context identifiers;
calculating, for each of a plurality of incoming packets, a hash value based on those parts of the packet header that do not change between successive packets in a packet stream for addressing the intermediate memory to access a context identifier,
thereby reducing the risk of packet headers of different packet streams being mapped to the same context identifier, the method further comprising:
detecting and handling a clash in the virtual space of context identifiers including:
comparing, for an incoming packet belonging to a new packet stream, the calculated hash value for the incoming packet with a hash value associated with a header context stored in a context memory at a location corresponding to the context identifier accessed from the intermediate memory by means of the calculated hash value;
allowing, in response to a match in the comparison due to clashing, the context identifier accessed from the intermediate memory to represent the incoming packet and the new packet stream; and
updating the context memory by storing the context-forming part of the packet header of the incoming packet at the location corresponding to the accessed context identifier, and associating the corresponding hash value with the stored header context.
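For the packet-stream case of claim 26, the hash is computed only over header fields that are constant within a stream — the "no-change" fields of header-compression standards such as addresses, ports, and protocol — so every packet of a stream indexes the same intermediate-memory entry. A sketch with illustrative field names and an assumed table size:

```python
import zlib

TABLE_SIZE = 1024  # number of intermediate-memory entries (assumed)

def stream_hash(header: dict) -> int:
    # Concatenate the no-change fields in a fixed order; fields that change
    # between successive packets (sequence numbers, checksums, ...) are
    # deliberately excluded so all packets of a stream hash identically.
    key = "|".join(str(header[f]) for f in
                   ("src_ip", "dst_ip", "src_port", "dst_port", "protocol"))
    return zlib.crc32(key.encode()) % TABLE_SIZE
```

Two packets of the same stream thus always access the same context identifier, while packets of different streams almost always access different entries.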
27. The method according to claim 26, further comprising: updating, in response to a mismatch in the comparison, the accessed context identifier in the intermediate memory by the oldest assigned context identifier, allowing the oldest assigned context identifier to represent the incoming packet header.
28. The method according to claim 26, wherein context identifiers within the first limited range are cyclically assigned to the intermediate memory to represent new packet streams.
29. The method according to claim 28, further comprising:
skipping a context identifier in the cyclic assignment of context identifiers if the context identifier is detected as active.
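The cyclic assignment with skipping of claims 28–29 is a round-robin allocator that passes over identifiers still marked active. A minimal sketch; how "active" is detected (e.g. a live stream timer) is left open by the claims, so the flag array below is an assumption:

```python
class CyclicAllocator:
    def __init__(self, num_ids: int):
        self.num_ids = num_ids
        self.active = [False] * num_ids  # activity detection is assumed external
        self.next_id = 0

    def allocate(self) -> int:
        # Scan at most one full cycle for an inactive identifier; an active
        # identifier is skipped, not reassigned (claim 29).
        for _ in range(self.num_ids):
            cand = self.next_id
            self.next_id = (self.next_id + 1) % self.num_ids
            if not self.active[cand]:
                self.active[cand] = True
                return cand
        raise RuntimeError("all identifiers active")
```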
30. The method according to claim 26, further comprising:
updating, in response to a mismatch in the comparison, the accessed context identifier in the intermediate memory by the least recently used context identifier, allowing this context identifier to represent the packet header.
31. The method according to claim 30, further comprising:
arranging context identifiers within the first limited range in a sorted list according to their use for representing packet streams with the most recently used context identifiers at the end of the list, wherein the context identifier at the head of the list is used as the least recently used context identifier.
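The sorted list of claims 30–31 is the classic LRU discipline: each use moves an identifier to the end of the list, so the head is always the least recently used and is the one reclaimed on a mismatch. A sketch using Python's `OrderedDict` (an implementation choice, not the patent's):

```python
from collections import OrderedDict

class LRUList:
    def __init__(self, ids):
        # Insertion order doubles as use order: head = least recently used.
        self.order = OrderedDict((i, None) for i in ids)

    def touch(self, ident):
        # Most recently used identifier moves to the end of the list.
        self.order.move_to_end(ident)

    def least_recently_used(self):
        return next(iter(self.order))  # head of the list
```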
32. The method according to claim 26, further comprising:
assigning the least recently used context identifier to the intermediate memory to represent a packet belonging to a new packet stream.
33. A method for mapping memory addresses to a first limited range of cache line identifiers, comprising:
assigning cache line identifiers within the first limited range to an intermediate memory of a second larger range, thus emulating a larger virtual space of cache line identifiers;
calculating, for each of said memory addresses, a hash value based on at least part of the memory address for addressing the intermediate memory to access a cache line identifier,
thereby reducing the risk of different memory addresses being mapped to the same cache line identifier, the method further comprising:
detecting and handling a clash in the virtual space of cache line identifiers including:
comparing, for a new memory address, the calculated hash value of the memory address with a hash value associated with a tag stored in a cache memory line that corresponds to the cache line identifier accessed from the intermediate memory by means of the calculated hash value;
allowing, in response to a match in the comparison due to clashing, access to the cache memory line identified by the accessed cache line identifier; and
updating the cache memory line identified by the accessed cache line identifier by storing the tag of the new memory address, and associating the corresponding hash value with the stored tag.
34. The method according to claim 33, further comprising:
updating, in response to a mismatch in the comparison, the accessed cache line identifier in the intermediate memory by the oldest assigned cache line identifier, allowing access to the cache memory line identified by the oldest assigned cache line identifier.
35. The method according to claim 33, wherein cache line identifiers within the first limited range are cyclically assigned to the intermediate memory.
36. The method according to claim 35, further comprising:
skipping a cache line identifier in the cyclic assignment of cache line identifiers if the cache line identifier is detected as active.
37. The method according to claim 33, further comprising:
updating, in response to a mismatch in the comparison, the accessed cache line identifier in the intermediate memory by the least recently used cache line identifier, allowing access to the cache memory line identified by this cache line identifier.
38. The method according to claim 37, further comprising:
arranging cache line identifiers within the first limited range in a sorted list according to their use, with the most recently used cache line identifiers at the end of the list, wherein the cache line identifier at the head of the list is used as the least recently used cache line identifier.
39. The method according to claim 33, further comprising:
assigning the least recently used cache line identifier to the intermediate memory to allow access to the corresponding cache memory line.
US10/450,827 2000-12-20 2001-12-12 Efficient mapping of signal elements to a limited range of identifiers Expired - Fee Related US7197622B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SE00047365 2000-12-20
SE0004736A SE0004736D0 (en) 2000-12-20 2000-12-20 Mapping system and method
PCT/SE2001/002746 WO2002051098A1 (en) 2000-12-20 2001-12-12 Efficient mapping of signal elements to a limited range of identifiers

Publications (2)

Publication Number Publication Date
US20040221132A1 US20040221132A1 (en) 2004-11-04
US7197622B2 true US7197622B2 (en) 2007-03-27

Family

ID=20282318

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/450,827 Expired - Fee Related US7197622B2 (en) 2000-12-20 2001-12-12 Efficient mapping of signal elements to a limited range of identifiers

Country Status (7)

Country Link
US (1) US7197622B2 (en)
EP (1) EP1346537B1 (en)
AT (1) ATE438249T1 (en)
AU (1) AU2002222858A1 (en)
DE (1) DE60139418D1 (en)
SE (1) SE0004736D0 (en)
WO (1) WO2002051098A1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100358288C (en) * 2003-07-12 2007-12-26 华为技术有限公司 Method for processing five-membered stream group in network equipment
DE602005008824D1 (en) 2004-06-23 2008-09-25 Samsung Electronics Co Ltd A method for configuring and updating connection detection in a wireless access network
KR100584336B1 (en) * 2004-06-24 2006-05-26 삼성전자주식회사 System and method for connection identification allocation in a broadband wireless access communication system
US20060176882A1 (en) * 2005-02-10 2006-08-10 Beceem Communications Inc. Method and system of early indication for multi-user wireless communication systems
FR2884329A1 (en) * 2005-04-11 2006-10-13 St Microelectronics Sa Data and address coherence verifying method for chip card, involves calculating current signature of data with help of function taking into account data address, and verifying coherence between current signature and recorded signature
CN1893725B (en) * 2005-07-06 2011-09-14 华为技术有限公司 Method for regulating sleep group information in wireless access-in system
US7613669B2 (en) * 2005-08-19 2009-11-03 Electronics And Telecommunications Research Institute Method and apparatus for storing pattern matching data and pattern matching method using the same
US8161353B2 (en) 2007-12-06 2012-04-17 Fusion-Io, Inc. Apparatus, system, and method for validating that a correct data segment is read from a data storage device
US8151082B2 (en) * 2007-12-06 2012-04-03 Fusion-Io, Inc. Apparatus, system, and method for converting a storage request into an append data storage command
WO2008070814A2 (en) * 2006-12-06 2008-06-12 Fusion Multisystems, Inc. (Dba Fusion-Io) Apparatus, system, and method for a scalable, composite, reconfigurable backplane
JP4405533B2 (en) * 2007-07-20 2010-01-27 株式会社東芝 Cache method and cache device
GB2474250B (en) * 2009-10-07 2015-05-06 Advanced Risc Mach Ltd Video reference frame retrieval
CN102736986A (en) * 2011-03-31 2012-10-17 国际商业机器公司 Content-addressable memory and data retrieving method thereof
EP2536098A1 (en) * 2011-06-16 2012-12-19 Alcatel Lucent Method and apparatuses for controlling encoding of a dataflow
JP2014179844A (en) * 2013-03-15 2014-09-25 Nec Corp Packet transmission device, packet transmission method and packet transmission system
JP6342143B2 (en) * 2013-12-02 2018-06-13 株式会社Nttドコモ Base station apparatus and context control method
JP2015156524A (en) * 2014-02-19 2015-08-27 株式会社Nttドコモ communication device, and context control method
JP6692057B2 (en) * 2014-12-10 2020-05-13 パナソニックIpマネジメント株式会社 Transmission method, reception method, transmission device, and reception device
KR102318477B1 (en) * 2016-08-29 2021-10-27 삼성전자주식회사 Stream identifier based storage system for managing array of ssds
WO2020002158A1 (en) 2018-06-25 2020-01-02 British Telecommunications Public Limited Company Processing local area network diagnostic data
GB2575246A (en) * 2018-06-25 2020-01-08 British Telecomm Processing local area network diagnostic data
CN113824606B (en) * 2020-06-19 2023-10-24 华为技术有限公司 Network measurement method and device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4558302B1 (en) * 1983-06-20 1994-01-04 Unisys Corp
US4558302A (en) * 1983-06-20 1985-12-10 Sperry Corporation High speed data compression and decompression apparatus and method
US4587610A (en) 1984-02-10 1986-05-06 Prime Computer, Inc. Address translation systems for high speed computer memories
US4864572A (en) * 1987-05-26 1989-09-05 Rechen James B Framing bitstreams
US5001478A (en) * 1989-12-28 1991-03-19 International Business Machines Corporation Method of encoding compressed data
US5049881A (en) * 1990-06-18 1991-09-17 Intersecting Concepts, Inc. Apparatus and method for very high data rate-compression incorporating lossless data compression and expansion utilizing a hashing technique
US5131016A (en) * 1991-01-09 1992-07-14 International Business Machines Corporation Communications network data compression control system and method
EP0522743A1 (en) 1991-06-26 1993-01-13 Digital Equipment Corporation Combined hash table and CAM address recognition in a network
US5530958A (en) * 1992-08-07 1996-06-25 Massachusetts Institute Of Technology Cache memory system and method with multiple hashing functions and hash control storage
US5390173A (en) * 1992-10-22 1995-02-14 Digital Equipment Corporation Packet format in hub for packet data communications system
US5414704A (en) * 1992-10-22 1995-05-09 Digital Equipment Corporation Address lookup in packet data communications link, using hashing and content-addressable memory
US5530829A (en) * 1992-12-17 1996-06-25 International Business Machines Corporation Track and record mode caching scheme for a storage system employing a scatter index table with pointer and a track directory
US5530834A (en) * 1993-03-30 1996-06-25 International Computers Limited Set-associative cache memory having an enhanced LRU replacement strategy
US5477537A (en) 1993-04-06 1995-12-19 Siemens Aktiengesellschaft Method for accessing address features of communication subscribers when sending data packets
US5751990A (en) * 1994-04-26 1998-05-12 International Business Machines Corporation Abridged virtual address cache directory
US5754819A (en) * 1994-07-28 1998-05-19 Sun Microsystems, Inc. Low-latency memory indexing method and structure
US5592392A (en) * 1994-11-22 1997-01-07 Mentor Graphics Corporation Integrated circuit design apparatus with extensible circuit elements
US5701432A (en) * 1995-10-13 1997-12-23 Sun Microsystems, Inc. Multi-threaded processing system having a cache that is commonly accessible to each thread
US5860153A (en) * 1995-11-22 1999-01-12 Sun Microsystems, Inc. Memory efficient directory coherency maintenance
US5920900A (en) 1996-12-30 1999-07-06 Cabletron Systems, Inc. Hash-based translation method and apparatus with multiple level collision resolution
US6097725A (en) 1997-10-01 2000-08-01 International Business Machines Corporation Low cost searching method and apparatus for asynchronous transfer mode systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Donald Knuth, "The Art of Computer Programming, Vol. 3: Sorting and Searching", second edition, 1998, Addison-Wesley, pp. 513-523. *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050055464A1 (en) * 2003-09-04 2005-03-10 International Business Machines Corp. Header compression in messages
US20070299988A1 (en) * 2003-09-04 2007-12-27 Weller Scott W Header Compression in Messages
US7398325B2 (en) * 2003-09-04 2008-07-08 International Business Machines Corporation Header compression in messages
US7594036B2 (en) 2003-09-04 2009-09-22 International Business Machines Corporation Header compression in messages
US20090319630A1 (en) * 2003-09-04 2009-12-24 International Business Machines Corporation Header Compression in Messages
US7966425B2 (en) 2003-09-04 2011-06-21 International Business Machines Corporation Header compression in messages
US20050271156A1 (en) * 2004-06-07 2005-12-08 Nahava Inc. Method and apparatus for cached adaptive transforms for compressing data streams, computing similarity, and recognizing patterns
US7664173B2 (en) * 2004-06-07 2010-02-16 Nahava Inc. Method and apparatus for cached adaptive transforms for compressing data streams, computing similarity, and recognizing patterns
US20100098151A1 (en) * 2004-06-07 2010-04-22 Nahava Inc. Method and Apparatus for Cached Adaptive Transforms for Compressing Data Streams, Computing Similarity, and Recognizing Patterns
US8175144B2 (en) 2004-06-07 2012-05-08 Nahava Inc. Method and apparatus for cached adaptive transforms for compressing data streams, computing similarity, and recognizing patterns
US20060133494A1 (en) * 2004-12-17 2006-06-22 Rahul Saxena Image decoder with context-based parameter buffer
US8855143B1 (en) * 2005-04-21 2014-10-07 Joseph Acampora Bandwidth saving system and method for communicating self describing messages over a network
US20130268644A1 (en) * 2012-04-06 2013-10-10 Charles Hardin Consistent ring namespaces facilitating data storage and organization in network infrastructures
US9628438B2 (en) * 2012-04-06 2017-04-18 Exablox Consistent ring namespaces facilitating data storage and organization in network infrastructures
US9552382B2 (en) 2013-04-23 2017-01-24 Exablox Corporation Reference counter integrity checking
US9514137B2 (en) 2013-06-12 2016-12-06 Exablox Corporation Hybrid garbage collection
US9715521B2 (en) 2013-06-19 2017-07-25 Storagecraft Technology Corporation Data scrubbing in cluster-based storage systems
US9934242B2 (en) 2013-07-10 2018-04-03 Exablox Corporation Replication of data between mirrored data sites
US10248556B2 (en) 2013-10-16 2019-04-02 Exablox Corporation Forward-only paged data storage management where virtual cursor moves in only one direction from header of a session to data field of the session
US9985829B2 (en) 2013-12-12 2018-05-29 Exablox Corporation Management and provisioning of cloud connected devices
US9774582B2 (en) 2014-02-03 2017-09-26 Exablox Corporation Private cloud connected device cluster architecture
US9830324B2 (en) 2014-02-04 2017-11-28 Exablox Corporation Content based organization of file systems
US10474654B2 (en) 2015-08-26 2019-11-12 Storagecraft Technology Corporation Structural data transfer over a network
US9846553B2 (en) 2016-05-04 2017-12-19 Exablox Corporation Organization and management of key-value stores

Also Published As

Publication number Publication date
EP1346537A1 (en) 2003-09-24
EP1346537B1 (en) 2009-07-29
ATE438249T1 (en) 2009-08-15
US20040221132A1 (en) 2004-11-04
AU2002222858A1 (en) 2002-07-01
WO2002051098A1 (en) 2002-06-27
DE60139418D1 (en) 2009-09-10
SE0004736D0 (en) 2000-12-20

Similar Documents

Publication Publication Date Title
US7197622B2 (en) Efficient mapping of signal elements to a limited range of identifiers
CN109921996B (en) High-performance OpenFlow virtual flow table searching method
US7418505B2 (en) IP address lookup using either a hashing table or multiple hash functions
US6775281B1 (en) Method and apparatus for a four-way hash table
US7539032B2 (en) Regular expression searching of packet contents using dedicated search circuits
US6434144B1 (en) Multi-level table lookup
EP1438818B1 (en) Method and apparatus for a data packet classifier using a two-step hash matching process
US7069268B1 (en) System and method for identifying data using parallel hashing
US6826561B2 (en) Method and apparatus for performing a binary search on an expanded tree
US8335780B2 (en) Scalable high speed relational processor for databases and networks
US7447230B2 (en) System for protocol processing engine
US8345685B2 (en) Method and device for processing data packets
US20060253606A1 (en) Packet transfer apparatus
US20020138648A1 (en) Hash compensation architecture and method for network address lookup
US20070171911A1 (en) Routing system and method for managing rule entry thereof
US20080071779A1 (en) Method and apparatus for managing multiple data flows in a content search system
US7680806B2 (en) Reducing overflow of hash table entries
US20030050762A1 (en) Method and apparatus for measuring protocol performance in a data communication network
US20080071780A1 (en) Search Circuit having individually selectable search engines
Hasan et al. Chisel: A storage-efficient, collision-free hash-based network processing architecture
US7653798B2 (en) Apparatus and method for controlling memory allocation for variable size packets
US7403526B1 (en) Partitioning and filtering a search space of particular use for determining a longest prefix match thereon
US20140358886A1 (en) Internal search engines architecture
US7385983B2 (en) Network address-port translation apparatus and method
US7653070B2 (en) Method and system for supporting efficient and cache-friendly TCP session lookup operations based on canonicalization tags

Legal Events

Date Code Title Description
AS Assignment

Owner name: TELEFONAKTIEBOLAGET LM ERICSSON, SWEDEN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TORKELSSON, KJEIL;KLIN, LARS-ORJAN;AHL, HAKAN OTTO;AND OTHERS;REEL/FRAME:014527/0273;SIGNING DATES FROM 20011126 TO 20011205

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190327