WO1998012651A1 - Cascadable content addressable memory and system - Google Patents

Cascadable content addressable memory and system

Info

Publication number
WO1998012651A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
cam
output
address
cascade
Prior art date
Application number
PCT/US1997/014979
Other languages
French (fr)
Inventor
Robert Alan Kempke
Anthony J. Mcauley
Michael Philip Lamacchia
Original Assignee
Motorola Inc.
Bell Communications Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc., Bell Communications Research filed Critical Motorola Inc.
Priority to AU41624/97A priority Critical patent/AU4162497A/en
Publication of WO1998012651A1 publication Critical patent/WO1998012651A1/en

Links

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C15/00 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/903 - Querying
    • G06F16/90335 - Query processing
    • G06F16/90339 - Query processing by using parallel associative memories or content-addressable memories

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A pipelined, cascaded content addressable memory (CAM) system for sequentially processing input data includes an input register, a CAM core, cascade logic, and an output register. As the memory association functions produce matches in the CAM core, the cascade logic composites, in parallel, data associated with each matching CAM core location. Each cascade stage processes a separate data input simultaneously and then passes the cumulative results on to the next stage.

Description

CASCADABLE CONTENT ADDRESSABLE MEMORY AND SYSTEM
Field of the Invention
The present invention relates generally to semiconductor Content Addressable Memory (CAM) and systems, and more particularly, to a pipelined cascadable CAM device, and a system using a plurality of such devices in cascade.
Background of the Invention
Content addressable memory devices (CAMs) are extremely valuable in providing associative look-up based on contents of incoming data. A CAM is pre-loaded with a predefined data set, consisting of data to be compared, and optionally, data to be output when a match is found, or alternatively, the address where the match is found. The output data or address can be output as an index to the requesting device, or both the address and data can be output for each match.
One problem in using CAMs is that a CAM chip requires a multiple of the number of transistors needed to implement an equivalent amount of standard read/write random access memory (RAM). Thus, CAM chips typically offer much less depth than RAM chips, and the capacity of a single CAM chip is frequently inadequate to provide the necessary associative lookups. It therefore becomes necessary to use multiple CAM chips in some cascaded or interconnected manner to provide greater depth.
Current binary CAM devices use nearly 4 million transistors and have reached a memory size of 2k by 64. However, ATM and other applications require much more memory, such as 128k by 64, which requires cascading 64 of the 2k-by-64 CAMs. Current CAM devices present a propagation delay of around 80 ns per CAM. Cascading 64 CAMs creates match propagation and data compare delays in the microseconds, which is unacceptable. High data rate applications which require 128k of CAM currently do not function effectively.
Another major problem with this approach is that there is a variable latency in this architecture, where the time taken to find a match is widely variable from associative look-up to associative look-up, due to the fact that there is uncertainty as to how many CAM chips in the chain will have to be accessed, one at a time in turn, until a match is found. CAM data input lines must be run in parallel to all of the chips in the cascade chain, and control logic and intercoupling must be provided between the multiple chips in the cascade chain.
This configuration is ineffective for handling multiple CAM matches for a single input. Data to be recognized by the system as acceptable in a CAM compare may be within a range. Therefore, it is efficient for a single CAM location to accommodate a range of data.
This, however, can ultimately create multiple matches for a single input.
A parallel CAM configuration can handle multiple matches, but this requires an onerous subsystem and is very slow. Processing is normally done by the processor that loaded the data initially.
Therefore, the system is at a standstill until the processor is free to load more data.
Another prior art attempt at greater CAM system efficiency couples the input and output data in parallel and the chip control logic in series. Here each CAM chip passes control down the line to the next chip serially. Naturally, the first CAM chip is idle while each successive chip compares the input word. As stated earlier, cascading 64 CAM chips for a required application creates a slow system due to this bottleneck. Each added CAM chip adds a propagation delay to the system; 64 chips would result in a minimum of 64 propagation delays between input and output. This type of system also requires a controller to synchronize the input and output of data, since the combinational logic in the control creates indeterminate delays. In a parallel data, serial control system, if no match is found in a first CAM chip, it passes data to the next chip and the first CAM chip goes idle until possibly every CAM location is checked. Allowing the majority of the circuits to idle during a search is an inefficient use of CAM chips.
Current cascaded CAMs are also slow because after the lookup process is complete, masking, handshaking, and housekeeping are required and are also performed in series. While these functions are being performed, the memory association circuits are again idle. No processing can occur until an output from the system is produced and new data is loaded. This so-called "wait and see" approach is much too slow for the currently desired data transfer rates. Each added stage compounds the CAM lookup delay.
The prior art does not provide the capability of reading out multiple CAM location matches within a CAM chip or system. Indeed, multiple matches within an associative memory device create bus contention or bus conflict from every match location trying to output data at the same time. In prior art systems, after attaining a memory address from the CAM lookup tables, auxiliary RAM is sometimes used to retrieve further needed data. This function requires external processing and a plurality of address lines. As CAM usage and memory requirements grow, there is a need to increase density and to maintain or increase system speed, without the problems and shortcomings of idle circuits and unpredictable latency.
Brief Description of the Drawings
FIG. 1 shows a block diagram of a multiple stage pipelined CAM cascade system that includes a plurality of CAM chips, in accordance with one aspect of the present invention.
FIG. 2 shows a block diagram of one embodiment of the plurality of CAM chips of FIG. 1.
FIG. 3 shows a block diagram of an alternate embodiment of the plurality of CAM chips of FIG. 1.
FIG. 4 illustrates a preferred embodiment of the cascade address generation logic used in each CAM chip, in accordance with one aspect of the present invention.
FIG. 5 illustrates a block diagram of the address calculation logic, in accordance with one embodiment of the present invention.
FIG. 6 illustrates a timing diagram associated with a multiple stage multiple-CAM chip, showing the timing of the first two stages of FIG. 1.
FIG. 7 illustrates an ATM network embodiment utilizing the CAM memory system, in accordance with the present invention.
Detailed Description of a Preferred Embodiment
The present invention provides a pipelined CAM cascade system for memory association devices. The system provides sequential pipelined processing of input data within each stage (chip) and as a system. This is accomplished by each cascade stage performing a lookup and supplying an output to combinational logic if a match is found, then passing the input data to the next stage.
Each stage processes a separate input word simultaneously with the other stages. After the input word is processed, each stage outputs the word to the next stage and a new word is accepted for processing. In accordance with one aspect of the present invention, data is processed in a plurality of cascaded CAMs using combinational logic in parallel with the memory association functions, providing for the input word to be associated with data as it traverses the cascade. In a preferred embodiment, an input word is output from every CAM stage each clock cycle (after an initial loading latency), allowing immediate usage of the first stage by the next input word. This creates a pipelined configuration where input data is loaded, and processed data is simultaneously output, every clock cycle. Each CAM chip (i.e., stage) is itself a multiple stage pipelined device. The first stage thus processes new input data concurrently with the output stage providing output of processed data. At the final CAM stage, after the initial latency of loading the pipeline, new match results are generated every clock cycle.
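As a quick illustration of the throughput claim, the following sketch (Python, illustrative only and not part of the patent) computes the fill latency of an N-stage pipeline: the first result appears after N clock cycles, and every additional word costs only one more cycle.

```python
def pipeline_cycles(num_stages: int, num_words: int) -> int:
    """Clock cycles to push num_words through a num_stages-deep pipeline.

    The first word pays the full fill latency (num_stages cycles);
    every word after that completes one cycle later.
    """
    if num_words == 0:
        return 0
    return num_stages + (num_words - 1)

# Example: a 64-chip cascade needs 64 cycles to produce its first match
# result, then sustains one complete multi-stage search result per cycle.
assert pipeline_cycles(num_stages=64, num_words=1) == 64
assert pipeline_cycles(num_stages=64, num_words=1000) == 1063
```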
Referring to FIG. 1 , a multiple stage pipelined CAM chip cascaded CAM memory system is illustrated, in accordance with one aspect of the present invention. In accordance with the present invention, a cascaded CAM system for processing incoming input data is provided. The memory system is comprised of a plurality of pipelined CAM subsystems 101 -103, coupled together in a cascaded chain of stages, as shown. Data flows to an initial stage, then subsequent stages, and lastly to a final stage. Each stage is comprised of a CAM core (e.g., 110), an input register (e.g., 140), an output register (e.g., 150), and cascade logic (e.g., 160). The input register receives the incoming data that includes a data word, cascade data, and op code data, which are each described later. The CAM core is comprised of content addressable memory for storing predefined data at addressable locations and comparing subsequent incoming data to the stored predefined data. The cascade logic creates a composite history of important parameters determined by activity in preceding stages. The output register is coupled to the cascade logic and the CAM core to provide outputs to the successive stage. The output is comprised of a data word, an op code output, and a cascade output from the CAM core and the cascade logic, as later described. The CAM core associates the stored predefined data in the CAM core with the incoming data word, and, responsive to determining a match between a content addressable memory location of the CAM core and the data word, produces an address location responsive to the op code data.
The cascade interface logic indicates whether a match has occurred anywhere in the CAM core, and whether multiple matches have occurred. The address location represents the lowest order address where a match was found in the CAM core. If no match has occurred, the CAM core provides an output of an address for a next location after a last addressable location within that subsystem. The data output, the op code output, and the cascade output from each of the CAM subsystems are coupled to the data input, the op code input, and the cascade input, as the incoming data, to the input register of the subsequent CAM subsystem.
The initial CAM subsystem has its data and op code input signals coupled from an external host processor, and has its cascade inputs coupled to a predefined set of signals (in a preferred embodiment, all zeros). The system also has a timing subsystem for providing synchronizing signals to all CAM subsystems. This ensures pipelined transfer of at least part of the input data between the CAM subsystems. The cascaded CAM system also provides multiple matching address locations when the user requests addresses for all the matched locations. The system logic-ORs, on a bit-wise basis, the associated RAM data for all of the multiple matching locations.
In accordance with the present invention, a cascadable pipelined content addressable memory subsystem accepts input CAM data, RAM data, Op code data, and Cascade data. The subsystem has an input register for storing and outputting the input CAM data, RAM data, Op code data, and Cascade data. The input register feeds a CAM core comprised of CAM memory locations, associative RAM memory, and a CAM comparator. The cascade data inputs feed a cascade logic subsystem coupled to the input register for combinationally determining cascade conditions and for providing an output of cascade conditions, responsive to the input CAM data.
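To help visualize the bundle that travels from stage to stage, here is a hedged Python sketch of the inter-stage signals described above (data word, op code, and the cascade fields); the record and field names are illustrative, not the patent's actual pin or signal names.

```python
from dataclasses import dataclass

@dataclass
class StageBus:
    """Illustrative model of the data passed between cascaded CAM stages."""
    data_word: int                # the word being searched, fed forward unaltered
    op_code: int                  # command (search, next, write, ...), may be overridden
    # Cascade fields: the running "composite history" built up by prior stages.
    address: int = 0              # base address, or lowest-order matching address so far
    address_valid: bool = False   # a match has been found somewhere upstream
    address_more: bool = False    # more than one match has occurred
    ram_data: int = 0             # bit-wise OR of companion RAM data from all matches so far

# The initial stage sees all cascade inputs tied to a predefined value
# (all zeros in the preferred embodiment).
stage0_input = StageBus(data_word=0x1234ABCD, op_code=0x1)
print(stage0_input)
```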
Data word (105) is loaded by a host processor (90) into stage 0 (101). Incoming data words (105) and op code (60) are loaded into an input register (140). In operation, a host processor (90) supplies a write instruction as an op code input (60) to stage 0 (101) synchronized by the timing generator.
Each input data word (105) is clocked through the system pursuant to the host processor's op codes (60). The op codes (60) provide a command set which controls the operation of the CAM. The op code (60) for normal operation includes commands such as:
RESET: command used to initialize the CAM device. This clears out all of the entries and internal registers and is ready for programming after a power-up condition.
MASK: command used to load a bank of internal registers that are subsequently used in the binary-to-ternary conversion process. Bits that are set in the mask registers will be converted to an "X" when stored in the memory array, or set to an "X" during a subsequent search operation.
SEARCH: command executes the primary function of the CAM chip. This command compares each word in the CAM array to the Data Input to determine if any matches are present. If there is a match or a multiple match condition, the lowest matching address will be enabled.
NEXT: command used to determine the address of the next matching location when multiple matches are present. The Next command must be executed immediately after the search command and must contain the identical search parameters to obtain a valid result.
DELETE: command used to individually remove entries programmed into the CAM device. After a specific entry in the CAM is no longer required, the Delete command is used to remove it from the CAM tables. All other entries remain valid in the CAM memory space.
NOP: command used when no other operation is to be executed. This can be used while the system is waiting for additional commands or data from the system. No operations are executed for this command.
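The command set above lends itself to a simple software model. The sketch below (Python, with hypothetical names and encodings) enumerates the op codes and checks the documented constraint that NEXT must immediately follow SEARCH with identical search parameters.

```python
from enum import Enum, auto

class OpCode(Enum):
    RESET = auto()   # clear all entries and internal registers
    MASK = auto()    # load mask registers used for binary-to-ternary conversion
    SEARCH = auto()  # compare every CAM word to the data input
    NEXT = auto()    # report the next matching address after a SEARCH
    DELETE = auto()  # remove a single entry from the CAM tables
    NOP = auto()     # no operation

def check_next_usage(history: list[tuple[OpCode, int]]) -> bool:
    """NEXT is only valid immediately after SEARCH with the same search word."""
    if len(history) < 2:
        return False
    (prev_op, prev_word), (cur_op, cur_word) = history[-2], history[-1]
    return cur_op is OpCode.NEXT and prev_op is OpCode.SEARCH and prev_word == cur_word

# Valid: SEARCH followed by NEXT carrying the identical search parameters.
assert check_next_usage([(OpCode.SEARCH, 0xABCD), (OpCode.NEXT, 0xABCD)])
# Invalid: the search word changed between SEARCH and NEXT.
assert not check_next_usage([(OpCode.SEARCH, 0xABCD), (OpCode.NEXT, 0x1234)])
```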
Referring again to FIG. 1, each intermediate stage (i.e., those except the initial and final stages) has its cascade inputs and outputs coupled to previous and successive stages, to form a cascaded CAM pipeline. The cascade input (50) receives data from previous cascade stages, such as handshaking, matching address data, and the associated RAM data produced by the preceding CAM stage. Since stage 0 (101) has no preceding stages, all cascade inputs to stage 0 are normally grounded. Each successive stage is fed by the output of the previous stage. The basic data channels, data words, op code, and cascade signals are maintained through each CAM stage (101, 102, 103). In the preferred embodiment, the data word (105) is fed forward unaltered. However, in other embodiments, RAM contents or other data may change it. The op code (60) is fed forward unaltered unless interrupted by an overriding command. The op code (60) represents commands for unique functions in each of the subsystems. An overriding command may be produced by the CAM device, such as a write disable, or by the host processor, such as a reset. In the preferred embodiment, if a RAM chip's memory buffer gets filled, the CAM chip will output a write disable as part of its op code to notify a downstream chip of a change in priorities.
The cascade logic (160) updates its data continuously, in real time. The cascade logic (160) processes the cascade data in parallel with the CAM core (110). When a data word (105) enters the CAM core portion (110) of stage 0, the data word is compared to the contents of the CAM, searching for a match. The cascade logic is updated responsive to finding a match, and utilizes its associated data. The cascade logic receives previously resolved data, a base address, whether a valid address has been found, and whether more than one CAM match has occurred.
In a preferred embodiment, each CAM stage (101, 102, 103) is capable of supplying 2k of CAM memory words with which the data word (105) is compared. The successive CAM stages utilize what the previous CAM stage has found. The last stage of the pipelined cascaded CAM system (102) outputs the first match found, or the lowest ordered address, and the composite OR-ed associated RAM data from every match which occurred in the system.
Continuous real time parallel processing of the cascade logic with the CAM compare function allows sequential processing of data words. When the pipeline is full, a different data word exists in each stage. During each clock cycle, a data word enters the first stage as another exits the system. In this manner, a high speed data rate can be sustained, where a new multiple-stage search result is provided every clock cycle. Thus, an N-stage pipeline will take N clock cycles to fill the pipeline and give the first match output results. Thereafter, however, a new N-stage processed match output is provided on each clock cycle, providing zero variation in latency and high speed communication.
Referring to FIG. 2, showing a single CAM stage, the host processor (90) starts the pipeline process by producing a search command synchronized by the timing generator. In the preferred embodiment, the data is converted from binary to ternary between the input register and the CAM core to allow for multiple matches within the CAM core. The search command clocks the data word into the input register (140) and starts a CAM compare cycle of the CAM stored data with the input register contents, which produces an output from the CAM core.
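As an aid to reading FIGS. 1 and 2, the following Python sketch models one stage's contribution under simplifying assumptions (exact binary compare, no masking, not cycle accurate): each stage ORs in the companion RAM data of its local matches, keeps the lowest-order match address found so far, and otherwise forwards a new base address along with the unaltered data word.

```python
def cam_stage(stored_words, companion_ram, data_word, addr_prev, valid_prev, ram_prev):
    """Behavioral sketch of one cascaded CAM stage (illustrative, not cycle accurate).

    stored_words  -- CAM entries held by this stage
    companion_ram -- associated RAM word for each CAM entry
    The remaining arguments are the cascade bundle from the previous stage;
    the return value is the bundle forwarded to the next stage.
    """
    matches = [i for i, word in enumerate(stored_words) if word == data_word]
    for i in matches:
        ram_prev |= companion_ram[i]            # wire-OR the companion RAM data
    if valid_prev:
        addr_next, valid_next = addr_prev, True                        # keep upstream (lower) address
    elif matches:
        addr_next, valid_next = addr_prev + matches[0], True           # lowest-order local match
    else:
        addr_next, valid_next = addr_prev + len(stored_words), False   # new base address
    return data_word, addr_next, valid_next, ram_prev

# Two small 4-word stages; 0xAA is stored at global addresses 2 and 4.
bundle = (0xAA, 0, False, 0x0000)               # stage 0 cascade inputs grounded
bundle = cam_stage([0x01, 0x02, 0xAA, 0x03], [0x0010, 0x0020, 0x0040, 0x0080], *bundle)
bundle = cam_stage([0xAA, 0x05, 0x06, 0x07], [0x0004, 0x0008, 0x0001, 0x0002], *bundle)
assert bundle == (0xAA, 2, True, 0x0044)        # lowest match address and OR'ed RAM
```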
In a preferred embodiment, each CAM memory location (250), which consists of 64 bits, has associated with it 16 bits of RAM (200) (companion RAM) and a match buffer (400). The match buffer (400) is used to record whether a match at that CAM location occurs. Each CAM location has a physical address associated with it. Each matching CAM location produces its corresponding RAM data (200), which is bit-wise wire-OR'ed with the previously developed and incoming RAM data (30). The incoming RAM data is the wire-OR'ed RAM contents of all preceding matched CAM core locations. The companion RAM can be used for numerous purposes, such as security functions. The wire-OR'ed RAM data is wire-OR'ed in the logic (350) in each stage (device) throughout the pipelined system to produce a composite wire-OR'ed RAM value. The system also allows the user to see any and all of the addresses that produced the final wire-OR'ed companion RAM data with a NEXT op-code instruction. The NEXT instruction can be used, for example, in troubleshooting.
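For example, if three matching locations hold the companion RAM words 0x0013, 0x0802, and 0x0040 (illustrative values, not from the patent), the composite passed down the cascade is simply their bit-wise OR:

```python
incoming_ram = 0x0000                          # wire-OR'ed RAM contents of all preceding matches
for ram_word in (0x0013, 0x0802, 0x0040):      # RAM data of each matching location
    incoming_ram |= ram_word
assert incoming_ram == 0x0853                  # composite wire-OR'ed RAM value passed onward
```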
In the case where no match is found in a CAM stage (100), the output of the CAM stage places its highest address location in the cascade output. This address is called the base address. The subsequent stage starts its address locations where the previous stage left off. In the preferred embodiment, each CAM stage contains 2048 addresses. If no match occurs in stage 0, stage 0 will output 2048 as a cascade output address. If no match occurs as of stage 1, stage 1 will output 4096; then stage 2 will output 6144; and so on.
Referring to FIG. 3, in a preferred embodiment, the data word (105), as initially input, is converted from binary to ternary in the binary-to-ternary (B/T) converter (150) pursuant to control logic, as illustrated in Table 1 below, prior to any CAM compare operations. This conversion allows user input masking. Masking of bits allows certain bit compares to be "don't cares". Masking is very important in most lookups, as well as sort and filtering functions that use CAMs, such as address resolution, password security (e.g., encryption and decryption), Virtual LAN groupings, asynchronous transfer mode (ATM) addressing (VPI/VCI) resolution, etc.
Special op-codes are available for loading CAM data into the CAM memory (250) and mask data into the RAM mask registers (460) of the CAM core (100). Subsequent comparing of input data to the stored ternary data is accomplished pursuant to control logic, as illustrated in Table 2, also below. Parallel masking (460) and cascade logic (600) allow sequential processing of the data words through the overall pipeline system and pipelined operation within the CAM core subsystem (100). Other alternative embodiments can store binary data in a binary CAM and provide separately for masking of each compare within the CAM core (110).
Table 1 illustrates the binary-to-ternary conversion;
TABLE 1 - Write Table (B->T Conversion)

Ternary   A   B   RA   RB
 "N"      0   0   1    1
 "1"      0   1   1    0
 "0"      1   0   0    1
 "X"      1   1   0    0
while Table 2 illustrates how ternary data is compared.
TABLE 2 - Match Table (T Compare) [reproduced only as an image in the original publication; it lists the ternary symbol (N, 0, 1, X), the corresponding ternary data outputs A and B, and the resulting match output]
Tables 1 and 2 show four ternary codes for conversion. The null state "N" is not used for writing or searching, and is used for precharge and test functions only. X's represent "don't cares" and provide a mask function. In a ternary conversion, each bit of incoming binary data is converted to multiple bits which are presorted. Table 1 shows the ternary symbol (N,1 ,0,X), and the corresponding ternary data outputs A and B, and the corresponding memory cell outputs RA and RB. Table 2 illustrates the matching table, showing the ternary symbol (N,0,1 ,X), and the corresponding ternary data outputs A and B, plus showing the match output resulting from a comparison of the ternary code for the input data to the stored memory cell output data.
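A minimal Python sketch of the write-side conversion of Table 1 and of the "don't care" masking behavior follows. Because Table 2 is reproduced only as an image, the compare below works at the symbol level (an "X" in either the stored entry or the search word matches anything) rather than reproducing the exact A/B match equations.

```python
# Write-side binary-to-ternary conversion per Table 1 (A, B, RA, RB per symbol).
WRITE_TABLE = {
    "N": (0, 0, 1, 1),   # null: precharge/test only, never written or searched
    "1": (0, 1, 1, 0),
    "0": (1, 0, 0, 1),
    "X": (1, 1, 0, 0),   # don't care
}

def to_ternary(value: int, mask: int, width: int) -> str:
    """Convert a binary word to ternary symbols; masked bits become 'X'."""
    symbols = []
    for bit in reversed(range(width)):
        if (mask >> bit) & 1:
            symbols.append("X")
        else:
            symbols.append("1" if (value >> bit) & 1 else "0")
    return "".join(symbols)

def ternary_match(stored: str, search: str) -> bool:
    """Symbol-level compare: 'X' on either side is a don't-care."""
    return all(s == "X" or q == "X" or s == q
               for s, q in zip(stored, search))

stored = to_ternary(0b1010_0000, mask=0b0000_1111, width=8)   # low nibble masked
assert stored == "1010XXXX"
assert ternary_match(stored, to_ternary(0b1010_0110, mask=0, width=8))
assert not ternary_match(stored, to_ternary(0b1110_0110, mask=0, width=8))
```

Because the masked bits are stored as "X", a single CAM location can accept an entire range of input words, which is how one location produces the multiple-match conditions discussed above.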
Referring to FIGS. 2 and 3, the converted data word enters the CAM core (110) and is compared in parallel with the contents of each CAM location. This is called the search process, which compares the data word against the contents of each CAM location using an exclusive-OR function. Each CAM location normally contains user defined preloaded data. In the preferred embodiment, the data word is clocked through the compare in 40 ns by a timing generator (115). The ternary conversion of the preferred embodiment allows the CAM compare to find a plurality of acceptable matches for a single data word input.
In the preferred embodiment, if a match is found in the CAM core, the flip flop in the multimatch buffer (400) associated with that CAM core location is set. Within each stage, a sorter (900) ascertains the lowest order address corresponding to set flip flops in the multimatch buffer (400). The sorter (900) activates the multimatch buffer (400) and the address generator (500) to produce the lowest order CAM core address corresponding to a set flip flop. The ADDRESS VALID bit in the cascade logic (600) is set after the lowest order address is placed in the cascade logic output register for the pipeline output stage. The cascade logic ADDRESS VALID bit is not reset as it moves through the pipeline system. When a lowest order match address is identified, the activated multimatch buffer (400) is loaded with the corresponding RAM data from the CAM core. In a preferred embodiment, during a search command, the address of the matching CAM core location is inhibited, and is not produced by the address generator (500) and sent to the cascade logic (600), if the ADDRESS VALID bit from the previous chip in the cascade is set. If the ADDRESS VALID signal is not set, the address generator (500) generates the physical address of the data word/CAM match location and sends it to the cascade logic.
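The sorter amounts to a priority encode over the multimatch buffer. The following sketch (illustrative only) returns the lowest-order set flip flop and suppresses address generation when ADDRESS VALID arrives already set from the previous chip:

```python
from typing import Optional

def lowest_match_address(multimatch: list[bool],
                         address_valid_prev: bool) -> Optional[int]:
    """Return the lowest set flip-flop index, or None if address generation
    is inhibited because a previous chip already produced a valid address."""
    if address_valid_prev:
        return None                      # keep the upstream (lower) address
    for index, hit in enumerate(multimatch):
        if hit:
            return index                 # first set flip flop = lowest order match
    return None                          # no match in this stage

buffer = [False, False, True, False, True]   # matches at locations 2 and 4
assert lowest_match_address(buffer, address_valid_prev=False) == 2
assert lowest_match_address(buffer, address_valid_prev=True) is None
```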
FIG. 4 illustrates a preferred embodiment of the algorithm for producing unique CAM stage cascade output addresses in a multi-stage system, according to the invention. If a match is found between the input register (140) data word and the CAM contents, the cascade logic (600) operates pursuant to an algorithm, such as in FIG. 4. The cascade logic places the proper address in an output register (700) to communicate with the next stage output or as a final stage output. FIG. 5 shows the flow-through of the cascade logic and possible inputs which update the data as it flows through a stage, in accordance with the present invention.
Referring to FIGs. 4 and 5, each stage of the pipeline generates a unique address for matches (without an initial configuration setting, such as strapping). This is attained by passing a base address signal and an address valid logic signal from a previous chip to a subsequent one. The base address is referred to as "Address next" in the code and logic shown in FIGs. 4 and 5. The base address output from one stage (a previous stage) is sent to a subsequent stage. The base address outputted is dependent on whether a match has occurred, as illustrated in FIG. 4.
To generate a unique address in a multi-chip (stage) system, the cascade logic in each chip must provide a cascade address output ("Address_next"). If the cascade address output from the previous stage ("Address_prev[19:0]") is not representative of a previous match ("Address_valid_prev = 0") and there is no match in this chip, then a signal of no valid match ("Address_valid_next = 0") is provided, and the cascade address output from this chip is a new base address ("Address_next[19:0]"), where Address_next[19:0] = Address_prev[19:0] + the number of words in this chip. If the cascade address output from the previous stage ("Address_prev[19:0]") is not representative of a previous match ("Address_valid_prev = 0") and there is a match in this chip, and the first match in this chip is at location AA, then a cascade output signal of a valid match ("Address_valid_next = 1") is provided, and the cascade address output from this chip is Address_next[19:0] = Address_prev[19:0] + AA. If there is a match in a previous chip (stage), then the signal "Address_valid_prev = 1", and, whether or not this chip has a match, this chip provides cascade outputs of "Address_valid_next = 1" and "Address_next[19:0] = Address_prev[19:0]". The base address is computed by the previous stage as the previous address plus the number of words in that chip. Also shown is the setting of address valid if a first match is found, and the retaining of address valid if the incoming address valid from the previous stage was set.
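Rendered as code, the three cases above look like the following sketch (Python used for clarity; the 20-bit bus width and hardware timing are ignored, and the signal names simply follow the text):

```python
def cascade_address(address_prev: int, address_valid_prev: bool,
                    match_in_this_chip: bool, first_match_location: int,
                    words_in_this_chip: int) -> tuple[int, bool]:
    """Return (Address_next, Address_valid_next) for one chip in the cascade."""
    if address_valid_prev:
        # A previous chip already matched: pass its address through unchanged.
        return address_prev, True
    if match_in_this_chip:
        # First match in the whole cascade is here, at offset AA in this chip.
        return address_prev + first_match_location, True
    # No match anywhere yet: forward a new base address for the next chip.
    return address_prev + words_in_this_chip, False

# Three cascaded 2048-word chips; the first match is at location 100 of chip 1.
addr, valid = 0, False                                       # stage 0 cascade inputs grounded
addr, valid = cascade_address(addr, valid, False, 0, 2048)   # chip 0: no match -> base 2048
addr, valid = cascade_address(addr, valid, True, 100, 2048)  # chip 1: match at AA = 100
addr, valid = cascade_address(addr, valid, True, 7, 2048)    # chip 2: later match ignored
assert (addr, valid) == (2148, True)
```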
FIG. 6 shows, in accordance with the present invention, a simplified example of a pipelined ternary CAM timing diagram, showing just the main inputs and outputs. The diagram shows a write operation followed by two search operations. The address of the written or matched word is shown on ADDRESS NEXT, with its associated RAM contents on RAM NEXT, as shown. The internal pipeline delays results (e.g., A1 at time T5) by three cycles after loading the data and operation (e.g., O1, a search, at time T2); for each additional PT CAM chip, the result is delayed one additional clock cycle per chip, although it otherwise remains unchanged.
As discussed above, when the present invention is used in a ternary system, multiple matches can occur within one chip and multiple flip flops may be set. The search command causes the CAM subsystem to set the associated flip flops within the multimatch buffer (400) when a hit occurs. If multiple matches occur in stage 0, stage 0 (of FIG. 1) will feed forward only the lowest order address on the cascade logic output. If the ADDRESS VALID bit is set in the cascade data (50), subsequent matches only set the selected flip flops corresponding to the match locations, and output the associated RAM data for a wired-OR function by logic (350) shown in FIG. 3. Down the pipeline (e.g., stage 1), if the ADDRESS VALID bit is set and yet another match occurs, each subsequent CAM stage ignores all match addresses and feeds the lowest address forward to the subsequent CAM stage. Ultimately the lowest order matching address is output.
In the preferred embodiment, when more than one match is found, a bit in the op code is set, called ADDRESS MORE. The NEXT command from the host processor clocks out data such as the address for each match location subsequent to the lowest addressed match. This gives the user a means for finding out exactly where the multiple matches occur when an ADDRESS MORE is present. This option is useful in diagnostics, particularly since it allows the user to find out the origin of the RAM output contents.
In one embodiment, as illustrated in FIG. 3, address blocking logic (525) is provided. If a match is found, the associated stored RAM data is wire-OR'ed, but the availability of its corresponding address can be barred. Concurrent with the data word compare, the input RAM data is wire-OR'ed to the output RAM contents. In the preferred embodiment, the comparison function includes greater than, less than, equal to, not equal to, and combinations thereof. The compare and its features are responsive to the op code. The op-enable instruction would disable the address generation for a successful compare of CAM data. The contents of the RAM can be used to selectively enable addresses in the CAM. One application would be where the user wanted to modify the wired-OR RAM output values in a multiple match condition, but not output the address of this RAM modifier data (e.g., as in an ATM application). A second application would be hierarchical searching, or searching by groups. The RAM data could be partitioned into groups, so that when a search was performed, it would only look at CAM data entries with RAM data equal to a specific group, or greater than / less than a value to include multiple groups.
In the preferred embodiment, all CAM chips have a reset to clear all flip flops and return the chips to a known initialization state. Certain data, such as the unaltered input data word passing through the pipeline, must be delayed to keep pace with the corresponding data package. This is accomplished with delay logic, such as flip flops (650). Once the initial propagation delay, or number of clock cycles required to get through the CAM (stages 1, 2, and 3 of FIG. 1), is achieved, the system thereafter produces complete comparison match results on every clock cycle, assuming that the pipeline is kept full.
Referring to FIG. 7, the memory system of the present invention is illustrated in an address routing-based encryption embodiment for use in conjunction with an ATM switching system. During an initial call setup, the ATM network 800 provides for communication of information coupled via bus 805 to interface 710 to establish a call setup procedure prior to performing a write operation. The system 900 provides for storing of new ATM virtual address (Virtual Path Identifier/Virtual Channel Identifier, or VPI/VCI) link data to be set up and stored into the CAM memory array of memory system 700 by performing the CAM Write cycle process. It should be noted that either binary or ternary CAMs can be utilized, in accordance with the present invention, as relates to the pipelined cascadable CAM architecture.
In accordance with a preferred embodiment of the present invention, a ternary CAM system is provided in which ternary information is written into the ternary CAM cells in a single clock cycle. This allows a continuous stream of incoming ATM messages to be written without having to stall or delay the ATM system to accommodate a multiple-cycle ternary CAM write, with its attendant risk of cell loss. In typical applications, an entire block of VPI/VCI link translation address information is set up in the CAM memory cells, the lookup table, and the internal RAM if present, all in one continuous set of operations rather than one location at a time. A real-time communication network is thereafter provided.
After initial setup, communications from the ATM network 800 via coupling 805 are made to an interface 710, which strips off the VPI/VCI portion of the header from the payload and remaining header portion of the ATM cell, and sends the VPI/VCI and remaining header, via coupling 815, to the processor 720. The processor 720 provides the appropriate Clock, Op code, Mask Selects, CAM data, and other appropriate input signals via coupling 721 to the CAM memory system 700. The CAM memory system 700 is comprised of a plurality of cascaded pipelined CAM memory systems of the type discussed elsewhere herein (e.g., see FIGS. 1-3). After setup is complete, the CAM search (and lookup table) can be utilized.
The CAM Data from the processor, which is requesting a compare, is the stripped-off VPI/VCI portion of the header, which is compared to the contents of the CAM memory 700, which in turn provides an address output 701 when a match occurs. The address output 701 is coupled back to the processor 720 and to a lookup table 730. During setup, the processor 720 loads the lookup table 730 with data, via coupling 723 , corresponding to the Address output of the CAM 700. The lookup table 730 outputs specific encryption parameters 735 responsive to the address output of the CAM memory system 700. The lookup table 730 provides the encryption parameters 735, which can be a unique key or some mechanism that sets up an encryptor 740. The encryption parameters 735 are coupled to the encryptor 740, which is also coupled to receive the payload data portion of the cell 825, as provided by the interface 710. The encryptor 740 then encrypts the payload data in accordance with the specific encryption parameter keys as provided by the lookup table 730, which are uniquely associated with the specific VPI/VCI address that was input as CAM Data into the CAM system 700. The encrypted data output 745 from the encryptor is coupled to a combiner 750, which recombines the encrypted data of the payload with the header, including the VPI/VCI address, and provides a combined new cell comprising the header and encrypted data as output at 755 for coupling back to the ATM network 800 for communication therefrom to the appropriate destination.
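To summarize the per-cell datapath of FIG. 7, here is a purely illustrative Python sketch of call setup followed by the encrypt-and-recombine flow; the dictionary-based CAM and lookup table and the XOR "encryptor" are stand-ins for the hardware blocks (700, 730, 740, 750), not the patent's implementation.

```python
def setup(cam: dict, lookup_table: dict, vpi_vci: bytes, key: int) -> None:
    """Call-setup phase: store the VPI/VCI in the CAM and its key in the lookup table."""
    address = len(cam)              # stand-in for the CAM write address output (701)
    cam[vpi_vci] = address
    lookup_table[address] = key

def process_cell(cam: dict, lookup_table: dict, cell: bytes) -> bytes:
    """Per-cell flow: strip header, look up key by VPI/VCI, encrypt payload, recombine."""
    header, payload = cell[:5], cell[5:]         # 5-byte ATM header, 48-byte payload
    vpi_vci = header[:3]                         # stand-in for the stripped VPI/VCI field
    address = cam[vpi_vci]                       # CAM match -> address output (701)
    key = lookup_table[address]                  # lookup table -> encryption parameters (735)
    encrypted = bytes(b ^ key for b in payload)  # toy XOR encryptor (740)
    return header + encrypted                    # combiner (750): header stays in the clear

cam, table = {}, {}
setup(cam, table, vpi_vci=b"\x01\x02\x03", key=0x5A)
cell = b"\x01\x02\x03\x04\x05" + bytes(48)
out = process_cell(cam, table, cell)
assert out[:5] == cell[:5] and out[5:] != cell[5:]   # header untouched, payload scrambled
```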
The lookup table 730, while illustrated external to the CAM memory system 700, can alternatively be provided as a part of the CAM memory system 700. However, to provide sufficient encryption parameters, it is desirable to have more than a 16-bit wide amount of RAM. Thus, to maintain cost effectiveness of the CAM memory chips of the memory system 700, the lookup table can be provided externally and addressed responsive to the address output from the CAM memory system 700, to add flexibility to the system design. The RAM within the CAM chip itself, where present, can be used to provide sync pulses, end-of-frame indicators, and many other simpler functions than the encryption parameters, and can be provided in addition to the lookup table 730. Thus, the presence of the RAM within the CAM memory system 700 is optional, and if present, can be supplemented by an external separate lookup table. Since not every CAM address needs to have a lookup table encryption entry, an external lookup table can be used with a much denser lookup function than an on-chip RAM. In one embodiment, the RAM is on-chip within the CAM memory system 700, and the lookup table is integrated internally, eliminating the need for the external lookup table 730.
The lookup table is loaded as appropriate, corresponding to the CAM cell loading, via the processor 720, which monitors when a write operation is performed into the CAM memory 700 and then receives the address output 701 from the CAM, which indicates the memory location that is actually written to. Subsequent to that, the processor 720 takes the appropriate action to load into the lookup table an appropriate mapping of the encryption parameters as necessary to support that VPI/VCI address. Even where the lookup table is in RAM internal to the CAM memory system 700, the processor still monitors and rewrites into the RAM appropriately to load the encryption parameter data needed. The processor 720 provides the Mask Select, Data Input, the Op code Data input, the Clock, and other necessary parameters for use by the CAM memory system 700.
The processor 720 processes the VPI/VCI and remainder of the header, and determines the next appropriate step. In the preferred embodiment, the VPI and VCI portion and the remainder of the header are typically not encrypted or transformed by the encryption system as illustrated in FIG. 7, and are recombined with the encrypted data by the combiner 750. Alternatively, the VPI/VCI could be remapped via the processor and a VPI/VCI mapping contained either within the CAM system 700 as RAM or utilizing another external memory system, to provide a new VPI/VCI address to be recombined with the remaining original header and the encrypted data.
The encryptor 740 provides a method of scrambling the input data based on certain encryption parameters, which can be any sort of scrambling and encryption, such as keys for a specific user path.
The encryption parameters in the lookup table are thus loaded in accordance with some predefined encryption algorithms to provide the necessary parameters for the encryptors 740. The keys are loaded as appropriate, so that each respective VPI/VCI address has associated with it its own key, or no key, so that the corresponding destination address system can decode the encrypted data on the other end with that unique key. The lookup table must provide the appropriate equivalent key, so the encryptor encodes the payload data in accordance with the key that is going to be used on the other side when the payload data is decoded.
During the initial call setup from the ATM network, messages are passed back and forth to define what keys (e.g., encryption parameters to be stored in the lookup table) can be used, what algorithms, which VPI/VCI locations have access, and various other parameters that can be defined for the encryption process. An agreed-to initial key can be used to encrypt the initial data that is sent with a common public key that all users have, and thereafter, private keys are utilized for encryption and decoding. The private key is unique for a VPI/VCI pair, although multiple VPI/VCI pairs can have the same key. The processor 720, responsive to the loading of the CAM, provides for loading the lookup table with the corresponding keys for certain addresses in response to communications from the ATM network 800 of key values for certain VPI/VCI addresses. The interface 710, the ternary CAM memory system 700, and the processor 720 provide translation of the VPI/VCI addresses to addresses for encryption keys for the respective VPI/VCI addresses, responsive to the ternary CAM 700 output 701 . The output 701 provides the addresses to the lookup table 730 which provides the encryption parameters 735 as necessary to encrypt the payload data 825 by the encryptor 740. The encryption payload data is combined by the combiner 750 with the header for output 755 to the ATM network 800.
The ATM system benefits by utilizing off-loaded key encryption of payloads, based on address routing information (e.g., VPI/VCI), which is first stripped, and after encryption, re-appended from/to the payload. This encryption of payloads can be performed transparently to the ATMs' other network operations. The combined data cell (encrypted payload and header) can now be securely communicated through public ATM networks. Since the header is non-encrypted, the combined data cell can be re-routed in commercial switches, routers, and bridges. However, since the data is encrypted, only a receiver with the correct encryption key table can de-encrypt the payload, thus securing communication of the payload. On the receiving side, the same associative lookup/mapping is used to determine the encryption keys, and the encrypted payload is de-encrypted using the encryption keys.
These benefits can also be utilized by other communications schemes, where a portion of the cell or packet is stripped off, encrypted, and then recombined for transmission, switching, routing, reception, and decrypting.
In accordance with one aspect of the present invention, the addresses, data, and associated data for multiple matches in one chip are processed simultaneously and sequentially, and CAM chips are not idle for contiguous and continuous clock cycles, nor do they require external glue logic.
This pipelined configuration yields a consistent latency regardless of where a match is found. In accordance with one aspect of the present invention, a zero latency variation and a zero variation cell delay are provided. The final output from the cascaded CAM system requires the same fixed number of clock cycles (relative to the time of input) to reach the output, regardless of where or when in the cascade a match is found.
In accordance with another aspect of the present invention, a ternary CAM system provides efficient multimatch resolution.
Multimatch resolution increases speed and decreases size.
In accordance with a further aspect of the present invention, associated stored data is supplied to supplement the CAM match in parallel operation, allowing vast flexibility in system design. ATM typically requires more CAM mapping storage than a single chip or stage can provide. Therefore, multiple CAM chips (stages) must be cascaded. The prior art cascading of multiple CAM chips resulted in delay between cells. Since delays in data transmission in ATM (and other) systems result in cell loss, encryption and other masking schemes must be transparent, that is, with no delay inserted. The pipelined cascadable CAM subsystem in accordance with the present invention, and the pipelined system created by a plurality of such subsystems, provide the benefits of pipelined elimination of delays, both at the subsystem architectural level and at the cascaded system level.
What is claimed is:

Claims

1 . A content addressable memory (CAM) system for processing incoming input data, comprising: an input register for receiving the incoming input data, the incoming input data comprising a data word, cascade data, and op code data; a CAM core comprising a CAM subsystem, comprising means for selectively storing certain of the incoming input data as stored CAM data at addressable locations, and means for comparing the incoming data to the stored CAM data, responsive to the incoming data; cascade logic responsive to the incoming input data; an output register coupled to the cascade logic and the CAM core, for providing output register outputs of a data output, an op code output, and a cascade output, responsive to the CAM core and the cascade logic; and means for determining a match between the stored CAM data of the CAM core and the incoming input data , and for producing a match address location; wherein the cascade logic is further comprised of cascade interface means for providing the cascade output indicating whether a match has occurred anywhere in the CAM core, and whether multiple matches have occurred, and the match address location representing a lowest order address where a match was found in the CAM core, and when no match has occurred for providing an output of the match address location of an address for a next location after a last addressable location within the CAM subsystem.
2. The CAM system as in claim 1 , wherein the input register, the CAM core, the cascade logic, and the output register in combination form a CAM stage; wherein there are a plurality of the CAM stages, coupled together in a cascaded chain of CAM stages comprising an initial stage, and subsequent stages at least including a final stage; and wherein the plurality of the CAM stages comprise a pipelined system, wherein the output register for the final stage provides the output register outputs corresponding to a complete pipelined system CAM comparison for the incoming input data.
3. The CAM system as in claim 1, further comprising: a RAM memory, for selectively storing and retrieving RAM data as associated data at locations addressable by associative mapping to respective corresponding CAM address locations in the CAM; and means for logic-OR'ing, on a bit-wise basis, the associated data for multiple matching address locations.
4. A cascadable pipelined content addressable memory CAM system responsive to input CAM data, RAM data, Op code data, and Cascade data, the CAM system comprising: an input register for storing and outputting the input CAM Data, RAM Data, Op code data, and Cascade data; an output register; a CAM core comprising CAM memory having a location associative RAM memory, and a CAM comparator; wherein the input CAM data is first coupled to the input register, and then coupled to the CAM memory, and then coupled to the output register; wherein the cascade data derives from a cascade logic subsystem coupled to the input register for combinationally determining cascade conditions and for providing an output of cascade conditions, responsive to the input CAM data; means for comparing the input CAM Data to each and all individual CAM memory location contents responsive to the Op code data; a multimatch buffer for storing a matching CAM location responsive to detecting a match and responsive to the Cascade data and coupled to the input register; and a cascade logic subsystem determining cascade conditions and for providing an output of cascade conditions, responsive to the multimatch buffer.
5. The CAM system as in claim 4, further comprising a binary-to-ternary converter coupled between the input register and the CAM memory of the CAM core; wherein the input CAM data is converted from binary-to-ternary format before storing the incoming input data and before comparing the incoming input data to the CAM memory; and wherein the multimatch buffer is coupled to a RAM comparator output, wherein the multimatch buffer provides an output representative of all locations in the CAM where a match exists between the input CAM data and entries stored in the CAM memory.
6. A cascadable pipelined content addressable memory (CAM) system, the CAM system comprising: an input register for receiving input data comprising input CAM data, Op code data, and Cascade data that includes status data and RAM data; a binary-to-ternary converter coupled to the input register for converting the CAM data therefrom into ternary CAM data responsive to write and compare conversion logic; a CAM core comprising an associated CAM memory for storing data at one of a plurality of memory locations each having a unique associated address, a CAM comparator, each memory location in the CAM core having a specific address, the CAM memory and the CAM comparator coupled to the binary-to-ternary converter; a multimatch buffer comprising a plurality of flip flops for indexed storage and retrieval of data associated with each of the plurality of CAM memory locations; and a cascade logic subsystem determining cascade conditions and for providing an output of cascade conditions, responsive to the multimatch buffer wherein the CAM comparator compares the ternary CAM data to the stored data for all CAM memory locations responsive to the input op code data; and wherein an associated flip flop is activated for each CAM memory location found to be matching.
7. The CAM system as in claim 6, further comprising: a sorter for determining a lowest order address, responsive to the multimatch buffer; and an address generator for generating an address output match responsive to the sorter.
8. A memory system for implementing a secure ATM communication system for an ATM network that transmits a plurality of cells, each of the plurality of cells comprising payload data and header data comprised of VPI and VCI address data, the memory system being responsive to a plurality of Data Input signals, encryption VPI and VCI addresses, and associated key data signals, the memory system comprising: a pipelined cascadable content addressable memory (CAM) subsystem for storing CAM data to produce stored CAM data and for comparing the plurality of Data Input signals to the CAM data, and for providing a match output address for at least one of the plurality of Data Input signals matching the stored CAM data; an addressable lookup table subsystem for storing the associated key data signals and selectively outputting key data responsive to the match output address; wherein the CAM subsystem and the addressable lookup table subsystem form a memory subsystem; means for initializing the memory subsystem comprising: means for storing the encryption VPI and VCI address data as the stored CAM data in the CAM subsystem; means for storing the key data associated with the encryption VPI and VCI address data in the addressable lookup table subsystem; means for separating the payload data from the header data for each of the plurality of cells; means for coupling the separated header data to the CAM, wherein the CAM selectively provides the match output address when the separated header data at least partially matches the VPI and VCI address data stored in the CAM subsystem; wherein the addressable lookup table provides an output of the key data associated with a respective match output address; means for encrypting the payload data responsive to the key data; and means for combining the encrypted payload data with the separated header data to form an encrypted cell.
9. A memory subsystem for implementing a secure ATM communication system that transmits a plurality of signals, comprising encryption key data, and cells, each of the cells comprised of payload data and header data comprised of VPI/VCI data, wherein respective ones of the encryption key data is associated with respective ones of the VPI/VCI data, the memory subsystem comprising: a content addressable memory (CAM) subsystem, comprised of a plurality of CAM stages, coupled together as a pipelined system in an intercoupled cascaded chain of the plurality of CAM stages comprising an initial stage, and subsequent stages at least including a final stage; and wherein each of the plurality of CAM stages is comprised of an input register, a CAM core, cascade logic, and an output register; wherein the input register receives incoming data that includes a data word, cascade data, and op code data; the CAM core comprising a CAM subsystem, comprising means for selectively storing certain of the incoming data as stored CAM data at addressable locations, and means for comparing the incoming data to the stored CAM data, responsive to the incoming data; the cascade logic being responsive to the incoming data; the output register being coupled to the cascade logic and the CAM core, for providing output register outputs that include a data output, an op code output, and a cascade output; wherein the CAM core is further comprised of means for determining a match between the stored CAM data and the incoming data, and for producing a match address location output responsive thereto; and wherein the cascade logic is further comprised of cascade interface means for providing the cascade output indicating whether a match has occurred anywhere in the CAM core, and means for determining whether multiple matches have occurred, wherein the match address location output represents a lowest order address where a match was found in the CAM core, and when no match has occurred, means for providing an output of the match address location output of an address corresponding to a next location after a last addressable location within the CAM subsystem; and wherein the plurality of the CAM stages comprise a pipelined system, wherein the output register for the final stage provides the output register outputs corresponding to a complete pipelined system CAM comparison for the incoming input data.
10. The memory subsystem as in claim 9, further comprising: an addressable lookup table; a processor for storing VPI/VCI data into the CAM subsystem, each at a respective storage address, and for storing the respective associated encryption key data into the lookup table at a location mapped to the respective storage address; a decoder for separating the header data from the payload data for each of the cells; the CAM subsystem, providing means for comparing the stored data therein to the separated header data to selectively provide a match address output when the separated header matches any of the stored data therein; wherein the lookup table is responsive to the match address output to provide an output of the associated encryption key data; an encryptor, responsive to the encryption key data output from the lookup table for encrypting the separated payload data; a combiner for combining the encrypted payload data with the separated header to form an encrypted cell; and means for communicating the encrypted cell through standard ATM infrastructure systems.
PCT/US1997/014979 1996-09-23 1997-08-25 Cascadable content addressable memory and system WO1998012651A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU41624/97A AU4162497A (en) 1996-09-23 1997-08-25 Cascadable content addressable memory and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/717,557 1996-09-23
US08/717,557 US5930359A (en) 1996-09-23 1996-09-23 Cascadable content addressable memory and system

Publications (1)

Publication Number Publication Date
WO1998012651A1 true WO1998012651A1 (en) 1998-03-26

Family

ID=24882505

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/014979 WO1998012651A1 (en) 1996-09-23 1997-08-25 Cascadable content addressable memory and system

Country Status (3)

Country Link
US (1) US5930359A (en)
AU (1) AU4162497A (en)
WO (1) WO1998012651A1 (en)

Families Citing this family (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6148364A (en) * 1997-12-30 2000-11-14 Netlogic Microsystems, Inc. Method and apparatus for cascading content addressable memory devices
US6199140B1 (en) * 1997-10-30 2001-03-06 Netlogic Microsystems, Inc. Multiport content addressable memory device and timing signals
US6470227B1 (en) * 1997-12-02 2002-10-22 Murali D. Rangachari Method and apparatus for automating a microelectric manufacturing process
US6658002B1 (en) 1998-06-30 2003-12-02 Cisco Technology, Inc. Logical operation unit for packet processing
US6381673B1 (en) * 1998-07-06 2002-04-30 Netlogic Microsystems, Inc. Method and apparatus for performing a read next highest priority match instruction in a content addressable memory device
US6389506B1 (en) 1998-08-07 2002-05-14 Cisco Technology, Inc. Block mask ternary cam
US6289414B1 (en) * 1998-10-08 2001-09-11 Music Semiconductors, Inc. Partially ordered cams used in ternary hierarchical address searching/sorting
US7899052B1 (en) * 1999-01-27 2011-03-01 Broadcom Corporation Memory structure for resolving addresses in a packet-based network switch
US6892272B1 (en) 1999-02-23 2005-05-10 Netlogic Microsystems, Inc. Method and apparatus for determining a longest prefix match in a content addressable memory device
WO2000072171A1 (en) * 1999-05-24 2000-11-30 Gregory Perry Method and apparatus for remotely managed local network interface security
US6560610B1 (en) 1999-08-10 2003-05-06 Washington University Data structure using a tree bitmap and method for rapid classification of data in a database
US7272027B2 (en) * 1999-09-23 2007-09-18 Netlogic Microsystems, Inc. Priority circuit for content addressable memory
US6526474B1 (en) * 1999-10-25 2003-02-25 Cisco Technology, Inc. Content addressable memory (CAM) with accesses to multiple CAM arrays used to generate result for various matching sizes
US6353548B2 (en) * 1999-12-30 2002-03-05 International Business Machines Corporation Method and data processing system for data lookups
US6832308B1 (en) * 2000-02-15 2004-12-14 Intel Corporation Apparatus and method for instruction fetch unit
US6826573B1 (en) 2000-02-15 2004-11-30 Intel Corporation Method and apparatus for queue issue pointer
US6252872B1 (en) * 2000-05-24 2001-06-26 Advanced Micro Devices, Inc. Data packet filter using contents addressable memory (CAM) and method
US6317350B1 (en) 2000-06-16 2001-11-13 Netlogic Microsystems, Inc. Hierarchical depth cascading of content addressable memory devices
US6658458B1 (en) 2000-06-22 2003-12-02 Cisco Technology, Inc. Cascading associative memory arrangement
US7032031B2 (en) * 2000-06-23 2006-04-18 Cloudshield Technologies, Inc. Edge adapter apparatus and method
US7051078B1 (en) 2000-07-10 2006-05-23 Cisco Technology, Inc. Hierarchical associative memory-based classification system
US6725326B1 (en) 2000-08-15 2004-04-20 Cisco Technology, Inc. Techniques for efficient memory management for longest prefix match problems
US6792502B1 (en) 2000-10-12 2004-09-14 Freescale Semiconductor, Inc. Microprocessor having a content addressable memory (CAM) device as a functional unit therein and method of operation
US6490650B1 (en) * 2000-12-08 2002-12-03 Netlogic Microsystems, Inc. Method and apparatus for generating a device index in a content addressable memory
AU2002232807A1 (en) * 2000-12-19 2002-07-01 At And T Wireless Services, Inc. Synchronization of encryption in a wireless communication system
US6606681B1 (en) 2001-02-23 2003-08-12 Cisco Systems, Inc. Optimized content addressable memory (CAM)
US6775764B1 (en) * 2001-04-24 2004-08-10 Cisco Technology, Inc Search function for data lookup
US6862281B1 (en) 2001-05-10 2005-03-01 Cisco Technology, Inc. L4 lookup implementation using efficient CAM organization
US7002965B1 (en) * 2001-05-21 2006-02-21 Cisco Technology, Inc. Method and apparatus for using ternary and binary content-addressable memory stages to classify packets
US7669005B1 (en) 2001-06-18 2010-02-23 Netlogic Microsystems, Inc. Content addressable memory (CAM) devices having soft priority resolution circuits therein and methods of operating same
US7260673B1 (en) 2001-07-20 2007-08-21 Cisco Technology, Inc. Method and apparatus for verifying the integrity of a content-addressable memory result
US7065083B1 (en) 2001-10-04 2006-06-20 Cisco Technology, Inc. Method and apparatus for dynamically generating lookup words for content-addressable memories
US6775737B1 (en) 2001-10-09 2004-08-10 Cisco Technology, Inc. Method and apparatus for allocating and using range identifiers as input values to content-addressable memories
US6876558B1 (en) 2001-12-27 2005-04-05 Cypress Semiconductor Corporation Method and apparatus for identifying content addressable memory device results for multiple requesting sources
US6763426B1 (en) * 2001-12-27 2004-07-13 Cypress Semiconductor Corporation Cascadable content addressable memory (CAM) device and architecture
US6879523B1 (en) 2001-12-27 2005-04-12 Cypress Semiconductor Corporation Random access memory (RAM) method of operation and device for search engine systems
US7283565B1 (en) * 2001-12-27 2007-10-16 Cypress Semiconductor Corporation Method and apparatus for framing a data packet
US7301961B1 (en) 2001-12-27 2007-11-27 Cypress Semiconductor Corporation Method and apparatus for configuring signal lines according to idle codes
US7117301B1 (en) 2001-12-27 2006-10-03 Netlogic Microsystems, Inc. Packet based communication for content addressable memory (CAM) devices and systems
US7073018B1 (en) 2001-12-27 2006-07-04 Cypress Semiconductor Corporation Device identification method for systems having multiple device branches
US6715029B1 (en) 2002-01-07 2004-03-30 Cisco Technology, Inc. Method and apparatus for possibly decreasing the number of associative memory entries by supplementing an associative memory result with discriminator bits from an original set of information
US6970971B1 (en) * 2002-01-08 2005-11-29 Cisco Technology, Inc. Method and apparatus for mapping prefixes and values of a hierarchical space to other representations
US6961808B1 (en) 2002-01-08 2005-11-01 Cisco Technology, Inc. Method and apparatus for implementing and using multiple virtual portions of physical associative memories
US6871262B1 (en) 2002-02-14 2005-03-22 Cisco Technology, Inc. Method and apparatus for matching a string with multiple lookups using a single associative memory
US7336660B2 (en) * 2002-05-31 2008-02-26 Cisco Technology, Inc. Method and apparatus for processing packets based on information extracted from the packets and context indications such as but not limited to input interface characteristics
US7412507B2 (en) * 2002-06-04 2008-08-12 Lucent Technologies Inc. Efficient cascaded lookups at a network node
US7299317B1 (en) 2002-06-08 2007-11-20 Cisco Technology, Inc. Assigning prefixes to associative memory classes based on a value of a last bit of each prefix and their use including but not limited to locating a prefix and for maintaining a Patricia tree data structure
US7558775B1 (en) 2002-06-08 2009-07-07 Cisco Technology, Inc. Methods and apparatus for maintaining sets of ranges typically using an associative memory and for using these ranges to identify a matching range based on a query point or query range and to maintain sorted elements for use such as in providing priority queue operations
US7171439B2 (en) * 2002-06-14 2007-01-30 Integrated Device Technology, Inc. Use of hashed content addressable memory (CAM) to accelerate content-aware searches
US7136960B2 (en) * 2002-06-14 2006-11-14 Integrated Device Technology, Inc. Hardware hashing of an input of a content addressable memory (CAM) to emulate a wider CAM
US7069378B2 (en) * 2002-07-22 2006-06-27 Integrated Device Technology, Inc. Multi-bank content addressable memory (CAM) devices having staged segment-to-segment soft and hard priority resolution circuits therein and methods of operating same
US6842358B2 (en) * 2002-08-01 2005-01-11 Netlogic Microsystems, Inc. Content addressable memory with cascaded array
US7313667B1 (en) 2002-08-05 2007-12-25 Cisco Technology, Inc. Methods and apparatus for mapping fields of entries into new values and combining these mapped values into mapped entries for use in lookup operations such as for packet processing
US7103708B2 (en) * 2002-08-10 2006-09-05 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including modifying a search key in generating a lookup word and possibly forcing a no-hit indication in response to matching a particular entry
US7082492B2 (en) * 2002-08-10 2006-07-25 Cisco Technology, Inc. Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US7028136B1 (en) 2002-08-10 2006-04-11 Cisco Technology, Inc. Managing idle time and performing lookup operations to adapt to refresh requirements or operational rates of the particular associative memory or other devices used to implement the system
US7065609B2 (en) * 2002-08-10 2006-06-20 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications
WO2004015593A2 (en) * 2002-08-10 2004-02-19 Cisco Technology, Inc. Associative memory with enhanced capabilities
US7689485B2 (en) * 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US7349382B2 (en) * 2002-08-10 2008-03-25 Cisco Technology, Inc. Reverse path forwarding protection of packets using automated population of access control lists based on a forwarding information base
US7177978B2 (en) * 2002-08-10 2007-02-13 Cisco Technology, Inc. Generating and merging lookup results to apply multiple features
US7441074B1 (en) 2002-08-10 2008-10-21 Cisco Technology, Inc. Methods and apparatus for distributing entries among lookup units and selectively enabling less than all of the lookup units when performing a lookup operation
US6775166B2 (en) * 2002-08-30 2004-08-10 Mosaid Technologies, Inc. Content addressable memory architecture
US20040139274A1 (en) * 2002-10-21 2004-07-15 Hui Ronald Chi-Chun Virtual content addressable memory with high speed key insertion and deletion and pipelined key search
US6717946B1 (en) 2002-10-31 2004-04-06 Cisco Technology Inc. Methods and apparatus for mapping ranges of values into unique values of particular use for range matching operations using an associative memory
US7024515B1 (en) 2002-11-15 2006-04-04 Cisco Technology, Inc. Methods and apparatus for performing continue actions using an associative memory which might be particularly useful for implementing access control list and quality of service features
US7496035B1 (en) 2003-01-31 2009-02-24 Cisco Technology, Inc. Methods and apparatus for defining flow types and instances thereof such as for identifying packets corresponding to instances of the flow types
US7043600B2 (en) * 2003-05-12 2006-05-09 Integrated Silicon Solution, Inc. Cascading content addressable memory devices with programmable input/output connections
US7257670B2 (en) * 2003-06-18 2007-08-14 Micron Technology, Inc. Multipurpose CAM circuit
US7937495B2 (en) * 2003-06-25 2011-05-03 Cisco Technology, Inc. System and method for modifying data transferred from a source to a destination
US7634597B2 (en) * 2003-10-08 2009-12-15 Micron Technology, Inc. Alignment of instructions and replies across multiple devices in a cascaded system, using buffers of programmable depths
JP4541077B2 (en) * 2004-01-13 2010-09-08 株式会社日立超エル・エス・アイ・システムズ Semiconductor memory device
US7403526B1 (en) 2004-05-17 2008-07-22 Cisco Technology, Inc. Partitioning and filtering a search space of particular use for determining a longest prefix match thereon
US7290084B2 (en) * 2004-11-02 2007-10-30 Integrated Device Technology, Inc. Fast collision detection for a hashed content addressable memory (CAM) using a random access memory
TWI290426B (en) * 2005-02-03 2007-11-21 Sanyo Electric Co Encryption processing circuit
US7606231B2 (en) * 2005-02-18 2009-10-20 Broadcom Corporation Pipeline architecture for a network device
US7500060B1 (en) * 2007-03-16 2009-03-03 Xilinx, Inc. Hardware stack structure using programmable logic
US8156309B2 (en) * 2007-10-18 2012-04-10 Cisco Technology, Inc. Translation look-aside buffer with variable page sizes
IL187038A0 (en) * 2007-10-30 2008-02-09 Sandisk Il Ltd Secure data processing for unaligned data
US8332580B2 (en) * 2008-04-02 2012-12-11 Zikbit Ltd. System, method and apparatus for memory with embedded associative section for computations
WO2009155253A1 (en) 2008-06-19 2009-12-23 Marvell World Trade Ltd. Cascaded memory tables for searching
US8149643B2 (en) 2008-10-23 2012-04-03 Cypress Semiconductor Corporation Memory device and method
KR101095799B1 (en) * 2009-05-29 2011-12-21 주식회사 하이닉스반도체 Circuit for code address memory cell in non-volatile memory device and Method for operating thereof
US9354823B2 (en) 2012-06-06 2016-05-31 Mosys, Inc. Memory system including variable write burst and broadcast command scheduling
US8473695B2 (en) 2011-03-31 2013-06-25 Mosys, Inc. Memory system including variable write command scheduling
US9055114B1 (en) * 2011-12-22 2015-06-09 Juniper Networks, Inc. Packet parsing and control packet classification
US20140115422A1 (en) * 2012-10-24 2014-04-24 Laurence H. Cooke Non-volatile memory error correction
US10498648B1 (en) 2015-03-25 2019-12-03 Amazon Technologies, Inc. Processing packet data using an offload engine in a service provider environment
JP6548459B2 (en) * 2015-05-29 2019-07-24 キヤノン株式会社 Information processing device
US10622071B2 (en) * 2015-09-04 2020-04-14 Hewlett Packard Enterprise Development Lp Content addressable memory
US11017858B1 (en) * 2015-12-29 2021-05-25 Sudarshan Kumar Low power content addressable memory
US20220013154A1 (en) * 2015-12-29 2022-01-13 Sudarshan Kumar Low Power Content Addressable Memory
TWI713051B (en) 2019-10-21 2020-12-11 瑞昱半導體股份有限公司 Content addressable memory device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3679977A (en) * 1969-06-24 1972-07-25 Bell Telephone Labor Inc Precoded ternary data transmission
US5619446A (en) * 1992-01-10 1997-04-08 Kawasaki Steel Corporation Hierarchical encoder including timing and data detection devices for a content addressable memory
GB9205551D0 (en) * 1992-03-13 1992-04-29 Inmos Ltd Cache memory
US5226082A (en) * 1992-07-02 1993-07-06 At&T Bell Laboratories Variable length decoder
US5446685A (en) * 1993-02-23 1995-08-29 Intergraph Corporation Pulsed ground circuit for CAM and PAL memories
US5454094A (en) * 1993-06-24 1995-09-26 Hal Computer Systems, Inc. Method and apparatus for detecting multiple matches in a content addressable memory
US5442702A (en) * 1993-11-30 1995-08-15 At&T Corp. Method and apparatus for privacy of traffic behavior on a shared medium network
US5414707A (en) * 1993-12-01 1995-05-09 Bell Communications Research, Inc. Broadband ISDN processing method and system
US5649149A (en) * 1994-08-01 1997-07-15 Cypress Semiconductor Corporation Integrated content addressable memory array with processing logical and a host computer interface
US5646878A (en) * 1995-06-02 1997-07-08 Motorola, Inc. Content addressable memory system
US5638315A (en) * 1995-09-13 1997-06-10 International Business Machines Corporation Content addressable memory for a data processing system
US5696930A (en) * 1996-02-09 1997-12-09 Advanced Micro Devices, Inc. CAM accelerated buffer management
US5841874A (en) * 1996-08-13 1998-11-24 Motorola, Inc. Ternary CAM memory architecture and methodology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GHOSE K ET AL: "Response pipelined CAM chips: the first generation and beyond", Proceedings of the Seventh International Conference on VLSI Design (Cat. No. 94TH0612-2), Calcutta, India, 5-8 January 1994, IEEE Computer Society Press, Los Alamitos, CA, USA, ISBN 0-8186-4990-9, pages 365-368, XP002049055 *
MOORS T ET AL: "Cascading content-addressable memories", IEEE Micro, vol. 12, no. 3, 1 June 1992, pages 56-66, XP000277663 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6240485B1 (en) 1998-05-11 2001-05-29 Netlogic Microsystems, Inc. Method and apparatus for implementing a learn instruction in a depth cascaded content addressable memory system
US6460112B1 (en) 1999-02-23 2002-10-01 Netlogic Microsystems, Llc Method and apparatus for determining a longest prefix match in a content addressable memory device
US6499081B1 (en) 1999-02-23 2002-12-24 Netlogic Microsystems, Inc. Method and apparatus for determining a longest prefix match in a segmented content addressable memory device
US6539455B1 (en) 1999-02-23 2003-03-25 Netlogic Microsystems, Inc. Method and apparatus for determining an exact match in a ternary content addressable memory device
US6574702B2 (en) 1999-02-23 2003-06-03 Netlogic Microsystems, Inc. Method and apparatus for determining an exact match in a content addressable memory device
US7110408B1 (en) 1999-09-23 2006-09-19 Netlogic Microsystems, Inc. Method and apparatus for selecting a most significant priority number for a device using a partitioned priority index table
US7487200B1 (en) 1999-09-23 2009-02-03 Netlogic Microsystems, Inc. Method and apparatus for performing priority encoding in a segmented classification system
US6567340B1 (en) 1999-09-23 2003-05-20 Netlogic Microsystems, Inc. Memory storage cell based array of counters
US7110407B1 (en) 1999-09-23 2006-09-19 Netlogic Microsystems, Inc. Method and apparatus for performing priority encoding in a segmented classification system using enable signals
US6957272B2 (en) 2000-05-24 2005-10-18 Alcatel Internetworking (Pe), Inc. Stackable lookup engines
EP1158729A3 (en) * 2000-05-24 2004-04-07 Alcatel Internetworking (PE), Inc. Stackable lookup engines
EP1158729A2 (en) * 2000-05-24 2001-11-28 Alcatel Internetworking (PE), Inc. Stackable lookup engines
US7539800B2 (en) * 2004-07-30 2009-05-26 International Business Machines Corporation System, method and storage medium for providing segment level sparing
US7765368B2 (en) 2004-07-30 2010-07-27 International Business Machines Corporation System, method and storage medium for providing a serialized memory interface with a bus repeater
US7685392B2 (en) 2005-11-28 2010-03-23 International Business Machines Corporation Providing indeterminate read data latency in a memory system
CN110032537A (en) * 2019-03-27 2019-07-19 深圳市明微电子股份有限公司 Address writing method, address writing station and computer readable storage medium
CN110032537B (en) * 2019-03-27 2021-04-09 深圳市明微电子股份有限公司 Address writing method, address writing device and computer readable storage medium

Also Published As

Publication number Publication date
AU4162497A (en) 1998-04-14
US5930359A (en) 1999-07-27

Similar Documents

Publication Publication Date Title
US5930359A (en) Cascadable content addressable memory and system
US5841874A (en) Ternary CAM memory architecture and methodology
US9411776B2 (en) Separation of data and control in a switching device
US8780926B2 (en) Updating prefix-compressed tries for IP route lookup
US5956336A (en) Apparatus and method for concurrent search content addressable memory circuit
US6535951B1 (en) Hit result register file used in a CAM
US5870479A (en) Device for processing data packets
US5307343A (en) Basic element for the connection network of a fast packet switching node
US6870929B1 (en) High throughput system for encryption and other data operations
US6253280B1 (en) Programmable multiple word width CAM architecture
EP1425755B1 (en) Concurrent searching of different tables within a content addressable memory
US6606317B1 (en) Dual key controlled content addressable memory for accessing packet switch data buffer for multicasting data packets
US20020073073A1 (en) Paralleled content addressable memory search engine
EP1678619B1 (en) Associative memory with entry groups and skip operations
Lee et al. Bundle-updatable SRAM-based TCAM design for openflow-compliant packet processor
EP1070287B1 (en) Method and apparatus of an address analysis function in a network employing boolean logic and programmable structures for complete destination address analysis
US5146560A (en) Apparatus for processing bit streams
US6931127B2 (en) Encryption device using data encryption standard algorithm
US6665210B1 (en) Data storage and retrieval
US5903780A (en) Data sorting device having multi-input comparator comparing data input from latch register and key value storage devices
US6819675B2 (en) Self-route multi-memory expandable packet switch with overflow processing means
US11720492B1 (en) Algorithmic TCAM with compressed key encoding
US6141348A (en) Constant-time programmable field extraction system and method
US7161950B2 (en) Systematic memory location selection in Ethernet switches
US7739423B2 (en) Bulk transfer of information on network device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CN JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998514674

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase