CN100454899C - Network processing device and method - Google Patents


Info

Publication number
CN100454899C
CN100454899C
Authority
CN
China
Prior art keywords
address
register
unit
buffer
micro engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB2006100027851A
Other languages
Chinese (zh)
Other versions
CN1845529A
Inventor
易惕斌
李君英
余洲
朱海培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CNB2006100027851A
Publication of CN1845529A
Application granted
Publication of CN100454899C
Legal status: Active
Anticipated expiration

Abstract

The present invention relates to a network processing device and method. The network processing device comprises an interface unit, a micro engine unit, and an address mapping unit. The interface unit, which comprises a plurality of channels, receives and forwards packets; the micro engine unit comprises a plurality of micro engines; and the address mapping unit, composed of a plurality of register segments, performs segmented address mapping. Packets in the channels are filled into the buffers of the micro engines according to the segmented address mapping of the address mapping unit. Packets can thus be delivered to different packet-processing micro engines, which simplifies the development of forwarding software on those micro engines.

Description

Network processing device and method
Technical field
The present invention relates to network processors (NPU: Network Processing Unit), and more particularly to a network processing device and method.
Background technology
A network processor (NPU: Network Processing Unit) is a programmable or configurable semiconductor device designed and optimized for network data (packets). Its optimizations include hardware and instruction sets that support high-speed packet classification and packet modification. The main role of a network processor is to offload network-specific data transmission and processing tasks from the general-purpose processor, thereby greatly accelerating packet processing and forwarding. A network processor is generally composed of a core processor (such as the StrongARM core series) and a plurality of micro engines that together complete the parallel processing of packets. In a network processor, multiple micro engines typically receive packets from multiple interfaces, and such schemes are very complicated.
As shown in Fig. 1, a prior-art network processor generally comprises the following units: a micro engine unit, a memory unit, an interface unit, and a register unit. The micro engine unit is the core of the network processor and completes packet analysis and forwarding; it may contain a plurality of micro engines operating in parallel. The memory unit is the network processor's internal storage, such as SRAM, and stores packets, table entries, and the like. The interface unit is the data interface of the network processor: packets enter through an ingress interface and are forwarded through an egress interface. The register unit completes the configuration of the network processor.
A prior-art network processor handles a packet as follows: a packet entering through a channel of the interface unit is first buffered in the memory unit; when it needs to be processed, a micro engine reads it from the memory unit with specific instructions. Because the packets of multiple interfaces are all stored in dedicated memory outside the micro engines, complicated storage management is needed. Moreover, when a micro engine processes a packet, it must schedule specific instructions and spend extra resources reading the packet content. This increases the management complexity of the memory unit (because there are multiple channels) and also increases the delay between the interface and the micro engine, thereby reducing the performance of the network processor.
It can be seen that in previous network processors it is difficult and complicated for the multiple micro engines to obtain the packets they need to process: some must fetch packets from an external memory space (SRAM), while others require multiple micro engines to fetch packets from a common memory space.
Summary of the invention
The object of the present invention is to provide a network processing device and method that allow packets to be flexibly delivered to any memory space of the network processor, improving the performance of the network processor while also reducing the complexity of its design.
The technical scheme of the present invention is as follows. A network processing device comprises an interface unit and a micro engine unit, the interface unit comprising a plurality of channels for receiving and forwarding packets, and the micro engine unit comprising a plurality of micro engines; the device further comprises:
an address mapping unit, composed of a plurality of register segments, which uses a three-stage address mapping mechanism to establish the mapping between a channel and the buffer corresponding to a micro engine or a buffer inside a micro engine;
wherein the packets in the channels are filled into the buffer corresponding to a micro engine, or a buffer inside a micro engine, according to the segmented address mapping of the address mapping unit.
The address mapping unit comprises: a first-stage mapping unit comprising a plurality of registers, the number of registers being equal to the number of channels of the interface unit; a second-stage mapping unit comprising at least one register; and a third-stage mapping unit comprising two groups of registers, each group containing as many registers as there are micro engines.
The third-stage mapping unit comprises: an address register group comprising a plurality of first registers, the number of first registers being equal to the number of micro engines in the micro engine unit, each first register storing the address of one micro engine buffer in the micro engine unit; and a location register group, in one-to-one correspondence with the address register group, comprising a plurality of second registers, each second register storing the address of another micro engine buffer in the micro engine unit.
Each register of the first-stage mapping unit stores the address of a register of the second-stage mapping unit.
A plurality of registers of the first-stage mapping unit may store the address of the same register of the second-stage mapping unit.
The number of registers in the second-stage mapping unit may be any integer from 1 up to the number of channels; each register of the second-stage mapping unit stores the address of a register in the third-stage mapping unit.
The present invention also provides a network packet processing method, comprising: accepting incoming packets through the channels of an interface; establishing, through segmented address mapping, a communication connection between a channel and the buffer corresponding to a micro engine or a buffer inside a micro engine; and filling the packets in the channel into that buffer, the segmented address mapping using a three-stage address mapping mechanism to establish the mapping between the channel and the buffer.
The concrete steps of the network packet processing method of the present invention are:
the interface unit accesses the register in the first-stage mapping unit corresponding to one of its channels and obtains the first address stored in that register;
the interface unit accesses the register of the second-stage mapping unit indicated by the first address and obtains the second address stored in that register;
the interface unit accesses the register of the third-stage mapping unit indicated by the second address and obtains the third address stored in that register;
the interface unit deposits the packet entering the channel into the buffer indicated by the third address.
After a buffer is filled, the second address stored in the register of the second-stage mapping unit changes according to the address of the next buffer defined in the third-stage mapping unit, so as to point to the register of the third-stage mapping unit that stores the address of that next buffer.
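The lookup steps above amount to a three-level table walk. The following is a minimal illustrative sketch, not the patent's implementation; all register contents, buffer addresses, and names are invented for the example.

```python
def resolve_buffer(channel, stage1, stage2, stage3_addr):
    """Walk MAP Stage1 -> Stage2 -> Stage3 to find the target buffer address."""
    first_addr = stage1[channel]             # step 1: Stage1 register of the channel
    second_addr = stage2[first_addr]         # step 2: Stage2 register it points to
    third_addr = stage3_addr[second_addr]    # step 3: Stage3 register -> buffer address
    return third_addr

# Example configuration: two channels both mapped through Stage2 register 0,
# which currently points at Stage3 register 0 (Buffer1 at address 0x1000).
stage1 = [0, 0]
stage2 = [0]
stage3_addr = [0x1000, 0x2000]

assert resolve_buffer(0, stage1, stage2, stage3_addr) == 0x1000  # channel 1 -> Buffer1
```

Since only `stage2` is mutable, redirecting subsequent packets only requires rewriting that one register, which is the point of the three-stage scheme.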
The present invention also provides a network packet processing method, comprising: a channel of the interface unit receives an external packet; the interface unit searches the address mapping unit to find the address of the buffer corresponding to the channel, the address mapping unit using a three-stage address mapping mechanism to establish the mapping between a channel and the buffer corresponding to a micro engine or a buffer inside a micro engine; and the interface unit deposits the packet, via a bus, into the buffer corresponding to a micro engine or a buffer inside a micro engine.
Concretely, the interface unit searches the address mapping unit for the address of the buffer corresponding to the channel as follows: it retrieves the first address stored in the first-stage mapping unit register corresponding to the channel; retrieves the second address stored in the second-stage mapping unit register indicated by the first address; retrieves the third address stored in the third-stage mapping unit register indicated by the second address; and deposits the packet entering the channel into the buffer indicated by the third address.
After a buffer is filled, the second address stored in the register of the second-stage mapping unit changes according to the address of the next buffer defined in the third-stage mapping unit, so as to point to the register of the third-stage mapping unit that stores the address of that next buffer.
The beneficial effect of the present invention is that, by adopting a three-stage address mapping mechanism, packets entering the network processing device can flexibly reach any memory space of the network processor. This solves the common problems in network processors of difficult interface data management and of micro engines obtaining packets only with difficulty and at high cost, thereby improving packet forwarding and processing performance and making the packet-processing micro engine software easier to implement.
Description of drawings
Fig. 1 is a structural schematic of a prior-art network processor;
Fig. 2 is the register map of the three-stage address mapping of the present invention;
Fig. 3 is the register map of embodiment 1 of the three-stage address mapping of the present invention;
Fig. 4 is the register map of embodiment 2 of the three-stage address mapping of the present invention;
Fig. 5 is the register map of embodiment 3 of the three-stage address mapping of the present invention;
Fig. 6 is a structural schematic of the network processing device of the present invention;
Fig. 7 is a flow chart of the network processing method of the present invention;
Fig. 8 is a flow chart of an embodiment of the network processing method of the present invention.
Embodiment
The specific embodiments of the present invention are described below with reference to the drawings. The present invention is mainly applied to the hardware design of network processors. The network processor defined by the present invention comprises the following units: a micro engine unit (or a memory unit), an interface unit, and a register unit.
The micro engine unit is the core of the network processor and completes packet analysis and forwarding. It may contain a plurality of micro engines operating in parallel.
The memory unit is the network processor's internal storage, such as SRAM, and stores packets, table entries, and the like.
The interface unit is the data interface of the network processor: packets enter through an ingress interface and are forwarded through an egress interface.
The register unit completes the segmented address mapping of the network processor.
Three groups of registers are defined in the register unit: a first-stage mapping unit (MAP Stage1), a second-stage mapping unit (MAP Stage2), and a third-stage mapping unit (MAP Stage3), as shown in Fig. 2. The first-stage mapping unit is composed of registers whose number equals the number of channels; the second-stage mapping unit is composed of at least one register; the third-stage mapping unit is composed of registers whose number is related to the number of channels. The buffer unit is composed of a plurality of buffers, their number corresponding to the number of registers in the third-stage mapping unit; each buffer corresponds to a micro engine or refers to a buffer inside a micro engine.
The number of registers in the first-stage register group equals the number of channels in the interface unit, and each register defines whether its channel is valid: if the register value for a channel points to a valid register in the second-stage register group, the channel is valid; otherwise the channel is invalid.
The value of a register in the second-stage register group in turn points to a particular register in the third-stage register group. The second stage is the only mutable element of the three-stage address mapping: as its value changes continuously, the MAP Stage3 address it points to also changes continuously. In other words, the packets of a channel defined as valid in the first-stage register group can flexibly reach different buffers (defined by the MAP Stage3 addresses).
The third-stage register group actually consists of two one-to-one register groups: one group of registers defines memory addresses (buffer addresses), and the other group defines the buffer size and the position of the next buffer. After a buffer is filled, the corresponding register value in the second-stage mapping unit changes according to the next-buffer position defined in the third-stage mapping unit, so that this second-stage register points to a new third-stage register. Subsequent packets proceed in the same way.
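As a rough model of this update, the sketch below keeps the two Stage3 register groups as two parallel lists and advances the Stage2 register when a buffer fills. The step values and list sizes are invented assumptions; the patent only specifies that the second group defines the buffer size and the next-buffer position.

```python
stage3_addr = [0x1000, 0x2000, 0x3000]  # group 1: buffer addresses
stage3_step = [1, 1, 1]                 # group 2: step to the next buffer (assumed)

def on_buffer_full(stage2, idx):
    """Advance a Stage2 register by the configured step, wrapping at the last buffer."""
    step = stage3_step[stage2[idx]]
    stage2[idx] = (stage2[idx] + step) % len(stage3_addr)

stage2 = [0]
on_buffer_full(stage2, 0)   # 0 -> 1
on_buffer_full(stage2, 0)   # 1 -> 2
on_buffer_full(stage2, 0)   # 2 -> wraps back to 0
assert stage2 == [0]
```

The wraparound at the last buffer is what produces the endless cycle described for the round-robin embodiments below.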
Embodiment 1:
The settings of each mapping stage are shown schematically in Fig. 3, and the structure is shown in Fig. 6. The network processing device 100 comprises an interface unit 101 and a micro engine unit 103; the interface unit comprises a plurality of channels for receiving and forwarding packets, and the micro engine unit comprises a plurality of micro engines. The device further comprises an address mapping unit 102, composed of a plurality of register segments, for segmented address mapping. The packets in the channels are filled into the buffers of the micro engines according to the segmented address mapping of the address mapping unit. The concrete mapping relations are:
All n channel addresses of MAP Stage1 are set to point to MAP Stage2 address 0, indicating that all n channels of the interface unit are valid and may receive packets. MAP Stage2 address 0 is set to point to MAP Stage3 address 0. Because all n channel addresses of MAP Stage1 are set to 0, packets may enter on any of the n channels, and the n channels are polled in round-robin fashion. MAP Stage1 address 0 (i.e., channel 1) points to MAP Stage2 address 0, which in turn points to MAP Stage3 address 0, so a packet on channel 1 fills Buffer1, indicated by MAP Stage3 address 0 (the MAP Stage3 address pointed to by the MAP Stage2 register). After the buffer is filled, the MAP Stage2 value is modified according to the step size set in the second register group of MAP Stage3: to fill Buffer2, indicated by the next MAP Stage3 address, the step size is set to 1, so the value 0 at MAP Stage2 address 0 is changed to 1 (0+1). When a packet of channel 2 arrives (under the MAP Stage1 channel polling; MAP Stage1 sets channel 2 to point to MAP Stage2 address 0 as well), the value of MAP Stage2 address 0 (= 1) selects Buffer2 at MAP Stage3 address 1, so Buffer2 is filled and the MAP Stage2 value becomes 2 (1+1), and so on until the last buffer, after which the value wraps back to 0, completing one cycle. The cycle repeats indefinitely, and the packets of each channel are distributed flexibly among the buffers (SRAM, packet-processing micro engines, etc.).
Fig. 7 shows the workflow of the network packet processing method of the present invention: packets enter through the channels of the interface; each channel establishes, through segmented address mapping, a communication connection with the buffer of a micro engine, and the packets in the channel are filled into that buffer.
The concrete workflow (Fig. 8), under the mapping relations of Fig. 3, is: the interface unit of the network processing device accesses the register in the first-stage mapping unit corresponding to its channel 1 and obtains the first address, 0, stored in that register; it accesses the register of the second-stage mapping unit indicated by the first address and obtains the second address, 0, stored in that register; it accesses the register of the third-stage mapping unit indicated by the second address and obtains the third address (the address of Buffer1) stored in that register; the interface unit then deposits the packet entering its channel into Buffer1, the buffer indicated by the third address.
After Buffer1 is full, the MAP Stage2 value is modified according to the step size set in the second register group of MAP Stage3: to fill Buffer2, indicated by the next MAP Stage3 address, the step size is set to 1, so the value 0 at MAP Stage2 address 0 is changed to 1 (0+1). When a packet of channel 2 arrives (under the MAP Stage1 channel polling; channel 2 also points to MAP Stage2 address 0), the value of MAP Stage2 address 0 (= 1) selects Buffer2 at MAP Stage3 address 1, so Buffer2 is filled and the MAP Stage2 value becomes 2 (1+1), and so on until the last buffer, after which the value wraps back to 0, completing one cycle. The cycle repeats indefinitely, and the packets of each channel are distributed flexibly among the buffers (SRAM, packet-processing micro engines, etc.).
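A hypothetical end-to-end simulation of this embodiment, with invented channel and buffer counts and names, shows how the single shared Stage2 register spreads packets round-robin over the buffers:

```python
n_channels = n_buffers = 4
stage1 = [0] * n_channels        # every channel -> Stage2 register 0
stage2 = [0]                     # the single shared Stage2 register
buffers = [f"Buffer{i + 1}" for i in range(n_buffers)]
step = 1                         # step assumed set in the second Stage3 register group

filled = []
for ch in range(n_channels):     # channels polled in round-robin order
    s2 = stage1[ch]
    filled.append((ch + 1, buffers[stage2[s2]]))
    stage2[s2] = (stage2[s2] + step) % n_buffers   # 0+1, 1+1, ... then wrap to 0

assert filled == [(1, "Buffer1"), (2, "Buffer2"), (3, "Buffer3"), (4, "Buffer4")]
```

After the last buffer, `stage2[0]` wraps back to 0, so the next round of packets starts again at Buffer1, matching the infinite cycle described above.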
Embodiment 2:
The settings of each mapping stage are shown schematically in Fig. 4, and the structure is shown in Fig. 6. The network processing device 100 comprises an interface unit 101 and a micro engine unit 103; the interface unit comprises a plurality of channels for receiving and forwarding packets, and the micro engine unit comprises a plurality of micro engines. The device further comprises an address mapping unit 102, composed of a plurality of register segments, for segmented address mapping. The packets in the channels are filled into the buffers of the micro engines according to the segmented address mapping of the address mapping unit. The concrete mapping relations are:
All n channel addresses of MAP Stage1 are set with values pointing to MAP Stage2 addresses, and all n channels are valid. Channel 1 is set to 0, pointing to MAP Stage2 address 0; MAP Stage2 address 0 is in turn set to point to MAP Stage3 address 0, so packets arriving on channel 1 fill buffer1 at MAP Stage3 address 0. Likewise, packets of channel 2 fill buffer2 at MAP Stage3 address 1, and packets of channel n fill buffer n at MAP Stage3 address n. Each channel is thus configured to reach its own distinct buffer. In this case the MAP Stage2 addresses remain constant (the change step is set to 0): packets of channel 1 always fill buffer1, and packets of channel n always fill buffer n.
Fig. 7 shows the workflow of the network packet processing method of the present invention: packets enter through the channels of the interface; each channel establishes, through segmented address mapping, a communication connection with the buffer of a micro engine, and the packets in the channel are filled into that buffer. The concrete workflow, under the mapping relations of Fig. 4, is: the interface unit of the network processing device accesses the register in the first-stage mapping unit corresponding to its channel 1 and obtains the first address, 0, stored in that register; it accesses the register of the second-stage mapping unit indicated by the first address and obtains the second address, 0, stored in that register; it accesses the register of the third-stage mapping unit indicated by the second address and obtains the third address (the address of Buffer1) stored in that register; the interface unit then deposits the packet entering its channel 1 into Buffer1, the buffer indicated by the third address.
The interface unit accesses the register in the first-stage mapping unit corresponding to its channel 2 and obtains the first address, 1, stored in that register; it accesses the register of the second-stage mapping unit indicated by the first address and obtains the second address, 1, stored in that register; it accesses the register of the third-stage mapping unit indicated by the second address and obtains the third address (the address of Buffer2) stored in that register; the interface unit then deposits the packet entering its channel 2 into Buffer2, the buffer indicated by the third address.
The above steps repeat until the interface unit accesses the register in the first-stage mapping unit corresponding to its channel n and obtains the first address, n, stored in that register; it accesses the register of the second-stage mapping unit indicated by the first address and obtains the second address, n, stored in that register; it accesses the register of the third-stage mapping unit indicated by the second address and obtains the third address (the address of Buffer n) stored in that register; the interface unit then deposits the packet entering its channel n into Buffer n, the buffer indicated by the third address.
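Embodiment 2 can be sketched the same way as embodiment 1: with the step set to 0, every channel keeps hitting its own buffer on every packet. Channel counts and names here are again illustrative assumptions:

```python
n = 3
stage1 = list(range(n))          # channel i -> Stage2 register i
stage2 = list(range(n))          # Stage2 register i -> Stage3 register i
buffers = [f"Buffer{i + 1}" for i in range(n)]
step = 0                         # change step 0: Stage2 never advances

for _ in range(2):               # two packets per channel
    for ch in range(n):
        target = buffers[stage2[stage1[ch]]]
        assert target == f"Buffer{ch + 1}"   # channel ch+1 always hits its own buffer
        stage2[stage1[ch]] = (stage2[stage1[ch]] + step) % n   # no-op with step 0
```

The only configuration difference from embodiment 1 is the step value, which illustrates why the patent keeps the step in a separate Stage3 register group.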
Embodiment 3:
The settings of each mapping stage are shown schematically in Fig. 5, and the structure is shown in Fig. 6. The network processing device 100 comprises an interface unit 101 and a micro engine unit 103; the interface unit comprises a plurality of channels for receiving and forwarding packets, and the micro engine unit comprises a plurality of micro engines. The device further comprises an address mapping unit 102, composed of a plurality of register segments, for segmented address mapping. The packets in the channels are filled into the buffers of the micro engines according to the segmented address mapping of the address mapping unit. The concrete mapping relations are:
Two channel addresses of MAP Stage1 are set to point to MAP Stage2 address 0; two further channel addresses of MAP Stage1 are set to point to MAP Stage2 address 3 and address 4 respectively; the remaining n channel addresses of MAP Stage1 are all set to point to MAP Stage2 address m. Every channel of the interface unit is thus valid and may receive packets. MAP Stage2 address 0 is set to point to MAP Stage3 address 0. Because two channel addresses of MAP Stage1 are set to 0, packets may enter on either of these two channels, which are polled in round-robin fashion. MAP Stage1 address 0 (channel 1) points to MAP Stage2 address 0, which in turn points to MAP Stage3 address 0, so a packet on channel 1 fills Buffer1, indicated by MAP Stage3 address 0. After the buffer is filled, the MAP Stage2 value is modified according to the step size set in the second register group of MAP Stage3: with the step size set to 1, the value 0 at MAP Stage2 address 0 is changed to 1 (0+1), so when a packet of channel 2 arrives (channel 2 also points to MAP Stage2 address 0), the value of MAP Stage2 address 0 (= 1) selects Buffer2 at MAP Stage3 address 1, and Buffer2 is filled.
The channel at MAP Stage1 address 1 is set to point to MAP Stage2 address 3, and this channel is valid. MAP Stage2 address 3 is set to point to MAP Stage3 address 3, so packets arriving on this channel fill buffer3 at MAP Stage3 address 3.
The channel at MAP Stage1 address 3 is set to point to MAP Stage2 address 4, and this channel is valid. MAP Stage2 address 4 is set to point to MAP Stage3 address 4, so packets arriving on this channel fill buffer4 at MAP Stage3 address 4.
The remaining n channel addresses of MAP Stage1 all point to MAP Stage2 address m, and the m address of MAP Stage2 is set to point to MAP Stage3 address n. Because these n channel addresses are all set to m, packets may enter on any of them, and they are polled in round-robin fashion. MAP Stage1 address n points to MAP Stage2 address m, which in turn points to MAP Stage3 address n, so a packet on channel n1 fills Buffer n1, indicated by MAP Stage3 address n (the MAP Stage3 address pointed to by the MAP Stage2 register). After the buffer is filled, the MAP Stage2 value is modified according to the step size set in the second register group of MAP Stage3: with the step size set to 1, the value at MAP Stage2 address m is changed to m+1, so when a packet of channel n2 arrives (channel n2 also points to MAP Stage2 address m), the value of MAP Stage2 address m selects Buffer n2 in the MAP Stage3 register it points to, and Buffer n2 is filled. The MAP Stage2 value then becomes m+2, and so on until the last buffer, after which the value wraps back to m, completing one cycle. The cycle repeats indefinitely, and the packets of each channel are distributed flexibly among the buffers (SRAM, packet-processing micro engines, etc.).
Fig. 7 shows the workflow of the network packet processing method of the present invention: packets enter through the channels of the interface; each channel establishes, through segmented address mapping, a communication connection with the buffer of a micro engine, and the packets in the channel are filled into that buffer. The concrete workflow follows the mapping relations of Fig. 5; the packet processing method here is a combination of the methods of embodiments 1 and 2 above.
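A sketch combining both schemes, as in this embodiment, might look as follows; all indices, counts, and names are invented for illustration (channels 1 and 2 share a round-robin Stage2 register over two buffers, while channels 3 and 4 use fixed Stage2 registers with step 0):

```python
stage1 = {1: 0, 2: 0, 3: 3, 4: 4}          # channel -> Stage2 register
stage2 = {0: 0, 3: 3, 4: 4}                # Stage2 register -> Stage3 register
buffers = {0: "Buffer1", 1: "Buffer2", 3: "Buffer3", 4: "Buffer4"}
steps = {0: 1, 3: 0, 4: 0}                 # only the shared register advances

def fill(ch):
    """Resolve a channel's buffer, then update Stage2 as for a full buffer."""
    s2 = stage1[ch]
    buf = buffers[stage2[s2]]
    if steps[s2]:                          # round-robin over the shared buffer pair
        stage2[s2] = (stage2[s2] + steps[s2]) % 2
    return buf

assert fill(1) == "Buffer1"    # shared register starts at Stage3 index 0
assert fill(2) == "Buffer2"    # advanced to index 1 by the step
assert fill(3) == "Buffer3"    # fixed mapping, step 0
assert fill(4) == "Buffer4"    # fixed mapping, step 0
```

Since each Stage2 register carries its own step behavior, round-robin and fixed channels coexist in one mapping table without special-casing in the lookup path.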
The above are three embodiments. Following the foregoing explanation, additional, more flexible configurations can be arranged so that the packets of different channels reach different addresses (the micro engine unit or the memory unit), facilitating processing by the packet-processing micro engines.
The beneficial effects brought by the technical scheme of the present invention are: it solves the common problems in network processors of difficult interface data management and of micro engines obtaining packets only with difficulty and at high cost, thereby making the packet-processing micro engine software easier to implement and improving packet forwarding performance.
The above embodiments are merely illustrative of the present invention and are not intended to limit it.

Claims (13)

1. A network processing device, comprising: an interface unit and a micro engine unit, the interface unit comprising a plurality of channels for receiving and forwarding packets, and the micro engine unit comprising a plurality of micro engines, characterized in that the device further comprises:
an address mapping unit, composed of a plurality of register segments, which uses a three-stage address mapping mechanism to establish the mapping between a channel and the buffer corresponding to a micro engine or a buffer inside a micro engine;
wherein the packets in the channels are filled into the buffer corresponding to a micro engine, or a buffer inside a micro engine, according to the segmented address mapping of the address mapping unit.
2. The network processing device of claim 1, wherein the address mapping unit comprises:
a first-segment mapping unit comprising a plurality of registers, the number of registers being equal to the number of channels of the interface unit;
a second-segment mapping unit comprising at least one register; and
a third-segment mapping unit comprising two groups of registers, the number of registers in each group being equal to the number of micro engines.
3. The network processing device of claim 2, wherein the third-segment mapping unit comprises:
an address register group comprising a plurality of first registers, the number of first registers being equal to the number of micro engines of the micro engine unit, each first register storing the address of a micro engine buffer in the micro engine unit; and
a location register group, in one-to-one correspondence with the address register group, comprising a plurality of second registers, each second register storing the address of another micro engine buffer in the micro engine unit.
4. The network processing device of claim 2, wherein each register of the first-segment mapping unit stores the address of a respective register of the second-segment mapping unit.
5. The network processing device of claim 2, wherein a plurality of registers of the first-segment mapping unit store the address of one register of the second-segment mapping unit.
6. The network processing device of claim 2, wherein the number of registers in the second-segment mapping unit is any integer value greater than or equal to 1 and less than or equal to the number of channels; and each register of the second-segment mapping unit stores the address of a register in the third-segment mapping unit.
7. The network processing device of claim 1, wherein the address mapping unit comprises:
a first-segment mapping unit comprising a plurality of registers, the number of registers being equal to the number of channels of the interface unit, each register of the first-segment mapping unit storing the address of a respective register of a second-segment mapping unit;
the second-segment mapping unit comprising at least one register, the number of registers in the second-segment mapping unit being any integer value greater than or equal to 1 and less than or equal to the number of channels, each register of the second-segment mapping unit containing the address of a register in a third-segment mapping unit; and
the third-segment mapping unit comprising two groups of registers, the number of registers in each group being equal to the number of micro engines, the third-segment mapping unit comprising: an address register group comprising a plurality of first registers, the number of first registers being equal to the number of micro engines of the micro engine unit, each first register storing the address of a micro engine buffer in the micro engine unit; and a location register group, in one-to-one correspondence with the address register group, comprising a plurality of second registers, each second register storing the address of another micro engine buffer in the micro engine unit.
8. A network message processing method, characterized by:
employing a channel in an interface to receive an incoming message; and
establishing, through segmented address mapping, a communication connection between the channel and a buffer corresponding to a micro engine or a buffer inside the micro engine, and filling the message in the channel into the buffer corresponding to the micro engine or the buffer inside the micro engine, the segmented address mapping adopting a three-segment address mapping mechanism to implement the mapping relationship between the channel and the buffer corresponding to the micro engine or the buffer inside the micro engine.
9. The network message processing method of claim 8, wherein:
the interface unit accesses a register, in a first-segment mapping unit, corresponding to one of its channels, and obtains a first address stored in that register;
the interface unit accesses a register of a second-segment mapping unit corresponding to the first address, and obtains a second address stored in that register;
the interface unit accesses a register of a third-segment mapping unit corresponding to the second address, and obtains a third address stored in that register; and
the interface unit deposits the message entering the channel into the buffer corresponding to the third address.
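The three register look-ups recited in claim 9 chain together into a single address resolution. A minimal sketch under our own assumptions (the function name `resolve_buffer` is hypothetical, and the mapping units are modeled here as plain dictionaries from addresses to stored values):

```python
def resolve_buffer(channel, stage1, stage2, stage3):
    """Follow the three-segment mapping from a channel to a buffer address."""
    first_address = stage1[channel]         # first-segment register for the channel
    second_address = stage2[first_address]  # second-segment register at the first address
    third_address = stage3[second_address]  # third-segment register at the second address
    return third_address                    # buffer the message is deposited into
```

The interface unit would then write the message into the buffer at the returned address.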
10. The network message processing method of claim 9, further comprising:
after a buffer is filled, changing the second address stored in the register of the second-segment mapping unit according to the address of the next buffer defined in the third-segment mapping unit, so as to point to the register of the third-segment mapping unit that stores the address of the next buffer.
11. A network message processing method, characterized by comprising:
receiving an external message at a channel of an interface unit;
searching, by the interface unit, an address mapping unit to find the address of a buffer corresponding to the channel, the address mapping unit adopting a three-segment address mapping mechanism to implement the mapping relationship between the channel and a buffer corresponding to a micro engine or a buffer inside the micro engine; and
depositing, by the interface unit via a bus, the message into the buffer corresponding to the micro engine or the buffer inside the micro engine.
12. The network message processing method of claim 11, wherein searching the address mapping unit by the interface unit to find the address of the buffer corresponding to the channel specifically comprises:
retrieving, by the interface unit, a first address stored in a register of a first mapping unit corresponding to one of its channels;
retrieving a second address stored in a register of a second mapping unit corresponding to the first address;
retrieving a third address stored in a register of a third mapping unit corresponding to the second address; and
depositing, by the interface unit, the message entering the channel into the buffer corresponding to the third address.
13. The network message processing method of claim 12, further comprising:
after a buffer is filled, changing the second address stored in the register of the second mapping unit according to the address of the next buffer defined in the third mapping unit, so as to point to the register of the third mapping unit that stores the address of the next buffer.
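The second-address update recited in claims 10 and 13 can be sketched as a step-and-wrap rewrite of the second-segment register, consistent with the cycling behavior in the description. This is an illustrative model only; the function and parameter names are our own.

```python
def advance_stage2(stage2, m, step, num_stage3_registers):
    # Rewrite the second-segment register at index m so it points to the
    # third-segment register holding the next buffer's address; wrap around
    # after the last register so the mapping cycles back to the first buffer.
    stage2[m] = (stage2[m] + step) % num_stage3_registers
    return stage2[m]
```

With a step length of 1 and four third-segment registers, repeated calls walk the pointer 0, 1, 2, 3 and then back to 0, completing one cycle.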
CNB2006100027851A 2006-01-25 2006-01-25 Network processing device and method Active CN100454899C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100027851A CN100454899C (en) 2006-01-25 2006-01-25 Network processing device and method


Publications (2)

Publication Number Publication Date
CN1845529A CN1845529A (en) 2006-10-11
CN100454899C true CN100454899C (en) 2009-01-21

Family

ID=37064444

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100027851A Active CN100454899C (en) 2006-01-25 2006-01-25 Network processing device and method

Country Status (1)

Country Link
CN (1) CN100454899C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789869A (en) * 2009-01-23 2010-07-28 华为技术有限公司 Processing method and devices of protocol independent multicast service
CN102855213B (en) * 2012-07-06 2017-10-27 中兴通讯股份有限公司 A kind of instruction storage method of network processing unit instruction storage device and the device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5379393A (en) * 1992-05-14 1995-01-03 The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations Cache memory system for vector processing
US6243762B1 (en) * 1994-08-08 2001-06-05 Mercury Computer Systems, Inc. Methods and apparatus for data access and program generation on a multiprocessing computer
US20030105901A1 (en) * 1999-12-22 2003-06-05 Intel Corporation, A California Corporation Parallel multi-threaded processing
CN1577310A (en) * 2003-06-27 2005-02-09 株式会社东芝 Information processing system including processors and memory managing method used in the same system
US20050198090A1 (en) * 2004-03-02 2005-09-08 Altek Corporation Shift register engine


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on multi-level parallelism mechanisms of the IXP1200 network processor. 刘钰, 赵荣彩, 张铮, 芦阳. Microcomputer Development (微机发展), Vol. 14, No. 6. 2004
Research on multithreading implementation in the IXP2400 network processor and its micro engines. 吴闻, 李雪莹, 许榕生, 刘秉瀚. Computer Engineering and Applications (计算机工程与应用), No. 9. 2004

Also Published As

Publication number Publication date
CN1845529A (en) 2006-10-11

Similar Documents

Publication Publication Date Title
US8656071B1 (en) System and method for routing a data message through a message network
CN108900327B (en) DPDK-based astronomical data acquisition and real-time processing method
CN1201532C (en) Quick-circulating port dispatcher for high-volume asynchronous transmission mode exchange
JP6535253B2 (en) Method and apparatus for utilizing multiple linked memory lists
US7158964B2 (en) Queue management
US20130219148A1 (en) Network on chip processor with multiple cores and routing method thereof
CN101488922B (en) Network-on-chip router having adaptive routing capability and implementing method thereof
CN104394096A (en) Multi-core processor based message processing method and multi-core processor
US8751701B2 (en) Host channel adapter with pattern-type DMA
CN101359314A (en) Optimal use of buffer space by a storage controller which writes retrieved data directly to a memory
US20130028256A1 (en) Network element with shared buffers
CN114356223B (en) Memory access method and device, chip and electronic equipment
WO2012019475A1 (en) Access control method and device for reduced latency dynamic random access memory with separate input/output (rldram sio)
CN1545658A (en) Switch fabric with dual port memory emulation scheme
JP7074839B2 (en) Packet processing
CN112084136A (en) Queue cache management method, system, storage medium, computer device and application
US20030056073A1 (en) Queue management method and system for a shared memory switch
CN102567278A (en) On-chip multi-core data transmission method and device
CN102402422A (en) Processor component and memory sharing method thereof
CN111641566A (en) Data processing method, network card and server
CN100454899C (en) Network processing device and method
AU2003234641A1 (en) Inter-chip processor control plane
WO2017086987A1 (en) In-memory data shuffling
CN101420233B (en) Bit interleaver and interleaving method
CN109947390B (en) Buffer system and method of operation thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220118

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: XFusion Digital Technologies Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.