US5630021A - Hamming neural network circuit - Google Patents


Info

Publication number
US5630021A
Authority
US
United States
Prior art keywords
template
competition
neuron
neurons
exemplar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/316,135
Inventor
Zhi-Jian Li
Bing-Xue Shi
Bin-Qiao Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United Microelectronics Corp
Original Assignee
United Microelectronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United Microelectronics Corp filed Critical United Microelectronics Corp
Priority to US08/316,135 priority Critical patent/US5630021A/en
Assigned to UNITED MICROELECTRONICS CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, ZHI-JIAN, QIAO-BIN, SHI, BING-XUE
Application granted granted Critical
Publication of US5630021A publication Critical patent/US5630021A/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/19 - Recognition using electronic means
    • G06V 30/192 - Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V 30/194 - References adjustable by an adaptive method, e.g. learning

Abstract

A Hamming neural network circuit is provided with N binary inputs and M exemplar template outputs, and has a template matching calculation subnet and a winner-take-all subnet. The template matching calculation subnet includes M first neurons in which M exemplar templates are stored respectively. Each first neuron includes N pull-up and pull-down transistor pairs connected in parallel with each other, and connected to and controlled by the N binary inputs, respectively, so that the M first neurons generate M template matching signals depending on the matching degrees between the N binary inputs and the M exemplar templates. The winner-take-all subnet includes M second neurons, each having a template competition node, a load element connected between a power source and the template competition node, and a competition circuit connected between the template competition node and ground. The M template competition nodes are connected to the M template matching signals respectively for generating the M exemplar template outputs. The competition circuit of each second neuron includes M-1 parallel-connected transistors controlled respectively by the template competition nodes of all second neurons except the template competition node of itself so that the template competition node connecting with the largest template matching signal is eventually at a relatively high voltage level, and the other template competition nodes are at a relatively low voltage level, after competition.

Description

BACKGROUND OF THE INVENTION
The present invention relates to a Hamming neural network circuit, and more particularly to an analog integrated circuit of a Hamming neural network which can be fabricated in CMOS (Complementary-Metal-Oxide-Semiconductor) technology.
Artificial neural network models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. The main research method in this field is still to simulate the models or realize the algorithms in software. Although this approach can solve many problems, it is not suitable for applications requiring real-time processing, such as some image and speech recognition applications. Modern VLSI (Very-Large-Scale-Integration) technology has made it possible to fabricate more practical artificial neural network chips. Digital logic circuits, however, cannot achieve truly full parallel processing. Artificial neural networks realized as analog integrated circuits have full parallel-processing capability and other inherent advantages of biological neural networks.
The literature "An Introduction to Computing with Neural Nets", Richard P. Lippmann, IEEE ASSP Magazine, pp. 4-22, April, 1987, provides an introduction to the field of artificial neural networks by reviewing six important neural network models that can be used for pattern classification. As described in Lippmann's literature, these networks are highly parallel building blocks that illustrate neural-network components and design principles and can be used to construct more complex systems. One of the six neural network models is the Hamming network which is a neural network implementation of the optimum classifier for binary patterns corrupted by random noise. The structural model of a feed-forward Hamming network maximum likelihood classifier for binary inputs corrupted by noise is described in FIG. 6 of Lippmann's literature. The Hamming network is a two-layer network, and implements the optimum minimum error classifier when bit errors are random and independent. The lower subnet shown in Lippmann's FIG. 6 calculates N minus the Hamming distance to M exemplar patterns. The upper MAXNET subnet selects that node with the maximum output. All nodes use threshold-logic nonlinearities where it is assumed that the outputs of these nonlinearities never saturate.
The operation of the Hamming network is described in Box 2 of Lippmann's article. Weights and thresholds are first set in the lower subnet such that the matching scores generated by the outputs of the middle nodes of FIG. 6 are equal to N minus the Hamming distance to the exemplar patterns. These matching scores range from 0 to the number of elements in the input (N) and are highest for those nodes corresponding to classes with exemplars that best match the input. Thresholds and weights in the MAXNET subnet are fixed. All thresholds are set to zero and the weight from each node to itself is 1. Weights between nodes are inhibitory with a value of -ε, where ε<1/M.
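In compact notation (a restatement of Lippmann's formulation as summarized above, not notation taken from the patent itself), the two layers compute:

```latex
% Lower subnet: matching score of exemplar template t_j for a binary input x
s_j = N - \mathrm{HD}(\mathbf{x}, \mathbf{t}_j), \qquad 0 \le s_j \le N, \qquad j = 1, \dots, M

% Upper MAXNET subnet: mutual inhibition until only one node remains positive
\mu_j(k+1) = f\!\Bigl(\mu_j(k) - \varepsilon \sum_{i \ne j} \mu_i(k)\Bigr),
\qquad \mu_j(0) = s_j, \qquad f(v) = \max(v, 0), \qquad 0 < \varepsilon < \tfrac{1}{M}
```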
After weights and thresholds have been set, a binary pattern with N elements is presented at the bottom of the Hamming network. It must be presented long enough to allow the matching score outputs of the lower subnet to settle and initialize the output values of the MAXNET. The input is then removed and the MAXNET iterates until the output of only one node is positive. Classification is then complete and the selected class is that corresponding to the node with a positive output.
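For reference, the following short sketch (an illustration in software of the behavior just described, not part of the patented circuit; function names and parameter values are chosen for illustration) mimics the digital Hamming network: the lower subnet computes the matching scores and the MAXNET iterates until only one node remains positive.

```python
import numpy as np

def hamming_classify(x, templates, eps=None, max_iters=1000):
    """Digital reference behavior of the Hamming network (after Lippmann, 1987)."""
    templates = np.asarray(templates)          # shape (M, N), entries 0 or 1
    x = np.asarray(x)                          # shape (N,), entries 0 or 1
    m, n = templates.shape
    if eps is None:
        eps = 1.0 / (m + 1)                    # must satisfy eps < 1/M

    # Lower subnet: matching score = N minus the Hamming distance to each exemplar.
    scores = n - np.sum(templates != x, axis=1)

    # Upper MAXNET: self weight 1, inhibitory weight -eps between nodes,
    # threshold-logic nonlinearity f(v) = max(v, 0); iterate until one node stays positive.
    mu = scores.astype(float)
    for _ in range(max_iters):
        mu = np.maximum(mu - eps * (mu.sum() - mu), 0.0)
        if np.count_nonzero(mu > 0) <= 1:
            break
    return int(np.argmax(mu)), scores

# Example: three 5-bit exemplars; the noisy input is closest to exemplar 1.
templates = [[0, 0, 0, 0, 0], [1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
winner, scores = hamming_classify([1, 1, 0, 0, 0], templates)
print(winner, scores)    # -> 1 [3 4 2]
```

In the analog circuit described below, the same selection is performed by a continuous-time winner-take-all subnet rather than by iterated discrete updates.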
SUMMARY OF THE INVENTION
The primary object of the present invention is to provide a Hamming neural network circuit which can realize the Hamming network model and is well suited to fabrication in CMOS technology.
In accordance with the present invention, a Hamming neural network circuit having N binary inputs and M exemplar template outputs comprises:
a template matching calculation subnet including M first neurons in which M exemplar templates are stored respectively, each first neuron consisting of N pull-up and pull-down transistor pairs connected in parallel with each other, the N pull-up and pull-down transistor pairs of each first neuron being connected to and controlled by the N binary inputs, respectively, to operate in the nonsaturation region during a first time interval so that the M first neurons generate M template matching signals depending on the matching degrees between the N binary inputs and the M exemplar templates stored in the M first neurons; and
a winner-take-all subnet including M second neurons, each having a template competition node, a load element connected between a power source and the template competition node, and a competition circuit connected between the template competition node and ground; the M template competition nodes being connected to the M template matching signals respectively during a second time interval for generating the M exemplar template outputs, and the competition circuit of each second neuron consisting of M-1 parallel-connected transistors controlled respectively by the template competition nodes of all second neurons except the template competition node of itself so that the template competition node connecting with the largest template matching signal is eventually at a relatively high voltage level, and the other template competition nodes are at a relatively low voltage level, after competition.
According to one feature of the present invention, the M template competition nodes are connected with each other during the first time interval in order to balance their voltage levels before competition, and are disconnected from each other during the second time interval when competition proceeds.
According to another feature of the present invention, the competition circuit of each second neuron further includes an additional parallel-connected transistor adapted to be controlled by an adjusting threshold voltage, and the winner-take-all subnet further includes an additional second neuron acting as a threshold neuron. The template competition node of the threshold neuron is connected to the adjusting threshold voltage during the second time interval, and the M parallel-connected transistors of the threshold neuron are controlled respectively by the template competition nodes of all other second neurons, so that all exemplar template outputs are eventually at the relatively low voltage level, after competition, if the adjusting threshold voltage is higher than the voltages of all template matching signals.
According to a further feature of the present invention, the threshold voltages of the pull-up and pull-down transistors in the first neurons are individually preset depending upon the corresponding exemplar templates, and some of the binary inputs are selectively inverted in each first neuron, depending upon its corresponding exemplar template. The load elements in the second neurons are MOS transistors adapted to be controlled by a bias voltage.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention can be more fully understood by reference to the following description and accompanying drawings, which form an integral part of this application:
FIG. 1 is a schematic circuit diagram of a Hamming neural network according to one preferred embodiment of the present invention;
FIGS. 2a through 2e illustrate five exemplar patterns or templates of Arabic numerals "0" to "4", each constructed from 70 binary pixels (a 7×10 array);
FIGS. 3a through 3c illustrate the experimental results for the Hamming neural network circuit shown in FIG. 1; and
FIG. 4 illustrates a timing chart of two clock signals used in the Hamming neural network circuit of FIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring now to FIG. 1, there is shown a Hamming neural network circuit suitable to be fabricated in CMOS technology, according to one preferred embodiment of the present invention. The Hamming neural network circuit includes a template matching calculation subnet consisting of M first neurons 10-1 through 10-m, and a winner-take-all subnet consisting of M second neurons 20-1 through 20-m and a threshold neuron 20T. The first neurons 10-1 through 10-m are connected to the second neurons 20-1 through 20-m via M switch circuits 40-1 through 40-m, respectively. The threshold neuron 20T is connected to a threshold voltage VT via an additional switch circuit 40T.
As known, the Hamming neural network circuit can be used for speech and image recognition, and its inputs are binary signals. As shown in FIG. 1, the Hamming neural network circuit has N binary inputs x1 through xn, and M exemplar template outputs Vo1 through Vom. The M exemplar templates or patterns can be determined and designed by statistics and analysis. Referring to FIG. 2, there are shown five exemplar templates or patterns of Arabic numerals "0" to "4", each consisting of a 7×10 array of binary pixels, i.e. white and black pixels, for ease of understanding of the present invention. In this case, the number N is 70, and the number M is 5. A white pixel may be represented by a logic "0" signal while a black pixel may be represented by a logic "1" signal. An unknown pattern containing 70 binary pixels is inputted into the Hamming neural network circuit via the binary inputs x1 through xn, and the Hamming neural network circuit determines which one of the M exemplar templates is most representative of the unknown pattern by generating a logic "1" signal at the corresponding exemplar template output and a logic "0" signal at the other exemplar template outputs.
Each first neuron 10-1˜10-m includes N pull-up and pull-down transistor pairs connected in parallel with each other and between a power source Vdd and ground. The N pull-up and pull-down transistor pairs of each first neuron 10-1˜10-m are controlled by the N binary inputs x1 through xn, respectively. In this embodiment of the present invention, the pull-up transistors are PMOS (P-channel Metal-Oxide-Semiconductor) transistors, and the pull-down transistors are NMOS (N-channel MOS) transistors. The drain electrodes of the PMOS pull-up and NMOS pull-down transistors of each first neuron 10-1˜10-m are connected together to generate a template matching signal Vi1 ˜Vim depending on the matching degree between the N binary inputs x1 through xn and the corresponding exemplar template. After the exemplar templates are determined and designed, the M exemplar templates are "stored" in the M first neurons 10-1˜10-m, respectively. The "storage" of the exemplar templates is achieved by appropriately designing the threshold voltages of all pull-up and pull-down transistors in the corresponding first neurons so that the voltage levels of the template matching signals Vi1 ˜Vim are determined by the matching degrees between the N binary inputs x1 through xn and the exemplar templates. A larger matching degree produces a larger template matching signal, and vice versa. If an input unknown pattern matches perfectly with an exemplar template stored in the corresponding first neuron, a maximum output voltage level Vdd is obtained at the corresponding template matching output. It should be understood by those skilled in the art that all PMOS pull-up and NMOS pull-down transistor pairs are designed to operate in the nonsaturation region. Since the exemplar templates all contain white (e.g. represented by logic "0") and black (e.g. represented by logic "1") pixels, the expected logic "1" inputs in the respective first neurons 10-1˜10-m are connected to the pull-up and pull-down transistor pairs via inverters 12, as shown in FIG. 1. As known in the art, the unknown binary pattern with N elements must be presented at the inputs x1 ˜xn long enough to allow the template matching signals Vi1 ˜Vim of the template matching calculation subnet to settle and initialize the exemplar template output values of the winner-take-all subnet. In this embodiment of the present invention, the unknown pattern is presented at the binary inputs during a first time interval φ1, as shown in FIG. 4.
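As a rough behavioral abstraction of this template matching stage (an assumption for illustration only; the patent stores templates by tailoring transistor threshold voltages and selectively inverting inputs, and does not specify this model), the settled node voltage can be viewed as a conductance divider in which each matching pixel contributes pull-up toward Vdd and each mismatching pixel contributes pull-down toward ground, so that a perfect match settles near Vdd:

```python
# Behavioral sketch only (not the patented transistor-level design): each matching
# pixel is assumed to add pull-up conductance toward Vdd and each mismatching pixel
# pull-down conductance toward ground, so the node settles near
# Vdd * G_up / (G_up + G_down).  The conductances g_up / g_down are illustrative.

VDD = 5.0

def template_matching_voltage(inputs, template, g_up=1.0, g_down=1.0):
    """Approximate settled voltage of one first neuron's matching output."""
    matches = sum(1 for x, t in zip(inputs, template) if x == t)
    mismatches = len(template) - matches
    g_total_up = matches * g_up          # conducting pull-up devices
    g_total_down = mismatches * g_down   # conducting pull-down devices
    if g_total_up + g_total_down == 0:
        return 0.0
    return VDD * g_total_up / (g_total_up + g_total_down)

# A perfect match settles at Vdd; flipping pixels lowers the matching signal.
template = [1, 0, 1, 1, 0, 0, 1, 0]
print(template_matching_voltage(template, template))                  # 5.0
print(template_matching_voltage([0, 0, 1, 1, 0, 0, 1, 1], template))  # 3.75
```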
The template matching signals Vi1 ˜Vim of the template matching calculation subnet are connected to the exemplar template outputs, or template competition nodes, Vo1 ˜Vom of the winner-take-all subnet via M switch circuits 40-1 through 40-m, respectively, as shown in FIG. 1. Each switch circuit 40-1˜40-m, or 40-T, includes a PMOS transistor and an NMOS transistor connected in parallel with each other. The PMOS transistors of the switch circuits are controlled by the complement of a clock signal φ2, and the NMOS transistors of the switch circuits are controlled by the clock signal φ2 itself. The clock signal φ2 is shown in FIG. 4. The parallel-connected PMOS and NMOS transistors of the switch circuits 40-1 through 40-m are used to eliminate the threshold voltage drop so as not to decrease the voltage levels of the template matching signals Vi1 ˜Vim, because the template matching signals may be very small. During the second time interval φ2, the template matching signals are inputted into the winner-take-all subnet.
The second neurons 20-1 through 20-m and the threshold neuron 20-T have the same circuit structure, and each includes a load element 22 connected between the power source Vdd and the template competition node Vo1 ˜Vom or Vt, and a competition circuit 24 connected between the template competition node Vo1 ˜Vom or Vt and ground. The load elements 22 may be PMOS transistors controlled by a bias voltage Vp to act as resistance elements. The competition circuits 24 each include M NMOS transistors connected in parallel with each other and controlled respectively by the template competition nodes Vo1 ˜Vom and Vt of all second neurons 20-1 through 20-m and the threshold neuron 20-T except the template competition node of the neuron itself.
During the second time interval φ2, the inputted template matching signals Vi1 through Vim and the threshold voltage VT compete with each other in the winner-take-all subnet. After competition, the largest signal eventually becomes a relatively high voltage signal, and the other signals eventually become relatively low voltage signals. More specifically, suppose the voltage level of the template matching signal Vi1 is larger than those of the other template matching signals and the threshold voltage. The NMOS transistors in the competition circuits 24 of the second neurons 20-2 through 20-m and the threshold neuron 20-T which are controlled by the template matching signal Vi1 are turned on more strongly, and thus the voltage levels of the template competition nodes Vo2 ˜Vom and Vt are pulled down. The NMOS transistors in the competition circuit 24 of the second neuron 20-1, which are controlled by the template competition nodes Vo2 ˜Vom and Vt respectively, are then turned off further, and thus the voltage level of the template competition node Vo1 is pulled up. The voltage levels of the template competition nodes Vo2 ˜Vom and Vt are then pulled down further because the NMOS transistors controlled by the template competition node Vo1 are turned on more strongly. Eventually, the template competition node Vo1 is at a relatively high voltage level, for example near the power source Vdd, and the other template competition nodes Vo2 ˜Vom and Vt are at a relatively low voltage level, for example near ground. The Hamming neural network circuit of the present invention then determines that the unknown input pattern corresponds to the exemplar template or pattern stored in the first neuron 10-1.
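The positive feedback described above can be illustrated with a simplified behavioral simulation (a sketch under assumed device parameters, not the patented circuit itself): each node is pulled up by a roughly constant load current and pulled down by square-law currents gated by all the other nodes, so that small initial differences in the matching signals are amplified until one node sits near Vdd and the rest near ground.

```python
import numpy as np

# Simplified behavioral simulation of the cross-coupled winner-take-all competition
# (a sketch, not the patented transistor-level design).  Each competition node is
# pulled up by a constant load current and pulled down by NMOS devices gated by all
# *other* nodes, modeled with a square-law current above an assumed threshold VTN.
# The supply, threshold, current scale and Euler time step are illustrative values.

VDD, VTN = 5.0, 0.7

def winner_take_all(v_init, i_load=1.0, k=1.0, dt=0.02, steps=3000):
    """Relax the competition nodes, starting from the template matching signals."""
    v = np.asarray(v_init, dtype=float)
    for _ in range(steps):
        drive = np.maximum(v - VTN, 0.0) ** 2   # pull-down drive contributed by each node
        i_down = k * (drive.sum() - drive)      # every node is pulled down by all the others
        v = np.clip(v + dt * (i_load - i_down), 0.0, VDD)
    return v

# Node 0 starts only 0.3 V above node 1, yet settles near VDD while the others
# settle near ground; the threshold node would simply be one more competitor.
print(winner_take_all([2.8, 2.5, 1.9, 2.2]))
```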
The threshold voltage VT is adjustable, and is used to reject the recognition if the template matching signals Vi1 through Vim are all smaller than the threshold voltage VT. In this case, the template competition node Vt is eventually at a relatively high voltage level, and the other template competition nodes Vo1 ˜Vom are eventually at a relatively low voltage level. The Hamming neural network circuit of the present invention then determines that a "no match" result occurs.
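The steady-state decision, including this rejection case, can be summarized as follows (an abstraction of the settled outcome described above, not additional circuitry from the patent; the function name is illustrative):

```python
def settle_outcome(matching_signals, v_threshold):
    """Index of the winning template output, or None when the threshold neuron wins."""
    best = max(range(len(matching_signals)), key=lambda j: matching_signals[j])
    if matching_signals[best] < v_threshold:
        return None      # all matching signals below V_T: every template output stays low
    return best

# Example: with V_T = 3.0 V, signals of 2.1 V, 2.6 V and 1.8 V are all rejected.
print(settle_outcome([2.1, 2.6, 1.8], 3.0))   # -> None  ("no match")
print(settle_outcome([2.1, 4.2, 1.8], 3.0))   # -> 1
```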
Referring to FIG. 1, an NMOS balance/isolation transistor 30 is connected between every two adjacent template competition nodes Vo1 ˜Vom and Vt. The NMOS balance/isolation transistors 30 are controlled by the clock signal φ1 as shown in FIG. 4, and are used to balance the voltage levels of all template competition nodes Vo1 ˜Vom and Vt during the first time interval φ1, i.e. before competition, and to isolate all template competition nodes Vo1 ˜Vom and Vt from each other during the second time interval φ2, i.e. during competition. The period of time T shown in FIG. 4 indicates one recognition cycle.
The Hamming neural network circuit of the present invention has been implemented in a single-layer-metal CMOS technology. In order to reduce the power dissipation, the gate length of the MOS transistors in the template matching calculation subnet is designed to be larger than the gate width. In one preferred embodiment, the PMOS transistors in the template matching calculation subnet have a gate length of about 20 microns and a gate width of about 15 microns. The NMOS transistors in the template matching calculation subnet have a gate length of about 20 microns and a gate width of about 10 microns. The gate length and width of the NMOS transistors in the winner-take-all subnet are about 5 microns and about 20 microns, respectively. A CMOS inverter is further added to each exemplar template output to increase the driving capability and to avoid the effect of external load on the resolution accuracy of the winner-take-all subnet.
The operation speed of the template matching calculation subnet corresponds approximately to the delay time of one logic gate. The speed and resolution accuracy of the whole Hamming neural network depend mainly on the winner-take-all subnet, so the winner-take-all subnet was tested thoroughly.
The testing results show that the resolution accuracy of the winner-take-all subnet can reach 20 mV. The operation speed strongly depends on the voltage difference between the input terminals, the overall input voltage level, and the bias voltage Vp applied to the PMOS load transistors. The following testing results are for a three-terminal network. Referring to FIG. 3a, the testing results show that the convergence or competition time decreases approximately linearly with the increase of the input voltage difference. Referring to FIG. 3b, with the input voltage difference kept at a fixed value, the testing results show that the convergence time decreases quickly with the increase of the overall input voltage level. Referring to FIG. 3c, the testing results show that the convergence time decreases quickly with the reduction of the bias voltage Vp. Testing of the winner-take-all subnet with different numbers of input and output terminals also shows that the convergence time increases slowly as the number of terminals increases. The reason is that, in the present circuit design, the convergence time depends mainly on the output load rather than on the load in the network itself.
Overall, the testing results lead to the conclusion that the winner-take-all subnet has high resolution accuracy and can converge in less than 200 ns. The resolution accuracy and convergence speed can be improved markedly if multiple metal layers, narrower line-width technology, and multiple-stage driving circuits are used.
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures.

Claims (5)

What is claimed is:
1. A Hamming neural network circuit having N binary inputs and M exemplar template outputs, comprising:
a template matching calculation subnet including M first neurons in which M exemplar templates are stored respectively, each first neuron including N pull-up and pull-down transistor pairs connected in parallel with each other, said N pull-up and pull-down transistor pairs of each first neuron being connected to and controlled by said N binary inputs, respectively, so that said M first neurons generate M template matching signals depending on an amount of matching between said N binary inputs and said M exemplar templates stored in said M first neurons; and
a winner-take-all subnet including M second neurons, each having a template competition node, a load element connected between a power source and said template competition node, and a competition circuit connected between said template competition node and ground; said M template competition nodes being connected to said M template matching signals respectively for generating said M exemplar template outputs, and said competition circuit of each second neuron including M-1 parallel-connected transistors controlled respectively by said template competition nodes of all second neurons except the template competition node of itself so that the template competition node connecting with the largest template matching signal is established at a high voltage level, and the other template competition nodes are at a low voltage level relative to the high voltage level, after competition.
2. The Hamming neural network circuit as claimed in claim 1, wherein said winner-take-all subnet further includes means for connecting said M template competition nodes with each other in order to balance their voltage levels before competition, and means for disconnecting said M template competition nodes from each other when competition proceeds.
3. The Hamming neural network circuit as claimed in claim 2, wherein said competition circuit of each second neuron further includes an additional parallel-connected transistor controlled by a predetermined threshold voltage, and wherein said winner-take-all subnet further includes an additional second neuron acting as a threshold neuron, the template competition node of said threshold neuron being connected to said threshold voltage during competition, and the M parallel-connected transistors of said threshold neuron being controlled respectively by the template competition nodes of all other second neurons, so that all exemplar template outputs are eventually at the low voltage level, after competition, if said threshold voltage is higher than the voltages of all template matching signals.
4. The Hamming neural network as claimed in claim 3, wherein said template matching calculation subnet includes means for inverting selected ones of said binary inputs of each first neuron, depending upon the exemplar template of each first neuron.
5. The Hamming neural network circuit as claimed in claim 4, wherein said load elements in said second neurons are MOS transistors controlled by a bias voltage.
US08/316,135 1994-09-30 1994-09-30 Hamming neural network circuit Expired - Lifetime US5630021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/316,135 US5630021A (en) 1994-09-30 1994-09-30 Hamming neural network circuit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/316,135 US5630021A (en) 1994-09-30 1994-09-30 Hamming neural network circuit

Publications (1)

Publication Number Publication Date
US5630021A true US5630021A (en) 1997-05-13

Family

ID=23227629

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/316,135 Expired - Lifetime US5630021A (en) 1994-09-30 1994-09-30 Hamming neural network circuit

Country Status (1)

Country Link
US (1) US5630021A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033019A1 (en) * 1997-12-19 1999-07-01 Bae Systems Plc Neural networks and neural memory
US5999643A (en) * 1998-04-06 1999-12-07 Winbond Electronics Corp. Switched-current type of hamming neural network system for pattern recognition
US6341275B1 (en) * 1999-04-27 2002-01-22 Winbond Electrnics Corp. Programmable and expandable hamming neural network circuit

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5059814A (en) * 1988-11-30 1991-10-22 The California Institute Of Technology Winner-take-all circuits for neural computing systems
US5049758A (en) * 1988-12-09 1991-09-17 Synaptics, Incorporated Adaptable CMOS winner-take all circuit
US5146106A (en) * 1988-12-09 1992-09-08 Synaptics, Incorporated CMOS winner-take all circuit with offset adaptation
US5331215A (en) * 1988-12-09 1994-07-19 Synaptics, Incorporated Electrically adaptable neural network with post-processing circuitry
US4980583A (en) * 1989-01-03 1990-12-25 National Semiconductor Corporation CMOS level shift circuit with active pull-up and pull-down
US5248873A (en) * 1991-06-10 1993-09-28 Synaptics, Incorporated Integrated device for recognition of moving objects
US5268684A (en) * 1992-01-07 1993-12-07 Ricoh Corporation Apparatus for a neural network one-out-of-N encoder/decoder
US5361311A (en) * 1992-07-14 1994-11-01 The United States Of America As Represented By The Secretary Of Commerce Automated recongition of characters using optical filtering with positive and negative functions encoding pattern and relevance information
US5537512A (en) * 1993-05-26 1996-07-16 Northrop Grumman Corporation Neural network elements

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
CMOS Circuit Design of Programmable Neural Net Classifier of Exclusive Classes, Aug. 1989. *
Gomez-Castaneda et al., VLSI Hamming Neural Net Showing Digital Decoding, Jun. 1993. *
Gomez-Castaneda, Integrated Circuit for Hamming Neural Net, 1994. *
Grant et al., Synthesis of a Class of Artificial Neural Network . . . , 1991. *
Grant, A High Speed Integrated Hamming Neural Classifier, 1994. *
He et al., A High Density and Low Power Charge Based Hamming Network, Mar. 1993. *
Lippmann, Richard P., "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, pp. 4-22, Apr. 1987. *
Robinson et al., A Modular VLSI Design of a CMOS Hamming Network, Apr. 1991. *
Robinson et al., A Modular CMOS Design of a Hamming Network, Jul. 1992. *
Zhong, An Analog Cell Library Useful for Artificial Neural Networks, 1990. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033019A1 (en) * 1997-12-19 1999-07-01 Bae Systems Plc Neural networks and neural memory
US20050049984A1 (en) * 1997-12-19 2005-03-03 Bae Systems Plc Neural networks and neural memory
US5999643A (en) * 1998-04-06 1999-12-07 Winbond Electronics Corp. Switched-current type of hamming neural network system for pattern recognition
US6341275B1 (en) * 1999-04-27 2002-01-22 Winbond Electrnics Corp. Programmable and expandable hamming neural network circuit

Similar Documents

Publication Publication Date Title
US5010512A (en) Neural network having an associative memory that learns by example
US6456992B1 (en) Semiconductor arithmetic circuit
US5905387A (en) Analog voltage-signal selector device
JP3278080B2 (en) Semiconductor integrated circuit
Kotani et al. Clock-controlled neuron-MOS logic gates
US5682109A (en) Semiconductor integrated circuit
Liu et al. The circuit realization of a neuromorphic computing system with memristor-based synapse design
US6041321A (en) Electronic device for performing convolution operations
US6185331B1 (en) Switched-current fuzzy processor for pattern recognition
EP1118954A1 (en) Semiconductor computing unit
US5720004A (en) Current-mode hamming neural network
US5923779A (en) Computing circuit having an instantaneous recognition function and instantaneous recognition method
US6341275B1 (en) Programmable and expandable hamming neural network circuit
US5630021A (en) Hamming neural network circuit
KR19990022761A (en) A circuit for comparing the two electrical values provided by the first neuron MOSF and the reference source
US5444821A (en) Artificial neuron element with electrically programmable synaptic weight for neural networks
US6272476B1 (en) Programmable and expandable fuzzy processor for pattern recognition
US11600319B2 (en) Memory system capable of performing a bit partitioning process and an internal computation process
US6127852A (en) Semiconductor integrated circuit for parallel signal processing
US5923205A (en) Semiconductor arithmetic circuit
Grant et al. Design, implementation and evaluation of a high-speed integrated Hamming neural classifier
US5544279A (en) Fuzzy logic recognition device which determines the mode of an unknown move using fuzzy set theory
US5999643A (en) Switched-current type of hamming neural network system for pattern recognition
Aksin A high-precision high-resolution WTA-MAX circuit of O (N) complexity
Anguita et al. A low-power CMOS implementation of programmable CNN's with embedded photosensors

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNITED MICROELECTRONICS CORP., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, ZHI-JIAN;SHI, BING-XUE;QIAO-BIN;REEL/FRAME:007280/0488

Effective date: 19940824

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12