WO2010028517A1 - System and method for generating/ identifying cipher code via artificial neural network - Google Patents

System and method for generating/ identifying cipher code via artificial neural network

Info

Publication number
WO2010028517A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
password
vector
value
handwriting
Prior art date
Application number
PCT/CN2008/001915
Other languages
French (fr)
Chinese (zh)
Inventor
陈淮琰
王小春
Original Assignee
无敌科技(西安)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 无敌科技(西安)有限公司
Publication of WO2010028517A1 publication Critical patent/WO2010028517A1/en

Links

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2431 - Multiple classes

Definitions

  • the present invention relates to a system and method for generating/verifying passwords, and more particularly to a system and method for generating/verifying passwords by processing a user-written handwriting through a neural network.
  • in addition, to improve personal identification, technology using fingerprints as passwords has been developed in recent years and has gradually become common in high-priced electronic products such as notebook computers and personal digital assistants (PDAs). Compared with character passwords, fingerprint passwords have a more complex password space, and because fingerprints are unique to each individual, their reliability is quite high. However, using a fingerprint as a password requires a dedicated fingerprint input device, which significantly increases cost and prevents wide use in low-priced products.
  • another password protection technique with strong personal identification is face recognition/retina recognition. It requires dedicated scanning equipment, is costly, and no mature implementation is currently available to the general public.
  • the invention addresses the technical problem that the password protection technologies of the background art cannot effectively reduce the crack rate of passwords in a low-cost manner, and provides a method and system for generating/verifying passwords through a neural network that improve password reliability at low cost.
  • the technical solution of the present invention is:
  • the present invention provides a method for generating a password through a neural network, characterized in that the method comprises the following steps: 1) receiving password handwriting;
  • the handwriting is usually a set of sampling points represented by coordinates.
  • these sampling points can be processed into a set of vectors in several ways; two examples follow:
  • 2.1) divide the coordinate plane into a 100X100 grid, fit the sampling points in each cell to a straight line to obtain its slope, and record the slopes of all cells as a set of vectors;
  • 2.2) divide all sampling points by count into segments, for example 50 segments, fit the sampling points of each segment to a straight line to obtain its slope, and record the slopes of all segments as a set of vectors.
  • the neural network performs self-learning based on the password vector and the interference vector to generate password weight values.
  • the above neural network includes an input layer, an output layer, and a hidden layer, wherein the input layer has input neurons; the output layer has output neurons; the hidden layer has hidden neurons, and the input layer accesses the hidden layer through input neurons. The neurons are hidden, and the hidden layer accesses the output neurons of the output layer by hiding the neurons.
  • the specific steps of step 4) above are as follows:
  • 4.1) the input neurons receive the password vector and the interference vector;
  • the learning target values are specified in advance; for example, the interference vectors are assigned outputs between 000000 and 111110, while the password vector is assigned the output 111111.
  • the neural network is trained by feedback using these specified learning target values. 4.4) it is judged whether the error value is greater than the critical value; when the error value is not greater than the critical value, the operation result of each output neuron is stored as a comparison value and the process proceeds to the next step; when the error value is greater than the critical value, the process proceeds to step 4.5);
  • the threshold can be specified by the user.
  • when the critical value is extremely high, the input handwriting must be highly consistent with the password handwriting to be confirmed. In practice this value can usually be obtained empirically for the specific implementation.
  • the neural network adjusts the weight values of the output layer and the hidden layer according to the error value;
  • the weight value is a parameter set that determines the network function after the neural network training.
  • the method also includes the step of initializing a weight value of the neural network.
  • the present invention also provides a system for generating a password through a neural network, characterized in that the system comprises a receiving module for receiving password handwriting, an analysis module for analyzing the password handwriting and outputting a password vector according to the analysis result, a storage module for storing an interference vector, and a neural network that learns from the interference vector and the password vector to generate password weight values; the receiving module is connected to the neural network through the analysis module, and the storage module is connected to the neural network.
  • the system also includes an initialization module for initializing the weight values of the neural network, the initialization module being connected to the neural network.
  • the system also includes a readout module for reading an interference vector in the storage module, the storage module accessing the neural network via the readout module.
  • the present invention also provides a method for verifying a password through a neural network, which is special in that: the method comprises the following steps:
  • the method of obtaining the vector to be tested here is the same as the method of obtaining the cipher vector.
  • the values to be checked are compared with the comparison values (the comparison values are the values specified as outputs for the password vector in step 4.3 above), and the password is verified according to the comparison result.
  • the specific steps of step 4) above are as follows: 4.1) judging whether the maximum of the values to be checked is less than a matching lower limit;
  • 4.2) if the maximum value to be checked is less than the matching lower limit, proceed to step 4.5); if the maximum value to be checked is not less than the matching lower limit, proceed to step 4.3);
  • 4.3) the neural network then determines whether the output neuron that outputs this maximum value to be checked is the same as the output neuron that outputs the maximum comparison value; if they are the same neuron, proceed to step 4.4); if they are different neurons, proceed to step 4.5);
  • the present invention also provides a system for verifying a password through a neural network, characterized in that the system comprises a receiving module for receiving the handwriting to be checked, an analysis module for analyzing the handwriting to be checked and outputting a vector to be checked according to the analysis result, and a neural network for comparing the values to be checked with the comparison values; the receiving module is connected to the neural network through the analysis module.
  • the present invention generates/verifies a password composed of handwriting through a neural network; because the handwriting password is a two-dimensional directed trajectory, its space complexity is quite high and it is difficult for a machine to crack.
  • the content of the handwriting password is memorized and recognized through the neural network. Even the person who knows the neural network's network data cannot reverse or crack the content of the handwriting password from the data of the neural network.
  • the present invention has high security in comparison with the conventional technique of using characters as passwords.
  • an electronic product using the present invention only needs an ordinary handwriting input device (such as a mouse or a tablet) for the user to input handwriting, and no additional device needs to be purchased; compared with conventional password protection technology, the invention can therefore generate a highly secure password at low cost.
  • the user only needs to input handwriting into the system of the present invention according to his or her handwriting habits to generate and verify the password; compared with known techniques that require memorizing a character password or carrying an IC card/magnetic card for password verification, the present invention is more convenient to use.
  • FIG. 1 is a schematic diagram of a specific embodiment of a system for generating a password by a neural network according to the present invention
  • FIG. 2 is a schematic diagram of a specific embodiment of a method for generating a password by a neural network according to the present invention
  • FIG. 3 is a schematic diagram of the structure of the neural network of the present invention;
  • FIG. 4 is a flow chart of the self-learning of the neural network of the present invention;
  • FIG. 5 is a schematic diagram of a specific embodiment of a system for verifying a password by a neural network according to the present invention
  • FIG. 6 is a schematic diagram of a specific embodiment of a method for verifying a password by a neural network according to the present invention.
  • the system 100 for generating a password via a neural network of the present invention includes a receiving module 110, an analyzing module 120, a storage module 130, and a neural network 140.
  • the receiving module 110 is responsible for receiving the password handwriting (step 210) and transmitting the password handwriting to the analysis module 120.
  • the password handwriting is then analyzed by the analysis module 120 (step 220), and the analysis module 120 of this embodiment includes a sampling unit 122 for sampling the password handwriting to obtain a password vector.
  • the storage module 130 is configured to store at least one interference vector.
  • the interference vector is obtained, for example, by analysis of the interference handwriting previously input to the system 100 via the analysis module 120.
  • the cipher vector and the interference vector are respectively composed of a plurality of vector data, for example.
  • the cryptographic vector and the interference vector are input to the neural network 140 (step 240).
  • the system 100 further includes a readout module 150 for reading the interference vector in the storage module 130 and outputting the read interference vector, together with the password vector output by the analysis module 120, to the neural network 140.
  • the system 100 of this embodiment may further include an initialization module 160 for initializing the weight values in the neural network before the interference vector and the password vector are input to the neural network 140 (step 245).
  • the initialization module 160, for example, initializes the weight values of the neural network 140 to random values between 0 and 1 in step 245.
  • the neural network 140 includes an input layer 142, an output layer 144, and at least one hidden layer 146, wherein the input layer 142 has a plurality of input neurons 142a; the output layer has a plurality of output neurons 144a; and the hidden layer 146 has Multiple hidden neurons 146a.
  • the number of layers of the hidden layer 146 may vary depending on the required complexity of the password handwriting. For example, if simple handwriting is to be used as the password, the number of hidden layers 146 can be reduced; if more complex handwriting is to be used as the password to encrypt important data, the number of hidden layers 146 can be increased to provide more complex recognition capability. Those skilled in the art can choose the number of hidden layers 146 according to their needs; the present invention places no limit on the number of hidden layers 146.
  • after receiving the password vector CV and the interference vector DV, the neural network 140 performs self-learning according to them to generate a plurality of password weight values (step 250), and the neural network 140 memorizes these password weight values (step 260) to facilitate subsequent verification of the password.
  • the cryptographic vector CV and the interference vector DV are received by the input neuron 142a (step 251).
  • the number of input neurons 142a is, for example, equal to the sum of the amounts of vector data of the cipher vector CV and the interference vector DV. That is, each input neuron 142a receives a vector of data.
  • each output neuron 144a outputs an operation result according to the operation performed by the hidden layer 146 on the password vector CV and the interference vector DV received by the input neurons 142a (step 252), and the error value between the operation result O of each output neuron and the learning target value corresponding to it is then calculated (step 253).
  • it is then judged whether the error value is greater than a critical value (step 254).
  • when the error value between the operation result of each output neuron 144a and its corresponding learning target value is not greater than the critical value, the operation result of each output neuron is stored as a comparison value (step 255), so that these comparison values can serve as the reference for subsequent password verification.
  • the weight value in the neural network 140 at this time is the cryptographic weight value.
  • conversely, when the error value between the operation result of an output neuron 144a and its corresponding learning target value is greater than the critical value, the neural network 140 adjusts the weight values of the output layer 144 and the hidden layer 146 by itself. For example, when the error value E_k between the operation result O_k of the k-th output neuron and the learning target value T_k is greater than the critical value, the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer to which it is connected is adjusted according to the error value E_k (step 256).
  • step 256, for example, adjusts the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer to which it is connected by the formula W_kj(n+1) = W_kj(n) + L x E_k x O_j.
  • W_kj(n+1) is the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer to which it is connected after n+1 adjustments, and W_kj(n) is that weight value after n adjustments;
  • n is a positive integer;
  • O_j is the output of the j-th hidden neuron;
  • L is called the learning rate and is a constant greater than zero, usually between 0 and 1. In the self-learning process of the neural network, a larger learning rate gives faster learning, while a smaller learning rate gives more accurate learning results.
  • an output error value Ej of the j-th hidden neuron of the m-th hidden layer is calculated according to the adjusted weight value in step 256 (step 257).
  • the error value is calculated, for example, as E_j = O_j x (1 - O_j) x Σ_k (W'_kj x E_k), where W'_kj is the weight value adjusted in step 256.
  • then, according to the output error value E_j calculated in step 257, the weight value between the j-th hidden neuron and the i-th hidden neuron of the (m-1)-th hidden layer to which it is connected is adjusted (step 258).
  • step 258, for example, adjusts this weight value by the formula W_ji(n+1) = W_ji(n) + L x E_j x O_i, where W_ji(n+1) is the weight value between the j-th hidden neuron and the i-th hidden neuron of the (m-1)-th hidden layer to which it is connected after n+1 adjustments, W_ji(n) is that weight value after n adjustments, and O_i is the output of the i-th hidden neuron; here m and n are positive integers.
  • steps 256 to 258 are repeated in this way until the error value between the operation result of each output neuron 144a and its corresponding learning target value is not greater than the critical value; the neural network 140 has then completed its self-learning, and the weight values memorized by the neural network 140 at this time are the aforementioned password weight values.
  • system 500 includes a receiving module 510, an analysis module 520, and a neural network 530.
  • the receiving module 510 and the analyzing module 520 are similar to the receiving module 110 and the analyzing module 120 of the foregoing embodiment, respectively.
  • the neural network 530 has self-learned according to the flow of steps shown in "Fig. 4", thus memorizing the cryptographic weight values described above.
  • the receiving module 510 is responsible for receiving the handwriting to be checked (step 610) and transmitting it to the analysis module 520.
  • the handwriting to be checked is analyzed by the analysis module 520 (step 620). The analysis module 520 includes a sampling unit 522 for sampling the handwriting to be checked to obtain a vector to be checked.
  • the vector to be detected may also be composed of a plurality of vector data.
  • the test vector is input to the neural network 530 such that each output neuron of the neural network 530 outputs a test value according to the vector to be tested (step 630).
  • the neural network 530 will automatically compare these values to the control values stored therein.
  • the step of comparing the values to be checked with the comparison values includes judging whether the maximum of the values to be checked is less than a matching lower limit (step 642), where the matching lower limit is a constant whose actual value can be determined by those skilled in the art according to the required matching accuracy.
  • if the judgment in step 642 is that the maximum value to be checked is not less than the matching lower limit, it is then determined whether the output neuron that outputs the maximum value to be checked is the same as the output neuron that outputs the maximum comparison value (step 644). If they are the same neuron, the handwriting to be checked input by the user matches the originally set password handwriting, that is, the password verification succeeds (step 646). On the other hand, if they are different neurons, the handwriting to be checked does not match the originally set password handwriting, and the password verification fails (step 648).

Abstract

A system and method for generating/identifying a cipher code via an artificial neural network, which process handwriting input by the user, are provided. The system involves a receiver module (110) for receiving cipher-code handwriting, an analyzing module (120) for analyzing the cipher-code handwriting and outputting a cipher vector according to the result of the analysis, a storage module (130) for storing an interference vector, and an artificial neural network (140) that self-learns according to the interference vector and the cipher vector to generate the cipher weight values. The receiver module (110) is connected to the artificial neural network (140) via the analyzing module (120), and the storage module (130) is connected to the artificial neural network (140).

Description

Method and system for generating/verifying passwords through an artificial neural network
Technical Field
The present invention relates to a system and method for generating/verifying passwords, and more particularly to a system and method for generating/verifying passwords by processing handwriting input by a user through an artificial neural network.
Background Art
Protecting one's own property is human nature. To prevent others from stealing private assets, password protection technology has been widely applied in contemporary daily life, from the combination dial on an ordinary safe to computer power-on passwords, e-mail login passwords, transaction passwords for bank transactions, and so on.
Different password protection technologies have different advantages and disadvantages. Character passwords, currently the most widely used in all fields, are mostly composed of typable symbols such as letters and numbers; they are simple to implement, low in cost, and convenient to use. However, the user must memorize the characters that form the password, which is a burden for people with poor memory. Moreover, because the password space of character passwords is of limited complexity, their crack rate is quite high.
There is another password protection technology in which the password is built into an IC card/magnetic card, so that the user can directly use the IC card/magnetic card to open the password-protected object without memorizing the password. Although an IC card/magnetic card is very convenient to use, its disadvantages are that it is easily lost and that making the card incurs additional cost.
In addition, to improve personal identification, technology using fingerprints as passwords has been developed in recent years and has gradually become common in high-priced electronic products such as notebook computers and personal digital assistants (PDAs). Compared with character passwords, fingerprint passwords have a more complex password space, and because fingerprints are unique to each individual, their reliability is quite high. However, using a fingerprint as a password requires a dedicated fingerprint input device, which significantly increases cost and prevents wide use in low-priced products.
Another password protection technology with strong personal identification is face recognition/retina recognition. It requires dedicated scanning equipment, is expensive, and no mature implementation is currently available to the general public.
Summary of the Invention
The present invention solves the technical problem that the password protection technologies of the background art cannot effectively reduce the crack rate of passwords in a low-cost manner, and provides a method and system for generating/verifying passwords through an artificial neural network, improving the reliability of the password at low cost.
The technical solution of the present invention is as follows. The present invention provides a method for generating a password through a neural network, characterized in that the method comprises the following steps: 1) receiving password handwriting;
2) analyzing the password handwriting and obtaining a password vector from the password handwriting;
The handwriting is usually a set of sampling points represented by coordinates, which can be processed into a set of vectors in several ways; two examples follow:
2.1) divide the coordinate plane into a 100X100 grid, fit the sampling points in each cell to a straight line to obtain its slope, and record the slopes of all cells as a set of vectors;
2.2) divide all sampling points by count into segments, for example 50 segments, fit the sampling points of each segment to a straight line to obtain its slope, and record the slopes of all segments as a set of vectors;
3) providing an interference vector and inputting the password vector and the interference vector to the neural network;
4) the neural network performs self-learning based on the password vector and the interference vector to generate password weight values. The neural network comprises an input layer, an output layer and a hidden layer, wherein the input layer has input neurons, the output layer has output neurons, and the hidden layer has hidden neurons; the input layer is connected to the hidden neurons of the hidden layer through its input neurons, and the hidden layer is connected to the output neurons of the output layer through its hidden neurons.
The specific steps of step 4) above are as follows:
4.1) the input neurons receive the password vector and the interference vector;
4.2) the password vector and the interference vector are operated on according to the hidden layer: the password vector is input to the neural network that has been trained using the interference vector, and each output neuron then outputs an operation result;
4.3) the error value between the operation result output by each output neuron and the learning target value corresponding to it in the neural network is calculated;
The learning target values are specified in advance; for example, the interference vectors are assigned outputs between 000000 and 111110, while the password vector is assigned the output 111111. The neural network is then trained by feedback using these specified learning target values. 4.4) it is judged whether the error value is greater than a critical value; when the error value is not greater than the critical value, the operation result of each output neuron is stored as a comparison value and the process proceeds to the next step; when the error value is greater than the critical value, the process proceeds to step 4.5);
The critical value can be specified by the user. When the critical value is extremely high, the handwriting must be highly consistent to be confirmed. This value can usually be obtained empirically for the specific implementation.
4.5) the neural network adjusts the weight values of the output layer and the hidden layer according to the error value;
4.6) the weight values obtained in the neural network are then the password weight values;
Here the weight values are the set of parameters that determine the function of the network after the neural network has been trained.
The method further comprises the step of initializing the weight values of the neural network.
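The summary fixes only the example target patterns (interference vectors mapped to 000000 through 111110, the password vector to 111111). The sketch below shows one way such targets might be built, assuming six output neurons; the function name and the particular assignment of patterns to interference vectors are illustrative assumptions, not the patent's implementation.

    import numpy as np

    def make_targets(n_interference, n_outputs=6):
        # Interference vectors get binary patterns below all-ones; the password vector gets 111111.
        targets = []
        for i in range(n_interference):
            bits = format(i % (2 ** n_outputs - 1), "0{}b".format(n_outputs))
            targets.append([int(b) for b in bits])
        targets.append([1] * n_outputs)   # the password pattern
        return np.array(targets, dtype=float)

The rows of the returned array would line up with the interference vectors followed by the password vector when they are fed to the self-learning step described above.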
The present invention also provides a system for generating a password through a neural network, characterized in that the system comprises a receiving module for receiving password handwriting, an analysis module for analyzing the password handwriting and outputting a password vector according to the analysis result, a storage module for storing interference vectors, and a neural network that learns from the interference vectors and the password vector to generate password weight values; the receiving module is connected to the neural network through the analysis module, and the storage module is connected to the neural network.
The system further comprises an initialization module for initializing the weight values of the neural network, the initialization module being connected to the neural network.
The system further comprises a readout module for reading the interference vectors in the storage module, the storage module being connected to the neural network through the readout module.
The present invention also provides a method for verifying a password through a neural network, characterized in that the method comprises the following steps:
1) receiving handwriting to be checked;
2) analyzing the handwriting to be checked to obtain a vector to be checked;
The vector to be checked is obtained here in the same way as the password vector.
3) inputting the vector to be checked to the neural network, whose output neurons output values to be checked according to the vector to be checked;
4) comparing these values to be checked with the comparison values (the comparison values are the values specified as outputs for the password vector in step 4.3 above), and verifying the password according to the comparison result.
The specific steps of step 4) above are as follows: 4.1) judging whether the maximum of the values to be checked is less than a matching lower limit;
4.2) if the maximum value to be checked is less than the matching lower limit, proceeding to step 4.5); if the maximum value to be checked is not less than the matching lower limit, proceeding to step 4.3);
4.3) the neural network then judges whether the output neuron that outputs this maximum value to be checked is the same as the output neuron that outputs the maximum comparison value; if they are the same neuron, proceeding to step 4.4); if they are different neurons, proceeding to step 4.5);
4.4) the password verification succeeds;
4.5) the password verification fails.
The present invention also provides a system for verifying a password through a neural network, characterized in that the system comprises a receiving module for receiving handwriting to be checked, an analysis module for analyzing the handwriting to be checked and outputting a vector to be checked according to the analysis result, and a neural network for comparing the values to be checked with the comparison values; the receiving module is connected to the neural network through the analysis module.
In summary, the present invention generates/verifies a password composed of handwriting through an artificial neural network. Because the handwriting password is a two-dimensional directed trajectory, its space complexity is quite high and it is difficult for a machine to crack. Furthermore, the content of the handwriting password is memorized and recognized by the neural network; even a developer who knows the data of the neural network cannot derive or crack the content of the handwriting password from that data. Compared with the conventional technique of using characters as passwords, the present invention therefore offers high security.
In addition, an electronic product using the present invention only needs an ordinary handwriting input device (such as a mouse or a tablet) for the user to input handwriting, and no additional device needs to be purchased; compared with conventional password protection technologies, the present invention can therefore generate a highly secure password at low cost.
Moreover, the user only needs to input handwriting into the system of the present invention according to his or her own handwriting habits to generate and verify the password. Compared with known techniques that require memorizing a character password or carrying an IC card/magnetic card to perform password verification, the present invention is more convenient to use.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an embodiment of the system for generating a password through a neural network according to the present invention;
FIG. 2 is a schematic diagram of an embodiment of the method for generating a password through a neural network according to the present invention;
FIG. 3 is a schematic diagram of the structure of the neural network of the present invention;
FIG. 4 is a flow chart of the self-learning of the neural network of the present invention;
FIG. 5 is a schematic diagram of an embodiment of the system for verifying a password through a neural network according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of the method for verifying a password through a neural network according to the present invention.
Detailed Description of the Embodiments
Referring to FIGS. 1 and 2, the system 100 for generating a password through a neural network according to the present invention comprises a receiving module 110, an analysis module 120, a storage module 130, and a neural network 140. After the user inputs the password handwriting to be set as the password into the system 100 through a handwriting input device (not shown, such as a mouse or a tablet), the receiving module 110 receives the password handwriting (step 210) and transmits it to the analysis module 120. The analysis module 120 then analyzes the password handwriting (step 220); the analysis module 120 of this embodiment includes a sampling unit 122 for sampling the password handwriting to obtain a password vector.
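The patent gives no source code for the sampling unit 122; the following is a minimal sketch of the grid-based sampling described in example 2.1 of the summary (a 100X100 grid with one slope per cell). The function name, the least-squares line fit, and the handling of cells with fewer than two points are illustrative assumptions.

    import numpy as np

    def handwriting_to_vector(points, grid=100):
        # points: array of shape (N, 2) holding sampled (x, y) coordinates scaled to [0, 1).
        # Returns a vector of grid*grid slopes; empty or degenerate cells contribute 0.
        points = np.asarray(points, dtype=float)
        slopes = np.zeros((grid, grid))
        cols = np.clip((points[:, 0] * grid).astype(int), 0, grid - 1)
        rows = np.clip((points[:, 1] * grid).astype(int), 0, grid - 1)
        for r in range(grid):
            for c in range(grid):
                cell = points[(rows == r) & (cols == c)]
                if len(cell) >= 2 and np.ptp(cell[:, 0]) > 0:
                    slope, _ = np.polyfit(cell[:, 0], cell[:, 1], 1)  # fit a line, keep its slope
                    slopes[r, c] = slope
        return slopes.ravel()

A coarser grid, or the segment-based sampling of example 2.2, keeps the resulting vector (and therefore the input layer of the network) much smaller.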
The storage module 130 is used to store at least one interference vector. In this embodiment, an interference vector is obtained, for example, by having the analysis module 120 analyze interference handwriting previously input into the system 100. The password vector and the interference vectors are each composed, for example, of a plurality of vector data. After the storage module 130 provides at least one interference vector (step 230), the password vector and the interference vectors are input to the neural network 140 (step 240). In this embodiment, the system 100 further comprises a readout module 150 for reading the interference vectors in the storage module 130 and outputting the read interference vectors, together with the password vector output by the analysis module 120, to the neural network 140.
It is worth noting that the system 100 of this embodiment may further comprise an initialization module 160 for initializing the weight values in the neural network before the interference vectors and the password vector are input to the neural network 140 (step 245). For example, in step 245 the initialization module 160 initializes the weight values of the neural network 140 to random values between 0 and 1.
Referring to FIG. 3, the neural network 140 comprises an input layer 142, an output layer 144, and at least one hidden layer 146, wherein the input layer 142 has a plurality of input neurons 142a, the output layer has a plurality of output neurons 144a, and the hidden layer 146 has a plurality of hidden neurons 146a. It is worth mentioning that the number of hidden layers 146 may vary with the required complexity of the password handwriting. For example, if simple handwriting is to be used as the password, the number of hidden layers 146 can be reduced; if more complex handwriting is to be used as the password to encrypt important data, the number of hidden layers 146 can be increased to provide more complex recognition capability. Those skilled in the art can choose the number of hidden layers 146 according to their needs; the present invention places no limit on the number of hidden layers 146.
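A minimal sketch of how such a network might be held in memory, assuming a single hidden layer, sigmoid activations, and the random 0-1 weight initialization of step 245; the layer sizes and the NumPy representation are illustrative assumptions rather than the patent's implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def init_network(n_inputs, n_hidden, n_outputs, rng=None):
        # Step 245: initialize every weight to a random value between 0 and 1.
        rng = rng or np.random.default_rng()
        return {
            "w_hidden": rng.random((n_inputs, n_hidden)),   # input layer 142 -> hidden layer 146
            "w_output": rng.random((n_hidden, n_outputs)),  # hidden layer 146 -> output layer 144
        }

    def forward(net, x):
        # One pass through the network: hidden outputs O_j and output-neuron results O_k.
        o_hidden = sigmoid(x @ net["w_hidden"])
        o_output = sigmoid(o_hidden @ net["w_output"])
        return o_hidden, o_output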
After receiving the password vector CV and the interference vector DV, the neural network 140 performs self-learning according to the password vector CV and the interference vector DV to generate a plurality of password weight values (step 250), and the neural network 140 memorizes these password weight values (step 260) for use when the password is subsequently verified.
Referring to FIG. 4, after the password vector CV and the interference vector DV are input to the input layer 142 of the neural network 140, the input neurons 142a receive the password vector CV and the interference vector DV (step 251). In this embodiment, the number of input neurons 142a is, for example, equal to the total number of vector data in the password vector CV and the interference vector DV; that is, each input neuron 142a receives one vector datum.
Each output neuron 144a outputs an operation result according to the operation performed by the hidden layer 146 on the password vector CV and the interference vector DV received by the input neurons 142a (step 252), and the error value between the operation result O of each output neuron and the learning target value corresponding to it is then calculated (step 253). For example, the error value E_k between the k-th output neuron and its corresponding learning target value is given by E_k = (T_k - O_k) x O_k x (1 - O_k), where T_k is the learning target value corresponding to the k-th output neuron and O_k is the operation result actually output by the k-th output neuron.
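A one-line rendering of that error term, with made-up numbers for illustration:

    def output_error(t_k, o_k):
        # E_k = (T_k - O_k) x O_k x (1 - O_k), per step 253.
        return (t_k - o_k) * o_k * (1.0 - o_k)

    print(output_error(1.0, 0.6))   # 0.096: target 1, current output 0.6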
It is then judged whether this error value is greater than a critical value (step 254). When the error value between the operation result of each output neuron 144a and its corresponding learning target value is not greater than the critical value, the operation result of each output neuron is stored as a comparison value (step 255), so that these comparison values can serve as the reference when the password is subsequently verified. Moreover, the weight values in the neural network 140 at this time are the password weight values.
Conversely, when the error value between the operation result of an output neuron 144a and its corresponding learning target value is greater than the critical value, the neural network 140 adjusts the weight values of the output layer 144 and the hidden layer 146 by itself. For example, when the error value E_k between the operation result O_k of the k-th output neuron and the learning target value T_k is greater than the critical value, the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer to which it is connected is adjusted according to the error value E_k (step 256).
In this embodiment, step 256 adjusts the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer to which it is connected, for example, by the formula W_kj(n+1) = W_kj(n) + L x E_k x O_j, where W_kj(n+1) is the weight value between the k-th output neuron and the j-th hidden neuron of the m-th hidden layer after n+1 adjustments, and W_kj(n) is that weight value after n adjustments. Here n is a positive integer; O_j is the output of the j-th hidden neuron; and L is called the learning rate, a constant greater than zero that usually lies between 0 and 1. In the self-learning process of the neural network, a larger learning rate gives faster learning, while a smaller learning rate gives more accurate learning results.
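The published formula itself is rendered as an image in the source document; the delta-rule form below is reconstructed from the quantities the text defines (learning rate L, error E_k, hidden output O_j) and should be read as an illustration rather than the verbatim published expression.

    def update_output_weight(w_kj, L, e_k, o_j):
        # Step 256: W_kj(n+1) = W_kj(n) + L x E_k x O_j
        return w_kj + L * e_k * o_j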
Next, an output error value E_j of the j-th hidden neuron of the m-th hidden layer is calculated according to the weight values adjusted in step 256 (step 257). In this embodiment, the error value is calculated, for example, as E_j = O_j x (1 - O_j) x Σ_k (W'_kj x E_k), where W'_kj is the weight value adjusted in step 256.
Then, according to the output error value E_j calculated in step 257, the weight value between the j-th hidden neuron and the i-th hidden neuron of the (m-1)-th hidden layer to which it is connected is adjusted (step 258).
In this embodiment, step 258 adjusts the weight value between the j-th hidden neuron and the i-th hidden neuron of the (m-1)-th hidden layer to which it is connected, for example, by the formula W_ji(n+1) = W_ji(n) + L x E_j x O_i, where W_ji(n+1) is the weight value between the j-th hidden neuron and the i-th hidden neuron of the (m-1)-th hidden layer after n+1 adjustments, W_ji(n) is that weight value after n adjustments, and O_i is the output of the i-th hidden neuron; here m and n are positive integers.
Steps 256 to 258 are repeated in this way until the error value between the operation result of each output neuron 144a and its corresponding learning target value is not greater than the critical value; the neural network 140 has then completed its self-learning, and the weight values memorized by the neural network 140 at this time are the aforementioned password weight values.
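Putting steps 251 to 258 together, one possible self-learning loop for a single hidden layer looks as follows. The sigmoid activation, the vectorized update of all output-layer weights at once, the hidden-layer size, the learning rate, and the convergence test on the largest absolute error are illustrative assumptions; only the update formulas themselves follow the steps described above.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def self_learn(inputs, targets, critical_value=0.01, L=0.5, n_hidden=8,
                   max_rounds=10000, rng=None):
        # inputs: 2-D array, one row per vector (interference vectors plus the password vector).
        # targets: the learning target pattern for each row (e.g. 000000..111110 / 111111).
        rng = rng or np.random.default_rng(0)
        n_in, n_out = inputs.shape[1], targets.shape[1]
        w_hidden = rng.random((n_in, n_hidden))   # step 245: random weights in (0, 1)
        w_output = rng.random((n_hidden, n_out))
        for _ in range(max_rounds):
            worst = 0.0
            for x, t in zip(inputs, targets):
                o_hidden = sigmoid(x @ w_hidden)            # step 252: hidden outputs O_j
                o_out = sigmoid(o_hidden @ w_output)        # step 252: output results O_k
                e_out = (t - o_out) * o_out * (1 - o_out)   # step 253: E_k
                worst = max(worst, float(np.max(np.abs(e_out))))
                w_output += L * np.outer(o_hidden, e_out)   # step 256: W_kj += L x E_k x O_j
                e_hidden = o_hidden * (1 - o_hidden) * (w_output @ e_out)  # step 257: E_j
                w_hidden += L * np.outer(x, e_hidden)       # step 258: W_ji += L x E_j x O_i
            if worst <= critical_value:                     # step 254: all errors small enough
                break
        # step 255: store the operation results as comparison values; the row computed for the
        # password vector is the reference used later at verification time.
        comparison_values = sigmoid(sigmoid(inputs @ w_hidden) @ w_output)
        return w_hidden, w_output, comparison_values

In use, `inputs` would stack the stored interference vectors with the password vector produced by the analysis module, and `targets` would hold the corresponding patterns described in the summary.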
To make the present invention better understood by those skilled in the art, the system and method for verifying a password generated by the above method and system are described in detail below with reference to the drawings.
Referring to FIGS. 5 and 6, the system 500 comprises a receiving module 510, an analysis module 520, and a neural network 530. The receiving module 510 and the analysis module 520 are similar to the receiving module 110 and the analysis module 120 of the above embodiment, respectively. In particular, the neural network 530 has already performed self-learning according to the step flow shown in FIG. 4 and therefore memorizes the password weight values described above.
After the user inputs the handwriting to be checked into the password verification system 500 through a handwriting input device (not shown), the receiving module 510 receives the handwriting to be checked (step 610) and transmits it to the analysis module 520. The analysis module 520 then analyzes the handwriting to be checked (step 620); the analysis module 520 includes a sampling unit 522 for sampling the handwriting to be checked to obtain a vector to be checked. Like the password vector and the interference vectors of the above embodiment, the vector to be checked may also be composed of a plurality of vector data.
The vector to be checked is then input to the neural network 530, so that each output neuron of the neural network 530 outputs a value to be checked according to the vector to be checked (step 630). The neural network 530 itself compares these values to be checked with the comparison values stored in it. In this embodiment, the step of comparing the values to be checked with the comparison values includes judging whether the maximum of the values to be checked is less than a matching lower limit (step 642), where the matching lower limit is a constant whose actual value can be determined by those skilled in the art according to the required matching accuracy.
Accordingly, if the judgment in step 642 is that the maximum value to be checked is less than the matching lower limit, the password verification fails (step 648); if the judgment in step 642 is that the maximum value to be checked is not less than the matching lower limit, the neural network 530 then judges whether the output neuron that outputs this maximum value to be checked is the same as the output neuron that outputs the maximum comparison value (step 644). If they are the same neuron, the handwriting to be checked input by the user matches the originally set password handwriting, that is, the password verification succeeds (step 646). Conversely, if they are different neurons, the handwriting to be checked input by the user does not match the originally set password handwriting, and the password verification fails (step 648).
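A minimal sketch of this verification flow, reusing the trained weights and the password row of the comparison values from the training sketch above; the matching lower limit of 0.9 is an illustrative assumption.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def verify(test_vector, w_hidden, w_output, password_comparison, match_lower_limit=0.9):
        values = sigmoid(sigmoid(test_vector @ w_hidden) @ w_output)  # step 630: values to be checked
        if np.max(values) < match_lower_limit:                        # step 642
            return False                                              # step 648: verification fails
        # step 644: the winning output neuron must be the same one that wins for the password.
        return int(np.argmax(values)) == int(np.argmax(password_comparison))  # steps 646 / 648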

Claims

Claims
1. A method for generating a password through an artificial neural network, characterized in that the method comprises the following steps:
1) receiving password handwriting;
2) analyzing the password handwriting and obtaining a password vector from the password handwriting;
3) providing an interference vector and inputting the password vector and the interference vector to the neural network;
4) the neural network performing self-learning based on the password vector and the interference vector to generate password weight values.
2. The method for generating a password through a neural network according to claim 1, characterized in that the neural network comprises an input layer, an output layer and a hidden layer, wherein the input layer has input neurons, the output layer has output neurons, and the hidden layer has hidden neurons; the input layer is connected to the hidden neurons of the hidden layer through the input neurons, and the hidden layer is connected to the output neurons of the output layer through the hidden neurons.
3. The method for generating a password through a neural network according to claim 2, characterized in that the specific steps of step 4) are as follows:
4.1) the input neurons receive the password vector and the interference vector;
4.2) the password vector and the interference vector are operated on according to the hidden layer, the password vector being input to the neural network that has been trained using the interference vector, and each output neuron then outputs an operation result;
4.3) the error value between the operation result output by each output neuron and the learning target value corresponding to it in the neural network is calculated separately;
4.4) it is judged whether the error value is greater than a critical value; when the error value is not greater than the critical value, the operation result of each output neuron is stored as a comparison value and the process proceeds to the next step; when the error value is greater than the critical value, the process proceeds to step 4.5);
4.5) the neural network adjusts the weight values of the output layer and the hidden layer according to the error value;
4.6) the weight values obtained in the neural network are the password weight values.
4. The method for generating a password through a neural network according to claim 1, 2 or 3, characterized in that the method further comprises the step of initializing the weight values of the neural network.
5. A system for generating a password through a neural network, characterized in that the system comprises a receiving module for receiving password handwriting, an analysis module for analyzing the password handwriting and outputting a password vector according to the analysis result, a storage module for storing an interference vector, and a neural network that learns from the interference vector and the password vector to generate password weight values; the receiving module is connected to the neural network through the analysis module, and the storage module is connected to the neural network.
6. The system for generating a password through a neural network according to claim 5, characterized in that the system further comprises an initialization module for initializing the weight values of the neural network, the initialization module being connected to the neural network.
7. The system for generating a password through a neural network according to claim 5 or 6, characterized in that the system further comprises a readout module for reading the interference vector in the storage module, the storage module being connected to the neural network through the readout module.
8. A method for verifying a password through a neural network, characterized in that the method comprises the following steps:
1) receiving handwriting to be checked;
2) analyzing the handwriting to be checked to obtain a vector to be checked;
3) inputting the vector to be checked to the neural network, the output neurons of the neural network outputting values to be checked according to the vector to be checked;
4) comparing these values to be checked with the comparison values and verifying the password according to the comparison result.
9. The method for verifying a password through a neural network according to claim 8, characterized in that the specific steps of step 4) are as follows:
4.1) judging whether the maximum of the values to be checked is less than a matching lower limit;
4.2) if the maximum value to be checked is less than the matching lower limit, proceeding to step 4.5); if the maximum value to be checked is not less than the matching lower limit, proceeding to step 4.3);
4.3) the neural network then judging whether the output neuron that outputs this maximum value to be checked is the same as the output neuron that outputs the maximum comparison value; if they are the same neuron, proceeding to step 4.4); if they are different neurons, proceeding to step 4.5);
4.4) the password verification succeeds;
4.5) the password verification fails.
10. A system for verifying a password through a neural network, characterized in that the system comprises a receiving module for receiving handwriting to be checked, an analysis module for analyzing the handwriting to be checked and outputting a vector to be checked according to the analysis result, and a neural network for comparing the values to be checked with the comparison values; the receiving module is connected to the neural network through the analysis module.
PCT/CN2008/001915 2008-09-09 2008-11-24 System and method for generating/ identifying cipher code via artificial neural network WO2010028517A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810150877.3A CN101350155A (en) 2008-09-09 2008-09-09 Method and system for generating and verifying cipher through genus nerval network
CN200810150877.3 2008-09-09

Publications (1)

Publication Number Publication Date
WO2010028517A1 true WO2010028517A1 (en) 2010-03-18

Family

ID=40268930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/001915 WO2010028517A1 (en) 2008-09-09 2008-11-24 System and method for generating/ identifying cipher code via artificial neural network

Country Status (2)

Country Link
CN (1) CN101350155A (en)
WO (1) WO2010028517A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102346819A (en) * 2010-07-30 2012-02-08 汉王科技股份有限公司 Electronic reader information display method and device
CN106407874A (en) * 2016-03-25 2017-02-15 东南大学 Handwriting recognition method based on handwriting coordinate sequence
CN109214193B (en) * 2017-07-05 2022-03-22 创新先进技术有限公司 Data encryption and machine learning model training method and device and electronic equipment
TWI672643B (en) * 2018-05-23 2019-09-21 倍加科技股份有限公司 Full index operation method for deep neural networks, computer devices, and computer readable recording media
CN110795726A (en) * 2019-10-23 2020-02-14 成都索贝数码科技股份有限公司 Password protection method and system based on artificial neural network
CN114692040B (en) * 2022-04-06 2022-11-29 山东特亿宝互联网科技有限公司 Auxiliary display platform of web browser

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06243296A (en) * 1993-02-16 1994-09-02 Matsushita Electric Ind Co Ltd Pen input password system
US6151593A (en) * 1997-07-14 2000-11-21 Postech Foundation Apparatus for authenticating an individual based on a typing pattern by using a neural network system
CN1371504A (en) * 1999-01-13 2002-09-25 电脑相关想象公司 Signature recognition system and method
CN1279451A (en) * 1999-07-06 2001-01-10 英业达集团(西安)电子技术有限公司 Triangular vector approximating method of digitalized character contour
CN1571453A (en) * 2003-07-18 2005-01-26 英华达(南京)科技有限公司 Method for implementing network trade safety certification
JP2008052333A (en) * 2006-08-22 2008-03-06 Takafumi Terasawa Input information analyzing method and input information analyzing device

Also Published As

Publication number Publication date
CN101350155A (en) 2009-01-21

Similar Documents

Publication Publication Date Title
CN109644183B (en) Remote use of locally stored biometric authentication data
JP4886371B2 (en) Biometric authentication method and system
US8020005B2 (en) Method and apparatus for multi-model hybrid comparison system
Centeno et al. Mobile based continuous authentication using deep features
Wu et al. Liveness is not enough: Enhancing fingerprint authentication with behavioral biometrics to defeat puppet attacks
WO2004038639A2 (en) Verification of identity and continued presence of computer users
EP3455766A1 (en) Authenticating a user
US9202035B1 (en) User authentication based on biometric handwriting aspects of a handwritten code
Wang et al. Improving reliability: User authentication on smartphones using keystroke biometrics
WO2010028517A1 (en) System and method for generating/ identifying cipher code via artificial neural network
Xu et al. Challenge-response authentication using in-air handwriting style verification
Sheng et al. Template-free biometric-key generation by means of fuzzy genetic clustering
Alshanketi et al. Multimodal mobile keystroke dynamics biometrics combining fixed and variable passwords
EP1847959B1 (en) Threshold determining device, method and program, and person identifying system
Bhardwaj et al. A novel behavioural biometric technique for robust user authentication
Fang et al. HandiText: Handwriting recognition based on dynamic characteristics with incremental LSTM
Zahid et al. Biometric authentication security system using human DNA
Saifan et al. A Survey of behavioral authentication using keystroke dynamics: Touch screens and mobile devices
Nechiporenko et al. Authentication of users of mobile devices by their motor reactions
Wang et al. Towards DTW-based unlock scheme using handwritten graphics on smartphones
Wu et al. CaiAuth: Context-aware implicit authentication when the screen is awake
Sheng et al. Reliable and secure encryption key generation from fingerprints
Ameh et al. Securing cardless automated teller machine transactions using bimodal authentication system
US11711216B1 (en) Systems and methods for privacy-secured biometric identification and verification
Shinde et al. Survey of Keystroke Dynamics as a Biometric for Static Authentication.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08876911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08876911

Country of ref document: EP

Kind code of ref document: A1