CN102213591A - Digital image analysis device

Digital image analysis device

Info

Publication number
CN102213591A
Authority
CN
China
Prior art keywords
lines
image
digitized video
digital image
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010101419423A
Other languages
Chinese (zh)
Other versions
CN102213591B (en)
Inventor
吴能伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anmo Electronics Corp
Original Assignee
Anmo Electronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anmo Electronics Corp filed Critical Anmo Electronics Corp
Priority to CN 201010141942
Publication of CN102213591A
Application granted
Publication of CN102213591B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a digital image analysis device, which comprises a device for receiving a plurality of lines related to an image edge of a digital image, and a device for defining a plurality of image positions where the plurality of lines intersect the image edge of the digital image. When a computer executes the functional modules in the digital image analysis device to recognize the digital image, only the pixel values on the path of each line need to be analyzed rather than all the pixels in the digital image, which greatly reduces the computing resources required by the computer's processor.

Description

Digital image analysis device
Technical field
The present invention relates to technology for the numerical analysis of images, and in particular to methods and digital image analysis devices for digital image recognition and measurement.
Background art
Digital image recognition techniques are widely used. For example, in applications such as product inspection, microscopic measurement, and image object recognition, image recognition techniques can be used to determine image features such as the edge and shape of a particular region in a digital image.
However, known digital image recognition methods require the processor of a computer to compare and analyze the pixel values of all the pixels in the digital image, which consumes considerable computing resources. As a result, the processing speed of known methods is not only limited by the computing power of the processor, but such methods are also difficult to execute on devices with lower computing power.
In addition, known image recognition methods are easily affected by the image content (for example, image lines, image shapes, and so on), which can compromise the accuracy of recognition.
In view of this, how to improve or remedy the shortcomings of the digital image recognition methods in the related art is a problem that remains to be solved.
Summary of the invention
This specification provides a digital image analysis device, which includes: a device for receiving settings of a plurality of lines related to an image edge of a digital image; and a device for defining a plurality of image positions where the plurality of lines intersect the image edge of the digital image.
One advantage of the aforementioned digital image analysis device is that, when performing digital image recognition, the processor of the computer only needs to analyze the pixel values on the path of each line rather than all the pixels in the digital image, which greatly reduces the computing resources required by the processor.
Description of drawings
Fig. 1 is a simplified functional block diagram of an embodiment of the digital image analysis device of the present invention.
Fig. 2 is a flowchart of an embodiment of the digital image recognition method of the present invention.
Fig. 3 and Fig. 4 are embodiments of the picture displayed on the display in Fig. 1.
Fig. 5 is a partial enlarged view of the digital image in Fig. 4.
Fig. 6 and Fig. 7 are embodiments of the picture displayed on the display in Fig. 1.
Fig. 8 is a flowchart of an embodiment of the digital image measurement method of the present invention.
Fig. 9 is an embodiment of the picture displayed on the display in Fig. 1.
[Description of main element symbols]
100 Digital image analysis device
102 Target object
110 Image capture device
120 Host
122 Processor
124 Storage module
130 Display
140 Input interface
150 Carrier device
300、600、700、900 Digital images
302、602 Openings
304、306、604、606 Sides
310 Option bar
410、420、430、440、450、460、470、702、 704、706、720、730、740、750、760、770、 780、790、910、920、930、940、950、960 Lines
412、422、432、442、452、462、472、552、 554、556、612、622、632、642、652、662、 672、712、714、716、722、732、742、752、 762、772、782、792、912、922、932、942、 952、962 Image positions
502、504 Line endpoints
510、512、514、516、518、520、522、524、 526、528、530、532、534、536、538、540 Pixels
710 Circular arc
Embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings. In these drawings, the same reference numerals denote the same or similar elements.
In the specification and the following claims, certain words are used to refer to particular elements. Those skilled in the art should understand that the same element may be referred to by different names. This specification and the following claims do not distinguish elements by differences in name, but by differences in function. The term "comprising" used throughout the specification and the following claims is an open-ended term and should therefore be interpreted as "including but not limited to".
Fig. 1 shows a simplified functional block diagram of a digital image analysis device 100 according to an embodiment of the present invention. As shown in Fig. 1, the digital image analysis device 100 includes an image capture device 110, a host 120, a display 130, and an input interface 140. The image capture device 110, the display 130, and the input interface 140 are coupled to the host 120. The term "couple" used in the specification and the following claims covers any direct and indirect connection means. Therefore, the image capture device 110, the display 130, and the input interface 140 may all be connected to the host 120 directly (including by electrical connection or by signal connections such as wireless transmission or optical transmission), or may be electrically or signal-connected to the host 120 indirectly through other devices or connection means.
The image capture device 110 is used to sense an image of a target object 102 placed on a carrier device 150. In practice, the image capture device 110 may be a digital camera, a digital video camera, a digital microscope, or any other device with image sensing capability. In this embodiment, the host 120 includes a processor 122 and a storage module 124. The storage module 124 may be realized with one or more storage media.
The digital image analysis device 100 may be used to perform digital image recognition or digital image measurement operations. The operation of the digital image analysis device 100 is further described below with reference to Fig. 2.
Fig. 2 is a flowchart 200 of an embodiment of the digital image recognition method of the present invention.
In step 210, the processor 122 of the host 120 presents on the display 130 the digital image of the target object 102 sensed by the image capture device 110. For example, the digital image 300 shown in Fig. 3 is a partial image of a circuit board sensed by the image capture device 110, where reference numeral 302 denotes an opening on the circuit board, and reference numerals 304 and 306 denote two sides of the circuit board.
In digital image recognition, the identification of image edges is an important part. Once the positions of the image edges are defined, the shape of the image can be further recognized. In order to reduce the computing resources required for digital image recognition and to improve the speed and accuracy of recognition, the digital image analysis device 100 interacts with the user during the recognition process and asks the user to provide auxiliary information that helps determine the image edges and/or shape. The recognition of the opening 302 of the circuit board is used as an example below.
In step 220, the processor 122 may display options for a plurality of recognition templates on the display 130, in the form of text or graphics, for the user to select. For example, as shown in Fig. 3, the processor 122 may group the options for a plurality of recognition templates (such as circle, ellipse, polygon, irregular shape, circular arc, straight line, broken line, and so on) into an option bar 310 in the form of text buttons and show it on the display 130. Each recognition template option has a corresponding image edge computation method, which may be stored in the storage module 124 in the form of a computer program. The content and number of the aforementioned recognition template options are only one embodiment and do not limit the practical implementations of the present invention. In practice, the options and content of the recognition templates may be added, reduced, or changed according to design needs.
In step 230, the user selects, from the recognition template options presented in the option bar 310, one or more recognition templates related to the shape of the digital image 300. In practice, the user may select one or more recognition templates through the input interface 140 by cursor clicking, touch, voice control, or other command input methods. In this embodiment, the user may select the recognition template "circle", which is closest to the shape of the opening 302 of the circuit board.
In step 240, the host 120 may receive the recognition template setting selected by the user through the input interface 140 and record it in the storage module 124 as a reference for the processor 122 when performing image recognition.
In step 250, the user roughly sets a plurality of simple lines on the digital image 300. The user may use the input interface 140 to click with a cursor or by touch to set a plurality of lines corresponding to the image edge of the opening 302 to be recognized in the digital image 300. The number of lines required is related to the recognition template selected by the user. For example, when the selected recognition template is "circle" or "circular arc", at least 3 lines are required; when the selected recognition template is "straight line", at least 2 lines are required; when the selected recognition template is "polygon" or "irregular shape", a larger number of lines is required.
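By way of illustration only, the relation between templates and the minimum number of lines could be organized as a small lookup table like the Python sketch below. The dictionary contents and names are assumptions for this sketch, not values taken from the patent's actual implementation.

```python
# Hypothetical registry: recognition template -> minimum number of user-drawn lines.
MIN_LINES_PER_TEMPLATE = {
    "circle": 3,         # three edge points determine a circle
    "circular arc": 3,
    "straight line": 2,  # two edge points determine a line
    "broken line": 4,    # e.g. two points per straight segment
    "polygon": 6,        # assumed illustrative values
    "irregular shape": 8,
}

def enough_lines(template, lines):
    """Return True when the user has drawn enough lines for the chosen template."""
    return len(lines) >= MIN_LINES_PER_TEMPLATE.get(template, 2)
```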
Each line only needs to intersect the image edge of the image object to be recognized; that is, the two endpoints of each line should lie on opposite sides of the image edge. The length of each line is not restricted, and its shape is not limited to a straight line; it may also be a curve or an irregular line. To reduce the complexity of subsequent computation, each line preferably intersects only a single image edge. In practice, the display 130 and the input interface 140 may be combined; for example, a touch screen may be used to realize the functions of both the display 130 and the input interface 140.
As shown in the embodiment of Fig. 4, the user may roughly draw three lines 410, 420, and 430 on the digital image 300 that intersect the image edge of the opening 302.
In step 260, the host 120 may receive, through the input interface 140, the setting data of the plurality of lines entered by the user and record it in the storage module 124 as a reference for the processor 122 when performing image edge recognition.
In step 270, the processor 122 reads the setting data of the plurality of lines recorded in the storage module 124 and defines a plurality of image positions where these lines intersect the image edge of the opening 302 in the digital image 300. For example, the processor 122 in this embodiment may calculate the image position 412 where the line 410 intersects the image edge of the opening 302, the image position 422 where the line 420 intersects the image edge of the opening 302, and the image position 432 where the line 430 intersects the image edge of the opening 302.
There are many methods by which the processor 122 may calculate the image position where each line intersects the image edge of the opening 302. In one embodiment, the processor 122 may calculate the pixel value differences between neighboring pixels on the path corresponding to each line, and set the position with the largest pixel value difference as the intersection of that line and the image edge of the opening 302. This way of computing the image position may be adopted, for example, in applications where the pixel values on the two sides of the image edge differ significantly.
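As an illustration only, a minimal Python sketch of this maximum-difference computation follows. It assumes a grayscale image held in a NumPy array and a pre-sampled list of (row, column) pixel coordinates along the line's path; the function name is hypothetical.

```python
import numpy as np

def edge_by_max_difference(image, path):
    """Return the midpoint between the two neighboring path pixels whose
    pixel-value difference is largest; that midpoint is taken as the
    intersection of the line and the image edge."""
    values = np.array([float(image[r, c]) for r, c in path])
    diffs = np.abs(np.diff(values))            # differences between neighboring pixels
    i = int(np.argmax(diffs))                  # position of the largest jump
    (r0, c0), (r1, c1) = path[i], path[i + 1]
    return ((r0 + r1) / 2.0, (c0 + c1) / 2.0)  # boundary between the two pixels
```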
Note that the pixel value referred to in the specification and the following claims may, in practice, be the luminance value or the chrominance value of a pixel.
In another embodiment, the processor 122 may calculate the pixel value differences between neighboring pixels on the path corresponding to each line sequentially along a predetermined direction, and set the first position where the pixel value difference meets or exceeds a predetermined threshold as the intersection of that line and the image edge of the opening 302. This way of computing the image position may be adopted, for example, in applications where the pixel values on the two sides of the image edge do not differ significantly, but the pixel values on the image edge clearly differ from those on both sides. For simplicity, the partial enlarged view of the digital image 300 shown in Fig. 5 is used as an example below.
In Fig. 5, pixels 510, 512, 514, 516, 518, 520, 524, 526, and 528 are some of the pixels on the image edge of the opening 302. Pixels 530, 532, 518, 516, 534, 536, 538, and 540 are pixels located on the path of the line 410. Reference numerals 502 and 504 denote the two endpoints of the line 410. The user may fine-tune the intersection of the line 410 and the image edge of the opening 302 by adjusting the position of one of the endpoints of the line 410 (for example, the endpoint 504). Compared with directly selecting a specific point on the image edge with a mouse (or by touch), this way of setting the intersection of a line and an image edge gives the user higher selection precision.
The processor 122 may, along direction D1, calculate the pixel value differences between adjacent pixels on the path of the line 410 in sequence (for example, first the pixel value difference between pixels 530 and 532, then between pixels 532 and 518, then between pixels 518 and 516, and so on), and compare them with the predetermined threshold. In this embodiment, the pixel value difference between pixels 532 and 518 is the first one that exceeds the predetermined threshold; therefore, the processor 122 may omit the calculation of subsequent pixel value differences and set the boundary position 552 between pixel 532 and pixel 518 as the image position 412 where the line 410 intersects the image edge of the opening 302.
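A minimal sketch of this sequential, early-exit scan is shown below, under the same assumptions as the previous example (grayscale NumPy image, pre-sampled path, hypothetical names); the threshold value is supplied by the caller.

```python
import numpy as np

def edge_by_threshold_scan(image, path, threshold):
    """Walk the path in its drawing direction; stop at the first neighboring
    pair whose pixel-value difference meets or exceeds the threshold and
    return the boundary position between those two pixels."""
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        if abs(float(image[r1, c1]) - float(image[r0, c0])) >= threshold:
            return ((r0 + r1) / 2.0, (c0 + c1) / 2.0)  # later pairs are skipped
    return None  # no edge crossing was found on this line
```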
In another embodiment, the processor 122 may first calculate the median or the mean of a plurality of pixel values on the path of the line 410 and set this median or mean as a threshold. In other words, the threshold setting may be adjusted dynamically. Then, the processor 122 may, along direction D1, compare the pixel values on the path of the line 410 with this threshold in sequence, and set the first position where a pixel value reaches or crosses the threshold as the intersection of the line 410 and the image edge of the opening 302. Assuming pixel 518 is the first pixel on the path of the line 410, in direction D1, whose pixel value reaches the threshold, the processor 122 may omit the subsequent pixel values and use the center 554 of pixel 518 as the image position 412 where the line 410 intersects the image edge of the opening 302.
Assuming pixel 518 is the first pixel on the path of the line 410, in direction D1, whose pixel value crosses the threshold (that is, whose pixel value is greater than or less than the threshold for the first time), the processor 122 may omit the subsequent pixel values and set the boundary position 552 between pixel 518 and pixel 532 as the image position 412 where the line 410 intersects the image edge of the opening 302. Alternatively, the processor 122 may perform interpolation on the pixel values of pixel 518 and pixel 532, calculate the position 556 corresponding to the threshold, and use position 556 as the image position 412 where the line 410 intersects the image edge of the opening 302.
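The sketch below illustrates, under the same assumptions as the earlier examples (hypothetical names, grayscale NumPy image, pre-sampled path), one way a dynamically derived threshold and linear interpolation could together give a sub-pixel intersection estimate.

```python
import numpy as np

def edge_by_dynamic_threshold(image, path):
    """Use the median of the pixel values on the path as the threshold, then
    return the sub-pixel position where the values first cross it, found by
    linear interpolation between the two straddling pixels."""
    values = np.array([float(image[r, c]) for r, c in path])
    threshold = float(np.median(values))       # dynamically derived threshold
    for i in range(1, len(values)):
        v0, v1 = values[i - 1], values[i]
        if (v0 - threshold) * (v1 - threshold) <= 0 and v0 != v1:
            t = (threshold - v0) / (v1 - v0)   # fraction (0..1) between the two pixels
            (r0, c0), (r1, c1) = path[i - 1], path[i]
            return (r0 + t * (r1 - r0), c0 + t * (c1 - c0))
    return None
```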
From the above description, it can be seen that the precision with which the processor 122 defines the intersection of each line and the image edge of an image object can reach a level finer than one pixel.
Next, in step 280, the processor 122 may connect the plurality of image positions obtained in step 270 according to the recognition template selected in step 230, so as to recognize one or more local edges or the overall edge of the digital image 300. For example, in this embodiment, the recognition template selected by the user in step 230 is "circle", so the processor 122 may determine a circle from the three image positions 412, 422, and 432 defined in step 270 and use it as the recognition result for the image edge of the opening 302.
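Determining a circle from three intersection positions amounts to the standard circumcircle construction. The following self-contained sketch (illustrative only; the function name is an assumption) shows one way to compute the center and radius from the three image positions.

```python
def circle_from_three_points(p1, p2, p3):
    """Return (center_x, center_y, radius) of the circle through three
    non-collinear points, obtained from the perpendicular-bisector equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle")
    ux = ((x1**2 + y1**2) * (y2 - y3) +
          (x2**2 + y2**2) * (y3 - y1) +
          (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) +
          (x2**2 + y2**2) * (x1 - x3) +
          (x3**2 + y3**2) * (x2 - x1)) / d
    radius = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return ux, uy, radius
```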
After recognizing the boundary of an image object (for example, the circumference of the opening 302), the processor 122 may further calculate, as needed, the position of a geometric feature of the image object (for example, the center 408 of the opening 302).
The recognition of the circuit board side 304 is used as an example below to illustrate the digital image recognition method of the present invention.
Since the circuit board side 304 is a straight side, the user may, in step 230, select the recognition template "straight line", which is closest to the shape of the circuit board side 304, and in step 250 set two lines 440 and 450 that intersect the image edge of the circuit board side 304.
In step 270, the processor 122 may use one of the aforementioned methods to define the image position 442 where the line 440 intersects the image edge of the circuit board side 304, and the image position 452 where the line 450 intersects the image edge of the circuit board side 304.
In step 280, since the recognition template recorded in the storage module 124 is "straight line", the processor 122 may determine a straight line from the image positions 442 and 452 and use it as the recognition result for the image edge of the circuit board side 304.
In another embodiment, the user may also, in step 230, select the recognition template "broken line", which is closest to the combined shape of the circuit board side 304 and the side 306, and in step 250 set the two lines 440 and 450 that intersect the image edge of the circuit board side 304 as well as two lines 460 and 470 that intersect the image edge of the circuit board side 306. The processor 122 may then, in step 270, define the image position 442 where the line 440 intersects the image edge of the circuit board side 304, the image position 452 where the line 450 intersects the image edge of the circuit board side 304, the image position 462 where the line 460 intersects the image edge of the circuit board side 306, and the image position 472 where the line 470 intersects the image edge of the circuit board side 306.
In this way, in step 280, the processor 122 may, according to the indication of the recognition template "broken line", connect the image positions 442, 452, 462, and 472 into an L-shaped broken line as the image recognition result for both circuit board sides 304 and 306.
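One way to realize such an L-shaped broken line is to fit one straight line through each pair of image positions and take their intersection as the corner. The sketch below is illustrative only (hypothetical names), not the patent's prescribed computation.

```python
def line_through(p, q):
    """Return (a, b, c) with a*x + b*y + c = 0 for the line through p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)

def corner_of_broken_line(p1, p2, p3, p4):
    """Fit one line through the first pair of image positions and another
    through the second pair, and return the corner where they intersect,
    i.e. the vertex of the L-shaped broken line."""
    a1, b1, c1 = line_through(p1, p2)
    a2, b2, c2 = line_through(p3, p4)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("edges are parallel; no corner")
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```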
After obtaining the position of the center 408 of the opening 302 and the positions of the circuit board sides 304 and 306, the processor 122 may further calculate the distance from the center 408 to the circuit board side 304 or 306.
From the above description, it can be seen that the digital image recognition method of the present invention only needs to analyze the pixel values on the path of each line to calculate the intersection of that line and a specific image edge, thereby achieving the purpose of recognizing the image edge, without having to analyze all the pixels in the digital image 300; this greatly reduces the computing resources required by the processor 122. In addition, since the processor 122 may use the recognition template entered by the user as a reference when performing image recognition, the accuracy of recognizing the edge or contour of the digital image can be significantly improved.
In application, the digital image analysis device 100 may be used for product inspection and quality control. For example, the carrier device 150 may be realized with a conveyor belt on a production line, which periodically delivers the target object 102 to be inspected (such as the aforementioned circuit board) to the detection area at which the image capture device 110 is aimed. There are often many similar image features among the target objects to be inspected, so the user only needs to enter suitable auxiliary information (for example, the recognition templates and the lines related to the image object edges) into the host 120 the first time the image capture device 110 performs image capture and recognition; the image recognition processing of subsequent target objects 102 can then be completed automatically according to the same auxiliary information, thereby realizing the screening of good and defective products. This is further described below with the example of Fig. 6.
In Fig. 6, the digital image 600 is a partial image of another circuit board sensed by the image capture device 110, where reference numeral 602 denotes an opening on the circuit board, and reference numerals 604 and 606 denote two sides of the circuit board. Generally speaking, the differences among products of the same manufacturing process fall within a limited range. Therefore, the digital image analysis device 100 may keep the auxiliary lines 410, 420, 430, 440, 450, 460, and 470 entered by the user during the recognition of the digital image 300 of the previous circuit board, as well as the settings of the recognition templates "circle" and "broken line" (or "straight line"). The processor 122 may use one of the aforementioned digital image recognition methods to automatically define the image position 612 where the line 410 intersects the image edge of the opening 602, the image position 622 where the line 420 intersects the image edge of the opening 602, and the image position 632 where the line 430 intersects the image edge of the opening 602. The processor 122 may then use the three image positions 612, 622, and 632 to determine the circumference of the opening 602 according to the setting of the "circle" recognition template, and further calculate the position of the center 608 of the opening 602.
Similarly, the processor 122 may use one of the aforementioned digital image recognition methods to define the image position 642 where the line 440 intersects the image edge of the circuit board side 604, the image position 652 where the line 450 intersects the image edge of the circuit board side 604, the image position 662 where the line 460 intersects the image edge of the circuit board side 606, and the image position 672 where the line 470 intersects the image edge of the circuit board side 606. The processor 122 may then, in step 280, connect the image positions 642, 652, 662, and 672 into an L-shaped broken line according to the setting of the recognition template "broken line", as the image recognition result for the circuit board sides 604 and 606.
Next, the processor 122 may further calculate the distance from the center 608 of the opening 602 to the side 604 and/or the side 606 of the circuit board, determine whether the position of the opening 602 meets the preset specification, and thereby decide whether the circuit board is a qualified circuit board.
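A hypothetical pass/fail check of this kind could look like the sketch below: the pixel distance from the recognized center to the recognized straight side is computed with the point-to-line formula, converted to millimetres, and compared with a nominal value and tolerance. All parameter names and units are assumptions for illustration.

```python
def opening_position_ok(center, side_p, side_q, nominal_mm, tol_mm, mm_per_pixel):
    """Return True when the opening's distance to the board side is within
    tolerance of the nominal value (all names/units are illustrative)."""
    (x0, y0), (x1, y1), (x2, y2) = center, side_p, side_q
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
    distance_mm = (num / den) * mm_per_pixel   # point-to-line distance, scaled to mm
    return abs(distance_mm - nominal_mm) <= tol_mm
```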
From the above description, it can be seen that when the digital image recognition method of the present invention is applied to product inspection, only a small amount of user involvement is required (for example, entering a recognition template and a few simple lines), and subsequent product inspection operations can be completed in an almost fully automatic manner. Moreover, since the computing resources required by the aforementioned image recognition and product inspection process are much lower than those of conventional approaches, the efficiency of the product inspection system can be effectively improved and the required hardware cost can be reduced.
In addition to product inspection, the digital image analysis device 100 may also apply the aforementioned digital image recognition method to object positioning and alignment. For example, after the processor 122 of the host 120 recognizes the relative position and distance between the target object 102 and a specific location on the carrier device 150 (usually a design such as a position line, guide edge, orientation angle, or anchor point), it may control the carrier device 150 to move and/or rotate the target object 102 to a predetermined position, so as to achieve the purpose of object positioning or alignment.
For a digital image with a concave shape, the aforementioned digital image recognition method can still successfully recognize the local edges or the overall boundary of the digital image. For example, for the digital image 700 shown in Fig. 7, the user may, in step 230, select the recognition template "polygon", whose shape is close to that of the digital image 700, and in step 250 set a plurality of lines 720, 730, 740, 750, 760, 770, 780, and 790 that intersect the image edge of the digital image 700. The processor 122 may then perform step 270 according to these settings to calculate the image positions 722, 732, 742, 752, 762, 772, 782, and 792 where these lines intersect the image edge of the digital image 700.
In addition, the user may perform the aforementioned digital image recognition method again, but this time select the recognition template "circular arc" and set lines 702, 704, and 706 that intersect the image edge of the circular arc 710. In this way, the processor 122 can recognize the image positions 712, 714, and 716 where the lines 702, 704, and 706 intersect the image edge of the circular arc 710.
After obtaining the aforementioned image positions 712, 714, 716, 722, 732, 742, 752, 762, 772, 782, and 792, the processor 122 may further perform step 280 according to the settings of the aforementioned recognition templates "polygon" and "circular arc", determine a plurality of image edge positions of the digital image 700 from these image positions, and thereby recognize the complete shape of the digital image 700.
For a digital image with an irregular shape, the user may select the recognition template "irregular shape" and set more lines that intersect the image edge of this irregularly shaped digital image. The more lines the user sets, the higher the accuracy with which the processor 122 can recognize the contour of the digital image.
Note that the execution order of the steps in the aforementioned flowchart 200 is only an embodiment and does not limit the practical implementations of the present invention. For example, the order of step 230 and step 250 may be exchanged. In addition, the digital image analysis device 100 may also be designed specifically to recognize digital images of target objects of the same shape, in which case steps 220, 230, and 240 may be omitted.
The digital image analysis device 100 may further apply the aforementioned digital image recognition method to digital image measurement. For example, Fig. 8 shows a flowchart 800 of an embodiment of the digital image measurement method of the present invention. Steps 210 to 280 in the flowchart 800 are the same as the steps with the same numerals in the aforementioned flowchart 200 and, for brevity, are not repeated here.
As shown in Fig. 8, after the processor 122 defines the plurality of image positions on the digital image in step 270, it further performs step 870: according to the scale of the digital image, the spacing between two specific image positions on the digital image, or the total path length through a plurality of image positions, is converted from a pixel distance into a physical length value.
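As a minimal sketch of this conversion (assuming the scale is expressed in millimetres per pixel; the function names are illustrative):

```python
import math

def pixel_distance_to_length(p, q, mm_per_pixel):
    """Convert the pixel spacing between two image positions into a physical
    length using the image's scale."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy) * mm_per_pixel

def path_length(points, mm_per_pixel):
    """Total physical length of the path that visits the image positions in order."""
    return sum(pixel_distance_to_length(a, b, mm_per_pixel)
               for a, b in zip(points, points[1:]))
```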
In addition, after recognizing the local edge or overall boundary of the digital image in step 280, the processor 122 may also perform step 880: according to the image edge recognition result of the digital image, measure a specific image feature value of the digital image. Fig. 9 is used as an example below for further explanation.
The digital image 900 in Fig. 9 is a cross-sectional image of a pipe. If the target to be measured is an image feature value related to the inner edge of the digital image 900, such as the inner diameter or the inner cross-sectional area, the user may, in step 230, select the recognition template "circle", whose shape is close to that of the inner edge of the digital image 900, and in step 250 set, along a certain direction, a plurality of lines that intersect the inner edge of the digital image 900. For example, in the embodiment of Fig. 9, the lines 910, 920, and 930 are set from the inside outward.
Afterwards, in step 270, the processor 122 only needs to perform pixel value analysis on the path of each line, along the direction in which that line was drawn, according to the aforementioned computation methods, to define the image position 912 where the line 910 intersects the inner edge of the digital image 900, the image position 922 where the line 920 intersects the inner edge, and the image position 932 where the line 930 intersects the inner edge. That is, the aforementioned pixel value analysis is performed along direction D1 on the path of the line 910, along direction D2 on the path of the line 920, and along direction D3 on the path of the line 930.
In step 280, the processor 122 may use the image positions 912, 922, and 932 to determine the circle corresponding to the recognition template "circle" as the recognition result for the inner edge of the digital image 900.
After recognizing the circumference of the inner edge of the digital image 900, the processor 122 may, in step 880, further calculate, according to the scale of the digital image 900, image feature values related to the inner edge of the digital image 900, such as the inner diameter of the pipe cross section, and the radius, circumference, cross-sectional area, average color, and so on of the pipe's inner ring.
Similarly, if the target to be measured is an image feature value related to the outer edge of the digital image 900, such as the outer diameter or the outer ring cross-sectional area of the pipe, the user only needs to set, in step 250 and along a certain direction, a plurality of lines that intersect the outer edge of the digital image 900. For example, in the embodiment of Fig. 9, the lines 950, 960, and 970 are set from the outside inward. In step 270, the processor 122 may perform pixel value analysis along direction D5 on the path of the line 950, along direction D6 on the path of the line 960, and along direction D7 on the path of the line 970, and thereby define the image position 952 where the line 950 intersects the outer edge, the image position 962 where the line 960 intersects the outer edge, and the image position 972 where the line 970 intersects the outer edge.
Then, in step 280, the processor 122 may use the image positions 952, 962, and 972 to recognize the circumference of the outer edge of the digital image 900.
Therefore, in step 880, the processor 122 may further calculate, according to the scale of the digital image 900, image feature values related to the outer edge of the digital image 900, such as the outer diameter of the pipe cross section, and the radius, circumference, cross-sectional area, average color, and so on of the pipe's outer ring.
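Once the inner and outer circles have been recognized, several of the feature values named above follow from elementary geometry. The sketch below is illustrative only (hypothetical names; radii in pixels, scale in millimetres per pixel).

```python
import math

def pipe_cross_section_metrics(inner_radius_px, outer_radius_px, mm_per_pixel):
    """Derive a few feature values of the pipe cross section from the two
    recognized circles, converted with the image's scale."""
    r_in = inner_radius_px * mm_per_pixel
    r_out = outer_radius_px * mm_per_pixel
    return {
        "inner_diameter": 2.0 * r_in,
        "outer_diameter": 2.0 * r_out,
        "inner_circumference": 2.0 * math.pi * r_in,
        "outer_circumference": 2.0 * math.pi * r_out,
        "ring_cross_section_area": math.pi * (r_out ** 2 - r_in ** 2),  # annulus area
    }
```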
Using the aforementioned digital image measurement method, the user only needs to provide a small amount of auxiliary information, and the digital image analysis device 100 can quickly and correctly measure specific geometric feature values of an image object, such as local edge length, arc length, radian, angle, central angle, inscribed angle, tangent angle, and the circumference, area, radius, diameter, inner diameter, and outer diameter of the image object, or image feature values such as the average color and average luminance of the image object.
In practice, when recognizing the contour of a specific image region in a digital image, the processor 122 may also be combined with a suitable image recognition mechanism to further recognize and calculate the position and/or number of specific image features (such as particles, crystal structures, and so on) contained in that image region, allowing the user to carry out more applications.
As mentioned above, the precision with which the processor 122 defines the intersection of each line and the image edge of an image object can reach a level finer than one pixel. Therefore, when the aforementioned digital image recognition method is applied to digital image measurement, the accuracy of the image measurement can be effectively improved.
Please note that each element in the device claims of the appended claims corresponds to a step of the aforementioned computer program flow. Therefore, the device claims in the appended claims should be understood as a functional module framework that mainly realizes the aforementioned solution through the computer program described in the specification.
The above are only preferred embodiments of the present invention, and all equivalent changes and modifications made according to the claims of the present invention shall fall within the scope of the present invention.

Claims (15)

1. A digital image analysis device, comprising:
a device for receiving settings of a plurality of lines related to an image edge of a digital image; and
a device for defining a plurality of image positions where the plurality of lines intersect the image edge of the digital image.
2. The digital image analysis device of claim 1, further comprising:
a device for connecting the plurality of image positions to recognize at least one local edge of the digital image.
3. The digital image analysis device of claim 2, further comprising:
a device for displaying options of a plurality of recognition templates; and
a device for receiving a setting of one or more recognition templates selected by a user.
4. The digital image analysis device of claim 3, wherein the device for connecting the plurality of image positions comprises:
a device for connecting the plurality of image positions according to the setting of the one or more recognition templates.
5. The digital image analysis device of claim 3, wherein the device for connecting the plurality of image positions comprises:
a device for determining a plurality of edge lines according to the setting of the one or more recognition templates and the plurality of image positions; and
a device for determining one or more local edges or an overall edge of the digital image according to the plurality of edge lines.
6. The digital image analysis device of claim 3, wherein the setting of the one or more recognition templates is related to the shape of the digital image.
7. The digital image analysis device of claim 3, wherein the number of the plurality of lines corresponds to the setting of the one or more recognition templates.
8. The digital image analysis device of claim 2, further comprising:
a device for determining the position of a geometric feature of the digital image according to the at least one local edge of the digital image.
9. The digital image analysis device of claim 2, further comprising:
a device for calculating a geometric feature value or an image feature value of the digital image according to an image edge recognition result.
10. The digital image analysis device of claim 1, further comprising:
a device for calculating the spacing or path length of the plurality of image positions according to a scale of the digital image.
11. The digital image analysis device of claim 1, wherein the device for defining the plurality of image positions comprises:
a device for analyzing the pixel values of the pixels on the path corresponding to each line, to determine the intersection of that line and the image edge of the digital image.
12. The digital image analysis device of claim 11, wherein the device for analyzing the pixel values of the pixels on the path corresponding to each line comprises:
a device for calculating the pixel value differences between neighboring pixels on the path corresponding to each line; and
a device for setting the position with the largest pixel value difference as the intersection of that line and the image edge of the digital image.
13. The digital image analysis device of claim 11, wherein the device for analyzing the pixel values of the pixels on the path corresponding to each line comprises:
a device for calculating the pixel value differences between neighboring pixels on the path corresponding to each line sequentially along a predetermined direction; and
a device for setting the first position where the pixel value difference meets or exceeds a predetermined threshold as the intersection of that line and the image edge of the digital image.
14. The digital image analysis device of claim 11, wherein the device for analyzing the pixel values of the pixels on the path corresponding to each line comprises:
a device for comparing the pixel values on the path corresponding to each line with a threshold sequentially along a predetermined direction; and
a device for setting the first position where a pixel value reaches or crosses the threshold as the intersection of that line and the image edge of the digital image.
15. The digital image analysis device of claim 14, wherein the device for analyzing the pixel values of the pixels on the path corresponding to each line comprises:
a device for setting the threshold corresponding to each line according to the pixel values on the path corresponding to that line.
CN 201010141942 2010-04-01 2010-04-01 Digital image analysis device Active CN102213591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010141942 CN102213591B (en) 2010-04-01 2010-04-01 Digital image analysis device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010141942 CN102213591B (en) 2010-04-01 2010-04-01 Digital image analysis device

Publications (2)

Publication Number Publication Date
CN102213591A true CN102213591A (en) 2011-10-12
CN102213591B CN102213591B (en) 2013-10-23

Family

ID=44744994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010141942 Active CN102213591B (en) 2010-04-01 2010-04-01 Digital image analysis device

Country Status (1)

Country Link
CN (1) CN102213591B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI627561B (en) * 2017-08-03 2018-06-21 Window frame measuring method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5134661A (en) * 1991-03-04 1992-07-28 Reinsch Roger A Method of capture and analysis of digitized image data
TW416006B (en) * 1999-01-04 2000-12-21 Optron Corp Photoelectric tester and method for a printed circuit board
CN1080911C (en) * 1995-05-18 2002-03-13 欧姆龙公司 Object observing method and device
CN1550773A (en) * 2003-05-07 2004-12-01 Machine vision inspection system and method having improved operations for increased precision inspection throughput
WO2008111724A1 (en) * 2007-03-14 2008-09-18 Daewoo Engineering & Construction Co., Ltd. Automatic test system and test method for slump flow of concrete using computing device
CN101334263A (en) * 2008-07-22 2008-12-31 东南大学 Circular target circular center positioning method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5134661A (en) * 1991-03-04 1992-07-28 Reinsch Roger A Method of capture and analysis of digitized image data
CN1080911C (en) * 1995-05-18 2002-03-13 欧姆龙公司 Object observing method and device
TW416006B (en) * 1999-01-04 2000-12-21 Optron Corp Photoelectric tester and method for a printed circuit board
CN1550773A (en) * 2003-05-07 2004-12-01 Machine vision inspection system and method having improved operations for increased precision inspection throughput
WO2008111724A1 (en) * 2007-03-14 2008-09-18 Daewoo Engineering & Construction Co., Ltd. Automatic test system and test method for slump flow of concrete using computing device
CN101334263A (en) * 2008-07-22 2008-12-31 东南大学 Circular target circular center positioning method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI627561B (en) * 2017-08-03 2018-06-21 Window frame measuring method
US10197384B1 (en) 2017-08-03 2019-02-05 Ching Feng Home Fashions Co., Ltd. Window frame measuring method

Also Published As

Publication number Publication date
CN102213591B (en) 2013-10-23

Similar Documents

Publication Publication Date Title
US10083365B2 (en) Optical reading of external segmented display
CN104169972A (en) Area designating method and area designating device
CN101464773A (en) Method and computer system for displaying program execution window along with user position
JP5786622B2 (en) Image quality inspection method, image quality inspection apparatus, and image quality inspection program
CN107219971A (en) A kind of methods of exhibiting and device for showing object
TWI410880B (en) Computer program product related to digital image analyzing
CN102213591B (en) Digital image analysis device
JPWO2012169190A1 (en) Character input device and display change method
US8355599B2 (en) Methods and devices for detecting changes in background of images using multiple binary images thereof and hough transformation
CN102981683B (en) A kind of camera optical alignment method for quickly correcting and optical axis bearing calibration thereof
CN103809817A (en) Optical touch system and object position judgment method thereof
WO2015096824A1 (en) Analysis device and analysis method
CN103076925B (en) Optical touch control system, optical sensing module and How It Works thereof
US20130074005A1 (en) System, method and graphical user interface for displaying and controlling vision system operating parameters
CN108984097B (en) Touch operation method and device, storage medium and electronic equipment
CN103309496A (en) Input control device and input control method
CN102944301A (en) Digital peak detection method and system for ultrasonic signals based on variable-pitch sectioning method
CN113160155B (en) Auxiliary spacer highest point determination method and device, electronic equipment and storage medium
CN112637587B (en) Dead pixel detection method and device
CN115713491A (en) Liquid crystal display panel defect detection method and device and electronic equipment
US20130166255A1 (en) Computing device and method for extracting feature elements of product from design drawing
US20140292667A1 (en) Touch panel and multi-points detecting method
JP5855961B2 (en) Image processing apparatus and image processing method
US20160162115A1 (en) Touch point sensing method and optical touch system
CN104677276A (en) Eyelet distinguishing and detecting method and system for raw ceramic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant