CN103824282B - Touch and motion detection using a surface map, object shadow and a camera - Google Patents

Touch and motion detection using a surface map, object shadow and a camera

Info

Publication number
CN103824282B
CN103824282B CN201410009366.5A CN201410009366A
Authority
CN
China
Prior art keywords
camera
reference surface
projector
image
light pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410009366.5A
Other languages
Chinese (zh)
Other versions
CN103824282A (en)
Inventor
张玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hong Kong Applied Science and Technology Research Institute ASTRI
Original Assignee
Hong Kong Applied Science and Technology Research Institute ASTRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/102,506 external-priority patent/US9429417B2/en
Application filed by Hong Kong Applied Science and Technology Research Institute ASTRI filed Critical Hong Kong Applied Science and Technology Research Institute ASTRI
Publication of CN103824282A publication Critical patent/CN103824282A/en
Application granted granted Critical
Publication of CN103824282B publication Critical patent/CN103824282B/en
Legal status: Active


Abstract

The present invention provides an optical method and a system that use one projector and one camera to obtain position information and/or motion information of an object relative to a reference surface, including detecting whether the object touches the reference surface. A surface map is used to map a position on the reference surface to a corresponding position in a camera-captured image (which contains a view of the reference surface). In particular, the surface map is used to estimate the camera-observable shadow length, i.e. the length of the part of the object's shadow visible to the camera, which is then used to compute the object's height above the reference surface (the Z coordinate). Whether the object touches the reference surface can also be determined, and after the X and Y coordinates are estimated, the three-dimensional coordinates of the object are obtained. By computing the three-dimensional coordinates over a time series, motion information such as velocity and acceleration is obtained.

Description

Touch and motion detection using a surface map, object shadow and a camera
【Cross-Reference to Related Application】
This application is a continuation-in-part (CIP) of U.S. Patent Application No. 13/474,567, filed May 17, 2012, the disclosure of which is hereby incorporated by reference in its entirety.
【Technical Field】
The present invention relates to optically determining position or motion information of an object relative to a reference surface, including whether the object touches the reference surface. In particular, the present invention relates to a method and a system that use a surface map to map physical locations on the reference surface to corresponding locations in a camera-captured image, combined with a shadow length of the object measured from a camera-captured image, so as to optically determine the position or motion information of the object.
【Background Technology】
Computer-based automatic detection of whether an object touches a reference surface, and/or determination of the object's position information (such as spatial coordinates) or motion information (such as velocity and acceleration), has many applications in human-computer interaction, entertainment and consumer electronics. One such application is an interactive projection system that provides a display for interacting with a user: the system must determine whether the user's fingertip touches a predetermined area of the screen, so that the system can receive user input. Another such application relates to computer entertainment: a game uses the speed at which the user's finger taps the screen to infer whether the user is providing an input to the game decisively or hesitantly.
To determine the position or motion information of an object, including whether the object touches a reference surface, the object's height above the reference surface must be obtained by optical techniques. Chinese Patent Application Publication No. 1,477,478 discloses a device that detects the height of a finger above a touch surface and determines whether the finger touches the touch surface. The disclosed device uses two image sensors (e.g. two cameras) for the detection. Using two cameras rather than one has practical drawbacks in manufacturing, such as higher cost, and the product also requires more space to accommodate the two cameras.
Using only one camera is therefore desirable. In this case, the object's height above the reference surface might be computed from the size of the object's shadow in an image captured by the camera. However, Fig. 1 provides two examples showing that, in some situations, objects at different heights above the reference surface can produce shadows of substantially the same size. Example 1 is the projector-camera arrangement of the system in U.S. Patent Application Publication No. 2007/201,863, used to determine whether a finger touches a surface. In Example 1, projector 110 projects light vertically onto reference surface 117, while camera 115, offset by a distance in the vertical direction, captures shadow 130 in order to compute the object's height above reference surface 117. Shadow 130 may be produced by a first object 120a or by a second object 120b, the two objects being at different heights above reference surface 117. Example 2 is the projector-camera arrangement of U.S. Patent No. 6,177,682, used to determine the height of an object placed on a reference surface. In Example 2, camera 165 above captures an image of shadow 180, which is produced by projector 160 projecting light obliquely onto reference surface 167. Likewise, shadow 180 may be produced by any of objects 170a, 170b, 170c located at different heights above reference surface 167. In particular, object 170c touches reference surface 167 while objects 170a, 170b do not. Hence, no unique solution for the object's height above the reference surface can be obtained from shadow size alone.
There is therefore a need for a method that can determine the height of an object above a reference surface using only one camera, and that can also obtain the object's position or motion information.
【Summary of the Invention】
The present invention provides an optical method for obtaining position or motion information of an object relative to a reference surface, including detecting whether the object touches the reference surface. The reference surface may be flat or non-flat. The object has a predetermined reference edge point. One projector and one camera are used in the optical method. The projector and the camera are arranged such that, when an object not touching the reference surface is illuminated by the projector, a part of the object's shadow formed on the reference surface along a topographical surface line is observable by the camera, and the length of this partial shadow can be used to uniquely determine the object's height above the reference surface. The topographical surface line is formed by mapping the light path connecting the camera and the reference edge point onto the reference surface. In particular, a surface map is used to map a position on the reference surface to a corresponding position in a camera-captured image, the camera-captured image containing a view of the reference surface. The method includes: during initialization, obtaining the surface map and a surface profile of the reference surface, the surface profile giving the height distribution of the reference surface; then detecting whether an object is present, until an object's presence is identified; and, at a time instant after the object's presence is identified, starting a position-information acquisition process that produces one or more items of position information, including whether the object touches the reference surface, the object's height above the reference surface, and the object's three-dimensional (3D) coordinates. Usually this process is repeated at multiple time instants, producing a time series of the object's 3D coordinates as one item of motion information. This time series of 3D coordinates can be used to generate other motion information, including the object's velocity, acceleration and travel direction, and the time histories of velocity, acceleration and travel direction.
In determining the surface map and the surface profile, the projector projects a light pattern onto the reference surface. The camera then captures an image containing a view of the light pattern on the reference surface. From the captured image, the surface map is computed; the surface map maps any point in a camera-captured image to a corresponding physical location on the reference surface. The surface profile is then determined from the light pattern and the corresponding captured image.
In detecting whether an object is present, a test-for-presence image captured by the camera is used. Capturing of test-for-presence images is repeated until an object's presence is identified.
The position-information acquisition process is as follows. First, a region of interest (ROI) on the reference surface is determined such that the ROI is a region that surrounds and includes the reference edge point. After the ROI is determined, a spot light is directed onto a region at least covering the ROI, so that the part of the object around the reference edge point is illuminated and casts a shadow onto the reference surface. The spot light may be generated by the projector or by a separate light source. The camera then captures an ROI-highlighted image. From the ROI-highlighted image, the camera-observable shadow length and the shadow-to-projector distance are estimated by using the surface map. If the camera-observable shadow length is found to be substantially close to zero, it is determined that the object touches the reference surface, thereby providing first position information.
The object's height above the reference surface, as second position information, can be estimated from a set of data including the surface profile of the reference surface, the camera-observable shadow length, the shadow-to-projector distance, the distance between the projector and the camera measured along a reference horizontal direction, the distance from the projector to the reference surface measured along a reference vertical direction, and the distance from the camera to the reference surface measured along the reference vertical direction. If the reference surface is flat, the object's height above the reference surface can be computed by equation (4) in the detailed description.
The object's height above the reference surface is the Z coordinate of the object's 3D coordinates. The Y coordinate can be obtained from the distance between the projector and the reference edge point measured along the reference horizontal direction; this distance can be computed by equation (3) or equation (5) in the detailed description. The X coordinate can be obtained directly from a camera-captured image and the surface map; the camera-captured image is preferably the ROI-highlighted image. The resulting 3D coordinates thereby provide third position information.
The projector may project using infrared light, or a separate infrared light source may be used, in which case the camera is also configured to be at least sensitive to infrared light when capturing images.
Optionally, the projector and the camera are arranged such that, when an object is present: along the reference horizontal direction, the projector is located between the camera and the object; and along the reference vertical direction, the camera is located between the projector and the reference surface.
In another option, a mirror is used to reflect any image projected by the projector onto the reference surface, and to reflect any scene appearing on the reference surface for capture by the camera.
Other aspects of the present invention are disclosed by the following embodiments.
【Brief Description of the Drawings】
Fig. 1 provides two examples illustrating that objects at different heights above a reference surface can produce shadows of the same size, so that the object's height above the reference surface has no unique solution if only the shadow-size information is used.
Fig. 2a depicts a model of an object casting a shadow on a reference surface; the model is used in developing the present invention and shows that a unique solution for the object's height exists if the camera-observable shadow length is used to compute it.
Fig. 2b depicts a similar model, but with a box placed on the reference surface below the reference edge point to simulate the effect of elevating the reference surface under the object, showing that a unique solution for the object's height is obtainable even if the reference surface is non-flat.
Fig. 2c depicts a model similar to that of Fig. 2a, but with a mirror used to reflect images projected by the projector onto the reference surface, and to reflect the scene appearing on the reference surface for capture by the camera.
Fig. 3 is a flowchart of the steps of determining position and motion information in an exemplary embodiment of the present invention.
Fig. 4 is a flowchart of the steps of determining the surface map and the surface profile in one embodiment of the present invention.
Fig. 5 is a flowchart of the steps of detecting an object's presence in one embodiment of the present invention.
Fig. 6 is a flowchart of the steps of the position-information acquisition process in one embodiment of the present invention.
Fig. 7 shows an example of a light pattern.
Fig. 8 shows an example of an ROI-highlighted image, showing the camera-observable shadow produced by an object (a finger); the camera-observable shadow length obtained therefrom is used to estimate the object's height above the reference surface.
【Detailed Description】
As used herein, a "reference vertical direction" and a "reference horizontal direction" are defined as two mutually perpendicular directions; neither direction is defined with respect to gravity. If the reference surface is flat, the reference vertical direction is defined as the direction perpendicular to the reference surface, and the reference horizontal direction is then defined with respect to the reference vertical direction. The reference surface may be, for example, a floor or a wall. If the reference surface is not flat, an imaginary flat surface representative of the reference surface is used in place of the original reference surface for defining the reference vertical direction; that is, if the reference surface is not flat, the reference vertical direction is defined as the direction perpendicular to this imaginary flat surface.
" height of the object in reference plane " used in this specification and appended is defined as, and is hung down along benchmark Distance of the reference edge point predetermined from one on object of straight orientation measurement to reference plane.One example of reference edge point It is finger tip, and object is finger.Another example of reference edge point is nib, and pen is object.
In addition, as used in this specification and the appended claims, "an object is present" means that the object appears within the camera's field of view. Similarly, "no object is present" means that the object does not appear within this field of view.
A. Mathematical Development
Fig. 2a depicts a model of an object casting a shadow on a reference surface. The model is used in developing the present invention. A reference vertical direction 202 and a reference horizontal direction 204 are defined with respect to reference surface 230. Projector 210 illuminates an object 220 having a predetermined reference edge point 225. Light from projector 210 is blocked by object 220, producing a shadow 241a on reference surface 230. In particular, light from projector 210 travels along a line-of-sight path 250 and meets reference edge point 225, thereby producing the starting point 242a of shadow 241a. Camera 215 is used to capture the object 220 in its field of view together with the part of shadow 241a observable by camera 215. This camera-observable part of shadow 241a is formed along a topographical surface line 235, which is formed by mapping the light path 255 connecting camera 215 and reference edge point 225 onto the reference surface. The part of shadow 241a not observable by the camera is blocked by object 220 and is not captured by camera 215.
The part of shadow 241a observable by camera 215 has a camera-observable shadow length 240a, denoted S. Let Hf denote the height of object 220 above reference surface 230. Let Lf denote the distance between camera 215 and reference edge point 225 along the reference horizontal direction 204. Let D denote the shadow-to-projector distance, i.e. the distance between projector 210 and the starting point 242a of shadow 241a along the reference horizontal direction 204. Let Lp denote the distance between projector 210 and camera 215 along the reference horizontal direction 204. Let Hp denote the distance from projector 210 to reference surface 230 along the reference vertical direction 202, and Hc the distance from camera 215 to reference surface 230 along the reference vertical direction 202. Since two similar triangles are formed whose common side lies along light path 250, equation (1) is obtained:

Hf / Hp = (D + Lp − Lf) / D    (1)

Likewise, two similar triangles are formed whose common side lies along light path 255, giving equation (2):

Hf / Hc = (S + D + Lp − Lf) / (S + D + Lp)    (2)

From equation (1), expressing Lf in terms of Hf gives

Lf = Lp + D·(Hp − Hf) / Hp    (3)

Substituting equation (3) into equation (2) and rearranging algebraically gives

Hf = S·Hp·Hc / (Hp·(S + D + Lp) − Hc·D)    (4)
The Hf thus obtained, i.e. the height of object 220 above reference surface 230, is uniquely determined by S (the camera-observable shadow length 240a) and D (the shadow-to-projector distance), and these two parameters can be obtained from a camera-captured image and a surface map, as explained below. The other parameters in equation (4), namely Lp, Hp and Hc, are obtained when camera 215 and projector 210 are set up.
It is evident from equation (4) that if S = 0 then Hf = 0. Therefore, if the camera-observable shadow length 240a is substantially zero, or if no part of shadow 241a is observable by camera 215, it can be determined that the object touches reference surface 230.
As a further result, Lf can be obtained from equation (3) using the value of Hf computed by equation (4), or computed directly by equation (5):

Lf = Lp + D − S·D·Hc / (Hp·(S + D + Lp) − Hc·D)    (5)
The Y coordinate of the object can be obtained from Lf. The X coordinate of the object can be obtained from a camera-captured image and the surface map, as detailed below. With Hf computed by equation (4), the 3D coordinates of the object are thus obtained.
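As an illustration only, equations (3)-(5) for a flat reference surface can be written as a short routine; the function name and the touch threshold s_touch below are illustrative assumptions, not part of this disclosure.

    # Minimal sketch of equations (3)-(5) for a flat reference surface.
    # All distances are in the same physical unit (e.g. millimetres).
    def object_height_and_offset(S, D, Lp, Hp, Hc, s_touch=1.0):
        """Return (Hf, Lf, touching) from the camera-observable shadow
        length S and the shadow-to-projector distance D.

        Lp: projector-to-camera distance (reference horizontal direction)
        Hp: projector height above the reference surface
        Hc: camera height above the reference surface
        s_touch: illustrative threshold below which S is treated as zero
        """
        if S < s_touch:                      # shadow substantially zero
            return 0.0, Lp + D, True         # touching; eq. (3) with Hf = 0
        denom = Hp * (S + D + Lp) - Hc * D   # denominator of eq. (4)
        Hf = S * Hp * Hc / denom             # eq. (4): height above surface
        Lf = Lp + D * (Hp - Hf) / Hp         # eq. (3): camera-to-edge distance
        return Hf, Lf, False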
Fig. 2a shows the camera positioned lower than projector 210 along the reference vertical direction 202, and farther from object 220 than projector 210 along the reference horizontal direction 204. The present invention, however, is not limited to this positional configuration of camera 215 and projector 210. Projector 210 may be lower than camera 215 along the reference vertical direction 202, provided that projector 210 does not block light path 255. Similarly, camera 215 may be closer to object 220 than projector 210 along the reference horizontal direction 204, provided that camera 215 does not block light path 250.
The model shown in Fig. 2b is similar to that of Fig. 2a, except that a rectangular box 260 of height Ho is placed below the reference edge point 225 of object 220. Introducing box 260 is equivalent to elevating the reference surface 230 under reference edge point 225 by Ho. This produces a deformed shadow 241b having an offset starting point 242b, which results in a lengthened camera-observable shadow length 240b, denoted S′, and a shortened shadow-to-projector distance, denoted D′. They are related as follows:

D′ = D·(Hp − Ho) / Hp    (6)

and

S = S′ + D′ − D    (7)
Hence Hf is still uniquely determined. This result shows that even if reference surface 230 is not flat, the height distribution of reference surface 230 (which may be regarded as the surface profile of reference surface 230) still allows the object's height above reference surface 230 to be uniquely determined. Those of ordinary skill in the art can readily make suitable modifications to equation (4) to determine the height of an object above a non-flat reference surface having a known surface profile.
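Read together with equations (6) and (7), the flat-surface computation could be corrected when the surface under the reference edge point is elevated by Ho, as inferred below from the Fig. 2b model; the routine is a sketch of that inference with S′ and D′ as the observed quantities, not a formula stated verbatim herein.

    # Sketch inferred from equations (6)-(7): undo the elevation Ho under
    # the edge point, then apply the flat-surface equation (4).
    def object_height_elevated(S_obs, D_obs, Ho, Lp, Hp, Hc):
        D = D_obs * Hp / (Hp - Ho)   # invert eq. (6): D' = D*(Hp - Ho)/Hp
        S = S_obs + D_obs - D        # eq. (7): S = S' + D' - D
        return S * Hp * Hc / (Hp * (S + D + Lp) - Hc * D)   # eq. (4)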
Fig. 2c depicts a model functionally identical to the model of Fig. 2a. A mirror 280 is used to reflect the image projected by projector 270 onto reference surface 230, and to reflect the scene appearing on reference surface 230 for capture by camera 275. Because of mirror 280, projector 270 and camera 275 have a virtual projector 271 and a virtual camera 276, respectively. Virtual projector 271 and virtual camera 276 are functionally identical to projector 210 and camera 215, respectively, of the Fig. 2a model.
B. The Present Invention
The present invention provides an optical system comprising one projector and one camera for obtaining an object's position or motion information relative to a reference surface. A particular advantage of the present invention is that only one camera is used. The object has a predetermined reference edge point. The position information includes whether the object touches the reference surface, the object's height above the reference surface, and the object's 3D coordinates. The motion information includes a time sequence of the 3D coordinates. Other motion information includes the object's velocity, acceleration, direction of motion, and their time histories. The position and motion information are relative to the reference surface, meaning that the object's 3D coordinates use the reference surface as the XY plane of the coordinate system if the reference surface is flat. If the reference surface is non-flat, those of ordinary skill in the art can adapt the coordinate system according to the surface profile of the reference surface.
Fig. 3 is a flowchart of the main process steps of an exemplary embodiment of the present invention. In the first step 310 of the method, the surface profile of the reference surface and a surface map are obtained. As mentioned above, the surface profile is characterized by the height distribution of the reference surface. The surface map maps any point (or any pixel) in a camera-captured image to a corresponding physical location on the reference surface. By using the surface map, a point or pixel of interest in a captured image can be mapped to its corresponding location on the reference surface. Step 310 is usually performed during system initialization. In step 320, the system detects whether an object is present, until an object's presence is determined. The object's presence triggers the next step 330. In step 330, a position-information acquisition process is performed immediately after the object's presence is determined; this process produces one or more items of position information. From the one or more items of position information, motion information can be computed in step 340. The position-information acquisition process of step 330 and the motion-information computation of step 340 are usually repeated, steps 330 and 340 being performed at multiple time instants. In general, the choice of time instants is subject to hardware limitations, such as the need to match the frame rates of the projector and the camera.
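For orientation only, the Fig. 3 flow might be organized as sketched below; every helper here is an illustrative stub standing in for steps 310-340, not a function defined by this disclosure.

    # Hypothetical outline of the Fig. 3 flow; the helpers are stubs.
    def calibrate_surface():            # step 310: surface map + surface profile
        return {}, {}

    def object_present(surface_map):    # step 320: test-for-presence check
        return True

    def acquire_position(surface_map, surface_profile):   # step 330
        return (0.0, 0.0, 0.0)          # (X, Y, Z) of the reference edge point

    surface_map, surface_profile = calibrate_surface()
    while not object_present(surface_map):
        pass                            # keep testing until an object appears

    trajectory = []                     # time series of 3D coordinates (step 340 input)
    for _ in range(100):                # repeated at multiple instants (frame-rate bound)
        trajectory.append(acquire_position(surface_map, surface_profile))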
Furthermore, the method is characterized in that the projector and the camera are positionally arranged such that, when an object not touching the reference surface is illuminated by the projector, a part of the object's shadow on the reference surface is observable by the camera. In particular, the shadow is formed along a topographical surface line, which is formed by mapping the straight light path connecting the camera and the reference edge point onto the reference surface. The length of this partial shadow is used in the position-information acquisition process to determine a unique height of the object above the reference surface. This length is referred to as the camera-observable shadow length.
The surface map and the surface profile can be determined by the techniques disclosed in U.S. Patent Application No. 13/474,567. Fig. 4 shows one embodiment of the present invention in which the surface map and the surface profile of the reference surface are determined according to these techniques. The projector first projects a light pattern onto the reference surface (step 410). In practice, the light pattern is usually designed as a regular pattern, such as a structured grid, a regular grid or a rectangular grid. Fig. 7 shows a rectangular grid. Rectangular grid 710 is used here as an example to realize the light pattern. Rectangular grid 710 has a plurality of crossing points 721-728. These crossing points 721-728 can easily be identified and located in a camera-captured image, provided that the captured image includes the reference surface onto which rectangular grid 710 is projected. There is a one-to-one correspondence between each crossing point 721-728 identified in the captured image and the corresponding point on the light pattern. Knowing the projector's height above the reference surface and the projector's projection angle, the physical location (e.g. the XY coordinates) of each crossing point 721-728 of the light pattern projected onto the reference surface can be obtained. The surface map can therefore be computed and constructed. For a non-flat reference surface, the edges of rectangular grid 710 may be deformed and/or broken when projected onto the reference surface. By analyzing the deviations of these edges, as observed by the camera, from the corresponding edges on an imaginary flat surface, the surface profile can be estimated. Determining the surface map and the surface profile can be summarized as follows: the camera captures the light pattern projected onto the reference surface (step 420); the surface map is then computed from the correspondence between a set of points on the light pattern and the corresponding set of points in the captured image (step 430); after the surface map is obtained, the surface profile is determined from the light pattern and the captured image (step 440).
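For a flat reference surface, the point correspondence of steps 420-430 reduces to a planar homography, which might be estimated as sketched below; the four sample correspondences are placeholder values, and OpenCV's findHomography is used as one possible tool rather than as the prescribed method.

    import numpy as np
    import cv2

    # Example correspondences for four grid crossing points (placeholder values):
    img_pts = np.float32([[102, 80], [518, 76], [530, 410], [95, 402]])  # pixels (step 420)
    world_pts = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])     # mm on the surface

    H, _ = cv2.findHomography(img_pts, world_pts)   # surface map as a 3x3 homography

    def image_to_surface(u, v):
        """Map an image pixel (u, v) to a physical (x, y) on the reference surface."""
        p = H @ np.array([u, v, 1.0])
        return p[0] / p[2], p[1] / p[2]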
In one embodiment, whether an object is present is determined from a test-for-presence image captured by the camera: while the projector projects a second light pattern onto the reference surface, the camera captures a test-for-presence image. Fig. 5 is a flowchart of detecting an object's presence according to this embodiment. The projector projects the second light pattern onto the reference surface (step 510). The second light pattern may be the light pattern used for determining the surface map and the surface profile, or it may be another light pattern adapted or optimized for the reference surface after the surface profile is obtained; the latter is better suited to a non-flat reference surface. The camera then captures a test-for-presence image (step 520). From the test-for-presence image, two methods are available for detecting whether an object is present. The first object-detection method compares the test-for-presence image with a comparison image of the second light pattern (step 530). The comparison image is obtained by the camera capturing the second light pattern on the reference surface with no object present; alternatively, the comparison image can be computed from the second light pattern according to the surface map. One way to compare the test-for-presence image and the comparison image is to compute a difference image between them. If no object is present, the test-for-presence image and the comparison image are substantially the same, so most pixel values of the difference image are close to zero, or below some threshold. If an object is present, the difference image contains a contiguous region whose pixel values exceed the threshold, from which the object's presence can be determined. Using a difference image to detect an object's presence has an advantage: owing to the parallelism of the computation, the difference image can be computed very quickly, so the detection can run at a very high rate. The second object-detection method compares the second light pattern with a reconstructed light pattern obtained from the test-for-presence image (step 535). The reconstructed light pattern is reconstructed from the test-for-presence image according to the surface map, so if no object is present, the reconstructed light pattern should be substantially the same as the second light pattern. Similarly to the first object-detection method, a difference image between the second light pattern and the reconstructed light pattern can be computed to detect whether an object is present. Steps 520 and 530 (or 535) are repeated until an object's presence is identified (decision step 540).
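The difference-image test of step 530 might be realized as sketched below, assuming single-channel images of equal size; the pixel threshold and the minimum region area are illustrative parameters, not values given herein.

    import cv2

    def object_appeared(test_img, comparison_img, pix_thresh=30, min_area=200):
        """Step 530 sketch: threshold the difference image and look for a
        contiguous region large enough to be an object."""
        diff = cv2.absdiff(test_img, comparison_img)          # difference image
        _, mask = cv2.threshold(diff, pix_thresh, 255, cv2.THRESH_BINARY)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        # stats[i, cv2.CC_STAT_AREA] is the pixel area of component i (0 = background)
        return any(stats[i, cv2.CC_STAT_AREA] >= min_area for i in range(1, n))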
Fig. 6 is a flowchart of the position-information acquisition process of one embodiment of the present invention.
In the first step 610 of the position-information acquisition process, a region of interest (ROI) on the reference surface is determined. The ROI is a region that surrounds and includes the reference edge point. After the object's presence is determined in step 320, the ROI in step 610 is preferably determined from the last test-for-presence image obtained in step 320: the ROI is determined by identifying, in this last test-for-presence image, a pattern that matches the features of the object around the reference edge point. If the object is tracked continuously, determining the current ROI from the previously determined ROI can be simplified, because checking whether the reference edge point is still within the previously determined ROI, or predicting the current ROI from the movement trajectory of the reference edge point, is easier than performing pattern recognition over a large area.
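One possible realization of the pattern identification in step 610 is normalized template matching around the expected appearance of the object near the reference edge point, as sketched below; the template image and the ROI half-size are assumptions made for illustration.

    import cv2

    def find_roi(test_img, template, half=40):
        """Locate the reference-edge-point pattern and return an ROI box
        (x0, y0, x1, y1) surrounding it. 'template' is a small image of the
        object features around the reference edge point (an assumption here)."""
        res = cv2.matchTemplate(test_img, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)        # position of the best match
        cx = max_loc[0] + template.shape[1] // 2     # centre of the matched patch
        cy = max_loc[1] + template.shape[0] // 2
        return cx - half, cy - half, cx + half, cy + half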
After the ROI is determined, a spot light is directed onto a region at least covering the ROI, so that the part of the object around the reference edge point is illuminated and casts a shadow onto the reference surface, unless the object is very close to the reference surface (step 620). The spot light is preferably generated by the projector; however, it may also be generated by a separate light source instead of the projector. As mentioned above, the part of the shadow observable by the camera is formed along the topographical surface line, which is formed by mapping the light path connecting the camera and the reference edge point onto the reference surface.
The camera then captures an ROI-highlighted image (step 625). An example of an ROI-highlighted image is shown in Fig. 8, in which the object is a finger 830 and the reference edge point is the fingertip 835 of finger 830. Spot light 820 is projected onto reference surface 810, forming a shadow 840.
After the ROI-highlighted image is obtained as described above, the camera-observable shadow length is computed (step 630). As mentioned above, the camera-observable shadow length is the length of the part of the shadow observable by the camera. Computing the camera-observable shadow length from the ROI-highlighted image is accomplished by using the surface map.
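Given a binary mask of the camera-observable shadow and an image-to-surface mapping function such as the one sketched earlier, step 630 might be realized as below; the thresholding-based segmentation implied by the mask and the choice of the shadow's extreme pixels along the x direction are illustrative simplifications.

    import numpy as np

    def shadow_length(shadow_mask, image_to_surface):
        """Map the two extreme shadow pixels onto the reference surface and
        return the physical distance between them (the length S)."""
        ys, xs = np.nonzero(shadow_mask)          # pixel coordinates of the shadow
        if xs.size == 0:
            return 0.0                            # no observable shadow: S ~ 0 (touch)
        # take the shadow's two endpoints along its dominant (x) direction
        i0, i1 = np.argmin(xs), np.argmax(xs)
        p0 = np.array(image_to_surface(xs[i0], ys[i0]))
        p1 = np.array(image_to_surface(xs[i1], ys[i1]))
        return float(np.linalg.norm(p1 - p0))     # physical length on the surface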
If the camera-observable shadow length is substantially close to zero, it can be determined that the object touches the reference surface (step 640), thereby providing the first position information. In practice, the camera-observable shadow length is regarded as substantially close to zero if it is below some threshold, or if the camera-observable part of the object's shadow is undetectable. In some practical applications, this alone suffices to confirm that the object touches the reference surface. In one embodiment, if it is confirmed that the object touches the reference surface, the position-information acquisition process is stopped and the next step begins.
If the position-information acquisition process continues after the first position information is obtained in step 640, the shadow-to-projector distance can be computed from the ROI-highlighted image by using the surface map (step 650). As mentioned above, the shadow-to-projector distance is the distance, measured along the reference horizontal direction, between the projector and the starting point of the shadow, where the starting point is the particular point at which the reference edge point is projected onto the shadow.
The second position information provided by the method is the object's height above the reference surface. After the camera-observable shadow length and the shadow-to-projector distance are obtained, the object's height above the reference surface (Hf) is computed in step 660 from a set of data including the surface profile, the camera-observable shadow length (S), the shadow-to-projector distance (D), the distance between the projector and the camera measured along the reference horizontal direction (Lp), the distance from the projector to the reference surface measured along the reference vertical direction (Hp), and the distance from the camera to the reference surface measured along the reference vertical direction (Hc). If the reference surface is flat, the object's height above the reference surface can be computed by equation (4). Note that if the camera-observable shadow length is found to be substantially close to zero, the object's height above the reference surface may be set directly to zero. Note also that the object's height above the reference surface is the Z coordinate of the object's 3D coordinates.
To complete the 3D coordinates, the XY coordinates of the object can be obtained as follows. The Y coordinate can be obtained from the distance between the camera and the reference edge point along the reference horizontal direction (i.e. Lf), computed by equation (3) or equation (5) (step 670). The X coordinate of the object can be obtained directly from a camera-captured image and the surface map, by determining the object's position in the camera-captured image and then mapping this position to the object's physical location via the surface map (step 675). The camera-captured image is preferably the ROI-highlighted image obtained in step 625. The object's 3D coordinates are then assembled from the XY coordinates and the object's height above the reference surface (the Z coordinate) (step 680), thereby providing the third position information.
As described above, the position-information acquisition process is repeated a number of times to obtain a time series of the object's 3D coordinates. This time series thereby provides the first motion information. From the time series, one or more further items of motion information of the object can be obtained. Examples of such motion information include the object's velocity, acceleration and travel direction, and the time histories of velocity, acceleration and travel direction.
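The motion information of step 340 follows from finite differences over the time series of 3D coordinates, for example as sketched below (a uniform sampling interval dt is assumed).

    import numpy as np

    def motion_info(xyz_series, dt):
        """xyz_series: (N, 3) array of 3D coordinates sampled every dt seconds.
        Returns velocity, acceleration and travel-direction time histories."""
        xyz = np.asarray(xyz_series, dtype=float)
        vel = np.diff(xyz, axis=0) / dt                      # velocity time history
        acc = np.diff(vel, axis=0) / dt                      # acceleration time history
        speed = np.linalg.norm(vel, axis=1)
        direction = vel / np.maximum(speed[:, None], 1e-9)   # unit travel directions
        return vel, acc, direction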
In a system implementing the method, the projector may use visible or invisible light for projection. The choice of light depends on the application. For example, for an interactive projection system requiring the user to press his or her finger on a touch screen for input, invisible light, preferably infrared light, is preferred for the projector. Similarly, the spot light may also be generated as infrared light by a separate light source. When the projector or the separate light source produces infrared light, the camera is configured to be at least sensitive to infrared light when capturing images.
In implementing the method, the projector and the camera are arranged such that (i) the object casts a shadow on the reference surface, and (ii) the camera's field of view preferably covers the entire light pattern projected onto the reference surface. In one option, the projector and the camera are arranged such that (i) when an object is present, the projector is located between the camera and the object along the reference horizontal direction, and (ii) the camera is located between the projector and the reference surface along the reference vertical direction. In another option, a mirror is used to reflect any image projected by the projector onto the reference surface, and to reflect any scene appearing on the reference surface for capture by the camera.
In some applications of the method, the light pattern and the second light pattern are identical. If the spot light is generated by a separate light source, the projector only needs to project the light pattern. In this case, a light source that projects a fixed light pattern is a low-cost implementation of the projector.
A system for obtaining an object's position or motion information relative to a reference surface comprises a projector and a camera, the system being configured according to the method embodiments described above to determine the position or motion information. Optionally, the system may be realized as a stand-alone unit with the projector and the camera integrated in one body. Usually, one or more processors may be embedded in the system for performing the computation and estimation steps of the method.
The method and system disclosed herein can be used in, or as, an interactive projection system. In the interactive projection system, the object is a user's finger and the reference edge point is the fingertip of the finger. The interactive projection system detects the presence of the finger and its touching of the reference surface in order to provide user-input information.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (16)

1. An optical method for a system comprising a projector and a camera, for obtaining position information or motion information of an object relative to a reference surface, the object having a predetermined reference edge point, the method comprising:

obtaining a surface profile of the reference surface and a surface map, the surface map being used for mapping any point in a camera-captured image to a corresponding physical location on the reference surface;

at a time instant after the object's presence is determined, starting a position-information acquisition process;

wherein the projector and the camera are disposed in a positional configuration such that, when an object not touching the reference surface is illuminated by a spot light generated by the projector or by a separate light source, a part of the object's shadow formed on the reference surface along a topographical surface line is observable by the camera, whereby the length of this partial shadow, referred to as the camera-observable shadow length, is used during the position-information acquisition process to determine a unique height of the object above the reference surface, namely the distance, measured along a reference vertical direction, from the predetermined reference edge point on the object to the reference surface, thereby obtaining the position information of the object relative to the reference surface;

wherein the position-information acquisition process comprises:

1) determining an ROI, namely a region of interest, from an ROI-determination image captured by the camera, such that the ROI is a region that surrounds and includes the reference edge point;

directing the spot light onto a region at least covering the ROI, so that the part of the object around the reference edge point is illuminated and casts a shadow onto the reference surface;

estimating, by using the surface map, the camera-observable shadow length from an ROI-highlighted image, wherein the ROI-highlighted image is captured by the camera after the spot light is generated;

if the camera-observable shadow length is within a predetermined range close to zero, determining that the object touches the reference surface, thereby providing first position information;

2) estimating, by using the surface map, a shadow-to-projector distance from the ROI-highlighted image;

estimating the object's height above the reference surface from a set of data including the surface profile, the camera-observable shadow length, the shadow-to-projector distance, the distance between the projector and the camera measured along a reference horizontal direction, the distance from the projector to the reference surface measured along the reference vertical direction, and the distance from the camera to the reference surface measured along the reference vertical direction, thereby providing second position information;

wherein, if the reference surface is flat, estimating the object's height above the reference surface includes computing:

Hf = S·Hp·Hc / (Hp·(S + D + Lp) − Hc·D)

wherein Hf is the object's height above the reference surface, S is the camera-observable shadow length, D is the shadow-to-projector distance, Lp is the distance between the projector and the camera measured along the reference horizontal direction, Hp is the distance from the projector to the reference surface measured along the reference vertical direction, and Hc is the distance from the camera to the reference surface measured along the reference vertical direction;

if the reference surface is not flat, adjusting the above formula according to the height distribution of the reference surface to uniquely determine the object's height above the reference surface;

3) estimating the distance, measured along the reference horizontal direction, between the projector and the reference edge point according to one of:

(a) the set of data; or

(b) the object's height above the reference surface, the shadow-to-projector distance, the distance between the projector and the camera measured along the reference horizontal direction, and the distance from the projector to the reference surface measured along the reference vertical direction;

obtaining the X coordinate of the object from a camera-captured image and the surface map;

obtaining the three-dimensional coordinates of the object from the X coordinate of the object, the distance between the projector and the reference edge point measured along the reference horizontal direction, and the object's height above the reference surface, thereby providing third position information; and

repeating the position-information acquisition process at multiple time instants so as to obtain a time series of the object's three-dimensional coordinates, thereby providing motion information.
2. The method of claim 1, wherein the topographical surface line is formed by mapping the light path connecting the camera and the reference edge point onto the reference surface.
3. The method of claim 1, further comprising: computing, from the time series of three-dimensional coordinates, one or more further items of motion information of the object, including velocity, acceleration, travel direction, the time history of velocity, the time history of acceleration, and the time history of travel direction.
4. The method of claim 1, wherein obtaining the surface profile and the surface map comprises:

projecting, by the projector, a light pattern onto the reference surface;

capturing, by the camera, an image of the light pattern projected onto the reference surface while the object is not present, so that the object is absent from the camera-captured image;

determining the surface profile from the light pattern and the camera-captured image; and

computing the surface map from the correspondence between a set of points on the light pattern and a set of identifiable corresponding points in the camera-captured image.
5. The method of claim 4, wherein the light pattern is a structured grid, a regular grid or a rectangular grid.
6. The method of claim 4, further comprising:

determining whether the object is present, until the object's presence is identified, so as to trigger the position-information acquisition process at the time instant;

wherein:

the object's presence is determined from a test-for-presence image captured by the camera while the projector projects a second light pattern onto the reference surface; and

the second light pattern is either the light pattern or another light pattern adapted or optimized for the reference surface.
7. The method of claim 6, wherein the object's presence is further determined according to:

a difference image between the test-for-presence image and a comparison image of the second light pattern, wherein the comparison image is computed from the second light pattern according to the surface map; or

a difference image between the second light pattern and a reconstructed light pattern, wherein the reconstructed light pattern is reconstructed from the test-for-presence image according to the surface map, such that the reconstructed light pattern resembles the second light pattern if the object is not present.
8. The method of claim 1, wherein the positional configuration is such that:

when the object is present, the projector is located between the camera and the object along a reference horizontal direction; and

the camera is located between the projector and the reference surface along a reference vertical direction.
9. The method of claim 1, further comprising: using a mirror to reflect all images projected by the projector onto the reference surface, and to reflect all scenes appearing on the reference surface for capture by the camera.
10. The method of claim 1, wherein determining the ROI comprises: determining the ROI by identifying, in the ROI-determination image, a pattern that matches features of the object around the reference edge point.
11. The method of claim 1, wherein the projector or the separate light source uses infrared light, and wherein the camera is configured to be at least sensitive to infrared light when capturing images.
12. The method of claim 6, wherein the projector uses infrared light for image projection, or the separate light source is an infrared light source, and wherein the camera is configured to be at least sensitive to infrared light when capturing images.
13. A system for obtaining position information or motion information of an object relative to a reference surface, the object having a predetermined reference edge point, wherein the system comprises a projector and a camera and is configured to obtain the position information or motion information by the method of claim 1.
14. The system of claim 13, wherein the object is a finger and the reference edge point is a fingertip, the finger's touching of the reference surface providing user-input information to the system.
15. A system for obtaining position information or motion information of an object relative to a reference surface, the object having a predetermined reference edge point, wherein the system comprises a projector and a camera and is configured to obtain the position information or motion information by the method of claim 1.
16. A system for obtaining position information or motion information of an object relative to a reference surface, the object having a predetermined reference edge point, wherein the system comprises a projector and a camera and is configured to obtain the position information or motion information by the method of claim 3.
CN201410009366.5A 2013-12-11 2014-01-09 Touch and motion detection using a surface map, object shadow and a camera Active CN103824282B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/102,506 2013-12-11
US14/102,506 US9429417B2 (en) 2012-05-17 2013-12-11 Touch and motion detection using surface map, object shadow and a single camera

Publications (2)

Publication Number Publication Date
CN103824282A CN103824282A (en) 2014-05-28
CN103824282B true CN103824282B (en) 2017-08-08

Family

ID=50759324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410009366.5A Active CN103824282B (en) 2013-12-11 2014-01-09 Touch and motion detection using a surface map, object shadow and a camera

Country Status (1)

Country Link
CN (1) CN103824282B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106415460B (en) * 2016-07-12 2019-04-09 香港应用科技研究院有限公司 Wearable device with intelligent subscriber input interface
CN106774846B (en) * 2016-11-24 2019-12-03 中国科学院深圳先进技术研究院 Alternative projection method and device
CN114777681A (en) * 2017-10-06 2022-07-22 先进扫描仪公司 Generating one or more luminance edges to form a three-dimensional model of an object
CN107943351B (en) * 2017-11-22 2021-01-05 苏州佳世达光电有限公司 Projection surface touch identification system and method
CN108108475B (en) * 2018-01-03 2020-10-27 华南理工大学 Time sequence prediction method based on depth-limited Boltzmann machine
CN108279809B (en) * 2018-01-15 2021-11-19 歌尔科技有限公司 Calibration method and device
CN108483035A (en) * 2018-03-23 2018-09-04 杭州景业智能科技有限公司 Divide brush all-in-one machine handgrip
CN110858404B (en) * 2018-08-22 2023-07-07 瑞芯微电子股份有限公司 Identification method and terminal based on regional offset
CN109375833B (en) * 2018-09-03 2022-03-04 深圳先进技术研究院 Touch instruction generation method and device
CN110941367A (en) * 2018-09-25 2020-03-31 福州瑞芯微电子股份有限公司 Identification method based on double photographing and terminal
CN110455201B (en) * 2019-08-13 2020-11-03 东南大学 Stalk crop height measuring method based on machine vision
CN111208479B (en) * 2020-01-15 2022-08-02 电子科技大学 Method for reducing false alarm probability in deep network detection
CN112560891A (en) * 2020-11-09 2021-03-26 联想(北京)有限公司 Feature detection method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6177682B1 (en) * 1998-10-21 2001-01-23 Novacam Tyechnologies Inc. Inspection of ball grid arrays (BGA) by using shadow images of the solder balls
CN101571776A (en) * 2008-04-21 2009-11-04 株式会社理光 Electronics device having projector module
CN102779001A (en) * 2012-05-17 2012-11-14 香港应用科技研究院有限公司 Light pattern used for touch detection or gesture detection
CN103383731A (en) * 2013-07-08 2013-11-06 深圳先进技术研究院 Projection interactive method and system based on fingertip positioning and computing device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101829429B (en) * 2004-12-03 2011-12-07 世嘉股份有限公司 Gaming machine
US7599561B2 (en) * 2006-02-28 2009-10-06 Microsoft Corporation Compact interactive tabletop with projection-vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6177682B1 (en) * 1998-10-21 2001-01-23 Novacam Tyechnologies Inc. Inspection of ball grid arrays (BGA) by using shadow images of the solder balls
CN101571776A (en) * 2008-04-21 2009-11-04 株式会社理光 Electronics device having projector module
CN102779001A (en) * 2012-05-17 2012-11-14 香港应用科技研究院有限公司 Light pattern used for touch detection or gesture detection
CN103383731A (en) * 2013-07-08 2013-11-06 深圳先进技术研究院 Projection interactive method and system based on fingertip positioning and computing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Peng Song et al., "Vision-based Projected Tabletop Interface for Finger Interactions," IEEE International Conference on Human-Computer Interaction (HCI 2007), Dec. 31, 2007; p. 50, paragraph 5, lines 4-6; p. 54, paragraph 3. *

Also Published As

Publication number Publication date
CN103824282A (en) 2014-05-28

Similar Documents

Publication Publication Date Title
CN103824282B (en) Touch and motion detection using a surface map, object shadow and a camera
US11652965B2 (en) Method of and system for projecting digital information on a real object in a real environment
US9429417B2 (en) Touch and motion detection using surface map, object shadow and a single camera
US9001208B2 (en) Imaging sensor based multi-dimensional remote controller with multiple input mode
KR102365730B1 (en) Apparatus for controlling interactive contents and method thereof
CN108469899B (en) Method of identifying an aiming point or area in a viewing space of a wearable display device
CN102508578B (en) Projection positioning device and method as well as interaction system and method
KR101002785B1 (en) Method and System for Spatial Interaction in Augmented Reality System
JP4278979B2 (en) Single camera system for gesture-based input and target indication
CN115509352A (en) Optimized object scanning using sensor fusion
JP6747292B2 (en) Image processing apparatus, image processing method, and program
TWI508027B (en) Three dimensional detecting device and method for detecting images thereof
WO2009148064A1 (en) Image recognizing device, operation judging method, and program
WO2011137226A1 (en) Spatial-input-based cursor projection systems and methods
KR101196291B1 (en) Terminal providing 3d interface by recognizing motion of fingers and method thereof
CN103677240B (en) Virtual touch exchange method and virtual touch interactive device
JP2011022945A5 (en) Interactive operation device
US20140347329A1 (en) Pre-Button Event Stylus Position
JP2013097805A (en) Three-dimensional interactive system and three-dimensional interactive method
CN109073363A (en) Pattern recognition device, image-recognizing method and image identification unit
KR101330531B1 (en) Method of virtual touch using 3D camera and apparatus thereof
US20210287330A1 (en) Information processing system, method of information processing, and program
US9857919B2 (en) Wearable device with intelligent user-input interface
TWI486815B (en) Display device, system and method for controlling the display device
JP2014142695A (en) Information processing apparatus, system, image projector, information processing method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant