CN101976449A - Method for shooting and matching multiple text images - Google Patents

Method for shooting and matching multiple text images

Info

Publication number
CN101976449A
CN101976449A (application CN201010558868A)
Authority
CN
China
Prior art keywords
local image
image
text
template image
stitching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010558868
Other languages
Chinese (zh)
Other versions
CN101976449B (en)
Inventor
龙腾
黄灿
镇立新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hehe Information Technology Development Co Ltd
Original Assignee
Shanghai Hehe Information Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hehe Information Technology Development Co Ltd filed Critical Shanghai Hehe Information Technology Development Co Ltd
Priority to CN2010105588685A priority Critical patent/CN101976449B/en
Publication of CN101976449A publication Critical patent/CN101976449A/en
Application granted granted Critical
Publication of CN101976449B publication Critical patent/CN101976449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for shooting and stitching multiple text images, which comprises the following steps: shooting the entire text to obtain a full image of the text as a template image; dividing the text into L regions; calculating the region to be shot and shooting the regions one by one, so that if the previous shot covered region k, the current shot covers region k+1; rendering the corresponding region of the template image semi-transparently and filling it into the fill area of the screen; guided by this semi-transparent template prompt, the user shoots a local image of the corresponding region of the text; judging whether all local images of the text have been shot, and continuing to shoot if some remain. The invention guarantees that the local images taken cover the entire text, so that the full text image stitched from them has no holes or missing corners, and a single stitching pass therefore achieves a good result.

Description

Method for shooting and stitching multiple text images
Technical field
The invention belongs to the field of image processing, relates to an image stitching method, and in particular to a method for shooting multiple text images and stitching them.
Background technology
With the development of technology, current smartphones generally integrate a digital camera with autofocus, and people commonly use the phone camera to scan or photograph text. For a small document such as a business card, the text in an image scanned with the phone's autofocus camera is perfectly clear. For a large document, however, such as a newspaper page, a journal cover, or an A3-sized document, the 3- to 5-megapixel cameras on current phones cannot capture enough detail of the whole page, so the characters in the image are somewhat blurred.
A common solution for obtaining a high-resolution full image of a document is traditional image stitching: move the camera closer to the document, shoot each local region of the document first, and after obtaining several local text images, stitch them together into a full text image. The resolution of a full image obtained this way can exceed ten million pixels.
One shortcoming of this scheme is that the local images taken by the user sometimes do not completely cover every region of the document, so the stitched full image ends up with holes or missing corners. If the stitched image is found to be incomplete, all local images must be re-shot and stitched again, which is very time-consuming.
Another shortcoming is that every pair of local images must have an overlapping region: in the stitching stage, matched feature pairs are computed from these overlapping regions, the transformation matrix between the two images is computed from the matched pairs, and the two images are then transformed onto the same plane according to the matrix and stitched. If the overlap is too small, or the overlapping region contains no texture, the stitching between local images fails; this is a common problem of current panorama stitching software. To ensure that the local images overlap, the user cannot shoot the document in an arbitrary order, but must shoot region by region while guaranteeing that each shot overlaps its neighbours. Such a constrained shooting procedure is inconvenient for mobile-phone users.
Summary of the invention
The technical problem to be solved by the invention is to provide a method for shooting and stitching multiple text images that guarantees that the local images taken cover the entire document, so that the full document image stitched from them contains no holes or missing corners and a single stitching pass achieves a good result.
To overcome the two shortcomings of the prior art, an effective solution is to prompt the user, at shooting time, to shoot a designated region, guaranteeing that the designated regions exactly cover the entire document. In addition, during stitching an initial full text image is obtained first (a full image of the document taken directly with the phone, at lower resolution); each local image is then feature-matched against this original full image, and according to the matched points between the local image and the original full image every local image is transformed onto the plane of the original text image, so no overlap between the local images is required. To this end, the invention proposes a method of shooting local text images that makes the local images cover the entire document and also preserves the initial text image during shooting.
To solve the above technical problems, the invention adopts the following technical scheme:
A method for shooting and stitching multiple text images, comprising the following steps:
Step 110: move the camera far enough from the text that the whole text can just be captured, and take the resulting full text image as the template image;
Step 120: the user divides the full text image into N*M uniform regions;
Step 130: calculate the region to be shot; if the previous shot covered region k, the current shot covers region k+1; the image of this region is used as a semi-transparent template and fills the fill area of the screen, where every edge of the fill area is i pixels from the edge of the screen;
Step 140: guided by the semi-transparent document-template prompt, the user shoots a local image of the corresponding region of the document;
Step 150: judge whether all local images of the document have been shot; if yes, go to step 160, and if some local images remain to be shot, return to step 130;
Step 160: stitch all local images into one complete full image.
In a preferred embodiment of the invention, in step 130 the method of prompting the user to shoot the next region comprises:
Taking the regions divided above as the reference, the region to be shot is cropped out as the template image, and a fill area is set in the shooting screen of the phone, every edge of the fill area being i pixels from the edge of the display;
According to the fill area, the template image is shrunk so that it exactly fills the fill area, and the pixel transparency of the template image is set to semi-transparent, so that when a local image is shot the user can both preview the local image to be shot and compare it with the template image.
In a preferred embodiment of the invention, in step 140 the method of shooting a local image comprises: adjusting the distance of the camera, and pressing the shutter button when the previewed local image almost coincides with the template image, obtaining the local image.
In a preferred embodiment of the invention, in step 160 the stitching step comprises: performing feature matching between the local images and the template image, computing, from the matched feature-point pairs, the perspective transformation matrix between each local image and the original text image, transforming each local image by its perspective transformation matrix onto the plane of the original text image so that all transformed local images lie in the same plane, and then stitching them.
In a preferred embodiment of the invention, step 160 specifically comprises:
Step 161: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step 1611, detecting the feature keypoints of interest; step 1612, extracting a feature-vector descriptor of the region around each keypoint; step 1613, matching the descriptors by the Euclidean distance between feature points; in step 1613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
Step 162: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step 130 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step 163;
Step 163: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images; let src_points be the coordinates of the matched points in the plane of the template text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N; the perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates of a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates of the corresponding point of src_points;
the 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right];
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image; the local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image; all local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the subsequent stitching is then performed;
Step 164: judge whether all local images have been processed; if yes, go to step 165, otherwise return to step 161 and process the next local image;
Step 165: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step 166: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
A method for shooting and stitching multiple text images, in which several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen;
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if some local images remain to be shot, return to step S3.
A method for shooting and stitching multiple text images, in which several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen;
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if yes, go to step S6, and if some local images remain to be shot, return to step S3;
Step S6: match each local image against the template image and stitch them into a new text image.
In a preferred embodiment of the invention, in step S3 the distance between every edge of the fill area and the edge of the screen is a set pixel value.
In a preferred embodiment of the invention, the stitching method of step S6 comprises:
Step S61: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step S611, detecting the feature keypoints of interest; step S612, extracting a feature-vector descriptor of the region around each keypoint; step S613, matching the descriptors by the Euclidean distance between feature points; in step S613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
Step S62: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step S3 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step S63;
Step S63: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images; let src_points be the coordinates of the matched points in the plane of the template text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N; the perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates of a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates of the corresponding point of src_points;
the 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right];
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image; the local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image; all local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the subsequent stitching is then performed;
Step S64: judge whether all local images have been processed; if yes, go to step S65, otherwise return to step S61 and process the next local image;
Step S65: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step S66: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
The beneficial effects of the invention are: the method for shooting and stitching multiple text images proposed by the invention guarantees that the local images taken completely cover the entire document, so that the final stitched document image has no holes or missing corners. In addition, the shooting method makes the position of each local image within the full document image known, which effectively improves the speed and accuracy of feature matching in the subsequent image stitching. The shooting method of the invention is therefore well suited to document stitching on mobile phones.
Description of drawings
Fig. 1 is a flow chart of the method for shooting and stitching multiple text images according to the invention.
Fig. 2 is a schematic diagram of a template image divided into 2×2 regions.
Fig. 3 is a schematic diagram of the setup of the screen fill area.
Fig. 4 is a schematic diagram of the initial shooting prompt based on the template image.
Embodiment
The preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Embodiment one
The invention discloses a method for shooting and stitching multiple text images; referring to Fig. 1, the method comprises the following steps:
[Step 110] Move the camera far enough from the document that the entire document can just be captured, obtaining the full text image.
The method of shooting the original text image comprises: adjusting the distance between the camera and the document, and pressing the shutter button when the document to be shot just fills the whole phone screen, obtaining the initial text image.
[Step 120] The user chooses to divide the full text image into N*M uniform regions, typically 2×2 or 3×3.
The region-division method comprises: dividing the whole original text image into N*M uniform regions. An example of the division is shown in Fig. 2, where N = 2 and M = 2.
[Step 130] Calculate the region to be shot: if the previous shot covered region k, the current shot covers region k+1. The image of this region is used as a semi-transparent template and fills the fill area of the screen, where every edge of the fill area is i pixels from the edge of the screen; for a phone screen of 480×320 display pixels, i is typically set to 20.
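As a concrete illustration of this bookkeeping, the sketch below computes the rectangle of region k in an N*M grid of the template image and the on-screen fill area; it assumes the 2×2 grid, the 480×320 screen and the margin i = 20 mentioned above, and the function names and the row-by-row region order are illustrative only, not prescribed by the text.

```python
# Illustrative sketch only; names and the row-by-row region order are assumptions.

def region_rect(template_w, template_h, n_cols=2, n_rows=2, k=0):
    """Rectangle (x, y, w, h) of region k (0-based, counted row by row) in an N*M grid."""
    w, h = template_w // n_cols, template_h // n_rows
    col, row = k % n_cols, k // n_cols
    return col * w, row * h, w, h

def fill_area_rect(screen_w=480, screen_h=320, margin=20):
    """Fill area of the shooting screen, every edge `margin` pixels from the screen edge."""
    return margin, margin, screen_w - 2 * margin, screen_h - 2 * margin

# After shooting region k, region k + 1 is prompted next:
# next_region = region_rect(tw, th, 2, 2, k + 1)
```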
The method of prompting the user to shoot the next region comprises:
Taking the regions divided above as the reference, the uniform region to be shot is cropped out as the template image, and a fill area is set in the shooting screen of the phone, every edge of the fill area being i pixels from the edge of the display; for a 480×320 display, i is typically set to 20.
According to the fill area, the template image is shrunk so that it exactly fills the fill area, and the pixel transparency of the template image is set to 30 percent, so that when the local image is shot the user can both preview the local image to be shot and compare it with the template image.
The setup of the fill area is shown in Fig. 3; an example of the template-image prompt is shown in Fig. 4.
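A minimal sketch of the semi-transparent prompt described above, assuming OpenCV: the region to be shot is cropped from the template, resized to the fill area and alpha-blended over the live preview frame at the 30% opacity given in the text. Function and variable names are illustrative.

```python
import cv2

def overlay_prompt(preview, template, region, fill_rect, alpha=0.3):
    """Blend the cropped template region (30% opaque) into the fill area of the preview frame."""
    rx, ry, rw, rh = region      # region of the template to shoot next, e.g. from region_rect()
    fx, fy, fw, fh = fill_rect   # fill area on the screen, e.g. from fill_area_rect()
    patch = cv2.resize(template[ry:ry + rh, rx:rx + rw], (fw, fh))
    roi = preview[fy:fy + fh, fx:fx + fw]
    preview[fy:fy + fh, fx:fx + fw] = cv2.addWeighted(patch, alpha, roi, 1 - alpha, 0)
    return preview
```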
[Step 140] Guided by the semi-transparent document-template prompt, the user shoots a local image of the corresponding region of the document.
The method of shooting a local image comprises: adjust the distance of the camera, and press the shutter button when the previewed local image almost coincides with the template image, obtaining the local image.
[Step 150] Judge whether all local images of the document have been shot; if yes, go to step 160, and if some local images remain to be shot, return to step 130. The criterion for completion: for example, if 2×2 regions were selected in step 120 and the image of the fourth region has been shot, all local images have been shot.
[Step 160] Stitch all local images into one complete full image. The method of stitching all local images into one complete full image is as follows:
Since the initial text image was obtained during shooting, in the stitching stage the local images can be feature-matched against the initial text image; from the matched feature-point pairs, the perspective transformation matrix (homography matrix) between each local image and the original text image is computed, each local image is transformed by its homography matrix onto the plane of the original text image, and all transformed local images then lie in the same plane and can be stitched.
In this embodiment, step 160 specifically comprises:
Step 161: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step 1611, detecting the feature keypoints of interest; step 1612, extracting a feature-vector descriptor of the region around each keypoint; step 1613, matching the descriptors by the Euclidean distance between feature points; in step 1613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
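The text does not name a particular keypoint detector or descriptor; the sketch below uses SIFT and a brute-force Euclidean matcher in OpenCV purely as an example of steps 1611-1613, with the ratio threshold 0.75 chosen as an assumption.

```python
import cv2

def match_local_to_template(local_gray, template_gray, ratio=0.75):
    """Detect keypoints, extract descriptors and keep matches with d_nearest / d_second < ratio."""
    sift = cv2.SIFT_create()                 # steps 1611/1612: keypoints + descriptors (example choice)
    kp_local, des_local = sift.detectAndCompute(local_gray, None)
    kp_templ, des_templ = sift.detectAndCompute(template_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)     # step 1613: Euclidean distance between descriptors
    good = []
    for nearest, second in matcher.knnMatch(des_local, des_templ, k=2):
        if nearest.distance / second.distance < ratio:   # nearest-neighbour ratio test
            good.append(nearest)
    return kp_local, kp_templ, good
```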
Step 162: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step 130 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step 163;
Step 163: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
The method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images.
Let src_points be the coordinates of the matched points in the plane of the original text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N. The perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates corresponding to a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates corresponding to a point of src_points.
In the stage of computing the match points, src_points and dst_points are obtained as Cartesian coordinates, of size 2×N for N points; when the perspective transformation matrix H is computed, homogeneous coordinates are used. Homogeneous coordinates describe N-dimensional Cartesian coordinates with N+1 components: 2D homogeneous coordinates, for example, add a new component 1 to the Cartesian coordinates (x, y), giving (x, y, 1); the point (1, 2) in Cartesian coordinates is (1, 2, 1) in homogeneous coordinates.
The 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right].
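Under the above convention (dst_points in the local image, src_points in the template plane), the estimation can be sketched with OpenCV's findHomography, which minimizes this back-projection error; the RANSAC option and the thresholds below are practical additions not specified in the text, and the function names are illustrative.

```python
import cv2
import numpy as np

def estimate_homography(kp_local, kp_templ, good, min_pairs=10):
    """H maps local-image points onto the template plane; returns None if too few pairs (step 162)."""
    if len(good) < min_pairs:                # below the set value: matching failed, re-shoot the region
        return None
    local_pts = np.float32([kp_local[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    templ_pts = np.float32([kp_templ[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(local_pts, templ_pts, cv2.RANSAC, 3.0)
    return H
```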
The method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image. The local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image. All local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the next stitching step is then performed.
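A sketch of this modification and warp, assuming OpenCV and, for illustration, scale = 3: dividing the third row of H by scale multiplies the projected coordinates by scale, so the warped local image lands in a canvas scale times the template size.

```python
import cv2

def warp_to_template_plane(local_img, H, template_shape, scale=3):
    """Warp the local image into a canvas `scale` times the template size using the modified H."""
    Hs = H.copy()
    Hs[2, :] /= scale                        # (h31, h32, h33) -> (h31/scale, h32/scale, h33/scale)
    th, tw = template_shape[:2]
    return cv2.warpPerspective(local_img, Hs, (tw * scale, th * scale))
```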
Step 164: judge whether all local images have been processed; if yes, go to step 165, otherwise return to step 161 and process the next local image;
Step 165: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step 166: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
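A sketch of this post-processing step, assuming the stitched canvas was initialized to zero so that pixels never written by any warped local image are exactly black, and that the enlargement factor matches the scale used when warping; names are illustrative.

```python
import cv2
import numpy as np

def fill_missing_from_template(stitched, template, scale=3):
    """Fill holes / missing corners of the stitched image with pixels of the scale-enlarged template."""
    th, tw = template.shape[:2]
    enlarged = cv2.resize(template, (tw * scale, th * scale))
    enlarged = enlarged[:stitched.shape[0], :stitched.shape[1]]
    missing = np.all(stitched == 0, axis=2)  # assumption: unfilled pixels are still zero
    stitched[missing] = enlarged[missing]
    return stitched
```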
Embodiment two
This embodiment discloses a method for shooting and stitching multiple text images, in which several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen;
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if some local images remain to be shot, return to step S3.
Embodiment three
This embodiment discloses a method for shooting and stitching multiple text images, in which several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen. The distance between every edge of the fill area and the edge of the screen may be 0 or a set pixel value.
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if yes, go to step S6, and if some local images remain to be shot, return to step S3;
Step S6: match each local image against the template image and stitch them into a new text image.
In this embodiment, the stitching method of step S6 comprises:
Step S61: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step S611, detecting the feature keypoints of interest; step S612, extracting a feature-vector descriptor of the region around each keypoint; step S613, matching the descriptors by the Euclidean distance between feature points; in step S613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
Step S62: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step S3 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step S63;
Step S63: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images; let src_points be the coordinates of the matched points in the plane of the template text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N; the perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates of a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates of the corresponding point of src_points;
the 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right];
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image; the local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image; all local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the subsequent stitching is then performed;
Step S64: judge whether all local images have been processed; if yes, go to step S65, otherwise return to step S61 and process the next local image;
Step S65: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step S66: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
In summary, the method for shooting and stitching multiple text images proposed by the invention guarantees that the local images taken completely cover the entire document, so that the final stitched document image has no holes or missing corners. In addition, the shooting method makes the position of each local image within the full document image known, which effectively improves the speed and accuracy of feature matching in the subsequent image stitching. The shooting method of the invention is therefore well suited to document stitching on mobile phones.
The description and application of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and replacements and equivalents of the various components of the embodiments are known to those of ordinary skill in the art. Those skilled in the art will appreciate that the invention can be realized in other forms, structures, arrangements and proportions, and with other assemblies, materials and components, without departing from the spirit or essential characteristics of the invention. Other variations and modifications of the embodiments disclosed herein can be made without departing from the scope and spirit of the invention.

Claims (10)

1. A method for shooting and stitching multiple text images, characterized in that the method comprises the following steps:
Step 110: move the camera far enough from the text that the whole text can just be captured, and take the resulting full text image as the template image;
Step 120: the user divides the full text image into N*M uniform regions;
Step 130: calculate the region to be shot; if the previous shot covered region k, the current shot covers region k+1; the image of this region is used as a semi-transparent template and fills the fill area of the screen, every edge of the fill area being i pixels from the edge of the screen; and prompt the user to shoot;
the method of prompting the user to shoot the next region comprises: taking the regions divided above as the reference, cropping the region to be shot out as the template image, then setting a fill area in the shooting screen of the phone, every edge of the fill area being i pixels from the edge of the display; according to the fill area, the template image is shrunk so that it exactly fills the fill area, and the pixel transparency of the template image is set to semi-transparent, so that when a local image is shot the user can both preview the local image to be shot and compare it with the template image;
Step 140: guided by the semi-transparent document-template prompt, the user shoots a local image of the corresponding region of the document; the method of shooting a local image comprises: adjusting the distance of the camera, and pressing the shutter button when the previewed local image almost coincides with the template image, obtaining the local image;
Step 150: judge whether all local images of the document have been shot; if yes, go to step 160, and if some local images remain to be shot, return to step 130;
Step 160: stitch all local images into one complete full image; this step comprises: performing feature matching between the local images and the template image, computing, from the matched feature-point pairs, the perspective transformation matrix between each local image and the original text image, transforming each local image by its perspective transformation matrix onto the plane of the original text image so that all transformed local images lie in the same plane, and then stitching them; and specifically comprises:
Step 161: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step 1611, detecting the feature keypoints of interest; step 1612, extracting a feature-vector descriptor of the region around each keypoint; step 1613, matching the descriptors by the Euclidean distance between feature points; in step 1613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
Step 162: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step 130 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step 163;
Step 163: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images; let src_points be the coordinates of the matched points in the plane of the template text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N; the perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates of a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates of the corresponding point of src_points;
the 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right];
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image; the local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image; all local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the subsequent stitching is then performed;
Step 164: judge whether all local images have been processed; if yes, go to step 165, otherwise return to step 161 and process the next local image;
Step 165: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step 166: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
2. A method for shooting and stitching multiple text images, characterized in that the method comprises the following steps:
Step 110: move the camera far enough from the text that the whole text can just be captured, and take the resulting full text image as the template image;
Step 120: the user divides the full text image into N*M uniform regions;
Step 130: calculate the region to be shot; if the previous shot covered region k, the current shot covers region k+1; the image of this region is used as a semi-transparent template and fills the fill area of the screen, every edge of the fill area being i pixels from the edge of the screen;
Step 140: guided by the semi-transparent document-template prompt, the user shoots a local image of the corresponding region of the document;
Step 150: judge whether all local images of the document have been shot; if yes, go to step 160, and if some local images remain to be shot, return to step 130;
Step 160: stitch all local images into one complete full image.
3. The method for shooting and stitching multiple text images according to claim 2, characterized in that:
in step 130, the method of prompting the user to shoot the next region comprises:
taking the regions divided above as the reference, cropping the region to be shot out as the template image, then setting a fill area in the shooting screen of the phone, every edge of the fill area being i pixels from the edge of the display;
according to the fill area, the template image is shrunk so that it exactly fills the fill area, and the pixel transparency of the template image is set to semi-transparent, so that when a local image is shot the user can both preview the local image to be shot and compare it with the template image.
4. The method for shooting and stitching multiple text images according to claim 2, characterized in that:
in step 140, the method of shooting a local image comprises: adjusting the distance of the camera, and pressing the shutter button when the previewed local image almost coincides with the template image, obtaining the local image.
5. The method for shooting and stitching multiple text images according to claim 2, characterized in that:
in step 160, the stitching step comprises: performing feature matching between the local images and the template image, computing, from the matched feature-point pairs, the perspective transformation matrix between each local image and the original text image, transforming each local image by its perspective transformation matrix onto the plane of the original text image so that all transformed local images lie in the same plane, and then stitching them.
6. The method for shooting and stitching multiple text images according to claim 5, characterized in that:
step 160 specifically comprises:
Step 161: perform feature matching between a local image that has not yet been processed and the template image, obtaining matched feature-point pairs; the method of feature matching between a local image and the template image comprises: step 1611, detecting the feature keypoints of interest; step 1612, extracting a feature-vector descriptor of the region around each keypoint; step 1613, matching the descriptors by the Euclidean distance between feature points; in step 1613 the matching strategy is nearest-neighbour ratio matching: to find the match of a given feature point of the first image, the two feature points of the second image nearest to it in Euclidean distance are found, and if the distance to the nearest point, d_nearest, divided by the distance to the second-nearest point, d_second, is less than a set threshold, the nearest point is taken as the match, otherwise it is rejected;
Step 162: judge whether the feature matching succeeded; the criterion is whether the number of matched feature-point pairs reaches a set value; if the number is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step 130 to re-shoot the corresponding image; if the number of matched pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step 163;
Step 163: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the matrix, obtaining the transformed image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature-point pairs of the two images, compute the perspective transformation matrix between the planes of the two text images; let src_points be the coordinates of the matched points in the plane of the template text image, of size 2×N, where N is the number of points, and let dst_points be the coordinates of the matched points in the plane of the local image, also of size 2×N; the perspective transformation matrix is the 3×3 matrix
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}, \qquad \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix} \simeq H \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
where (x_i, y_i, 1) are the homogeneous coordinates of a point of dst_points and (x'_i, y'_i, 1) are the homogeneous coordinates of the corresponding point of src_points;
the 3×3 perspective transformation matrix output is the one that minimizes the back-projection error, i.e. minimizes
\sum_i \left[ \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right];
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix H, whose third row (h_{31}, h_{32}, h_{33}) controls the enlargement or reduction of the result, by changing (h_{31}, h_{32}, h_{33}) to (h_{31}/scale, h_{32}/scale, h_{33}/scale), where scale is the magnification of the transformed local image relative to the template image; the local image obtained through the modified perspective transformation then has a resolution scale times that of the original template image; all local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the subsequent stitching is then performed;
Step 164: judge whether all local images have been processed; if yes, go to step 165, otherwise return to step 161 and process the next local image;
Step 165: stitch all the transformed text images together according to their effective regions, obtaining the stitched full image; the method of stitching the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, stitching the images together;
Step 166: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, enlarging the template image by the factor scale and filling the missing regions directly with the pixels of the corresponding regions of the enlarged template image, which guarantees that a complete image is obtained.
7. A method for shooting and stitching multiple text images, characterized in that several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen;
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if some local images remain to be shot, return to step S3.
8. A method for shooting and stitching multiple text images, characterized in that several text images are shot for text-image stitching; the method comprises the following steps:
Step S1: shoot the whole text and take the resulting full text image as the template image;
Step S2: divide the full text image into L regions;
Step S3: calculate the region to be shot and shoot the regions one by one; if the previous shot covered region k, the current shot covers region k+1, with k ≤ L-1; set the image of the corresponding region of the template image to semi-transparent and fill it into the fill area of the screen;
Step S4: guided by the semi-transparent template-image prompt, the user shoots a local image of the corresponding region of the text;
Step S5: judge whether all local images of the document have been shot; if yes, go to step S6, and if some local images remain to be shot, return to step S3;
Step S6: match each local image against the template image and stitch them into a new text image.
9. The method for shooting and stitching multiple text images according to claim 8, characterized in that:
in step S3, the distance between every edge of the fill area and the edge of the screen is a set pixel value.
10. The method for shooting and stitching multiple text images according to claim 8, characterized in that:
the stitching method of step S6 comprises:
Step S61, topography and template image that a width of cloth is not also handled carry out characteristic matching, and it is right to obtain characteristic matching point; Topography comprises with the method that template image carries out characteristic matching: step S611, determine interested feature key points; Step S612, the proper vector descriptor of extraction key point peripheral region; Step S613, the Euclidean distance by unique point mates each proper vector descriptor; Among the step S613, matching strategy adopts arest neighbors ratio coupling: for the Feature Points Matching of two width of cloth images, search with first width of cloth image in the corresponding match point of certain unique point, then in second width of cloth image, find out two unique points nearest with this unique point Euclidean distance, if closest approach apart from d NearstDivided by second near point apart from d Sec ondLess than setting threshold, think that then this closest approach is a match point, otherwise do not receive;
Step S62: judge whether the feature matching has succeeded; the criterion is whether the number of matched feature point pairs reaches a set value; if it is below the set value, the transformation matrix between the images cannot be computed, the matching is judged to have failed, and the process returns to step S3 to shoot the corresponding image again; if the number of matched feature point pairs reaches or exceeds the set value, the matching is judged successful and the process goes to step S63;
Step S63: from the matched features, compute the perspective transformation matrix between the corresponding local image and the template image, and then transform the local image according to the transformation matrix to obtain the transformed local image;
wherein the method of computing the perspective transformation matrix from the matched feature points comprises: from the matched feature point pairs of the two images, computing the perspective transformation matrix between the planes in which the two text images lie; let src_points be the match point coordinates in the plane of the template text image, of size 2xN, where N is the number of points; let dst_points be the match point coordinates in the plane of the local image, of size 2xN; the perspective transformation matrix is the 3x3 matrix $H$ such that

$$[x'_i,\ y'_i,\ 1]^T \sim H\,[x_i,\ y_i,\ 1]^T \quad \text{(equality up to a scale factor)},$$

where $(x_i, y_i, 1)$ are the coordinates of a point of dst_points and $(x'_i, y'_i, 1)$ are the coordinates of a point of src_points;
the output 3x3 perspective transformation matrix is the one that minimizes the back-projection error, i.e. that minimizes

$$\sum_i \left( \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 \right);$$
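A minimal sketch of this computation in Python/OpenCV follows; cv2.findHomography with RANSAC is one common way to obtain a 3x3 matrix minimizing the back-projection error over the inliers, and the function name estimate_homography and the minimum of 4 matches are illustrative assumptions:

    import cv2
    import numpy as np

    def estimate_homography(kp_template, kp_local, good_matches):
        """Compute H mapping local-image coordinates to template coordinates.
        good_matches comes from knnMatch(des_local, des_template), so queryIdx
        indexes the local keypoints and trainIdx the template keypoints."""
        if len(good_matches) < 4:        # S62: too few pairs, H cannot be computed
            return None
        dst_points = np.float32([kp_local[m.queryIdx].pt for m in good_matches])
        src_points = np.float32([kp_template[m.trainIdx].pt for m in good_matches])
        # Note: OpenCV's argument order is (source, destination), so the claim's
        # dst_points (local-image plane) are passed first to obtain H mapping
        # local coordinates onto the template plane.
        H, inliers = cv2.findHomography(dst_points, src_points, cv2.RANSAC, 3.0)
        return H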
the method of obtaining the transformed local image from the transformation matrix comprises: modifying the perspective transformation matrix

$$H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix},$$

whose third row (h31, h32, h33) controls enlargement and reduction; to this end (h31, h32, h33) is changed to (h31/scale, h32/scale, h33/scale), where scale is the magnification factor of the transformed local image relative to the template image; the local image obtained through the perspective transformation then has a resolution scale times that of the original template image; at this point all the local images are transformed into the same coordinate system according to their modified perspective transformation matrices, and the next stitching step is carried out;
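The row-scaling trick and the warp can be sketched as below; this is a minimal, illustrative Python/OpenCV version in which the function name warp_to_mosaic and the use of cv2.warpPerspective are assumptions about one possible realization:

    import cv2
    import numpy as np

    def warp_to_mosaic(local_img, H, template_size, scale):
        """Warp one local image into the template coordinate system enlarged by
        `scale`. template_size is (width, height) of the template image."""
        H_scaled = H.astype(np.float64).copy()
        H_scaled[2, :] /= scale          # (h31, h32, h33) -> (h31/scale, h32/scale, h33/scale)
        out_size = (int(template_size[0] * scale), int(template_size[1] * scale))
        return cv2.warpPerspective(local_img, H_scaled, out_size)

Dividing the third row by scale divides the denominator of the projected coordinates by scale and therefore multiplies the projected coordinates themselves by scale, which is exactly the enlargement described above.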
Step S64: judge whether all the local images have been processed; if yes, go to step S65; otherwise return to step S61 to process the next local image;
Step S65: stitch all the transformed text images together according to their effective regions to obtain the stitched full image; the method of stitching all the transformed local images comprises: after the local images to be stitched have been transformed into the same coordinate system, performing the stitching of the images;
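A minimal sketch of this compositing step in Python/OpenCV is shown below; it assumes that every warped local image shares the same canvas size, that the non-black pixels of each warp form its effective region, and that the function name stitch_warped is hypothetical:

    import cv2
    import numpy as np

    def stitch_warped(warped_images):
        """Composite equally sized warped local images into one stitched full image."""
        mosaic = np.zeros_like(warped_images[0])
        for img in warped_images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            effective = gray > 0         # pixels actually covered by this local image
            mosaic[effective] = img[effective]
        return mosaic

Any pixels still uncovered after compositing are handled by the post-processing of step S66.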
Step S66: post-process the stitched full image; the post-processing of the stitched full image comprises: if the full image stitched from all the local images contains holes or missing corners, the template image is enlarged by a factor of scale and the pixels of the missing regions are filled directly with the corresponding pixels of the enlarged template image; this post-processing guarantees that a complete image is obtained.
CN2010105588685A 2010-11-25 2010-11-25 Method for shooting and matching multiple text images Active CN101976449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105588685A CN101976449B (en) 2010-11-25 2010-11-25 Method for shooting and matching multiple text images

Publications (2)

Publication Number Publication Date
CN101976449A true CN101976449A (en) 2011-02-16
CN101976449B CN101976449B (en) 2012-08-08

Family

ID=43576331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105588685A Active CN101976449B (en) 2010-11-25 2010-11-25 Method for shooting and matching multiple text images

Country Status (1)

Country Link
CN (1) CN101976449B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466701B1 (en) * 1997-09-10 2002-10-15 Ricoh Company, Ltd. System and method for displaying an image indicating a positional relation between partially overlapping images
US6771304B1 (en) * 1999-12-31 2004-08-03 Stmicroelectronics, Inc. Perspective correction device for panoramic digital camera
CN1545060A (en) * 2003-11-17 2004-11-10 甲尚股份有限公司 Method and apparatus for picking image
CN1589050A (en) * 2004-09-23 2005-03-02 美博通信设备(北京)有限公司 Method for shooting and browsing panoramic film using mobile terminal
CN1992811A (en) * 2005-12-30 2007-07-04 摩托罗拉公司 Method and system for displaying adjacent image in the preview window of camera

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194241B (en) * 2011-04-28 2013-07-10 西安交通大学 Internet-based design method of artistic picture splicing system
CN102194241A (en) * 2011-04-28 2011-09-21 西安交通大学 Internet-based design method of artistic picture splicing system
WO2014023231A1 (en) * 2012-08-07 2014-02-13 泰邦泰平科技(北京)有限公司 Wide-view-field ultrahigh-resolution optical imaging system and method
CN105335948A (en) * 2014-08-08 2016-02-17 富士通株式会社 Document image splicing apparatus and method and scanner
CN105335948B (en) * 2014-08-08 2018-06-29 富士通株式会社 Splicing apparatus, method and the scanner of file and picture
CN105407251A (en) * 2014-09-09 2016-03-16 佳能株式会社 Image processing apparatus and image processing method
CN105407251B (en) * 2014-09-09 2018-11-09 佳能株式会社 Image processing apparatus and image processing method
CN105107190A (en) * 2015-09-15 2015-12-02 清华大学 Image collecting and processing system applied to Chinese billiards and image processing method
CN106648571A (en) * 2015-11-03 2017-05-10 百度在线网络技术(北京)有限公司 Application interface correction method and apparatus
CN105827987B (en) * 2016-05-27 2019-07-26 维沃移动通信有限公司 A kind of picture shooting method and mobile terminal
CN105827987A (en) * 2016-05-27 2016-08-03 维沃移动通信有限公司 Picture shooting method and mobile terminal
CN109618092A (en) * 2018-12-03 2019-04-12 广州图匠数据科技有限公司 A kind of splicing photographic method, system and storage medium
CN109618092B (en) * 2018-12-03 2020-11-06 广州图匠数据科技有限公司 Splicing photographing method and system and storage medium
CN110006361A (en) * 2019-03-12 2019-07-12 精诚工科汽车系统有限公司 Part automated detection method and system based on industrial robot
CN110006361B (en) * 2019-03-12 2021-08-03 精诚工科汽车系统有限公司 Automatic part detection method and system based on industrial robot
CN111355889A (en) * 2020-03-12 2020-06-30 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112132148A (en) * 2020-08-26 2020-12-25 长春理工大学光电信息学院 Document scanning method for automatically splicing multiple pictures shot by mobile phone camera
CN112132148B (en) * 2020-08-26 2024-01-30 深圳市米特半导体技术有限公司 Document scanning method based on automatic splicing of multiple pictures shot by mobile phone camera
CN116363262A (en) * 2023-03-31 2023-06-30 北京百度网讯科技有限公司 Image generation method and device and electronic equipment
CN116363262B (en) * 2023-03-31 2024-02-02 北京百度网讯科技有限公司 Image generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN101976449B (en) 2012-08-08

Similar Documents

Publication Publication Date Title
CN101976449B (en) Method for shooting and matching multiple text images
CN102074001B (en) Method and system for stitching text images
CN102013094B (en) Method and system for improving definition of text images
US7386188B2 (en) Merging images to form a panoramic image
US8249390B2 (en) Method for taking panorama mosaic photograph with a portable terminal
US8040386B2 (en) Image processing apparatus, image processing method, program, and recording medium
RU2421814C2 (en) Method to generate composite image
US20050104901A1 (en) System and method for whiteboard scanning to obtain a high resolution image
CN104680501A (en) Image splicing method and device
CN109691080B (en) Image shooting method and device and terminal
CN105744180A (en) Image generating device, electronic device and image generating method
CN104285435A (en) Imaging device and signal correction method
WO2014045689A1 (en) Image processing device, imaging device, program, and image processing method
JP2007201948A (en) Imaging apparatus, image processing method and program
US20020057848A1 (en) Using an electronic camera to build a file containing text
CN104641625A (en) Image processing device, imaging device, image processing method and image processing program
CN102595146B (en) Panoramic image generation method and device
CN102012629B (en) Shooting method for splicing document images
US20150304529A1 (en) Image processing device, imaging device, program, and image processing method
CN104871058A (en) Image processing device, imaging device, image processing method, and image processing program
CN105103534A (en) Image capturing apparatus, calibration method, program, and recording medium
CN104885440A (en) Image processing device, imaging device, image processing method, and image processing program
WO2014155813A1 (en) Image processing device, imaging device, image processing method and image processing program
CN105139340A (en) Method and device for splicing panoramic photos
CN113141450A (en) Shooting method, shooting device, electronic equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 200433, Shanghai, Yangpu District Fudan hi tech Park Road, No. 335, building 11011A room

Patentee after: Shanghai hehe Information Technology Co., Ltd

Address before: 200433, Shanghai, Yangpu District Fudan hi tech Park Road, No. 335, building 11011A room

Patentee before: INTSIG INFORMATION Co.,Ltd.

CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: Room 1105-1123, No. 1256, 1258, Wanrong Road, Jing'an District, Shanghai, 200436

Patentee after: Shanghai hehe Information Technology Co., Ltd

Address before: 200433, Shanghai, Yangpu District Fudan hi tech Park Road, No. 335, building 11011A room

Patentee before: Shanghai hehe Information Technology Co., Ltd