WO2009120073A2 - A dynamically calibrated self referenced three dimensional structured light scanner - Google Patents

A dynamically calibrated self referenced three dimensional structured light scanner

Info

Publication number
WO2009120073A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
points
positions
scan
Prior art date
Application number
PCT/NL2009/050140
Other languages
French (fr)
Other versions
WO2009120073A3 (en)
Inventor
Bantwal Janardrian Mohandas Rao
Original Assignee
Bantwal Janardrian Mohandas Rao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bantwal Janardrian Mohandas Rao filed Critical Bantwal Janardrian Mohandas Rao
Publication of WO2009120073A2 publication Critical patent/WO2009120073A2/en
Publication of WO2009120073A3 publication Critical patent/WO2009120073A3/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518Projection by scanning of the object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the invention relates to three dimensional scanning and measuring systems that generate 3D geometrical data sets representing objects and/or scenes using a structured light source.
  • An example of 3D scanning is described in an article by Lyubomir Zagorchev and A. Ardeshir Goshtasby (Zagorchev et al.), titled "A paint-brush laser range scanner", published in Computer Vision and Image Understanding, Volume 101, Issue 2 (February 2006), pages 65-86, ISSN 1077-3142. 3D scanning is also described in an article by Bouguet, titled "3D Photography On Your Desk", published in Proceedings of the International Conference on Computer Vision, Bombay, India, Jan 1998, pp. 43-50.
  • 3D scanning is the technique of converting the geometrical shape or form of a tangible object or scene into a data set of points. Each point may represent an actual point, in 3 dimensional space, of the surface of the object/scene that was scanned.
  • 3D scanning offers technical solutions to many different industries and markets for many different reasons. Some of the better known applications where 3D scanning offers solutions are dentistry, ear mold making for hearing aids, the diamond industry, movie and game production, heritage, and materials production (rapid prototyping, CAD, CAM, CNC). The range of applications is steadily increasing as the use of computers becomes more common, the power of PCs increases, and the demand for better and faster means to capture, store and manipulate real world data grows. Many 3D scanning techniques and methods exist.
  • 3D scanners are typically complex to use and require costly instruments and equipment. Their use has usually been reserved for specialized applications. In recent years this has started to change with the introduction of 'low cost 3D scanning systems'. These systems tend to employ more robust scanning methods that rely less on costly, specialized instruments and hardware.
  • One popular class or group of 3D scanning techniques is the "active non-contact" type. "Active" means that some form of encoded, structured or non-coded energy, such as light, is emitted from a source to reflect off of an object in order to directly or indirectly understand something about the object's 3D shape.
  • 'Structured light' is one type of active non-contact 3D scanning technology that uses a predefined light pattern such as a projected line or stripe.
  • "Non-contact" means that the main scanning device does not need to touch the object that is being scanned.
  • the active non-contact type 3D scanners that use structured light have received widespread interest over the years due to their inherently high scanning speeds and wide scanning range. In particular, research has been directed towards developing very low cost systems. Other properties such as scanning speed, range, robustness, accuracy, sensitivity and mobility are considered equally important elements.
  • Active non-contact type 3D scanners are most frequently based on the use of triangulation to derive the 3 dimensional position of a point on an object's surface.
  • This is a well-known art. This can be achieved by the projection of a pattern such as a point onto the object's surface.
  • a camera, whose imaging area has in most cases been calibrated, views the reflection of the point on the object's surface.
  • the camera's geometrical position is usually fixed and the geometrical orientation of the point projection device in relation to the camera's imaging area is known or at least well approximated. With these parameters known and set, the position of the reflected point on the object's surface that is being viewed by the camera is easily derived using triangulation.
  • the angle and distance between the camera and the projection device are known. This is sufficient to calculate the distance of the reflected point to the camera using basic trigonometry. Moving the point projection device one increment to another known position yields another point position on the object's surface. Repeating this process for new and known positions of the projection device will yield a dense data set of 3D points that represent the object's surface.
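  • As a minimal sketch of this triangulation (not taken from the patent text; the coordinate frame and angle conventions are illustrative assumptions), the depth of the illuminated point follows from the known baseline and the two ray angles:

```python
# Illustrative single-point laser triangulation.  Camera at the origin looking
# along +z; projector at (baseline, 0) on the x axis, aiming back across the
# optical axis.  All names and conventions are assumptions for this sketch.
import math

def triangulate_point(baseline, projector_angle_rad, camera_angle_rad):
    """Return (x, z) of the illuminated surface point.

    baseline            -- camera-to-projector distance along the x axis
    projector_angle_rad -- angle of the projected ray, measured from the baseline
    camera_angle_rad    -- angle of the camera ray, measured from the optical (z) axis
    """
    tp = math.tan(projector_angle_rad)
    tc = math.tan(camera_angle_rad)
    z = baseline * tp / (1.0 + tc * tp)   # depth along the optical axis
    x = z * tc                            # lateral offset of the point
    return x, z

# Example: 0.5 m baseline, projector ray at 60 degrees, point seen 5 degrees off-axis.
print(triangulate_point(0.5, math.radians(60), math.radians(5)))
```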
  • the projection device is held at a known angle or between known angle margins in relation to the camera and either the object or the projection device is translated perpendicular to the line projection plane thereby sweeping the projected line over the object's surface.
  • most or all of the surface area of the object is illuminated by the projection device's pattern.
  • the greater the angle of the projection device in relation to the camera (while the reflected pattern is still clearly visible on the object's surface), the greater the scanning accuracy will be, as a wider section of the imaging element array is employed.
  • the greater the angle, the more chance that the projection device's beam path or plane and/or the camera's line of sight will be obstructed or occluded.
  • the projected pattern cannot reflect off of areas on the object that it cannot reach at certain angles, nor can the camera view areas where the pattern may reflect off of the object while being occluded by the object's own shape. Occlusion is a common problem with these types of scanners and many methods have been devised to reduce it. In these methods, for instance, a second projection device or two or more cameras are employed to reduce this limitation by gaining multiple views of the scanning scene. It should be evident that these approaches increase complexity and cost, to say the least. And even if these additional instruments/methods are employed, the angle(s) of the projected pattern(s) are usually fixed. More precisely, the projection device's scanning angle does not dynamically and effectively follow the ideal scanning angle for a given object geometry.
  • Zagorchev et al describe an example wherein two wire frames placed next to the scanned object are used.
  • Other possible reference structures include, for instance, a cube type cage structure made of thin bars or wires, a flat surface or two flat surfaces placed at known angles behind or to the sides of the object. In any case the reference surface is calibrated in the camera's image.
  • This reference surface must be able to be illuminated by the projection device, as must the object that is being scanned. A minimal area of the reference surface must always be in view and it must remain fixed in position, just like the object.
  • the projected pattern is swept over the object and reference surface. Each image will show the deformed projection device's pattern over the object's surface as well as on the reference surface.
  • the 3D position of each point of the object's surface can be derived since the geometry and coordinate position of the reference surface are known in relation to the camera. More specifically, if at least three separate points are located on the reference, this is sufficient to determine the projected pattern plane and hence its orientation in relation to the object.
  • provided the user or operator sweeps the projected pattern over the object in a controlled and docile fashion, and within a particular orientation in relation to the pre-calibrated surface, it is now possible to significantly reduce occlusion.
  • the operator can focus on areas that are susceptible to occlusion and adjust the orientation of the projected pattern to illuminate those areas by hand within the permissible scanning orientation that is determined by the reference surface.
  • the user can now, in a more dynamic way and within the restricted area determined by the reference surface, sweep the projection pattern to follow the best possible scanning that the object's geometry warrants.
  • the present invention aims to address at least part of the problems previously described.
  • a method as set out in claim 1 is provided.
  • the scanning of the object comprises a reference scan and a measurement scan of the object.
  • the reference scan is performed under calibrated conditions to determine 3D positions of points on the object that are visible to an image sensor.
  • the resulting 3D positions of object points are used to calibrate the geometry of structured light that is applied to the object during the measurement scan.
  • the structured light may be a plane of light in 3D space for example, which results in a one-dimensional line where the object intersects the plane.
  • other types of structured light may be used, such as a curved two dimensional surface in 3D space, a set of parallel surfaces, a grid of rows and columns of surfaces, a set of discrete lines etc.
  • the intersection of the structured light with the object gives rise to selection of points in the two dimensional image captured by the image sensor.
  • the points may be selected in the sense that the structured light provides light only in a spatial light structure so that the selected points are selectively illuminated points, but alternatively points may be selected by supplying light on mutually opposite sides of the selected points but not at the selected points, for example by supplying bands of light on the opposite sides or supplying light everywhere except in a spatial light structure, so that the selected points are selectively not illuminated points.
  • points at the edge of a broad lighted band may be selected points.
  • the 3D position is known.
  • geometrical properties of the light structure during the measurement scan are determined.
  • the position and orientation of that surface relative to the image sensor can be determined from the 3D positions of the points on the object that are known from the reference scan.
  • Three points on the object may be sufficient to determine a position and orientation, but fewer points may suffice if changes in position and orientation are partly constrained, and more points may be used, for example in a least-squares error estimation of the position and orientation.
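  • As an illustrative sketch of such a least-squares estimate (assuming numpy; the helper name is not part of the patent text), the orientation of the light plane can be recovered from three or more object points with known 3D coordinates by fitting a plane to them:

```python
# Least-squares plane fit to N >= 3 known 3D points (illustrative helper).
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) array of points.

    Returns (centroid, unit_normal); the plane is {x : dot(unit_normal, x - centroid) == 0}.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the least-squares plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example with four roughly coplanar points.
centroid, normal = fit_plane([[0, 0, 1.00], [1, 0, 1.01], [0, 1, 0.99], [1, 1, 1.00]])
print(centroid, normal)
```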
  • the structured light may be manually positioned during said step of applying the structured light.
  • A human operator may manually swing the light structure through successive orientations and/or positions to realize successive measurement scans.
  • An apparatus may be provided that comprises an image sensor, one or more projection devices for projecting a light structure and a computer.
  • the computer may be configured, by means of a computer program for example, to receive captured images of the object from the image sensor and to compute 3D positions associated with image points in the reference scan, to compute geometric parameters of light structures during the measurement scan using these 3D positions and to compute further 3D positions from the measurement scan, using these geometric parameters.
  • the reference scan itself may be performed using a light structure with calibrated properties, e.g. with known orientation and position relative to the image sensor.
  • the same light structure projecting device may be used as in the measurement scan, but mounted in a holder that provides for controlled calibrated conditions during the reference scan.
  • another light structure projecting device may be used.
  • the relative orientation and position of the camera and the object are the same during the reference scan and the measurement scan.
  • a controlled change of this relative orientation and/or position may be used between the reference scan and the measurement scan (e.g. a rotation of the object), with the 3D position result of the reference scan being determined from the position during the reference scan and the controlled change of relative orientation and/or position.
  • the reference scan may be a pre-scan, performed before the actual measurement scan.
  • a post-scan may be used, or a scan that is simultaneous with the measurement scan, using light of a different wavelength for example.
  • the results of measurement scan, once calibrated may subsequently be used as results of a reference scan to calibrate another measurement scan.
  • a mirror may be used to make parts of the object visible in an image captured by the image sensor that would not otherwise be visible.
  • the use of a reference scan and a measurement scan may serve to determine 3D positions of points in portions of the image that view the object via the mirror, directly, or via a further mirror. Once these 3D positions are known, positions from any combination of portions of the image can be used to calibrate geometric properties of the light structure during the measurement scan.
  • a plurality of image sensors may be used to view the object from different directions. In this case too, any combination of images from different image sensors can be used to calibrate geometric properties of the light structure during the measurement scan.
  • Fig. 1 illustrates the principle geometry of the principal materials and apparatus layout.
  • Fig. 2A illustrates the principle geometry of the principal materials and apparatus layout from a side view.
  • Fig. 2B illustrates the principle geometry of the principal materials and apparatus layout as in Fig 2A but drawn from a frontal view
  • Fig. 3 illustrates another type of arrangement of materials for pre scanning to create reference geometry.
  • Fig. 4 illustrates another arrangement of materials for pre scanning to create reference geometry as in Fig 2A-B.
  • Fig. 5A shows a sample image of a scanning scene.
  • Fig. 5B-C show a projected light plane curvature.
  • Fig. 5D shows connected selected points.
  • Fig. 6A-C shows sample video camera images.
  • Fig. 7A-B illustrates a geometry with a turntable.
  • Fig. 8A shows a sample image.
  • Fig. 8B shows a processed image.

Description of exemplary embodiments
  • Methods and devices are disclosed for modeling 3D objects or scenes or otherwise converting their geometrical shape into a data set of 3D points that represent the object's 3D surface.
  • These devices are commonly referred to as 3D scanners, 3D range scanners or finders, 3D object modelers, 3D digitizers and 3D distance measurement devices. It will be evident to those skilled in the art of 3D scanning that there are many elements involved in the process. This description will adhere to the core aspects and principles involved without going into great detail about well-known methods, processes or commonly used instruments. This is done in order to maintain clarity.
  • Fig. 1 illustrates the principle geometry of the principal materials and apparatus layout.
  • This layout includes a projection device (100) that projects a light plane (105) onto an object/scene (103) that is to be scanned. The reflection of the light plane on the subject (103) results in a curvature (109) that follows the surface of the object (103) for that section.
  • An imaging device such as a video camera (101) views the scanning scene.
  • the 2D image (104) that is projected onto the imaging element pertaining to the scene is shown, which illustrates that the relation between the actual and projected scene is the same.
  • the camera (101) is interfaced with a personal computer (102) in which images from the camera (101) are relayed to the personal computer (102).
  • the computer (102) displays the image of the scene (108) from the camera (101) on a display device (106).
  • the image displayed (108) on the display device (106) also includes the pre scanned data (107) overlaid onto the object in the image.
  • Fig. 2A illustrates the principle geometry of the principal materials and apparatus layout from a side view.
  • This particular layout includes a projection device (200), a video camera (201), the subject/object/scene (202) that is to be scanned, the light ray plane (203) seen from a viewpoint perpendicular to the plane surface, and the reflection of the light ray plane on the subject (204).
  • Fig. 2B illustrates the principle geometry of the principal materials and apparatus layout as in Fig 2A but drawn from a frontal view, i.e. the general viewing direction of the camera (209).
  • This layout includes the projection device (205), the video camera (209), the subject/object/scene (208) that is to be scanned and the light ray plane (206) which reflects from the subject (208) as a curvature (207).
  • the scanning of the object is divided into two main scan sessions.
  • First a pre-scan is made of the object/subject/scene in order to gain a useful approximation of the object's overall geometry.
  • This pre-scan data of the subject is then geometrically related to the imaging element's image of the same subject.
  • Second, a subsequent and final scanning session is performed allowing for dynamically calibrated scanning of the object or scene.
  • the final scanning process uses the pre scan data as a reference of known geometry in order to understand the pose of the light plane and thereby calculate the 3D position of registered points of the object.
  • Fig. 1, 2A and 2B display the arrangement or layout of these components for the "final" scanning approach.
  • the present invention employs a common projection device such as a laser line module in conjunction with a calibrated image- sensing element such as a video camera.
  • a known well-defined mathematical relation exists between respective 2D points (e.g. pixel positions) in the image obtained by the video camera (209) and 3D positions on the light structure (for example light plane) projected by the projection device.
  • the well-defined mathematical relation follows for example from a mathematical description of the collection of 3D positions on the light structure (e.g. a plane) and the projection geometry of the camera.
  • the projection geometry of the camera mathematically defines for each point in the image the 3D ray path that ends at that point.
  • the intersection of the ray path of a point in the image and the light structure defines the 3D position corresponding to the point in the image.
  • when the image content at a position in the image shows a point where the light structure lights up the object,
  • the corresponding 3D position is at the intersection of the ray path from the position in the image and the light structure.
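  • As a minimal sketch of this pixel-ray and light-plane intersection (assuming numpy, a simple pinhole model and an illustrative intrinsic matrix K, none of which are specified in the patent text):

```python
# Back-project a pixel to a ray and intersect it with a known light plane.
import numpy as np

def pixel_ray(K, pixel):
    """Back-project pixel (u, v) to a unit ray direction in camera coordinates."""
    u, v = pixel
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t * direction with the plane through plane_point
    having normal plane_normal.  Returns None if the ray is (nearly) parallel."""
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None
    t = float(np.dot(plane_normal, plane_point - origin)) / denom
    return origin + t * direction

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
ray = pixel_ray(K, (350, 250))
point_3d = intersect_ray_plane(np.zeros(3), ray,
                               plane_point=np.array([0.0, 0.0, 1.0]),
                               plane_normal=np.array([0.3, 0.0, -1.0]))
print(point_3d)   # 3D position of the lit object point for that pixel
```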
  • the 3D positions may be similarly determined, provided that the parameters of the light structure are known at the time of the final scan.
  • the parameters of the light structure at that time may be determined if the 3D positions of a sufficient number of points on the light structure are known.
  • FIG. 3 illustrates another type of arrangement of materials for pre scanning to create reference geometry, which includes the addition of a platform (302) onto which the projection device (301) is attached.
  • the platform may be user operated (303).
  • the projection device projects a light plane (304), which is reflected off of a subject/object/scene (305).
  • the reflection of the light plane is a curvature (306) that follows the surface geometry of that particular section of the object (305).
  • This scanning scene (305) is viewed by a video camera (300).
  • Fig. 3 displays a pre-scan layout and includes, in addition to the previously mentioned layout in Fig 1, 2A and 2B, a tripod or platform onto which the projection device is attached.
  • This layout illustrates one possible pre scanning configuration.
  • the projection device's pattern may be swept across the surface of the object that is to be scanned using the tripod.
  • This tripod functions as a sturdy base in order to permit a docile and controlled movement of the projection device, in which movement is constrained to 1 degree of freedom.
  • the pose of the projection device in relation to the camera is known allowing triangulation of the actual point positions to be calculated.
  • a labor intensive yet effective option to create the pre scan geometry would be to measure three non-collinear 3D point positions of any flat planar surface on the object that is in the viewable area of the imager, to create a reference geometry surface.
  • the depth measurement of each point must be perpendicular to the imaging element's surface plane.
  • Several of the reference geometry surfaces must be made. Reconstructed reference geometry surfaces based on these measured points should be in dissimilar planes. The more point groups measured, and thereby surfaces created, the more effective the employment of this pre scan geometry will be.
  • Objects that possess few surface irregularities and are close to parallel to the imager's imaging element surface may benefit from the introduction of reference objects that are placed into the pre scanning scene. These may be placed on the object and/or to its sides.
  • the reference objects may be irregularly positioned or have irregular shape. This will enhance the employment value of the pre scan geometry. They can be marked using image processing techniques and later be deleted from the final scan. They may also be deleted from the scan using the recovered surrounding geometry as the reference geometry in order to derive the pose of the light plane in areas previously obstructed from view by the reference object.
  • the reference objects may be, but not limited to, such things as string, rope, poles, rings and even clay made shapes.
  • the pre scanning is performed by sweeping the light plane from the projection device over the object, in which the pose of the projection device is known as it is constrained to only 1 degree of freedom of motion.
  • the points of the light plane curvature may then be used, in conjunction with triangulation, to determine the actual 3D point positions on the curvature. This process is repeated for all images of the sweeping light plane on the object in order to create the pre scan surface geometry.
  • a pre scan may also be achieved by using reference objects of known geometry and orientation with respect to the viewing imager, which is illustrated in Fig 4. These objects may be placed at the sides, behind, or in front of the object to be scanned. In the latter case the reference object(s) may actually obstruct some portion of the view of the subject, as long as sufficient area of the subject is still in clear view.
  • the reference objects may be, but not limited to, such things as strings, ropes, poles, rings, panels and even clay that has been shaped in a desired form.
  • Fig. 4 illustrates same arrangement of materials for pre scanning to create reference geometry as in Fig 2A-B but with the addition of reference objects (404).
  • a projection device (400) projects a light plane (403) onto an object (402) and the reference objects (404), which is viewed by a video camera (401).
  • the light plane (403) that is reflected on the object (402) produces a curvature (408), which follows the object (402) surface geometry for that section.
  • the light plane (403) is also reflected by each of the reference objects (404), which produce light spots (405), (406) and (407).
  • the reference objects (404) are in known positions in relation to the camera (401).
  • Fig 4 displays the layout of a preferred embodiment that is the same as the layout in Fig 1, 2A and 2B but with the addition of three pole- or rod-shaped objects.
  • the surface of the poles reflects light well.
  • Two of the poles are placed near to and towards the sides of the object but still within the imager's field of view.
  • the 3rd pole is placed in the center of the imager's field of view.
  • the reference objects may be placed in various different positions as long as their orientation, position and shape are known.
  • the poles may be in parallel alignment along their length, although this is not necessary.
  • the relative positions of the outer placed poles are in a non-linear setting and may be for instance at an angle of 90 degrees separation around the center pole.
  • the layout is known and the camera has been calibrated in which the shape and dimensions as well as the positions of the poles relative to the camera are known.
  • a projection device such as a laser emitting line module is used to sweep a light ray plane at various angles but approximately perpendicular to the lengths of the poles over the object and poles while the imager views this.
  • the sweeping direction of the laser line should approximate a motion perpendicular to the lengths of the poles or motion parallel to the length axis of the poles.
  • the laser plane may be approximately perpendicular to the length axis of the poles.
  • Each image from the imager is examined to isolate and derive the curvature points in two dimensional space of the light ray plane reflection on the subject and reference objects.
  • the center spot positions of the light ray plane reflection on each of the poles are used to derive the pose of the laser line in relation to the imager. This may be realized by connecting the spots of the three poles to form a plane. The position and orientation of this plane are the same as those of the laser line's light ray plane.
  • This plane pose is known because the 3D spot or point position for each pole is known. Mathematical relations that express plane and point intersection may be used. With this pose now known, it is possible to triangulate the curvature points of the light plane on the object.
  • the reference objects may be extracted from this pre scan geometry by image processing techniques that were performed beforehand. This may be accomplished by storing an image with only the subject and another image with the addition of the reference objects. Sufficient contrast between subject and reference object should exist. To further increase the separation integrity, the user or operator may select areas in the image to exclude after pre scanning.
  • the reference geometry may be removed from the scene after the pre scan is made.
  • a pre- scan is made of an object or scene that is to be scanned.
  • This may even be achieved by using a framework of known geometry to derive the pose of the projection device's ray plane and thereby triangulate the positions of the object surface points, as explained in the previous section of this document.
  • this allows the subsequent final scan to proceed without any reliance on separate reference geometry. Freeing this restriction allows almost complete scanning flexibility as most any practical sweeping direction and angle may be performed. This allows an optimal registration of points relating to the object's surface to be performed.
  • the criterion for the pre scan data is to use a method that best approximates the surface geometry of the object.
  • the projection device may sweep its projected pattern over the object at most any angle and from most any direction without having to recalibrate the system.
  • the pre-scan acts as a surface of known geometry and allows the orientation, position or otherwise pose of the projected light plane that illuminates the object curvature to be derived.
  • the projection device may be handheld and the operator is permitted to focus on areas of interest that may require particular scanning angles and directions to be performed in order to permit effective illumination of the projected pattern on the object and/or achieve optimal scanning angles and poses in relation to the surface geometry.
  • the operator's focus may be to reduce occlusion of normally hidden or occluded areas of the object and/or improve on the quality of the scanned data.
  • the process may be repeated for same points multiple times in order to achieve increasingly better approximations of the actual point positions.
  • the scanning of object surfaces at different angles will allow for these 3D positions of points that are related to the actual object's surface to be more closely achieved.
  • the filtering approach for same points may be based on weighted averaging and/or selection of median points, as well as the validation of each point based on the regularity of surrounding point positions and/or the surface slope in those areas. Areas of the pre scan data where the slope between points is more perpendicular, or less parallel, to the light ray plane are prioritized point regions, as these positions are more closely related to the actual object geometry.
  • One preferred method to achieve the pre- scan employs a common laser line module that has been attached to a camera tripod or other platform that can fix its position.
  • the projection device is set at an ample but practical distance from the object that is to be scanned.
  • the projection device may be actuated by means of a motor in order to realize controlled and speed-consistent motion, but this is not absolutely required. An operator may also achieve the same.
  • a tripod may be used that leaves one degree of freedom for manual or motorized scanning.
  • the projection device produces an incandescent based light plane or laser based light plane. This light plane strikes and illuminates the object producing a lighted contour over the object.
  • the degree of curvature deformation depends on the angular pose of the light plane in relation to a viewing position of the object.
  • the light plane is typically set to project a horizontal plane of light onto an object.
  • the platform allows radial movement or 1 degree of freedom in which the projection device can be rotated such that the horizontal plane of light can sweep vertically over the object from top to bottom.
  • the viewing position displays the curvature deformation as it follows the surface shape of the object.
  • a video camera is positioned to view the object, which is maximized in the imaging element's field of view, or an area of interest.
  • the projection device and camera may be aligned for example so that the camera's imaging center is in the same plane as the projection device's projection center and perpendicular to the projection device's horizontal ray plane.
  • the camera may be calibrated to determine intrinsic properties of its lens in order to compensate for lens distortion of the viewed scene. Further calibration may be performed to determine extrinsic properties of the camera as well as the pose of the projection device ray plane in relation to the camera. This may be achieved using an object of known shape, size and position.
  • the light plane stemming from the projection device is cast onto the calibration object.
  • the offset from a known position on the calibration object is measured in order to determine the pose of the light plane.
  • the calibration object is then removed from the scene.
  • a reference image frame without the light plane contour on the object may be made by the camera and stored in computer memory to be used to isolate the illuminated contour on the object in subsequent images.
  • the reference frame image is subtracted from images of the projection device's ray plane reflection on the object. Only those pixels that have different values will show up in the resultant data. These pixels will be those of the contour line, apart from camera image noise.
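  • A minimal sketch of this reference-frame subtraction (assuming numpy and 8-bit grayscale frames; the threshold value is an illustrative assumption):

```python
# Isolate the projected contour by subtracting a stored reference frame that
# was captured without the light plane.
import numpy as np

def extract_contour_mask(frame, reference, threshold=30):
    """Return a boolean mask of pixels that brightened noticeably relative to
    the reference image, i.e. the pixels of the projected contour line."""
    diff = frame.astype(np.int16) - reference.astype(np.int16)
    return diff > threshold        # values below the threshold are treated as noise

def contour_pixels(mask):
    """Return (row, col) coordinates of the masked contour pixels."""
    rows, cols = np.nonzero(mask)
    return list(zip(rows.tolist(), cols.tolist()))
```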
  • the light plane is cast onto the top portion of the object while the camera views this scene and the reflected contour.
  • the radial position of the projection device in relation to the camera, which is known, is marked.
  • the projection device is then positioned to cast a plane of light onto the bottom portion of the object. This is marked again and the amount of angular rotation that the projection device rotated is logged.
  • the projection device is then set back in its original position. While the camera views the scene, the pre scan is now made by allowing the projection device to sweep the light line over the object from top mark to bottom mark in a docile and speed-consistent fashion. Each camera image is then processed to extract the contour line and determine the position of each contour line point in each frame.
  • since the angular pose of the projection device is known between the top mark setting and the bottom mark setting in the image frames, it is now possible to estimate the angular positions or poses of the projection device within the rest of the sequential image frames.
  • the margin of error will be low as long as the speed at which the projection device was rotated was consistent.
  • With the angular position known in each frame, it is now possible to triangulate the point position in each image frame and build the 3D geometry. It may also be of use to make multiple passes of the sweeping laser over the object, in which the resulting geometry for each pass is averaged together in order to minimize error.
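  • A small sketch of such a per-frame angle estimate (function and parameter names are assumptions; the constant-speed sweep is the stated premise):

```python
# Estimate the projector angle for every frame of the pre-scan sweep by linear
# interpolation between the marked top and bottom angular positions.
def frame_angles(angle_top_deg, angle_bottom_deg, n_frames):
    """Return one estimated projector angle per frame, assuming constant rotation speed."""
    if n_frames < 2:
        return [angle_top_deg]
    step = (angle_bottom_deg - angle_top_deg) / (n_frames - 1)
    return [angle_top_deg + i * step for i in range(n_frames)]

# Example: sweep from 35 to 55 degrees captured over 200 frames.
angles = frame_angles(35.0, 55.0, 200)
print(angles[0], angles[100], angles[-1])
```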
  • the pre scan geometry will be incomplete having holes in areas that did not allow illumination by the projection device or were not in the camera's line of sight.
  • Estimates and well-approximated assumptions can also be made about the expected accuracy of the scanned surface areas. These estimates are based on the assumption that the angular pose of the light plane for particular geometry will yield better results than other areas of the geometry. Geometry that runs close to parallel to the light plane may have less accuracy. This information may later be used in the subsequent and final scan session to select that pre scan data to employ which best correlates with the estimated geometry of the object.
  • Fig. 5A displays a sample image of a scanning scene with an overlaid pre-scan reference geometry (501) and a projected light plane curvature (502) of the light plane on the object's imaged surface (500).
  • Fig. 5B displays the projected light plane curvature (504) extracted from the image including 3 selected points (503) to determine the pose of the light plane on the object.
  • Fig. 5C displays the same as in Fig 5B with the 3 selected points (p1, p2 and p3) of the curvature (504) connected to form a triangle (505).
  • Fig. 5D displays only the connected selected points (p1, p2 and p3), which together form a plane (506) in 3D space with a calculated directional normal (N1).
  • the direction in 3D space of the normal (N1) is directly perpendicular to the projected light plane, and thereby defines its pose.
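  • A minimal sketch (assuming numpy) of forming this plane through p1, p2 and p3 and computing the directional normal N1 with a cross product:

```python
# Plane normal from three non-collinear 3D points.
import numpy as np

def plane_normal(p1, p2, p3):
    """Return the unit normal of the plane spanned by p1, p2 and p3."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

print(plane_normal([0, 0, 1.0], [1, 0, 1.2], [0, 1, 0.9]))
```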
  • Fig 6a shows an image of a scanning scene containing an object that is to be scanned.
  • Fig 6b shows the pre-scan data of the object, which has been overlaid or otherwise related to the position and pose of the object in the camera's image of the scene.
  • Fig 6c shows a curvature over the object's surface from a light ray plane.
  • the pre scan data may now serve as the reference of known geometry in order to dynamically determine the pose of the light plane.
  • the light plane may be swept over the object from various directions and angles, illuminating all visible areas with a curvature that follows the shape of the object's surface.
  • the sweeping of the light plane is carried out in a docile manner in order to prevent smearing or breaking of the image due to the finite frame speed of the sensing element.
  • the images of each frame are examined and the pose of the light plane is derived allowing for the data points to be triangulated in order to calculate their 3D positions.
  • the process may be illustrated in more detail as follows:
  • the visible intersection of the reflected light plane with the object is now related to the pre scan geometry.
  • since the points that make up the pre scan geometry are typically not as dense as those of the imaging element for all areas, the intersecting point positions with the pre scan geometry may be interpolated.
  • intersection of the light plane with the pre scan geometry surfaces serves to dynamically calibrate or otherwise derive the pose and position of the light plane, i.e. to calculate the 3D pose of the laser.
  • This information will serve to allow triangulation of the 3D point coordinates of the object's surface by intersecting the light plane with the projecting rays.
  • the requirement to define a 3D plane and constrain the pose of the projected light plane is that at least three intersecting points of the plane in 3D space are known. In this case, many points on the curvature of the light plane on the object are available to choose from in order to calculate the pose.
  • the superimposed curvature of the light plane reflection on the pre scan may be made up of many camera pixel points. Picking the farthest-separated or otherwise least linearly related points will allow, in conjunction with the overlaid pre scan 3D geometry, the plane and pose of the light ray to be derived. Hence, all degrees of freedom of the projection device's plane pose will have been constrained. To minimize error, points could be selected on additional criteria, such as the expected accuracy of certain points in the pre scan geometry.
  • Fig 5A displays a light plane striking an object, resulting in an illuminated line curvature that follows the geometry of the object for that section of its surface. Also included are three points selected on the curvature that are least linearly dependent. The x, y (defined at the imaging element array) and z (depth) positions are known since these points are also on the pre scan surface geometry. The pose of the light plane is now also known by determining the angular rotations between the selected points. With the angles known, it is now possible to calculate the actual curvature profile through triangulation for all points on the curvature.
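  • One illustrative way (not prescribed by the patent; helper names are assumptions) to pick three well-separated, least-collinear curvature points is to take the two points that are farthest apart and then the point farthest from the line through them:

```python
# Select three well-spread, non-collinear points from the detected curvature.
import numpy as np

def pick_three_points(points):
    pts = np.asarray(points, dtype=float)                      # (N, 3) curvature points
    # Farthest pair (O(N^2), which is fine for a single contour line).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    # Third point: maximal distance from the line through pts[i] and pts[j],
    # so the spanned triangle is as far from degenerate as possible.
    line = pts[j] - pts[i]
    line /= np.linalg.norm(line)
    rel = pts - pts[i]
    dist_to_line = np.linalg.norm(rel - np.outer(rel @ line, line), axis=1)
    k = int(np.argmax(dist_to_line))
    return pts[i], pts[j], pts[k]
```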
  • the points in the image may now be triangulated to gain the actual 3D point positions of all the points of the reflection curvature on the object.
  • new surface points on the object can now be calculated as well as the possible adjustment or corrections of point positions in the pre scan geometry.
  • This data is stored in computer memory and will later be used after all images have been processed to build a surface model of the object that was scanned.
  • the superimposing of the pre-scan geometry onto the scanning scene image for user viewing is not required for processing. However it does provide the user with insight on what areas need to receive scanning focus as well as the progress of the scanning. What is preferred is that the pre scan points are directly or indirectly related to the image pixel positions in order to perform the process.
  • a computer graphic user interface may be provided that is configured to output measurement graphics and/or sound signals to assist the user through the manual process.
  • the graphics may indicate the current position and orientation of the laser plane as well as indicate through visual and/or audio signals when the user is out of scanning range.
  • the detection method of the points that make up the curvature reflected by the light plane is a well-known art. However, as the light plane may take all poses between and including absolute vertical and horizontal, determining the points on the curvature with sub-pixel accuracy requires that the processing algorithm understand something about the pose in order to apply the correct detection tactic. Normalization to the light plane pose will be required, in which the weighted average of detected "bright" points is determined along a set of image points perpendicular to the light plane pose.
  • "Bright" points may be points that are above a threshold luminosity value that serves to minimize video noise and semi-illuminated surface regions due to ambient light and/or secondary reflections of the surface by the light plane. This is of particular importance for light plane poses that are neither fully horizontal nor fully vertical. Applying this detection tactic will increase the tendency to select bright points that yield the best possible accuracy regarding the actual center point of the segment of bright pixels of the light plane curvature on the object.
  • the described method and device provides practical means to reduce and even eliminate scanning occlusion or shadowing within the viewing field.
  • the described method and device allow the optimal projection device pose to be set in relation to the object's surface that yields the most accurate information possible about the geometry of the actual surface to be achieved.
  • the described method and device achieve the above dynamically without having to recalibrate the system by employing a unique self calibrating scanning method that allows almost complete flexibility.
  • the described method and device may be used in multi resolution scans.
  • Movement of the object and the camera relative to each other can be used to improve scanning results.
  • a plurality of cameras at different relative positions can be used to improve scanning results.
  • parts of the object that are not in view of a camera from one relative position can be made visible with a camera from another relative position, or details that are visible at one relative position can be made visible with more refined resolution from another relative position at which the camera is closer to the object.
  • Occlusion or shadowing is a common problem with these types of scanners and many methods have been devised to reduce it. For instance, a second projection device or two or more cameras may be employed to reduce this limitation. Other methods include the use of mirrors and a beam splitter to combine multiple views of the scene.
  • Using a second projection device to reduce occlusion is straightforward.
  • the two projection devices are positioned at different but known positions. Same portions of the object are illuminated by the projection devices but at different registration times.
  • the camera images or the derived geometric data is later combined based on the known positions of the projection devices. While occlusion may occur at one of the projectors it may possibly not occur at the other. In this case the missing data can be recovered.
  • a plurality of cameras at different positions relative to the object may be used, or a camera that is moved to different positions relative to the object, or the object may be moved (e.g. rotated) while the camera position remains fixed.
  • a turntable may be used.
  • Another approach that requires only one camera and one projection device is to use mirrors to allow multiple views of the scanning scene to be realized. These mirror reflections are then combined using a beam splitter to form a single image of the scene.
  • a drawback of this method may be that the positioning of the components to accurately lineup the reflected views is complex, as four different components need to be aligned.
  • the beam splitter suffers from secondary reflections which lead to ambiguity with regard to isolating the true image pixels of the projected pattern on the object.
  • Use of one or more mirrors with a camera at one position simulates that, as far as the part of the image is concerned wherein the mirror is visible, the camera is effectively at another position. This simulated position can be determined by "unfolding" the line of sight, i.e. by mirroring the camera's position and viewing direction in the plane of the mirror.
  • the object may be viewed without a mirror, via a mirror, or via a different mirror in different parts of an image from one camera, respectively.
  • a plurality of mirrors may be placed so that the line of sight is reflected by multiple mirrors. In this case multiple unfolding may be used to define the effective camera position.
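  • A small sketch of this unfolding (assuming numpy; the mirror plane parameters are illustrative): reflecting the physical camera centre and viewing direction in the mirror plane gives the effective camera for the image region seen via that mirror:

```python
# Reflect a camera centre and viewing direction in a mirror plane ("unfolding").
import numpy as np

def reflect_point(point, mirror_point, mirror_normal):
    """Reflect a 3D point in the plane through mirror_point with normal mirror_normal."""
    p = np.asarray(point, dtype=float)
    q = np.asarray(mirror_point, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * np.dot(p - q, n) * n

def reflect_direction(direction, mirror_normal):
    """Reflect a viewing direction in the mirror plane (translation-free)."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

camera_centre = np.array([0.0, 0.0, 0.0])
virtual_centre = reflect_point(camera_centre,
                               mirror_point=np.array([0.4, 0.0, 0.6]),
                               mirror_normal=np.array([-1.0, 0.0, 0.3]))
print(virtual_centre)   # effective camera position for the mirrored view
```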
  • Calibration must define the translation and rotation that must be applied to transform coordinates obtained with the object and a camera at one relative position to coordinates with the object and a camera at another relative position, or effective relative position in the case that one or more mirrors are used.
  • the pre-scan and final scan may be used in the case that multiple cameras, cameras that move relative to the object and/or one or more mirrors are used.
  • the pre-scan may be used to define 3D positions on the object that correspond to locations in a plurality of images obtained from multiple cameras, and/or in a plurality of images obtained at different time points when the camera position relative to the object changes, and/or different locations in one image wherein the object is viewed with and without a mirror, or with different mirrors respectively.
  • illuminated points that are detected in the final scan can be combined from any combination of images or image parts to calibrate the geometry of the structured light in the final scan.
  • the calibrated geometry may be used to compute 3D points from detected points in the images or image parts. This may involve applying the translation and rotation associated with the image or image part in order to combine the 3D points obtained using different images or image parts.
  • Calibration of the translations and rotations between different relative camera positions may be determined by measuring the relative positions and orientations of the cameras and/or mirrors, or by accurately controlling their movement.
  • Distance measuring devices and inclinometers may be used for example.
  • the same configuration of projection devices is used to perform the pre-scan for each of a plurality of camera positions.
  • Surface matching may be used to combine calibrations for different camera positions. For this purpose, the three dimensional shapes of respective surface parts identified by scanning with a camera or cameras at different positions relative to the object are matched, to identify surface parts with matching shapes and to identify the rotation and translation to make the surface parts coincide. This translation and rotation is subsequently used to map three dimensional positions obtained using different relative positions of the camera and the object onto a single space, in which the identified matching surfaces coincide.
  • matching is applied to surface parts illuminated by the projection device during pre-scanning, for example using a surface part that is illuminated by the same sheet of light and captured from different camera positions. This facilitates matching, because only the three dimensional shapes of one dimensional curves are involved. Furthermore, matching may be applied to curves obtained for incrementally changed relative positions of the object and the camera. This further reduces the amount of search required to identify a match.
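  • Once corresponding points on such matched curves are identified, the rotation and translation that map one capture onto the other can be estimated in the least-squares sense; a sketch (assuming numpy and pre-matched correspondences, which the patent does not prescribe) using the standard Kabsch/Procrustes solution:

```python
# Rigid alignment (rotation R and translation t) between corresponding 3D points.
import numpy as np

def rigid_align(src, dst):
    """Return (R, t) minimising sum ||R @ src_i + t - dst_i||^2."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```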
  • Fig. 7A illustrates a geometry wherein a subject/object (702) is positioned onto a motorized turntable (703) with a mirror (704) and another mirror (705), which are positioned such that their reflections are largely of opposite views of the object (702).
  • a projection device (700) projects a light ray plane (711) onto the object (702).
  • the right side mirror (705) displays a reflected image of the curvature (712) of the reflected light plane on the object (702).
  • a camera (701) views the scene.
  • the camera (701) is interfaced with a personal computer (706) and displays on its monitor (710) an image (709) of the scene.
  • This image (709) is the processed image from the camera in which one side of the camera image has been superimposed on the other side.
  • the two line curvatures form a single line curvature (708).
  • Fig. 7B illustrates the same principle geometry of materials and apparatus layout as in Fig 7A but now from an above view.
  • Projection device (713) projects a light plane (714) onto the object (719).
  • the object is positioned onto a turntable (720).
  • the reflection of the light plane (714) results in a curvature (718) that follows the object's profile.
  • Mirrors (716 and 717) reflect the view of the object (719) from two different yet symmetrical vantage points.
  • N1 and N2 represent the mirror normals.
  • R1i and R1o illustrate the reflection ray path from the object (719) to a camera (715) for the left mirror (716).
  • R2i and R2o illustrate the ray path from the object (719) to a camera (715) for the right mirror (717).
  • Fig. 8A displays a sample image (800) from the camera that is viewing the scanning scene that was described in Fig 7A- 7B.
  • the image (800) displays two separate curvatures (801 and 803).
  • the left curvature (801) is from one mirror reflection and the right curvature (803) reflection is from the other mirror reflection.
  • the image (800) also displays a missing section (802) in the left curvature due to occlusion or shadowing.
  • Fig. 8B displays the processed image (804) that was described in Fig 7A, in which the right curvature has been overlaid onto the left curvature.
  • the resultant curvature (805) displays no gap due to occlusion that was found in the left curvature of Fig 8A.
  • a scanning device may include a common projection device such as a laser line module, a calibrated imaging device, a turn table, two mirrors and a PC to process the camera image frames.
  • the object that is to be scanned is set onto the turntable and a camera views the object at a position perpendicular to the turntable's rotation axis.
  • calibration of the camera may be performed to derive its position in relation to the object as well as compensate for the camera's optics.
  • the projection device which may be incandescent or laser based, is placed opposite and in front of the camera's viewing direction but behind the object.
  • the projection device produces a pattern such as a thin line that is set parallel to the turn table rotation axis.
  • a thin strip of material may be placed in between the object and camera view to prevent the projected pattern from directly reaching the camera's view.
  • the two mirrors are placed at both sides of the object. These mirrors are preferably front-surface (first-surface) mirrors to prevent ghosting or double imaging.
  • the mirrors are aligned such that the reflected images that they produce of the projected pattern on the object are equal but opposite views.
  • the reflected images must be in the camera's field of view.
  • the camera images are then folded.
  • one half of the image that contains the mirror reflection of one of the mirrors is digitally superimposed onto the other half of the video frame that contains the reflected image of the other mirror.
  • the folding process may be carried out based on comparing the level of intensity of the image pixels.
  • the higher (or equal) luminosity of the compared pixels will be used to create the superimposed resultant image.
  • the process proceeds by taking the first pixel in a captured image frame and comparing it to the last pixel. It then proceeds to the second pixel and compares it to the second-to-last pixel. This is repeated until the entire frame is scanned. At each comparison of the two pixels the highest luminosity value of the two is used. In case the luminosity of both pixels is the same, then this will be the value used for the resultant image.
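  • A minimal sketch of this folding step (assuming numpy and an 8-bit grayscale frame; the per-pixel maximum implements the "highest or equal luminosity" rule):

```python
# Fold a camera frame: mirror the right half onto the left half and keep, pixel
# for pixel, the brighter of the two values.
import numpy as np

def fold_image(frame):
    """Superimpose the horizontally mirrored right half onto the left half."""
    h, w = frame.shape[:2]
    half = w // 2
    left = frame[:, :half]
    right_mirrored = frame[:, w - half:][:, ::-1]   # right half, flipped left-right
    return np.maximum(left, right_mirrored)          # keep the higher luminosity

# folded = fold_image(camera_frame)   # camera_frame: (H, W) uint8 image (assumed)
```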
  • the same method may be applied as well; however, it is now also possible to make color comparisons.
  • the folded video may also be vertically reversed, as the reflection curvature of the intersecting projected light plane on the object that is being reflected in the mirror may be the inverse shape required.
  • the "folded video" process as described above allows the user to view the camera's folded images and make final adjustments to the positions of the mirrors such that the projected pattern lines up exactly. By folding the video the user is able to make direct and accurate adjustments to the mirror positions in order to line up the images and combine or otherwise overlap the projected patterns. While the resulting combined or folded image is generally confused the contour lines of the intersecting light ray plane with the object in both images will overlap as long as the optical pathways are identical in length and attitude. The resulting contour line is the sum of both separate contour lines.
  • the actual scanning process may now commence.
  • the object is incrementally rotated until it has made a full 360-degree rotation.
  • the camera views the scene and its images are either stored for post processing or directly processed.
  • processing involves folding the video or otherwise superimposing one half of the video onto the other in which the highest or same luminosity between two related pixels are chosen to create the final superimposed image that will be employed.
  • Each image taken at each increment shows a unique line or stripe curvature of the illuminated section of the object's surface. Areas that may be occluded from view in one mirror view may be visible in another.
  • a pattern such as a spot, thin line or parallel stripes of light that are projected from a projection source illuminate a subject/object providing contour points or line(s) on the surface of the object.
  • a contour line is viewed from two symmetrical angles via mirroring surfaces at a point directly opposite to and in front of the projected light plane with the object in-between by a scanning optical sensor such as a common video camera.
  • Part of the apparatus provides for moving the subject relative to the light projection and sensing assembly.
  • the coordinates of any point on the object's surface can be derived from the image of the contour curve on the sensor and the position of the object surface relative to the light projection and sensing assembly.
  • the scanning sensor detects points along the contour curve and a digital data set may be generated representing the position of these points in 2D space related to the imaging element.
  • the data along with indexing data are stored sequentially in a computer's memory. Since the mirror positions are known and the configuration calibrated, the sensor images of the mirror surfaces that display the contour line from opposite viewing directions in the sensor's field of view are geometrically proportional in size and shape. Each mirror image displays the object's surface contour from a different angle and therefore alleviates, or at least significantly reduces, the occlusion or shadowing problems encountered if only a single non-mirrored view were used.
  • Fig. 7A and 7B display the principle measuring system's geometry configuration and layout.
  • Fig 7B is a top view of the layout.
  • Data is collected in a cylindrical coordinate fashion.
  • the object rests on a turntable that has an axis of rotation. The projected light plane, which is perpendicular to the drawing plane, precisely intersects the heart of said axis.
  • the light source may be incandescent or laser.
  • Mirrors are placed at both sides of the object with a line of symmetry running along the projected plane of light.
  • a video camera sensor views from the side of the object opposite to the projection light source's emitting direction. The center of the camera's field of view is aligned on the table axis.
  • Each mirror is set at the same angle in relation to the turntable surface such that the sensing camera may view both mirrored reflections of the object.
  • Each mirrored view displays the contour of the projected light plane that follows the curvature on the object's surface from opposite viewing points.
  • the angle of the mirrors in relation to the object, and thereby the viewing angle of the camera in relation to the object, may be derived by placing an object of known size, shape and coordinates onto the turntable. The angle may then be calculated using triangulation by determining the offset of the contour line or point on the calibration object from a known position and radius.
  • the viewing angles between light source and camera in the mirror-reflected images may be made large in order to make effective use of the camera sensing array with minimal occlusion penalty.
  • Fig 8B displays the "folded" resultant image based on the image displayed in Fig. 8A. It should be obvious that the folding of the video image is, in particular, useful in order to allow an operator to adjust the mirror positions such that the images of the light plane contour on the object surface are in optical alignment. This step may be automated and the actual processing may be carried out in a more computationally efficient manner. In any case the principle remains the same.
  • the image may be processed to translate and/or rotate the part of the image that images light from a mirror so as to align the contour from that part of the image with the contour from another part of the image.
  • Fig. 8A and 8B demonstrate a scanning solution approach to the video folding process in which one half of the video image is superimposed onto the other half.
  • the result is placed on a computer image array referred to for explanatory reasons as an image result array.
  • the process method scans along the camera image, for example from top to bottom. Starting at the top-left pixel the process compares this pixel with the top-right pixel based on color and/or luminosity. If the values of both pixels are the same then this pixel value is placed at the top-left pixel of the image result array. If one of the pixel values is higher than the other then this higher value is set at the top-left pixel of the image result array (a sketch of this folding step is given after this list).
  • the resultant pixel value may be compared to the value at the same pixel position in the source image. If the resultant value deviates significantly from it then the value may be discarded as a false registration. The process described above is repeated until a dense map of triangulated contour point positions is achieved.
  • the embodiments of measurement and calibration using mirrors can also be applied by themselves, without using a pre-scan to serve as a reference during a final scan. Even without this pre-scan these embodiments provide a practical and cost-effective means to reduce and even eliminate scanning occlusion. This is achieved in a rapid manner in a single scanning pass. They provide a geometry and processing method which ameliorate the shadowing or occlusion problems inherent to the scanning of complex object surface irregularities. They provide a system giving multiple viewing angles of a single contour line that represents a curvature section of an object. They combine the multiple views into a single virtual image without ghosting effects. They realize a relatively easy to configure layout.
  • the pre scan may be a reference scan performed before, after or simultaneously with the final scan. It suffices that at any time data from both scans is available.
  • Using a pre-scan has the advantage that the image locations for which 3D positions are available can be displayed at the time of the final scan, so that a user can adapt the final scan to make it cross such positions.
  • additional scans after the final scan may be used.
  • a final scan may be used as reference scan for another final scan.
  • Light planes have the advantage that they are easy to make and that relatively simple processing suffices to calibrate their parameters.
  • although the projection device may be an incandescent or laser device, it should be appreciated that other types of light sources, such as LEDs, LED arrays or gas discharge lamps, may be used.
  • the projection device may be configured to produce a static light structure, defined by apertures and/or lens shapes for example, or a structure realized by means of scanning a mirror (e.g. rotating).
  • the scanning in the projection device is preferably driven faster than speeds that are achievable by manual movement of the projection device, e.g. by a motor or other actuator.
  • the image from the image sensor may be formed by combining a plurality of images from the image sensor, or by combining detected positions where the light from the projection device is seen at the object in the images.
  • the number of images in such a plurality is so small that movement of the projection device can produce no more than one pixel difference of illuminated positions within the plurality of images.
  • a pre-scan is made of the object/subject/scene in order to gain a useful approximation of the object's overall geometry.
  • This pre-scan data of the subject is then geometrically related to the imaging element's image of the same subject.
  • a subsequent scanning session is performed allowing for dynamically calibrated scanning of the object or scene.
  • the second scanning process uses the pre scan data as a reference of known geometry in order to understand the pose of the light plane and thereby calculate the 3D position of registered points of the object.
  • the method may further comprise detecting respective deformations of said light line reflected from said object in each said images; deriving a contour from said deformations in each of said images; and merging said contours from each of said images to convert said object geometry into a data set of 3D points that represent said object geometry.
  • the method may further comprise detecting respective portions of said light falling on said object and overlaid on reference geometry; and deriving a light plane crossing said reference geometry. Furthermore, said contour may fall on said light plane and may be uniquely determined with respect to said pre-calibrated imager.
  • a method for generating a 3D representation of an object using a projection device comprising: providing a modeling system comprising a sensing element such as a video camera; wherein said object is placed in front of said imager; deriving a calibration or reference scan; swinging a structured light line within a known radial or angular position; recording, respectively and sequentially, each scene of said object to produce a sequence of images; and deriving contours of said particular area from each of said images; employing said contours to triangulate point positions in each image and sequentially combining these to form a 3D representation of the said object to serve as reference geometry.
  • a method for generating a 3D representation of an object using a projection device comprising: providing a modeling system comprising a sensing element such as a video camera; wherein said object is placed in front of said imager; swinging a structured light line across said object; recording, respectively and sequentially, each scene of said object to produce a sequence of images; and deriving contours of said particular area from each of said images; employing said reference geometry recited in previous sections and contours from said images to derive pose of said projection emitted device light plane; triangulation of point positions based on calculated pose in each image and sequentially combining these to form a 3D representation of the said object.
  • the method may comprise detecting in each of said images said respective portions of said light line falling on said object and superimposed on said reference.
  • the method may still further comprise detecting respective deformations of said projection device reflected from said object in each of said images; and calculating a set of curvilinear points representing one of said contours from said respective deformations in each of said images.
  • swinging the structured light line may be operated manually by an operator.
  • Another aspect is that a high-speed non-contacting mensuration of three-dimensional surfaces is provided, with an improved occlusion reduction technique comprising: providing a plane of light intersecting said surface and producing a contour line of the curvature of said surface; moving the said surface relative to said plane of light; viewing said contour line from both sides of said plane of light using mirrors; combining the images of said contour line into a virtual single composite image; sensing said resultant image to derive the coordinates of the contour line.
  • an apparatus for a high-speed non-contacting mensuration of three-dimensional surfaces may be provided, with an improved occlusion reduction technique comprising: a means for providing a plane of light intersecting said surface and producing a contour line of the curvature of said surface; a means for moving the said surface relative to said plane of light; a means for viewing said contour line from both sides of said plane of light using mirrors; a means for combining the images of said contour line into a virtual single composite image; a means for sensing said resultant image to derive the coordinates of the contour line.
  • the means for viewing said contour line may comprise a mirror on each side of said light plane, positioned to reflect the contour line images to said means for combining.
  • methods and devices for modeling 3D objects or scenes, or otherwise converting their geometrical shape into a data set of 3D points that represent the object's 3D surface, are provided; such devices are commonly referred to as 3D scanners, 3D range scanners or finders, 3D object modelers, 3D digitizers and 3D distance measurement devices.
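By way of illustration of the video-folding step referred to above, the following is a minimal sketch, assuming each captured frame is available as a single-channel (grayscale) NumPy array; the function name and the optional vertical-flip flag are illustrative only and not part of the described apparatus.

```python
import numpy as np

def fold_frame(frame: np.ndarray, flip_vertically: bool = False) -> np.ndarray:
    """Superimpose the right half of a grayscale frame onto the left half.

    Pixels equidistant from the vertical centre line are compared and the
    brighter of each pair is kept, as in the "folded video" step. When the
    mirrored view is upside down relative to the other view, the folded half
    can additionally be flipped vertically.
    """
    h, w = frame.shape
    half = w // 2
    left = frame[:, :half]
    right_mirrored = frame[:, w - half:][:, ::-1]      # mirror the right half
    if flip_vertically:
        right_mirrored = right_mirrored[::-1, :]       # optional vertical reversal
    return np.maximum(left, right_mirrored)            # keep the brighter pixel
```

The folded result can then be searched for the combined contour line in the same way as a single, unfolded view.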

Abstract

Title: A Dynamically Calibrated Self Referenced Three Dimensional Structured Light Scanner. A dynamically calibrating, structured light 3D scanner is described that employs a pre-scanned geometry of the scanning scene. First a pre-scan is made of an object or scene that is to be scanned. Scanning is performed by sweeping the projection device's pattern over the object or scene in a controlled fashion within the known space coordinate margins. In a second scan the object or scene is scanned again with the projection device, but now the projection device sweeps the pattern from numerous and different angles and directions in which the reflected pattern on the object or scene remains within the camera's field of view. The pre-scan data acts as a reference of known geometry that well approximates the actual geometry of the object. This allows the orientation and position of the projection device to be derived for each camera image of the second scan, based on the curvature of the reflected pattern superimposed on the pre-scan geometry in the camera image. As the orientation and position of the projection device can be derived by this approach, so too can the 3 dimensional positions of each point of the reflected pattern on the object be derived for each image.

Description

Title: A Dynamically Calibrated Self Referenced Three Dimensional Structured Light Scanner
Field of the invention
The invention relates to three dimensional scanning and measuring systems that generate 3D geometrical data sets representing objects and or scenes using a structured light source.
Background
An example of 3D scanning is described in an article by Lyubomir Zagorchev and A. Ardeshir Goshtasby (Zagorchev et al), titled "A paint-brush laser range scanner" and published in Computer Vision and Image Understanding archive Volume 101 , Issue 2 (February 2006) Pages: 65 - 86ISSN:1077-3142. 3D scanning is also described in an article by Bouguet, titled "3D Photography On Your Desk" published Proceedings of International Conference on Computer Vision, Bombay India, Jan 1998, pp. 43-50.
3D scanning is the technique of converting the geometrical shape or form of a tangible object or scene into a data set of points. Each point may represent an actual point, in 3 dimensional space, of the surface of the object/scene that was scanned. 3D scanning offers technical solution to many different industries and markets for many different reasons. Some of the better known applications where 3D scanning offers solution are within dentistry, ear mold making for hearing aids, diamond industry, movie and gaming production, heritage and materials production (rapid prototyping, CAD, CAM, CNC). The range of applications is steadily increasing, as the use of computers becomes more common, the power of PC increases and the demand for better and faster means to capture, store and manipulate real world data increases. Many 3D scanning techniques and methods exist. However 3D scanners are typically complex to use and require costly instruments and equipment. Their application has usually been reserved for only specialized applications. In recent years this has started to change with the introduction of 'low cost 3D scanning systems'. These systems tend to employ more robust scanning methods that rely less on costly, specialized instruments and hardware. One popular class or group of 3D scanning techniques is the "active non-contact" type. "Active" meaning that some form of encoded, structured or non-coded energy, such as light, is emitted from a source to reflect off of an object in order to directly or indirectly understand something about the object's 3D shape. 'Structured light' is one type of active non-contact 3D scanning technologies that uses a predefined light pattern such as a projected line or stripe. "Non-contact" is meaning that the main scanning device does not require touch the object that is being scanned. The active non-contact type 3D scanners that use structured light have received widespread interest over the years due to inherent high scanning speeds and wide scanning range. In particular, research has been directed towards developing absolutely low cost systems. Other properties such as scanning speed, range, robustness, accuracy, sensitivity and mobility are considered equally important elements.
Active non-contact type 3D scanners are most frequently based on the use of triangulation to derive the 3 dimensional position of a point on an object's surface. This is a well-known art. This can be achieved by the projection of a pattern such as a point onto the object's surface. A camera, whose imaging area has in most cases been calibrated, views the reflection of the point on the object's surface. The camera's geometrical position is usually fixed and the geometrical orientation of the point projection device in relation to the camera's imaging area is known or at least well approximated. With these parameters known and set, the position of the reflected point on the object's surface that is being viewed by the camera is easily derived using triangulation. More specifically, the angle and distance between the camera and the projection device are known. This is sufficient to calculate the distance of the reflected point to the camera using basic trigonometry. Moving the point projection device one increment to another known position allows another point position on the object's surface to be derived. Repeating this process for new and known positions of the projection device will yield a dense data set of 3D points that represent the object's surface.
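As a hedged illustration of the triangulation described above, the sketch below computes the camera-to-point range from the known camera-projector baseline and the two angles made with that baseline; the function name and the use of radians are assumptions of the example only.

```python
import math

def camera_to_point_range(baseline: float, proj_angle: float, cam_angle: float) -> float:
    """Range from the camera to the illuminated point.

    The camera, the projection device and the lit point form a triangle.
    `proj_angle` and `cam_angle` are the angles (in radians) that the
    projected ray and the camera's viewing ray make with the baseline;
    the law of sines then yields the camera-to-point distance.
    """
    point_angle = math.pi - proj_angle - cam_angle     # third angle of the triangle
    return baseline * math.sin(proj_angle) / math.sin(point_angle)
```

The camera angle itself is typically derived from the pixel position of the reflected point together with the camera's calibrated intrinsic parameters.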
Instead of a point it is also possible to project a thin line for faster registration. The projection device is held at a known angle, or between known angle margins, in relation to the camera and either the object or the projection device is translated perpendicular to the line projection plane, thereby sweeping the projected line over the object's surface. Hence, most or all of the surface area of the object is illuminated by the projection device's pattern. In practice it is well known that the greater the angle of the projection device in relation to the camera, while the reflected pattern is still clearly visible on the object's surface, the greater the scanning accuracy will be, as a wider section of the imaging element array is being employed. However, the greater the angle, the more the pattern will deform in the camera's view for areas of the object's surface that run close to parallel to the projection beam plane. In addition, the greater the angle, the more chance that the projection device's beam path or plane and/or the camera's line of sight will be obstructed or occluded.
More specifically explained, the projected pattern cannot reflect off of areas on the object that it cannot reach at certain angles, nor can the camera view areas where the pattern reflects off of the object but is occluded by the object's own shape. Occlusion is a common problem with these types of scanners and many methods have been devised to reduce it. These methods include, for instance, employing a second projection device or two or more cameras to reduce this limitation by gaining multiple views of the scanning scene. It should be evident that these approaches increase complexity and cost, to say the least. And even if these additional instruments/methods are employed, the angle(s) of the projected pattern(s) are usually fixed. More precisely explained, the projection device's scanning angle does not dynamically and effectively follow the ideal scanning angle for a given object geometry.
The vast majority of structured light scanners rely on confined and accurately controlled motion within strictly defined translating and/or rotating margins for the projection device. Relatively high tech instruments are required in order to maintain a complete understanding of the position of the projection device(s) in relation to an absolute world coordinate system or in relation to the imaging element that is viewing the object or scene that is to be scanned. These systems, while they may be accurate, are rigid and are usually costly as they depend on special instruments. In addition they are usually not very mobile, nor can their range be easily adjusted. More importantly, as previously explained, in terms of scanning integrity they are not very flexible, in the sense that the scanning motion follows, more or less, a singular path that is usually only partially related to the ideal scanning path required to scan the object's curvature and detail with the greatest fidelity. In recent years new methods have been invented that address these contemporary problems and to some extent offer an effective solution. These methods displace the understanding of the projection device's orientation and position from a positioning instrument to within the images of the imaging element itself. In this situation, some type of tangible reference surface or structure is placed in the scanning scene to achieve this.
Zagorchev et al describe an example wherein two wire frames placed next to the scanned object are used. Other possible reference structures include, for instance, a cube type cage structure made of thin bars or wires, a flat surface or two flat surfaces placed at known angles behind or to the sides of the object. In any case the reference surface is calibrated in the camera's image.
This reference surface, as well as the object that is being scanned, must be able to be illuminated by the projection device. A minimal area of the reference surface must always be in view and it must remain fixed in position, just as the object. The projected pattern is swept over the object and the reference surface. Each image will show the deformed projection device's pattern over the object's surface as well as on the reference surface. The 3D position of each point of the object's surface can be derived since the geometry and coordinate position of the reference surface are known in relation to the camera. More specifically, if at least three separate points are located on the reference then this is sufficient to determine the projected pattern plane and hence its orientation in relation to the object.
As described by Zagorchev et al, this means that the projection device is no longer rigorously confined and can actually be made to sweep its pattern over the object by hand. Although the user or operator is confined to sweep the projected pattern over the object in a controlled and docile fashion and within a particular orientation in relation to the pre-calibrated surface, it is now possible to significantly reduce occlusion. The operator can focus on areas that are susceptible to occlusion and adjust the orientation of the projected pattern to illuminate those areas by hand, within the permissible scanning orientation that is determined by the reference surface. In addition the user can now, in a more dynamic way and within the restricted area determined by the reference surface, sweep the projection pattern to follow the best possible scanning path that the object's geometry warrants.
However, in many cases it may be impractical or undesirable to use a reference surface or structure, for instance for very large or very small objects or scenes, or objects that cannot or may not be moved. Also, the reliance on having to scan the reference plane in addition to the object restricts the permissible flexibility of the system, as the reference surface must also be in the camera's field of view at all times. Hence, the maximizing of the object in the camera's field of view is limited, resulting in less effective use of the imaging element's pixels.
Otherwise stated, there is less maneuverability to set the viewing field of the camera on to an 'area of interest', as the reference plane must remain in view during calibration and scanning. Lastly, the orientation of the reference plane allows scanning to proceed only in certain specific directions. Hence, while the scanning may proceed dynamically, in terms of approaching the ideal scanning angle for a particular geometry, it is only permissible in certain directions. The present invention aims to address at least part of the problems previously described.
Summary
It is an objective to allow segmental scanning of surfaces with minimal recalibration effort pertaining to the configuration of components.
A method as set out in claim 1 is provided. The scanning of the object comprises a reference scan and a measurement scan of the object. The reference scan is performed under calibrated conditions to determine 3D positions of points on the object that are visible to an image sensor. The resulting 3D positions of object points are used to calibrate the geometry of structured light that is applied to the object during the measurement scan. The structured light may be a plane of light in 3D space for example, which results in a one-dimensional line where the object intersects the plane. However, other types of structured light may be used, such as a curved two dimensional surface in 3D space, a set of parallel surfaces, a grid of rows and columns of surfaces, a set of discrete lines etc.
The intersection of the structured light with the object gives rise to selection of points in the two dimensional image captured by the image sensor. The points may be selected in the sense that the structured light provides light only in a spatial light structure so that the selected points are selectively illuminated points, but alternatively points may be selected by supplying light on mutually opposite sides of the selected points but not at the selected points, for example by supplying bands of light on the opposite sides or supplying light everywhere except in a spatial light structure, so that the selected points are selectively not illuminated points. As an alternative, points at the edge of a broad lighted band may be selected points.
For those of these selected points that coincide with points for which 3D positions on the object are determined from the reference scan, the 3D position is known. With these 3D positions, geometrical properties of the light structure during the measurement scan are determined. For example, when the light structure comprises a plane or other surface of known shape, the position and orientation of that surface relative to the image sensor can be determined from the 3D positions of the points on the object that are known from the reference scan. Three points on the object may be sufficient to determine a position and orientation, but fewer points may suffice if changes in position and orientation are partly constrained, and more points may be used, for example in a least square error estimation of the position and orientation. As a result of using points on the object with 3D positions from a reference scan, it is possible to use measurement scans with a light structure projecting device without predetermined calibration of the position and orientation of the projecting device in the measurement scan, even if the projecting device illuminates only the object. The structured light may be manually positioned during said step of applying the structured light. A human operator may manually swing the light structure through successive orientations and/or positions to realize successive measurement scans.
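As a hedged illustration of such a least square error estimation, the following sketch fits a plane to three or more object points whose 3D positions are known from the reference scan; the function name and the use of NumPy are assumptions of the example, not part of the claimed method.

```python
import numpy as np

def fit_light_plane(points: np.ndarray):
    """Least-squares plane through N >= 3 known 3D points (one per row).

    Returns (centroid, unit_normal); the estimated light plane is the set of
    positions x satisfying dot(unit_normal, x - centroid) == 0.
    """
    centroid = points.mean(axis=0)
    # The singular vector belonging to the smallest singular value of the
    # centred point cloud is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```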
An apparatus may be provided that comprises an image sensor, one or more projection devices for projecting a light structure and a computer. In this apparatus the computer may be configured, by means of a computer program for example, to receive captured images of the object from the image sensor and to compute 3D positions associated with image points in the reference scan, to compute geometric parameters of light structures during the measurement scan using these 3D positions and to compute further 3D positions from the measurement scan, using these geometric parameters.
The reference scan itself may be performed using a light structure with calibrated properties, e.g. with known orientation and position relative to the image sensor. The same light structure projecting device may be used as in the measurement scan, but mounted in a holder that provides for controlled calibrated conditions during the reference scan. Alternatively another light structure projecting device may be used. Preferably the relative orientation and position of the camera and the object are the same during the reference scan and the measurement scan. Alternatively, a controlled change of this relative orientation and/or position may be used between the reference scan and the measurement scan (e.g. a rotation of the object), with the 3D position result of the reference scan being determined from the position during the reference scan and the controlled change of relative orientation and/or position.
The reference scan may be a pre-scan, performed before the actual measurement scan. Alternatively, a post- scan may be used or a scan that is simultaneous with the measurement scan, using light of a different wavelength for example. Also the results of measurement scan, once calibrated, may subsequently be used as results of a reference scan to calibrate another measurement scan. By using a reference scan with a light structure such as a light plane, a determination of 3D position is possible that is compatible with the measurement scan and requires little overhead.
A mirror may be used to make parts of the object visible in an image captured by the image sensor that would not otherwise be visible. A reference scan and a measurement scan may be used to determine 3D positions of points in portions of the image that view the object via the mirror, directly, or via a further mirror. Once these 3D positions are known, positions from any combination of portions of the image can be used to calibrate geometric properties of the light structure during the measurement scan. Alternatively, or in addition, a plurality of image sensors may be used to view the object from different directions. In this case too, any combination of images from different image sensors can be used to calibrate geometric properties of the light structure during the measurement scan.
Brief description of the drawings
These and other objects and advantageous aspects will become apparent from a description of exemplary embodiments, using the following figures.
Fig. 1 illustrates the principle geometry of the principal materials and apparatus layout.
Fig. 2A illustrates the principle geometry of the principal materials and apparatus layout from a side view.
Fig. 2B illustrates the principle geometry of the principal materials and apparatus layout as in Fig 2A but drawn from a frontal view.
Fig. 3 illustrates another type of arrangement of materials for pre scanning to create reference geometry.
Fig. 4 illustrates another arrangement of materials for pre scanning to create reference geometry as in Fig 2A-B.
Fig. 5A shows a sample image of a scanning scene.
Fig. 5B-C show a projected light plane curvature.
Fig. 5D shows connected selected points.
Fig. 6A-C shows sample video camera images.
Fig. 7A-B illustrates a geometry with a turntable.
Fig. 8A shows a sample image.
Fig. 8B shows a processed image.
Description of exemplary embodiments
Methods and devices are disclosed for modeling 3D objects or scenes or otherwise converting their geometrical shape into a data set of 3D points that represent the objects 3D surface. These devices are commonly referred to as 3D scanners, 3D range scanners or finders, 3D object modelers, 3D digitizers and 3D distance measurement devices. It will be evident to those skilled in the art of 3D scanning that there are many elements involved in the process. This description will adhere to the core aspects and principles involved without going into great detail about well-known methods, processes or commonly used instruments. This is done in order to maintain clarity.
Fig. 1 illustrates the principle geometry of the principal materials and apparatus layout. This layout includes a projection device (100) that projects a light plane (105) onto an object/scene (103) that is to be scanned. The reflection of the light plane on the subject (103) results in a curvature (109) that follows the surface of the object (103) for that section. An imaging device such as a video camera (101) views the scanning scene. The 2D image (104) that is projected onto the imaging element pertaining to the scene is shown, which illustrates that the relation between the actual and projected scene is the same. The camera (101) is interfaced with a personal computer (102) in which images from the camera (101) are relayed to the personal computer (102). The computer (102) displays the image of the scene (108) from the camera (101) on a display device (106). The image displayed (108) on the display device (106) also includes the pre scanned data (107) overlaid onto the object in the image.
Fig. 2A illustrates the principle geometry of the principal materials and apparatus layout from a side view. This particular layout includes a projection device (200), a video camera (201), the subject/object/scene (202) that is to be scanned, the light ray plane (203) seen from a viewpoint perpendicular to the plane surface and the reflection (204) of the light ray plane on the subject.
Fig. 2B illustrates the principle geometry of the principal materials and apparatus layout as in Fig 2A but drawn from a frontal view, i.e. the general viewing direction of the camera (209). This layout includes the projection device (205), the video camera (209), the subject/object/scene (208) that is to be scanned and the light ray plane (206) which reflects from the subject (208) as a curvature (207).
The scanning of the object is divided into two main scan sessions. First a pre-scan is made of the object/subject/scene in order to gain a useful approximation of the object's overall geometry. This pre-scan data of the subject is then geometrically related to the imaging element's image of the same subject. Second, a subsequent and final scanning session is performed, allowing for dynamically calibrated scanning of the object or scene. The final scanning process uses the pre scan data as a reference of known geometry in order to understand the pose of the light plane and thereby calculate the 3D position of registered points of the object. Fig. 1, 2A and 2B display the arrangement or layout of these components for the "final" scanning approach. There are many ways to achieve the pre scan geometry. In one preferred embodiment the present invention employs a common projection device such as a laser line module in conjunction with a calibrated image-sensing element such as a video camera.
During pre-scan a known well-defined mathematical relation exists between respective 2D points (e.g. pixel positions) in the image obtained by the video camera (209) and 3D positions on the light structure (for example a light plane) projected by the projection device. When the light structure illuminates a point on an object so that the light structure becomes visible at this point and at a position in the image where the point is observed, this mathematical relation defines the 3D position of the point. The well-defined mathematical relation follows for example from a mathematical description of the collection of 3D positions on the light structure (e.g. a plane) and the projection geometry of the camera. The projection geometry of the camera mathematically defines for each point in the image the 3D ray path that ends at that point. The intersection of the ray path of a point in the image and the light structure defines the 3D position corresponding to the point in the image. When the image shows that the image content at a position in the image shows a point where the light structure lights up the object, it is known that the corresponding 3D position is at the intersection of the ray path from the position in the image and the light structure. During the final scan the 3D positions may be similarly determined, provided that the parameters of the light structure are known at the time of the final scan. Vice versa, the parameters of the light structure at that time may be determined if the 3D positions of a sufficient number of points on the light structure are known. For this, points in the image are used that show positions on the object that are illuminated by the light structure during the final scan and for which 3D positions are known from the pre-scan. Such points may be points in the image where there is an intersection between the respective contours of illuminated points on the object obtained during the pre-scan and the final scan.
Fig. 3 illustrates another type of arrangement of materials for pre scanning to create reference geometry, which includes the addition of a platform (302) onto which the projection device (301) is attached. The platform may be user operated (303). The projection device projects a light plane (304), which is reflected off of a subject/object/scene (305). The reflection of the light plane is a curvature (306) that follows the surface geometry of that particular section of the object (305). This scanning scene is viewed by a video camera (300).
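As a hedged sketch of the ray/plane intersection described above, the code below back-projects a pixel into a viewing ray using the camera's intrinsic matrix and intersects that ray with a known light plane; the function names, the pinhole-camera assumption and the camera-frame conventions are assumptions of the example only.

```python
import numpy as np

def pixel_to_ray(u: float, v: float, K: np.ndarray) -> np.ndarray:
    """Unit direction of the viewing ray through pixel (u, v).

    Assumes an ideal pinhole camera with intrinsic matrix K and the camera
    centre at the origin of the camera coordinate frame.
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def intersect_ray_with_plane(ray_dir, plane_point, plane_normal):
    """3D point where the viewing ray (from the camera centre) meets the light plane."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:                      # ray nearly parallel to the plane
        return None
    t = np.dot(plane_normal, np.asarray(plane_point)) / denom
    return t * np.asarray(ray_dir)
```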
Fig. 3 displays a pre-scan layout and includes, in addition to the previously mentioned layout in Fig 1, 2A and 2B, a tripod or platform onto which the projection device is attached. This layout illustrates one possible pre scanning configuration. Here the projection device's pattern may be swept across the surface of the object that is to be scanned using the tripod. This tripod functions as a sturdy base in order to permit a docile and controlled movement of the projection device in which movement is constrained to 1 degree of freedom. The pose of the projection device in relation to the camera is known, allowing triangulation of the actual point positions to be calculated. As such, it is possible to derive the position of points in 3D that are related to the light ray plane's reflected curvature on the object for each image frame from the imaging device. Performing this for all images will allow reconstruction of the object geometry in the form of a data set of 3D points which represents the surface points of the object.
A labor intensive yet effective option to create the pre scan geometry would be to measure three non-collinear 3D point positions of any flat planar surface on the object that is in the viewable area of the imager to create a reference geometry surface. The depth measurement of each point must be perpendicular to the imaging element's surface plane. Several of these reference geometry surfaces must be made. Reconstructed reference geometry surfaces based on these measured points should be in dissimilar planes. The more point groups measured, and thereby surfaces created, the more effective the employment of this pre scan geometry will be.
Objects that possess few surface irregularities and are close to parallel to the imager's imaging element surface may benefit from the introduction of reference objects that are placed into the pre scanning scene. These may be placed on the object and/or to its sides. The reference objects may be irregularly positioned or have irregular shape. This will enhance the employment value of the pre scan geometry. They can be marked using image processing techniques and later be deleted from the final scan. They may also be deleted from the scan using the recovered surrounding geometry as the reference geometry in order to derive the pose of the light plane in areas previously obstructed from view by the reference object. The reference objects may be, but are not limited to, such things as string, rope, poles, rings and even clay made shapes. The pre scanning is performed by sweeping the light plane from the projection device over the object, in which the pose of the projection device is known as it is constrained to only 1 degree of motion freedom. The points of the light plane curvature may then be used, in conjunction with triangulation, to determine the actual 3D point positions on the curvature. This process is repeated for all images of the sweeping light plane on the object in order to create the pre scan surface geometry.
A pre scan may also be achieved by using reference objects of known geometry and orientation with respect to the viewing imager, which is illustrated in Fig 4. These objects may be placed at the sides, behind or in front of the object to be scanned. In the latter case the reference object(s) may actually obstruct some portion of the view of the subject, as long as sufficient area of the subject is still in clear view. The reference objects may be, but are not limited to, such things as strings, ropes, poles, rings, panels and even clay that has been shaped in a desired form.
Fig. 4 illustrates the same arrangement of materials for pre scanning to create reference geometry as in Fig 2A-B but with the addition of reference objects (404). A projection device (400) projects a light plane (403) onto an object (402) and the reference objects (404), which is viewed by a video camera (401). The light plane (403) that is reflected on the object (402) produces a curvature (408), which follows the object (402) surface geometry for that section. The light plane (403) is also reflected by each of the reference objects (404), which produce light spots (405), (406) and (407). The reference objects (404) are in known positions in relation to the camera (401).
Fig 4 displays the layout of a preferred embodiment that is the same as the layout in Fig 1, 2A and 2B but with the addition of 3 pole or rod shaped objects. The surface of the poles reflects light well. Two of the poles are placed near to and towards the sides of the object but still within the imager's field of view. The 3rd pole is placed in the center of the imager's field of view. In any case the reference objects may be placed in various different positions as long as their orientation, position and shape are known.
The poles may be in parallel alignment along their length, although this is not necessary. The relative positions of the outer placed poles are in a non-linear setting and may be for instance at an angle of 90 degrees separation around the center pole. In any case the layout is known and the camera has been calibrated in which the shape and dimensions as well as the positions of the poles relative to the camera are known.
A projection device such as a laser emitting line module is used to sweep a light ray plane at various angles but approximately perpendicular to the lengths of the poles over the object and poles while the imager views this. The sweeping direction of the laser line should approximate a motion perpendicular to the lengths of the poles or motion parallel to the length axis of the poles. The laser plane may be approximately perpendicular to the length axis of the poles.
Each image from the imager is examined to isolate and derive the curvature points in two dimensional space of the light ray plane reflection on the subject and reference objects. The center spot positions of the light ray plane reflection on each of the poles are used to derive the pose of the laser line in relation to the imager. This may be realized by connecting the spots of the three poles to form a plane. The position and orientation of this plane are the same as those of the laser line light ray plane. This plane pose is known as the 3D spot or point position for each pole is known. Mathematical relations that express plane and point intersection may be used. With this pose now known it is possible to triangulate the curvature points of the light plane on the object. Performing this process for all images will yield a dense data set of 3D points which represents an accurate reconstruction of the subject's surface geometry. This geometry will be used as the reference geometry for the final scan. The reference objects may be extracted from this pre scan geometry by image processing techniques that were performed beforehand. This may be accomplished by storing an image with only the subject and another image with the addition of the reference objects. Sufficient contrast between subject and reference object should exist. To further increase the separation integrity the user or operator may select areas in the image to exclude after pre scanning. The reference objects may be removed from the scene after the pre scan is made.
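A minimal sketch of deriving the light plane pose from the three pole spots, under the assumption that their 3D positions are already known from the calibrated pole layout; the function name is illustrative. The curvature points in the same frame can then be triangulated with the ray/plane intersection sketched earlier.

```python
import numpy as np

def plane_from_pole_spots(p1, p2, p3):
    """Light-plane pose from the three 3D spot positions on the reference poles.

    Returns (point_on_plane, unit_normal), which together define the position
    and orientation of the swept laser plane for this frame.
    """
    p1, p2, p3 = (np.asarray(p) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)        # spans the plane of the three spots
    return p1, normal / np.linalg.norm(normal)
```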
According to one aspect a pre-scan is made of an object or scene that is to be scanned. There are many ways to achieve this. This may even be achieved by using a framework of known geometry to derive the pose of the projection device's ray plane and thereby triangulate the positions of the object surface points, as explained in the previous section of this document.
In any case, this allows the subsequent final scan to proceed without any reliance on separate reference geometry. Removing this restriction allows almost complete scanning flexibility, as almost any practical sweeping direction and angle may be used. This allows an optimal registration of points relating to the object's surface to be performed. The criterion for the pre scan is to use a method that best approximates the surface geometry of the object.
It should be noted that partial scans of the geometry would already suffice. But obviously the more data accumulated in the pre scan, and the more precise that data is regarding the object's geometry, the more the approximation error will be minimized, allowing greater and faster point registration precision for the subsequent final scanning process. In the final part of the scan process the projection device may sweep its projected pattern over the object at almost any angle and from almost any direction without having to recalibrate the system.
This is achieved through the use of the pre-scan data set of known object points. The pre-scan acts as a surface of known geometry and allows the orientation, position or otherwise pose of the projected light plane that illuminates the object curvature to be derived. The projection device may be handheld and the operator is permitted to focus on areas of interest that may require particular scanning angles and directions in order to permit effective illumination of the projected pattern on the object and/or achieve optimal scanning angles and poses in relation to the surface geometry. Hence, the operator's focus may be to reduce occlusion of normally hidden or occluded areas of the object and/or improve on the quality of the scanned data. The process may be repeated for the same points multiple times in order to achieve increasingly better approximations of the actual point positions. The scanning of object surfaces at different angles will allow these 3D positions of points that are related to the actual object's surface to be more closely achieved. The filtering approach for the same points may be based on weighted averaging and/or selection of median points, as well as the validation of each point based on the regularity of surrounding point positions and/or the surface slope in those areas. Areas of the pre scan data where the slope between points is more perpendicular, or less parallel, to the light ray plane are prioritized point regions, as these positions are more closely related to the actual object geometry.
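As a hedged illustration of the weighted averaging and median filtering mentioned above, assuming repeated measurements of the same surface point are collected as rows of a NumPy array; the function name and the weighting scheme are illustrative only.

```python
import numpy as np

def fuse_repeated_measurements(samples: np.ndarray, weights=None) -> np.ndarray:
    """Combine repeated 3D measurements of one surface point into a single estimate.

    `samples` is an (N, 3) array of positions obtained from different sweeps.
    Without weights the per-coordinate median is returned, which is robust to
    outliers; with weights a weighted mean is returned, so sweeps whose light
    plane was closer to perpendicular to the local surface can count more.
    """
    if weights is None:
        return np.median(samples, axis=0)
    w = np.asarray(weights, dtype=float)
    return (samples * w[:, None]).sum(axis=0) / w.sum()
```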
One preferred method to achieve the pre-scan employs a common laser line module that has been attached to a camera tripod or other platform that can fix its position. The projection device is set at an ample but practical distance from the object that is to be scanned. The projection device may be actuated by means of a motor in order to realize controlled and speed-consistent motion, though this is not absolutely required. An operator may also achieve the same. A tripod may be used that leaves one degree of freedom for manual or motorized scanning.
The projection device produces an incandescent based light plane or laser based light plane. This light plane strikes and illuminates the object, producing a lighted contour over the object. The degree of curvature deformation depends on the angular pose of the light plane in relation to a viewing position of the object. The light plane is typically set to project a horizontal plane of light onto an object. The platform allows radial movement or 1 degree of freedom in which the projection device can be rotated such that the horizontal plane of light can sweep vertically over the object from top to bottom. The viewing position displays the curvature deformation as it follows the surface shape of the object. A video camera is positioned to view the object, which has been maximized in the imaging element's field of view, or an area of interest. The projection device and camera may be aligned for example so that the camera's imaging center is in the same plane as the projection device's projection center and perpendicular to the projection device's horizontal ray plane.
The camera may be calibrated to determine intrinsic properties of its lens in order to compensate for lens distortion of the viewed scene. Further calibration may be performed to determine extrinsic properties of the camera as well as the pose of the projection device ray plane in relation to the camera. This may be achieved using an object of known shape, size and position. The light plane stemming from the projection device is cast onto the calibration object. The offset from a known position on the calibration object is measured in order to determine the pose of the light plane. The calibration object is then removed from the scene.
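A hedged sketch of the intrinsic calibration step, using OpenCV's standard chessboard routine; the board size and image file names are assumptions of the example and are not prescribed by the described method.

```python
import cv2
import numpy as np

# Estimate camera intrinsics and lens distortion from several views of a
# printed 9x6 chessboard (pattern size and file names are illustrative).
pattern = (9, 6)
grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["calib_01.png", "calib_02.png", "calib_03.png"]:   # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(grid)
        img_pts.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
# K and dist can later be used to undistort frames: cv2.undistort(frame, K, dist)
```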
Low and stable ambient light is desired during scanning in order to maximize the illumination and subsequent registration of the light plane's reflected contour on the object by the camera. A reference image frame without the light plane contour on the object may be made by the camera and stored in computer memory, to be used to isolate the illuminated contour on the object in subsequent images. The reference frame image is subtracted from images of the projection device's ray plane reflection on the object. Only those pixels that have different values will show up in the resultant data. These pixels will be those of the contour line, apart from camera image noise. The light plane is cast onto the top portion of the object while the camera views this scene and the reflected contour. The radial position of the projection device, which is known, in relation to the camera is marked. The projection device is then positioned to cast a plane of light onto the bottom portion of the object. This is marked again and the amount of angular rotation through which the projection device rotated is logged.
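A minimal sketch of the reference-frame subtraction described above, assuming grayscale frames as NumPy arrays and an illustrative noise threshold; the function name is not part of the described method.

```python
import cv2

def isolate_contour(frame_gray, reference_gray, threshold=40):
    """Isolate the lit contour by subtracting the stored reference frame.

    Pixels whose luminosity differs from the reference by more than
    `threshold` (an illustrative value) are kept; everything else is
    treated as static background or sensor noise.
    """
    diff = cv2.absdiff(frame_gray, reference_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask
```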
The projection device is then set back in its original position. While the camera views the scene, the pre scan is now made by allowing the projection device to sweep the light line over the object from the top mark to the bottom mark in a docile and speed-consistent fashion. Each camera image is then processed to extract the contour line and determine the position of each contour line point in each frame.
Since the angular pose of the projection device is known at the top mark setting in the image frames and at the bottom mark setting, it is now possible to estimate the angular positions or pose of the projection device within the rest of the sequential image frames. The margin of error will be low as long as the speed at which the projection device was rotated was consistent. With the angular position known in each frame it is now possible to triangulate the point positions in each image frame and build the 3D geometry. It may also be of use to make multiple passes of sweeping the laser over the object, in which the resulting geometry for each pass is averaged together in order to minimize error.
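As a hedged illustration of estimating the plane angle per frame under the constant-speed assumption, the following sketch linearly interpolates between the marked top and bottom angles; the function name and the use of degrees are illustrative.

```python
def interpolated_frame_angles(top_angle_deg: float, bottom_angle_deg: float, n_frames: int):
    """Estimated light-plane angle for each frame of a constant-speed sweep.

    The sweep runs from the marked top position to the marked bottom position
    over `n_frames` captured frames (n_frames >= 2); each frame's angle then
    feeds the usual triangulation of its contour points.
    """
    step = (bottom_angle_deg - top_angle_deg) / (n_frames - 1)
    return [top_angle_deg + i * step for i in range(n_frames)]
```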
In most cases the pre scan geometry will be incomplete, having holes in areas that could not be illuminated by the projection device or were not in the camera's line of sight. Estimates and well-approximated assumptions can also be made about the expected accuracy of the scanned surface areas. These estimates are based on the assumption that the angular pose of the light plane for a particular geometry will yield better results than for other areas of the geometry. Geometry that runs close to parallel to the light plane may have less accuracy. This information may later be used in the subsequent and final scan session to select the pre scan data to employ which best correlates with the estimated geometry of the object.
Fig. 5A displays a sample image of a scanning scene with an overlaid pre-scan reference geometry (501) and a projected light plane curvature (502) of the light plane on the object's imaged surface (500).
Fig. 5B displays the projected light plane curvature (504) extracted from the image including 3 selected points (503) to determine the pose of the light plane on the object.
Fig. 5C displays the same as in Fig 5B with the 3 selected points (p1, p2 and p3) of the curvature (504) connected to form a triangle (505).
Fig. 5D displays only the connected selected points (p1, p2 and p3), that together form a plane (506) in 3D space with a calculated directional normal (N1). The position in 3D space of the normal (N1) is directly perpendicular to the pose of the projected light plane.
Fig 6a shows an image of a scanning scene containing an object that is to be scanned. Fig 6b shows the pre-scan data of the object, which has been overlaid or otherwise related to the position and pose of the object in the camera's image of the scene. Fig 6c shows a curvature over the object's surface from a light ray plane. Since the object image and pre scan geometry are superimposed one to one, the curvature points are directly overlaid onto the pre scan geometry. The pre scan data may now serve as the reference of known geometry in order to dynamically determine the pose of the light plane. The light plane may be swept over the object from various directions and angles, illuminating all visible areas with a curvature that follows the shape of the object's surface. The sweeping of the light plane is carried out in a docile manner in order to prevent smearing or breaking of the image due to the finite frame speed of the sensing element. The images of each frame are examined and the pose of the light plane is derived, allowing for the data points to be triangulated in order to calculate their 3D positions. The process may be illustrated in more detail as follows: the visible intersection of the reflected light plane with the object is now related to the pre scan geometry. As the points that make up the pre scan geometry are typically not as dense as those of the imaging element for all areas, the intersecting point positions with the pre scan geometry may be interpolated.
In any case the intersection of the light plane with the pre scan geometry surfaces serves to dynamically calibrate or otherwise derive the pose and position of the light plane, i.e. to calculate the 3D pose of the laser. This information will serve to allow triangulation of the 3D point coordinates of the object's surface by intersecting the light plane with the projecting rays. There are many well-known mathematical approaches to calculate this. In all cases the variables involved to make this calculation have been satisfied with this approach. Otherwise stated the 6 degrees of freedom of the light plane have been constrained and are known in each image frame that includes a curvature of the light plane on the object surface.
The requirement to define a 3D plane and constrain the pose of the projected light plane is that at least three intersecting points of the plane in 3D space are known. In this case, many points on the curvature of the light plane on the object are available to choose from in order to calculate the pose. The superimposed curvature of the light plane reflection on the pre scan may be made up of many camera pixel points. Picking the farthest separated, or otherwise least collinear, points will allow, in conjunction with the overlaid pre scan 3D geometry, the plane and pose of the light ray to be derived. Hence, all degrees of freedom of the projection device's plane pose will have been constrained. To minimize error, points could be selected on other additional criteria, such as the expected accuracy of certain points in the pre scan geometry. Many sets of points of the curvature could be used to calculate separate planes, which are later statistically filtered to approach the best assumed fit. In any case the principle remains the same.
Fig 5A displays a light plane striking an object resulting in an illuminated line curvature that follows the geometry of the object for that section of its surface. Also included are three points selected on the curvature that are least linearly dependent. The x, y (defined at the imaging element array) and z (depth) positions are known since these points are also on the pre scan surface geometry. The pose of the light plane is now also known by determining the angular rotations between the selected points. With the angles known it is now possible to calculate the actual curvature profile through triangulation for all points on the curvature. Another way of explaining and proving the validity of this is to imagine that the light plane curvature that is overlaid onto the pre scan geometry is fixed. Certain rotations of the pre scan data within the degrees of freedom may be used to create a flattened view of the line curvature. These rotations represent the pose of the light ray plane.
The workflow to calculate the light plane pose would resemble the following: three non-linearly dependent points are selected from the light reflected curvature on the object which also intersect the scanning scene of known pre-scan surface geometry. The 3D positions are determined for those selected points using the image-related pre scan data by intersecting the projection rays with this pre scan geometry. The three points form a triangle or plane of which the pose is now known.
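A hedged sketch of one way to pick such farthest separated, least collinear points: among a small subsample of candidate contour points, choose the triple spanning the largest triangle. The function name, the subsample size and the use of NumPy are assumptions of the example only.

```python
import numpy as np
from itertools import combinations

def pick_spread_points(points_3d: np.ndarray, candidates: int = 20):
    """Indices of the three contour points that span the largest triangle.

    `points_3d` holds the curvature points that also lie on the pre-scan
    geometry; only an evenly spaced subsample of `candidates` points is
    searched to keep the number of combinations small.
    """
    idx = np.linspace(0, len(points_3d) - 1, min(candidates, len(points_3d)), dtype=int)
    best, best_area = None, -1.0
    for i, j, k in combinations(idx, 3):
        a, b, c = points_3d[i], points_3d[j], points_3d[k]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))   # triangle area
        if area > best_area:
            best, best_area = (i, j, k), area
    return best
```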
With the light plane pose now known, the points in the image may be triangulated to obtain the actual 3D positions of all the points of the reflection curvature on the object. Hence, new surface points on the object can now be calculated, as well as possible adjustments or corrections of point positions in the pre-scan geometry. This data is stored in computer memory and will later be used, after all images have been processed, to build a surface model of the object that was scanned. The superimposing of the pre-scan geometry onto the scanning scene image for user viewing is not required for processing. However, it does provide the user with insight into which areas need to receive scanning focus as well as the progress of the scanning. What is preferred is that the pre-scan points are directly or indirectly related to the image pixel positions in order to perform the process. To further facilitate the process when performed manually, a computer graphic user interface may be provided that is configured to output measurement graphics and/or sound signals to assist the user through the manual process. The graphics may indicate the current position and orientation of the laser plane as well as indicate through visual and/or audio signals when the user is out of scanning range. The detection of the points that make up the curvature reflected by the light plane is a well-known art. However, as the light plane may take any pose between and including absolute vertical or horizontal, determining the points on the curvature with sub-pixel accuracy requires that the algorithm designed to process this understands something about the pose in order to apply the correct detection tactic. Normalization for the light plane pose will be required, in which the weighted average of detected "bright" points is determined along a set of image points perpendicular to the light plane pose.
Otherwise stated, each pixel point is processed along the normal of the calculated light plane pose. "Bright" points may be points that are above a threshold luminosity value, which serves to minimize video noise and semi-illuminated surface regions caused by ambient light and/or secondary reflections of the light plane off the surface. This is of particular importance for light plane poses that lie between the horizontal and the vertical. Applying this detection tactic increases the tendency to select bright points that yield the best possible accuracy for the actual center point of the segment of bright pixels of the light plane curvature on the object.
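A minimal sketch of this detection tactic is given below, assuming the intensity samples have already been taken along the normal of the estimated light plane pose; the threshold value is purely illustrative:

```python
import numpy as np

def subpixel_line_center(intensities, positions, threshold=40):
    """Intensity-weighted centroid of 'bright' samples taken along a line
    perpendicular to the projected light plane. Returns None when no sample
    exceeds the noise threshold."""
    intensities = np.asarray(intensities, dtype=float)
    positions = np.asarray(positions, dtype=float)
    bright = intensities > threshold
    if not np.any(bright):
        return None
    weights = intensities[bright]
    return float(np.sum(positions[bright] * weights) / np.sum(weights))
```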
The described method and device provide practical means to reduce and even eliminate scanning occlusion or shadowing within the viewing field. They allow the optimal projection device pose to be set in relation to the object's surface, yielding the most accurate information possible about the geometry of the actual surface. They achieve the above dynamically, without having to recalibrate the system, by employing a unique self-calibrating scanning method that allows almost complete flexibility. The described method and device may be used for multi-resolution scans.
These details are given but are not to limit the spirit or scope of the invention as previously described.
Although an embodiment has been shown wherein the object and the camera remain in the same relative position during scanning, movement of the object and the camera relative to each other can be used to improve scanning results. Similarly, a plurality of cameras at different relative positions can be used to improve scanning results. Thus, for example, parts of the object that are not in view of a camera from one relative position can be made visible with a camera from another relative position, or details that are visible at one relative position can be made visible with more refined resolution from another relative position at which the camera is closer to the object.
In practice it is well known that the greater the angle of the projection device in relation to the camera, while the pattern is still clearly visible on the object's surface, the greater the scanning accuracy will be, as a wider section of the imaging element is being employed. However, the greater the angle, the more the pattern will deform in the camera's view for areas of the object's surface that run closer to parallel to the projection beam plane. In addition, the greater the angle, the more chance that the projection device's beam path or plane and/or the camera's line of sight will be obstructed or occluded. More specifically, the projected pattern cannot reflect off areas on the object that it cannot reach at certain angles, nor can the camera view areas where the pattern may reflect off the object while being occluded by the object's own shape. Occlusion or shadowing is a common problem with these types of scanners and many methods have been devised to reduce it. For instance, a second projection device or two or more cameras may be employed to reduce this limitation. Other methods include the use of mirrors and a beam splitter to combine multiple views of the scene.
Using a second projection device to reduce occlusion is straightforward. The two projection devices are positioned at different but known positions. The same portions of the object are illuminated by the projection devices, but at different registration times. The camera images or the derived geometric data are later combined based on the known positions of the projection devices. While occlusion may occur for one of the projectors, it may not occur for the other. In this case the missing data can be recovered.
A plurality of cameras at different positions relative to the object may be used, or a camera that is moved to different positions relative to the object, or the object may be moved (e.g. rotated) while the camera position remains fixed. Alternatively a turntable may be used.
Another approach, which requires only one camera and one projection device, is to use mirrors to allow multiple views of the scanning scene to be realized. These mirror reflections are then combined using a beam splitter to form a single image of the scene. A drawback of this method may be that positioning the components to accurately line up the reflected views is complex, as four different components need to be aligned. In addition the beam splitter suffers from secondary reflections, which lead to ambiguity with regard to isolating the true image pixels of the projected pattern on the object. Use of one or more mirrors with a camera at one position simulates that, as far as the part of the image is concerned wherein the mirror is visible, the camera is effectively at another position. This simulated position can be determined by "unfolding" the line of sight, i.e. by taking the line of sight from the camera to the mirror and defining a virtual line of sight that goes on through the mirror as if it were not there and has the same length as the line of sight from the camera to the mirror. The object may be viewed without a mirror, via a mirror or via a different mirror in different parts of an image from one camera respectively. A plurality of mirrors may be placed so that the line of sight is reflected by multiple mirrors. In this case multiple unfoldings may be used to define the effective camera position.
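The "unfolding" described above amounts to reflecting the real camera position across the mirror plane to obtain the effective (virtual) camera position. A minimal sketch, assuming the mirror is described by a point on its surface and a unit normal (the names are illustrative):

```python
import numpy as np

def reflect_across_mirror(point, mirror_point, mirror_normal):
    """Reflect a 3D point (e.g. the camera centre) across a mirror plane
    defined by a point on the mirror and its normal; the result is the
    effective camera position for the image region seen via that mirror."""
    n = mirror_normal / np.linalg.norm(mirror_normal)
    return point - 2.0 * np.dot(point - mirror_point, n) * n
```

For a line of sight reflected by several mirrors, the reflection can be applied repeatedly, once per mirror, to obtain the effective camera position.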
However configurations with multiple cameras, one or more mirrors, a moving object and/or multiple scanners require calibration. Otherwise the data will not line up correctly. Calibration must define the translation and rotation that must be applied to transform coordinates obtained with the object and a camera at one relative position to coordinates with the object and a camera at another relative position, or effective relative position in the case that one or more mirrors are used.
In an embodiment, the pre-scan and final scan may be used in the case that multiple cameras, cameras that move relative to the object and/or one or more mirrors are used. In this case, the pre-scan may be used to define 3D positions on the object that correspond to locations in a plurality of images obtained from multiple cameras, and/or in a plurality of images obtained at different time points when the camera position relative to the object changes, and/or different locations in one image wherein the object is viewed with and without a mirror, or with different mirrors respectively. Once such 3D positions have been defined by the pre-scan, illuminated points that are detected in the final scan can be combined from any combination of images or image parts to calibrate the geometry of the structured light in the final scan. Subsequently, the calibrated geometry may be used to compute 3D points from detected points in the images or image parts. This may involve applying the translation and rotation associated with the image or image part in order to combine the 3D points obtained using different images or image parts.
Calibration of the translations and rotations between different relative camera positions may be determined by measuring the relative positions and orientations of the cameras and/or mirrors, or by accurately controlling their movement. Distance measuring devices and inclinometers may be used for example. In an embodiment the same configuration of projection devices is used to perform the pre-scan for each of a plurality of camera positions. Surface matching may be used to combine calibrations for different camera positions. For this purpose, the three dimensional shapes of respective surface parts identified by scanning with a camera or cameras at different positions relative to the object are matched, to identify surface parts with matching shapes and to identify the rotation and translation to make the surface parts coincide. This translation and rotation is subsequently used to map three dimensional positions obtained using different relative positions of the camera and the object onto a single space, in which the identified matching surfaces coincide.
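Once corresponding points on the matched surface parts have been identified, the rotation and translation that make them coincide can be estimated, for example, with the well-known SVD-based (Kabsch) least-squares method. The following is a generic sketch, not the specific matching procedure of the embodiment:

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Least-squares rotation R and translation t such that dst ~ R @ src + t,
    for two (N, 3) arrays of corresponding 3D points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```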
In an embodiment, matching is applied to surface parts illuminated by the projection device during pre-scanning, for example using a surface part that is illuminated by the same sheet of light and captured from different camera positions. This facilitates matching, because only the three dimensional shapes of one dimensional curves are involved. Furthermore, matching may be applied to curves obtained for incrementally changed relative positions of the object and the camera. This further reduces the amount of search required to identify a match.
Fig. 7A illustrates a geometry wherein a subject/object (702) is positioned onto a motorized turntable (703) with a mirror (704) and another mirror (705), which are positioned such that their reflections are largely opposite views of the object (702). A projection device (700) projects a light ray plane (711) onto the object (702). The right side mirror (705) displays a reflected image of the curvature (712) of the reflected light plane on the object (702). A camera (701) views the scene. For illustrative purposes the camera image (707) of the scene is shown. The camera (701) is interfaced with a personal computer (706), which displays on its monitor (710) an image (709) of the scene. This image (709) is the processed image from the camera, in which one side of the camera image has been superimposed on the other side. The two line curvatures form a single line curvature (708).
Fig. 7B illustrates the same principal geometry of materials and apparatus layout as in Fig 7A, but now from a top view. Projection device (713) projects a light plane (714) onto the object (719). The object is positioned onto a turntable (720). The reflection of the light plane (714) results in a curvature (718) that follows the object's profile. Mirrors (716 and 717) reflect the view of the object (719) from two different yet symmetrical vantage points.
N1 and N2 represent the mirror normals. R1i and R1o illustrate the reflection ray path from the object (719) to a camera (715) for the left mirror (716). R2i and R2o illustrate the ray path from the object (719) to the camera (715) for the right mirror (717).
Fig. 8A displays a sample image (800) from the camera that is viewing the scanning scene that was described in Fig 7A- 7B. The image (800) displays two separate curvatures (801 and 803). The left curvature (801) is from one mirror reflection and the right curvature (803) reflection is from the other mirror reflection. The image (800) also displays a missing section (802) in the left curvature due to occlusion or shadowing.
Fig. 8B displays the processed image (804), produced by the folding process described above, in which the right curvature has been overlaid onto the left curvature. The resultant curvature (805) displays no gap due to occlusion, unlike the left curvature of Fig 8A.
In one embodiment a scanning device may include a common projection device such as a laser line module, a calibrated imaging device, a turn table, two mirrors and a PC to process the camera image frames. The object that is to be scanned is set onto the turntable and a camera views the object at a position perpendicular to the turntable's rotation axis.
In operation, calibration of the camera may be performed to derive its position in relation to the object as well as to compensate for the camera's optics. The projection device, which may be incandescent or laser based, is placed opposite and in front of the camera's viewing direction but behind the object. The projection device produces a pattern, such as a thin line, that is set parallel to the turntable rotation axis. A thin strip of material may be placed between the object and the camera view to prevent the projected pattern from directly reaching the camera's view. The two mirrors are placed at both sides of the object. These mirrors are preferably front-surface (first-surface) mirrors to prevent ghosting or double imaging. The mirrors are aligned such that the reflected images that they produce of the projected pattern on the object are equal but opposite views. The reflected images must be in the camera's field of view. The camera images are then folded.
Otherwise stated, the half of the image that contains the mirror reflection of one of the mirrors is digitally superimposed onto the other half of the video frame, which contains the reflected image of the other mirror. The folding process may be carried out by comparing the intensity levels of the image pixels, with the higher of the two compared luminosities used to create the superimposed resultant image. The process proceeds by taking the first pixel in a captured image frame and comparing it to the last pixel. It then proceeds to the second pixel and compares it to the last-but-one pixel. This is repeated until the entire frame is scanned. At each comparison of the two pixels, the higher luminosity value of the two is used; in case the luminosity of both pixels is the same, that value is used for the resultant image. For color cameras the same method may be applied; in addition, color comparisons are possible.
The folded video may also need to be vertically reversed, as the reflection curvature of the intersecting projected light plane on the object that is being reflected in the mirror may be the inverse of the required shape. The "folded video" process as described above allows the user to view the camera's folded images and make final adjustments to the positions of the mirrors such that the projected pattern lines up exactly. By folding the video the user is able to make direct and accurate adjustments to the mirror positions in order to line up the images and combine or otherwise overlap the projected patterns. While the resulting combined or folded image is generally a confused mixture of the two views, the contour lines of the light ray plane intersecting the object in both images will overlap as long as the optical pathways are identical in length and attitude. The resulting contour line is the sum of both separate contour lines.
The actual scanning process may now commence. The object is incrementally rotated until it has made a full 360-degree rotation. At each increment the camera views the scene and its images are either stored for post-processing or directly processed. In any case, processing involves folding the video, or otherwise superimposing one half of the video onto the other, in which the highest (or equal) luminosity of two related pixels is chosen to create the final superimposed image that will be employed. Each image taken at each increment shows a unique line or stripe curvature of the illuminated section of the object's surface. Areas that may be occluded in one mirror view may be visible in another. A pattern such as a spot, thin line or parallel stripes of light projected from a projection source illuminates the subject/object, providing contour points or line(s) on the surface of the object. A contour line is viewed, from two symmetrical angles via mirroring surfaces, by a scanning optical sensor such as a common video camera at a point directly opposite to and in front of the projected light plane, with the object in between. Part of the apparatus provides for moving the subject relative to the light projection and sensing assembly.
The coordinates of any point on the object's surface can be derived from the image of the contour curve on the sensor and the position of the object surface relative to the light projection and sensing assembly. The scanning sensor detects points along the contour curve and a digital data set may be generated representing the position of these points in 2D space related to the imaging element. These data, along with indexing data, are stored sequentially in a computer's memory. Since the mirror positions are known and the configuration calibrated, the sensor images of the mirror surfaces that display the contour line from opposite viewing directions in the sensor's field of view are geometrically proportional in size and shape. Each mirror image displays the object's surface contour from a different angle and therefore alleviates or at least significantly reduces the occlusion or shadowing problems encountered if only a single non-mirrored view were used.
Fig. 7A and 7B display the measuring system's principal geometry configuration and layout. Fig 7B is a top view of the layout. Data is collected in a cylindrical coordinate fashion. The object rests on a turntable which has an axis of rotation. Precisely intersecting the heart of said axis is the projected light plane, which is perpendicular to the drawing plane. The light source may be incandescent or laser. Mirrors are placed at both sides of the object with a line of symmetry running along the projected plane of light. A video camera sensor views from the opposite side of the object in relation to the emitting direction of the projection light source. The center of the camera field of view is aligned on the table axis. Each mirror is set at the same tangent angle in relation to the turntable surface such that the sensing camera may view both mirrored reflections of the object. Each mirrored view displays the contour of the projected light plane that follows the curvature on the object's surface from opposite viewing points. The angle of the mirrors in relation to the object, and thereby the viewing angle of the camera in relation to the object, may be derived by placing an object of known size, shape and coordinates onto the turntable. The angle may then be calculated using triangulation, by determining the offset of the contour line or point on the calibration object from a known position and radius. Via the mirror reflected images, the resulting viewing angles between light source and camera may be made large in order to make effective use of the camera sensing array with minimal penalty of occlusion. Otherwise stated, large scanning angles may be permitted while retaining much of the ability to allow measurement in narrow depressions of the object's surface. Fig 8B displays the "folded" resultant image based on the image displayed in Fig. 8A. It should be obvious that the folding of the video image is, in particular, useful in order to allow an operator to adjust the mirror positions such that the images of the light plane contour on the object surface are in optical alignment. This step may be automated and the actual processing may be carried out in a more computationally efficient manner. In any case the principle remains the same.
That is, the image may be processed to translate and/or rotate the part of the image that images light from a mirror so as to align the contour from that part of the image with the contour from another part of the image.
Fig. 8A and 8B demonstrate a scanning solution approach to the video folding process in which one half of the video image is superimposed onto the other half. The result is placed in a computer image array, referred to for explanatory reasons as the image result array. The method scans along the camera image, for example from top to bottom. Starting at the top-left pixel, the process compares this pixel with the top-right pixel based on color and/or luminosity. If the values of both pixels are the same, then this pixel value is placed at the top-left pixel of the image result array. If one of the pixel values is higher than the other, then this higher value is set at the top-left pixel of the image result array. The process is repeated, but now the second pixel from the top-left horizontal position is compared to the last-but-one pixel on the top right. The resultant value is placed in the image result array at the position of the second pixel from the top left. This process is repeated for all image pixels. It should be obvious that other processing approaches may be applied, yet the principal result will be the same. It should also be obvious that, to minimize computational load during or after the actual scanning process, it would be sufficient to simply proceed without having to create a superimposed image result for the operator of the process.
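In array terms, the comparison described above reduces to a pixel-wise maximum of the frame and its horizontally mirrored copy. The sketch below assumes a grayscale frame stored as a NumPy array; for colour frames the same operation may be applied per channel or on a luminosity image, and a vertical reversal may additionally be applied as noted earlier:

```python
import numpy as np

def fold_frame(frame):
    """Superimpose the right half of a grayscale frame onto the left half by
    keeping, for each pixel pair (x, width-1-x) on the same row, the higher
    luminosity value; the left half of the result holds the combined view."""
    mirrored = frame[:, ::-1]                  # horizontally flipped copy
    folded = np.maximum(frame, mirrored)       # brighter of each pixel pair
    return folded[:, :frame.shape[1] // 2]
```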
The following process is involved in the registration of the contour point positions in the folded video. Each line of the image is scanned horizontally from left to right. The value of each pixel is compared to a minimum value in order to minimize noise that may be present in the video. This noise may additionally be suppressed by applying image stacking techniques, in which multiple images are made of a static scene. These images are then statistically stacked together, using per-pixel averaging or median filtering, to form a resultant image with reduced noise. In any case, the positions of the most luminous pixels or "bright pixels" are employed to determine the center point of the contour point position, by averaging based on their horizontal positions. The detected contour points are stored in memory for subsequent processing involving triangulation. This may be accurately achieved, as the contour point positions are directly proportional to the radius of the actual surface point of the object that was illuminated by the projection device. To limit the registration of artifacts, the resultant pixel position may be compared to the same pixel position of the employed image. If the resultant pixel value deviates significantly from the pixel value at that position in the image, then this value may be discarded as a false registration. The process as described above is repeated until a dense map of triangulated contour point positions is achieved.
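The image stacking mentioned here can be sketched as follows, assuming a static scene captured as N grayscale frames; median filtering is shown, and plain averaging works analogously:

```python
import numpy as np

def stack_frames(frames, use_median=True):
    """Combine multiple frames of a static scene into one low-noise image.
    'frames' is an (N, H, W) array of repeated captures."""
    frames = np.asarray(frames, dtype=float)
    return np.median(frames, axis=0) if use_median else frames.mean(axis=0)
```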
It should be appreciated that the embodiments of measurement and calibration using mirrors can also be applied by themselves, without using a pre-scan to serve as a reference during a final scan. Even without this pre-scan, these embodiments provide a practical and cost-effective means to reduce and even eliminate scanning occlusion. They achieve this in a rapid manner in a single scanning pass. They provide a geometry and processing method which ameliorate the shadowing or occlusion problems inherent to the scanning of complex object surface irregularities. They provide a system giving multiple viewing angles of a single contour line that represents a curvature section of an object. They combine the multiple views into a single virtual image without ghosting effects. They realize a relatively easy to configure layout. They solve the problem with prior art devices that highly accurate calibration must be provided or that projected patterns may interfere or overlap, leading to ambiguity. Furthermore they solve the problem with prior art devices that a multiple camera setup can be expensive and that the processing load can be significantly increased. Compared to other mirror techniques, they avoid the complexity of accurately lining up the reflected views of four different components, as well as the problems with secondary reflections in beam splitters.
The concluding details of the present invention are given but are not to limit the spirit or scope of the invention as previously described.
Although an example has been given wherein a pre scan is used before the final scan, it should be appreciated that more generally the pre scan may be a reference scan performed before, after or simultaneously with the final scan. It suffices that at any time data from both scans is available. Using a pre- scan has the advantage that the image locations for which 3D positions are available can be displayed at the time of the final scan, so that a user can adapt the final scan to make it cross such positions. Furthermore, it should be appreciated that additional scans after the final scan may be used. Furthermore, it should be appreciated that a final scan may be used as reference scan for another final scan.
Although three points suffice to determine the geometrical parameters of a light plane used during the final scan, it should be appreciated that more points may be used. A least square estimate of the geometrical parameters may be determined, which minimizes a sum of square errors between the observed positions and positions predicted with the estimated parameters. If the movement of the plane is constrained during the final scan, observations of less than three points may suffice to determine the geometrical parameters.
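A least-squares plane estimate from more than three observed points can be obtained, for instance, from the singular vector of the centred point cloud with the smallest singular value. This is a generic sketch rather than the estimator of the embodiment:

```python
import numpy as np

def fit_plane_least_squares(points):
    """Fit a plane to an (N, 3) array of 3D points; returns (centroid,
    unit_normal) minimising the sum of squared orthogonal distances."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                    # direction of least variance
    return centroid, normal / np.linalg.norm(normal)
```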
Although the use of a light structure in the form of a light plane has been used, it should be appreciated that other types of light structure may be used, such as curved surfaces, parallel surfaces, grids of surfaces etc. Light planes have the advantage that they are easy to make and that relatively simple processing suffices to calibrate their parameters.
Although a camera has been shown as an example of an image sensor, it should be realized that other image sensors, such as scanning devices, may be used. Although examples have been shown wherein the projection device is an incandescent or laser device, it should be appreciated that other types of light sources, such as LEDs, LED arrays or gas discharge lamps, may be used. The projection device may be configured to produce a static light structure, defined by apertures and/or lens shapes for example, or a structure realized by means of a scanning mirror (e.g. rotating). The scanning in the projection device is preferably driven faster than speeds that are achievable by manual movement of the projection device, e.g. by a motor or other actuator. If the projection device scans at a slower speed than the frame speed of the image sensor, the image from the image sensor may be formed by combining a plurality of images from the image sensor, or by combining detected positions where the light from the projection device is seen at the object in the images. Preferably, the number of images in such a plurality is so small that movement of the projection device can produce no more than one pixel difference of illuminated positions within the plurality of images.
The following concluding summary is given but is not to limit the spirit or scope of the invention as previously described. A method is provided for converting the geometrical shape of an actual object into a data set of points that represent the surface of the object, using an incandescent or laser projection device and a sensing imager element such as a video camera, said method comprising: projecting a structured light line from said projector toward said object, the object having been pre-scanned to establish reference geometry that is superimposed on images of said camera such that the reference geometry of the object and the object image are in alignment; and swinging said light projector to cause said light line to move across said object; wherein images of said object are recorded or otherwise captured with said camera imager, pre-calibrated with said reference geometry.
First a pre-scan is made of the object/subject/scene in order to gain a useful approximation of the object's overall geometry. This pre-scan data of the subject is then geometrically related to the imaging element's image of the same subject. Second, a subsequent scanning session is performed allowing for dynamically calibrated scanning of the object or scene. The second scanning process uses the pre scan data as a reference of known geometry in order to understand the pose of the light plane and thereby calculate the 3D position of registered points of the object.
The method may further comprise detecting respective deformations of said light line reflected from said object in each said images; deriving a contour from said deformations in each of said images; and merging said contours from each of said images to convert said object geometry into a data set of 3D points that represent said object geometry.
The method may further comprise detecting respective portions of said light falling on said object and overlaid on reference geometry; and deriving a light plane crossing said reference geometry. Furthermore, said contour may fall on said light plane and may be uniquely determined with respect to said pre-calibrated imager.
A method is also provided for generating a 3D representation of an object using a projection device, said method comprising: providing a modeling system comprising a sensing element such as a video camera; wherein said object is placed in front of said imager; deriving a calibration or reference scan; swinging a structured light line within a known radial or angular position; recording, respectively and sequentially, each scene of said object to produce a sequence of images; and deriving contours of said particular area from each of said images; employing said contours to triangulate point positions in each image and sequentially combining these to form a 3D representation of the said object to serve as reference geometry.
A method is also provided for generating a 3D representation of an object using a projection device, said method comprising: providing a modeling system comprising a sensing element such as a video camera, wherein said object is placed in front of said imager; swinging a structured light line across said object; recording, respectively and sequentially, each scene of said object to produce a sequence of images; deriving contours of said particular area from each of said images; employing said reference geometry recited in previous sections and contours from said images to derive the pose of the light plane emitted by said projection device; and triangulating point positions based on the calculated pose in each image and sequentially combining these to form a 3D representation of the said object.
The method may comprise detecting, in each of said images, said respective portions of said light line falling on said object and superimposed on said reference.
The method may still further comprise detecting respective deformations of the light from said projection device reflected from said object in each of said images; and calculating a set of curvilinear points representing one of said contours from said respective deformations in each of said images.
Furthermore, swinging the structured light line may be performed manually by an operator.
Another aspect is that a high-speed non-contacting mensuration of three-dimensional surfaces is provided, with an improved occlusion reduction technique comprising: providing a plane of light intersecting said surface and producing a contour line of the curvature of said surface; moving said surface relative to said plane of light; viewing said contour line from both sides of said plane of light using mirrors; combining the images of said contour line into a virtual single composite image; and sensing said resultant image to derive the coordinates of the contour line. In other words, an apparatus for high-speed non-contacting mensuration of three-dimensional surfaces may be provided, with an improved occlusion reduction technique comprising: a means for providing a plane of light intersecting said surface and producing a contour line of the curvature of said surface; a means for moving said surface relative to said plane of light; a means for viewing said contour line from both sides of said plane of light using mirrors; a means for combining the images of said contour line into a virtual single composite image; and a means for sensing said resultant image to derive the coordinates of the contour line. The means for viewing said contour line may comprise a mirror on each side of said light plane, positioned to reflect the contour line images to said means for combining.
Unique solutions to prior art problems are offered and new and useful attributes are added to the art of 3D scanning.
It is an objective to provide practical means to reduce and even eliminate scanning occlusion or shadowing within the viewing field. It may also be an objective to allow the optimal projection device pose to be set in relation to the object's surface so as to yield the most accurate information possible about the geometry of the actual surface. It may also be an objective to provide a means to achieve the above dynamically, without having to recalibrate the system, by employing a unique self-calibrating scanning method that allows almost complete flexibility. It may also be an objective to provide the means to make multi-resolution scans. It may also be an objective of this invention to provide the means to allow segmental scanning of surfaces with minimal recalibration effort pertaining to the configuration of components. Other advantages that are inherent to this new 3D scanning method are made apparent by the description of the embodiments.
A method is disclosed for modeling 3D objects or scenes, or otherwise converting their geometrical shape into a data set of 3D points that represent the object's 3D surface. These devices are commonly referred to as 3D scanners, 3D range scanners or finders, 3D object modelers, 3D digitizers and 3D distance measurement devices.
It will be evident to those skilled in the art of 3D scanning that there are many elements involved in the process. This description adheres to the core aspects and principles involved without going into great detail about well- known methods, processes or commonly used instruments.

Claims
1. A method of generating a 3D representation of an object using an image sensor and a projection device for projecting structured light, with the object located in an area that is visible in the image sensor, said method comprising: performing a reference scan under calibrated conditions to determine 3D positions of points on the object that are visible to the image sensor; applying structured light across the object from the structured light projection device; deriving a set of points on the object from an image captured by the image sensor that are selected by the structured light during the application of the structured light; calibrating a geometry property of the structured light at a time of detection by the image sensor, using 3D positions of points in the set of points that were also determined from the reference scan; employing said set of points and the calibrated geometric property to triangulate point positions on the contour; and combining the triangulated point positions to form a 3D representation of the said object.
2. A method according to claim 1, comprising illuminating the object using a reference light structure with calibrated geometrical properties during the reference scan and determining the 3D positions for points on the object that the image sensor detects to be illuminated by the reference light structure.
3. A method according to claim 1, comprising
- detecting first, second and third points in the image that are illuminated both during the reference scan and said projecting, - computing a 3D position and orientation of the lighting structure from the 3D positions of the first, second and third point in the reference scan.
4. A method according to claim 1, wherein the image sensor and the object are in a same relative position with respect to each other during the reference scan and when the structured light is applied across the object.
5. A method according to claim 1, wherein the structured light comprises a light line and the set of points is a contour in the image.
6. A method according to claim 1, comprising manually positioning the structured light during said step of applying the structured light.
7. A method according to claim 1, comprising displaying an image of the object, with an indication that identifies points on the object for which 3D positions have been determined in the reference scan.
8. A method according to claim 1, providing a mirror and using the image sensor to capture light from the object that is reflected from the mirror in an image portion of images captured by the image sensor, - illuminating the object using a reference light structure with calibrated geometrical properties during the reference scan and determining the 3D positions for points on the object that the image sensor detects to be illuminated by the reference light structure in said image portion; applying structured light across the object from the structured light projection device with the mirror and the image sensor in a same relative position and orientation as during the reference scan.
9. A method of generating a 3D representation of an object, comprising a reference scan and a main scan, the reference scan comprising: - providing a first spatial light structure that is intersected by the object, the first spatial light structure having a calibrated geometry; - capturing a first image of the object with an image sensor when the first spatial light structure is applied; - detecting a first contour of image points where positions on the object that are selected by the first spatial light structure are visible in the first image, each image point on the first contour defining a 3D position on the object where a ray path that ends at the image point intersects the first light structure; the main scan comprising: - providing a second spatial light structure that is intersected by the object; - capturing a second image of the object with the image sensor when the second spatial light structure is applied; - detecting a second contour of image points where positions on the object that are selected by the second spatial light structure are visible in the second image; - detecting a point of intersection of the first and second contour; - determining a geometrical parameter of the second light structure using the 3D position defined by the point of intersection; - determining 3D positions of positions on the object that are visible in the second image at further points on the second contour using the determined geometrical parameter.
10. A method according to claim 9, comprising providing a plurality of first spatial light structures in the reference scan, each first light structure having a calibrated geometry, and detecting a plurality of first contours of image points where positions on the object are visible that are selected by respective ones of the first light structures, and wherein the main scan comprises: - detecting respective points of intersection of the second contour and respective ones of the first contours; - determining geometrical parameters of the second light structure using the 3D positions defined by the respective points of intersection; - determining 3D positions of positions on the object that are visible in the second image at further points on the second contour using the determined geometrical parameters.
11. A method according to claim 9, wherein the first and second image are captured at mutually different times, when the second and first light structure are not present respectively.
12. An object scanning apparatus for generating a 3D representation of an object, the apparatus comprising an image sensor, a light projection device configured to project structured light, and a computer, wherein the computer is configured to - determine 3D positions of points on the object that are visible to the image sensor during a reference scan under calibrated conditions; - derive a set of points on the object from an image captured by the image sensor that are selected by structured light from the light projection device during application of the structured light; - calibrate a geometry property of the structured light at a time of detection by the image sensor, using 3D positions of points in the set of points that were also determined from the reference scan; - employ said set of points and the calibrated geometric property to triangulate point positions on the contour; and - combine the triangulated point positions to form a 3D representation of the said object.
13. A method of measuring positions on a three-dimensional surface comprising: providing a structured light plane intersecting said surface, whereby a contour line of the curvature of said surface is produced; moving the plane structure and the surface relative to each other; providing a mirror to reflect light from the plane structure; capturing an image of the contour line in an image sensor, the image containing image portions showing the contour line from mutually opposite sides of the light structure, the mirror being used to reflect light from a first one of the mutually opposite sides to the image sensor; deriving 3D coordinates of the contour line from the image.
14. A method according to claim 13, wherein mirrors are used, positioned to reflect the contour line images to said image sensor from mutually different angles relative to the light structure.
15. A method according to claim 13, comprising applying image processing to the image to translate and/or rotate a part of the image with a translation and/or rotation that aligns the contour in the image portion wherein the object is shown via the mirror with the contour in a further portion of the image.
PCT/NL2009/050140 2008-03-24 2009-03-24 A dynamically calibrated self referenced three dimensional structured light scanner WO2009120073A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US7041808P 2008-03-24 2008-03-24
US61/070,418 2008-03-24
US7060608P 2008-03-25 2008-03-25
US61/070,606 2008-03-25

Publications (2)

Publication Number Publication Date
WO2009120073A2 true WO2009120073A2 (en) 2009-10-01
WO2009120073A3 WO2009120073A3 (en) 2010-11-11

Family

ID=41114484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2009/050140 WO2009120073A2 (en) 2008-03-24 2009-03-24 A dynamically calibrated self referenced three dimensional structured light scanner

Country Status (1)

Country Link
WO (1) WO2009120073A2 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1777485A1 (en) * 2004-08-03 2007-04-25 Techno Dream 21 Co., Ltd. Three-dimensional shape measuring method and apparatus for the same
WO2010034301A2 (en) * 2008-09-25 2010-04-01 Technische Universität Braunschweig Carolo-Wilhelmina 3d geometrical acquisition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SIMON WINKELBACH ET AL: "Low-Cost Laser Range Scanner and Fast Surface Registration Approach", 1 January 2006 (2006-01-01), Pattern Recognition: 28th DAGM Symposium, Berlin, Germany, September 12-14, 2006, Proceedings, Lecture Notes in Computer Science, Springer, Berlin, DE, pp. 718-728, XP019043113, ISBN: 978-3-540-44412-1; paragraphs [0001], [0002], [02.1], [02.2]; abstract *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010034301A3 (en) * 2008-09-25 2010-05-20 Technische Universität Braunschweig Carolo-Wilhelmina 3d geometrical acquisition method and device
WO2010138543A1 (en) * 2009-05-29 2010-12-02 Perceptron, Inc. Hybrid sensor
US7995218B2 (en) 2009-05-29 2011-08-09 Perceptron, Inc. Sensor system and reverse clamping mechanism
US8031345B2 (en) 2009-05-29 2011-10-04 Perceptron, Inc. Hybrid sensor
US8227722B2 (en) 2009-05-29 2012-07-24 Perceptron, Inc. Sensor system and reverse clamping mechanism
US8233156B2 (en) 2009-05-29 2012-07-31 Perceptron, Inc. Hybrid sensor
US8243289B2 (en) 2009-05-29 2012-08-14 Perceptron, Inc. System and method for dynamic windowing
US8395785B2 (en) 2009-05-29 2013-03-12 Perceptron, Inc. Hybrid sensor
US9947112B2 (en) 2012-12-18 2018-04-17 Koninklijke Philips N.V. Scanning device and method for positioning a scanning device
US9049369B2 (en) 2013-07-10 2015-06-02 Christie Digital Systems Usa, Inc. Apparatus, system and method for projecting images onto predefined portions of objects
EP2824923A1 (en) 2013-07-10 2015-01-14 Christie Digital Systems Canada, Inc. Apparatus, system and method for projecting images onto predefined portions of objects
US10607397B2 (en) 2015-06-04 2020-03-31 Hewlett-Packard Development Company, L.P. Generating three dimensional models
US10852403B2 (en) 2015-06-10 2020-12-01 Hewlett-Packard Development Company, L.P. 3D scan tuning
DE102016011718B4 (en) 2016-09-30 2022-12-15 Michael Pauly Method and device for determining a static size of an object
CN109064533A (en) * 2018-07-05 2018-12-21 深圳奥比中光科技有限公司 A kind of 3D loaming method and system
CN109064533B (en) * 2018-07-05 2023-04-07 奥比中光科技集团股份有限公司 3D roaming method and system
US11143499B2 (en) * 2018-09-18 2021-10-12 Electronics And Telecommunications Research Institute Three-dimensional information generating device and method capable of self-calibration
CN110533769A (en) * 2019-08-20 2019-12-03 福建捷宇电脑科技有限公司 A kind of leveling method opening book image and terminal
CN110533769B (en) * 2019-08-20 2023-06-02 福建捷宇电脑科技有限公司 Flattening method and terminal for open book image
CN116147535B (en) * 2023-02-27 2023-08-04 北京朗视仪器股份有限公司 Color structure light calibration method and system

Also Published As

Publication number Publication date
WO2009120073A3 (en) 2010-11-11

Similar Documents

Publication Publication Date Title
WO2009120073A2 (en) A dynamically calibrated self referenced three dimensional structured light scanner
US10088296B2 (en) Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
US10401143B2 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
EP2751521B1 (en) Method and system for alignment of a pattern on a spatial coded slide image
US20060072123A1 (en) Methods and apparatus for making images including depth information
US7456842B2 (en) Color edge based system and method for determination of 3D surface topology
Davis et al. A laser range scanner designed for minimum calibration complexity
US20170307363A1 (en) 3d scanner using merged partial images
US20050128196A1 (en) System and method for three dimensional modeling
CN114998499B (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
WO1998005157A2 (en) High accuracy calibration for 3d scanning and measuring systems
Lanman et al. Surround structured lighting: 3-D scanning with orthographic illumination
WO2016040229A1 (en) Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
WO2016040271A1 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
JP2007508557A (en) Device for scanning three-dimensional objects
Lanman et al. Surround structured lighting for full object scanning
KR20190019059A (en) System and method for capturing horizontal parallax stereo panoramas
KR20200046789A (en) Method and apparatus for generating 3-dimensional data of moving object
WO2005090905A1 (en) Optical profilometer apparatus and method
JP7312594B2 (en) Calibration charts and calibration equipment
CA2810587C (en) Method and system for alignment of a pattern on a spatial coded slide image
Rohith et al. A camera flash based projector system for true scale metric reconstruction
JP2004086643A (en) Data acquiring device for computer graphics and data acquiring method and data acquiring program for computer graphics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09725934

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.01.2011)

122 Ep: pct application non-entry in european phase

Ref document number: 09725934

Country of ref document: EP

Kind code of ref document: A2