GENERAL DESCRIPTION OF 3D SCANNING TECHNOLOGY

The past few years have seen dramatic decreases in the cost of three-dimensional (3D) scanning equipment, as well as in the cost of commodity computers with hardware graphics display capability. These trends, coupled with increasing Internet bandwidth, are making the use of complex 3D models accessible to a much larger audience. The potential exists to expand the use of 3D models beyond the well-established games market to new applications ranging from virtual museums to e-commerce. To realize this potential, the pipeline from data capture to usable 3D model must be further developed. In this report we examine the state of the art in processing the output of range scanners into efficient numerical representations of objects for computer graphics applications.

What is 3D scanning?

3D scanning is a technology for creating high-precision 3D models of real-world objects. A 3D scanner takes multiple snapshots of an object from different viewpoints; the shots are then fused into a single 3D model, a detailed three-dimensional replica of the object that you can rotate and view from different angles on your computer.

What is 3D scanning used for?

3D scanning is used in a growing range of cutting-edge workflows. In the marine industry, for example, it is widely used to capture as-built data within the short time frames allowed on site. Compared with traditional manual measurement, 3D scanning takes considerably less time and delivers higher accuracy. The captured data can then be brought back to the office, used for reverse engineering, and imported into CAD software to check how well a part will fit during installation.

How does a 3D scanner work?

A 3D scanner collects distance and surface information, instead of or in addition to color information, within its field of view. The “picture” produced describes the distance to the surface at each sampled point; combined with the known viewing geometry of the sensor, this identifies the three-dimensional position of every point.
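
As a minimal illustration, the sketch below back-projects such a range image into a set of 3D points. It assumes a simple pinhole sensor model; the intrinsic parameters (fx, fy, cx, cy) and the synthetic depth values are illustrative only, not taken from any particular scanner.

import numpy as np

def range_image_to_points(depth, fx, fy, cx, cy):
    """Convert a range image into an (N, 3) array of 3D points.

    depth[v, u] holds the distance along the optical axis to the surface
    seen at pixel (u, v); zeros mark pixels with no return.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx   # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Example: a synthetic 4x4 range image of a flat surface 2 m from the sensor.
depth = np.full((4, 4), 2.0)
points = range_image_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)   # (16, 3)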

Line-of-sight Error

After the scans have been aligned, the individual points would ideally lie exactly on the surface of the reconstructed object. However, one still needs to account for residual error due to noise in the measurements, inaccuracy of sensor calibration, and imprecision in registration. The standard approach to dealing with this residual error is to define new estimates of the actual surface points by averaging samples from overlapping scans. Often the specific technique is chosen to take advantage of the data structures used to integrate the multiple views into one surface; because of this, details of the assumed error model and averaging method are often lost or overlooked by authors.
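
One simple way to realize such averaging, assuming the scans are already aligned in a common coordinate frame, is to group nearby points from different scans into small grid cells and replace each group by its mean. The sketch below is only an illustrative stand-in for the more elaborate schemes used in published systems; the cell size and noise levels are arbitrary.

import numpy as np
from collections import defaultdict

def average_overlapping_samples(scans, cell_size=0.005):
    """Average points from overlapping, already-aligned scans.

    Points falling in the same cubic cell (cell_size units on a side) are
    treated as samples of the same surface point and replaced by their mean.
    """
    cells = defaultdict(list)
    for scan in scans:                 # each scan: (N_i, 3) array of points
        for p in scan:
            key = tuple(np.floor(p / cell_size).astype(int))
            cells[key].append(p)
    return np.array([np.mean(pts, axis=0) for pts in cells.values()])

# Two noisy, overlapping scans of the same small planar patch.
rng = np.random.default_rng(0)
base = rng.uniform(0.0, 0.1, size=(200, 3))
scan_a = base + rng.normal(0.0, 0.001, base.shape)   # simulated sensor noise
scan_b = base + rng.normal(0.0, 0.001, base.shape)
fused = average_overlapping_samples([scan_a, scan_b])
print(len(fused), "averaged surface points")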

Postprocessing

Postprocessing operations are often necessary to adapt the model resulting from scan integration to the application at hand. Quite common is the use of mesh simplification techniques to reduce mesh complexity.
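
As a rough illustration of mesh simplification, the sketch below implements vertex clustering, one of the simplest strategies: vertices are snapped to the centroids of grid cells and triangles that collapse are discarded. This is not the specific technique of any particular system; production pipelines more often use edge-collapse methods driven by an error metric.

import numpy as np

def simplify_by_vertex_clustering(vertices, faces, cell_size):
    """Reduce mesh complexity by merging all vertices that fall in the
    same grid cell into their centroid, then dropping degenerate faces."""
    keys = np.floor(vertices / cell_size).astype(int)
    _, cluster_id = np.unique(keys, axis=0, return_inverse=True)
    cluster_id = cluster_id.ravel()
    n_clusters = cluster_id.max() + 1
    counts = np.bincount(cluster_id, minlength=n_clusters).astype(float)
    new_vertices = np.zeros((n_clusters, 3))
    for dim in range(3):              # centroid of each cluster, one axis at a time
        new_vertices[:, dim] = np.bincount(
            cluster_id, weights=vertices[:, dim], minlength=n_clusters) / counts
    new_faces = cluster_id[faces]     # remap face indices to the merged vertices
    keep = np.array([len(set(f)) == 3 for f in new_faces], dtype=bool)
    return new_vertices, new_faces[keep]

# Tiny example: a strip of 6 triangles whose closely spaced vertices merge.
verts = np.array([[0.0, 0, 0], [0.4, 0, 0], [0.6, 0, 0], [1.0, 0, 0],
                  [0.0, 1, 0], [0.4, 1, 0], [0.6, 1, 0], [1.0, 1, 0]])
tris = np.array([[0, 1, 5], [0, 5, 4], [1, 2, 6],
                 [1, 6, 5], [2, 3, 7], [2, 7, 6]])
v2, f2 = simplify_by_vertex_clustering(verts, tris, cell_size=0.5)
print(len(verts), "->", len(v2), "vertices;", len(tris), "->", len(f2), "triangles")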

To relate a texture map to the integrated mesh, the surface must be parameterized with respect to a 2D coordinate system. A simple parameterization is to treat each triangle separately and to pack all of the individual texture maps into a larger texture image. However, the use of mip-mapping in this case is limited since adjacent pixels in the texture may not correspond to adjacent points on the geometry. Another approach is to find patches of geometry which are height fields that can be parameterized by projecting the patch onto a plane. Stitching methods use this approach by simply considering sections of the scanned height fields as patches.
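
The sketch below illustrates the height-field case: a patch is projected onto its best-fit plane, found here with a principal-component analysis, and the two in-plane coordinates are rescaled to the unit square to serve as texture coordinates. The PCA plane fit is an assumption made for this illustration; any projection plane over which the patch is a height field would serve, and the method breaks down if the projection folds over itself.

import numpy as np

def parameterize_height_field(vertices):
    """Assign (u, v) texture coordinates to a height-field patch by
    projecting it onto its best-fit plane and rescaling to [0, 1]."""
    centered = vertices - vertices.mean(axis=0)
    # Principal axes of the patch; the first two span the projection plane.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ axes[:2].T         # coordinates within the plane
    uv -= uv.min(axis=0)
    uv /= uv.max(axis=0)               # normalise to the unit square
    return uv

# Example: a gently bumped 10x10 grid patch (a height field over the xy-plane).
g = np.linspace(0.0, 1.0, 10)
x, y = np.meshgrid(g, g)
z = 0.05 * np.sin(3 * x) * np.cos(3 * y)
patch = np.column_stack((x.ravel(), y.ravel(), z.ravel()))
uv = parameterize_height_field(patch)
print(uv.min(axis=0), uv.max(axis=0))  # [0. 0.] and [1. 1.]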

Reference: Fausto Bernardini and Holly Rushmeier, "The 3D Model Acquisition Pipeline," Computer Graphics Forum, 21(2), 2002.