3-D scanning using a dip scanner: The object is dipped in a bath of water (left) by a robot arm. The quality of the reconstruction improves as the number of dipping orientations is increased (from left to right).
Credit: ACM
An international group of researchers has developed a technique for 3-D scanning that reconstructs complex objects more accurately than existing methods. The innovative approach combines robotics and water.
“Using a robotic arm to immerse an object along an axis at various angles, and measuring the volume of fluid displaced by each dip, we combine the sequence of measurements to create a volumetric representation of the object’s shape,” says Prof. Andrei Sharf of Ben-Gurion University of the Negev’s Department of Computer Science.
“The key feature of our method is that it employs fluid displacements as the shape sensor,” Prof. Sharf explains. “Unlike optical sensors, the liquid has no line-of-sight requirements. It penetrates cavities and hidden parts of the object, as well as transparent and glossy materials, thus bypassing all visibility and optical limitations of conventional scanning devices.”
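As a rough illustration of that measurement step, the sketch below simulates a single dip of a voxelized object: the object is re-oriented, lowered into the bath one voxel layer at a time, and the cumulative displaced volume is recorded after each step. This is a simplified toy model rather than the published system; the function name dip_displacements, the use of scipy.ndimage.rotate, and the one-layer-per-step immersion schedule are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate  # used here only to re-orient the voxel grid

def dip_displacements(occupancy, angles_deg):
    """Simulate one dip along the grid's z-axis after re-orienting the object.

    Returns the cumulative displaced volume (in voxels) after each one-voxel
    step of immersion. A simplified illustration, not the published algorithm.
    """
    # Re-orient the object so the chosen dip direction becomes the z-axis.
    rotated = occupancy.astype(float)
    rotated = rotate(rotated, angles_deg[0], axes=(0, 1), order=0, reshape=False)
    rotated = rotate(rotated, angles_deg[1], axes=(0, 2), order=0, reshape=False)
    rotated = rotated > 0.5

    # Submerging one more layer displaces fluid equal to the number of occupied
    # voxels in that layer (Archimedes' principle, ignoring trapped air).
    layer_volumes = rotated.sum(axis=(0, 1))   # occupied voxels per z-slice
    return np.cumsum(layer_volumes)            # bottom slice enters the water first

# Example: a solid 4x4x4 cube inside a 16^3 grid, dipped without any rotation.
grid = np.zeros((16, 16, 16), dtype=bool)
grid[6:10, 6:10, 6:10] = True
print(dip_displacements(grid, angles_deg=(0, 0))[:12])
```

Repeating this for many dip orientations produces the family of displacement curves that is then combined into a volumetric representation of the object.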
The researchers used Archimedes’ principle of fluid displacement (the volume of displaced fluid equals the volume of the submerged object) to recast surface reconstruction as a volume measurement problem. This principle is the foundation of the team’s solution to the limitations of current 3-D shape reconstruction.
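To see why such measurements suffice for reconstruction, note that the increase in displaced volume between two consecutive immersion depths equals the volume of the newly submerged slab of the object. Each dip orientation therefore contributes a set of linear constraints on an unknown occupancy grid, and many orientations can be stacked into one linear system. The sketch below solves such a system by plain least squares for a toy 2-D shape; the helper slab_matrix, the pixel-level depth binning, and the unregularized solve are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

def slab_matrix(n, angles_deg):
    """Linear operator mapping a flattened n-by-n occupancy image to per-slab
    volume increments for each dip direction. Each row sums the pixels whose
    projection onto the dip axis falls into one depth bin."""
    ys, xs = np.mgrid[0:n, 0:n]
    coords = np.stack([xs.ravel() - n / 2, ys.ravel() - n / 2])  # centered pixel coordinates
    rows = []
    for theta in np.deg2rad(angles_deg):
        depth = coords[0] * np.cos(theta) + coords[1] * np.sin(theta)
        bins = np.round(depth).astype(int)
        for b in range(bins.min(), bins.max() + 1):
            rows.append((bins == b).astype(float))               # one depth slab = one equation
    return np.array(rows)

# Toy ground truth: a 2-D disc on a 24x24 grid.
n = 24
ys, xs = np.mgrid[0:n, 0:n]
truth = ((xs - 12) ** 2 + (ys - 12) ** 2 <= 36).astype(float)

A = slab_matrix(n, angles_deg=np.arange(0, 180, 5))   # 36 dip orientations
b = A @ truth.ravel()                                  # simulated displacement increments
recon, *_ = np.linalg.lstsq(A, b, rcond=None)
error = np.abs(recon.reshape(n, n).round() - truth).mean()
print(f"mean reconstruction error: {error:.3f}")
```

With only a few orientations the system is underdetermined and the recovered shape is coarse, which mirrors the figure caption’s observation that the reconstruction improves as the number of dipping orientations increases.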
The group demonstrated the new technique on 3-D shapes of varying complexity, including an elephant sculpture, a mother and child hugging, and a DNA double helix. The results show that the dip reconstructions closely match the original 3-D models.
The new technique is related to computed tomography, an imaging method that uses X-rays to produce accurate cross-sectional images. However, tomography-based devices are bulky and expensive and can only be used in a safe, customized environment.
Prof. Sharf says, “Our approach is both safe and inexpensive, and a much more appealing alternative for generating a complete shape at a low computational cost, using an innovative data collection method.”
The researchers will present their paper, “Dip Transform for 3D Shape Reconstruction,” during SIGGRAPH 2017 in Los Angeles, July 30 to August 3. It is also published in the July issue of ACM Transactions on Graphics. SIGGRAPH spotlights the most innovative computer graphics research and interactive techniques worldwide.
Story Source: Materials provided by American Associates, Ben-Gurion University of the Negev. Note: Content may be edited for style and length.
Journal Reference:
Kfir Aberman, Oren Katzir, Qiang Zhou, Zegang Luo, Andrei Sharf, Chen Greif, Baoquan Chen, Daniel Cohen-Or. Dip transform for 3D shape reconstruction. ACM Transactions on Graphics, 2017; 36(4): 1. DOI: 10.1145/3072959.3073693