3D e-Heritage / Cyber Archaeology
We developed a mobile scanning system, called the Rail Sensor, for fast and accurate capture of 3D range data in corridors.
Our system consists of a line-scan LiDAR and a panoramic camera,
which are mounted on a platform that moves along rails.
While the laser scanner works in profiling mode, capturing 2D line structures,
the panoramic camera records panoramic video.
The sensor motion is estimated robustly by a sensor-fused 2D/3D feature tracking method,
which allows the structures to be reconstructed accurately from the 2D scan lines.
- R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, "A 3D Reconstruction with High Density and Accuracy using Laser Profiler and Camera Fusion System on a Rover," In Proc. International Conference on 3D Vision (3DV), Oct 27, 2016, Palo Alto.
- B. Zheng, T. Oishi, K. Ikeuchi, "Rail Sensor: A Mobile Lidar System for 3D Archiving the Bas-reliefs in Angkor Wat," IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 7, pp. 59-63, July 27, 2015.
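Once a pose has been estimated for each scan line, accumulating the 2D profiles into one 3D model is straightforward. The sketch below assumes the profiling plane is the scanner's x-z plane and that per-line poses are already given by the tracking step; both are illustrative conventions, not the papers' exact formulation.

```python
import numpy as np

def assemble_scan_lines(scan_lines, poses):
    """Place 2D profile scan lines into a common 3D frame.

    scan_lines: list of (N_i, 2) arrays, points in the scanner's profiling
                plane (here taken as depth and height).
    poses:      list of (R, t) pairs, the estimated sensor pose per scan
                line (R: 3x3 rotation, t: 3-vector translation).
    Returns a single (sum N_i, 3) point cloud.
    """
    clouds = []
    for line, (R, t) in zip(scan_lines, poses):
        # Lift the 2D profile into 3D: profiling plane is x-z; motion
        # along the rail enters through the per-line pose.
        pts3d = np.column_stack([line[:, 0], np.zeros(len(line)), line[:, 1]])
        clouds.append(pts3d @ R.T + t)
    return np.vstack(clouds)
```

With accurate poses, consecutive profiles interleave into a dense, undistorted surface.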
We developed a flying sensor system to capture 3D data aerially.
The system, consisting of an omni-directional laser scanner and a panoramic camera,
can be mounted under a mobile platform to achieve aerial scanning with high resolution and accuracy.
Since the laser scanner often requires several minutes
to complete an omni-directional scan, the raw data is seriously distorted by the unknown and uncontrollable
movement of the platform during the scanning period. Our approach recovers the sensor motion by exploiting the spatial
and temporal features extracted from both the image sequences and the point clouds.
- B. Zheng, X. Huang, R. Ishikawa, T. Oishi, K. Ikeuchi, "A New Flying Range Sensor: Aerial Scan in Omni-directions," In Proc. International Conference on 3D Vision (3DV), pp. 623-631, Oct. 19-22, 2015, Lyon, France.
- R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 7, pp. 89-93, July 27, 2015.
- A. Banno, T. Masuda, T. Oishi, and K. Ikeuchi, "Flying Laser Range Sensor for Large-Scale Site-Modeling and Its Applications in Bayon Digital Archival Project," International Journal of Computer Vision (IJCV), Vol. 78, No. 2-3, pp. 207-222, Jul. 2008.
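The distortion problem can be illustrated with a toy deskewing step: once the sensor trajectory over the scan has been recovered from the image and point-cloud features, each raw point is corrected according to its acquisition time. This sketch interpolates translation only; the actual methods also recover rotation.

```python
import numpy as np

def deskew_scan(points, timestamps, pos_start, pos_end):
    """Undistort a slow omni-directional scan by linear pose interpolation.

    points:      (N, 3) raw points, each acquired at its own time.
    timestamps:  (N,) acquisition times normalized to [0, 1] over the scan.
    pos_start/pos_end: sensor translations at scan start and end.
    (Translation-only sketch; the full problem interpolates rotation too,
    e.g. with SLERP between start and end orientations.)
    """
    # Per-point sensor displacement relative to the start of the scan.
    drift = (pos_end - pos_start)[None, :] * timestamps[:, None]
    # Removing the drift maps every point into the start-of-scan frame.
    return points - drift
```

Points measured late in the scan receive the largest correction, which is exactly where the raw data is most distorted.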
3D shape comparison with digital copies is drawing increasing attention in modern cultural heritage studies.
Our aim is to analyze portrait sculptures of Augustus using 3D scanned data.
We propose a practical framework for automatic object categorization based on shape comparison,
which simultaneously detects the distinguishing regions between categories.
In validation experiments, our results agree closely with previous archaeological hypotheses,
confirming the validity of the proposed method.
- M. Lu, Y. Zhang, B. Zheng, T. Masuda, S. Ono, T. Oishi, K. Sengoku-Haga, and K. Ikeuchi, "Portrait Sculptures of Augustus: Categorization via Local Shape Comparison," International Congress on Digital Heritage, Vol. 1, pp. 661-664, Marseille, France, Oct. 28-Nov. 1 2013.
- Y. Zhang, M. Lu, B. Zheng, T. Masuda, S. Ono, T. Oishi, K. Sengoku-Haga and K. Ikeuchi, "Classical Sculpture Analysis via Shape Comparison," International Symposium on Culture and Computing 2013, Kyoto, Japan.
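The idea of grouping by shape distance while flagging distinguishing regions can be sketched as follows. The distance measure, linkage rule, and both thresholds here are illustrative choices, not those of the papers, and the models are assumed pre-aligned with point-to-point correspondence.

```python
import numpy as np

def categorize_by_shape(models, region_thresh=0.01, link_thresh=0.05):
    """Group aligned shapes and flag their distinguishing regions.

    models: (M, N, 3) array; M pre-aligned models sampled at N
            corresponding surface points.
    Returns (labels, region_mask): a cluster label per model and a
    boolean mask of surface points that differ strongly across the set.
    """
    M = len(models)
    # Pairwise shape distance: mean point-to-point deviation.
    dist = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            dist[i, j] = np.linalg.norm(models[i] - models[j], axis=1).mean()
    # Single-linkage style grouping: merge models closer than link_thresh.
    labels = np.arange(M)
    for i in range(M):
        for j in range(i + 1, M):
            if dist[i, j] < link_thresh:
                labels[labels == labels[j]] = labels[i]
    # Distinguishing regions: points with high variation across models.
    region_mask = np.linalg.norm(models.std(axis=0), axis=1) > region_thresh
    return labels, region_mask
```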
The Bayon temple in Cambodia was built in the 12th century and is famous for its towers
with four faces at the four cardinal points. According to research by JSA (Japanese government team for
Safeguarding Angkor), the faces can be classified into three groups based on subjective criteria.
We explore a more objective way to classify the faces by using measured 3D geometrical models.
After aligning the 3D faces in a common coordinate system and normalizing their orientation and scale,
we generated depth images of each face and then classified them by several statistical methods.
- M. Kamakura, T. Oishi, J. Takamatsu and K. Ikeuchi, "Classification of Bayon Faces Using 3D Model," Proc. 11th International Conference on Virtual Systems and Multimedia (VSMM 2005), October 2005. (Paper Award in the category Heritage)
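One common statistical pipeline for such depth images is PCA followed by clustering. The sketch below uses PCA via SVD plus plain k-means as an illustrative stand-in; it is not claimed to be the specific methods used in the paper.

```python
import numpy as np

def classify_faces(depth_images, n_components=2, n_groups=3, seed=0):
    """Classify aligned depth images by PCA + k-means (an illustrative
    stand-in for the statistical methods in the paper).

    depth_images: (M, H, W) array of depth renderings of aligned faces.
    Returns a group label in [0, n_groups) for each face.
    """
    X = depth_images.reshape(len(depth_images), -1)
    X = X - X.mean(axis=0)
    # PCA via SVD: project onto the leading principal components.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T
    # Plain k-means on the low-dimensional embedding.
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), n_groups, replace=False)]
    for _ in range(50):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_groups):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    return labels
```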
We developed an interactive rendering system for large-scale 3D mesh models
stored on a remote machine and accessed over networks of relatively small capacity.
Our system uses both model-based and image-based rendering methods for efficient load balancing
between the server and clients. On the server, the 3D models are rendered by the model-based method using a
hierarchical, multi-resolution data structure. On the client, an arbitrary view is reconstructed by a novel
image-based method, referred to as the Grid-Lumigraph, which blends colors from the sample images received
from the server. The resulting rendering system can efficiently
render any view in real time.
- Y. Okamoto, T. Oishi, and K. Ikeuchi, "Image-Based Network Rendering of Large Meshes for Cloud Computing," International Journal of Computer Vision (IJCV), Vol. 94 No. 1, pp. 12-22, Aug. 2011.
- Y. Okamoto, T. Oishi and K. Ikeuchi, "Image-based Network Rendering System for Large Sized Meshes," IEEE Workshop on eHeritage and Digital Art Preservation in Conjunction with ICCV 2009, Kyoto.
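The client-side blending step can be sketched as a weighted average of the server's sample renderings, with weights favoring samples whose viewpoint is closest to the requested view. The inverse-angle weighting below is an illustrative choice, not the exact Grid-Lumigraph scheme.

```python
import numpy as np

def blend_sample_colors(colors, view_angles, eps=1e-6):
    """Blend per-pixel colors from several server-rendered sample images
    into a novel view.

    colors:      (K, H, W, 3) colors of the same pixel grid rendered from
                 K nearby sample viewpoints (resampled to the novel view).
    view_angles: (K,) angles (radians) between each sample view direction
                 and the requested novel view direction.
    Samples closer to the novel view get more weight; eps avoids a
    division by zero for an exactly matching view.
    """
    w = 1.0 / (view_angles + eps)           # inverse angular distance
    w = w / w.sum()                         # normalize weights
    return np.tensordot(w, colors, axes=1)  # weighted sum over K samples
```

When one sample viewpoint coincides with the requested view, its weight dominates and the client effectively displays that image unchanged.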
Cultural heritage objects such as these are huge, stand outdoors, and pose various technical challenges.
Geometric models of the cultural heritage assets are obtained digitally through a pipeline consisting of acquiring
data, aligning multiple range images, and merging those images. We have developed two alignment algorithms: a rapid
simultaneous algorithm for quick data checking on site, and a parallel alignment algorithm for precise adjustment.
We have also designed a parallel voxel-based merging algorithm for connecting all of the aligned range images.
The texture images acquired by color cameras are aligned onto the geometric models.
In an attempt to restore the original appearance of historical objects,
we have synthesized several buildings and statues using scanned data and a literature survey.
- K. Ikeuchi, T. Oishi, J. Takamatsu, R. Sagawa, A. Nakazawa, R. Kurazume, K. Nishino, M. Kamakura and Y. Okamoto, "The Great Buddha Project: Digitally Archiving, Restoring, and Analyzing Cultural Heritage Objects," International Journal of Computer Vision (IJCV), Vol. 75, No. 1, pp. 189-208, Oct. 2007.
- T. Oishi, A. Nakazawa, R. Kurazume and K. Ikeuchi, "Fast Simultaneous Alignment of Multiple Range Images using Index Images," Proc. The 5th International Conference on 3-D Digital Imaging and Modeling (3DIM 2005), pp. 476-483, 2005.
- T. Oishi, R. Sagawa, A. Nakazawa, R. Kurazume and K. Ikeuchi, "Parallel Alignment of a Large Number of Range Images," Proc. The 4th International Conference on 3D Digital Imaging and Modeling (3DIM 2003), pp. 195-202, 2003.
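The merging stage can be illustrated with a much-simplified voxel step: all aligned range images are pooled, points are binned into voxels, and each occupied voxel contributes one averaged point. The real algorithm is a parallel signed-distance-field merge producing a consistent mesh; this averaging sketch only conveys the voxel-grouping idea.

```python
import numpy as np

def voxel_merge(point_sets, voxel_size=0.05):
    """Merge several aligned range images by voxel averaging (a
    simplified stand-in for the parallel voxel-based merging).

    point_sets: list of (N_i, 3) aligned point clouds.
    Returns one (M, 3) cloud with a single averaged point per occupied voxel.
    """
    pts = np.vstack(point_sets)
    keys = np.floor(pts / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).reshape(-1)
    merged = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(merged, inv, pts)   # unbuffered scatter-add per voxel
    np.add.at(counts, inv, 1)
    return merged / counts[:, None]
```

Overlapping regions scanned from several viewpoints collapse into one point per voxel, which suppresses redundant and noisy duplicates.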
We proposed a novel registration method based on a coarse-to-fine implicit polynomial (IP) representation.
The approach starts with a fast and reliable registration using a coarse (low-degree) IP model and
stops when the desired accuracy is achieved with a fine (high-degree) IP model.
Compared with previous IP-to-point methods, our contributions are: (i) keeping the efficiency without requiring pairwise
correspondences, (ii) enhancing the robustness, and (iii) improving the accuracy.
- B. Zheng, R. Ishikawa, J. Takamatsu, T. Oishi and K. Ikeuchi, "A Coarse-to-fine IP-driven Registration for Pose Estimation from Single Ultrasound Image," Computer Vision and Image Understanding (CVIU), Vol.117, No. 12, pp. 1647-1658, 2013.
- B. Zheng, R. Ishikawa, T. Oishi, J. Takamatsu and K. Ikeuchi, "A Fast Registration Method Using IP and Its Application to Ultrasound Image Registration," IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 1, pp. 209-219, 2009.
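The key property exploited here is that an implicit surface needs no point-to-point correspondences: registration can minimize the implicit values of the transformed points directly, first against a coarse model and then a fine one. The toy below estimates translation only by gradient descent against hand-written implicit surfaces, whereas the papers estimate full 3D pose against fitted IP models.

```python
import numpy as np

def register_to_implicit(points, levels, t0, iters=200, lr=0.1):
    """Coarse-to-fine registration of a point set to implicit surfaces.

    levels: list of (f, grad) pairs ordered coarse -> fine, where f maps
            (N, 3) points to per-point implicit values (0 on the surface)
            and grad returns the (N, 3) gradient of f.
    The translation t is refined at each level by gradient descent on
    sum_i f(p_i + t)^2; no correspondences are ever formed.
    """
    t = np.asarray(t0, dtype=float).copy()
    for f, grad in levels:
        for _ in range(iters):
            p = points + t
            # d/dt sum f(p)^2 = 2 * sum f(p) * grad f(p)
            g = 2.0 * (f(p)[:, None] * grad(p)).sum(axis=0)
            t -= lr * g / len(points)
    return t
```

In the actual method, a low-degree IP gives a wide, smooth basin of attraction for the coarse stage, and the high-degree IP sharpens the minimum for the fine stage.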