Robot System and Mobility
To take advantage of a drone's wide field of view,
we developed a system that estimates the drone's pose from a ground vehicle.
In this system, the relative pose is obtained by combining direct LiDAR measurements
with indirect measurements derived from the camera's vanishing directions.
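As a minimal illustration of the indirect cue, the sketch below recovers the rotation that aligns vanishing directions observed by the camera with reference directions known in the ground-vehicle frame, using standard SVD (Kabsch) alignment. The function, its inputs, and the example directions are our assumptions, not the system's actual implementation.

```python
import numpy as np

def rotation_from_vanishing_directions(cam_dirs, ref_dirs):
    """Rotation R with R @ cam_dirs[i] ~ ref_dirs[i], fitted to
    corresponding unit direction vectors by SVD (Kabsch) alignment."""
    U, _, Vt = np.linalg.svd(cam_dirs.T @ ref_dirs)  # cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # reflection guard
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # proper rotation

# Hypothetical example: road heading and vertical observed as vanishing
# directions in the camera, matched to ground-vehicle frame axes.
cam = np.array([[0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
ref = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R = rotation_from_vanishing_directions(cam, ref)
```

Two non-parallel vanishing directions already determine the rotation; in such a scheme, the direct LiDAR measurement would then constrain the translation.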
We proposed a concept of attaching a SLAM device to a robot to quickly realize robot navigation in 3D space.
The method calibrates the relative pose between the SLAM device and the robot from the relative poses obtained through several robot movements, and identifies which movements are most effective for the calibration given the robot's degrees of freedom (DoF).
Furthermore, the relative pose is dynamically refined so that contact between the robot and the environment remains geometrically consistent.
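This device-to-robot calibration is commonly written as the hand-eye equation A_i X = X B_i, where A_i and B_i are the relative motions measured by the SLAM device and by the robot, and X is the unknown mounting pose. The sketch below solves it in the classic two-step way (rotation from rotation axes, then translation by least squares); it is an illustration of that formulation, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def hand_eye_calibration(A_list, B_list):
    """Solve A_i X = X B_i for the fixed transform X between the SLAM
    device and the robot body. A_i, B_i: corresponding 4x4 relative
    motions measured by the device and by the robot."""
    # 1) Rotation: A_i X = X B_i implies rotvec(A_i) = R_X rotvec(B_i),
    #    so align the rotation vectors with an SVD (Kabsch) fit.
    a = np.array([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
    b = np.array([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
    U, _, Vt = np.linalg.svd(b.T @ a)
    R_X = Vt.T @ np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    # 2) Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked over all
    #    movement pairs and solved in the least-squares sense.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3]
                        for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

Note that the system is degenerate if all movements share a single rotation axis, which is why the effective calibration motions depend on the robot's DoF.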
We proposed a method of generating robot motions for vision-based teleoperation systems.
The Task Model recognizes and transmits human motions,
and simultaneously resolves both the structural differences
between the human and the humanoid robot and the communication time delays.
We also developed a method for articulation modeling using an RGB-D sensor,
in which the articulation parameters are estimated by fusing hand motion with point-cloud alignment.
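For the revolute case, the articulation parameters can be read off a single relative motion of the moving part, for example one estimated by point-cloud alignment. The sketch below shows this standard extraction; it is our simplification (assuming a pure rotation about a fixed axis), not the fusion method itself.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def revolute_joint_from_motion(T):
    """Extract a revolute joint's parameters from one relative rigid
    motion T (4x4) of the moving part, e.g., estimated by aligning its
    point clouds before and after the motion. Assumes a pure rotation
    about a fixed axis (no translation along the axis)."""
    R, t = T[:3, :3], T[:3, 3]
    rotvec = Rotation.from_matrix(R).as_rotvec()
    angle = np.linalg.norm(rotvec)
    axis = rotvec / angle
    # A rotation about an axis through point c satisfies t = (I - R) c.
    # (I - R) has rank 2; the minimum-norm least-squares solution is
    # the point on the axis closest to the origin.
    c, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return axis, c, angle
```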
M. Ogawa, K. Honda, Y. Sato, T. Oishi, K. Ikeuchi,
"Development of interface for teleoperation of humanoid robot using task model method,"
In Proc. 2016 IEEE/SICE International Symposium on System Integration (SII), Dec. 2016, Sapporo, Japan.
M. Ogawa, K. Honda, Y. Sato, S. Kudoh, T. Oishi, K. Ikeuchi,
"Motion Generation of the Humanoid Robot for Teleoperation by Task Model,"
In Proc. 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 71-76, Sept. 1, 2015, Kobe, Japan.
We propose a real-time dense 3D mapping method for fisheye cameras that requires no explicit rectification or undistortion.
We extend the conventional variational stereo method by constraining the correspondence search
along the epipolar curve, using a trajectory field induced by the camera motion.
We also propose a fast way of generating the trajectory field that adds no processing time
compared to conventional rectification-based methods.
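To make the idea concrete, the sketch below samples such a trajectory (the epipolar curve) for one reference pixel by projecting points along its viewing ray into the second view over a set of inverse-depth hypotheses. The equidistant fisheye model and all parameter names here are assumptions for illustration, not the paper's camera model or precomputation scheme.

```python
import numpy as np

def fisheye_project(X, f, cx, cy):
    """Equidistant fisheye model: image radius r = f * theta."""
    theta = np.arctan2(np.linalg.norm(X[:, :2], axis=1), X[:, 2])
    phi = np.arctan2(X[:, 1], X[:, 0])
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

def fisheye_unproject(u, v, f, cx, cy):
    """Pixel -> unit viewing ray under the equidistant model."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f
    s = np.sin(theta) / max(r, 1e-9)   # sin(theta) spread over the radius
    return np.array([dx * s, dy * s, np.cos(theta)])

def epipolar_curve(u, v, R, t, f, cx, cy, inv_depths):
    """Pixel positions in the second view of the 3D points lying on the
    ray of reference pixel (u, v), one per inverse-depth hypothesis."""
    ray = fisheye_unproject(u, v, f, cx, cy)
    pts = ray[None, :] / inv_depths[:, None]   # 3D points along the ray
    return fisheye_project(pts @ R.T + t, f, cx, cy)
```

Precomputing such curves densely for all pixels is what allows the variational correspondence search to follow them at no extra cost per iteration.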
We proposed methods for estimating a dense depth map from a sparse LiDAR point cloud and images.
Our unsupervised approach performs real-time dense depth completion from sparse depth maps, guided by a single image.
The method generates smooth depth maps while preserving discontinuities between different objects.
The key idea is a Binary Anisotropic Diffusion Tensor (B-ADT),
which eliminates the smoothness constraint at intended positions and directions
when applying variational regularization.
Another approach relies on a directionally biased propagation of known depth into missing areas, based on semantic segmentation.
Additionally, we classify object boundaries as either occluded or connected
to limit the extent of the data propagation.
In regions where point cloud data is inevitably missing,
we rely on depth estimated by motion stereo.
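As a rough illustration of the B-ADT idea (the paper formulates it inside a variational regularizer), the sketch below builds binary per-direction diffusion weights from an object-boundary map and runs Jacobi-style diffusion that smooths depth everywhere except across boundaries, re-imposing the sparse LiDAR depths each iteration. The discrete update, the inputs, and the iteration count are our assumptions.

```python
import numpy as np

def badt_diffusion(depth, sparse_mask, boundary, iters=200):
    """Simplified discrete analogue of B-ADT-regularized completion:
    diffuse depth everywhere except across object boundaries, keeping
    the sparse LiDAR measurements fixed.
    depth: (H, W) initial dense guess, sparse_mask: (H, W) bool where
    LiDAR depth is known, boundary: (H, W) bool discontinuity map."""
    d = depth.astype(float).copy()
    free = ~boundary
    # Binary per-direction weights: 0 whenever the step would cross a
    # boundary pixel (np.roll wraps at the borders; a real
    # implementation would treat image borders explicitly).
    w_up = (free & ~np.roll(boundary, 1, axis=0)).astype(float)
    w_dn = (free & ~np.roll(boundary, -1, axis=0)).astype(float)
    w_lf = (free & ~np.roll(boundary, 1, axis=1)).astype(float)
    w_rt = (free & ~np.roll(boundary, -1, axis=1)).astype(float)
    fixed = depth[sparse_mask]
    for _ in range(iters):
        num = (w_up * np.roll(d, 1, axis=0) + w_dn * np.roll(d, -1, axis=0) +
               w_lf * np.roll(d, 1, axis=1) + w_rt * np.roll(d, -1, axis=1))
        den = w_up + w_dn + w_lf + w_rt
        d = np.where(den > 0, num / np.maximum(den, 1.0), d)
        d[sparse_mask] = fixed  # re-impose the LiDAR measurements
    return d
```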
Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura, and T. Oishi,
"Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor,"
IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5128-5135, Oct. 2020.
A. Hirata, R. Ishikawa, M. Roxas, T. Oishi,
"Real-Time Dense Depth Estimation using Semantically-Guided LIDAR Data Propagation and Motion Stereo,"
IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3806-3811, Oct. 2019 (also presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2019).
[src (MATLAB for accuracy comparison)]
We developed a mobile scanning system for fast and accurate capture of 3D range data.
Our system consists of a LiDAR and a color camera:
while the laser scanner captures the 3D structure,
the color camera records an image sequence.
The sensor motion is estimated robustly by a sensor-fused 2D/3D feature-tracking method,
which enables accurate reconstruction of the structure from the scan lines.
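One plausible minimal form of such 2D/3D fused tracking is sketched below with OpenCV: features tracked from the previous image keep the 3D positions assigned to them from the LiDAR scan, and the new camera pose follows from PnP with RANSAC. The function, its inputs, and the thresholds are our assumptions, not the published pipeline.

```python
import cv2
import numpy as np

def fused_odometry_step(prev_img, cur_img, prev_pts2d, prev_pts3d, K):
    """One sensor-fused odometry step: track image features from the
    previous frame and estimate camera motion from their known 3D
    positions (assigned from the LiDAR scan) via PnP + RANSAC.
    prev_pts2d: (N, 1, 2) float32 pixel coords in prev_img
    prev_pts3d: (N, 3) float32 LiDAR 3D points for those features
    K: 3x3 camera intrinsic matrix."""
    cur_pts2d, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, prev_pts2d, None)
    ok = status.ravel() == 1                 # keep successfully tracked
    obj = prev_pts3d[ok].astype(np.float32)
    img = cur_pts2d[ok].astype(np.float32)
    success, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, reprojectionError=2.0)
    R, _ = cv2.Rodrigues(rvec)               # world/LiDAR frame -> camera
    return success, R, tvec
```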
R. Ishikawa, T. Oishi, K. Ikeuchi,
"LiDAR and Camera Calibration using Motion Estimated by Sensor Fusion Odometry,"
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7342-7349, 2018.
R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi,
"A 3D Reconstruction with High Density and Accuracy using Laser Profiler and Camera Fusion System on a Rover,"
International Conference on 3D Vision (3DV), pp. 620-628, Oct. 27, 2016, Palo Alto.
B. Zheng, T. Oishi, K. Ikeuchi, "Rail Sensor: A Mobile Lidar System for 3D Archiving the Bas-reliefs in Angkor Wat," IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 7, pp. 59-63, July 27, 2015.
We developed a flying sensor system to capture 3D data aerially.
The system, consisting of an omni-directional laser scanner and a panoramic camera,
can be mounted under a mobile platform to achieve aerial scanning with high resolution and accuracy.
Since the laser scanner often requires several minutes
to complete an omni-directional scan, the raw data is seriously distorted by the unknown and uncontrollable
movement during the scanning period. Our approach recovers the sensor motion by utilizing the spatial
and temporal features extracted from both the image sequences and the point clouds.
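Once the trajectory is recovered, the distortion can be removed by mapping every laser point with the sensor pose interpolated at its capture time. The sketch below shows this de-skewing step under the assumption of an already recovered, timestamped trajectory (SLERP for rotation, linear interpolation for translation); the argument names are ours.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, point_times, pose_times, rotations, translations):
    """Undistort a slow omni-directional scan: map every point into a
    common world frame using the sensor pose interpolated at the
    point's capture time.
    points: (N, 3) points in the sensor frame at capture time
    point_times: (N,) per-point timestamps
    pose_times: (M,) timestamps of the recovered trajectory
    rotations: scipy Rotation with M elements (sensor -> world)
    translations: (M, 3) sensor positions in the world frame."""
    slerp = Slerp(pose_times, rotations)
    R_t = slerp(point_times)                       # per-point rotations
    t_t = np.array([np.interp(point_times, pose_times, translations[:, k])
                    for k in range(3)]).T          # per-point translations
    return R_t.apply(points) + t_t
```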
B. Zheng, X. Huang, R. Ishikawa, T. Oishi, K. Ikeuchi, "A New Flying Range Sensor: Aerial Scan in Omni-directions," In Proc. International Conference on 3D Vision (3DV), pp. 623-631, Oct. 19-22, 2015, Lyon, France.
R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," IPSJ Transactions on Computer Vision and Applications (CVA), Vol. 7, pp. 89-93, July 27, 2015.
A. Banno, T. Masuda, T. Oishi, K. Ikeuchi, "Flying Laser Range Sensor for Large-Scale Site-Modeling and Its Applications in Bayon Digital Archival Project," International Journal of Computer Vision (IJCV), Vol. 78, No. 2-3, pp. 207-222, Jul. 2008.