C. V.

2019.10~ Project Researcher, The University of Tokyo
2017.4~2017.6 Research Intern, Microsoft Research, Redmond, U.S.A.
2016.3~2019.3 JSPS Research Fellow (DC1)
2019.10 Ph.D., Electrical Engineering and Information Systems, The University of Tokyo
2016.3 M.E., Electrical Engineering and Information Systems, The University of Tokyo
2014.3 B.E., Department of Electrical Engineering, The University of Tokyo
2010.3 Kaisei High School

Research

Rectification of Aerial 3D Laser Scans via Line-to-Surface Registration to Ground Model

3D digital archiving captures the shapes of real objects with a 3D scanner and digitizes them. It is especially useful for the preservation and restoration of cultural heritage assets that are exposed to the risk of deterioration. However, conventional ground-based scanning cannot obtain sufficient data on roof surfaces. We developed a balloon-mounted sensor that scans roofs from above, and we estimate the sensor motion during scanning by exploiting the overlaps between the aerial scans and static ground scans. Our method is based on the ICP algorithm and introduces a smoothness constraint to eliminate ambiguity.
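As a rough illustration (not the paper's actual implementation), one linearized point-to-plane ICP step with an optional smoothness prior could look like the sketch below; the function name and the simple quadratic prior pulling the estimate toward the previous scan line's pose are assumptions for exposition.

```python
import numpy as np

def point_to_plane_step(src, tgt, normals, lam=0.0, prev_x=None):
    """One linearized point-to-plane ICP step (small-angle approximation).

    src: (N,3) aerial scan-line points, tgt: (N,3) closest ground-model
    points, normals: (N,3) ground-model normals.
    Solves for x = [rx, ry, rz, tx, ty, tz] minimizing the point-to-plane
    residual n_i . (p_i + w x p_i + t - q_i).
    lam and prev_x (optional) add a smoothness prior that pulls this scan
    line's pose toward the previous line's pose.
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N,6) Jacobian
    b = np.einsum('ij,ij->i', normals, tgt - src)      # signed residuals
    H = A.T @ A
    g = A.T @ b
    if prev_x is not None:                             # smoothness prior
        H += lam * np.eye(6)
        g += lam * prev_x
    return np.linalg.solve(H, g)
```

A pure translation between the scans is recovered exactly by a single step, since the residual is then linear in the unknowns.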

Intrinsic Parameter Calibration of LiDAR

Multi-beam LiDAR is used in many applications such as autonomous driving, SLAM, and 3D modeling, since it can obtain surrounding 3D information at a high frame rate. However, it requires calibration between its laser units, and the intrinsic calibration parameters provided by the manufacturer are not accurate enough for 3D modeling. To refine the intrinsic parameters, we align the 3D point cloud obtained by each laser unit to more accurate reference 3D data.
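As a minimal sketch of this idea (assuming, for illustration only, a per-unit range scale and offset as the intrinsic parameters and known point-to-reference correspondences), refining one laser unit's parameters reduces to a small least-squares fit:

```python
import numpy as np

def refine_range_params(ranges, dirs, ref_pts):
    """Refine one laser unit's range scale s and offset d so that the
    corrected points (s*r_i + d) * dir_i best fit the reference model.

    ranges: (N,) raw measured ranges for this unit
    dirs:   (N,3) unit ray directions of the measurements
    ref_pts:(N,3) corresponding points on the accurate reference 3D data
    """
    # The off-ray component of each reference point is constant w.r.t.
    # (s, d), so projecting onto the ray gives the exact 1-D target range.
    r_ref = np.einsum('ij,ij->i', ref_pts, dirs)
    A = np.stack([ranges, np.ones_like(ranges)], axis=1)
    (s, d), *_ = np.linalg.lstsq(A, r_ref, rcond=None)
    return s, d
```

Real multi-beam intrinsics involve more parameters per unit (angular offsets, vertical corrections), but each is refined by the same align-to-reference principle.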

Laser Profiler and Camera Fusion System on a Rover for 3D Reconstruction

The conventional way to scan large-scale structures has another problem: it is laborious and time-consuming. To accelerate large-scale 3D modeling with a laser scanner, we developed a sensor fusion system mounted on a rover. An omni-directional camera is mounted together with the laser scanner, and the sensor motion is estimated from the panoramic video. In this system, we introduce occlusion handling and propose a motion refinement based on multimodal registration that maximizes mutual information.
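The mutual-information score driving the multimodal refinement can be sketched as follows (a generic histogram-based estimator, not the system's exact implementation; the inputs are assumed to be laser-derived intensities projected into the image and the corresponding camera gray values):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A;B) between two aligned intensity arrays,
    estimated from their joint histogram. In multimodal registration the
    candidate motion that maximizes this score is taken as the refinement.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

Because the score only assumes a statistical dependency, not identical appearance, it tolerates the different radiometric responses of the laser and camera modalities.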

SLAM-Device and Robot Calibration for Navigation

We propose a robot-device calibration method for navigation with a device that uses SLAM technology, such as Microsoft HoloLens. The calibration is performed using the movements of the robot and the device, and we clarify the most efficient movements under the robot's motion constraints. Furthermore, we show a method to dynamically correct the position and orientation of the robot so that the external environment information and the robot's shape information remain consistent, reducing the dynamic error that occurs during navigation. Our method can easily be applied to various kinds of robots.

[src](HoloLens) [src](ROS)

LiDAR and Camera Calibration using Motions Estimated by Sensor Fusion Odometry

An accurate camera-LiDAR calibration is required for accurate motion estimation. Our approach extends the hand-eye calibration framework to 2D-3D calibration. The scaled camera motions are accurately calculated using a sensor-fusion odometry method, and we also clarify which motions are suitable for the calibration. Whereas other calibration methods require LiDAR reflectance data and an initial extrinsic parameter, the proposed method requires only the three-dimensional point cloud and the camera image. The effectiveness of the method is demonstrated in experiments using several sensor configurations in indoor and outdoor scenes, where it achieved higher accuracy than comparable state-of-the-art methods.
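To illustrate the hand-eye framework this builds on (a textbook closed-form axis-alignment solution for the rotation part of AX = XB, not the paper's full 2D-3D pipeline; all function names here are illustrative), the extrinsic rotation can be recovered from paired relative motions:

```python
import numpy as np

def rot_from_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix from a (non-unit) axis and angle."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rot_axis(R):
    """Rotation axis from the skew-symmetric part (valid for 0 < angle < pi)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def handeye_rotation(R_cam_motions, R_lidar_motions):
    """Rotation part of hand-eye calibration AX = XB: find X such that
    axis(R_cam_i) = X @ axis(R_lidar_i) for all motion pairs, solved in
    closed form by Kabsch/SVD axis alignment. Needs at least two motions
    with non-parallel rotation axes.
    """
    A = np.stack([rot_axis(R) for R in R_lidar_motions])  # LiDAR-frame axes
    B = np.stack([rot_axis(R) for R in R_cam_motions])    # camera-frame axes
    M = A.T @ B
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # keep det(X) = +1
    return Vt.T @ D @ U.T
```

The degenerate case (all rotation axes parallel) is exactly why clarifying the suitable calibration motions matters.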

[src]

Publication

Journal papers

+ Y. Yao, R. Ishikawa, S. Ando, K. Kurata, N. Ito, J. Shimamura, T. Oishi, "Non-Learning Stereo-Aided Depth Completion Under Mis-Projection via Selective Stereo Matching," in IEEE Access, vol. 9, pp. 136674-136686, 2021
+ Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura, and T. Oishi, "Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5128-5135, Oct. 2020
+ A. Hirata, R. Ishikawa, M. Roxas, T. Oishi "Real-Time Dense Depth Estimation using Semantically-Guided LIDAR Data Propagation and Motion Stereo," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3806-3811, Oct. 2019
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," IPSJ Transactions on Computer Vision and Applications, Vol. 7, pp. 89-93, July 2015.


Conference, Workshop (Peer Reviewed)

+ Shuyi Zhou, Shuxiang Xie, R. Ishikawa, K. Sakurada, M. Onishi, and T. Oishi, "INF: Implicit Neural Fusion for LiDAR and Camera," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
+ L. Miao, R. Ishikawa, and T. Oishi, "SWIN-RIND: Edge Detection for Reflectance, Illumination, Normal and Depth Discontinuity with Swin Transformer," British Machine Vision Conference (BMVC), 2023
+ H. Hansen, Y. Liu, R. Ishikawa, T. Oishi, Y. Sato, "Quadruped Robot Platform for Selective Pesticide Spraying," 18th International Conference on Machine Vision Applications (MVA), Hamamatsu, Japan, 2023
+ S. Xie, R. Ishikawa, K. Sakurada, M. Onishi and T. Oishi, "Fast Structural Representation and Structure-aware Loop Closing for Visual SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
+ R. Hartanto, R. Ishikawa, M. Roxas, T. Oishi, "Hand-Motion-guided Articulation and Segmentation Estimation," The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy (Online), 2020
+ R. Ishikawa, T. Oishi, K. Ikeuchi, "LiDAR and Camera Calibration using Motions Estimated by Sensor Fusion Odometry" In Proc. International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October, 2018.
+ R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, "A 3D Reconstruction with High Density and Accuracy using Laser Profiler and Camera Fusion System on a Rover," In Proc. International Conference on 3D Vision (3DV), Palo Alto, October, 2016.
+ B. Zheng, X. Huang, R. Ishikawa, T. Oishi, K. Ikeuchi, "A New Flying Range Sensor: Aerial Scan in Omni-directions," In Proc. International Conference on 3D Vision (3DV), pp. 623-631, Lyon, France, October, 2015.
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," The 18th Meeting on Image Recognition and Understanding (MIRU), Osaka, July, 2015


Others

+ R. Ishikawa, T. Oishi, K. Ikeuchi, "SLAM-Device and Robot Calibration for Navigation," 12th International Workshop on Robust Computer Vision (IWRCV), Nara, Japan, January, 2018
+ R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, "A Sensor Fusion 3D Reconstruction System using Depth-based Triangulation and Multimodal Registration," 11th International Workshop on Robust Computer Vision (IWRCV), Daejeon, Korea, November, 2016
+ R. Ishikawa, X. Huang, B. Zheng, T. Oishi, K. Ikeuchi, "Robust and Accurate Aerial Scanning System," The 10th International Workshop on Robust Computer Vision (IWRCV), Beijing, November, 2015
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Intrinsic Parameter Calibration of Omni-directional LiDAR Using a High-Accuracy Laser Range Sensor," The 9th International Workshop on Robust Computer Vision (IWRCV), Tokyo, December, 2014
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Intrinsic Parameter Calibration of Omni-directional LiDAR Using a High-Accuracy Laser Scanner," The 15th Meeting of the Society of Instrument and Control Engineers (SICE), Tokyo, December, 2014 (in Japanese)

Hobby

Guitar, Desktop Music (DTM), Badminton

Links

Computer Vision Laboratory, The University of Tokyo