C. V.

2019.10~ Project Researcher, The University of Tokyo
2017.4~2017.6 Research Intern, Microsoft Research, Redmond, U.S.A.
2016.3~2019.3 JSPS Research Fellow (DC1)
2019.10 Ph.D., Electrical Engineering and Information Systems, The University of Tokyo
2016.3 M.E., Electrical Engineering and Information Systems, The University of Tokyo
2014.3 B.E., Department of Electrical Engineering, The University of Tokyo
2010.3 Kaisei High School

Research

Visibility Enhancement of Lesion Regions in Chest X-Ray Images With Image Fidelity Preservation

This study proposes a chest X-ray enhancement framework that improves lesion visibility while preserving image features to aid physicians' diagnoses. The method uses an inference neural network to predict the image-processing parameters that enhance the lesion signals. Experiments show that, after training on 2,000 images, the method improved lesion visibility with an acceptable fidelity loss. Pairwise comparisons further confirmed that favorable trade-offs between fidelity loss and visibility gain were attained.
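The core idea, predicting processing parameters rather than generating pixels directly, can be sketched as below. This is a minimal illustration in which a hypothetical two-parameter unsharp-masking pipeline stands in for the paper's actual processing chain and network:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, params):
    """Parametric enhancement: unsharp masking driven by a predicted
    gain and blur radius (an illustrative stand-in for the real pipeline)."""
    gain, sigma = params
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + gain * (image - blurred), 0.0, 1.0)

# In the actual framework an inference network predicts `params` per image
# so that lesion contrast rises while overall fidelity is preserved, e.g.:
# params = inference_net(xray_image)   # hypothetical network call
enhanced = enhance(np.random.rand(256, 256), params=(1.5, 2.0))
```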

LiDAR-camera Calibration using Intensity Variance Cost

We propose an extrinsic calibration method for LiDAR-camera fusion systems based on the variance of the intensities projected from camera images onto the LiDAR point cloud. Given an image sequence, a LiDAR point sequence, and the camera motion, the projected intensities at each point vary strongly when the estimated motion or the calibration parameters contain errors, so minimizing this variance recovers the extrinsics. Our experiments showed that the proposed method performs accurate calibration with a camera and either a sparse multi-beam LiDAR or a one-dimensional LiDAR.
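A rough sketch of the cost (our own notation: `T_cl` is the candidate camera-from-LiDAR extrinsic, `K` the intrinsics, and `cam_motions` the known per-frame poses relative to the first camera frame). Each LiDAR point collects the intensities sampled at its projections in all frames; miscalibration inflates the per-point variance:

```python
import numpy as np

def intensity_variance_cost(points, images, cam_motions, K, T_cl):
    """Sum over points of the variance of intensities sampled from all
    grayscale images in which the point projects. points: (N, 3)."""
    N = len(points)
    pts = np.c_[points, np.ones(N)].T                # 4xN homogeneous
    samples = [[] for _ in range(N)]
    for img, T_c0_ci in zip(images, cam_motions):
        p_cam = np.linalg.inv(T_c0_ci) @ T_cl @ pts  # into camera frame i
        uvw = K @ p_cam[:3]
        u, v = (uvw[:2] / uvw[2]).round().astype(int)
        h, w = img.shape[:2]
        ok = (p_cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for i in np.flatnonzero(ok):
            samples[i].append(img[v[i], u[i]])
    return sum(np.var(s) for s in samples if len(s) > 1)
```

Minimizing this cost over `T_cl` (and, if needed, the motion) yields the calibration.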

LiDAR and Camera Calibration using Motions Estimated by Sensor Fusion Odometry

Accurate camera-LiDAR calibration is required for accurate motion estimation. Our approach extends the hand-eye calibration framework to 2D-3D calibration, with scaled camera motions computed accurately by a sensor-fusion odometry method. We also clarify which motions are best suited to our calibration. Whereas other methods require LiDAR reflectance data and an initial extrinsic parameter, the proposed method requires only the 3D point cloud and the camera image. Its effectiveness is demonstrated in experiments with several sensor configurations in indoor and outdoor scenes, where our method achieved higher accuracy than comparable state-of-the-art methods.

[src]
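Underlying the approach is the hand-eye relation A_i X = X B_i, where A_i and B_i are time-synchronized camera and LiDAR motions and X is the sought extrinsic. Below is a minimal sketch of the rotation estimate; the axis-alignment Kabsch solve is our simplification of the paper's estimator, and the translation together with the camera-motion scale comes from a subsequent linear solve:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def handeye_rotation(R_cam, R_lidar):
    """Estimate the extrinsic rotation R_X from paired relative rotations
    via the axis correspondence a_i = R_X b_i implied by A_i X = X B_i."""
    A = np.stack([R.from_matrix(Ra).as_rotvec() for Ra in R_cam])
    B = np.stack([R.from_matrix(Rb).as_rotvec() for Rb in R_lidar])
    H = B.T @ A                             # cross-covariance of the axes
    U, _, Vt = np.linalg.svd(H)             # Kabsch / orthogonal Procrustes
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```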

SLAM-Device and Robot Calibration for Navigation

We propose a robot-device calibration method for navigation with a device that uses SLAM technology, such as Microsoft HoloLens. The calibration is performed using the motions of the robot and the device, and we clarify the most efficient motions given the restrictions on how the robot can move. We also show how to dynamically correct the robot's position and orientation so that the external environment information stays consistent with the robot's shape model, reducing the dynamic error that occurs during navigation. Our method can easily be applied to various kinds of robots.

[src](HoloLens) [src](ROS)

Laser Profiler and Camera Fusion System on a Rover for 3D Reconstruction

Scanning a large-scale structure in the conventional way is laborious and time-consuming. To accelerate large-scale 3D modeling with a laser scanner, we developed a sensor fusion system on a rover: an omnidirectional camera is mounted together with the laser profiler, and the sensor motion is estimated from the panoramic video. The system introduces occlusion handling and refines the motion by multimodal registration that maximizes mutual information.
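A minimal sketch of the mutual-information score used by the refinement (the bin count is our choice): the LiDAR reflectance of each point is paired with the image intensity sampled at its projection, and a candidate motion is scored by the statistical dependence between the two:

```python
import numpy as np

def mutual_information(reflectance, intensity, bins=32):
    """MI between LiDAR reflectances and the image intensities sampled
    at the projections of the same points (arrays already paired)."""
    joint, _, _ = np.histogram2d(reflectance, intensity, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0                               # avoid log(0) terms
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))
```

The motion refinement then searches the pose parameters for the maximizer of this score.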

Intrinsic Parameter Calibration of LiDAR

Multi-beam LiDAR is used in many applications such as autonomous driving, SLAM, and 3D modeling because it captures surrounding 3D information at a high frame rate. However, it requires calibration between its laser units, and the intrinsic calibration parameters provided by the manufacturer are not accurate enough for 3D modeling. To refine the intrinsic parameters, we align the 3D point cloud obtained by each laser unit to more accurate reference 3D data.
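As an illustration, the refinement for one laser unit can be posed as a nonlinear least-squares alignment to the reference data; the three-term intrinsic model below is a simplification (real multi-beam models carry more correction terms):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import least_squares

def unit_residuals(params, ranges, azimuths, ref_tree):
    """Distances from one laser unit's points, reconstructed with candidate
    intrinsics (elevation, azimuth offset, range offset), to the reference."""
    elev, az_off, r_off = params
    r, a = ranges + r_off, azimuths + az_off
    pts = np.c_[r * np.cos(elev) * np.cos(a),
                r * np.cos(elev) * np.sin(a),
                r * np.sin(elev)]
    d, _ = ref_tree.query(pts)       # nearest-neighbor distance to reference
    return d

# ref_tree = cKDTree(reference_points)  # dense, accurate reference scan
# opt = least_squares(unit_residuals, x0, args=(ranges, azimuths, ref_tree))
```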

Rectification of Aerial 3D Laser Scans via Line-to-Surface Registration to Ground Model

3D digital archiving captures the shapes of real objects with a 3D scanner and digitizes them. It is especially useful for the preservation and restoration of cultural heritage assets, which are exposed to the risk of deterioration. However, conventional ground-based scanning cannot capture enough roof surface data. We developed a balloon-borne sensor that scans roofs from a high position and estimates the motion during scanning by exploiting the overlaps between aerial and static scans. Our method is based on the ICP algorithm and introduces a smoothness constraint to eliminate ambiguity.
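The objective can be sketched as an ICP-style data term plus the smoothness term that removes the per-scan-line ambiguity (notation ours; the Frobenius difference between consecutive poses stands in for a proper SE(3) distance):

```python
import numpy as np

def rectification_energy(poses, scan_lines, ground_tree, lam=1.0):
    """ICP data term (each aerial scan line vs. the ground model, stored
    in a KD-tree) plus smoothness between consecutive per-line poses.
    poses: (T, 4, 4) rigid transforms, one per scan line."""
    data = 0.0
    for T, line in zip(poses, scan_lines):        # line: (N, 3) points
        pts = (T[:3, :3] @ line.T + T[:3, 3:4]).T
        d, _ = ground_tree.query(pts)
        data += np.sum(d ** 2)
    smooth = sum(np.sum((poses[t] - poses[t + 1]) ** 2)
                 for t in range(len(poses) - 1))
    return data + lam * smooth
```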

Publications

Journal papers

+ Y. Yao, R. Ishikawa, and T. Oishi, "Stereo-LiDAR Fusion by Semi-Global Matching with Discrete Disparity-Matching Cost and Semidensification," IEEE Robotics and Automation Letters, vol. 10, no. 5, pp. 4548-4555, May 2025.
+ R. Ishikawa, T. Yuzawa, T. Fukiage, M. Kagesawa, T. Watsuji, T. Oishi, "Visibility Enhancement of Lesion Regions in Chest X-ray Images with Image Fidelity Preservation," IEEE Access, vol. 13, pp. 11080-11094, 2025.
+ Y. Yao, R. Ishikawa, S. Ando, K. Kurata, N. Ito, J. Shimamura, T. Oishi, "Non-Learning Stereo-Aided Depth Completion Under Mis-Projection via Selective Stereo Matching," IEEE Access, vol. 9, pp. 136674-136686, 2021.
+ Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura, and T. Oishi, "Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5128-5135, Oct. 2020.
+ A. Hirata, R. Ishikawa, M. Roxas, T. Oishi, "Real-Time Dense Depth Estimation using Semantically-Guided LIDAR Data Propagation and Motion Stereo," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3806-3811, Oct. 2019.
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," IPSJ Transactions on Computer Vision and Applications, vol. 7, pp. 89-93, July 2015.


Conference, Workshop (Peer Reviewed)

+ S. Zhou, S. Xie, R. Ishikawa, and T. Oishi, "Robust LiDAR-Camera Calibration With 2D Gaussian Splatting," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), August 2025.
+ S. Xie, S. Zhou, K. Sakurada, R. Ishikawa, M. Onishi, and T. Oishi, "G2fR: Frequency Regularization in Grid-based Feature Encoding Neural Radiance Fields," The 18th European Conference on Computer Vision (ECCV), pp. 186-203, 2024.
+ R. Ishikawa, S. Zhou, Y. Sato, T. Oishi, and K. Ikeuchi, "LiDAR-camera Calibration using Intensity Variance Cost," IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024.
+ L. Fu, R. Ishikawa, Y. Sato, and T. Oishi, "CAPT: Category-level Articulation Estimation from a Single Point Cloud Using Transformer," IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024.
+ Y. Kang, G. Caron, R. Ishikawa, A. Escande, K. Chappellet, R. Sagawa, T. Oishi, "Direct 3D model-based object tracking with event camera by motion interpolation," IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024.
+ S. Zhou, S. Xie, R. Ishikawa, K. Sakurada, M. Onishi, and T. Oishi, "INF: Implicit Neural Fusion for LiDAR and Camera," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
+ L. Miao, R. Ishikawa, and T. Oishi, "SWIN-RIND: Edge Detection for Reflectance, Illumination, Normal and Depth Discontinuity with Swin Transformer," British Machine Vision Conference (BMVC), 2023.
+ H. Hansen, Y. Liu, R. Ishikawa, T. Oishi, Y. Sato, "Quadruped Robot Platform for Selective Pesticide Spraying," 18th International Conference on Machine Vision Applications (MVA), Hamamatsu, Japan, 2023.
+ S. Xie, R. Ishikawa, K. Sakurada, M. Onishi, and T. Oishi, "Fast Structural Representation and Structure-aware Loop Closing for Visual SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
+ R. Hartant, R. Ishikawa, M. Roxas, T. Oishi, "Hand-Motion-guided Articulation and Segmentation Estimation," The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy (Online), 2020.
+ R. Ishikawa, T. Oishi, K. Ikeuchi, "LiDAR and Camera Calibration using Motions Estimated by Sensor Fusion Odometry," In Proc. International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.
+ R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, "A 3D Reconstruction with High Density and Accuracy using Laser Profiler and Camera Fusion System on a Rover," In Proc. International Conference on 3D Vision (3DV), Palo Alto, October 2016.
+ B. Zheng, X. Huang, R. Ishikawa, T. Oishi, K. Ikeuchi, "A New Flying Range Sensor: Aerial Scan in Omni-directions," In Proc. International Conference on 3D Vision (3DV), pp. 623-631, Lyon, France, October 2015.
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model," The 18th Meeting on Image Recognition and Understanding (MIRU), Osaka, July 2015.


Others

+ R. Ishikawa, "3D Reconstruction Techniques Using LiDAR and Cameras," Journal of the Institute of Electronics, Information and Communication Engineers (IEICE), vol. 107, no. 10, 2024 (in Japanese).
+ R. Ishikawa, B. Zheng, T. Oishi, K. Ikeuchi, "Intrinsic Parameter Calibration of an Omnidirectional LiDAR Using a High-Accuracy Laser Scanner," 15th Meeting of the Society of Instrument and Control Engineers (SICE), Tokyo, December 2014 (in Japanese).

Hobbies

Guitar, Desktop Music (DTM), Badminton

Links

Computer Vision Laboratory, The University of Tokyo