S. Zhou, S. Xie, R. Ishikawa, and T. Oishi, "Robust LiDAR-Camera Calibration With 2D Gaussian Splatting," IEEE Robotics and Automation Letters, vol. 10, no. 5, pp. 4674-4681, May 2025.
Y. Yao, R. Ishikawa, and T. Oishi, "Stereo-LiDAR Fusion by Semi-Global Matching with Discrete Disparity-Matching Cost and Semidensification," in IEEE Robotics and Automation Letters, vol. 10, no. 5, pp. 4548-4555, May 2025.
International conference
S. Zhou, S. Xie, R. Ishikawa, and T. Oishi, "Robust LiDAR-Camera Calibration With 2D Gaussian Splatting," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), August 2025.
Y. Yao, R. Ishikawa, and T. Oishi, "Stereo-LiDAR Fusion by Semi-Global Matching with Discrete Disparity-Matching Cost and Semidensification," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), August 2025.
I. Shimoda, I. Sokrithy, T. Oishi, R. Nakamura, L. Kajiwara, W. Chen, “Safeguarding Banteay Chhmar through Digital Twins: Integrated Approaches for Research, Management, and Conservation,” International Committee for Documentation of Cultural Heritage (CIPA) 2025, Seoul, Korea, August 2025.
■FY2024
Journal paper
R. Ishikawa, T. Yuzawa, T. Fukiage, M. Kagesawa, T. Watsuji, T. Oishi, "Visibility Enhancement of Lesion Regions in Chest X-ray Images with Image Fidelity Preservation," in IEEE Access, vol. 13, pp. 11080-11094, 2025.
International conference
K. Haga, M. Kagesawa, and T. Oishi, “Apoxyomenoi: replication in antiquity and restoration in modern times,” 22nd International Congress on Ancient Bronzes, Athens, Greece, October 17, 2024.
S. Xie, S. Zhou, K. Sakurada, R. Ishikawa, M. Onishi, and T. Oishi, "G2fR: Frequency Regularization in Grid-based Feature Encoding Neural Radiance Fields," The 18th European Conference on Computer Vision (ECCV), pp. 186-203, 2024.
W. Kim, T. Fukiage, and T. Oishi, "REF²-NeRF: Reflection and Refraction-aware Neural Radiance Field," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7196-7203, 2024.
X. Li, S. Xie, K. Sakurada, R. Sagawa, and T. Oishi, "Implicit Neural Fusion of RGB and Far-Infrared 3D Imagery for Invisible Scenes," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 12501-12508, 2024.
L. Fu, R. Ishikawa, Y. Sato and T. Oishi, "CAPT: Category-level Articulation Estimation from a Single Point Cloud Using Transformer," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 751-757, doi: 10.1109/ICRA57147.2024.10611073.
Y. Kang, G. Caron, R. Ishikawa, A. Escande, K. Chappellet, R. Sagawa, T. Oishi, "Direct 3D model-based object tracking with event camera by motion interpolation," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 2645-2651, doi: 10.1109/ICRA57147.2024.10611576.
R. Ishikawa, S. Zhou, Y. Sato, T. Oishi and K. Ikeuchi, "LiDAR-camera Calibration using Intensity Variance Cost," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 10688-10694, doi: 10.1109/ICRA57147.2024.10610261.
Domestic conference
J. He, T. Oishi, R. Ishikawa, “Transformer-based Event Camera Relative Pose Estimation in Outdoor Scenes,” Visual Computing 2024, September 10, 2024.
Y. Wu, T. Oishi, “Robust Panoramic SLAM for Outdoor Scenes,” Visual Computing 2024, September 12, 2024.
■FY2023
Journal paper
T. Nemoto, T. Kobayashi, M. Kagesawa, T. Oishi, H. Kurokochi, S. Yoshimura, E. Ziddan, M. Taha, "Virtual Restoration of Ancient Wooden Ships Through Non-rigid 3D Shape Assembly with Ruled-Surface FFD," International Journal of Computer Vision (IJCV), 131, 1269–1283, 2023.
T. Fukiage and T. Oishi, "A Content-Adaptive Visibility Predictor for Perceptually Optimized Image Blending," ACM Transactions on Applied Perception, Vol. 20(1), Article 3, pp. 1-29, January 2023, doi: 10.1145/3565972
International conference
S. Balasooriya, Y. Sato, T. Oishi, "Autonomous Robotic Platform for Proximal Data Collection Amongst Foliage Utilizing an Anisotropically Flexible Manipulator," The 2024 16th IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 2024, pp. 1498-1503, doi: 10.1109/SII58957.2024.10417644.
H. Hansen, Y. Liu, R. Ishikawa, T. Oishi, Y. Sato, "Quadruped Robot Platform for Selective Pesticide Spraying," 18th International Conference on Machine Vision Applications (MVA), Hamamatsu, Japan, 2023, pp. 1-6, doi: 10.23919/MVA57639.2023.10215812.
L. Miao, R. Ishikawa, and T. Oishi, "SWIN-RIND: Edge Detection for Reflectance, Illumination, Normal and Depth Discontinuity with Swin Transformer," British Machine Vision Conference (BMVC), 2023.
S. Zhou, S. Xie, R. Ishikawa, K. Sakurada, M. Onishi and T. Oishi, "INF: Implicit Neural Fusion for LiDAR and Camera," 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 2023, pp. 10918-10925, doi: 10.1109/IROS55552.2023.10341648.
M. Tsuji, R. Shitomi, Y. Fujimura, T. Funatomi, Y. Mukaigawa, T. Morimoto, T. Oishi, J. Takamatsu, and K. Ikeuchi, "Pigment Mapping for Tomb Murals Using Neural Representation and Physics-Based Model," Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pp. 1671-1679, 2023.
Domestic conference
L. Miao, R. Ishikawa, T. Oishi, “SWIN-RIND: Edge Detection for Reflectance, Illumination, Normal and Depth Discontinuity with Swin Transformer,” IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), January 2024.
G. Ziang, Y. Yao, R. Ishikawa, T. Oishi, “Non-learning depth completion with continuous and binary anisotropic diffusion tensor,” IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), January 2024.
C. Zhu, R. Ishikawa, M. Kagesawa, T. Yuzawa, T. Watsuji, T. Oishi, “NeAS: 3D modeling and surface extraction from X-ray images using Neural Attenuation Surface,” IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), January 2024.
L. Fu, R. Ishikawa, Y. Sato, T. Oishi, “CAPT: Category-level Articulation Estimation from a Single Point Cloud Using Transformer,” The 41st Annual Conference of the Robotics Society of Japan, September 12, 2023, Sendai.
J. Chen, R. Ishikawa, T. Oishi, “Task-Related Planar Grasping with Object Pose Re-Adjustment,” The 41st Annual Conference of the Robotics Society of Japan, September 12, 2023, Sendai.
■FY2022
Journal paper
R. Shitomi, M. Tsuji, Y. Fujimura, T. Funatomi, Y. Mukaigawa, T. Morimoto, T. Oishi, J. Takamatsu, and K. Ikeuchi, "Unsupervised learning with physics-based autoencoder for estimating thickness and mixing ratio of pigments," Journal of the Optical Society of America A, 40(1), 116–128, Dec. 19, 2022, doi: 10.1364/JOSAA.472775
International conference
M. Sato, R. Hakoda, A. Murakami, N. Wake, K. Sasabuchi, M. Nakamura, T. Oishi, T. Itoh, K. Ikeuchi, “Digital Reconstruction of Ballet Movements from Dance Scores: A Focus on Stepanov's Music Note System and Labanotation,” The 32nd conference of the International Council of Kinetography Laban/Labanotation, July 17-23, 2022, Budapest, Hungary.
S. Xie, R. Ishikawa, K. Sakurada, M. Onishi and T. Oishi, "Fast Structural Representation and Structure-aware Loop Closing for Visual SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6396-6403, 2022, doi: 10.1109/IROS47612.2022.9981496
H. Hendra, R. Ishikawa, Y. Sato and T. Oishi, “Quadruped Robot Platform for Selective Pesticide Spraying,” ICRA 2022 Workshop on Agricultural Robotics and Automation, 27 May 2022, Philadelphia Convention Center, Hybrid Event.
Domestic conference
K. Li, R. Ishikawa, M. Roxas, T. Oishi, “Camera pose estimation in vehicle based on ResNet,” The 40th Annual Conference of the Robotics Society of Japan, September 6, 2022, Tokyo.
Y. Kang, G. Caron, R. Ishikawa, A. Escande, K. Chappellet, R. Sagawa, T. Oishi, “Direct 3D model-based tracking in omnidirectional data,” The 40th Annual Conference of the Robotics Society of Japan, September 6, 2022, Tokyo.
W. Yin, R. Ishikawa, T. Oishi, “Reflection removal of glass wall with Encoder-Decoder deep learning network,” The 40th Annual Conference of the Robotics Society of Japan, September 6, 2022, Tokyo.
■FY2021
Journal paper
Y. Yao, R. Ishikawa, S. Ando, K. Kurata, N. Ito, J. Shimamura, T. Oishi, “Non-Learning Stereo-Aided Depth Completion Under Mis-Projection via Selective Stereo Matching,” in IEEE Access, vol. 9, pp. 136674-136686, 2021, doi: 10.1109/ACCESS.2021.3117710.
International conference
T. Fukiage, T. Oishi, “A computational model to predict the visibility of alpha-blended images,” Journal of Vision, 21 (9), 2493-2493, 2021.
■FY2020
Journal paper
N. Kawai, Y. Okada, T. Oishi, M. Kagesawa, A. Nishisaka, H. Kamal, “The Ceremonial Canopied Chariot of Tutankhamun (JE61990 and JE60705): A Tentative Virtual Reconstruction,” CIPEG Journal: Ancient Egypt & Sudanese Collections and Museums, No. 4, Nov. 2020, doi: 10.11588/cipeg.2020.4.76661.
Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura, and T. Oishi, “Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5128-5135, Oct. 2020, doi: 10.1109/LRA.2020.3005890.
M. Roxas and T. Oishi, “Variational Fisheye Stereo,” in IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1303-1310, April 2020. DOI: 10.1109/LRA.2020.2967657
International conference
Y. Yao, M. Roxas, R. Ishikawa, S. Ando, J. Shimamura, and T. Oishi, “Discontinuous and Smooth Depth Completion with Binary Anisotropic Diffusion Tensor,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2020.
M. Roxas and T. Oishi, “Variational Fisheye Stereo,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May – 31 August 2020.
R. Hartanto, R. Ishikawa, M. Roxas, T. Oishi, “Hand-Motion-guided Articulation and Segmentation Estimation,” The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 2020, doi: 10.1109/RO-MAN47096.2020.9223433.
Domestic conference
J. Hausberg, R. Ishikawa, M. Roxas, T. Oishi, “Drone Detection and Tracking from Sparse LIDAR Data by Adaptive Kernel-based Detector,” The 23rd Meeting on Image Recognition and Understanding (MIRU2020), August 3, 2020.
■FY2019
Journal paper
A. Hirata, R. Ishikawa, M. Roxas and T. Oishi, “Real-Time Dense Depth Estimation Using Semantically-Guided LIDAR Data Propagation and Motion Stereo,” in IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3806-3811, Oct. 2019. doi: 10.1109/LRA.2019.2927126
International conference
M. Kamakura, H. Ikuta, B. Zheng, Y. Sato, M. Kagesawa, T. Oishi, K. Sezaki, T. Nakagawa, K. Ikeuchi, “Preah Vihear Project: Obtaining 3D point-cloud data and its application to spatial distribution analysis of Khmer temples,” In 3rd ACM SIGSPATIAL International Workshop on Geospatial Humanities (GeoHumanities’19), Article No. 5, pp. 1–9, November 5, 2019, Chicago, IL, USA. ACM, New York, NY, USA. https://doi.org/10.1145/3356991.3365473
R. Ishikawa, T. Oishi, K. Ikeuchi, “Dynamic Calibration between a Mobile Robot and SLAM Device for Navigation,” 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1-6, New Delhi, India, Oct. 16, 2019, doi: 10.1109/RO-MAN46459.2019.8956356
A. Hirata, R. Ishikawa, M. Roxas and T. Oishi, “Real-Time Dense Depth Estimation Using Semantically-Guided LIDAR Data Propagation and Motion Stereo,” The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2019.
N. Kawai, Y. Okada, T. Oishi, M. Kagesawa, A. Nishisaka, H. M. Kamal, T. S. Tawfik, “A Virtual Reconstruction of the Ceremonial Canopied Chariot of Tutankhamun (JE61990 and JE60705) – A Case of Virtual Representation in a Museum,” CIPEG session, 25th ICOM (International Council of Museums) General Conference, Kyoto, Sept. 1, 2019.
Domestic conference
Y. Ike, K. Fujimoto, M. Roxas, Y. Okamoto, T. Oishi, “Accurate Visual Localization for Vehicles using Effective Feature Points in Dense 3D Map,” The 22nd Meeting on Image Recognition and Understanding (MIRU2019), Grand Cube Osaka, July 30, 2019.
A. Hirata, R. Ishikawa, M. Roxas, T. Oishi, “LIDAR Upsampling Based on Semantically-Guided Propagation,” The 22nd Meeting on Image Recognition and Understanding (MIRU2019), Grand Cube Osaka, August 1, 2019. (MIRU Student Encouragement Award)
R. Ishikawa, Y. Sato, T. Oishi, K. Ikeuchi, “A profiler-camera fusion scanning system with direct based motion and calibration parameter correction,” The 22nd Meeting on Image Recognition and Understanding (MIRU2019), Grand Cube Osaka, August 1, 2019.
■FY2018
International conference
T. Nemoto, T. Kobayashi, T. Oishi, M. Kagesawa, H. Kurokochi, S. Yoshimura, E. Ziddan, M. Taha, “Virtual Restoration of Wooden Artifacts by Non-Rigid 3D Shape Assembly: A Case of the First Solar Boat of King Khufu,” The 16th EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH), Visual Heritage 2018, pp. 241-245, Vienna, Nov. 14, 2018, doi: 10.2312/gch.20181370.
M. Roxas, T. Hori, T. Fukiage, Y. Okamoto, T. Oishi, “Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality,” 24th ACM Symposium on Virtual Reality Software and Technology (VRST), Article No. 20, Nov. 30, 2018, doi: 10.1145/3281505.3281546.
R. Ishikawa, T. Oishi, K. Ikeuchi, “LiDAR and Camera Calibration using Motion Estimated by Sensor Fusion Odometry,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7342 - 7349, Oct. 4, 2018.
M. Roxas, T. Hori, T. Fukiage, Y. Okamoto, T. Oishi, “Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality,” The 13th International Workshop on Robust Computer Vision, Jan. 12-13, 2019, Beijing, China.
A. Hirata, R. Ishikawa, M. Roxas, T. Oishi, “Interpolation and Extrapolation of LiDAR Data using Camera Images,” The 13th International Workshop on Robust Computer Vision, Jan. 12-13, 2019, Beijing, China. (Best Poster Presentation Award)
Y. Sato, R. Suda, T. Oishi, “Automatic Modeling using Reinforcement Learning Based on Action Rules by Measurement Records,” The 13th International Workshop on Robust Computer Vision, Jan. 12-13, 2019, Beijing, China.
T. Nemoto, T. Kobayashi, M. Kagesawa, T. Oishi, H. Kurokochi, S. Yoshimura, E. Zidan, M. Taha, “Assembling Partial 3D Objects using Non-rigid Registration with Physical Constraints,” The 13th International Workshop on Robust Computer Vision, Jan. 12-13, 2019, Beijing, China.
R. Ishikawa, T. Oishi, K. Ikeuchi, “Lidar and Camera Calibration using Motions Estimated by Sensor Fusion Odometry,” The 13th International Workshop on Robust Computer Vision, Jan. 12-13, 2019, Beijing, China.
Domestic conference
M. Roxas, T. Hori, T. Fukiage, Y. Okamoto, T. Oishi, “Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Augmented Reality,” The 21st Meeting on Image Recognition and Understanding (MIRU2018), August 6, 2018, Sapporo.
■FY2017
Journal paper
S. Yamada, M. Araya, A. Yoshida, T. Oishi, “Structural stability evaluation study applying wind tunnel test and monitoring of Bayon main tower, Angkor Thom in Cambodia,” Structural Studies, Repairs and Maintenance of Heritage Architecture XV, WIT Transactions on The Built Environment, Vol. 171, pp. 287-296, 2017, doi: 10.2495/STR170251.
C. Morales, T. Oishi, K. Ikeuchi, “Real-time Rendering of Aerial Perspective Effect based on Turbidity Estimation,” IPSJ Transactions on Computer Vision and Applications (CVA), vol. 9, no. 1, 2017, doi: 10.1186/s41074-016-0012-1.
International conference
M. Roxas and T. Oishi, “Real-Time Simultaneous 3D Reconstruction and Optical Flow Estimation,” IEEE Winter Conf. on Applications of Computer Vision (WACV), pp. 885-893, Mar.12-15, 2018, doi: 10.1109/WACV.2018.00102.
K. Sengoku-Haga, M. Lu, S. Ono, T. Oishi, T. Masuda, K. Ikeuchi, “Polykleitos at Work: How the Doryphoros Was Used,” Proceedings of the 19th International Congress on Ancient Bronzes (Los Angeles, California, October 13–17, 2015), 2017.
K. Sakai, H. Yoshida, T. Oguchi, Y. Suda, K. Ikeuchi, K. Nakano, T. Oishi, S. Ono, T. Suzuki, T. Hirasawa, K. Wada, T. Sugimachi, R. Zheng, and K. Shimono, “Proposal on cooperative ITS for safe and sustainable transportation in Japan,” in Proc. 24th ITS World Congress, Montreal, Canada, Oct. 2017.
R. Ishikawa, T. Oishi, K. Ikeuchi, “SLAM-Device and Robot Calibration for Navigation,” The 12th International Workshop on Robust Computer Vision, Jan. 6-7, 2018, Nara, Japan.
M. Roxas, T. Oishi, “Real-Time Simultaneous 3D Reconstruction and Optical Flow Estimation,” The 12th International Workshop on Robust Computer Vision, Jan. 6-7, 2018, Nara, Japan.
Y. Ike, M. Roxas, Y. Okamoto, T. Oishi, “Robust Visual Localization using Effective Feature Points in Dense 3D Map,” The 12th International Workshop on Robust Computer Vision, Jan. 6-7, 2018, Nara, Japan. (Best Poster Presentation Award)
K. Hasegawa, Y. Okamoto, T. Oishi, T. Fukiage, “Geometrically and Optically Robust Optical See-Through Mixed Reality System with Eye Tracking Techniques,” The 12th International Workshop on Robust Computer Vision, Jan. 6-7, 2018, Nara, Japan.
T. Oishi, “Accurate Position and Pose Estimation for Mobile Platforms using Dense 3D Model,” Korea-Japan Workshop on Robotics and Information Technology for Better Quality of Life, Dec. 1, 2017, Tokyo, Japan.
K. Sengoku-Haga, S. Buseki, M. Lu, T. Masuda, T. Oishi, K. Ikeuchi, “Cyber Archaeology of Greek and Roman Sculpture,” Microsoft Research Asia Academic Day 2017, Taiwan, May 26, 2017.
■FY2016
International conference
M. Ogawa, K. Honda, Y. Sato, T. Oishi and K. Ikeuchi, “Development of interface for teleoperation of humanoid robot using task model method,” 2016 IEEE/SICE International Symposium on System Integration (SII), Dec. 2016, Sapporo, Japan.
C. Morales, S. Ono, Y. Okamoto, M. Roxas, T. Oishi and K. Ikeuchi, “Outdoor Omnidirectional Video Completion via Depth Estimation by Motion Analysis,” Proc. International Conference on Pattern Recognition (ICPR), pp. 3945-3950, 2016.
R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, “A 3D Reconstruction with High Density and Accuracy using Laser Profiler and Camera Fusion System on a Rover,” In Proc. International Conference on 3D Vision (3DV), pp. 620-628, Oct 27, 2016, Palo Alto, USA. DOI: 10.1109/3DV.2016.70
M. Roxas, T. Hori, and T. Oishi, “Variational Optical Flow with 3D Smoothness Constraint for a Single Moving Camera,” The 11th International Workshop on Robust Computer Vision, Dec. 17-18, 2016, Daejeon, Korea.
C. Morales, S. Ono, Y. Okamoto, M. Roxas, T. Oishi and K. Ikeuchi, “Panoramic Video Completion from Depth Recovery by Pixel Motion Assessment,” The 11th International Workshop on Robust Computer Vision, Dec. 17-18, 2016, Daejeon, Korea.
R. Ishikawa, M. Roxas, Y. Sato, T. Oishi, T. Masuda, K. Ikeuchi, “A Sensor Fusion 3D Reconstruction System using Depth-based Triangulation and Multimodal Registration,” The 11th International Workshop on Robust Computer Vision, Dec. 17-18, 2016, Daejeon, Korea.
T. Hori, T. Fukiage, M. Roxas, Y. Okamoto and T. Oishi, “Occlusion Handling by Semantic Segmentation and Transparency Blending for Outdoor Mixed Reality,” The 11th International Workshop on Robust Computer Vision, Dec. 17-18, 2016, Daejeon, Korea.
Y. Okamoto, K. Fujimoto, T. Oishi and K. Ikeuchi, “Robust Motion Estimation for MR Mobility System with Multiple Panoramic Cameras,” The 11th International Workshop on Robust Computer Vision, Dec. 17-18, 2016, Daejeon, Korea.