Similar to INF, this approach uses 2D Gaussian splatting generated from LiDAR data as geometric information
and achieves robust LiDAR-camera calibration by optimizing view consistency across camera images.
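The idea of calibrating by optimizing view consistency can be sketched in a deliberately tiny form. Everything here is hypothetical: the renderer is a stand-in, and the extrinsic is reduced to a single scalar offset recovered by minimizing a photometric loss.

```python
import numpy as np

# Toy sketch (hypothetical, not the paper's implementation): recover a
# single "extrinsic" offset by minimizing the photometric inconsistency
# between a geometry-derived rendering and the observed camera image.

def render(offset, xs):
    # Stand-in renderer: an intensity pattern shifted by the extrinsic offset.
    return np.sin(xs + offset)

def photometric_loss(offset, xs, observed):
    return np.mean((render(offset, xs) - observed) ** 2)

xs = np.linspace(0.0, 2.0 * np.pi, 200)
true_offset = 0.7
observed = render(true_offset, xs)   # synthetic "camera image"

# Gradient descent on the view-consistency loss, using a central
# numerical derivative for simplicity.
offset, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    g = (photometric_loss(offset + eps, xs, observed)
         - photometric_loss(offset - eps, xs, observed)) / (2.0 * eps)
    offset -= lr * g
```

The same principle scales up when the scalar offset becomes a full 6-DoF extrinsic and the stand-in renderer becomes splatting or volume rendering from the LiDAR geometry.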
Using implicit neural representations, we proposed a method for fusing data captured by a LiDAR and a camera.
This method represents the geometric information with a neural density field learned from the LiDAR data.
As in NeRF, we achieve sensor calibration and data fusion simultaneously
by learning a color field that is consistent with the input camera images.
This paper clarifies the basic mechanism of frequency regularisation in implicit neural representations
and comprehensively discusses the expressive capabilities of NeRF with grid-based feature encoding (GFE).
We also propose a generalised frequency regularisation strategy for the problems of camera pose optimisation
and few-shot reconstruction in NeRF.
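One common form of frequency regularisation is a coarse-to-fine mask over the positional-encoding bands, unmasking higher frequencies as training progresses. The sketch below is illustrative only; the schedule and encoding details are assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of coarse-to-fine frequency regularisation for a NeRF-style
# positional encoding. The linear unmasking schedule is an assumption.

def positional_encoding(x, num_freqs):
    # x: (..., d) -> (..., d * 2 * num_freqs) sin/cos features.
    freqs = 2.0 ** np.arange(num_freqs)          # 1, 2, 4, 8, ...
    angles = x[..., None] * freqs                # broadcast over bands
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

def frequency_mask(step, total_steps, num_freqs):
    # Linearly reveal higher frequency bands over the training schedule.
    visible = num_freqs * step / total_steps
    band = np.clip(visible - np.arange(num_freqs), 0.0, 1.0)
    return np.concatenate([band, band])          # same mask for sin and cos

x = np.array([[0.3, -0.1, 0.8]])                 # one 3-D sample point
enc = positional_encoding(x, num_freqs=4)        # shape (1, 24)
mask = frequency_mask(step=1, total_steps=4, num_freqs=4)
masked = enc.reshape(1, 3, -1) * mask            # per-coordinate masking
```

Early in training only the lowest band survives the mask, which biases the field toward smooth geometry before high-frequency detail is allowed in.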
It is challenging to model objects seen through glass, such as objects in a glass case, in 3D.
The proposed method models the glass surface and refraction from images taken from multiple directions
and, through neural representations, separates the viewpoint-dependent reflection component
from the viewpoint-independent object shape and color.
We proposed a NeRF-based approach to model invisible components such as gases in three dimensions using image sequences
from far-infrared and visible light cameras.
The method first learns the color and density fields of visible light;
by reusing the same density field as geometric information, it can then model invisible components
in three dimensions from the far-infrared images.
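The key point, that one density field can serve two modalities, is visible already in the standard volume-rendering quadrature: the compositing weights depend only on density, while the radiance is per-modality. The numbers below are made up for illustration.

```python
import numpy as np

# Illustrative sketch (hypothetical values): volume rendering along one ray
# reuses a single shared density field while the color field differs per
# modality, mirroring how far-infrared radiance can sit on visible geometry.

def render_ray(densities, colors, delta):
    # Standard NeRF quadrature: alpha-composite the samples along a ray.
    alphas = 1.0 - np.exp(-densities * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas          # depend on density only
    return np.sum(weights * colors), weights

densities = np.array([0.0, 2.0, 5.0, 0.5])   # shared geometry
visible   = np.array([0.1, 0.8, 0.6, 0.2])   # visible-light radiance
infrared  = np.array([0.0, 0.3, 0.9, 0.1])   # far-infrared radiance

vis_px, w_vis = render_ray(densities, visible, delta=0.1)
ir_px, w_ir = render_ray(densities, infrared, delta=0.1)
# w_vis and w_ir are identical: the geometry constrains both modalities.
```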
We developed a mobile scanning system for fast and accurate capture of 3D range data.
Our system consists of a LiDAR and a color camera.
While the laser scanner captures the 3D structures,
the color camera records an image sequence.
The sensor motion is robustly estimated by a sensor-fused 2D/3D feature-tracking method,
which enables accurate reconstruction of the structures from the scan lines.
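The motion-estimation step can be condensed to its core geometric sub-problem: given 3D feature points matched between frames, fit the rigid motion in the least-squares sense (the Kabsch/Umeyama fit). The actual sensor-fused pipeline is richer; this is only the closed-form core, written as a sketch.

```python
import numpy as np

# Hedged sketch: estimate inter-frame sensor motion from matched 3-D feature
# points via a least-squares rigid fit (Kabsch). The 2D/3D fusion and the
# tracking front end described in the text are not reproduced here.

def rigid_fit(src, dst):
    # Find R, t minimizing sum ||R @ src_i + t - dst_i||^2.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice such a fit is usually wrapped in RANSAC so that feature-tracking outliers do not corrupt the motion estimate.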
We developed a flying sensor system to capture 3D data aerially.
The system, consisting of an omni-directional laser scanner and a panoramic camera,
can be mounted under a mobile platform to achieve aerial scanning with high resolution and accuracy.
Since the laser scanner often requires several minutes
to complete an omni-directional scan, the raw data is seriously distorted by the unknown and uncontrollable
sensor movement during the scanning period. Our approach recovers the sensor motion by utilizing the spatial
and temporal features extracted from both the image sequences and the point clouds.
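Once the trajectory is recovered, each scan line can be mapped into a common frame by looking up the sensor pose at that line's timestamp. The sketch below keeps only the per-line motion-compensation idea from the text; the trajectory interpolation is translation-only and entirely illustrative.

```python
import numpy as np

# Hypothetical sketch: undistort a slow omni-directional scan by
# interpolating the recovered sensor trajectory at each scan line's
# timestamp (translation only; rotation handling is omitted).

def undistort(points, line_times, traj_times, traj_positions):
    # Linearly interpolate the sensor position per scan line and add it,
    # mapping sensor-frame points into the common (world) frame.
    offsets = np.stack([
        np.interp(line_times, traj_times, traj_positions[:, k])
        for k in range(3)
    ], axis=-1)
    return points + offsets
```

A full implementation would interpolate rotations as well (e.g. with quaternion slerp) and apply the whole SE(3) pose per scan line.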