Mixed Reality

Rendering

  • Content-Adaptive Visibility Predictor

    Conventional visibility models cannot capture how the suprathreshold visibility of a blended image depends on the appearance of the pre-blended image content. We have therefore proposed a visibility model with a content-adaptive feature aggregation mechanism, which integrates the visibility of each image feature (e.g., spatial frequency and color) after applying weights that are adaptively determined from the appearance of the input image.
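
    As a rough illustration of the aggregation step only, the sketch below pools per-feature visibility responses with weights derived from the appearance of the pre-blended content; the function and variable names are hypothetical and do not reflect the model's actual implementation.

      import numpy as np

      def aggregate_visibility(feature_visibilities, feature_descriptors, weight_fn):
          # feature_visibilities: (K,) visibility response of each image feature
          #                       (e.g., one per spatial-frequency band or color channel)
          # feature_descriptors:  (K, D) appearance descriptors of the pre-blended
          #                       content associated with each feature
          # weight_fn:            hypothetical mapping from a descriptor to a weight,
          #                       e.g., a small learned regressor
          w = np.array([weight_fn(d) for d in feature_descriptors])
          w = w / (w.sum() + 1e-8)                       # normalize the content-adaptive weights
          return float(np.dot(w, feature_visibilities))  # aggregated visibility prediction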

  • Visibility-based blending

    In real-time applications, virtual objects are often presented semi-transparently over a background, and we often need to show the object with constant visibility; with a fixed blending parameter, however, the perceived visibility varies with the background content. To address this, we present a framework for blending images based on a subjective metric of visibility. In our method, the blending parameter is locally and adaptively optimized so that the visibility at each location reaches the targeted level.
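
    To make the idea concrete, here is a minimal sketch of locally adapting a blending parameter toward a target visibility level. It assumes visibility increases monotonically with the blending weight and uses a hypothetical visibility callable, so it is not the method's actual optimization.

      import numpy as np

      def adapt_alpha(fg, bg, visibility, target, iters=20):
          # fg, bg:     (H, W, 3) float images of the virtual object and the background
          # visibility: callable(blended, bg) -> (H, W) visibility map; a stand-in
          #             for the subjective visibility metric
          # target:     desired visibility level
          lo = np.zeros(bg.shape[:2])               # per-pixel lower bound on alpha
          hi = np.ones(bg.shape[:2])                # per-pixel upper bound on alpha
          for _ in range(iters):                    # bisection, assuming visibility
              alpha = 0.5 * (lo + hi)               # grows monotonically with alpha
              blended = alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg
              v = visibility(blended, bg)
              hi = np.where(v > target, alpha, hi)  # too visible  -> lower alpha
              lo = np.where(v > target, lo, alpha)  # not visible enough -> raise alpha
          return 0.5 * (lo + hi)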

  • Aerial perspective rendering

    In outdoor Mixed Reality (MR), objects distant from the observer are subject to an effect called aerial perspective, which fades the color of the objects and blends it toward the color of the environmental light. We present a turbidity-based method for rendering virtual objects with an aerial perspective effect in an MR application.
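
    The underlying fading model can be sketched as a distance-dependent blend between the object color and the environmental light color. The sketch below assumes a single extinction coefficient derived from the estimated turbidity, which is a simplification of the actual method.

      import numpy as np

      def aerial_perspective(obj_color, env_color, distance, beta):
          # obj_color: (..., 3) radiance of the virtual object
          # env_color: (3,) environmental light color
          # distance:  (...,) distance from the observer
          # beta:      extinction coefficient; in the method it would be derived
          #            from the estimated atmospheric turbidity (assumption here)
          t = np.exp(-beta * distance)[..., None]        # transmittance along the view ray
          return t * obj_color + (1.0 - t) * env_color   # fade toward the environment color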

Image processing

  • Optical flow and depth estimation

    We present an alternative method for solving the motion stereo problem for two views in a variational framework. Instead of directly solving for the depth, we simultaneously estimate the optical flow and the 3D structure by minimizing a joint energy function consisting of an optical flow constraint and a 3D constraint.
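
    As an illustration of the kind of energy involved (not the actual functional from this work), the sketch below evaluates a joint cost with a brightness-constancy flow term, a term tying the flow to the depth through the known camera motion, and a simple smoothness regularizer; the `project` helper and weights are hypothetical.

      import numpy as np

      def joint_energy(I0, I1, flow, depth, project, lam_3d=1.0, lam_s=0.1):
          # I0, I1:  consecutive grayscale frames, (H, W)
          # flow:    (H, W, 2) optical flow field
          # depth:   (H, W) depth map
          # project: callable(depth) -> (H, W, 2) flow induced by the depth and the
          #          known camera motion (hypothetical helper)
          H, W = I0.shape
          ys, xs = np.mgrid[0:H, 0:W]
          xw = np.clip((xs + flow[..., 0]).astype(int), 0, W - 1)
          yw = np.clip((ys + flow[..., 1]).astype(int), 0, H - 1)
          data_term = np.abs(I1[yw, xw] - I0)               # optical flow (brightness) constraint
          geo_term = np.abs(flow - project(depth)).sum(-1)  # 3D constraint: flow must agree
                                                            # with depth + camera motion
          smooth = np.abs(np.gradient(flow, axis=(0, 1))).sum()  # smoothness regularizer
          return data_term.sum() + lam_3d * geo_term.sum() + lam_s * smooth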

  • Video completion

    We propose a novel omnidirectional video completion framework based on depth estimation. First, we recover the depth of the scene from a pixel motion model constrained by a known camera pose. A structure-aware refinement then further improves the depth map, and the refined depth is used to propagate color into the holes.
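
    As an illustration of the final propagation step only, the sketch below copies colors into the hole from another frame via the refined depth and the known camera pose; `warp` is a hypothetical helper standing in for the actual re-projection, and the names do not come from the paper.

      import numpy as np

      def propagate_color(target, hole_mask, source, depth, warp):
          # target:    (H, W, 3) frame with missing regions
          # hole_mask: (H, W) boolean mask of pixels to fill
          # source:    (H, W, 3) another frame that observes the occluded region
          # depth:     (H, W) refined depth of the target frame
          # warp:      callable(y, x, d) -> (y', x') pixel in `source`, given the depth
          #            and the known relative camera pose (hypothetical helper)
          filled = target.copy()
          H, W = hole_mask.shape
          for y, x in zip(*np.nonzero(hole_mask)):
              ys, xs = warp(y, x, depth[y, x])
              if 0 <= ys < H and 0 <= xs < W:
                  filled[y, x] = source[int(ys), int(xs)]   # copy the re-projected color
          return filled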

Occlusion handling

Localization / tracking

  • Synthesis-based localization

    We propose a robust image-based alignment method for outdoor environments. In the proposed method, the albedo of real objects is estimated in advance using their 3D shapes, and their appearance is reproduced from the albedo and the current light environment. Because the reproduced image closely matches the current appearance of the real objects, robust image-based alignment is achieved.
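
    A highly simplified sketch of the synthesis-then-align idea follows. It scores a discrete set of candidate poses by photometric error against the synthesized appearance, whereas the actual method presumably performs continuous image-based alignment; the `render` helper and all names are hypothetical.

      import numpy as np

      def localize(camera_image, albedo_model, lighting, render, poses):
          # albedo_model: precomputed albedo and 3D shape of the real objects
          # lighting:     estimate of the current light environment
          # render:       callable(albedo_model, lighting, pose) -> synthesized image
          #               (hypothetical renderer)
          # poses:        candidate camera poses to evaluate
          errors = []
          for pose in poses:
              synthesized = render(albedo_model, lighting, pose)
              errors.append(np.abs(synthesized - camera_image).mean())  # photometric error
          return poses[int(np.argmin(errors))]    # best-aligned candidate pose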

Virtual Reconstruction of Cultural Heritage Assets

  • Virtual Asuka-kyo Project

    We developed Mixed Reality (MR) content that reconstructs the ancient capital of Asuka-Kyo and applied a fast shading and shadowing method based on shadowing planes. A subjective evaluation experiment with a head-mounted display showed that viewing the content increased the audience's knowledge of both Asuka-Kyo and MR technologies. We also conducted impression evaluation tests with and without shading and shadowing.