Paper Title
MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images
Paper Authors
Abstract
We introduce a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering. Stereo 360° imagery can be captured from multi-camera systems for virtual reality (VR), but lacks motion parallax and correct-in-all-directions disparity cues. Together, these can quickly lead to VR sickness when viewing content. One solution is to try to generate a format suitable for 6DoF rendering, such as by estimating depth. However, this raises questions as to how to handle disoccluded regions in dynamic scenes. Our approach is to simultaneously learn depth and disocclusions via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR. This significantly improves comfort for the viewer, and can be inferred and rendered in real time on modern GPU hardware. Together, these move toward making VR video a more comfortable immersive medium.
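A multi-sphere image stores the scene as concentric RGBA sphere layers around the viewer; a novel view is produced by "over"-compositing the layers from farthest to nearest. The abstract does not spell out this rendering step, so below is a minimal per-pixel sketch of that compositing, assuming per-layer RGB and alpha arrays; the function name, array shapes, and layer ordering are illustrative assumptions (the paper's real-time renderer composites textured sphere meshes on the GPU rather than per-pixel in NumPy).

```python
import numpy as np

def composite_msi(rgb, alpha):
    """Back-to-front 'over' compositing of multi-sphere image layers.

    Illustrative sketch, not the paper's implementation.
    rgb:   (L, H, W, 3) layer colors, with layer 0 the farthest sphere
    alpha: (L, H, W, 1) layer opacities in [0, 1]
    Returns an (H, W, 3) composited image.
    """
    out = np.zeros(rgb.shape[1:], dtype=rgb.dtype)
    for color, a in zip(rgb, alpha):
        # Standard 'over' operator: the nearer layer occludes what is behind
        # it in proportion to its opacity.
        out = color * a + out * (1.0 - a)
    return out
```

With a fully opaque far layer and a half-transparent near layer, the result is an even blend of the two, which is how soft alpha values let the representation handle disocclusion boundaries gracefully.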