Paper Title
Unsupervised Cross-Modal Alignment for Multi-Person 3D Pose Estimation
Paper Authors
Paper Abstract
We present a deployment-friendly, fast, bottom-up framework for multi-person 3D human pose estimation. We adopt a novel neural representation of multi-person 3D pose which unifies the positions of person instances with their corresponding 3D pose representations. This is realized by learning a generative pose embedding which not only ensures plausible 3D pose predictions, but also eliminates the usual keypoint-grouping operation employed in prior bottom-up approaches. Further, we propose a practical deployment paradigm in which paired 2D or 3D pose annotations are unavailable. In the absence of any paired supervision, we leverage a frozen network, trained on the auxiliary task of multi-person 2D pose estimation, as a teacher model. We cast the learning as a cross-modal alignment problem and propose training objectives to realize a shared latent space between the two diverse modalities. We aim to enhance the model's ability to perform beyond the limiting teacher network by enriching the latent-to-3D-pose mapping using artificially synthesized multi-person 3D scene samples. Our approach not only generalizes to in-the-wild images, but also yields a superior trade-off between speed and performance compared to prior top-down approaches. It also achieves state-of-the-art multi-person 3D pose estimation performance among bottom-up approaches under consistent supervision levels.
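The core idea of casting learning as cross-modal alignment can be illustrated with a minimal numpy sketch: two modality-specific encoders map a 2D pose and a 3D pose into a shared latent space, and an alignment objective pulls the two latent codes together. The linear encoders, joint count, latent dimension, and plain L2 objective below are illustrative assumptions for exposition only, not the paper's actual architecture or training losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: J joints, 2D input (J x 2), 3D input (J x 3), latent dim.
J, D_LAT = 17, 32

# Two modality-specific encoders, sketched here as fixed random linear maps.
W_2d = rng.normal(size=(J * 2, D_LAT)) * 0.1
W_3d = rng.normal(size=(J * 3, D_LAT)) * 0.1

def encode_2d(pose_2d: np.ndarray) -> np.ndarray:
    """Embed a (J, 2) pose into the shared latent space."""
    return pose_2d.reshape(-1) @ W_2d

def encode_3d(pose_3d: np.ndarray) -> np.ndarray:
    """Embed a (J, 3) pose into the same shared latent space."""
    return pose_3d.reshape(-1) @ W_3d

def alignment_loss(z_a: np.ndarray, z_b: np.ndarray) -> float:
    # A simple L2 objective encouraging the two modalities to agree
    # in the shared latent space (stand-in for the paper's objectives).
    return float(np.mean((z_a - z_b) ** 2))

# Toy paired sample; in the paper's setting such pairs are not assumed,
# and the 2D side would come from a frozen teacher network instead.
pose_2d = rng.normal(size=(J, 2))
pose_3d = rng.normal(size=(J, 3))
loss = alignment_loss(encode_2d(pose_2d), encode_3d(pose_3d))
```

In an actual training setup the encoder weights would be learned by minimizing such an alignment term alongside a decoder that maps the shared latent code back to a plausible 3D pose.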