Title

Interpretable Deep Representation Learning from Temporal Multi-view Data

Authors

Lin Qiu, Vernon M. Chinchilli, Lin Lin

Abstract

In many scientific problems such as video surveillance, modern genomics, and finance, data are often collected from diverse measurements across time that exhibit time-dependent heterogeneous properties. Thus, it is important not only to integrate data from multiple sources (called multi-view data), but also to incorporate time dependency for a deep understanding of the underlying system. We propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamics of multi-view temporal data. This approach allows us to identify disentangled latent embeddings across views while accounting for the time factor. We apply our proposed model to three datasets, on which we demonstrate its effectiveness and interpretability.
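The kind of generative process the abstract describes can be sketched as follows: a shared latent state evolves over time through a recurrent transition, and each view is decoded from the same latent at every step. This is a minimal NumPy sketch for intuition only, not the authors' model; the dimensions, the tanh transition, the linear per-view decoders, and the names `view_a`/`view_b` are all illustrative assumptions (the paper uses a variational autoencoder with a recurrent neural network, not this simplified linear decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

T, latent_dim = 10, 4                    # time steps, shared latent dimension (assumed)
view_dims = {"view_a": 8, "view_b": 6}   # per-view observation dims (assumed)

# Recurrent transition for the latent dynamics: z_t = tanh(W z_{t-1}) + noise
W = rng.normal(scale=0.5, size=(latent_dim, latent_dim))
# One decoder per view (linear here for simplicity): x^v_t = D_v z_t + noise
decoders = {v: rng.normal(scale=0.5, size=(d, latent_dim))
            for v, d in view_dims.items()}

def sample_sequence():
    """Ancestral sampling from a temporal multi-view generative process."""
    z = rng.normal(size=latent_dim)                 # z_1 ~ N(0, I)
    latents, views = [], {v: [] for v in view_dims}
    for _ in range(T):
        latents.append(z)
        for v, D in decoders.items():               # every view decodes the same z_t
            views[v].append(D @ z + 0.1 * rng.normal(size=view_dims[v]))
        z = np.tanh(W @ z) + 0.1 * rng.normal(size=latent_dim)  # latent transition
    return np.array(latents), {v: np.array(x) for v, x in views.items()}

latents, views = sample_sequence()
print(latents.shape, views["view_a"].shape, views["view_b"].shape)
# (10, 4) (10, 8) (10, 6)
```

Inference in the paper goes in the opposite direction: given the observed per-view sequences, the variational autoencoder's encoder and the recurrent network jointly infer the disentangled latent trajectory that this sketch samples forward.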
