Paper Title


Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables

Author

Zhang, Liming

Abstract

Recognizing human activities from multi-channel time series data collected by wearable sensors is increasingly practical. However, in real-world conditions, coherent activities and body movements can happen at the same time, such as moving the head while walking or sitting. This new problem, so-called "Coherent Human Activity Recognition (Co-HAR)", is more complicated than a normal multi-class classification task, since the signals of different movements are mixed and interfere with each other. On the other hand, we treat Co-HAR as a dense labelling problem that classifies the sample at each time step with its own label, providing high-fidelity, duration-varied support to applications. In this paper, a novel condition-aware deep architecture, "Conditional-UNet", is developed to allow dense labelling for the Co-HAR problem. We also contribute a first-of-its-kind Co-HAR dataset for head movement recognition under walking or sitting conditions for future research. Experiments on head gesture recognition show that our model achieves an overall 2%-3% gain in F1 score over existing state-of-the-art deep methods and, more importantly, systematic and comprehensive improvements on real head gesture classes.
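The abstract frames Co-HAR as dense labelling: every time step of the multi-channel sensor sequence receives its own class label, rather than one label per fixed window, so activity segments can vary freely in duration. A minimal sketch of that framing (NumPy only; the array names, shapes, and random scores are illustrative assumptions, not the paper's Conditional-UNet implementation):

```python
import numpy as np

# Illustrative sizes (not from the paper): T time steps of C-channel
# wearable sensor data, classified into num_classes head gestures.
T, C, num_classes = 8, 6, 3

# A dense-labelling model emits one score vector per time step,
# i.e. an output of shape (T, num_classes). Random scores stand in
# for the network output here.
rng = np.random.default_rng(0)
per_step_scores = rng.random((T, num_classes))

# Dense labelling: argmax at every time step yields one label per
# sample, unlike window-level classification, which would collapse
# the whole sequence to a single label.
labels = per_step_scores.argmax(axis=1)

assert labels.shape == (T,)  # one label per time step, not per window
print(labels)
```

This per-time-step output is what lets applications recover both when a coherent movement starts and how long it lasts.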
