Paper Title
EmpathicSchool: A multimodal dataset for real-time facial expressions and physiological data analysis under different stress conditions
Paper Authors
Abstract
Affective computing has garnered researchers' attention and interest in recent years, as there is a need for AI systems to better understand and react to human emotions. However, analyzing human affective states, such as mood or stress, is quite complex. While various stress studies use facial expressions and wearables, most existing datasets rely on processing data from a single modality. This paper presents EmpathicSchool, a novel dataset that captures facial expressions together with associated physiological signals, such as heart rate, electrodermal activity, and skin temperature, under different stress levels. The data was collected from 20 participants across multiple sessions, totaling 26 hours. The dataset comprises nine different signal types, spanning both computer-vision and physiological features that can be used to detect stress. In addition, various experiments were conducted to validate the signal quality.
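A common first step with a multimodal dataset like this is aligning signals recorded at different rates before extracting features for stress detection. The sketch below is illustrative only: the 1 Hz heart-rate and 4 Hz EDA sampling rates, the windowing scheme, and the feature choices are assumptions for demonstration, not specifications from the paper.

```python
from statistics import mean, stdev

def align_and_window(hr_1hz, eda_4hz, window_s=5):
    """Align a 1 Hz heart-rate stream with a 4 Hz EDA stream and
    extract simple per-window features (mean HR, mean/std EDA).

    Sampling rates and features are illustrative assumptions, not
    taken from the EmpathicSchool dataset documentation.
    """
    # Downsample EDA to 1 Hz by averaging each group of 4 samples.
    eda_1hz = [mean(eda_4hz[i:i + 4]) for i in range(0, len(eda_4hz), 4)]
    # Truncate both streams to their common length before windowing.
    n = min(len(hr_1hz), len(eda_1hz))
    features = []
    for start in range(0, n - window_s + 1, window_s):
        hr_w = hr_1hz[start:start + window_s]
        eda_w = eda_1hz[start:start + window_s]
        features.append({
            "hr_mean": mean(hr_w),    # bpm
            "eda_mean": mean(eda_w),  # microsiemens
            "eda_std": stdev(eda_w),
        })
    return features

# Toy example: 10 s of synthetic signals.
hr = [70, 71, 72, 74, 76, 78, 80, 82, 83, 85]      # bpm, 1 Hz
eda = [0.1 + 0.01 * i for i in range(40)]          # microsiemens, 4 Hz
feats = align_and_window(hr, eda, window_s=5)
print(len(feats))  # 2 non-overlapping 5 s windows
```

In practice, rising means in both windows (as in this toy data) are the kind of coarse trend a stress classifier would pick up; a real pipeline would also use the facial-expression features the dataset provides.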