Title

Enabling hand gesture customization on wrist-worn devices

Authors

Xuhai Xu, Jun Gong, Carolina Brum, Lilian Liang, Bongsoo Suh, Kumar Gupta, Yash Agarwal, Laurence Lindsey, Runchang Kang, Behrooz Shahsavari, Tu Nguyen, Heriberto Nieto, Scott E. Hudson, Charlie Maalouf, Seyed Mousavi, Gierad Laput

Abstract

We present a framework for gesture customization that requires minimal examples from users, all without degrading the performance of existing gesture sets. To achieve this, we first deployed a large-scale study (N=500+) to collect data and train an accelerometer-gyroscope recognition model with a cross-user accuracy of 95.7% and a false-positive rate of 0.6 per hour when tested on everyday non-gesture data. Next, we designed a few-shot learning framework that derives a lightweight model from our pre-trained model, enabling knowledge transfer without performance degradation. We validated our approach through a user study (N=20) examining on-device customization of 12 new gestures, resulting in an average accuracy of 55.3%, 83.1%, and 87.2% when using one, three, or five shots to add a new gesture, while maintaining the same recognition accuracy and false-positive rate on the pre-existing gesture set. We further evaluated the usability of our real-time implementation with a user experience study (N=20). Our results highlight the effectiveness, learnability, and usability of our customization framework. Our approach paves the way for a future where users are no longer bound to pre-existing gestures, freeing them to creatively introduce new gestures tailored to their preferences and abilities.
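At its core, the customization framework transfers knowledge from a large pre-trained recognizer into a lightweight per-user model. As a rough illustration of that general pattern (not the authors' implementation), the PyTorch sketch below freezes a stand-in pre-trained encoder and fine-tunes only a small classification head on a handful of "shots" of a new gesture; every name, shape, and hyperparameter here (`GestureCustomizer`, the 6-axis 200-sample IMU windows, the 64-dimensional embedding) is an assumption made for illustration.

```python
# Minimal sketch of few-shot gesture customization via transfer learning.
# Assumed setup (not from the paper): a pre-trained encoder over 6-axis
# accelerometer+gyroscope windows, plus a small linear head fine-tuned
# on 1-5 example windows ("shots") while the encoder stays frozen.
import torch
import torch.nn as nn


class GestureCustomizer(nn.Module):
    def __init__(self, encoder: nn.Module, embed_dim: int, num_gestures: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # preserve pre-trained knowledge
        self.head = nn.Linear(embed_dim, num_gestures)  # lightweight, trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(x)  # (batch, embed_dim) embedding
        return self.head(z)      # gesture logits


def few_shot_finetune(model, shots, labels, epochs=50, lr=1e-2):
    """Fine-tune only the head on a handful of example windows."""
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(shots), labels)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Stand-in for the real pre-trained model (in the paper, trained on
    # the N=500+ study data); here just a toy module with the right shapes.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 200, 64))
    model = GestureCustomizer(encoder, embed_dim=64, num_gestures=13)
    shots = torch.randn(3, 6, 200)   # three shots of a new gesture
    labels = torch.full((3,), 12)    # index of the newly added 13th class
    few_shot_finetune(model, shots, labels)
```

Freezing the encoder and training only the head is one common way to add a class from few examples without disturbing recognition of the existing gesture set, which matches the behavior the abstract reports; the paper's actual lightweight-model derivation may differ.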
