Paper Title

Semi-supervised multiscale dual-encoding method for faulty traffic data detection

Paper Authors

Huang, Yongcan; Yang, Jidong J.

Paper Abstract

Inspired by the recent success of deep learning in multiscale information encoding, we introduce a variational autoencoder (VAE) based semi-supervised method for detection of faulty traffic data, which is cast as a classification problem. Continuous wavelet transform (CWT) is applied to the time series of traffic volume data to obtain rich features embodied in time-frequency representation, followed by a twin of VAE models to separately encode normal data and faulty data. The resulting multiscale dual encodings are concatenated and fed to an attention-based classifier, consisting of a self-attention module and a multilayer perceptron. For comparison, the proposed architecture is evaluated against five different encoding schemes, including (1) VAE with only normal data encoding, (2) VAE with only faulty data encoding, (3) VAE with both normal and faulty data encodings, but without attention module in the classifier, (4) siamese encoding, and (5) cross-vision transformer (CViT) encoding. The first four encoding schemes adopted the same convolutional neural network (CNN) architecture while the fifth encoding scheme follows the transformer architecture of CViT. Our experiments show that the proposed architecture with the dual encoding scheme, coupled with attention module, outperforms other encoding schemes and results in classification accuracy of 96.4%, precision of 95.5%, and recall of 97.7%.
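
To make the described pipeline concrete, below is a minimal sketch (PyTorch + PyWavelets) of the architecture outlined in the abstract: a CWT time-frequency image feeds twin encoders whose outputs are concatenated and classified with self-attention plus an MLP. The wavelet choice ("morl"), all layer sizes, the latent dimension, and the 288-point input series are illustrative assumptions rather than the authors' configuration, and the deterministic ConvEncoder is only a stand-in for the paper's pretrained VAE encoders (just the encoding path is needed at classification time).

```python
# Illustrative sketch only; sizes, wavelet, and module names are assumptions.
import numpy as np
import pywt
import torch
import torch.nn as nn

def cwt_features(volume_series, n_scales=64):
    """CWT of a 1-D traffic-volume series into a (1, n_scales, T) image."""
    coeffs, _ = pywt.cwt(volume_series, scales=np.arange(1, n_scales + 1),
                         wavelet="morl")  # Morlet wavelet: an assumption
    return torch.tensor(np.abs(coeffs), dtype=torch.float32).unsqueeze(0)

class ConvEncoder(nn.Module):
    """CNN encoder standing in for one VAE branch (normal or faulty)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class DualEncodingClassifier(nn.Module):
    """Twin encoders, concatenated encodings, self-attention + MLP head."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc_normal = ConvEncoder(latent_dim)  # pretrained on normal data
        self.enc_faulty = ConvEncoder(latent_dim)  # pretrained on faulty data
        self.attn = nn.MultiheadAttention(embed_dim=latent_dim, num_heads=4,
                                          batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2))

    def forward(self, x):
        # Treat the two encodings as a 2-token sequence for self-attention.
        tokens = torch.stack([self.enc_normal(x), self.enc_faulty(x)], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.mlp(attended.flatten(1))

# Usage: one day of volume data at 5-minute resolution (288 points, assumed).
x = cwt_features(np.random.rand(288)).unsqueeze(0)  # (1, 1, 64, 288)
logits = DualEncodingClassifier()(x)                # (1, 2): normal vs faulty
```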
