Paper Title
Asynchronous Federated Learning Based Mobility-aware Caching in Vehicular Edge Computing
Paper Authors
Paper Abstract
Vehicular edge computing (VEC) is a promising technology to support real-time applications by caching contents in roadside units (RSUs), so that vehicles can fetch the contents requested by vehicular users (VUs) from the RSU within a short time. Because the capacity of an RSU is limited and the contents requested by VUs change frequently due to the high mobility of vehicles, it is essential to predict the most popular contents and cache them in the RSU in advance. The RSU can train a model on the VUs' data to effectively predict the popular contents. However, VUs are often reluctant to share their data with others because of personal privacy concerns. Federated learning (FL) allows each vehicle to train a local model on its VUs' data and upload the local model, rather than the data, to the RSU to update the global model, so that the VUs' private information is protected. Traditional synchronous FL must wait for all vehicles to complete training and upload their local models before the global model is updated, which leads to a long global model training time. Asynchronous FL instead updates the global model as soon as a vehicle's local model is received. However, vehicles with different staying times have different impacts on the accuracy of the global model. In this paper, we consider vehicle mobility and propose an Asynchronous FL based Mobility-aware Edge Caching (AFMC) scheme to obtain an accurate global model, and then propose an algorithm to predict the popular contents based on the global model. Experimental results show that AFMC outperforms other baseline caching schemes.
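For intuition only, the minimal sketch below illustrates the kind of asynchronous, mobility-aware aggregation step the abstract describes: the RSU merges each arriving local model into the global model immediately, with a mixing weight that grows with the vehicle's staying time in the RSU's coverage. The function name, the linear weighting rule, and all parameters are illustrative assumptions, not the paper's actual AFMC update rule.

```python
import numpy as np

def asynchronous_update(global_model, local_model, staying_time,
                        max_staying_time, base_lr=0.5):
    """Merge one vehicle's local model into the global model on arrival.

    Hypothetical mobility-aware weighting: vehicles that stay longer in the
    RSU's coverage get a larger mixing weight, so their local models have a
    larger impact on the global model. The linear rule is a placeholder.
    """
    # Mobility-aware weight in (0, base_lr], assumed linear in staying time.
    weight = base_lr * min(staying_time / max_staying_time, 1.0)
    # Convex combination of the current global model and the local model.
    return {name: (1.0 - weight) * g + weight * local_model[name]
            for name, g in global_model.items()}


# Toy usage: two parameter tensors and one arriving local update.
global_model = {"w": np.zeros(3), "b": np.zeros(1)}
local_model = {"w": np.array([0.2, -0.1, 0.4]), "b": np.array([0.05])}
global_model = asynchronous_update(global_model, local_model,
                                   staying_time=30.0, max_staying_time=60.0)
print(global_model)
```

Unlike a synchronous round, no barrier is needed: each call updates the global model with a single vehicle's contribution, which is why staying-time-dependent weighting matters for keeping the global model accurate.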