Paper Title
Distributed Deep Convolutional Compression for Massive MIMO CSI Feedback
Paper Authors
Paper Abstract
Massive multiple-input multiple-output (MIMO) systems require downlink channel state information (CSI) at the base station (BS) to achieve spatial diversity and multiplexing gains. In a frequency division duplex (FDD) multiuser massive MIMO network, each user needs to compress and feed back its downlink CSI to the BS. The CSI overhead scales with the number of antennas, users, and subcarriers, and becomes a major bottleneck for the overall spectral efficiency. In this paper, we propose a deep learning (DL)-based CSI compression scheme, called DeepCMC, composed of convolutional layers followed by quantization and entropy coding blocks. In comparison with previous DL-based CSI reduction structures, DeepCMC proposes a novel fully-convolutional neural network (NN) architecture, with residual layers at the decoder, and incorporates quantization and entropy coding blocks into its design. DeepCMC is trained to minimize a weighted rate-distortion cost, which enables a trade-off between the CSI quality and its feedback overhead. Simulation results demonstrate that DeepCMC outperforms state-of-the-art CSI compression schemes in terms of the reconstruction quality of CSI for the same compression rate. We also propose a distributed version of DeepCMC for a multi-user MIMO scenario to encode and reconstruct the CSI from multiple users in a distributed manner. Distributed DeepCMC not only utilizes the inherent CSI structures of a single MIMO user for compression, but also benefits from the correlations among the channel matrices of nearby users to further improve the performance in comparison with DeepCMC. We also propose a reduced-complexity training method for distributed DeepCMC, allowing it to scale to multiple users, and suggest a cluster-based distributed DeepCMC approach for practical implementation.
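The abstract states that DeepCMC is trained on a weighted rate-distortion cost but does not give its exact form. A common formulation in learned compression is L = D + λ·R, where D is the CSI reconstruction distortion (e.g. MSE) and R is the estimated bit rate of the quantized latent codes. The sketch below is a minimal NumPy illustration of one plausible form of such a cost; the function name, the empirical-entropy rate proxy, and the weight `lam` are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def rate_distortion_loss(h_true, h_hat, codes, lam=0.1):
    """Illustrative weighted rate-distortion cost (assumed form).

    h_true, h_hat : true and reconstructed CSI matrices
    codes         : quantized latent symbols produced by the encoder
    lam           : weight trading feedback overhead against CSI quality
    """
    # Distortion: mean squared error between true and reconstructed CSI.
    distortion = np.mean(np.abs(h_true - h_hat) ** 2)

    # Rate: empirical entropy (bits/symbol) of the quantized codes,
    # a proxy for the overhead after entropy coding.
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))

    return distortion + lam * rate
```

A larger `lam` penalizes feedback overhead more heavily, steering training toward shorter codes at the cost of reconstruction quality; sweeping `lam` traces out the rate-distortion trade-off the abstract refers to.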