Paper Title
Knowledge Distillation-aided End-to-End Learning for Linear Precoding in Multiuser MIMO Downlink Systems with Finite-Rate Feedback
Paper Authors
Paper Abstract
We propose a deep learning-based channel estimation, quantization, feedback, and precoding method for downlink multiuser multiple-input multiple-output systems. In the proposed system, channel estimation and quantization for limited feedback are handled by receiver deep neural networks (DNNs), one per user, while precoder selection is handled by a transmitter DNN. To emulate conventional channel quantization, a binarization layer is adopted at each receiver DNN; the binarization layer also enables end-to-end learning. However, it can produce inaccurate gradients, which can trap the receiver DNNs in a poor local minimum during training. To address this, we consider knowledge distillation, in which these DNNs are jointly trained with an auxiliary transmitter DNN. Using the auxiliary DNN as a teacher network allows the receiver DNNs to additionally exploit lossless gradients, which helps them avoid poor local minima. For the same number of feedback bits, our DNN-based precoding scheme can achieve a higher downlink rate than conventional linear precoding with codebook-based limited feedback.
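To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of how the binarization layer, the per-user receiver DNNs, and the auxiliary teacher path for knowledge distillation could be realized. The layer widths, the straight-through backward pass, the hypothetical rate() objective, and the distillation weight alpha are all assumptions made for illustration.

import torch
import torch.nn as nn

class Binarize(torch.autograd.Function):
    # Forward: map real-valued feedback to +/-1 bits, emulating channel quantization.
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    # Backward: straight-through estimator, i.e. pass the incoming gradient unchanged.
    # This approximate ("inaccurate") gradient is what can trap training in a poor local minimum.
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

class ReceiverDNN(nn.Module):
    # Compresses one user's received pilots into n_bits feedback values.
    def __init__(self, in_dim, n_bits):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_bits))

    def forward(self, rx_pilots, quantize=True):
        z = torch.tanh(self.net(rx_pilots))
        # quantize=True  -> binarized feedback for the student transmitter DNN
        # quantize=False -> lossless real-valued feedback for the teacher transmitter DNN
        return Binarize.apply(z) if quantize else z

class TransmitterDNN(nn.Module):
    # Maps the concatenated feedback of all users to a linear precoder
    # (real and imaginary parts stacked in one output vector).
    def __init__(self, n_users, n_bits, n_tx):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_users * n_bits, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * n_tx * n_users))

    def forward(self, feedback):
        return self.net(feedback)

# Joint training sketch: the student and teacher transmitter DNNs share the receiver
# DNNs, so gradients from the lossless (teacher) path reach the receivers in addition
# to the straight-through (student) gradients. rate() and alpha are placeholders.
#   f_q  = torch.cat([rx(pilots_k, quantize=True)  for rx, pilots_k in zip(receivers, pilots)], dim=-1)
#   f_fp = torch.cat([rx(pilots_k, quantize=False) for rx, pilots_k in zip(receivers, pilots)], dim=-1)
#   loss = -rate(student_tx(f_q), H) - alpha * rate(teacher_tx(f_fp), H)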