Paper Title
Learning Progressive Distributed Compression Strategies from Local Channel State Information
Paper Authors
Abstract
This paper proposes a deep learning framework to design distributed compression strategies in which distributed agents need to compress high-dimensional observations of a source, then send the compressed bits via bandwidth-limited links to a fusion center for source reconstruction. Further, we require the compression strategy to be progressive so that it can adapt to the varying link bandwidths between the agents and the fusion center. Moreover, to ensure scalability, we investigate strategies that depend only on the local channel state information (CSI) at each agent. Toward this end, we use a data-driven approach in which the progressive linear combination and uniform quantization strategy at each agent are trained as a function of its local CSI. To deal with the challenge of modeling the quantization operations (whose gradients are zero almost everywhere, which stalls neural network training), we propose a novel approach that exploits the statistics of the batch training data to set the dynamic ranges of the uniform quantizers. Numerically, we show that the proposed distributed estimation strategy designed with only local CSI can significantly reduce the signaling overhead and can achieve a lower mean-squared error distortion for source reconstruction than state-of-the-art designs that require global CSI, at a comparable overall communication cost.
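The idea of setting a uniform quantizer's dynamic range from batch statistics can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name `uniform_quantize`, the mid-rise level placement, and the choice of three standard deviations for the range are all illustrative, not the authors' exact design.

```python
import numpy as np

def uniform_quantize(x, num_bits, dyn_range):
    """Uniformly quantize x into 2**num_bits levels over [-dyn_range, dyn_range]."""
    levels = 2 ** num_bits
    step = 2 * dyn_range / levels
    # Clip to the dynamic range, then map each sample to its bin index
    # and reconstruct at the bin center (mid-rise quantizer).
    x_clipped = np.clip(x, -dyn_range, dyn_range - 1e-9)
    idx = np.floor((x_clipped + dyn_range) / step)
    return -dyn_range + (idx + 0.5) * step

# Set the dynamic range from the statistics of the current training batch,
# e.g. a multiple of the per-batch standard deviation (the factor 3.0 is
# an assumption for illustration).
rng = np.random.default_rng(0)
batch = rng.normal(size=(256, 8))        # batch of pre-quantization values
dyn_range = 3.0 * batch.std()
quantized = uniform_quantize(batch, num_bits=4, dyn_range=dyn_range)

# For samples inside the dynamic range, the quantization error is at most
# half a step; clipped outliers can incur larger error.
step = 2 * dyn_range / 2 ** 4
inside = np.abs(batch) < dyn_range
assert np.all(np.abs((quantized - batch)[inside]) <= step / 2 + 1e-12)
```

In actual training, such a quantizer is typically paired with a surrogate gradient (for example, a straight-through estimator) so that the zero-gradient rounding step does not block backpropagation; the batch-derived range keeps the quantizer matched to the scale of the learned linear combinations.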