Paper Title
Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework
Paper Authors
Abstract
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for the autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has widely recognized potential for improving power grid stability and reliability while reducing end-users' energy bills. However, conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties and the disutility they incur for end-users, which prevent widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM through a real-time pricing strategy realized with deep reinforcement learning. Furthermore, the framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets that support the smart grid during peak hours, thereby achieving management of distributed energy resources. Simulation results based on a Deep Q-Network (DQN) demonstrate significant improvements in the 24-hour cumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid's reserve generators.
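To make the abstract's setting concrete, the sketch below shows an agent learning a 24-hour real-time pricing policy through trial and error. It is a minimal illustration only: it uses tabular Q-learning as a lightweight stand-in for the paper's DQN, and the price set, demand model, and reward are hypothetical assumptions, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Illustrative sketch only: tabular Q-learning as a stand-in for the
# paper's DQN-based real-time pricing agent. Prices, the toy demand
# model, and the reward are assumptions, not the paper's formulation.

PRICES = [0.10, 0.20, 0.30]   # hypothetical $/kWh price actions
HOURS = 24                    # state = hour of day

def demand(hour, price):
    """Toy elastic demand: peaks in the evening, shrinks as price rises."""
    base = 1.0 + (0.5 if 17 <= hour <= 21 else 0.0)
    return base * (0.35 - price) / 0.35

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)    # Q-value table keyed by (hour, price index)
    for _ in range(episodes):
        for h in range(HOURS):
            # epsilon-greedy selection over the discrete price levels
            if random.random() < eps:
                a = random.randrange(len(PRICES))
            else:
                a = max(range(len(PRICES)), key=lambda i: q[(h, i)])
            reward = PRICES[a] * demand(h, PRICES[a])  # provider revenue
            nh = (h + 1) % HOURS
            best_next = max(q[(nh, i)] for i in range(len(PRICES)))
            # standard Q-learning temporal-difference update
            q[(h, a)] += alpha * (reward + gamma * best_next - q[(h, a)])
    return q

q = train()
# greedy 24-hour pricing policy extracted from the learned Q-table
policy = [max(range(len(PRICES)), key=lambda i: q[(h, i)]) for h in range(HOURS)]
```

In the paper's framework the tabular Q-values would be replaced by a neural network (the DQN), and additional agents would represent prosumers with PV and battery assets; this sketch only conveys the learn-a-price-schedule loop.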