Paper Title
Global Performance Guarantees for Neural Network Models of AC Power Flow
Paper Authors
Abstract
Machine learning, which can generate extremely fast and highly accurate black-box surrogate models, is increasingly being applied to a variety of AC power flow problems. Rigorously verifying the accuracy of the resulting black-box models, however, is computationally challenging. This paper develops a tractable neural network verification procedure that incorporates the ground truth of the nonlinear AC power flow equations to determine worst-case neural network prediction error. Our approach, termed Sequential Targeted Tightening (STT), leverages a loosely convexified reformulation of the original verification problem, which is an intractable mixed integer quadratic program (MIQP). Through the sequential addition of targeted cuts, we iteratively tighten our formulation until either the solution is sufficiently tight or a satisfactory performance guarantee has been generated. After learning neural network models of the 14-, 57-, 118-, and 200-bus PGLib test cases, we compare the performance guarantees generated by our STT procedure with those generated by a state-of-the-art MIQP solver, Gurobi 11.0. We show that STT often generates performance guarantees that are far tighter than the MIQP upper bound.
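The abstract describes an iterate-until-tight pattern: solve a relaxation to get a bound, add a targeted cut, and repeat until the bound gap closes. As an illustrative sketch only (this is not the paper's STT formulation or its MIQP), the same control flow can be shown with a classical cutting-plane loop, here Kelley's method on a toy one-dimensional convex problem; the function names `kelley_tighten`, `f`, and `df` are ours, not from the paper.

```python
# Hedged sketch: a generic cutting-plane loop (Kelley's method) on a toy
# 1-D problem, illustrating the "tighten a relaxation with sequential
# targeted cuts until the bound gap closes" pattern from the abstract.
# It is NOT the paper's STT procedure, which operates on an MIQP.
from itertools import combinations

def kelley_tighten(f, df, lo, hi, tol=1e-6, max_iter=50):
    """Bound min f on [lo, hi] by sequentially adding tangent cuts."""
    cuts = []          # each cut is a line a*x + b lower-bounding f
    ub = float("inf")  # best true objective value seen (incumbent)
    lb = -float("inf") # bound from the cut-based relaxation
    x = lo
    for _ in range(max_iter):
        a = df(x)
        cuts.append((a, f(x) - a * x))   # targeted cut: tangent at x
        ub = min(ub, f(x))
        # Relaxed problem: minimise the model max_i(a_i*x + b_i) over
        # [lo, hi]; in 1-D the optimum lies at an endpoint or at an
        # intersection of two cuts.
        cands = [lo, hi]
        for (a1, b1), (a2, b2) in combinations(cuts, 2):
            if a1 != a2:
                xi = (b2 - b1) / (a1 - a2)
                if lo <= xi <= hi:
                    cands.append(xi)
        x, lb = min(((c, max(a * c + b for a, b in cuts)) for c in cands),
                    key=lambda p: p[1])
        if ub - lb < tol:   # relaxation is tight enough: stop
            break
    return lb, ub

lb, ub = kelley_tighten(lambda x: x * x, lambda x: 2 * x, -2.0, 3.0)
print(lb, ub)  # both bounds approach the true minimum, 0
```

As in STT, each iteration adds a cut "targeted" at the point where the current relaxation is loosest, so the certified bound tightens monotonically and the loop can stop early once the guarantee is good enough.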