Paper Title
A Framework for Generating Informative Benchmark Instances
Paper Authors
Paper Abstract
Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data defining instances that are graded (solvable at a certain difficulty level for a solver) or that can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to produce a large number of benchmark instances, purpose-built for effective and informative benchmarking. We use five problems from the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver across the whole instance space; for example, by finding subsets of instances where a solver's performance varies significantly from its average performance.
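The two instance properties central to the abstract can be sketched as simple predicates. This is a minimal illustrative sketch, not the paper's implementation: the threshold values and function names are hypothetical, chosen only to make the graded/discriminating distinction concrete.

```python
def is_graded(runtime_s, lower=10.0, upper=300.0):
    """Graded: the solver finishes within a target difficulty window.

    `lower`/`upper` are hypothetical bounds in seconds, not values
    taken from the paper.
    """
    return lower <= runtime_s <= upper


def is_discriminating(runtime_a_s, runtime_b_s, ratio=10.0):
    """Discriminating: one solver is at least `ratio` times faster
    than the other on this instance (ratio is illustrative)."""
    fast, slow = sorted((runtime_a_s, runtime_b_s))
    return fast > 0 and slow / fast >= ratio


# Example: solver A solves the instance in 20 s; solver B needs 600 s.
print(is_graded(20.0))                 # within the difficulty window
print(is_discriminating(20.0, 600.0))  # 30x gap between the two solvers
```

In the framework described, instance data for a parameterised model would be searched for automatically so that the resulting instances satisfy predicates of this kind, rather than being hand-picked.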