Paper Title


PT-Ranking: A Benchmarking Platform for Neural Learning-to-Rank

Author

Yu, Hai-Tao

Abstract


Deep neural networks have become the first choice for researchers working on the algorithmic aspects of learning-to-rank. Unfortunately, finding the hyper-parameter settings that achieve the best ranking performance is not trivial. As a result, it becomes increasingly difficult to develop a new model and conduct a fair comparison with prior methods, especially for newcomers. In this work, we propose PT-Ranking, an open-source project based on PyTorch for developing and evaluating learning-to-rank methods that use deep neural networks as the basis for constructing a scoring function. On one hand, PT-Ranking includes many representative learning-to-rank methods. Besides the traditional optimization framework of empirical risk minimization, an adversarial optimization framework is also integrated. Furthermore, PT-Ranking's modular design provides a set of building blocks that users can leverage to develop new ranking models. On the other hand, PT-Ranking supports comparing different learning-to-rank methods on widely used datasets (e.g., MSLR-WEB30K, Yahoo!LETOR and Istella LETOR) in terms of different metrics, such as precision, MAP, nDCG and nERR. By randomly masking the ground-truth labels at a specified ratio, PT-Ranking allows users to examine to what extent the ratio of unlabelled query-document pairs affects the performance of different learning-to-rank methods. We further conducted a series of demo experiments to clearly show the effect of different factors on neural learning-to-rank methods, such as the activation function, the number of layers and the optimization strategy.
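The label-masking idea described in the abstract can be sketched as follows. This is an illustrative sketch only, not PT-Ranking's actual API: the function name `mask_labels`, the `-1` sentinel for unlabeled pairs, and the seeding scheme are all assumptions made for the example.

```python
import numpy as np

def mask_labels(labels, mask_ratio, unlabeled=-1, seed=0):
    """Randomly replace a fraction of ground-truth relevance labels
    with an 'unlabeled' marker, simulating partially labeled
    query-document pairs. Hypothetical helper, not PT-Ranking code."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    # Number of labels to hide, rounded to the nearest integer.
    n_mask = int(round(mask_ratio * labels.size))
    # Choose distinct positions to mask.
    idx = rng.choice(labels.size, size=n_mask, replace=False)
    labels[idx] = unlabeled
    return labels

labels = np.arange(10) % 3          # toy relevance labels for 10 documents
masked = mask_labels(labels, 0.4)   # hide 40% of the ground-truth labels
```

Varying `mask_ratio` over a grid (e.g., 0.1 to 0.9) would then let one measure how each learning-to-rank method degrades as fewer labeled pairs remain, which is the kind of study the abstract describes.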
