Paper Title

Conjugate priors and bias reduction for logistic regression models

Authors

Tommaso Rigon and Emanuele Aliverti

Abstract

Logistic regression models for binomial responses are routinely used in statistical practice. However, the maximum likelihood estimate may not exist due to data separability. We address this issue by considering a conjugate prior penalty which always produces finite estimates. Such a specification has a clear Bayesian interpretation and enjoys several invariance properties, making it an appealing prior choice. We show that the proposed method leads to an accurate approximation of the reduced-bias approach of Firth (1993), resulting in estimators with smaller asymptotic bias than the maximum likelihood estimator, and whose existence is always guaranteed. Moreover, the considered penalized likelihood can be expressed as a genuine likelihood, in which the original data are replaced with a collection of pseudo-counts. Hence, our approach may leverage well-established and scalable algorithms for logistic regression. We compare our estimator with alternative reduced-bias methods, vastly improving on their computational performance while achieving appealing inferential results.
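
The pseudo-count representation means the penalized fit can be obtained by running any standard binomial-logistic routine on slightly augmented data. The sketch below is only a generic illustration of that mechanism, not the paper's exact construction: it assumes a Jeffreys-style augmentation in which every unit contributes an extra half success and half failure with a hypothetical total prior weight `eps`, and it uses `statsmodels` to refit the augmented proportions, which stay strictly inside (0, 1) and therefore give finite estimates even under complete separation.

```python
import numpy as np
import statsmodels.api as sm

# Toy data with complete separation: y is perfectly predicted by the sign of x,
# so the ordinary maximum likelihood estimate diverges.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X = sm.add_constant(x)

# Generic pseudo-count augmentation (illustrative only; the conjugate-penalty
# pseudo-counts in the paper differ): each Bernoulli observation receives
# eps/2 pseudo successes and eps/2 pseudo failures, for a hypothetical
# prior weight eps.
eps = 0.5
trials = 1.0 + eps                      # trials per unit after augmentation
prop = (y + eps / 2.0) / trials         # augmented success proportions in (0, 1)

# Standard binomial-logistic machinery handles the augmented data unchanged.
fit = sm.GLM(prop, X,
             family=sm.families.Binomial(),
             var_weights=np.full(len(y), trials)).fit()
print(fit.params)  # finite intercept and slope despite the separation
```

The practical point is that the augmented problem is an ordinary (weighted) logistic regression, so existing IRLS or stochastic-gradient solvers can be reused without modification.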
