Title
Maximum softly-penalized likelihood for mixed effects logistic regression
Authors
Abstract
Maximum likelihood estimation in logistic regression with mixed effects is known to often result in estimates on the boundary of the parameter space. Such estimates, which include infinite values for fixed effects and singular or infinite variance components, can wreak havoc on numerical estimation procedures and inference. We introduce an appropriately scaled additive penalty to the log-likelihood function, or an approximation thereof, which penalizes the fixed effects by Jeffreys' invariant prior for the model with no random effects, and the variance components by a composition of negative Huber loss functions. The resulting maximum penalized likelihood estimates are shown to lie in the interior of the parameter space. Appropriate scaling of the penalty guarantees that the penalization is soft enough to preserve the optimal asymptotic properties expected of the maximum likelihood estimator, namely consistency, asymptotic normality, and Cramér-Rao efficiency. Our choice of penalties and scaling factor preserves equivariance of the fixed-effects estimates under linear transformations of the model parameters, such as contrasts. Maximum softly-penalized likelihood is compared to competing approaches on two real-data examples, and through comprehensive simulation studies that illustrate its superior finite-sample performance.
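To illustrate the kind of penalization the abstract describes, the following is a minimal sketch of Jeffreys-prior-penalized logistic regression for the fixed-effects-only case (no random effects), where the log-likelihood is augmented by half the log-determinant of the Fisher information. The function names, the scaling constant `c`, and the use of `scipy.optimize.minimize` are illustrative assumptions, not the paper's implementation; in particular, the paper's soft scaling of the penalty and its Huber-loss treatment of variance components are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit


def penalized_negloglik(beta, X, y, c=1.0):
    """Negative penalized log-likelihood for logistic regression:
    -[loglik(beta) + (c/2) * log|X' W X|], with W = diag(p(1-p)).
    The (c/2) log-determinant term is the Jeffreys-prior penalty; the
    scaling constant c is an illustrative knob, not the paper's scaling."""
    eta = X @ beta
    # Numerically stable log-likelihood: sum_i [y_i eta_i - log(1 + e^{eta_i})]
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    p = expit(eta)
    w = p * (1.0 - p)
    # log-determinant of the Fisher information X' W X
    _, logdet = np.linalg.slogdet(X.T @ (X * w[:, None]))
    return -(loglik + 0.5 * c * logdet)


def fit_jeffreys_logistic(X, y, c=1.0):
    """Maximize the penalized likelihood numerically (illustrative fitter)."""
    beta0 = np.zeros(X.shape[1])
    res = minimize(penalized_negloglik, beta0, args=(X, y, c), method="BFGS")
    return res.x
```

As a usage example, for completely separated data the unpenalized maximum likelihood estimate of the slope is infinite, whereas the penalized estimate stays in the interior of the parameter space:

```python
X = np.column_stack([np.ones(6), np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # perfectly separated
beta = fit_jeffreys_logistic(X, y)  # finite intercept and slope
```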