Paper Title
UESegNet: Context Aware Unconstrained ROI Segmentation Networks for Ear Biometric
Paper Authors
Paper Abstract
Biometric-based personal authentication systems are in strong demand, mainly due to increasing concerns in various privacy and security applications. Although the use of each biometric trait is problem dependent, the human ear has been found to have enough discriminating characteristics to serve as a strong biometric measure. Locating an ear in a 2D side-face image is a challenging task: numerous existing approaches have achieved significant performance, but the majority of studies assume a constrained environment. Ear biometrics, however, face a great level of difficulty in unconstrained environments, where pose, scale, occlusion, illumination, background clutter, etc. vary to a great extent. To address the problem of ear localization in the wild, we propose two high-performance region-of-interest (ROI) segmentation models, UESegNet-1 and UESegNet-2, which are fundamentally based on deep convolutional neural networks and primarily use contextual information to localize the ear in unconstrained environments. Additionally, we have applied the state-of-the-art deep learning models FRCNN (Faster R-CNN) and SSD (Single Shot MultiBox Detector) to the ear localization task. To test the models' generalization, they are evaluated on six different benchmark datasets, viz., IITD, IITK, USTB-DB3, UND-E, UND-J2, and UBEAR, all of which contain challenging images. The performance of the models is compared on the basis of object-detection performance measures such as IOU (Intersection over Union), accuracy, precision, recall, and F1-score. It has been observed that the proposed models UESegNet-1 and UESegNet-2 outperform FRCNN and SSD at higher IOU values, e.g., an accuracy of 100% is achieved at an IOU of 0.5 on the majority of the databases.
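The IOU metric on which the abstract's comparison rests is the standard bounding-box overlap measure, not anything specific to this paper. As a minimal illustrative sketch (not the authors' code), assuming axis-aligned boxes given as (x1, y1, x2, y2), a detection is counted correct at threshold t when IOU >= t:

def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1 = max(box_a[0], box_b[0])  # intersection rectangle corners
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: predicted ear box vs. ground truth.
# This pair overlaps by 900 px out of a 2300 px union, so IOU ~= 0.39,
# which would fail the 0.5 threshold cited in the abstract.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.3913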