Paper Title

Cooking Is All About People: Comment Classification On Cookery Channels Using BERT and Classification Models (Malayalam-English Mix-Code)

Authors

Subramaniam Kazhuparambil, Abhishek Kaushik

Abstract


The scope of a lucrative career promoted by Google through its video distribution platform YouTube has attracted a large number of users to become content creators. An important aspect of this line of work is the feedback received in the form of comments, which shows how well the content is being received by the audience. However, the volume of comments, coupled with spam and limited tools for comment classification, makes it virtually impossible for a creator to go through each and every comment and gather constructive feedback. Automatic classification of comments is a challenge even for established classification models, since comments are often of variable length and riddled with slang, symbols and abbreviations. The challenge is greater still where comments are multilingual, as the messages are often rife with the respective vernacular. In this work, we have evaluated top-performing classification models for classifying comments that are different combinations of English and Malayalam (only English, only Malayalam, and a mix of English and Malayalam). Statistical analysis of the results indicates that Multinomial Naive Bayes, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest and Decision Trees offer similar levels of accuracy in comment classification. Further, we have also evaluated three multilingual transformer-based language models (BERT, DistilBERT and XLM) and compared their performance to the traditional machine learning classification techniques. XLM was the top-performing BERT-family model with an accuracy of 67.31. Random Forest with a Term Frequency Vectorizer was the best-performing of the traditional classification models, with an accuracy of 63.59.
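The traditional pipeline the abstract describes pairs a term-frequency representation of each comment with a classical classifier such as Multinomial Naive Bayes. As an illustrative sketch only (this is not the authors' code; the toy comments, labels, and whitespace tokenizer below are hypothetical, and real code-mixed Malayalam-English text would need Unicode-aware tokenization), a minimal multinomial Naive Bayes over raw term counts can be written as:

```python
from collections import Counter, defaultdict
import math


def tokenize(text):
    # Crude whitespace tokenizer for illustration; code-mixed comments
    # in Malayalam script would need proper Unicode-aware tokenization.
    return text.lower().split()


class MultinomialNB:
    """Minimal multinomial Naive Bayes over raw term frequencies."""

    def fit(self, texts, labels, alpha=1.0):
        self.alpha = alpha                       # Laplace smoothing constant
        self.class_counts = Counter(labels)      # documents per class
        self.term_counts = defaultdict(Counter)  # class -> term -> count
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in tokenize(text):
                self.term_counts[label][tok] += 1
                self.vocab.add(tok)
        self.total_terms = {c: sum(self.term_counts[c].values())
                            for c in self.class_counts}
        return self

    def predict(self, text):
        n_docs = sum(self.class_counts.values())
        v = len(self.vocab)
        best, best_score = None, float("-inf")
        for c in self.class_counts:
            # log prior + sum of smoothed log likelihoods per token
            score = math.log(self.class_counts[c] / n_docs)
            for tok in tokenize(text):
                score += math.log(
                    (self.term_counts[c][tok] + self.alpha)
                    / (self.total_terms[c] + self.alpha * v))
            if score > best_score:
                best, best_score = c, score
        return best


# Hypothetical toy data: two comment classes, "positive" and "spam".
clf = MultinomialNB().fit(
    ["nice recipe thanks", "great video",
     "buy followers now", "spam link here"],
    ["positive", "positive", "spam", "spam"])
```

The same interface generalizes to the paper's setting by swapping in the real labeled comment set; the transformer models (BERT, DistilBERT, XLM) replace the count-based features with learned subword representations rather than this explicit vocabulary.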
