Paper title
Self-learning locally-optimal hypertuning using maximum entropy, and comparison of machine learning approaches for estimating fatigue life in composite materials
Paper authors
Paper abstract
Applications of Structural Health Monitoring (SHM) combined with Machine Learning (ML) techniques enhance real-time performance tracking and increase structural-integrity awareness in civil, aerospace, and automotive infrastructure. This SHM-ML synergy has gained popularity in recent years thanks to the maintenance anticipation enabled by emerging ML algorithms and their ability to handle large quantities of data and to account for their influence on the problem. In this paper we develop a novel ML nearest-neighbors-like algorithm based on the principle of maximum entropy to predict fatigue damage (the Palmgren-Miner index) in composite materials by processing the signals of Lamb waves -- a non-destructive SHM technique -- together with other meaningful features such as layup parameters and stiffness matrices calculated from Classical Laminate Theory (CLT). The full data-analysis cycle is applied to a dataset of delamination experiments in composites. The predictions achieve a good level of accuracy, similar to other ML algorithms such as Neural Networks or Gradient-Boosted Trees, with computation times of the same order of magnitude. The key advantages of our proposal are: (1) all parameters involved in the prediction are determined automatically, so no hyperparameters have to be set beforehand, which saves the time devoted to hypertuning the model and is also an advantage for autonomous, self-supervised SHM; and (2) no training is required, which, in an \textit{online learning} context where streams of data are fed continuously to the model, avoids repeated training -- essential for reliable real-time, continuous monitoring.
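The abstract's core idea -- a nearest-neighbors-style predictor whose neighbor weights form a maximum-entropy distribution, so that no hyperparameters need to be tuned by hand -- can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exponential weight form and the moment constraint used here (weighted mean distance fixed halfway between the minimum and the unweighted mean distance) are assumptions chosen only to make the example self-contained, and the sketch assumes the query's distances to the training points are not all identical.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_weights(dists, target):
    """Maximum-entropy weights w_i >= 0, sum(w) = 1, subject to the
    constraint sum(w_i * d_i) = target. The solution has the Gibbs form
    w_i ∝ exp(-lam * d_i); we solve for the multiplier lam numerically."""
    def constraint_gap(lam):
        w = np.exp(-lam * (dists - dists.min()))  # shift for stability
        w /= w.sum()
        return w @ dists - target
    # constraint_gap(0) > 0 (uniform mean > target) and for large lam the
    # weighted mean approaches dists.min() < target, so a root exists.
    lam = brentq(constraint_gap, 0.0, 1e6)
    w = np.exp(-lam * (dists - dists.min()))
    return w / w.sum()

def predict(X_train, y_train, x_query):
    """Weighted nearest-neighbors prediction with auto-determined weights."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    # Illustrative, data-driven constraint: no user-chosen hyperparameter.
    target = 0.5 * (d.min() + d.mean())
    w = maxent_weights(d, target)
    return w @ y_train
```

The point of the sketch is the hyperparameter-free flavor claimed in the abstract: the "temperature" `lam` is not set by the user but pinned down by a constraint computed from the data itself, and no training step precedes prediction.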