Title
How deep the machine learning can be
Authors
Abstract
Today we live in the age of artificial intelligence and machine learning: from small startups to hardware and software giants, everyone wants to build machine-intelligence chips and applications. The task, however, is hard, and not only because of the size of the problem: the technology one can utilize (and the paradigm it is based upon) strongly limits the chances of succeeding efficiently. Single-processor performance has practically reached the limits that the laws of nature allow. The only feasible way to achieve the required high computing performance seems to be parallelizing many sequentially working units. The laws of (massively) parallelized computing, however, differ from those experienced when assembling and utilizing systems comprising just a few single processors. As machine learning is mostly based on conventional computing (processors), we scrutinize the known, but somewhat faded, laws of parallel computing as they apply to AI. This paper attempts to review some of the caveats, especially those concerning scaling the computing performance of AI solutions.
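The classic, "somewhat faded" law of parallel scaling alluded to above is presumably Amdahl's Law; as a brief reminder (the symbols $\alpha$, $N$, and $S$ are our notation, not taken from the paper), a workload whose parallelizable fraction is $\alpha$, executed on $N$ sequentially working units, speeds up by at most

\[
S(N) = \frac{1}{(1-\alpha) + \alpha/N},
\qquad
\lim_{N\to\infty} S(N) = \frac{1}{1-\alpha},
\]

so even with $\alpha = 0.99$ the speedup saturates at $100\times$, no matter how many units work in parallel; this saturation is the kind of scaling caveat the paper reviews.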