Resource-Efficient Neural Networks for Embedded Systems

Abstract

While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and is key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature, which can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We substantiate our discussion with experiments on well-known benchmark data sets to showcase the difficulty of finding good trade-offs between resource efficiency and predictive performance.
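To make the pruning category concrete, the following is a minimal sketch of unstructured magnitude-based weight pruning, one of the simplest post-processing techniques the abstract alludes to: the smallest-magnitude weights of a layer are zeroed out to reach a target sparsity. The function name and the use of NumPy are illustrative choices, not part of the report itself.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight tensor.

    sparsity: target fraction of weights to set to zero (0.0-1.0).
    Returns a pruned copy; the original tensor is left untouched.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
Wp = magnitude_prune(W, sparsity=0.75)  # at least 75% of entries become zero
```

In practice such pruning is typically interleaved with fine-tuning so the network can recover the accuracy lost by removing weights, which is exactly the performance/resource trade-off the report studies.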

Authors
  • Roth, Wolfgang
  • Schindler, Günther
  • Zöhrer, Matthias
  • Pfeifenberger, Lukas
  • Peharz, Robert
  • Tschiatschek, Sebastian
  • Fröning, Holger
  • Pernkopf, Franz
  • Ghahramani, Zoubin
Shortfacts
Category
Technical Report (Working Paper)
Divisions
Data Mining and Machine Learning
Publisher
CoRR arXiv
Date
7 January 2020