Go for Parallel Neural Networks
Abstract

Training artificial neural networks is a computationally intensive task. A common and reasonable approach to reducing the computation time of neural networks is to parallelize the training. To this end, we present a data-parallel neural network implementation written in Go. The chosen programming language offers built-in concurrency support, allowing us to focus on the neural network instead of the multi-threading. The multi-threaded performance of various networks was compared to the single-threaded performance in terms of accuracy, execution time, and speedup. Additionally, two alternative parallelization approaches were implemented for further comparison. In summary, all networks benefited from the parallelization in execution time and speedup. Splitting the mini-batches for parallel gradient computation and merging the updates produced the same accuracy as the single-threaded network. Averaging the parameters too infrequently in the alternative implementations had a negative impact on accuracy.
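The mini-batch splitting scheme the abstract describes can be sketched in Go using goroutines and a `sync.WaitGroup`. The sketch below is a hypothetical illustration, not the authors' implementation: it trains a one-parameter linear model, where each worker computes a partial gradient sum over its share of the mini-batch, and the partial sums are merged into a single SGD update, so the result matches a single-threaded step exactly.

```go
package main

import (
	"fmt"
	"sync"
)

// gradient returns the per-example gradient for a linear model y = w*x
// under squared-error loss: dL/dw = 2*(w*x - y)*x.
func gradient(w, x, y float64) float64 {
	return 2 * (w*x - y) * x
}

// parallelStep splits the mini-batch (xs, ys) across `workers` goroutines,
// computes partial gradient sums concurrently, merges them, and applies
// one SGD update -- the data-parallel scheme described in the abstract.
func parallelStep(w float64, xs, ys []float64, lr float64, workers int) float64 {
	partial := make([]float64, workers) // one slot per worker, no locking needed
	var wg sync.WaitGroup
	chunk := (len(xs) + workers - 1) / workers
	for i := 0; i < workers; i++ {
		lo, hi := i*chunk, (i+1)*chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(id, lo, hi int) {
			defer wg.Done()
			for j := lo; j < hi; j++ {
				partial[id] += gradient(w, xs[j], ys[j])
			}
		}(i, lo, hi)
	}
	wg.Wait()

	// Merge step: sum the partial gradients, average over the batch,
	// and update the parameter exactly as a single thread would.
	var grad float64
	for _, g := range partial {
		grad += g
	}
	return w - lr*grad/float64(len(xs))
}

func main() {
	xs := []float64{1, 2, 3, 4}
	ys := []float64{2, 4, 6, 8} // generated with true weight w = 2
	w := 0.0
	for i := 0; i < 200; i++ {
		w = parallelStep(w, xs, ys, 0.1, 4)
	}
	fmt.Printf("w = %.2f\n", w) // converges toward the true weight 2
}
```

Because the merged gradient is mathematically identical to the single-threaded one, this scheme preserves accuracy, which matches the abstract's finding; only the parameter-averaging alternatives trade accuracy for less frequent synchronization.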

Authors
  • Turner, David
  • Schikuta, Erich
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
15th International Work-Conference on Artificial Neural Networks IWANN 2019
Divisions
Workflow Systems and Technology
Subjects
Parallel Data Processing
Event Location
Gran Canaria, Spain
Event Type
Conference
Event Dates
June 12-14, 2019
Date
12 June 2019