Parallel computing for artificial neural network training

Osman Gursoy, Haidar Sharif

Abstract


Big data is the oil of this century, and extracting knowledge from it demands substantial computational power. Parallel and distributed computing are therefore essential for processing large volumes of data. Artificial Neural Networks (ANNs) need as much data as possible to reach high accuracy, and parallel processing can reduce ANN training time. In this paper, we have implemented an exemplary parallelization of neural network training by dint of Java and its native socket libraries. During the experiments, we noticed that the Java implementation tends to run into memory issues when large training data sets are involved. We also remarked that this exemplary parallelization of neural network training does not improve drastically once additional nodes are introduced into the system beyond a certain point. This is widely due to the network communication overhead in the system.
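The abstract describes data-parallel training of a neural network using Java's native socket libraries. The paper's actual implementation is not reproduced here; the following is a minimal sketch of the general scheme under stated assumptions: each worker computes the mean gradient over its own data shard and ships it to a master over a TCP socket, and the master averages the gradients and takes a step. A one-weight linear model on the toy relation y = 2x stands in for a real ANN, the workers run as local threads connecting over localhost, and all class and method names (`ParallelTrainSketch`, `train`, `grad`) are illustrative, not the authors' code.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelTrainSketch {

    // Gradient of the squared loss (w*x - y)^2 for a single example.
    static double grad(double w, double x, double y) {
        return 2 * (w * x - y) * x;
    }

    // Trains a one-weight model on y = 2x. Each "worker" computes its
    // shard's mean gradient and sends it to the master over a localhost
    // socket, standing in for cross-node communication.
    static double train(int epochs) {
        double[][][] shards = {
            {{1, 2}, {2, 4}},   // worker 1's (x, y) examples
            {{3, 6}, {4, 8}},   // worker 2's (x, y) examples
        };
        double w = 0.0;
        try (ServerSocket master = new ServerSocket(0)) {   // ephemeral port
            int port = master.getLocalPort();
            ExecutorService pool = Executors.newFixedThreadPool(shards.length);
            for (int epoch = 0; epoch < epochs; epoch++) {
                final double wNow = w;
                for (double[][] shard : shards) {
                    pool.submit(() -> {
                        try (Socket s = new Socket("localhost", port);
                             DataOutputStream out =
                                 new DataOutputStream(s.getOutputStream())) {
                            double g = 0;
                            for (double[] ex : shard) g += grad(wNow, ex[0], ex[1]);
                            out.writeDouble(g / shard.length);  // mean shard gradient
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
                }
                double sum = 0;
                for (int k = 0; k < shards.length; k++) {   // master collects all workers
                    try (Socket s = master.accept();
                         DataInputStream in = new DataInputStream(s.getInputStream())) {
                        sum += in.readDouble();
                    }
                }
                w -= 0.05 * (sum / shards.length);          // averaged-gradient step
            }
            pool.shutdown();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.printf("learned w = %.3f%n", train(100));
    }
}
```

Note how every epoch costs one socket round trip per worker: this per-step communication is exactly the overhead the abstract identifies as the reason adding nodes stops paying off beyond a certain point.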

DOI: http://dx.doi.org/10.21533/pen.v6i1.143



Copyright (c) 2018 Md. Haidar Sharif

This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN: 2303-4521

Digital Object Identifier DOI: 10.21533/pen
