Parallel computing for artificial neural network training
DOI:
https://doi.org/10.21533/pen.v6.i1.1986

Abstract
As enormous computing power is required to extract knowledge from large volumes of
data, parallel and distributed computing is highly recommended for processing them.
Artificial Neural Networks (ANNs) need as much data as possible to achieve high accuracy, and parallel processing can reduce the time spent on ANN training. In
this paper, an exemplary parallelization of artificial neural network training using
Java and its native socket libraries has been implemented. During the experiments, it
was noticed that the Java native socket implementation tends to run into memory issues
when large training datasets are involved. It was also observed that the exemplary
parallelization of artificial neural network training does not improve markedly
when additional nodes are introduced into the system beyond a certain point.
This is mainly due to the network communication complexity in the system.
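The abstract does not reproduce the paper's code. As a rough sketch of how master-worker gradient exchange over Java's native sockets might look, the following runs workers as local threads that each send a gradient vector to a master over TCP, which then averages the contributions. The class name, port, wire format, and averaging scheme are our assumptions, not the paper's actual implementation.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class GradientExchange {
    // Sketch: each worker computes a gradient on its data shard and sends it
    // to the master over a plain TCP socket; the master averages them.
    // Here the workers are threads connecting over localhost for illustration.
    static double[] averageFromWorkers(double[][] workerGrads, int port) throws Exception {
        int n = workerGrads.length;
        ExecutorService pool = Executors.newFixedThreadPool(n);
        try (ServerSocket server = new ServerSocket(port)) {
            for (double[] g : workerGrads) {
                pool.submit(() -> {
                    // Worker side: length-prefixed array of doubles.
                    try (Socket s = new Socket("localhost", port);
                         DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                        out.writeInt(g.length);
                        for (double v : g) out.writeDouble(v);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                    return null;
                });
            }
            // Master side: accept one connection per worker and accumulate.
            double[] sum = null;
            for (int i = 0; i < n; i++) {
                try (Socket s = server.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream())) {
                    int len = in.readInt();
                    if (sum == null) sum = new double[len];
                    for (int j = 0; j < len; j++) sum[j] += in.readDouble();
                }
            }
            for (int j = 0; j < sum.length; j++) sum[j] /= n;
            pool.shutdown();
            return sum;
        }
    }

    public static void main(String[] args) throws Exception {
        double[][] grads = { {1.0, 2.0}, {3.0, 4.0} };
        double[] avg = averageFromWorkers(grads, 9955);
        System.out.println(avg[0] + " " + avg[1]);
    }
}
```

Serializing every gradient through `DataOutputStream` is also one plausible source of the memory pressure the authors report for large training sets, since each exchange materializes the full parameter-sized array per worker.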
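The diminishing returns from adding nodes can be illustrated with a toy cost model: compute time shrinks as T/n while communication grows with the node count. The linear per-node communication term is our illustrative assumption, not a measurement from the paper.

```java
public class ScalingModel {
    // Modeled wall-clock time for n nodes: compute shrinks as T/n,
    // but per-node communication adds c*n (assumed linear cost model).
    static double time(int n, double computeTotal, double commPerNode) {
        return computeTotal / n + commPerNode * n;
    }

    static double speedup(int n, double computeTotal, double commPerNode) {
        return time(1, computeTotal, commPerNode) / time(n, computeTotal, commPerNode);
    }

    public static void main(String[] args) {
        // With T=100 and c=1, speedup peaks near n = sqrt(T/c) = 10,
        // then declines: communication overwhelms the compute savings.
        for (int n = 1; n <= 16; n *= 2) {
            System.out.printf("nodes=%2d speedup=%.2f%n", n, speedup(n, 100.0, 1.0));
        }
    }
}
```

Under this model, going from 8 to 16 nodes actually lowers the speedup, which matches the abstract's observation that extra nodes stop paying off after a certain point.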
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.