A parallel approach for backpropagation learning of neural networks

Bibliographic Details
Main Authors: Crespo, María Liz, Piccoli, María Fabiana, Printista, Alicia Marcela, Gallard, Raúl Hector
Format: Article
Language: English
Published: 1999
Subjects:
Online Access: http://sedici.unlp.edu.ar/handle/10915/9378
http://journal.info.unlp.edu.ar/wp-content/uploads/2015/papers_01/a%20parallel.pdf
Contributed by:
id I19-R120-10915-9378
record_format dspace
institution Universidad Nacional de La Plata
institution_str I-19
repository_str R-120
collection SEDICI (UNLP)
language English
topic Ciencias Informáticas
Neural nets
spellingShingle Ciencias Informáticas
Neural nets
Crespo, María Liz
Piccoli, María Fabiana
Printista, Alicia Marcela
Gallard, Raúl Hector
A parallel approach for backpropagation learning of neural networks
topic_facet Ciencias Informáticas
Neural nets
description Fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. On the other hand, learning algorithms for neural networks involve CPU-intensive processing, and consequently great effort has been devoted to developing parallel implementations intended to reduce learning time. Looking at both sides of the coin, this paper first shows two alternatives to parallelise the learning process and then an application of neural networks to computing systems. On the parallel side, it presents distributed implementations that parallelise the learning process of neural networks using a pattern partitioning approach. Under this approach, weight changes are computed concurrently, exchanged between system components and adjusted accordingly until the whole parallel learning process is completed. On the application side, some design and implementation insights are shown for building a system where decision support for load distribution is based on a neural network device. Incoming task allocation, as a previous step, is a fundamental service aimed at improving distributed system performance and facilitating further dynamic load balancing. A neural network device inserted into the kernel of a distributed system as an intelligent tool allows automatic allocation of execution requests under some predefined performance criteria based on resource availability and incoming process requirements. Performance results of the parallelised approach for learning of backpropagation neural networks are shown, including a comparison of recall and generalisation abilities to support parallelism.
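The pattern partitioning idea described above can be illustrated with a minimal sketch (not the authors' code): the training patterns are split among workers, each worker computes weight changes by backpropagation on its own partition, and the changes are then exchanged, summed and applied to every weight copy before the next epoch. The tiny 2-2-1 network, the logical-OR training set and all names below are illustrative assumptions, and the workers are simulated sequentially.

```python
# Sketch of pattern-partitioned backpropagation learning (illustrative only).
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def zeros_like(net):
    """A structure of zeros matching the network's weight layout."""
    return {k: [[0.0] * len(row) for row in rows] for k, rows in net.items()}

def local_backprop(net, patterns, lr=0.5):
    """Weight changes accumulated by one worker over its pattern partition."""
    delta = zeros_like(net)
    for inputs, target in patterns:
        # Forward pass through one hidden layer and a single output unit.
        x = [1.0] + list(inputs)                      # input plus bias
        hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in net["hidden"]]
        h = [1.0] + hidden                            # hidden plus bias
        out = sigmoid(sum(w * hi for w, hi in zip(net["output"][0], h)))
        # Backward pass: output error term, then hidden error terms.
        d_out = (target - out) * out * (1.0 - out)
        d_hid = [hidden[j] * (1.0 - hidden[j]) * d_out * net["output"][0][j + 1]
                 for j in range(len(hidden))]
        for i, hi in enumerate(h):
            delta["output"][0][i] += lr * d_out * hi
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x):
                delta["hidden"][j][i] += lr * dj * xi
    return delta

# Toy training set (logical OR) partitioned between two simulated workers.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
partitions = [data[:2], data[2:]]

net = {"hidden": [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)],
       "output": [[random.uniform(-0.5, 0.5) for _ in range(3)]]}

for epoch in range(3000):
    # Each worker's weight changes would be computed concurrently in practice.
    deltas = [local_backprop(net, part) for part in partitions]
    # Exchange step: changes are combined and every weight copy is adjusted.
    for layer, rows in net.items():
        for j, row in enumerate(rows):
            for i in range(len(row)):
                row[i] += sum(d[layer][j][i] for d in deltas)

# Recall check on the training patterns.
for inputs, target in data:
    x = [1.0] + list(inputs)
    h = [1.0] + [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                 for row in net["hidden"]]
    out = sigmoid(sum(w * hi for w, hi in zip(net["output"][0], h)))
    print(inputs, "target:", target, "output:", round(out, 3))
```

In a real distributed implementation the local_backprop calls would run on separate processors and the exchange step would be a message-passing reduction, but the arithmetic of the weight adjustment stays the same.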
format Article
author Crespo, María Liz
Piccoli, María Fabiana
Printista, Alicia Marcela
Gallard, Raúl Hector
author_facet Crespo, María Liz
Piccoli, María Fabiana
Printista, Alicia Marcela
Gallard, Raúl Hector
author_sort Crespo, María Liz
title A parallel approach for backpropagation learning of neural networks
title_short A parallel approach for backpropagation learning of neural networks
title_full A parallel approach for backpropagation learning of neural networks
title_fullStr A parallel approach for backpropagation learning of neural networks
title_full_unstemmed A parallel approach for backpropagation learning of neural networks
title_sort parallel approach for backpropagation learning of neural networks
publishDate 1999
url http://sedici.unlp.edu.ar/handle/10915/9378
http://journal.info.unlp.edu.ar/wp-content/uploads/2015/papers_01/a%20parallel.pdf
work_keys_str_mv AT crespomarializ aparallelapproachforbackpropagationlearningofneuralnetworks
AT piccolimariafabiana aparallelapproachforbackpropagationlearningofneuralnetworks
AT printistaaliciamarcela aparallelapproachforbackpropagationlearningofneuralnetworks
AT gallardraulhector aparallelapproachforbackpropagationlearningofneuralnetworks
AT crespomarializ parallelapproachforbackpropagationlearningofneuralnetworks
AT piccolimariafabiana parallelapproachforbackpropagationlearningofneuralnetworks
AT printistaaliciamarcela parallelapproachforbackpropagationlearningofneuralnetworks
AT gallardraulhector parallelapproachforbackpropagationlearningofneuralnetworks
bdutipo_str Repositorios
_version_ 1764820491745361923