Parallel backpropagation neural networks for task allocation by means of PVM


Saved in:
Bibliographic Details
Main authors: Crespo, María Liz, Printista, Alicia Marcela, Piccoli, María Fabiana
Format: Conference object
Language: English
Published: 1998
Subjects:
Online access: http://sedici.unlp.edu.ar/handle/10915/24825
Contributed by:
id I19-R120-10915-24825
record_format dspace
institution Universidad Nacional de La Plata
institution_str I-19
repository_str R-120
collection SEDICI (UNLP)
language English
topic Ciencias Informáticas
Informática
Distributed Systems
System architectures
Neural nets
PATTERN RECOGNITION
system architecture
distributed systems workload
parallelised neural networks
backpropagation
partitioning schemes
pattern partitioning
description Features such as fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. A neural network is, by itself, an inherently parallel system in which many extremely simple processing units work simultaneously on the same problem, building up a computational device that possesses adaptation (learning) and generalisation (recognition) abilities. Implementation of neural networks roughly involves at least three stages: design, training and testing. The second, being CPU intensive, is the one requiring most of the processing resources, and depending on size and structural complexity the learning process can be extremely long. Thus, great effort has been made to develop parallel implementations intended to reduce learning time. Pattern partitioning is an approach to parallelising neural networks in which the whole net is replicated in different processors and the weight changes owing to diverse training patterns are computed in parallel. This approach is the most suitable for a distributed architecture such as the one considered here. Incoming task allocation, as a previous step, is a fundamental service that aims to improve distributed system performance, facilitating further dynamic load balancing. A Neural Network Device inserted into the kernel of a distributed system as an intelligent tool makes it possible to allocate execution requests automatically under predefined performance criteria based on resource availability and incoming process requirements. This paper is a twofold proposal: it shows, firstly, some design and implementation insights for building a system where decision support for load distribution is based on a neural network device and, secondly, a distributed implementation that provides parallel learning of neural networks using a pattern partitioning approach.
In the latter case, some performance results of the parallelised approach for learning of backpropagation neural networks are shown. These include a comparison of recall and generalisation abilities and speed-up when using a socket interface or PVM.
format Conference object
author Crespo, María Liz
Printista, Alicia Marcela
Piccoli, María Fabiana
title Parallel backpropagation neural networks for task allocation by means of PVM
publishDate 1998
url http://sedici.unlp.edu.ar/handle/10915/24825