An incremental algorithm for parallel training of the size and the weights in a feedforward neural network

Abstract

An algorithm for incremental approximation of functions in a normed linear space by feedforward neural networks is presented. The concept of variation of a function with respect to a set is used, together with the weight decay method, to estimate the approximation error and to optimize the size and weights of the network at each iteration step of the algorithm. Two alternatives, recursively incremental and generally incremental, are proposed. In the generally incremental case, the algorithm optimizes the parameters of all units in the hidden layer at each step. In the recursively incremental case, the algorithm optimizes the parameters corresponding to only one unit in the hidden layer at each step, so that an optimization problem with a smaller number of parameters is solved at each step.
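As a rough illustration of the recursively incremental idea described in the abstract (add one hidden unit per step, optimize only that unit's parameters against the current residual, with a weight-decay penalty), here is a minimal NumPy sketch. The sigmoid unit type, the target function, the plain gradient-descent optimizer, and all parameter values are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def sigmoid(z):
    # Clip for numerical safety before exponentiating.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50.0, 50.0)))

def fit_new_unit(x, residual, decay=1e-3, lr=0.2, steps=3000, seed=0):
    """Fit one unit c*sigmoid(w*x + b) to the current residual by gradient
    descent on mean squared error plus a weight-decay penalty (assumed setup)."""
    rng = np.random.default_rng(seed)
    w, b, c = rng.normal(size=3)
    for _ in range(steps):
        h = sigmoid(w * x + b)
        err = c * h - residual              # pointwise approximation error
        dh = h * (1.0 - h)                  # sigmoid derivative
        # Gradients of 0.5*mean(err^2) + 0.5*decay*(w^2 + b^2 + c^2):
        gc = np.mean(err * h) + decay * c
        gw = np.mean(err * c * dh * x) + decay * w
        gb = np.mean(err * c * dh) + decay * b
        w, b, c = w - lr * gw, b - lr * gb, c - lr * gc
    return w, b, c

def incremental_fit(x, y, n_units=8):
    """Recursively incremental scheme: units are added one at a time, and
    each step optimizes only the new unit, trained on the residual left
    by the previously fixed units."""
    units, residual = [], y.copy()
    for k in range(n_units):
        w, b, c = fit_new_unit(x, residual, seed=k)
        units.append((w, b, c))
        residual = residual - c * sigmoid(w * x + b)
    return units, residual

# Illustrative one-dimensional target function.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)
units, residual = incremental_fit(x, y)
print(f"RMS error after {len(units)} units: {np.sqrt(np.mean(residual**2)):.4f}")
```

In the generally incremental variant, the loop body would instead re-optimize the parameters of all units accumulated so far at every step, trading a larger optimization problem for potentially better fits.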

Authors
  • Hlavackova-Schindler, Katerina
  • Fischer, Manfred M.
Shortfacts
Category
Journal Paper
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence (Kuenstliche Intelligenz)
Journal or Publication Title
Neural Processing Letters
ISSN
1370-4621
Publisher
Kluwer
Place of Publication
Netherlands
Page Range
pp. 131-138
Volume
11
Date
2000