Parallel Layer Perceptron (PLP) Network

AUTHOR(S)
SOURCE

IBICT - Instituto Brasileiro de Informação em Ciência e Tecnologia

PUBLICATION DATE

18/12/2006

ABSTRACT

This work presents a novel approach to structural risk minimization (SRM) applied to a general machine learning problem. The formulation rests on the fundamental idea that supervised learning is a bi-objective optimization problem in which two conflicting objectives must be minimized: the training error, or empirical risk (Remp), and the machine complexity. A general Q-norm-like method to compute the machine complexity is presented, which can be used to model and compare most of the learning machines found in the literature. The main advantage of the proposed complexity measure is that it offers a simple way to separate the linear and nonlinear contributions to complexity, leading to a better understanding of the learning process. A novel learning machine, the Parallel Layer Perceptron (PLP) network, is proposed here, together with a training algorithm based on these definitions and structures of learning, the Minimum Gradient Method (MGM). The combination of the PLP with the MGM (PLP-MGM) is carried out using a reliable least-squares procedure, and it is the main contribution of this work.
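The thesis itself is not reproduced in this record, but the architecture named in the abstract can be illustrated with a minimal sketch: a network holding a linear layer and a nonlinear layer side by side ("parallel layers"), whose output weights are fit jointly by ordinary least squares, echoing the reliable least-squares step the abstract mentions. The layer sizes, the tanh nonlinearity, the fixed random hidden weights, and the toy data below are all illustrative assumptions, not the exact PLP-MGM formulation of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + 0.5*x plus small noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.5 * X[:, 0] + 0.05 * rng.standard_normal(200)

# Nonlinear branch: a fixed random tanh hidden layer (an assumption here;
# the thesis trains this part with its Minimum Gradient Method instead).
W = rng.standard_normal((1, 20))
b = rng.standard_normal(20)
H_nonlin = np.tanh(X @ W + b)              # (200, 20) nonlinear features

# Linear branch in parallel: the raw input plus a bias column.
H_lin = np.hstack([X, np.ones((200, 1))])  # (200, 2) linear features

# Stack both branches and solve for all output weights at once
# by ordinary least squares -- the "reliable least-squares procedure".
H = np.hstack([H_nonlin, H_lin])           # (200, 22)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

Keeping the linear and nonlinear branches as separate blocks of columns also makes their contributions to the fitted model easy to inspect separately, which is in the spirit of the abstract's split between linear and nonlinear complexity influences.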

SUBJECT(S)

Electrical engineering; theses.
