Document Type
Conference Proceeding
Publication Title
Proceedings of SPIE - the International Society for Optical Engineering
Abstract
Principal component analysis (PCA) plays an important role in many areas, and in many applications the principal components of the input data must be computed adaptively. Over the past several years, numerous neural network approaches have been proposed for adaptively extracting principal components. One of the most popular learning rules for training a single-layer linear network for principal component extraction is Sanger's generalized Hebbian algorithm (GHA). We have extended the GHA (EGHA) by including a positive-definite symmetric weighting matrix in the representation error-cost function that is used to derive the learning rule for training the network. The EGHA makes it possible to place different weighting factors on the individual principal component representation errors. Specifically, if prior knowledge of the variance of each term of the input vector is available, this statistical information can be incorporated into the weighting matrix. We have shown that by using a weighted representation error-cost function, where the weighting matrix is diagonal with the reciprocals of the standard deviations of the input on the diagonal, the EGHA yields more accurate results than the GHA. © 2003 SPIE - The International Society for Optical Engineering.
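This record contains only the abstract, so the exact EGHA learning rule derived in the paper is not reproduced here. As a minimal illustrative sketch (not the authors' code), the Python below implements the standard Sanger/GHA update and evaluates the weighted representation-error cost described in the abstract, E[(x - W^T W x)^T Q (x - W^T W x)] with Q diagonal and positive definite. The function names, learning rate, and demo data are assumptions for illustration only.

    import numpy as np

    def gha_update(W, x, lr=1e-3):
        # One step of Sanger's generalized Hebbian algorithm (GHA) for a
        # single-layer linear network y = W x. The rows of W converge to
        # the leading principal components of the (zero-mean) input.
        y = W @ x
        # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W), where LT[.]
        # keeps the lower-triangular part (diagonal included).
        return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    def weighted_representation_error(W, X, Q):
        # Weighted error-cost from the abstract: the mean over samples of
        # (x - W^T W x)^T Q (x - W^T W x), with Q positive-definite
        # symmetric. X holds one sample per row.
        R = X - X @ W.T @ W
        return float(np.mean(np.einsum('ij,jk,ik->i', R, Q, R)))

    # Hypothetical demo: weight each input term by the reciprocal of its
    # standard deviation, as the abstract suggests.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5)) * np.array([5.0, 2.0, 1.0, 0.5, 0.1])
    X -= X.mean(axis=0)
    Q = np.diag(1.0 / X.std(axis=0))        # prior statistical knowledge
    W = rng.normal(scale=0.1, size=(2, 5))  # extract the top 2 components
    for _ in range(3):                       # a few passes over the data
        for x in X:
            W = gha_update(W, x)
    print(weighted_representation_error(W, X, Q))

Note that the demo trains with the plain GHA rule and only evaluates the weighted cost; the EGHA itself trains with a rule derived from that weighted cost, which the paper presents in full.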
First Page
274
Last Page
285
DOI
10.1117/12.326722
Publication Date
10-13-1998
Recommended Citation
Ham, F. M., & Kim, I. (1998). Extension of the generalized hebbian algorithm for principal component extraction. Paper presented at the Proceedings of SPIE - the International Society for Optical Engineering, , 3455 274-285.