The second, revised edition of this book was prompted by the impressive sales of the first edition. Fortunately, this made it possible to incorporate important new results that had just been obtained.

The ASSOM (Adaptive-Subspace SOM) is a new architecture in which invariant-feature detectors emerge in an unsupervised learning process. Its basic principle was already introduced in the first edition, but the motivation and theoretical discussion in the second edition are more thorough and consistent. New material has been added to Sect. 5.9, and this section has been rewritten completely. Correspondingly, Sect. 1.4, which deals with adaptive-subspace classifiers in general and constitutes the prerequisite for the ASSOM principle, has also been extended and rewritten completely.

Another new SOM development is the WEBSOM, a two-layer architecture intended for the organization of very large collections of full-text documents such as those found on the Internet and the World Wide Web. This architecture was published after the first edition came out. The idea and results seemed so important that the new Sect. 7.8 has been added to the second edition. Another addition that contains new results is Sect. 3.15, which describes the acceleration of computation for very large SOMs. It was also felt that Chap. 7, which deals with SOM applications, had to be extended.
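The basic SOM principle underlying these developments can be illustrated with a minimal sketch. This is not code from the book: the grid size, learning-rate and neighborhood-radius schedules below are illustrative assumptions, and only the core update rule, m_i(t+1) = m_i(t) + h_ci(t)[x(t) - m_i(t)] with winner c and neighborhood function h_ci, reflects the standard formulation.

```python
# Minimal self-organizing map sketch (illustrative parameters, not the
# book's): a 10x10 lattice of reference vectors adapts to 2-D inputs.
import numpy as np

rng = np.random.default_rng(0)

# Lattice coordinates of the 100 map units and their reference
# (codebook) vectors, initialized randomly in the unit square.
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
weights = rng.random((100, 2))

data = rng.random((1000, 2))          # toy input samples
n_steps = len(data)

for t, x in enumerate(data):
    # Winner c: the reference vector nearest to x (Euclidean distance).
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Linearly decaying learning rate and neighborhood radius (assumed).
    alpha = 0.5 * (1 - t / n_steps)
    sigma = 3.0 * (1 - t / n_steps) + 0.5
    # Gaussian neighborhood function h_ci on the map lattice.
    d2 = np.sum((grid - grid[c]) ** 2, axis=1)
    h = alpha * np.exp(-d2 / (2 * sigma ** 2))
    # Kohonen update: move each unit toward x, weighted by h.
    weights += h[:, None] * (x - weights)
```

After training, neighboring lattice units hold similar reference vectors, so the map preserves the topology of the input distribution.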
Contents (excerpt):
- Justification of Neural Modeling
- The Basic SOM
- … in the Output Plane
- Physiological Interpretation of SOM
- Variants of SOM
- … by Stochastic Approximation